71 messages
MrAtheistover 4 years ago(edited)
hey y'all, not sure if it's possible, but here's a tiny problem I'm hitting…
1> Someone deployed some TF stuff from local; the state file is stored in S3
2> Presumably this someone got thrown under the bus and didn't have a chance to push the IaC, so assume the IaC is lost
3> The actual resources went through some manual hell… and I would like to restore/revert back to the original state based on the state JSON
is this possible? something to do with tainting…?
Mohammed Yahyaover 4 years ago
https://www.hashicorp.com/blog/announcing-terraform-aws-cloud-control-provider-tech-preview no more waiting for resources to be supported, now automatically generated - let
OliverSover 4 years ago
After a bit more reading, it appears that AWS CC (cloud control) does not use cloudformation behind the scenes. Rather CC is just an interface to the AWS API for creating and interacting with AWS resources.
In fact, AWS CC does not manage resources, let alone a stack of resources; you can list and update any/all resources, not just those created with AWS CC.
So there is no notion of "importing a resource to be under AWS CC control". AWS CC does not manage resources, and does not use CloudFormation to create or destroy them.
The quoted text just says that, because of how AWS CC and CF were implemented, resource types made accessible to AWS CC are automatically available in cloudformation.
In any case, AWS CC probably lowers the amount of work required by infra management tools to support AWS resource management, because of the unified json-based API that it provides. Eg Crossplane and ACK (AWS Controllers for Kubernetes) might be able to accelerate their coverage of aws resources dramatically through the use of AWS CC.
Zachover 4 years ago
resource types made accessible to AWS CC are automatically available in cloudformation.
isn’t it the other way around? They add resources to cloudformation and they are automatically in AWS CC because they derive the schema from Cloudformation
OliverSover 4 years ago
Probably. The point (for me anyways) is that CC does not use CF in any way that matters to using CC from TF.
lorenover 4 years ago
No, it just depends on aws services teams writing the CF registry types and resources for the service and for new releases
lorenover 4 years ago
Or it depends on third-parties like hashicorp writing and publishing third-party CF registry types for aws services.
lorenover 4 years ago
All of which benefits CF and AWS even more than hashicorp
Yoni Leitersdorf (Indeni Cloudrail)over 4 years ago
For those using TFE:
We have recently had to integrate our product (Cloudrail) with it. The integration was a bit wonky to begin with, but Javier Ruiz Jimenez just cleaned it up very nicely. It’s actually beautiful. I think it’s worth looking at if you’re thinking of using Sentinel policies to integrate tools into TFE (irrespective of the usage of Cloudrail in there).
Code repo: https://github.com/indeni/cloudrail-tfe-integration
Javier’s PR: https://github.com/indeni/cloudrail-tfe-integration/pull/3
Almondovarover 4 years ago
Hi all, can someone help me to translate this logic into “WAF rule language” please?
IF url contains production AND NOT (xx.production.url1.com OR yy.production.url2.com) = pass without examining IPs
ELSEIF request has (xx.production.url1.com OR yy.production.url2.com) AND one of the IPs in the list = pass
ELSE block all
(the IP filter list has already been prepared and tested OK).
Thanks!
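A rough, hedged sketch of how this logic might map onto aws_wafv2_web_acl rules. The scope, the Host-header matching, and every name below are assumptions rather than anything confirmed in the thread; the second exempt host and the IP-set rule are only described in comments to keep the sketch short.

resource "aws_wafv2_web_acl" "production_filter" {
  name  = "production-filter"
  scope = "REGIONAL" # assumption; use CLOUDFRONT for a CloudFront distribution

  # ELSE: block everything that no rule explicitly allowed
  default_action {
    block {}
  }

  # IF the Host header contains "production" AND is NOT xx.production.url1.com
  # (wrap a second byte_match_statement in an or_statement to also exempt
  # yy.production.url2.com), pass without examining IPs.
  rule {
    name     = "allow-production-except-listed-hosts"
    priority = 1

    action {
      allow {}
    }

    statement {
      and_statement {
        statement {
          byte_match_statement {
            search_string         = "production"
            positional_constraint = "CONTAINS"
            field_to_match {
              single_header {
                name = "host"
              }
            }
            text_transformation {
              priority = 0
              type     = "LOWERCASE"
            }
          }
        }
        statement {
          not_statement {
            statement {
              byte_match_statement {
                search_string         = "xx.production.url1.com"
                positional_constraint = "EXACTLY"
                field_to_match {
                  single_header {
                    name = "host"
                  }
                }
                text_transformation {
                  priority = 0
                  type     = "LOWERCASE"
                }
              }
            }
          }
        }
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "rule-1"
      sampled_requests_enabled   = true
    }
  }

  # ELSEIF: a second rule (priority 2, action allow) would AND an or_statement
  # over the two hosts with an ip_set_reference_statement pointing at the
  # already-prepared IP set's ARN.

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "production-filter"
    sampled_requests_enabled   = true
  }
}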
Devops alertsover 4 years ago
Hi everyone! I am trying to create a VPC endpoint service (PrivateLink) so I can access my application from the base VPC in a custom VPC in AWS. I am using Terraform with the jsondecode function to interpret my configuration.
The issue is that Terraform tries to create the VPC endpoint service before the network load balancer, so how can I express a dependency through the JSON so that it waits and only creates the endpoint service after the NLB is created?
Thanks
"vpc_endpoint_service": {
"${mdmt_prefix}-share-internal-${mdmt_env}-Ptlink": {
"acceptance_required": "true",
"private_dns_name": "true",
"network_load_balancer_arns": "${mdmt_prefix}-share-internal-${mdmt_env}-nlb.arn",
"iops": "100",
"tags": {
"Name": "${mdmt_prefix}-share-internal-${mdmt_env}-PVT"
}
}
},
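For reference, a plain-HCL sketch of the same thing: referencing the NLB resource's arn attribute gives Terraform the ordering automatically, so the endpoint service is only created after the NLB exists. The resource names and the subnet variable below are assumptions.

resource "aws_lb" "nlb" {
  name               = "share-internal-nlb" # assumed name
  internal           = true
  load_balancer_type = "network"
  subnets            = var.private_subnet_ids # assumed variable
}

resource "aws_vpc_endpoint_service" "share_internal" {
  acceptance_required = true

  # The attribute reference below acts as an implicit depends_on.
  network_load_balancer_arns = [aws_lb.nlb.arn]

  tags = {
    Name = "share-internal-PVT"
  }
}

If the JSON-driven module builds both resources, passing the NLB resource's ARN attribute (rather than a literal name) into network_load_balancer_arns is usually enough; an explicit depends_on is only needed when there is no attribute reference to infer the order from.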
Moshik Baruchover 4 years ago
Hey all!
I'm trying to create a user defined map_roles for my eks using vars:
eks.tf:
map_roles = [ { "groups": ["system:bootstrappers","system:nodes"] , "rolearn": "${module.eks.worker_iam_role_arn}", "username": "system:node:{{EC2PrivateDNSName}}" }, var.map_roles ]
map_users = var.map_users
map_accounts = var.map_accounts
variables.tf
variable "map_roles" {
description = "Additional IAM roles to add to the aws-auth configmap."
type = list(object({
rolearn = string
username = string
groups = list(string)
}))
default = [
{
rolearn = "arn:aws:iam::xxxxxxxxxx:role/DelegatedAdmin"
username = "DelegatedAdmin"
groups = ["system:masters"]
}
]
}
When I'm not adding the default node permissions it gets deleted on the next apply, and I wish to add more roles of my own.
But that returns an error:
The given value is not suitable for child module variable "map_roles" defined at .terraform/modules/eks/variables.tf:70,1-21: element 1: object required.
I believe it is because I am creating a list(object) inside a list(object).
Can I have your help pls? 🙂
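One way this is commonly handled (a sketch, not a confirmed fix for this exact module version) is to merge the static node entry with the user-supplied list using concat(), so the result stays a flat list(object) instead of nesting var.map_roles as a single element:

map_roles = concat(
  [
    {
      rolearn  = module.eks.worker_iam_role_arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    }
  ],
  var.map_roles # appended as additional elements rather than nested as one element
)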
Naveen Reddyover 4 years ago
Hi everyone, I'm new to Terraform and currently working on creating AWS ECR repositories with it. How can I apply the same template, like ecr_lifecycle_policy and repository_policy, to many ECR repositories? Can someone help me with this?
Naveen Reddyover 4 years ago
resource "aws_ecr_lifecycle_policy" "lifecycle" {
repository = aws_ecr_repository.client-dashboard.name
policy = <<EOF
{
"rules": [
{
"rulePriority": 1,
"description": "Keep only 5 tagged images, expire all others",
"selection": {
"tagStatus": "tagged",
"tagPrefixList": [
"build"
],
"countType": "imageCountMoreThan",
"countNumber": 5
},
"action": {
"type": "expire"
}
},
{
"rulePriority": 2,
"description": "Keep only 5 tagged images, expire all others",
"selection": {
"tagStatus": "tagged",
"tagPrefixList": [
"runtime"
],
"countType": "imageCountMoreThan",
"countNumber": 5
},
"action": {
"type": "expire"
}
},
{
"rulePriority": 3,
"description": "Only keep untagged images for 7 days",
"selection": {
"tagStatus": "untagged",
"countType": "sinceImagePushed",
"countUnit": "days",
"countNumber": 7
},
"action": {
"type": "expire"
}
}
]
}
EOF
}
Naveen Reddyover 4 years ago
I want to apply this policy to N repositories.
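A minimal sketch of one way to stamp the same lifecycle policy onto many repositories with for_each; var.repository_names, the example names, and the external policy file are assumptions:

variable "repository_names" {
  type    = list(string)
  default = ["client-dashboard", "api", "worker"] # assumed repo names
}

resource "aws_ecr_repository" "this" {
  for_each = toset(var.repository_names)
  name     = each.key
}

resource "aws_ecr_lifecycle_policy" "this" {
  for_each   = aws_ecr_repository.this
  repository = each.value.name
  # The JSON shown above, saved once and reused for every repository.
  policy     = file("${path.module}/ecr-lifecycle-policy.json")
}

The same pattern works for aws_ecr_repository_policy.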
Paul Stagnerover 4 years ago
Howdy! So I am new to the community here. I am glad to be here.
I am having some issues with https://github.com/cloudposse/terraform-aws-ecs-alb-service-task and https://github.com/cloudposse/terraform-aws-ecs-web-app
I am trying to use it to deploy ecs services onto ecs instances but I am running into issues with the security group and network settings.
The errors I keep receiving are here:
╷
│ Error: too many results: wanted 1, got 219
│
│ with module.apps["prism3-rmq-ecs-service"].module.ecs_alb_service_task.aws_security_group_rule.allow_all_egress[0],
│ on .terraform/modules/apps.ecs_alb_service_task/main.tf line 273, in resource "aws_security_group_rule" "allow_all_egress":
│ 273: resource "aws_security_group_rule" "allow_all_egress" {
│
╵
╷
│ Error: error creating shf-prism3-rmq-ecs-service service: error waiting for ECS service (shf-prism3-rmq-ecs-service) creation: InvalidParameterException: The provided target group arn:aws:elasticloadbalancing:us-west-2:632720948474:targetgroup/shf-prism3-rmq-ecs-service/c732ab107ef2aacb has target
type ip, which is incompatible with the bridge network mode specified in the task definition.
│
│ with module.apps["prism3-rmq-ecs-service"].module.ecs_alb_service_task.aws_ecs_service.default[0],
│ on .terraform/modules/apps.ecs_alb_service_task/main.tf line 399, in resource "aws_ecs_service" "default":
│ 399: resource "aws_ecs_service" "default" {
│
I am not sure if there have been issues with this in the past. The version of terraform-aws-ecs-alb-service-task is 0.55.1 which is set in the terraform-aws-ecs-web-app.
I am setting the network_mode to bridge and that is when I run into these errors. I also am excluding the pipeline stuff which we had to create our own fork in order to do so.
I have also tried to hardcode the target_type to host for the targetgroup type but it keeps setting it to the default which is ip in the variables.tf
Just wanted to reach out and see if there was any advice or direction inside the cloudposse collection for folks that don't want to use the awsvpc/fargate solutions.
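For what it's worth, bridge (and host) network mode registers EC2 instances rather than task IPs, so the target group generally needs target_type = "instance". A hedged sketch of the underlying resource (whether and how the module exposes this depends on its variables; the names below are assumptions):

resource "aws_lb_target_group" "ecs_bridge" {
  name        = "shf-prism3-rmq-ecs-service" # assumed
  port        = 80
  protocol    = "HTTP"
  vpc_id      = var.vpc_id     # assumption
  target_type = "instance"     # required for bridge/host network mode tasks
}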
rssover 4 years ago(edited)
v1.1.0-alpha20211006
1.1.0 (Unreleased)
UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed...
Mohammed Yahyaover 4 years ago
https://github.com/terraform-docs/terraform-docs/releases/tag/v0.16.0 terraform-docs release with very nice new features
Devops alertsover 4 years ago(edited)
module "foundation" {
source = "git::<https://xyz/terraform-aws-foundation.git?ref=feature/hotfix>"
spec = local.data.spec
depends_on = [module.iaas.aws_lb.loadbalancer]
}
How can I reference another module's resources from within one module, or can we define multiple sources in a module?
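A small sketch of the usual pattern (module names and outputs here are assumptions): expose what you need as an output of the first module, pass it into the second as an input, and only fall back to a module-level depends_on when there is no attribute to reference.

module "iaas" {
  source = "./iaas"
}

module "foundation" {
  source = "./foundation"

  # Preferred: pass the other module's output in as an input; the reference
  # itself creates the dependency.
  loadbalancer_arn = module.iaas.loadbalancer_arn

  # Fallback: depend on the whole module (you cannot address a single resource
  # inside another module from depends_on).
  depends_on = [module.iaas]
}

A module block also only accepts a single source; to pull code from several repositories you call several modules.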
Stephen Tanover 4 years ago
Hi - I’m using what looks to be a really useful module - https://registry.terraform.io/modules/cloudposse/code-deploy/aws/latest?tab=inputs - thank you for this. Sadly, I’m trying to use your ec2_tag_filter input and we get a fail. I’m happy to create a PR to fix this if you can confirm the issue with the lookup - I’ve created a bug here: https://github.com/cloudposse/terraform-aws-code-deploy/issues/6 - do let me know if you’ll accept a PR for this - thank you! cc @Erik Osterman (Cloud Posse) who seems to be the author
Jon Butterworthover 4 years ago
Hiya, QQ regarding the dynamic subnets module. I'm calling the module twice for different purposes: once to create subnets for an EKS cluster, and another time for an EC2 instance. My VPC has one /16 CIDR block... the problem is, when the module is called the second time for EC2, it tries to create the same subnets it created for the EKS cluster, because it doesn't know what it has already used from the CIDR block.
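A sketch of one way around it: carve the /16 into two non-overlapping halves up front and hand a different base CIDR to each invocation of the subnets module. The variable and argument names are assumptions; check the module's inputs.

locals {
  eks_cidr = cidrsubnet(var.vpc_cidr, 1, 0) # e.g. 10.0.0.0/17 from a 10.0.0.0/16
  ec2_cidr = cidrsubnet(var.vpc_cidr, 1, 1) # e.g. 10.0.128.0/17
}

module "eks_subnets" {
  source     = "cloudposse/dynamic-subnets/aws"
  cidr_block = local.eks_cidr
  # ...
}

module "ec2_subnets" {
  source     = "cloudposse/dynamic-subnets/aws"
  cidr_block = local.ec2_cidr
  # ...
}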
muhahaover 4 years ago
Guys, is there any Terraform wrapper that can pre-download binaries for providers that require binaries installed on $PATH? Thanks
Chris Dobbynover 4 years ago
Does anyone know if this module is dead, or are the approvers just AFK?
https://github.com/cloudposse/terraform-aws-multi-az-subnets
Geraldover 4 years ago(edited)
Hi folks, I've got this error after implementing a bind mount from a Docker container to an EFS storage directory.
Error: ClientException: Fargate compatible task definitions do not support devices
I added this line in my ECS task definition:
linux_parameters = {
capabilities = {
add = ["SYS_ADMIN"],
drop = null
}
devices = [
{
containerPath = "/dev/fuse",
hostPath = "/dev/fuse",
permissions = null
}
],
initProcessEnabled = null
maxSwap = null
sharedMemorySize = null
swappiness = null
tmpfs = []
}
Here's the module I used: https://github.com/cloudposse/terraform-aws-ecs-container-definition
Geraldover 4 years ago
Does anyone know what the workaround is to support the devices argument?
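Not from the thread, but the usual Fargate-friendly route is to drop the devices/SYS_ADMIN approach and mount EFS natively through the task definition; a hedged sketch, with IDs, names, and the image as assumptions:

resource "aws_ecs_task_definition" "example" {
  family                   = "example"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  volume {
    name = "efs-data"
    efs_volume_configuration {
      file_system_id     = aws_efs_file_system.data.id # assumed EFS resource
      transit_encryption = "ENABLED"
    }
  }

  container_definitions = jsonencode([
    {
      name  = "app"
      image = "nginx:latest" # placeholder image
      mountPoints = [
        {
          sourceVolume  = "efs-data"
          containerPath = "/mnt/data"
          readOnly      = false
        }
      ]
    }
  ])
}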
ememover 4 years ago(edited)
has anyone experienced this before while using Terraform to manage GitHub repo creation?
Error: GET https://api.github.com/xxx: 403 API rate limit of 5000 still exceeded until 2021-10-13 12:05:50 +0000 UTC, not making remote request. [rate reset in 6m36s]
Almondovarover 4 years ago
hi all, I am planning to implement a WAF v2 rule that "lets everything else pass" - am I right in thinking that if I don't have any statement it will allow everything?
rule {
name = "let-everything-else-pass"
priority = 2
action {
allow {}
}
# left without statement
visibility_config {
cloudwatch_metrics_enabled = true
metric_name = "rule-2"
sampled_requests_enabled = true
}
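For what it's worth, in the aws_wafv2_web_acl resource a rule block requires a statement, so the usual way to express "let everything else pass" is the web ACL's default_action rather than a statement-less rule; a minimal sketch:

resource "aws_wafv2_web_acl" "example" {
  name  = "example"
  scope = "REGIONAL"

  # Anything not matched (and allowed/blocked) by a rule above falls through here.
  default_action {
    allow {}
  }

  # rule { ... block/allow rules with real statements go here ... }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "example"
    sampled_requests_enabled   = true
  }
}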
rssover 4 years ago(edited)
v1.0.9
1.0.9 (October 13, 2021)
BUG FIXES:
core: Fix panic when planning new resources with nested object attributes (#29701)
core: Do not refresh deposed instances when the provider is not configured during destroy (#29720)
othman issaover 4 years ago
Hello,
I have an issue automating TF in a Jenkinsfile to apply terraform.tfstate from the S3 backend. How do I write the correct command?
////////////////////////////////////////////////////////////////////////////////////
pipeline {
// Jenkins AWS Access & Secret key
environment {
AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
}
options {
// Only keep the 5 most recent builds
buildDiscarder(logRotator(numToKeepStr:'5'))
}
agent any
tools {
terraform 'terraform'
}
stages {
// Check out from GIT, Snippet Generator from pipeline Syntax --> Checkout: Check out from version control
stage ("Check from GIT") {
steps {
git branch: 'master', credentialsId: 'Jenkins_terraform_ssh_repo', url: 'git@github.com:mickleissa/kobai.git'
}
}
// Terraform Init Stage
stage ("Terraform init") {
steps {
// sh 'terraform -chdir="./v.14/test_env" init -upgrade'
// terraform init -backend-config="bucket=kobai-s3-backend-terraform-state" -backend-config="key=stage-test-env/terraform.tfstate"
sh 'terraform -chdir="./v.14/test_env" init -migrate-state'
}
}
// Terraform fmt Stage
stage ("Terraform fmt") {
steps {
sh 'terraform fmt'
}
}
// Terraform Validate Stage
stage ("Terraform validate") {
steps {
sh 'terraform validate'
}
}
// Terraform Plan Stage
stage ("Terraform plan") {
steps {
sh 'terraform -chdir="./v.14/test_env" plan -var-file="stage.tfvars"'
// sh 'terraform -chdir="./v.14/test_env" plan'
}
}
// Terraform Apply Stage
stage ("Terraform apply") {
steps {
sh 'terraform -chdir="./v.14/test_env" apply -var-file="stage.tfvars" --auto-approve'
// sh 'terraform -chdir="./v.14/test_env" apply --auto-approve'
}
}
// Approval stage
stage ("DEV approval Destroy") {
steps {
echo "Taking approval from DEV Manager for QA Deployment"
timeout(time: 7, unit: 'DAYS') {
input message: 'Do you want to Destroy the Infra', submitter: 'admin'
}
}
}
// Destroy stage
stage ("Terraform Destroy") {
steps {
sh 'terraform -chdir="./v.14/test_env" destroy -var-file="stage.tfvars" --auto-approve'
// sh 'terraform -chdir="./v.14/test_env" destroy --auto-approve'
}
}
}
post {
always {
echo 'This will always run'
}
success {
echo 'This will run only if successful'
}
failure {
echo 'This will run only if failed'
}
unstable {
echo 'This will run only if the run was marked as unstable'
}
changed {
echo 'This will run only if the state of the Pipeline has changed'
echo 'For example, if the Pipeline was previously failing but is now successful'
}
}
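If the goal is for every stage to pick up the state from S3 without extra flags, one option is to declare the backend in the configuration so a plain terraform init in the pipeline is enough. This sketch reuses the bucket and key from the commented-out init command above; the region and lock table are assumptions:

terraform {
  backend "s3" {
    bucket         = "kobai-s3-backend-terraform-state"
    key            = "stage-test-env/terraform.tfstate"
    region         = "us-east-1"       # assumption
    dynamodb_table = "terraform-locks" # optional state locking; assumed table name
    encrypt        = true
  }
}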
Flávio Moringaover 4 years ago(edited)
Hello, maybe someone could help me with creating a simple Redis cluster on AWS, with automatic_failover enabled, 1 master and 1 read replica. I’m trying to use cloudposse/terraform-aws-elasticache-redis but it seems there is no way to create a cluster with a read replica, and that makes no sense to me. Here is my code so far:
Flávio Moringaover 4 years ago
module "redis" {
source = "cloudposse/elasticache-redis/aws"
version = "0.30.0"
stage = var.stage
name = "redis"
port = "6379"
vpc_id = data.terraform_remote_state.conf.outputs.vpc_id
subnets = data.terraform_remote_state.conf.outputs.private_subnet_ids
# need az's list due to bug:
# https://github.com/cloudposse/terraform-aws-elasticache-redis/issues/63
availability_zones = data.aws_availability_zones.azs.names
#In prod use 2 nodes
cluster_size = var.cicd_env != "prod" ? 1 : 2
# only really helpful in prod because we have 2 nodes
automatic_failover_enabled = true
instance_type = "cache.t3.small"
apply_immediately = true
engine_version = "6.x"
family = "redis6.x"
at_rest_encryption_enabled = true
transit_encryption_enabled = false
kms_key_id = aws_kms_key.redis.arn
#used only on version 0.40.0 and above:
#security_groups = ["module.sg-redis.security_group_id"]
#for version 0.30.0 use:
use_existing_security_groups = true
existing_security_groups = [module.sg-redis.security_group_id]
#used only on version 0.40.0 and above:
#multi_az_enabled = true
maintenance_window = "Tue:03:00-Tue:06:00"
tags = {
Name = var.cicd_domain
contactinfo = var.contactinfo
service = var.service
stage = var.stage
Environment = var.cicd_env
}
#used only on version 0.40.0 and above:
# Snapshot name upon Redis deletion
#final_snapshot_identifier = "${var.cicd_env}-final-snapshot"
# Daily snapshots - Keep last 5 for prod, 0 for other
snapshot_window = "06:30-07:30"
snapshot_retention_limit = var.cicd_env != "prod" ? 0 : 5
}
Flávio Moringaover 4 years ago
But I’m getting this error:
Error: error updating ElastiCache Replication Group (alpha-harbor): InvalidReplicationGroupState: Replication group must have at least one read replica to enable autofailover. status code: 400, request id: 22997e65-2bcb-41a1-861e-7adb7089e9e0
Flávio Moringaover 4 years ago(edited)
Any help?
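The error matches the cluster_size logic above: with cluster_size = 1 outside prod there is no replica, so automatic failover cannot be enabled. A small sketch of tying the two arguments to the same condition:

cluster_size               = var.cicd_env != "prod" ? 1 : 2
# Only enable automatic failover when there is at least one read replica.
automatic_failover_enabled = var.cicd_env == "prod"

On module versions that support it, multi_az_enabled would typically follow the same condition.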
Zachover 4 years ago
VS Code’s terraform language-server was updated with experimental support for pre-fill of required module/resource parameters!
https://github.com/hashicorp/vscode-terraform/pull/799
To enable add this to your extension settings
"terraform-ls.experimentalFeatures":{
prefillRequiredFields": true
}Zachover 4 years ago
also @Erik Osterman (Cloud Posse) this seems particularly relevant to your org as just yesterday I was looking at the extensive changes in the eks-node-group module - https://discuss.hashicorp.com/t/request-for-feedback-config-driven-refactoring/30730
Steve Wade (swade1987)over 4 years ago
am I missing something or is there no way to attach a load balancer to a launch template?
Steve Wade (swade1987)over 4 years ago
I want to use EKS node groups in my new role as they now support taints/labels and Bottlerocket
but I need to attach an ELB to our ingress launch template
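Load balancers attach to the Auto Scaling group (or a target group), not to the launch template itself. A heavily hedged sketch for an EKS managed node group, using the ASG name the node group exposes; the node group and ELB names are assumptions and the attribute path should be checked against the provider docs:

resource "aws_autoscaling_attachment" "ingress" {
  # The managed node group publishes the generated ASG under `resources`.
  autoscaling_group_name = aws_eks_node_group.ingress.resources[0].autoscaling_groups[0].name
  elb                    = aws_elb.ingress.name # classic ELB; use a target group ARN for ALB/NLB
}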
Elvis McNeelyover 4 years ago
Hi All, I stumbled upon the SweetOps TF code library…
Elvis McNeelyover 4 years ago
I have a few questions… 🙂
Elvis McNeelyover 4 years ago
(1) What is BridgeCrew’s relationship to some of these modules? I see their logo stamped on a few of the modules?
Elvis McNeelyover 4 years ago
(2) terraform-null-label is used a lot in other SO TF modules:
https://github.com/cloudposse/terraform-null-label
I’m trying to understand why this module is so important to the other SO TF modules
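For context on (2), a minimal sketch of how terraform-null-label is typically used: it turns namespace/stage/name/attributes into one consistent ID and tag set, and the other Cloud Posse modules consume that same convention (the values and pinned version below are made up):

module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # assumed version

  namespace = "eg"
  stage     = "prod"
  name      = "app"
}

# module.label.id   => "eg-prod-app"  (used for resource names)
# module.label.tags => consistent Namespace/Stage/Name tags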
Elvis McNeelyover 4 years ago
(3) Has anyone used the SO TF modules in a statefile management tool like TFC or Scalr? I’m wondering how the use of so many SO modules operate in these tools? Any links to resources or thoughts would be appreciated.
Zikkoover 4 years ago
Hi, I would like to create a MongoDB container with persistent storage (bind volumes). But how do I do it with TF? Also, how can I create users for the container/database? Do I have to SSH into the container? Is there any other way?
Thanks
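A hedged sketch with the Docker provider (kreuzwerker/docker); argument names vary a bit between provider versions, and the official mongo image creates the root user from environment variables, so no SSH is needed. Paths and credentials below are placeholders.

terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

resource "docker_image" "mongo" {
  name = "mongo:5.0"
}

resource "docker_container" "mongo" {
  name  = "mongodb"
  image = docker_image.mongo.image_id # `latest` on older provider versions

  env = [
    "MONGO_INITDB_ROOT_USERNAME=admin",     # root user created on first start
    "MONGO_INITDB_ROOT_PASSWORD=change-me",
  ]

  # Bind-mount a host directory so the data survives container recreation.
  volumes {
    host_path      = "/srv/mongo/data"
    container_path = "/data/db"
  }

  ports {
    internal = 27017
    external = 27017
  }
}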
Grubholdover 4 years ago(edited)
Hi folks, using the aws-dynamic-subnets module we have reached a limit of 54 subnets even though our ranges are either /24 or /27. Not understanding exactly how the module is increasing the number, can you maybe hint us at a way to increase this number? Running ECS Fargate across 2 AZs. It seems related to the number of CIDRs.
Edit: For anyone interested, it turns out that the /27 range was the limiting factor; increasing it to a /24 range should now cover our usage. It seems that on AWS, in addition to the first and last address in the CIDR, a third IP is used for a reason unknown to us.
idan leviover 4 years ago(edited)
Hi, I'm using the terraform-aws-elasticsearch module (https://registry.terraform.io/modules/cloudposse/elasticsearch/aws/latest) and it works great, so first of all thanks! Just a question: is it possible to create an ES env with only username and password authentication (without an IAM ARN)? I tried advanced_security_options_master_user_password and advanced_security_options_master_user_name but I still must access ES with IAM user authentication.
Thanks for all!
Orest Kapkoover 4 years ago
Hello, I need some quick help.
I’m trying to configure Amazon MQ RabbitMQ using https://github.com/cloudposse/terraform-aws-mq-broker/releases/tag/0.15.0 latest version
Error: 1 error occurred:
* logs.audit: Can not be configured when engine is RabbitMQ
on .terraform/modules/rabbitmq_broker_processing/main.tf line 89, in resource "aws_mq_broker" "default":
89: resource "aws_mq_broker" "default" {
Releasing state lock. This may take a few moments...
I found that it was already fixed in https://github.com/hashicorp/terraform-provider-aws/issues/18067 in the 3.38.0 TF provider.
But how do I pin the provider version on the TF 0.14 version that I currently use?
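For reference, pinning the provider on Terraform 0.14 is done with a required_providers block; a minimal sketch:

terraform {
  required_version = ">= 0.14"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.38.0" # the version mentioned above as carrying the RabbitMQ logs.audit fix
    }
  }
}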
Taylorover 4 years ago
In my Terraform I am calling a local module multiple times. Since it's the same module it has the same output {}. It looks like Terraform doesn't support splatting the modules like module.[test*].my_output. Does anyone know a better way to solve this?
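One common shape (a sketch; the names are made up): call the module once with for_each, after which module.<name> becomes a map of instances you can iterate with a for expression or values():

module "test" {
  source   = "./my-module"
  for_each = toset(["a", "b", "c"])

  name = each.key
}

output "all_my_outputs" {
  value = [for m in module.test : m.my_output] # or values(module.test)[*].my_output
}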
idan leviover 4 years ago
Hi, I'm using the terraform-aws-elasticsearch module (https://registry.terraform.io/modules/cloudposse/elasticsearch/aws/latest) and it works great, so first of all thanks! Just a question: is it possible to create an ES env with only username and password authentication (without an IAM ARN)? I tried advanced_security_options_master_user_password and advanced_security_options_master_user_name but I still must access ES with IAM user authentication.
Naveen Reddyover 4 years ago
Hi all, can someone help me with adding a redrive_policy for multiple SQS queues when calling one resource?
Kevin Lesherover 4 years ago
Hi! I’m using the sns-topic module for standard queues currently, but need to make use of a FIFO queue now. Since it’s a FIFO queue, AWS requires that the name end in .fifo. For some reason the module is stripping my period out. Is there some other variable I’m missing setting for this besides setting fifo_topic to true?
Digging through the module, I see there’s a replace(module.this.id, ".", "-") for display_name, but I’m not seeing why it’s happening for the Topic name.
Ex: Setting it to -fifo leaves it unchanged: results-test-fifo
But ending in .fifo results in: results-testfifo
rssover 4 years ago(edited)
v1.1.0-alpha20211020
1.1.0 (Unreleased)
UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed...
Ryanover 4 years ago
Anyone run into TF hanging when refreshing state? Really weird, specific to a single module, hangs when trying to get the state of an IAM role attached to a Google service account. Hangs forever. Other modules work fine (which also get the state of the same service account).
Adnanover 4 years ago
I'm having trouble importing a name_prefix'd resource.
After I import it successfully and plan I get
+ name_prefix = "something" # forces replacement
Has anyone had the same issue and a solution?
Naveen Reddyover 4 years ago
Hey everyone, can someone help me with how to fix the redrive policy for the dead-letter queue?
Naveen Reddyover 4 years ago
resource "aws_sqs_queue" "queue1" {
for_each = toset(var.repolist)
name = "${each.value}${var.environmentname}"
delay_seconds = 10
max_message_size = 86400
message_retention_seconds = 40
receive_wait_time_seconds = 30
visibility_timeout_seconds = 30
}
resource "aws_sqs_queue" "deadletter" {
for_each = toset(var.repolist)
name = "${each.value}-deadletter-${var.environmentname}"
delay_seconds = 10
max_message_size = 86400
message_retention_seconds = 40
receive_wait_time_seconds = 30
visibility_timeout_seconds =30
redrive_policy = jsonencode({
deadLetterTargetArn=values(aws_sqs_queue.queue1)[*].arn
maxReceiveCount = 4
})
}
Naveen Reddyover 4 years ago
Everything is working, but I'm unable to add a redrive policy for each value in the repolist variable.
Naveen Reddyover 4 years ago
Any help would be appreciated
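A sketch of one way to line the queues up per key: create the dead-letter queues first, then give each main queue a redrive_policy that points at its matching DLQ by key, so deadLetterTargetArn is a single ARN rather than a list. Note this puts the redrive_policy on the source queue, which is where SQS expects it; the other arguments from the snippet above are omitted for brevity.

resource "aws_sqs_queue" "deadletter" {
  for_each = toset(var.repolist)
  name     = "${each.value}-deadletter-${var.environmentname}"
}

resource "aws_sqs_queue" "queue1" {
  for_each = toset(var.repolist)
  name     = "${each.value}${var.environmentname}"

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.deadletter[each.key].arn # one ARN per queue
    maxReceiveCount     = 4
  })
}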
Ramover 4 years ago(edited)
Hello everyone, has anyone formatted the additional disks of an EC2 (Windows) instance and mounted them as different drives via Terraform before?
Almondovarover 4 years ago
hi guys, this is a question regarding EKS - is anyone implementing the node termination handler?
This is a copy of what the notes of the EKS module mention:
• Setting instance_refresh_enabled = true will recreate your worker nodes without draining them first. It is recommended to install aws-node-termination-handler for proper node draining. Find the complete example here: instance_refresh.
What can go wrong in a production system if we don't drain the nodes first? The new nodes will already be spun up, and k8s will shift the load to them when we kill the old nodes, correct?
Thomas Eckover 4 years ago
Any idea how to adjust read_capacity or write_capacity in https://github.com/cloudposse/terraform-aws-dynamodb when the autoscaler is turned off? We need the lifecycle ignore to make the autoscaler work, but it breaks manual scaling: https://github.com/cloudposse/terraform-aws-dynamodb/blob/master/main.tf#L68. AFAIK there is no way to have dynamic lifecycle blocks in Terraform so far.
OliverSover 4 years ago
looks like the thread about pros and cons of tf over cloudformation is gone, too old so Slack discarded it. I remember modules, imports, resume from where left off at last error, state manipulation and refactoring, were all mentioned as advantages of tf.
CDK and recent improvement to cloudformation (resume from error) shortens that list a bit, with import and state file manip still big ones IMO.
Any other big ones I'm forgetting?
A Holbreichover 4 years ago(edited)
Hi all, I see a strange error
╷
│ Error: Invalid resource instance data in state
│
│ on .terraform/modules/subnets/public.tf line 46:
│ 46: resource "aws_route_table" "public" {
│
│ Instance module.subnets.aws_route_table.public[0] data could not be decoded from the state: unsupported attribute "timeouts".
when executing a totally unrelated import like:
terraform import kubernetes_cluster_role.cluster_admin admin
cloudposse/terraform-aws-dynamic-subnets latest version..
Any ideas?
discourseover 4 years ago
Sherifover 4 years ago
Have you checked the new PR “merge queue” feature ? This is gonna be so useful for automating Terraform
Thiago Almeidaover 4 years ago
Hey, I'm looking to use the module terraform-aws-tfstate-backend and got curious about why you have two DynamoDB tables, one with and one without encryption, instead of using some ternary logic to set encryption=false in a single table.
https://github.com/cloudposse/terraform-aws-tfstate-backend/blob/master/main.tf#L230
rssover 4 years ago(edited)
v1.0.10
1.0.10 (October 28, 2021)
BUG FIXES:
backend/oss: Fix panic when there's an error looking up OSS endpoints (#29784)
backend/remote: Fix version check when migrating state (#29793)
Mike Robinsonover 4 years ago
Has anyone seen this before when trying to interface with an EKS cluster via the Terraform Kubernetes provider?
Context:
• Cluster created using https://github.com/cloudposse/terraform-aws-eks-cluster 0.43.1
• CI workflow assumes the IAM role that created the cluster (lets call it TerraformRole).
• used map_additional_iam_roles to add a role assumed by Ops (lets call it OpsRole)
• Calls using kubectl work fine when assuming either OpsRole or TerraformRole
• Attempts to plan anything requiring the kubernetes or helm provider explode with:
Error: configmaps "aws-auth" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Error: query: failed to query with labels: secrets is forbidden: User "system:anonymous" cannot list resource "secrets" in API group "" in the namespace "reloader"
From a few search results I've discovered that sometimes this can be worked around by target-planning the eks cluster module. Having done that I've noticed that the tls_certificate datasource always wants to refresh, which also triggers the OIDC provider refreshing.
https://gist.github.com/mrobinson-wavehq/928ee4ef5ec54c35fc2bb6abc4648189
Debug output hasn't provided much of value but if it helps I can gist it.
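The system:anonymous errors usually mean the kubernetes/helm providers are reaching the API server without a token. A hedged sketch of the common wiring; the module output name is an assumption, so check the cluster module's outputs:

data "aws_eks_cluster" "this" {
  name = module.eks_cluster.eks_cluster_id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.eks_cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}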
Eugeneover 4 years ago
Hello all, currently working on a project in AWS and eventually want to put our architecture into Terraform but was wondering if there is value in using AWS Cloudformation or AWS Cloud Development Kit (CDK) additionally or as a substitute to Terraform. Thanks!
rssover 4 years ago
v1.1.0-alpha20211029
1.1.0 (Unreleased)
UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed...
Joan Portaover 4 years ago
Hi guys, what is the Terraform syntax to iterate the creation of the resource here?