48 messages
dan callum · over 3 years ago
Is there a way around using count & for_each in a module? I know that both cannot be used on the same block, but what I am trying to do is write a module which iterates over a map and creates resources using for_each, while also using a boolean value in the map to conditionally create each resource.
If the boolean value is false then don't create it, and if true then create it.
Any suggestions/workarounds?
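A common workaround (a sketch, not from the thread; the variable and resource names are hypothetical) is to drop count entirely and filter the map inside the for_each expression, so only entries whose flag is true produce a resource:

```hcl
variable "buckets" {
  type = map(object({
    enabled = bool
  }))
}

# Only entries with enabled = true yield an instance of the resource.
resource "aws_s3_bucket" "this" {
  for_each = { for name, cfg in var.buckets : name => cfg if cfg.enabled }

  bucket = each.key
}
```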
Isaac Campbell · over 3 years ago (edited)
Got a PR I'd love for y'all to look at: https://github.com/cloudposse/terraform-aws-eks-node-group/pull/125 !
Sam Skynner · over 3 years ago
Any chance this could get merged? It was approved 8 days ago:
https://github.com/cloudposse/terraform-aws-ssm-tls-self-signed-cert/pull/14
Would be incredibly helpful 🙏
rss · over 3 years ago
v1.3.0-alpha20220803
1.3.0 (Unreleased)
NEW FEATURES:
Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn't set it. For example:
variable "with_optional_attribute" {
type = object({
a = string # a required attribute
b = optional(string) # an optional attribute
c = optional(number, 127) # an...
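To illustrate the feature announced above, here is a hedged sketch completing the truncated changelog example (the caller shown in comments is made up):

```hcl
variable "with_optional_attribute" {
  type = object({
    a = string                # a required attribute
    b = optional(string)      # an optional attribute, defaults to null
    c = optional(number, 127) # an optional attribute with a default value
  })
}

# A caller may omit the optional attributes entirely:
# module "example" {
#   source                  = "./modules/example"
#   with_optional_attribute = { a = "foo" } # b => null, c => 127
# }
```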
Brent Farand · over 3 years ago
Hello! Our organization has been making use of your https://github.com/cloudposse/terraform-aws-components as a starting point for our infrastructure. I notice that the iam-primary-roles and iam-delegated-roles components have been replaced by the aws-teams and aws-team-roles components respectively. I was planning on moving to these new components, but it doesn't look like the account-map component has the module they refer to, team-assume-role-policy. I also see a reference to an aws-saml component in the documentation and code that also doesn't appear to be present in the repo. Is there an ETA on when these pieces will make their way to the main branch of the repo? Thank you!
Sean · over 3 years ago (edited)
What are y’all doing these days for feeding data (such as outputs) between root modules?
• a) tagging resources in the source module and using data resources from the target (this works within providers, such as looking up with AWS tags)
• b) remote state
• c) terragrunt
• d) something else
mog · over 3 years ago
Does anyone have any experience setting up GuardDuty in an AWS Org? I'm a bit confused about the difference between aws_organizations_delegated_administrator and aws_guardduty_organization_admin_account.
Adarsh Hiwrale · over 3 years ago
Hi everyone!
I am trying to attach multiple load balancers to an ECS service. ecs-alb-service-task and ecs-container-definition are the modules I am using. Is it possible to attach multiple load balancers, an Application LB for internal use and a Network LB for external use?
Bradley Peterson · over 3 years ago
Hi! Anyone know how to work around this bug? I hit it when using cloudposse/ecs-web-app/aws, same as the reporter: https://github.com/cloudposse/terraform-aws-alb-ingress/issues/56
Adam Kenneweg · over 3 years ago (edited)
For https://github.com/cloudposse/terraform-aws-eks-cluster, is there a way to automatically update my kubeconfig (like aws eks update-kubeconfig does) so I can apply kubectl manifests later? I manually made a resource:
resource "null_resource" "updatekube" {
  depends_on = [module.eks_cluster]
  provisioner "local-exec" {
    command = format("aws eks update-kubeconfig --region %s --name %s", var.region, module.eks_cluster.eks_cluster_id)
  }
}
but it breaks because module.eks_cluster.eks_cluster_id is delayed in its creation and the value is wrong, so it takes multiple terraform applies to work, messing up the rest of my Terraform setup.
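One possible adjustment (a sketch, not a confirmed fix): give the null_resource a trigger on the cluster ID, so the provisioner re-runs whenever the ID becomes known or changes, rather than only on the resource's first create:

```hcl
resource "null_resource" "updatekube" {
  # Re-run the provisioner whenever the cluster ID changes,
  # instead of only on the first apply.
  triggers = {
    cluster_id = module.eks_cluster.eks_cluster_id
  }

  provisioner "local-exec" {
    command = format(
      "aws eks update-kubeconfig --region %s --name %s",
      var.region,
      module.eks_cluster.eks_cluster_id
    )
  }
}
```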
Abdelaziz Besbes · over 3 years ago
Hello all!
A while ago, I created a backup module as follows and deployed it with Terraform:
module "backup" {
  source  = "cloudposse/backup/aws"
  version = "0.12.0"
  ...
}
When I try to redeploy my stack, it states that my recovery points have been changed outside of Terraform (which is logical):
# module.backup.aws_backup_vault.default[0] has changed
~ resource "aws_backup_vault" "default" {
    ~ recovery_points = 2 -> 78
    ...
  }
How could I add a lifecycle that ignores this change on the module side? Thank you all!
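For reference, a sketch of what such a lifecycle block looks like. Two caveats: Terraform does not let callers inject lifecycle into a module, so this would have to go into the module's own source (or a fork); and recovery_points is a computed attribute, so it is not certain ignore_changes silences a refresh-only notice rather than a real plan diff:

```hcl
resource "aws_backup_vault" "default" {
  name = var.vault_name

  # Hypothetical: ignore drift in the recovery point count,
  # which grows as backups accumulate outside of Terraform.
  lifecycle {
    ignore_changes = [recovery_points]
  }
}
```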
rss · over 3 years ago (edited)
v1.2.7
1.2.7 (August 10, 2022)
ENHANCEMENTS:
config: Check for direct references to deprecated computed attributes. (#31576)
BUG FIXES:
config: Fix a crash if a submodule contains a resource whose implied provider local name contains invalid characters, by adding additional validation rules to turn it into a real error. (<a...
emem · over 3 years ago
Hello all, I recently had this error while applying Terraform for github_repository_webhook. I don't know if anyone has come across something like this before while working on CodePipeline:
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to
│ module.xxxx.aws_codepipeline_webhook.main, provider
│ "module.xxxxx.provider[\"<http://registry.terraform.io/hashicorp/aws\|registry.terraform.io/hashicorp/aws\>"]"
│ produced an unexpected new value: Root resource was present, but now
│ absent.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
Error: POST https://api.github.com/repos/xxx/test/hooks: 422 Validation Failed [{Resource:Hook Field: Code:custom Message:The "push" event cannot have more than 20 hooks}]
Frank · over 3 years ago (edited)
It seems that terraform-aws-ses is broken since Terraform 1.2.7 because of its dependency terraform-aws-iam-system-user: it is throwing unexpected parameter_write and context errors when running terraform validate on TF 1.2.7, which works fine on 1.2.6. Anyone else experienced this? Based on the changelog it is likely due to this change: https://github.com/hashicorp/terraform/issues/31576
Isaac Campbell · over 3 years ago (edited)
Regarding https://github.com/cloudposse/terraform-aws-eks-cluster: there is currently no mapping to create custom ingress rules on this, correct? I see you link aws_security_group_rule.ingress_security_groups in the module, but no actual inputs in the module to adjust for HTTPS or custom ports.
JoseF · over 3 years ago
Hello Sweet Ops Genies.
I was wondering how to create 2 different RDS instances at the same time (same file), using cloudposse/rds/aws. If I create the first one with this block:
vpc_id = module.vpc.vpc_id
security_group_ids = [module.vpc.vpc_default_security_group_id, module.sg.id]
associate_security_group_ids = [module.vpc.vpc_default_security_group_id, module.sg.id]
subnet_ids = module.subnets.public_subnet_ids
engine = var.engine
engine_version = var.engine_version
major_engine_version = var.major_engine_version
instance_class = var.instance_class
db_parameter_group = var.db_parameter_group
multi_az = var.multi_az
dns_zone_id = var.dns_zone_id
host_name = "rds-${var.namespace}-${var.environment}-auth-${var.stage}-${var.name}"
publicly_accessible = var.publicly_accessible
database_name = var.database_name
database_user = var.database_user
database_password = var.database_password
database_port = var.database_port
auto_minor_version_upgrade = var.auto_minor_version_upgrade
allow_major_version_upgrade = var.allow_major_version_upgrade
deletion_protection = var.deletion_protection
storage_type = var.storage_type
iops = var.iops
allocated_storage = var.allocated_storage
storage_encrypted = var.encryption_enabled
kms_key_arn = var.kms_key_arn
snapshot_identifier = var.snapshot_identifier_auth
performance_insights_enabled = var.encryption_enabled
performance_insights_kms_key_id = var.kms_key_arn
performance_insights_retention_period= var.performance_insights_retention_period
monitoring_interval = var.monitoring_interval
monitoring_role_arn = aws_iam_role.enhanced_monitoring.arn
apply_immediately = var.apply_immediately
backup_retention_period = var.backup_retention_period
context = module.this.context
and create the second one with the same module block except:
host_name = "rds-${var.namespace}-${var.environment}-api-${var.stage}-${var.name}"
then terraform apply fails with the error:
Error creating DBInstances, the DBSubnetGroup already exists.
The module creates 2 different groups with the same naming convention (name, stage, environment). Inside my head that was perfect (what a dreamer I was). Clearly the logic is different.
I was looking to do it with a for_each, but I am stuck.
What I am trying to do is create 2 RDS instances with the same parameters, inside the same VPC, subnets and so on. Any clue how I should approach this?
The reason? Simple: have 2 isolated RDS instances sharing the same parameters, for auth/api respectively.
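A possible approach (a sketch, assuming the module derives unique resource names from the attributes input, as Cloud Posse's null-label convention does): use for_each on the module block itself and vary the attributes per instance, so the generated DB subnet group names no longer collide:

```hcl
locals {
  rds_instances = toset(["auth", "api"])
}

module "rds" {
  source   = "cloudposse/rds/aws"
  for_each = local.rds_instances

  # Distinct attributes make every derived resource name unique,
  # including the DB subnet group.
  attributes = [each.key]
  host_name  = "rds-${var.namespace}-${var.environment}-${each.key}-${var.stage}-${var.name}"

  # ...all the shared parameters from the block above...
  context = module.this.context
}
```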
Clemens · over 3 years ago (edited)
Hi all,
I'm currently playing around with Elastic Beanstalk and writing a module myself to understand the behaviour. Just creating a simple EB application and environment, using AWS provider version "~> 4.5.0" and Terraform version 1.1.7.
The EB application was created by another module I've written and is working, but when creating the EB environment there is an error at the apply step (not at the plan step), which is the following:
Error: ConfigurationValidationException: Configuration validation exception: Invalid option specification (Namespace: 'VPCId', OptionName: 'aws:ec2:vpc'): Unknown configuration setting.
Referencing the AWS docs, the attribute should be correct. I double-checked the plan output of the VPC ID against the actual state in the AWS UI. I'll post the code as a comment.
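Judging by the error text, the namespace and option name appear to be transposed ('VPCId' is being sent as the namespace). A sketch of the expected shape of the setting block on aws_elastic_beanstalk_environment (resource and variable names here are illustrative):

```hcl
resource "aws_elastic_beanstalk_environment" "this" {
  name        = var.environment_name
  application = var.application_name

  # Easy to swap by accident: the namespace is "aws:ec2:vpc",
  # the option name is "VPCId".
  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = var.vpc_id
  }

  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = join(",", var.subnet_ids)
  }
}
```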
Alex Mills · over 3 years ago
When using cloudposse/ecr/aws, is there a way to force-delete ECR repositories that contain images?
Christopher Wade · over 3 years ago
Hello all! $Job had a period of time over the last two days where a set of deployments failed, seemingly because a single reference to cloudposse/label/null 0.24.1 failed to download for a usage of cloudposse/route53-alias/aws 0.12.0 within one of our internal modules, despite that same reference downloading several other times for other usages in the same template. In at least one of the impacted pipelines, we can confirm that no other changes were present between the working and non-working runs. This coincidentally started after Terraform 1.2.7 was released, but has now started working again with no other changes.
RB · over 3 years ago (edited)
Shawn Stout · over 3 years ago
here
Isaac Campbell · over 3 years ago
bump
Isaac Campbell · over 3 years ago
I'm probably going to make a PR against this to support a for_each for custom ingress rules
Alex Jurkiewicz · over 3 years ago (edited)
IMO, adding the awsutils provider as a dependency of (what will ultimately be) many of your modules is a mistake: https://github.com/cloudposse/terraform-aws-iam-system-user/releases/tag/0.23.0
The use-case for this functionality (automatically expiring access keys) is not needed for the majority of these modules, which makes the requirement extra weird.
rss · over 3 years ago
v1.3.0-alpha20220817
1.3.0 (Unreleased)
NEW FEATURES:
Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn't set it. For example:
variable "with_optional_attribute" {
type = object({
a = string # a required attribute
b = optional(string) # an optional attribute
c = optional(number, 127) # an...
Isaac Campbell · over 3 years ago
sweet lord, that's what we've been waiting for
GM · over 3 years ago
Anyone got a svelte solution for replacing rather than updating a resource if there are any changes? I am wondering if I can just use replace_triggered_by and point to a string of input variables that could potentially change.
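replace_triggered_by (Terraform >= 1.2) only accepts references to managed resources, not arbitrary expressions, so a common pattern is to funnel the input values through a helper resource and reference that. A sketch with hypothetical names:

```hcl
# Changes to the hashed inputs replace this helper resource...
resource "null_resource" "replacement_inputs" {
  triggers = {
    config_hash = sha1(jsonencode(var.settings))
  }
}

# ...which in turn forces replacement of the real resource.
resource "aws_instance" "example" {
  ami           = var.ami_id
  instance_type = var.instance_type

  lifecycle {
    replace_triggered_by = [null_resource.replacement_inputs]
  }
}
```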
PePe Amengual · over 3 years ago
A while ago someone posted a tool to create a diagram of resources in TF, even better if it does a network diagram. Does anyone remember?
Adnan · over 3 years ago
How do you deal with the aws_db_instance password attribute?
Mikhail Naletov · over 3 years ago
Hi there.
Seems this pattern does not work for records pointing to multiple destinations: https://github.com/cloudposse/terraform-cloudflare-zone/blob/master/main.tf#L9-L14
Shouldn't we add the value to the map index? Or maybe introduce a new resource handling multiple values per one record?
András Sándor · over 3 years ago
Hi, I'm looking for some clarification on a TF behavior I don't fully understand. I'm using terraform-aws-ecs-alb-service-task (as a submodule of terraform-aws-ecs-web-app), and running plan/apply on it creates an update even when nothing was modified, something like this:
module.ecs_web_app.module.ecs_alb_service_task.aws_ecs_service.default[0] will be updated in-place
  ...
  task_definition = "dev-backend:29" -> "dev-backend:8"
  ...
The task definition revision number is the only thing that's modified by TF, and the revisions themselves are identical. Anyone met this behaviour before?
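One hedged possibility: terraform-aws-ecs-alb-service-task exposes an ignore_changes_task_definition input intended for exactly this kind of revision flapping (worth verifying against the module version in use, and whether terraform-aws-ecs-web-app passes it through):

```hcl
module "ecs_alb_service_task" {
  source = "cloudposse/ecs-alb-service-task/aws"

  # Tell the module to ignore task_definition drift, so externally
  # registered revisions don't produce a perpetual in-place update.
  ignore_changes_task_definition = true

  # ...remaining inputs unchanged...
}
```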
el · over 3 years ago
Any recommendations for how to handle enums in Terraform? Should I just use local variables and refer to those?
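Terraform has no first-class enum type; the usual substitutes are a variable validation block to constrain allowed values, plus locals to map enum names to concrete settings. A sketch (all names illustrative):

```hcl
variable "environment" {
  type        = string
  description = "Deployment tier; must be one of the allowed values."

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}

locals {
  # Map each enum value to the settings it implies.
  instance_type_by_env = {
    dev     = "t3.micro"
    staging = "t3.medium"
    prod    = "m5.large"
  }
  instance_type = local.instance_type_by_env[var.environment]
}
```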
ghostface · over 3 years ago (edited)
I have a list(map(string)) like this:
Changes to Outputs:
+ internal_ranges_map = [
+ {
+ "CIDR" = "10.1.xx.54/32"
+ "description" = "x"
},
+ {
+ "CIDR" = "10.2.9.1xxxx/32"
+ "description" = "x"
},
+ {
+ "CIDR" = "172.22.16.0/23"
+ "description" = "xx"
},
+ {
+ "CIDR" = "172.22.18.0/23"
+ "description" = "xxx"
},
+ {
+ "CIDR" = "172.22.1x.0/23"
+ "description" = "xx"
},
+ {
+ "CIDR" = "172.22.1xxx.0/23"
+ "description" = "sharxxxx1_az1"
},
+ {
+ "CIDR" = "172.22.xx.0/23"
+ "description" = "sxxxxxuse1_az4"
},
+ {
+ "CIDR" = "xx.xx.xx/23"
+ "description" = "sxxx"
},
  ]
How do I form a list of each CIDR from internal_ranges_map?
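A for expression over the list is the usual answer to the question above (a sketch; the value name is assumed from the plan output):

```hcl
# Extract just the CIDR value from each map in the list.
output "internal_ranges_cidrs" {
  value = [for range in local.internal_ranges_map : range["CIDR"]]
}
```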
OliverS · over 3 years ago
Does anyone have an easy way to determine the current version of modules in use vs. the latest available? I'm thinking of a list of module names, URL, version installed by init, and version available in the registry.
Charles Smith · over 3 years ago
Hello, @Brent Farand and I are making use of your https://github.com/cloudposse/terraform-aws-components and trying to follow much of your foundational reference architecture at our organisation. (Thank you BTW, it's some truly awesome work).
We're deploying EKS with your components and I've found the components eks-iam, external-dns, and alb-controller for deploying IAM service accounts and the necessary k8s controllers. I'm curious, however, if you have a module that handles the deployment of a cluster autoscaler controller? I can see references to an autoscaler in a number of components but haven't been able to find one that actually deploys it. Am I missing something, or should we just use the external-dns component as a starting place and create our own cluster-autoscaler component?
Soren Jensen · over 3 years ago
Hi all, I've got an issue automating our deployment of an ECS service. The service consists of 1 task with 3 containers. Network mode is HOST, as one of the containers is running an SSH service where we can log in. (It should be an EC2 instance instead, but it's out of my control to change this right now.)
Terraform apply gives me the following error:
service demo-executor was unable to place a task because no container instance met all of its requirements. The closest matching container-instance 5afff47b0b0e4199a97644b7a050d368 is already using a port required by your task
How do I get terraform to destroy my service before it attempts to redeploy it?
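With host networking there is a standard ECS knob for this (a sketch; whether these are set on the raw resource or passed through a module depends on how the service is defined): allow the deployment to drop to 0% healthy so the old task is stopped, freeing its host ports, before the new one is placed:

```hcl
resource "aws_ecs_service" "demo_executor" {
  name            = "demo-executor"
  cluster         = var.cluster_arn
  task_definition = var.task_definition_arn
  desired_count   = 1

  # Stop the old task first (down to 0% healthy) and never run old
  # and new side by side, so host ports are free for the new task.
  deployment_minimum_healthy_percent = 0
  deployment_maximum_percent         = 100
}
```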
rss · over 3 years ago (edited)
v1.2.8
1.2.8 (August 24, 2022)
BUG FIXES:
config: The flatten function will no longer panic if given a null value that has been explicitly converted to or implicitly inferred as having a list, set, or tuple type. Previously Terraform would panic in such a situation because it tried to "flatten" the contents of the null value into the result, which is impossible. (<a href="https://github.com/hashicorp/terraform/issues/31675" data-hovercard-type="pull_request"...
Eric Berg · over 3 years ago
Regarding the terraform-aws-rds-cluster module, I'd like to apply db_role tags, as appropriate, to the writer and readers. Anybody know how to make that happen?
Erik Osterman (Cloud Posse) · over 3 years ago
🎉 You can ask questions now on StackOverflow and tag #cloudposse
https://stackoverflow.com/tags/cloudposse/info
Nikhil Purva · over 3 years ago
Hi Team,
Regarding the terraform-aws-waf module, I would like to use and_statement, or_statement and not_statement of different types. Is it possible to do that? If yes, please let me know how we can achieve this.
Stephen Bennett · over 3 years ago
Any ideas on what's wrong with this external data call? It returns an output of "23445234234", so I would expect that to be seen as a string.
data "external" "cognito" {
  program = ["sh", "-c", "aws cognito-idp list-user-pool-clients --user-pool-id eu-west-xxx | jq '.UserPoolClients | .[] | select(.ClientName | contains(\"AmazonOpenSearchService\")) | .ClientId'"]
}
but I get an error:
│ The data source received unexpected results after executing the program.
│
│ Program output must be a JSON encoded map of string keys and string values.
│
│ If the error is unclear, the output can be viewed by enabling Terraform's logging at TRACE level. Terraform documentation on logging: https://www.terraform.io/internals/debugging
│
│ Program: /bin/sh
│ Result Error: json: cannot unmarshal string into Go value of type map[string]string
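The error says the program must print a JSON object of string keys and values, not a bare string. A sketch of a fix: have jq wrap the value in an object (the client_id key name here is arbitrary):

```hcl
data "external" "cognito" {
  # jq's {client_id: ...} wraps the bare ClientId string in the
  # JSON object of string keys/values the external provider requires.
  program = [
    "sh", "-c",
    "aws cognito-idp list-user-pool-clients --user-pool-id eu-west-xxx | jq '{client_id: (.UserPoolClients[] | select(.ClientName | contains(\"AmazonOpenSearchService\")) | .ClientId)}'"
  ]
}

# Referenced afterwards as data.external.cognito.result.client_id
```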
Alex Jurkiewicz · over 3 years ago
Using cloudposse's null label module, how can I zero the attributes when inheriting a context? For example:
module "label" {
source = "cloudposse/label/null"
name = "waf"
attributes = ["regional"]
}
module "label_cloudfront" {
source = "cloudposse/label/null"
context = module.label.context
attributes = ["cloudfront"]
}
I end up with labels waf-regional and waf-regional-cloudfront, when I want the second one to be waf-cloudfront.
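null-label merges attributes additively from the inherited context, so the child module cannot subtract them (worth verifying against the module docs). One workaround sketch: derive both labels from a shared base that carries no attributes of its own:

```hcl
# Base label carries everything except attributes.
module "label_base" {
  source = "cloudposse/label/null"
  name   = "waf"
}

module "label" {
  source     = "cloudposse/label/null"
  context    = module.label_base.context
  attributes = ["regional"] # => waf-regional
}

module "label_cloudfront" {
  source     = "cloudposse/label/null"
  context    = module.label_base.context
  attributes = ["cloudfront"] # => waf-cloudfront
}
```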
Nikhil Purva · over 3 years ago
Hi Team,
I am using Cloud Posse's WAF module and trying to use byte_match_statement_rules with single_header, but I'm getting the error: The given value is not suitable for module.waf.var.byte_match_statement_rules declared at modules/terraform-aws-waf/variables.tf:46,1-38: all list elements must have the same type.
Below is my template:
field_to_match = {
  single_header = [{
    name = "Host"
  }]
}
Jonas Steinberg · over 3 years ago (edited)
Does anyone have any opinions on how best to introduce AWS Terraform tagging standards across a ton of GitHub repos that contain application code (not really important) as well as all the relevant Terraform for that service (key factor)?
• The brute-force approach, I suppose, would be to clone all repos to local (already done), then write a script that handles iterating over the repos, traversing into the relevant directories and files where the Terraform source is, and writing a tagging object
• Another approach would be to mandate to engineering managers that people introduce a pre-determined tagging standard
• Any other ideas?
I'm new to trying out ABAC across an org.
Aumkar Prajapati · over 3 years ago (edited)
Question all, does this module support MSK serverless? https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster
rss · over 3 years ago
v1.3.0-beta1
1.3.0 (Unreleased)
NEW FEATURES:
Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn't set it. For example:
variable "with_optional_attribute" {
type = object({
a = string # a required attribute
b = optional(string) # an optional attribute
c = optional(number, 127) # an...
Michał Woś · over 3 years ago
Any chance of having cloudposse/s3-bucket/aws bumped to at least 2.0.1 here https://github.com/cloudposse/terraform-aws-s3-log-storage/pull/72 and merged?
rss · over 3 years ago (edited)
v1.3.0-beta1
1.3.0 (Unreleased)
NEW FEATURES:
Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn't set it. For example:
variable "with_optional_attribute" {
type = object({
a = string # a required attribute
b = optional(string) # an optional attribute
c = optional(number, 127) # an...