79 messages
Shanmugam .shan7almost 4 years ago
Hi, I'm very new to Terraform and I'm using this Cloud Posse module https://github.com/cloudposse/terraform-aws-service-control-policies. I'm getting the error below; can someone help me with this?
$ terraform apply
╷
│ Error: Reference to undeclared module
│
│   on main.tf line 9, in module "yaml_config":
│    9:   context = module.this.context
│
│ No module call named "this" is declared in the root module.
╵
╷
│ Error: Invalid reference
│
│   on main.tf line 18, in module "service_control_policies":
│   18:   service_control_policy_description = test
│
│ A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
╵
╷
│ Error: Reference to undeclared module
│
│   on main.tf line 21, in module "service_control_policies":
│   21:   context = module.this.context
│
│ No module call named "this" is declared in the root module.
╵
Shanmugam .shan7almost 4 years ago
$ terraform apply plan.tf
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
organizations_policy_arn = ""
organizations_policy_id = ""
Shanmugam .shan7almost 4 years ago(edited)
Am i missing something?
Josh Hollowayalmost 4 years ago
@Shanmugam .shan7 are you using the code directly from the README? You're probably missing https://github.com/cloudposse/terraform-aws-service-control-policies/blob/master/examples/complete/context.tf
Shanmugam .shan7almost 4 years ago
@Josh Holloway I'm using the example which is given in the repo
Shanmugam .shan7almost 4 years ago
Do we need to modify something in the example?
Josh Hollowayalmost 4 years ago
Couple of things: can you run the following in the directory where you've put the code?
terraform version
then:
ls
Shanmugam .shan7almost 4 years ago
$ terraform version
Terraform v1.1.6
on darwin_amd64
Your version of Terraform is out of date! The latest version
is 1.1.7. You can update by downloading from https://www.terraform.io/downloads.html
Shanmugam .shan7almost 4 years ago
$ ls
LICENSE Makefile README.md README.yaml catalog context.tf docs examples main.tf outputs.tf test variables.tf versions.tf
Josh Hollowayalmost 4 years ago
Oh so you've cloned the repo itself? Gotcha...
Are you running terraform from this directory or from the examples/complete one?
Shanmugam .shan7almost 4 years ago
Yes
Shanmugam .shan7almost 4 years ago(edited)
Do I need to make any changes?
Shanmugam .shan7almost 4 years ago
Got it, I think we need to use terraform apply -var-file fixtures.us-east-2.tfvars
Josh Hollowayalmost 4 years ago
Yeah... the recommended way is to write your own Terraform configuration and reference the module using the GitHub source: https://www.terraform.io/language/modules/sources#github
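For example, a minimal root-module sketch. The ref and input values are illustrative, and the input names follow the module's README as best I recall, so double-check them against the repo:
module "service_control_policies" {
  source = "git::https://github.com/cloudposse/terraform-aws-service-control-policies.git?ref=tags/0.9.1"

  name                               = "example"
  target_id                          = "ou-xxxx-xxxxxxxx" # OU or account the policy attaches to
  service_control_policy_description = "Example SCP"
  service_control_policy_statements  = []                 # fill in with real policy statements

  # context = module.this.context is only needed if you also copy context.tf into your root module
}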
Dan Herringtonalmost 4 years ago
Anyone using beanstalk and govcloud with terraform? I'm running into an issue getting the zoneId of the beanstalk environment passed along during deploy. I keep getting NoSuchHostedZone when using the using the builtin terraform data resource for elasticbeanstalk_hosted_zone.
sohaibahmed98almost 4 years ago(edited)
Hi Guys,
Is it good practice to keep a separate state file for AWS services and for each microservice, and then use terraform remote state (https://www.terraform.io/language/state/remote-state-data)? Or should we keep everything in a single state file?
Matt Gowiealmost 4 years ago
Hey Cloud Posse team, what's the current status of the opsgenie vs opsgenie-team components? From looking at it, opsgenie-team is newer and less complicated, but I'm wondering if it's intended to completely replace the older opsgenie component. Can anybody shed some light on that? @Yonatan Koren, maybe you, since I see you were part of the most recent updates?
PePe Amengualalmost 4 years ago(edited)
I use cloudposse modules pretty much always, but I'm working with a client that does not allow cloning git repos from outside the org. How can I package a module and all its dependencies? Is there such a tool? Because, as you know, if I clone the ecs module I will have to clone something like 30 dependencies manually, and I do not want to.
Chin Samalmost 4 years ago(edited)
hi everyone! Quick question: has anyone used the following cloudposse module: https://github.com/cloudposse/terraform-aws-cloudwatch-events/tree/0.5.0? It provides the required input cloudwatch_event_rule_pattern, and I want to pass a CloudWatch cron / rate expression to it to trigger a Lambda. So my question is whether the module supports that, since it's failing. Any feedback is greatly appreciated, thanks.
Jeremy G (Cloud Posse)almost 4 years ago(edited)
Announcement and Request for Comments!
We have had numerous requests to adopt production-level SemVer versioning for our public Terraform modules. We feel we are now ready to begin this process (slowly), and somewhat forced to do so due to the numerous breaking changes we are going to be releasing, at least partly due to the release of AWS Terraform Provider v4, which substantially breaks how S3 is configured.
Unfortunately, we feel compelled to begin our new SemVer launches in a rather awkward way, again due to the AWS v4 changes, particularly around S3.
The general roadmap we are planning, and for which we would like from you either a show of support or recommendations for alternatives (and explanation of their benefits) is as follows. For each Terraform module (as we get to it):
1. The latest version that is compatible with AWS v3 and has no breaking changes will be released as v1.0.0, and version pinned to only work with AWS v3 if it is not also compatible with v4
2. The module may need refactoring, leading to breaking changes. Refactoring compatible with AWS v3 will be done and released as v2.0.0
3. The module will be brought into compliance with our current design patterns, such as use of the security-group and s3-bucket modules and the standardized inputs they allow. These modules may or may not expose the complete feature set of the underlying modules, as part of the point of some of them is to provide reasonable, "best practice" defaults and minimize the effort needed to configure the other modules for a specific intended use case. The module will be version pinned to require AWS v4 and Terraform v1. This module will get the next major release number, either v2 or v3 depending on what happened in step 2.
One drawback is that this can result in 2 or 3 major version releases happening in very rapid succession. In particular, s3-bucket v0.47.1 will be re-released (new tag, same code) as v1.0.0 and v0.49.0 will be released as v2.0.0 minutes later. (See the release notes for details of the changes.)
In a similar way, s3-log-storage will have 3 rapid fire releases. v0.26.0 will be re-released as 1.0.0, v0.27.0 will be re-released as v2.0.0, and v0.28.0 will be re-released as v3.0.0.
Personally, I do not like the hassle imposed on consumers by having the rapid fire major version releases. However, whether we use the version numbers or not to indicate it, the case remains that we have a series of breaking changes and manual migrations to release, partly forced on us by the AWS upgrade and partly a refactoring for our own sanity in maintaining all these modules. We have too many modules each with their own implementation of an S3 bucket resource (and configuration interface) and/or EC2 security group. We need to standardize on the inputs and the implementation so that we can have a clearer, smoother response to future underlying changes like these. In particular, we want to be able to take advantage of Terraform's support for coding migration paths in the module itself, but these only work within a module; they do not work when one module migrates another. By having all our resources encapsulated appropriately, we hope to make future migrations much easier. Please bear with us through these next few months of breaking changes and manual migrations.
If you have suggestions for improving on the above plan, please share them. Likewise, if you have suggestions, either general or specific, for improving our migration instructions, please share them.
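For module consumers, the practical effect is simply pinning to a major series once SemVer tags land. A sketch (module name and constraint are illustrative):
module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = ">= 1.0.0, < 2.0.0" # stays on the AWS-provider-v3-compatible line described above
  # ...
}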
Erik Osterman (Cloud Posse)almost 4 years ago(edited)
Sorry for the spam in the channel. We are dealing with it. Please report any spam to me.
Matt Gowiealmost 4 years ago
@Erik Osterman (Cloud Posse) another Terraform framework hits the scene. This time from Nike.
https://github.com/nike-inc/pterradactyl
Grubholdalmost 4 years ago
Hi folks, I'm using the following resources from CloudPosse modules to create a DynamoDB table with items from a JSON file. I'm trying to conditionally create the table and items depending on a bool variable. Normally I would use count to add the condition, but I'm using for_each to loop over the JSON file. Please see the thread for the full code I'm using. Any help is highly appreciated.
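A sketch of the usual pattern for this (all names here are hypothetical, since the full code is in the thread): make the for_each expression itself conditional on the bool, so it is an empty map when disabled.
locals {
  table_items = var.dynamodb_enabled ? jsondecode(file("${path.module}/items.json")) : {}
}

resource "aws_dynamodb_table_item" "default" {
  for_each   = local.table_items                 # empty map when disabled, so nothing is created
  table_name = module.dynamodb_table.table_name  # hypothetical module output
  hash_key   = "id"
  item       = jsonencode(each.value)            # assumes items.json is a JSON object keyed by item id, already in DynamoDB JSON form
}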
rssalmost 4 years ago(edited)
v1.1.8
1.1.8 (April 07, 2022)
BUG FIXES:
cli: Fix missing identifying attributes (e.g. "id", "name") when displaying plan diffs with nested objects. (#30685)
functions: Fix error when sum() function is called with a collection of string-encoded numbers, such as sum(["1", "2", "3"]). (<a...
Alibekalmost 4 years ago
hi, could anyone tell me when this bug https://github.com/cloudposse/terraform-aws-elasticache-redis/issues/155 will be fixed? I can't deploy elasticache-redis via your terraform module.
Alibekalmost 4 years ago(edited)
The page of this terraform module https://registry.terraform.io/modules/cloudposse/elasticache-redis/aws/latest says there are 21 required variables, but only 1 variable is marked as required. Could anyone list the other required variables? I have problems deploying because I don't know all the required variables.
Nimesh Aminalmost 4 years ago
Hey everyone! I'm running into an issue where I'm trying to add an S3 bucket to my existing infra, but hitting what looks like a "resource depended on before it's created" issue. However, I've verified in the AWS console that the mailer@comp policy exists. The role here is owned by my mailer, which should allow it access to the new S3 bucket I'm creating.
Is this error due to the IAM policy, or really due to the S3 bucket needing to be created first?
╷
│ Error: Invalid for_each argument
│
│   on .terraform/modules/iam-eks-roles.eks_iam_role/main.tf line 82, in resource "aws_iam_policy" "service_account":
│   82:   for_each = length(var.aws_iam_policy_document) > 0 ? toset(compact([module.service_account_label.id])) : []
│     ├────────────────
│     │ module.service_account_label.id is "mailer@comp"
│     │ var.aws_iam_policy_document is a string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
╵
Chandler Forrestalmost 4 years ago
Need some guidance on manipulating a data object. I have this list of maps:
pet_store = [
  {
    animal_type = "cat"
    animal_name = "fluffy"
  },
  {
    animal_type = "cat"
    animal_name = "blah"
  },
  {
    animal_type = "dog"
    animal_name = "bingo"
  }
]
And I want to turn it into this:
pet_store2 = [
  {
    animal_type = "cat"
    animal_name = ["fluffy", "blah"]
  },
  {
    animal_type = "dog"
    animal_name = ["bingo"]
  }
]
I've played around with the for expressions, merge function, keys function, etc., but I can't quite get my output.
RBalmost 4 years ago
Try this
locals {
  # distinct animal types found in pet_store
  unique_animals = distinct([
    for store in local.pet_store :
    store.animal_type
  ])

  # group the animal names under each type
  pet_store2 = [
    for animal_type in local.unique_animals :
    {
      animal_type = animal_type
      animal_name = [
        for store in local.pet_store :
        store.animal_name
        if store.animal_type == animal_type
      ]
    }
  ]
}
Francoalmost 4 years ago
Hello everyone, I'd like to enable logging on elasticache-redis using https://registry.terraform.io/modules/cloudposse/elasticache-redis/aws/latest, but I can't find documentation for it.
Francoalmost 4 years ago
is it possible?
Ellevalalmost 4 years ago
Hey guys, I was looking at the TF modules for ECS/Fargate but didn't see a module for creating an ECS cluster. Have I missed it?
Joshuaalmost 4 years ago
Hello, I have a question concerning atmos. I am currently using terraform cloud for the remote state, planning, and applying. If I use atmos, can I still keep my workflow as in will tf cloud plan and apply, or is it strictly geo/atmos that has to apply stacks to my environments? I love the idea of the yaml and stacks, as it seems to make life easier, but our devs also like seeing what is planned/applied in tf cloud or spacelift. So I hope this makes sense. TY!
TL;DR does Atmos replace Terraform Cloud for planning/applying since it uses yaml
Bhavik Patelalmost 4 years ago
I created a TF module to provision an RDS database that has resources like security groups within the module. I want to change the naming that I set up without having to run terraform state mv module.old_resource module.new_resource for all the resources associated with the module. Anyone have ideas?
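One option worth checking (a sketch, not from the thread): Terraform 1.1+ "moved" blocks can record the rename in configuration so Terraform plans the move itself, instead of you running state mv per resource. The addresses below are placeholders:
moved {
  from = module.old_resource
  to   = module.new_resource
}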
rssalmost 4 years ago
v1.2.0-alpha20220413
1.2.0 (Unreleased)
UPGRADE NOTES:
The official Linux packages for the v1.2 series now require Linux kernel version 2.6.32 or later.
When making outgoing HTTPS or other TLS connections as a client, Terraform now requires the server to support TLS v1.2. TLS v1.0 and v1.1 are no longer supported. Any safely up-to-date server should support TLS 1.2, and mainstream web browsers have required it since 2020.
When making outgoing HTTPS or other TLS connections as a client, Terraform will no longer...
Ellevalalmost 4 years ago
Hello! I was looking at this module - https://github.com/cloudposse/terraform-aws-ecs-alb-service-task. I'm not clear on the relationship between the ECS service and the ALB. I'm interpreting the docs as saying the ALB needs to be created first and then details passed via https://github.com/cloudposse/terraform-aws-ecs-alb-service-task#input_ecs_load_balancers? It's got me scratching my head, as I thought the target group would need to be created before the ALB. Any pointers appreciated.
Nguyên Nguyễn Báalmost 4 years ago
Hi guys, I'm following this module https://github.com/cloudposse/terraform-aws-tfstate-backend for storing backend state on S3. The apply step works perfectly, but when I run terraform init -force-copy, no state file is uploaded to my S3 bucket.
This is my state backend module and the "terraform init -force-copy" logs. Did I miss a step?
Thank you.
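For reference, the step that usually matters here (a sketch; the bucket, key, and table names are placeholders that must match what the module created): add a backend block to the configuration and then run terraform init -force-copy so the local state gets migrated into it.
terraform {
  backend "s3" {
    region         = "us-east-1"
    bucket         = "my-tfstate-bucket"
    key            = "terraform.tfstate"
    dynamodb_table = "my-tfstate-lock-table"
    encrypt        = true
  }
}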
Jeremy (UnderGrid Network Services)almost 4 years ago
We're looking to move our DR region, so we have to set up a new region from scratch. The previous DR and primary regions were not done via Terraform but by hand, so I'm trying to take this opportunity to change that. I'm looking to use the terraform-aws-vpc module, but looking at the various subnet modules, I'm trying to determine which would be best and how to follow our network design. We only have 1 security VPC that has the IGW, and we connect the rest of the VPCs to a TGW that has the default route going to the security VPC. Obviously I have to make sure the NGW settings are disabled, and I can disable the IGW creation on all but the security VPC. Looking for pros/cons of the different subnet modules to help narrow down which to utilize.
Matt Gowiealmost 4 years ago
Interesting new tool for Terraform AWS IAM Permission change diffs β https://semdiff.io/
David Spedziaalmost 4 years ago(edited)
Hey there, I am using the CloudPosse terraform modules cloudposse/vpc/aws and cloudposse/multi-az-subnets/aws. I have two CIDR ranges in the VPC: 10.20.0.0/22 and 10.21.0.0/18. The /22 is for public subnets in the VPC and the /18 is for private subnets. When I run the terraform, the private subnets fail to create. I am able to create them manually in AWS, however. What is the limitation here?
lorenalmost 4 years ago
was reviewing the release notes for the upcoming 4.10.0 release of the aws provider, and noticed a mention about custom policies for config rules, which led me to this feature. pretty neat... an alternative to writing lambda functions for custom config rules...
• https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_cfn-guard.html
sephralmost 4 years ago
Hello, we're using the CloudPosse terraform module cloudposse/eks-cluster/aws and we're currently in the process of upgrading our EKS cluster to 1.22. However, when we run terraform to perform the upgrade we receive the following error during terraform plan:
│ Error: configmaps "aws-auth" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
│
│   with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│   on .terraform/modules/eks_cluster/auth.tf line 115, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│  115: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
I have a general understanding of what's happening here. Essentially terraform can't access the k8s config map inside the cluster to verify it's there, but I thought the whole point of setting aws_auth_ignore_changes[0] = true was to avoid this situation. Perhaps I'm misunderstanding something.
Is there a recommended process to navigate around this issue and upgrade my EKS cluster without having to re-create the aws-auth cm?
Thanks!
Gregalmost 4 years ago
Hi all, I've imported a previously-manually-created Elastic Beanstalk environment into terraform that I'd like to manage using cloudposse/elastic-beanstalk-environment/aws; the environment name uses caps but terraform wants to recreate it with a lowercase name. Is there any way to avoid that and maintain the current case?
Erik Osterman (Cloud Posse)almost 4 years ago
Ikanaalmost 4 years ago
I'm trying to use the lambda module, but I'm getting some errors when trying to use var.custom_iam_policy_arns
Ikanaalmost 4 years ago(edited)
│ Error: Invalid for_each argument
│
│   on .terraform/modules/main.lambda/iam-role.tf line 77, in resource "aws_iam_role_policy_attachment" "custom":
│   77:   for_each = local.enabled && length(var.custom_iam_policy_arns) > 0 ? var.custom_iam_policy_arns : toset([])
│     ├────────────────
│     │ local.enabled is true
│     │ var.custom_iam_policy_arns is set of string with 1 element
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work
│ around this, use the -target argument to first apply only the resources that the for_each depends on.
Ikanaalmost 4 years ago
This is where I add the ARN:
custom_iam_policy_arns = [aws_iam_policy.exec.arn]
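(A common workaround sketch, not from the thread: give for_each a value Terraform can know at plan time, e.g. build the ARN from parts that are already known. The policy name below is a placeholder and must match the name given to aws_iam_policy.exec.)
data "aws_caller_identity" "current" {}

# in the lambda module call:
# custom_iam_policy_arns = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/my-exec-policy"]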
Manoj Jadhavalmost 4 years ago
I am to create a new VPC. I see that for subnets there are 3 subnet repos:
1. Dynamic
2. Named subnets
3. Multi-AZ
I do not want to use the dynamic subnets, as we have planned the CIDR range in a different way; it's not an equal CIDR distribution for all subnets. If I used named subnets, would I be able to spin up subnets in different AZs, or should I use the Multi-AZ module instead? What are the differences?
ikaralmost 4 years ago(edited)
Hey all!
...as our infra slowly deprecates and each infra part was deployed with a different terraform version (oldest is 0.12.x) - is there any recommended way to manage multiple tf installations?
I quickly went through tfenv (https://github.com/tfutils/tfenv) but this seems a bit more complicated than needed.
Here's my idea of how such a tool would work:
• when running terraform apply, the tool automatically selects which terraform version should be used (from tfstate) - ideally it would use the highest compatible version
• when running terraform init, the newest available terraform is (installed and) used
• terraform auto-complete works and ideally all commands are run with terraform
Any recommendations, pls?
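For what it's worth, a minimal tfenv sketch (versions and directory names are examples): it can pin a version per working directory via a .terraform-version file, which gets most of the way to the per-stack behaviour described above.
tfenv install 0.12.31
echo "0.12.31" > old-stack/.terraform-version   # tfenv picks this file up automatically in that directory
tfenv install 1.1.7
echo "1.1.7" > new-stack/.terraform-version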
Bhavik Patelalmost 4 years ago
Anyone know how to get the static_route ip address from a site to site VPN ?
Tyler Jarjouraalmost 4 years ago(edited)
Hey all, I have a question about module design and security groups. Let's say I have an ASG of workers and a database. Each is managed by a terraform module, with its own security group. The worker ASG needs to have an ingress rule to access the database. Does it make more sense to:
1. Pass the worker sg_id into the database module, and create the security group ingress rule there
2. Pass the database sg id into the worker module, and create the ingress rule there
I guess the pro of the second option is that everything the worker needs is encapsulated in one module. But I noticed that most third-party modules opt for the first approach (you pass in who needs to access your resources). Thoughts?
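A minimal sketch of option 1, with hypothetical names (the port and the worker_security_group_id variable are assumptions): the database module accepts the caller's SG ID and creates the ingress rule against its own SG.
variable "worker_security_group_id" {
  type        = string
  description = "Security group of the workers that need database access"
}

resource "aws_security_group_rule" "db_ingress_from_workers" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.database.id # the database module's own SG
  source_security_group_id = var.worker_security_group_id
}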
Jeremy (UnderGrid Network Services)almost 4 years ago
I've been looking at all the subnet modules available. It seems to me that they all only support "private" and "public" subnets... am I missing a more multi-tier setup? The network I'm working on really doesn't use "public" subnets except in 1 VPC, and everything else is either "private" or "data".
Soren Jensenalmost 4 years ago
Can this module log to itself? https://github.com/cloudposse/terraform-aws-s3-log-storage
I tried to set:
access_log_bucket_name   = random_pet.s3_logging_bucket_name.id
access_log_bucket_prefix = "${random_pet.s3_logging_bucket_name.id}/"
But I'm getting an error that the bucket doesn't exist.
Ellevalalmost 4 years ago
Hello All, probably a daft question but I wondered why this module is on the init branch https://github.com/cloudposse/terraform-aws-sqs-queue. The master branch is an example module.
Grummfyalmost 4 years ago(edited)
hello, are there any tools that scan multiple terraform states and make a list of resources that exist in the infrastructure (AWS in my case) but were not created by terraform?
lorenalmost 4 years ago
very cool enhancement coming in a future version of terraform... the ability to force a replacing update on a resource when there are changes in resources it depends on... https://github.com/hashicorp/terraform/pull/30900
jeannichalmost 4 years ago(edited)
Hi,
When using utils_deep_merge_yaml, do you know if there is a way to show the differences when YAML file content changes? I'd like to see the differences during terraform plan, not after the apply.
Currently what I do is output the merged YAML result into a file:
data "utils_deep_merge_yaml" "merged_config" {
  input = local.configs_files_content
}

resource "local_file" "merged_yaml_files" {
  content  = data.utils_deep_merge_yaml.merged_config.output
  filename = "merged_config.yaml"
}
It works great when the local file is kept between executions: terraform does show the YAML differences when there are any. But my run environment regularly deletes the file "merged_config.yaml", so what I often end up with is this output:
# local_file.merged_yaml_files has been deleted
  - resource "local_file" "merged_yaml_files" {
      - content  = <<-EOT
            foo:
              bar:
                key1: 1
                key2: 2
[.........]
# local_file.merged_yaml_files will be created
  + resource "local_file" "merged_yaml_files" {
      + content  = <<-EOT
            foo:
              bar:
                key1: 1
                key2: 2
Is there any way to keep the merged YAML content in a terraform resource that does not write its content to a local file?
I looked for a terraform plugin that could do that but could not find any.
Thanks for your suggestions!
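One possible alternative (a sketch, not something from the thread): keep the merged content in a null_resource's triggers, which is stored in state, so terraform plan shows the change without writing a local file.
resource "null_resource" "merged_yaml_diff" {
  # any change to the merged output forces replacement, and the plan shows the new triggers value
  triggers = {
    merged_config = data.utils_deep_merge_yaml.merged_config.output
  }
}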
bhavin vyasalmost 4 years ago
Hi team, I am new to terraform and trying to create a simple security group module using the main.tf below.
resource "aws_security_group" "ec2_VDS_securityGroup" {
for_each = var.security_groups
name = each.value.name
description = each.value.description
vpc_id = var.aws_vpc_TRVPC_id
dynamic "ingress" {
for_each = each.value.ingress
content {
from_port = ingress.value.from
to_port = ingress.value.to
protocol = ingress.value.protocol
prefix_list_ids = var.aws_zpa_prefix_id
}
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group" "ec2_VDS_securityGroup" {
for_each = var.security_groups
name = each.value.name
description = each.value.description
vpc_id = var.aws_vpc_TRVPC_id
dynamic "ingress" {
for_each = each.value.ingress
content {
from_port = ingress.value.from
to_port = ingress.value.to
protocol = ingress.value.protocol
prefix_list_ids = var.aws_zpa_prefix_id
}
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
bhavin vyasalmost 4 years ago
The challenge I am facing is how to output this security group ID and use it in the parent module as a reference for EC2.
Grummfyalmost 4 years ago
hello,
look at https://www.terraform.io/language/values/outputs
output "some_id" { value = aws_security_group.ec2_VDS_securityGroup.id }
and in the module that refers to it you will use module.YOUR-MODULE-NAME.some_id
look at an existing module like https://github.com/cloudposse/terraform-aws-eks-cluster
you will see the outputs.tf (the name is not mandatory, it is just easier)
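One wrinkle worth noting (a sketch; the module call name is hypothetical): because the resource above uses for_each, there isn't a single .id, so the output needs to map over the instances, keyed the same way as var.security_groups.
output "security_group_ids" {
  value = { for key, sg in aws_security_group.ec2_VDS_securityGroup : key => sg.id }
}

# in the parent module:
# vpc_security_group_ids = [module.vds_security_groups.security_group_ids["web"]]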
bhavin vyasalmost 4 years ago
Hi Grummfy, thanks for your help. It seems I am getting the message below while updating the terraform security group module details in the EC2 instance.
bhavin vyasalmost 4 years ago
Erik Osterman (Cloud Posse)almost 4 years ago
Elias Andres Machado Mollejaalmost 4 years ago
Good afternoon!!! I need to ask you something about an issue I get with the node-group cloudposse module terraform-aws-eks-node-group version "0.27.3". I'm trying to create one with the option cluster_autoscaler_enabled set to false, but the ASG still has the tags with CAS enabled set to true (attachment 1).
Here you have the POC code we created (attachment 2), based on the example attached in the README.
Could this issue be a tag problem? I'm new to using cloudposse modules. Could you help me with it?
lorenalmost 4 years ago
New issue aimed at discussing options to resolve these annoying errors:
│ Error: Invalid for_each argument
and
│ Error: Invalid count argument
Very much worth reading and understanding!
https://github.com/hashicorp/terraform/issues/30937
Tyler Jarjouraalmost 4 years ago(edited)
Hey everybody, why is "name_prefix" used instead of just "name" in this module for certain resources (parameter group, option group)? https://github.com/cloudposse/terraform-aws-rds
rssalmost 4 years ago
v1.2.0-beta1
1.2.0 (Unreleased)
UPGRADE NOTES:
The official Linux packages for the v1.2 series now require Linux kernel version 2.6.32 or later.
When making outgoing HTTPS or other TLS connections as a client, Terraform now requires the server to support TLS v1.2. TLS v1.0 and v1.1 are no longer supported. Any safely up-to-date server should support TLS 1.2, and mainstream web browsers have required it since 2020.
When making outgoing HTTPS or other TLS connections as a client, Terraform will no...
mrwackyalmost 4 years ago
Did Hashicorp make it easier to update to v4.x AWS provider? I know they totally changed how S3 was managed, and then we the people collectively flipped our lids. Did they make it less painful?
Roman Orlovskiyalmost 4 years ago
Hi all. First of all wanted to thank Cloudposse team for their amazing work! Really appreciate what you are doing for the community.
As for my question, do I understand correctly that the iam-primary-roles and iam-delegated-roles components from https://github.com/cloudposse/terraform-aws-components/tree/master/modules are not required when using AWS SSO (https://github.com/cloudposse/terraform-aws-sso)? Or is there still a need for them in some cases? And what would those cases be? Thanks in advance
Ihor Urazovalmost 4 years ago
How does Cloud Posse configure Renovate Bot so it automatically updates terraform module versions in generated docs? Like here https://github.com/cloudposse/terraform-aws-eks-cluster/commit/22ab0dd1271d272b134b62682d275a73e07dc0fd
Ellevalalmost 4 years ago
Hey guys, I'm looping through a module to create multiple resources and getting this error:
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created.Joe Perezalmost 4 years ago
Hello All! I just wanted to share my terraform blog with everyone. Here is my latest post on using 1password CLI with Terraform: https://www.taccoform.com/posts/tfg_p5/
Davidalmost 4 years ago
Hi Folks. Our deployment servers don't have access to the internet directly; they are only allowed out via a proxy which doesn't have GitHub whitelisted. We mirror repositories in our source control system. We tried to use the parameter store module, which wants to clone the null-label module directly from GitHub, but I cannot see a workaround currently. Am I missing something?
21:42:00 ╷
21:42:00 │ Error: Failed to download module
21:42:00 │
21:42:00 │ Could not download module "this" (context.tf:23) source code from
21:42:00 │ "git::https://github.com/cloudposse/terraform-null-label?ref=0.25.0": error
21:42:00 │ downloading
21:42:00 │ 'https://github.com/cloudposse/terraform-null-label?ref=0.25.0':
21:42:00 │ /usr/bin/git exited with 128: Cloning into '.terraform/modules/this'...
21:42:00 │ fatal: unable to access
21:42:00 │ 'https://github.com/cloudposse/terraform-null-label/': Received HTTP code
21:42:00 │ 403 from proxy after CONNECT
21:42:00 │ .
21:42:00 ╵
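(One mirror-based workaround sketch, not from the thread: since the repos are already mirrored internally, a Git URL rewrite makes the github.com module sources resolve to the mirror when Terraform shells out to git. The mirror URL below is a placeholder.)
git config --global url."https://git.internal.example.com/mirrors/cloudposse/".insteadOf "https://github.com/cloudposse/"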
Ellevalalmost 4 years ago
Hey folks, I'm getting errors when running the terraform-aws-iam-role module.
Ellevalalmost 4 years ago
Hi, is there an ECS task module which supports the creation of an ECS/Fargate scheduled task? I had a look but couldn't see one.
Alex Jurkiewiczalmost 4 years ago
Did Cloudposse invent the null-label module idea, or did it exist beforehand? Just curious about where the idea came from
awlalmost 4 years ago
Anyone know how I can escape double quotes in an output statement but make them literal when the output is shown? For example:
output "quote_test" {
  value = "this is \"awesome\""
}
gives me this output:
quote_test = "this is \"awesome\""
I want it to show me:
quote_test = "this is "awesome""
quote_test = "this is "awesome""Pablo Silveiraalmost 4 years ago
Hello mates, how are you? Is there a possibility to call this module https://github.com/cloudposse/terraform-aws-vpn-connection without it creating aws_customer_gateway.default? Can the customer gateway be passed in as a parameter?