165 messages
lorenover 4 years ago(edited)
I forget who else was looking for this, but the new aws provider release has support for aws amplify... https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.43.0
Harryover 4 years ago
Does anyone know of a good terraform module for creating an S3 bucket set up to host a static site?
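A minimal sketch of what such a bucket can look like with the AWS provider v3.x (bucket name and documents are placeholders); cloudposse also maintains a terraform-aws-s3-website module that wraps this up:
resource "aws_s3_bucket" "site" {
  bucket = "example-static-site" # placeholder name
  acl    = "public-read"

  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}

output "website_endpoint" {
  value = aws_s3_bucket.site.website_endpoint
}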
Rob Schoeningover 4 years ago
For those of you using terraform static analysis and plan verification tools (Sentinel, Snyk, tfsec, checkov, etc.), it would be great to hear your thoughts on what features are missing and what approaches you see working or not working. Do you see this as something that should be coupled with the PR process, CI/CD, a TACOS platform, all of the above, or something else entirely? In full transparency, I'm the founder of https://soluble.ai which integrates a variety of static analysis tools into a (free) GitHub App. But the question is just honest discovery, useful to all. Curious what you all think and what your experiences have been.
Luisover 4 years ago(edited)
Does anyone have an example on how to use the "kubelet_additional_options" variable for the terraform-aws-eks-node-group module? I am testing it like this without any luck so far. Thanks
kubelet_additional_options = "--allowed-unsafe-sysctls=net.core.somaxconn,net.ipv4.ip_local_port_range=1024 65000"
rss over 4 years ago
v0.15.5
0.15.5 (June 02, 2021)
BUG FIXES:
terraform plan and terraform apply: Don't show "Objects have changed" notification when the detected changes are only internal details related to legacy SDK quirks. (#28796)
core: Prevent crash during planning when encountering a deposed instance that has been removed from the configuration. (<a...
ememover 4 years ago(edited)
hi guys, does anyone have an idea how to resolve this, currently defined in the cloudflare terraform module? I first thought I should set the attribute paused: true, but it still does not seem to work. Please help.
➜ staging git:(BTA-6363-Create-a-terraform-code-base-for-cloudflare) ✗ terraform plan
Acquiring state lock. This may take a few moments...
Error: Unsupported attribute
on ../../cloudflare/modules/firewall.tf line 6, in locals:
6: md5(rule.expression),
This object does not have an attribute named "expression".
Chris Fowles over 4 years ago
i'm hitting a problem with the way some of our modules are designed now that we're starting to switch to AWS SSO for auth. we use
data "aws_caller_identity" "current" {} a bit to get the current account id rather than having to pass it in, unfortunately when using SSO it looks like this is the root account rather than the account you're applying against. does anyone have an easy way around this or do i need to go on an adventure?ememover 4 years ago
hi guys who has gotten around resolving this terraform import issue before
nil entry in ImportState results. This is always a bug with
the resource that is being imported. Please report this as
a bug to Terraform
Henry Course over 4 years ago
guess this might be the right place to put this, got a contribution PR that should now be ready for review: https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/pull/22
Pierre-Yvesover 4 years ago
Hello,
when using terraform cloud, how do you provide terraform init arguments? I didn't find a way to do it.
I am used to providing variables to connect to the remote state like this:
terraform init -reconfigure -backend-config="login=$TF_VAR_login" ...
managedkaos over 4 years ago
Have you seen something like this where you know there are changes (made manually in the console), terraform knows there are changes, and yet there is no plan to revert the changes? 🤔
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
No changes. Your infrastructure matches the configuration.
Vijay LL over 4 years ago
Hello Guys, Is anyone using Terraform API driven runs?
curl -s --header "Content-Type: application/octet-stream" --request PUT --data-binary @${config_dir}.tar.gz "$upload_url"
I am trying to understand and use this. I'd like to do this through Go or Python
Vijay LLover 4 years ago
Yes or Terraform Enterprise
marcoscbover 4 years ago(edited)
Hello, I'm trying to update the AMI on an EKS cluster created with terraform-aws-eks-cluster-0.38.0 module and terraform-aws-eks-node-group-0.19.0 setting
create_before_destroy = true
in the eks_node_group module but pods are not relocated to the new nodes and the node group keeps modifying and times out.
Anybody using this kind of rolling update with these modules? Any hint about how to orchestrate these rollouts?
Thanks.
Raja Miahover 4 years ago
hi, anyone have any good resources or links for terraforming an aws api-gateway?
Ryan Smithover 4 years ago(edited)
Can I get some upvotes on this? lol for some reason it's been sitting there for a long time, but adding S3 Replication Time Control would be very valuable from Terraform
https://github.com/hashicorp/terraform-provider-aws/pull/11337
original issue I think
https://github.com/hashicorp/terraform-provider-aws/issues/10974
MattBover 4 years ago
Hey all. Is this still under review? I'm manually editing the module with this PR and it's working well so far. Any idea on a new release? https://github.com/cloudposse/terraform-aws-sso/pull/13
Alex Jurkiewiczover 4 years ago
on .terraform/modules/apigw_certificate/main.tf line 37, in resource "aws_route53_record" "default":
37: name = each.value.name
A reference to "each.value" has been used in a context in which it is
unavailable, such as when the configuration no longer contains the value in
its "for_each" expression. Remove this reference to each.value in your
configuration to work around this error.
Started seeing this error with cloudposse/terraform-aws-acm-request-certificate. Anyone familiar with this Terraform error? I've never seen it before and can't quite understand it 🤔
Adnanover 4 years ago
Hi All,
terraform plan already does some validation, like catching duplicate variables,
but what is missing is duplicate detection for the contents of maps and lists.
Does anyone know of a way/tool to validate .tfvars files for duplicates, including duplicates inside maps and lists?
Brian A.over 4 years ago
https://github.com/terraform-linters/tflint might be able to do what you need @Adnan
Gene Fontanillaover 4 years ago
is it possible to pass outputs as inputs for variables?
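Yes: a module's outputs can be wired straight into another module's input variables in the calling configuration. A minimal sketch (module paths and names are illustrative):
module "network" {
  source = "./modules/network" # must declare an output "subnet_id"
}

module "app" {
  source    = "./modules/app" # must declare a variable "subnet_id"
  subnet_id = module.network.subnet_id
}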
Rhys Daviesover 4 years ago(edited)
hey guys, this is probably a FAQ so sorry if so: What's a good article or series for writing CI for Terraform? Specifically, I now have a small team of people all working on a project together; what's a good resource to follow on how to test, deploy, and not step on each other's toes?
*We use CircleCI and Terraform, no PaaS (yet)
Thomas Hoefkensover 4 years ago
Hi all, I am using the helm provider to deploy a chart... but when adding a template in the helm chart, the tf deployment does not detect the fact that I added a yaml file... how can this be resolved?
Brian Ojedaover 4 years ago
https://registry.terraform.io/ - Anyone else having issues reaching the site?
Parthaover 4 years ago
i can access
Parthaover 4 years ago
the site
Parthaover 4 years ago
@Brian Ojeda
Brian Ojedaover 4 years ago
me too now. 😄
Brian Ojedaover 4 years ago
rssover 4 years ago
v1.0.0
1.0.0 (June 08, 2021)
Terraform v1.0 is an unusual release in that its primary focus is on stability, and it represents the culmination of several years of work in previous major releases to make sure that the Terraform language and internal architecture will be a suitable foundation for forthcoming additions that will remain backward compatible.
Terraform v1.0.0 intentionally has no significant changes compared to Terraform v0.15.5. You can consider the v1.0 series as a direct continuation...
Jon Butterworthover 4 years ago(edited)
Hi all. QQ if I may.. I'm seeing the following error
Error: "name_prefix" cannot be less than 3 characters
This is coming from the eks-workers module. Looks as though it's then coming from the ec2-autoscale-group module and then from the label/null module.
Full Error:
│ on .terraform/modules/eks_workers.autoscale_group/main.tf line 4, in resource "aws_launch_template" "default":
│ 4: name_prefix = format("%s%s", module.this.id, module.this.delimiter)
I can't seem to see why it's not getting an id..
FYI, I've changed nothing. Just calling the eks-workers module...
module "eks_workers" {
  source                             = "./modules/eks-workers"
  cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
  cluster_endpoint                   = module.eks_cluster.eks_cluster_endpoint
  cluster_name                       = module.eks_cluster.eks_cluster_id
  cluster_security_group_id          = module.eks_cluster.security_group_id
  instance_type                      = "t3.medium"
  max_size                           = 8
  min_size                           = 4
  subnet_ids                         = module.vpc.public_subnets
  vpc_id                             = module.vpc.vpc_id
  associate_public_ip_address        = true
}
NB: Although the module is local, it was cloned this morning so is up to date.
Brij Sover 4 years ago(edited)
Hey all, Im using the terraform eks community module. Im trying to tag the managed nodes with the following:
additional_tags = {
  "k8s.io/cluster-autoscaler/enabled"             = "true"
  "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
  "Name"                                          = var.cluster_name
}
In addition to this I'm trying to merge the tags above with var.tags with minimal success - does anyone know how to do that? I tried the following with no luck:
additional_tags = {
  merge(var.tags,
  "k8s.io/cluster-autoscaler/enabled"             = "true"
  "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
  "Name"                                          = var.cluster_name
  )
}
Avenia over 4 years ago
tags = merge(
  {
    "Name" = format("%s", var.name)
  },
  local.tags,
)
}
Avenia over 4 years ago
i think your issue is the { } missing around your 3 bottom tags.
Brij Sover 4 years ago
let me try adding the { }
Avenia over 4 years ago
additional_tags = {
  merge(var.tags, {
  "k8s.io/cluster-autoscaler/enabled"             = "true"
  "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
  "Name"                                          = var.cluster_name
  })
}
Brij S over 4 years ago
that results in
50: additional_tags = {
51: merge(var.tags, {
52: "k8s.io/cluster-autoscaler/enabled" = "true"
53: "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
54: "Name" = var.cluster_name
55: })
56: }
Expected an attribute value, introduced by an equals sign ("=")
Avenia over 4 years ago
= ${var.cluster_name}" ?
Aveniaover 4 years ago
it shouldn't need that. but that's odd.
Brij Sover 4 years ago
still the same error 😞
Aveniaover 4 years ago
what version is this?
Aveniaover 4 years ago
Ohj
Aveniaover 4 years ago
you still have a syntax error
Brij Sover 4 years ago
14.10
Aveniaover 4 years ago
additional_tags = merge(var.tags, {
  "k8s.io/cluster-autoscaler/enabled"             = "true"
  "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
  "Name"                                          = var.cluster_name
})
Avenia over 4 years ago
try that.
Brij Sover 4 years ago
An argument or block definition is required here.
Brij S over 4 years ago
additional_tags is map(string) value so that should work 🤔
Avenia over 4 years ago
turn your tags into a local and see if it works then
Aveniaover 4 years ago
locals {
  # Instance Tagging
  tags = {
    "service"   = var.service_name
    "env"       = var.environment
    "stackname" = "${var.environment}-${var.application_name}"
  }
}
etc
Aveniaover 4 years ago
then do local.tags in the merge.
Brij Sover 4 years ago
hmm I'll try - the thing is, var.tags are picked up from various *.tfvars files
Brij S over 4 years ago
so locals might make it so i duplicate some tags
Aveniaover 4 years ago
are you outputting them?
Brij Sover 4 years ago
the tags? no
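For reference, a pattern that usually works here, assuming additional_tags is a module argument set in a .tf file (function calls such as merge() are not allowed in .tfvars files, which is one common cause of the errors above):
locals {
  cluster_autoscaler_tags = {
    "k8s.io/cluster-autoscaler/enabled"             = "true"
    "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
    "Name"                                          = var.cluster_name
  }
}

# in the module call (a .tf file, not a .tfvars file):
#   additional_tags = merge(var.tags, local.cluster_autoscaler_tags)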
Thomas Hoefkensover 4 years ago
Hi all, I am using the helm provider to deploy a chart... but when adding a template in the helm chart, the tf deployment does not detect the fact that I added a yaml file... how can this be resolved?
Jon Butterworthover 4 years ago
I posted a question for module support yesterday and it's lost in the scroll back. Is this the best place for module support? Or should I raise a github issue? TIA.
Jon Butterworthover 4 years ago
Reshared here so it doesn't get lost in scrollback 🙂
MrAtheistover 4 years ago
Does anyone know if there's a way to ignore changes to an entire module? I've got this tgw module originally deployed, but it has been messed with manually a couple of times, and I don't know if I could salvage it by monkey patching the tf code, hence this question... 🙏
Mr.Devopsover 4 years ago
Running into an issue creating an aks cluster in azure when using managed identity and private dns zones. Hoping to find anyone who has worked with AKS and could possibly provide some guidance, please.
Dias Raphaelover 4 years ago
Hi Team, I would like to create a hosted zone in aws through terraform...Can you suggest me a terraform module which does the same or any guidance would be helpful.
Zachover 4 years ago
While HCP Packer is not “Packer in the cloud,”
Too late, it's 100% going to be branded "Packer in the cloud"
Patrick Joyceover 4 years ago
I created a module for Route 53 Resolver DNS Firewall using the cloudposse scaffolding if anyone wants to kick the tires on it https://github.com/pjaudiomv/terraform-aws-route53-resolver-dns-firewall
Alex Jurkiewiczover 4 years ago
nice, in Terraform 1.0,
terraform destroy -help states only that it's an alias for terraform apply -destroy. But terraform apply -help doesn't mention -destroy 😅
Alex Jurkiewicz over 4 years ago (edited)
I think it's because -destroy comes from plan
Jon Butterworth over 4 years ago (edited)
In regards to context.tf and this module.. can someone tell me where module.this.id is coming from? In specific reference to the aws-ec2-autoscale-group and aws-eks-workers modules. But this seems to be a standard configuration across a lot of modules.
Johnmary over 4 years ago
I am new to terraform. I am trying to use cloudposse (git url: https://github.com/cloudposse/terraform-aws-tfstate-backend) to save the state of the terraform on an s3 bucket on AWS but I keep getting this error on Jenkins:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error: Failed to get existing workspaces: S3 bucket does not exist.
The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.
Error: NoSuchBucket: The specified bucket does not exist
status code: 404, request id: RTY8A45R6KR8G72F, host id: yEzmd9hrvPSY3MY3trWfvdtyw4VcJZ+L+hf79QpkOkbSD7GU4Xz9EViWHbDRXiHjTp8k5LgPIzM=
Any help and guidance will be appreciated, thanks.
Raja Miahover 4 years ago
hi everyone, looking for any ideas or resources that i can use to set up api gateway with a cognito user pool using terraform. any help would be much appreciated; if you wanna contact me i can explain our current setup and the issues we are facing in more detail 🙂
Thomas W.over 4 years ago
Hi there.
I ran into a weird error with the aws provider and wonder if anyone has run into this too:
resource "aws_synthetics_canary" "api" {
  name                 = "test"
  artifact_s3_location = "s3://${aws_s3_bucket.synthetic.id}"
  execution_role_arn   = aws_iam_policy.synthetic.arn
  handler              = "apiCanaryBlueprint.handler"
  runtime_version      = var.synthetic_runtime_version
  zip_file             = data.archive_file.synthetic.output_path

  schedule {
    expression = "rate(60 minutes)"
  }
}
terraform apply and:
│ Error: error reading Synthetics Canary: InvalidParameter: 1 validation error(s) found.
│ - minimum field size of 1, GetCanaryInput.Name.
│
│
│ with aws_synthetics_canary.api,
│ on monitoring.tf line 94, in resource "aws_synthetics_canary" "api":
│ 94: resource "aws_synthetics_canary" "api" {
│
╵
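One thing that stands out in the snippet above, though it may not be the cause of the read error: execution_role_arn expects an IAM role ARN, while the config passes an IAM policy ARN (aws_iam_policy.synthetic.arn). A hedged sketch of the expected shape (the role name and assume-role policy are illustrative; Synthetics canaries run on Lambda, so the Lambda service principal is used here):
resource "aws_iam_role" "synthetic" {
  name = "synthetics-canary-role" # illustrative name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

# then in the canary:
#   execution_role_arn = aws_iam_role.synthetic.arn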
Harry over 4 years ago
I've got a VPC with some private subnets, and I'm passing those subnet IDs into a module to deploy instances to run an app. I'm also passing in an instance type, but not all instance types are available in all regions and one subnet doesn't have the instance type I need in it. I'm trying to use the aws_subnet data resource to retrieve the AZs each subnet is in, then use aws_ec2_instance_type_offerings to filter the list of subnets so I only deploy in ones where the instance type is available, but I'm not sure how to create a data resource for each subnet. Can I use for_each here?
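Yes, for_each works on data sources too. A hedged sketch, assuming the module receives var.subnet_ids (a list of subnet IDs) and var.instance_type:
data "aws_subnet" "selected" {
  for_each = toset(var.subnet_ids)
  id       = each.value
}

data "aws_ec2_instance_type_offerings" "per_az" {
  for_each      = data.aws_subnet.selected
  location_type = "availability-zone"

  filter {
    name   = "instance-type"
    values = [var.instance_type]
  }
  filter {
    name   = "location"
    values = [each.value.availability_zone]
  }
}

locals {
  # subnets whose AZ actually offers the requested instance type
  usable_subnet_ids = [
    for id, subnet in data.aws_subnet.selected : id
    if length(data.aws_ec2_instance_type_offerings.per_az[id].instance_types) > 0
  ]
}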
Michael Dizon over 4 years ago
Does anyone know a workaround for this issue? https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/issues/129
Michael Dizonover 4 years ago
trying to provide iam_roles but get this error: Error: InvalidParameterValue: The feature-name parameter must be provided with the current operation for the Aurora (PostgreSQL) engine.
Andy Miguel over 4 years ago
@here We're having a special edition of #office-hours next week and will be joined by @Taylor Dolezal who is a Senior Developer Advocate at HashiCorp. Please queue up any questions (or gripes) you have about Terraform on this thread and we'll have Taylor review them live on the call, thanks!
Taylor Dolezalover 4 years ago
@Taylor Dolezal has joined the channel
Jon Butterworthover 4 years ago
Hi all, quick question for sanity's sake... In the EKS-Workers module where it refers to autoscaling groups.. This is not the same as Cluster Autoscaler? Or is it?
Jon Butterworthover 4 years ago
I've deployed a cluster using EKS-Workers.. set max nodes to 8 and min nodes to 3.. but when I deploy 20 nginx pods the nodes don't scale.
Jon Butterworthover 4 years ago
Perhaps there's an input to enable autoscaling? Or do I need to look at writing something myself to enable cluster auto scaling?
Jon Butterworthover 4 years ago
I think I've answered this myself. I needed to deploy the autoscaler pod
Mohammed Yahyaover 4 years ago
I love the feeling when support asks you how you solved it.
Mohammed Yahyaover 4 years ago(edited)
https://github.com/tonedefdev/terracreds allows you to store tokens for TF Cloud or similar SaaS (env0, Scalr, Spacelift) in the macOS or Windows vault instead of plain text, same as aws-vault. I used this when switching the TF CLI workflow between TF Cloud and Scalr.
MrAtheistover 4 years ago
Question for terraform-aws-modules/vpc/aws: I'm switching from a single NAT GW to a multi NAT GW setup per AZ. The plan wants to destroy the original NAT GW that was created. This seems fishy to me as it would basically cut off outgoing traffic while the apply is doing its thing... Anyone know a way to skip the destroy? Or is there a better way to go about this?
Ashish Sharma over 4 years ago
Hi guys... Do we have any utility like tfenv for Windows, to use whichever tf version we like?
reiover 4 years ago
Hi folks,
I am starting to migrate my terraform state and stuff to Terraform Cloud. So far so good, however now I encountered the following error when migrating the module using the cloudposse eks module:
│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│ on .terraform/modules/eks_cluster/auth.tf line 83, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│ 83: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
│
Any ideas/hints?
I have tried to change the kubernetes provider, checked the IAM credentials. Still no clue.
uselessuseofcatover 4 years ago
I was able to send a notification to an SNS topic when a new log event appears in a Log Group via aws_cloudwatch_log_metric_filter and aws_cloudwatch_metric_alarm, but I was wondering, how can I send the message itself and not just metric values? Thanks!
Jags over 4 years ago
hi there,
I'm a new user of the atmos workflow. Just wondering how to import existing resources using atmos, or should I do it outside of atmos and then use atmos after?
marc slaytonover 4 years ago(edited)
Hi all -- I'm troubleshooting a specific problem in terraform-aws-components//account-map, a shared component which makes remote-state calls using modules defined in terraform-yaml-stack-config. I've been troubleshooting a few cases where the terraform-aws-provider seems to hang up for various reasons during the remote-state call. The reasons aren't always clear, but they result in terraform errors such as:
Error: rpc error: code = Unavailable desc = transport is closing
Would any of you have an idea what the provider might be giving up on here? Are there techniques that might pull more debugging info out of the utils provider?
Florian SILVA over 4 years ago
Hello guys,
I just pushed a new PR to the Beanstalk env module. Could somebody take a look at it when possible? This feature would close some old issues and PRs at the same time.
https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/182
joshmyersover 4 years ago(edited)
Anyone here using default_tags with the AWS provider? Seen any gotchas? A few open issues around perpetual diff/conflicting resource tag problems. Looks maybe not fully baked yet....
Brad over 4 years ago (edited)
Hi all, I'm fairly new to Terraform and I'm still getting to grips with the best practices....
I'm currently in the process of creating a simple environment which includes a newly created - VPC, IGW, Public Subnet and a EC2 Instance.
However, at the point of applying the config I receive the error message below, has anyone seen anything like this before? Any help/advice would be greatly appreciated 🙂
terraform apply --auto-approve
module.vpc.aws_subnet.main_subnet: Creating...
module.vpc.aws_vpc.vpc: Creating...
module.vpc.aws_vpc.vpc: Creation complete after 4s [id=vpc-09da0001c2b98a15f]
module.network.aws_internet_gateway.igw: Creating...
module.network.aws_internet_gateway.igw: Creation complete after 1s [id=igw-0e922b721b610639f]
╷
│ Error: error creating subnet: InvalidVpcID.NotFound: The vpc ID 'aws_vpc.vpc.id' does not exist
│ status code: 400, request id: 5b4df02a-6826-45a4-a3ca-1e7fcaff4920
│
│ with module.vpc.aws_subnet.main_subnet,
│ on modules/vpc/main.tf line 11, in resource "aws_subnet" "main_subnet":
│ 11: resource "aws_subnet" "main_subnet" {Eric Jacquesover 4 years ago
Eric Jacques over 4 years ago
Hello folks, I'm trying to play with terraform-aws-ecs-web-app, launching examples/complete with all defaults. terraform apply went well but the task eg-test-ecs-web-app keeps dying because of "Task failed ELB health checks", maybe because of fargate. Does someone have an idea?
Vlad Ionescu (he/him)over 4 years ago(edited)
FYI, if anybody here was using my gitpod-terraform image for Gitpod, I moved it to ECR Public as Dockerhub annoyed me: https://github.com/Vlaaaaaaad/gitpod-terraform/pull/11 AKA https://gallery.ecr.aws/vlaaaaaaad/gitpod-terraform
Kyle Johnson over 4 years ago
Question on using multiple workspaces and graphs
We currently use terragrunt and its dependency resource to pull outputs from other workspaces (example: one workspace is for VPC config and most other resources pull subnet IDs from it). It seems we could be doing this with the terraform_remote_state provider, but we would miss out on terragrunt's ability to understand the graph of dependencies (the run-all commands are smart about ordering based on the graph).
How do folks handle the graph without a tool like Terragrunt? Some form of pipeline which understands dependencies? Avoid having deeply nested graphs to begin with?
Anand Gautamover 4 years ago
I am using this module (https://registry.terraform.io/modules/cloudposse/config/aws/latest) to deploy AWS Config using the CIS 1.2 AWS benchmark with this submodule (https://registry.terraform.io/modules/cloudposse/config/aws/latest/submodules/cis-1-2-rules). I get an error on the terraform plan though:
│ Error: Invalid index
│
│ on .terraform/modules/aws_config/main.tf line 99, in module "iam_role":
│ 99: data.aws_iam_policy_document.config_sns_policy[0].json
│ ├────────────────
│ │ data.aws_iam_policy_document.config_sns_policy is empty tuple
│
│ The given key does not identify an element in this collection value.
The error goes away when I set the create_sns_topic value to true.
Any insights on how to get rid of this error? It seems like the module expects the sns policy to exist.
Erik Osterman (Cloud Posse)over 4 years ago
rssover 4 years ago(edited)
v1.1.0-alpha20210616
1.1.0 (Unreleased)
NEW FEATURES:
lang/funcs: add a new type() function, only available in terraform console (#28501)
ENHANCEMENTS:
configs: Terraform now checks the syntax of and normalizes module source addresses (the source argument in module blocks) during configuration decoding rather than only at module installation time. This...
Austin Lovelessover 4 years ago
When using https://github.com/cloudposse/terraform-aws-eks-iam-role?ref=tags/0.3.1 how can I add an annotation to the serviceaccount to use the IAM role I created? I had to do it manually after the serviceaccount and IAM role were created. Is there a way I can automate this?
Command I used:
kubectl annotate serviceaccount -n app service eks.amazonaws.com/role-arn=arn:aws:iam::xxxxx:role/rolename@app
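One way to automate the annotation, if the kubernetes provider is already configured against the cluster, is to manage the service account in Terraform as well. A hedged sketch (the module output name is a guess; check the eks-iam-role module's outputs for the real one):
resource "kubernetes_service_account" "app" {
  metadata {
    name      = "service"
    namespace = "app"
    annotations = {
      "eks.amazonaws.com/role-arn" = module.eks_iam_role.service_account_role_arn # output name is a guess
    }
  }
}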
managedkaos over 4 years ago (edited)
This is more of an annoyance with AWS than with Terraform but…
Is there a way to deregister all task definitions for a given task family on destroy?
TLDR: I'm finding that when I work with ECS, each service creates a "family" of task definitions. Once I'm done with the service I can terraform destroy and it goes away, but the task family and the task definitions hang around. I can clean them up from the console and/or CLI but is there a way to nudge TF to do it for me? 🤔 It would be one less thing to have to do to keep my "unused-resources-in-the-AWS-console" OCD in check. 😅
Nate Diaz over 4 years ago
anyone know why during a terraform destroy terraform would still try to resolve the data sources?
for example, i have a blah.tf file that has some data sources for resources that no longer exist. This makes my terraform destroy error out. Why does terraform care about those data sources? shouldn't it just try to destroy everything within the state?
Ian Bartholomew over 4 years ago
im starting out a new greenfield terraform project, and I am curious how people are structuring their projects? Most recently, in the last few years, I have been using a workspaces based approach (OSS workspaces, not tf cloud workspaces), but I found a lot of issues with this approach (the project and state grew to the point of needing to be moved to separate repos, which led to issues of how to handle changes that crossed repos, and also how to handle promotion of changes, etc), so I’m looking around to see other ways of structuring a TF project. Are people still using terragrunt, and structuring their project accordingly? Thanks!
Johannover 4 years ago
Can someone share with me good terraform examples for eks+fargate?
Johannover 4 years ago
just to learn
greg nover 4 years ago
How can I suppress Checkov failures coming from upstream modules pls?
Putting suppression comments in the module call doesn't seem to work.
module "api" {
  #checkov:skip=CKV_AWS_76:Do not enable logging for now
  #checkov:skip=CKV_AWS_73:Do not enable x ray tracing for now
  source = "git@github.com:XXXXXXXX/terraform-common-modules.git//api-gateway?ref=main"
  <snip>
}
Bernard Gütermannover 4 years ago
Hi, I don't know if this is the right place to ask, but I'm trying to run the Elastic Beanstalk example at https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/master/examples/complete and it is not working.
I get the following output. The only thing I changed was the value of the "source" and "version" fields in the ""elastic_beanstalk_environment" module to "cloudposse/elastic-beanstalk-environment/aws" and "0.41.0". What am I missing ?
JGover 4 years ago(edited)
Hello. I am trying to learn how to use the for_each meta argument and am really hoping someone can help me out. I am trying to create 4 subnets each with a different name & cidr, like so:
resource "aws_subnet" "public_a" {
  for_each = {
    "public_subnet_a" = "10.10.0.0/28"
    "public_subnet_b" = "10.10.0.16/28"
  }
  vpc_id            = aws_vpc.this.id
  cidr_block        = each.value
  availability_zone = "us-west-1a"
  tags              = merge(var.tags, { "Name" = each.key })
}
I need to use the resultant subnet ids in several other resources like acls and route tables but am having issues because everything then seems to require I add a for_each argument to each resource so I can then refer to the aws_subnet.public_a[each.key].id.
Questions:
1) Is there a way around doing this such as splitting these into individual elements and then not having to add a for_each to every resource that references the subnet id?
2) Even if I add the for_each to something like a route table I still get errors and am not sure what the for_each should reference since if I put something like for_each = aws.subnet.public_a.id I would have to add the [each.key]. Assuming I do have to use a for_each for every resource that references the subnets, what is the proper way to handle this?
3) Is my code for the subnet ok or should I have handled it differently - perhaps it is inherently faulty?
I have tried element, flatten, using a data source block, using [*], etc.. I appreciate any help but please explain in terms someone who is learning can understand as I really want to progress in my understanding. Thank you.
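A hedged sketch of how the references usually line up: dependent per-subnet resources can for_each directly over the subnet resource (reusing its keys), and resources that take a list can collect all the IDs with values(); the route table name here is illustrative.
# per-subnet resources reuse the same keys as the subnets
resource "aws_route_table_association" "public" {
  for_each       = aws_subnet.public_a
  subnet_id      = each.value.id
  route_table_id = aws_route_table.public.id # assumes a single shared route table
}

# resources that accept a list of IDs:
#   subnet_ids = values(aws_subnet.public_a)[*].id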
David Morganover 4 years ago
does the cloudposse “terraform-aws-ecs-cloudwatch-autoscaling” module support target tracking scaling strategy?
Mark juanover 4 years ago
Hey everyone!
Mark juanover 4 years ago
I got a problem, can someone please help me with this?
Mark juanover 4 years ago
i) What i am getting - i am creating prometheus by using the helm release of prometheus and using this repo "https://prometheus-community.github.io/helm-charts", so i am getting service names like this: 1) prometheus-kube-prometheus-operator 2) prometheus-kube-state-metrics 3) prometheus-grafana, etc.
ii) What i am expecting - 1) prometheus-operator 2) state metrics 3) grafana, etc.
iii) Where i am stuck - As i am using this repo i am giving the name only in the helm release of prometheus, name="prometheus", so i think the suffix is coming from the repo only. How can i make them what i want?
nicolasvdbover 4 years ago(edited)
Hi, I'm using the module terraform-aws-config from https://github.com/cloudposse/terraform-aws-config and it seems that with "create_sns_topic = false" you get this error:
╷
│ Error: Invalid index
│
│ on main.tf line 99, in module "iam_role":
│ 99: data.aws_iam_policy_document.config_sns_policy[0].json
│ ├────────────────
│ │ data.aws_iam_policy_document.config_sns_policy is empty tuple
│
│ The given key does not identify an element in this collection value.
Just letting you guys know.. no breaking issue, using terraform 0.15.5 btw
Alysonover 4 years ago(edited)
Hi,
I didn't understand how to set the value of the map_additional_iam_roles variable in the terraform-aws-eks-cluster module.
I tried it this way and was unsuccessful:
map_additional_iam_roles = {"rolearn":"arn:aws:iam::xxxxxxxx:role/JenkinsRoleForTerraform"}
https://github.com/cloudposse/terraform-aws-eks-cluster
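If I'm reading the module's variables.tf right, map_additional_iam_roles is a list of objects with rolearn, username and groups, so the value would look more like this (username and groups are illustrative):
map_additional_iam_roles = [
  {
    rolearn  = "arn:aws:iam::xxxxxxxx:role/JenkinsRoleForTerraform"
    username = "jenkins"
    groups   = ["system:masters"]
  }
]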
Mark juanover 4 years ago(edited)
Hi all, I've raised the same issue before.
i) What i am getting - i am creating prometheus by using the helm release of prometheus and using this repo "https://prometheus-community.github.io/helm-charts", so i am getting service names like this: 1) prometheus-kube-prometheus-operator 2) prometheus-kube-state-metrics 3) prometheus-grafana, etc.
ii) What i am expecting - 1) prometheus-operator 2) state metrics 3) grafana, etc.
iii) Where i am stuck - As i am using this repo i am giving the name only in the helm release of prometheus, name="prometheus", so i think the suffix is coming from the repo only. How can i make them what i want? This time i was able to rename the prometheus operator using the fullnameOverride set, but not the other services like node exporter, etc.
Jon Butterworthover 4 years ago
Anyone thought of a way to achieve this without using null_data_source, which is deprecated? https://github.com/cloudposse/terraform-aws-eks-cluster/blob/c25940a8172fac9f37bc2a74c99acf4c21ef12b0/examples/complete/main.tf#L89
I tried moving it to locals but kept seeing the aws-auth configmap error.
Jon Butterworth over 4 years ago
Am I right in saying that in theory, we should be able to move this into locals? Since all we're doing here is waiting until the cluster & config map is up before we deploy the node group?
Zachover 4 years ago
Where do you see that its deprecated? That isn't mentioned on the docs. However the documentation says that locals can achieve the same effect. https://registry.terraform.io/providers/hashicorp/null/latest/docs/data-sources/data_source
Steve Wade (swade1987)over 4 years ago(edited)
can anyone help me with the below please ...
resource "aws_acm_certificate" "cert" {
  count             = var.enable_ingress_alb ? 1 : 0
  domain_name       = "alb.platform.${var.team_prefix}.${var.platform}.${var.root_domain}"
  validation_method = "DNS"

  tags = {
    CreatedBy = "Terraform"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "cert_validation" {
  for_each = { for domain in aws_acm_certificate.cert.*.domain_validation_options : domain.domain_name => { name = domain.resource_record_name, record_value = domain.resource_record_value } }
  name     = each.value["name"]
  type     = "CNAME"
  records  = [each.value["record_value"]]
  zone_id  = data.terraform_remote_state.dns.outputs.zone
  ttl      = 60
}
when enable_ingress_alb = true I am getting the following error:
Error: Unsupported attribute
on .terraform/modules/kubernetes/modules/kubernetes-bottlerocket/ingress-alb-certs.tf line 17, in resource "aws_route53_record" "cert_validation":
17: for_each = { for domain in aws_acm_certificate.cert.*.domain_validation_options : domain.domain_name => { name = domain.resource_record_name, record_value = domain.resource_record_value } }
This value does not have any attributes.
I think it has something to do with the * in aws_acm_certificate.cert.*.domain_validation_options
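For what it's worth: the splat produces a list of sets, so each "domain" in the for expression is a whole set of validation options rather than a single option object. A hedged sketch that flattens the (possibly empty, when count = 0) splat before iterating:
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in flatten(aws_acm_certificate.cert[*].domain_validation_options) :
    dvo.domain_name => {
      name         = dvo.resource_record_name
      record_value = dvo.resource_record_value
    }
  }

  name    = each.value.name
  type    = "CNAME"
  records = [each.value.record_value]
  zone_id = data.terraform_remote_state.dns.outputs.zone
  ttl     = 60
}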
Babar Baig over 4 years ago
Hey guys. Quick question. Can we create a terraform module conditionally?
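Since Terraform 0.13, module blocks accept count (and for_each), so a module can be created conditionally. A minimal sketch with illustrative names:
module "ingress" {
  source = "./modules/ingress"        # illustrative module
  count  = var.enable_ingress ? 1 : 0 # illustrative flag
}

# references become indexed: module.ingress[0].some_output
# or one(module.ingress[*].some_output) when it may be absent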
Andrea Cavagnaover 4 years ago
hey guys, I just found this:
https://github.com/iann0036/cfn-tf-custom-types
custom types for CloudFormation: you can now also add Terraform resources. This is awesome to me.
Steve Wade (swade1987)over 4 years ago
does anyone have any useful resources for path based DENY rules on WAF v2?
Steve Wade (swade1987)over 4 years ago
i am reading the docs and getting myself totally confused
Pavelover 4 years ago
hi all
Pavelover 4 years ago
im having some weirdness with the key-pair/aws module
Pavelover 4 years ago
so i have this
module "key_pair" {
source = "cloudposse/key-pair/aws"
namespace = var.app_name
stage = var.environment
name = "key"
ssh_public_key_path = "./.secrets"
generate_ssh_key = "true"
private_key_extension = ".pem"
public_key_extension = ".pub"
}
I have the .pem files it generates on one machine, but i want to transfer this to another machine. I put the same key files in the same folder. But tf apply wants to create new private/public key pairs for some reason.
# module.key_pair.local_file.private_key_pem[0] will be created
+ resource "local_file" "private_key_pem" {
+ directory_permission = "0777"
+ file_permission = "0600"
+ filename = "./.secrets/xx-development-key.pem"
+ id = (known after apply)
+ sensitive_content = (sensitive value)
}
# module.key_pair.local_file.public_key_openssh[0] will be created
+ resource "local_file" "public_key_openssh" {
+ content = <<-EOT
ssh-rsa xxx
EOT
+ directory_permission = "0777"
+ file_permission = "0777"
+ filename = "./.secrets/xx-development-key.pub"
+ id = (known after apply)
}
Steve Wade (swade1987) over 4 years ago
is there a way to only perform a remote state lookup if a value is true?
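Data sources accept count too, so the lookup can be gated on a flag. A hedged sketch (backend details and the flag name are illustrative):
data "terraform_remote_state" "dns" {
  count   = var.lookup_dns ? 1 : 0
  backend = "s3"
  config = {
    bucket = "example-state-bucket"
    key    = "dns/terraform.tfstate"
    region = "us-east-1"
  }
}

locals {
  dns_zone_id = var.lookup_dns ? data.terraform_remote_state.dns[0].outputs.zone : null
}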
Nikola Milicover 4 years ago
I think I’m stuck on a past decision to include stage option inside my backend declaration for my application
x.main.tf
# Backend ----------------------------------------------------------------------
module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "~> 0.32"

  namespace = var.application
  stage     = terraform.workspace
  name      = "terraform"

  profile                            = var.aws_credentials_profile_name
  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}
Since my workspace was dev at the time, my remote backend bucket, as I now realize, has been unfortunately called x-dev-terraform instead of what I think should have been from the beginning, just x-terraform.
Now, when I added my new terraform workspace called prod and did a simple terraform plan, I saw that it would create an additional state bucket, dynamodb table etc.
All of that backend-y stuff shouldn't really be added, since my unfortunate x-dev-terraform state bucket already has subfolders for each of my workspaces, right?
So now, I'm stuck. There is a prod/ folder inside my state bucket, but the state is empty, so it wants to create everything including the backend (which I guess should not be added).
If i edit this module declaration from the top, and remove the stage line, it cannot just edit resources but must replace them, which I think would break in half as it tries to keep state but also replace state bucket.
How do i escape this?
In short, my idea is to recreate everything from dev back on prod, in the same state bucket, by using workspaces.
sheldonhover 4 years ago
Related to the prior question on backend declarations. I want a dynamic backend creation in s3/dynamo, like how terragrunt does the project initialization. However, I want to keep things as native terraform as possible.
I know Go. Should I just look at some code to initialize the backend and write it myself, or is there some Go tool I'm missing out there that creates the remote backend, the s3 bucket, and policies? Something like tfbackend init, so I can use native terraform for the remaining tasks without more frameworks? (I looked at Terraspace, promising, but again like Terragrunt another abstraction layer to troubleshoot.)
oskar maria grande over 4 years ago
Has anyone in here used this pattern before? What speaks against using tfvars instead of yaml here?
https://github.com/concourse/governance/blob/master/github.tf
https://github.com/concourse/governance/blob/master/locals.tf
https://github.com/concourse/governance/tree/master/contributors
Steve Wade (swade1987)over 4 years ago
can anyone recommend a WAF module with kinesis firehose setup?
Mazin Ahmedover 4 years ago
https://twitter.com/mazen160/status/1408041406195699715 - if anyone's interested!
Babar Baigover 4 years ago
Hello everyone. I have a question. I want to use this module to create my organisation, workspaces and variables required by those workspaces
https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/tfe/latest
Below are the points where I am confused.
1. Do I need to setup a separate repository (or even the same repository with different path) and place all the TF cloud related infra setup code there.
2. Create a workspace for that repository in Terraform Cloud
3. Create this specific workspace and variables related to it manually in Terraform Cloud.
Thats what I can think of. Is there any other way. I want to know how community is using this module.
Thanks.
rssover 4 years ago(edited)
v1.0.1
1.0.1 (June 24, 2021)
ENHANCEMENTS:
json-output: The JSON plan output now indicates which state values are sensitive. (#28889)
cli: The darwin builds can now make use of the host DNS resolver, which will fix many network related issues on MacOS.
BUG FIXES:
backend/remote: Fix faulty Terraform Cloud version check when migrating...
Mohammed Yahyaover 4 years ago
https://github.com/hashicorp/envconsul very nice tool to pass env variables generated on the fly from consul (configuration) or Vault ( secrets)
Erik Osterman (Cloud Posse)over 4 years ago
Ryan Rykeover 4 years ago
hi guys long time no talk can someone please merge https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/pull/45
Ryan Rykeover 4 years ago
trying to use it in gov cloud 🙂
Ryan Rykeover 4 years ago
cc @Erik Osterman (Cloud Posse)
Ryan Rykeover 4 years ago
👋
msharma24over 4 years ago
What is the practice followed to grant Terraform IAM access to multiple AWS Accounts -
In the past I have just created one IAM user in the SharedServices which can assume a "Terraform Deploy IAM Role with Admin Policy" in all other accounts where I wish to create resources with terraform and I would just use the IAM Access Keys in the CICD Configuration securely.
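A common shape for this, sketched with illustrative account IDs and role names: one set of credentials assumes a deploy role in each target account via aws provider aliases.
provider "aws" {
  alias  = "prod"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformDeploy"
  }
}

provider "aws" {
  alias  = "staging"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/TerraformDeploy"
  }
}

# modules/resources then select the account:
#   providers = { aws = aws.prod }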
Ryan Rykeover 4 years ago
hi @Matt Gowie i have a pr for you https://github.com/cloudposse/terraform-aws-vpc-flow-logs-s3-bucket/pull/27
Phil Hadvigerover 4 years ago
Anybody know where I can find info about module.this? https://github.com/cloudposse/terraform-aws-vpc/blob/master/main.tf#L13 I haven't been able to find anything in the Terraform docs, and have only seen this in CloudPosse modules so far.
Alencar Junior over 4 years ago
Folks, I was wondering how you deal with aws_ecs_task_definition and continuous delivery to ECS.
Do you keep the task-definition.yml within the application repository or do you manage it with Terraform?
I'm stuck with being able to build my app and deploy the latest release tag to ECS within the pipeline; however, I have environment variables and configs which are dependent on other terraform resources' outputs.
Chris Childressover 4 years ago(edited)
Hello, everyone. Hope all’s well. Before I submit an issue on Github, I wanted to make sure I wasn’t doing something “dumb”.
I am attempting to use the rds proxy module located at “cloudposse/rds-db-proxy/aws”. I’ve filled in most of the values, and I want the module to create an IAM role for accessing the RDS authentication Secret (rather than providing my own). I’m getting the following errors when I try a “terraform plan”:
Error: expected length of name to be in the range (1 - 128), got
on .terraform/modules/catalog_aurora_proxy/iam.tf line 78, in resource "aws_iam_policy" "this":
78: name = module.role_label.id
Error: expected length of name to be in the range (1 - 64), got
on .terraform/modules/catalog_aurora_proxy/iam.tf line 84, in resource "aws_iam_role" "this":
84: name = module.role_label.id
I have tried:
• terraform init
• terraform get
• terraform get inside the module (the “cloudposse/label/null” module didn’t appear to download automatically)
Chris Childressover 4 years ago(edited)
I’m using version 0.2.0 of the module, though I believe there weren’t any changes since 0.1.0?
I’m currently on terraform 0.14.11.
Chris Childressover 4 years ago
Ah!! I figured it out
Chris Childressover 4 years ago
it was because the “name” parameter for the RDS proxy module wasn’t set yet
Chris Childressover 4 years ago
I narrowed it down by setting the iam_role_attributes field to insert a letter, which passed the iam role creation issue and then gave me an error about the length of the name in the “module.this.id”
Chris Childressover 4 years ago
after I commented out the iam_role_attributes parameter line and set the name for the module, everything was fine
oskar maria grandeover 4 years ago(edited)
has anyone in here integrated https://driftctl.com into their workflows somehow? just curious.
Kevin Neufeld(PayByPhone)over 4 years ago(edited)
How are teams handling terraform destroy of managed s3 buckets that have a large number of objects (500K+)? We have sometimes resorted to emptying the bucket via the management portal. We have been looking at using the on-destroy provisioning step, but passing the correct creds down into the script is problematic in our case.
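For buckets Terraform owns end to end, force_destroy tells the provider to empty the bucket before deleting it (including object versions, as far as I recall), though it can take a while on very large buckets. A minimal sketch:
resource "aws_s3_bucket" "logs" {
  bucket        = "example-log-bucket" # illustrative name
  force_destroy = true                 # delete all objects on destroy
}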
lorenover 4 years ago
Anyone play with this tool yet? Claiming to be an open source alternative to Sentinel... https://github.com/terraform-compliance/cli
Michael Koroteevover 4 years ago
Hi guys
did anyone encounter this and can assist ?
https://github.com/cloudposse/terraform-aws-eks-cluster/issues/117
thanks!
Mark juanover 4 years ago
Anyone encountered this issue
│ Error: Error creating ElastiCache Replication Group (cosmos-test-elasticache): InvalidParameterValue: When specifying preferred availability zones, the number of cache clusters must be specified and must match the number of preferred availability zones.
│ status code: 400, request id: a29ff76d-dad3-4775-b1cf-6b265a37dbe4
│
│ with module.redis["cluster-2"].aws_elasticache_replication_group.redis_cluster,
│ on ../redis/main.tf line 3, in resource "aws_elasticache_replication_group" "redis_cluster":
│ 3: resource "aws_elasticache_replication_group" "redis_cluster" {
│
╵
Mark juan over 4 years ago
this is my main.tf file
data "aws_availability_zones" "available" {}
resource "aws_elasticache_replication_group" "redis_cluster" {
automatic_failover_enabled = true
availability_zones = data.aws_availability_zones.available.names
replication_group_id = "${var.name}-elasticache"
replication_group_description = "redis replication group"
node_type = var.node_type
number_cache_clusters = 2
parameter_group_name = "default.redis6.x"
port = 6379
subnet_group_name = aws_elasticache_subnet_group.redis_subnets.name
}
resource "aws_elasticache_subnet_group" "redis_subnets" {
name = "tf-test-cache-subnet"
subnet_ids = var.redis_subnets
}Reneover 4 years ago
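The error says the number of preferred AZs must match number_cache_clusters; the config above passes every AZ in the region while asking for 2 cache clusters. A hedged fix is to trim the AZ list to the cluster count (or drop availability_zones entirely and let AWS place the nodes):
availability_zones    = slice(data.aws_availability_zones.available.names, 0, 2)
number_cache_clusters = 2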
Rene over 4 years ago
Hello. I'm wondering about this ECS module. How exactly can I work EFS into the container definition?
https://registry.terraform.io/modules/cloudposse/ecs-container-definition/aws/latest
https://registry.terraform.io/modules/cloudposse/ecs-container-definition/aws/latest
Reneover 4 years ago
Because as it seems, I can only really use a mount_points argument; once I involve volumes it naturally doesn't work, since this seems not supported. Am I missing something?
Matt Gowie over 4 years ago
Anyone know if there is an open issue / discussion within the TF community around terraform.lock.hcl files not supporting multiple operating systems? Or what to do about that hangup? I'm starting to think about checking in lock files… but if they don't work cross platform then I'm unsure how folks make em work for their whole team.
RB over 4 years ago
upvote plz https://github.com/hashicorp/terraform/pull/28700
that will eventually add the ability to templatize strings instead of being stuck creating a file just to feed it into templatefile
sheldonh over 4 years ago
I revisited using native terraform with terraform cloud instead of terragrunt and was annoyingly reminded of the limitations when I tried to pass my env variables file with
-var-file and it complained 😆
I think that's probably my biggest annoyance right now.
If I could rely on env.auto.tfvars working I'd do that.
Otherwise I'd have to use a CLI/Go SDK to set all the variables in the workspace in Terraform Cloud itself.
Otherwise I feel I'm back to terragrunt if I don't want to use my own wrappers to pass environment based configurations.
I do like the yaml config module, but it's too abstracted right now for easy debugging so I'm sticking with environment files.
Michael Koroteevover 4 years ago
anyone encounter this ?
https://github.com/cloudposse/terraform-aws-eks-cluster/issues/117
Thomas Hoefkensover 4 years ago
Hi everyone, could you give me a hint on how to pass a json object as an input variable to a module... e.g. the module contains the <<POLICY EOF>> or <<ITEM >> syntax.. can I pass json into a variable and then use jsonencode in the module? If yes, how do you pass json as an input? Perhaps as a string?
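One pattern, sketched with illustrative names: accept structured data (any or a typed object) as the variable and call jsonencode() inside the module where the heredoc used to be.
variable "policy_document" {
  type = any
}

resource "aws_iam_policy" "example" { # illustrative resource
  name   = "example"
  policy = jsonencode(var.policy_document)
}

# caller:
#   module "iam" {
#     source          = "./modules/iam"
#     policy_document = {
#       Version   = "2012-10-17"
#       Statement = [{ Effect = "Allow", Action = ["s3:GetObject"], Resource = "*" }]
#     }
#   }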
Grubholdover 4 years ago
Hi folks, I need your help in understanding the folder structure I need to have for different environments (dev, stage, prod) when building an infra very similar to https://github.com/cloudposse/terraform-aws-ecs-web-app
jason einonover 4 years ago
Hey, anyone ever had to do a TF-based interview test? Looking to pull one together and thought this would be a good place for some ideas.
Vikram Yerneniover 4 years ago
Folks, anyone here got into Terraform module testing for infrastructure code?
Source: https://www.terraform.io/docs/language/modules/testing-experiment.html#writing-tests-for-a-module
rssover 4 years ago(edited)
v1.1.0-alpha20210630
1.1.0 (Unreleased)
NEW FEATURES:
cli: terraform add generates resource configuration templates (#28874)
config: a new type() function, only available in terraform console (<a href="https://github.com/hashicorp/terraform/issues/28501" data-hovercard-type="pull_request"...