97 messages
Matt Gowiealmost 5 years ago
If anybody is interested in contributing to a good open source module: we have a few good first issues in our terraform-aws-multi-az-subnets repository. They range from a super easy "change a variable to a different type and rename it" to "figure out the difference between two modules and write some quick docs". Check em out if you're interested!
marc slaytonalmost 5 years ago
Multi-AZ subnets occur in more than one availability zone.
msharma24almost 5 years ago
Starting a greenfield Terraform env. The customer has Bitbucket and Bamboo; would you recommend Atlantis over Bamboo Pipelines (YAML specs) for Terraform automation?
Jeff Behlalmost 5 years ago
alright, apologies for this being so general, but.. how/where are folks registering the outputs of terraform for use in app configurations? eg: my app needs to use the SQS queue created by terraform, and it uses an env file with vars and values. env file could/should be a template of some sort, but how to get the values? CI/CD on
terraform output that looks for specific outputs and pushes to consul (or any persistent place)? some place where ansible could get facts for template generation? but for both, storing the results somewhere accessible seems to be the question for us. parsing the state file seems like a horrible idea, so I'll discount that.. thx
Petro Gorobchenko almost 5 years ago (edited)
Hello everyone, looking for support on this issue.
Looking to utilize terraform-aws-ecs-web-app, I am running into this error: cache location is required when cache type is "S3". It seems like it may be coming from:
on .terraform/modules/ecs_web_app.ecs_codepipeline.codebuild/main.tf line 292, in resource "aws_codebuild_project" "default":
292: resource "aws_codebuild_project" "default" {
I can't see what configuration may be causing this.
Any help on this is greatly appreciated.
Pierre-Yvesalmost 5 years ago(edited)
Hello,
how do you initialize a new disk on an Azure VM with Terraform?
I am looking to automate the next step, which has to be done after these two steps:
• azurerm_managed_disk
• azurerm_virtual_machine_data_disk_attachment
What I want to do with Terraform is mount and format the disk on Windows, which is described here: Initialize a new data disk
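One hedged way to automate that documented PowerShell initialization is a CustomScriptExtension that runs after the attachment; a sketch (the resource names and the one-liner are illustrative, not from this thread):

```hcl
# Sketch only: runs after azurerm_virtual_machine_data_disk_attachment.
# The referenced resource names (example, data_disk) are hypothetical.
resource "azurerm_virtual_machine_extension" "init_disk" {
  name                 = "init-data-disk"
  virtual_machine_id   = azurerm_virtual_machine.example.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  settings = jsonencode({
    # Initialize any RAW disk, create a partition, and format it NTFS.
    commandToExecute = "powershell -Command \"Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false\""
  })

  depends_on = [azurerm_virtual_machine_data_disk_attachment.data_disk]
}
```

Adjust the PowerShell to your own disk layout; this blanket "format every RAW disk" approach is only safe on a fresh VM.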
Sai Mudikialmost 5 years ago
Hi folks, can anyone provide me details on how to spin up a full-blown AWS EKS cluster using Terraform with self-managed nodes and a Fargate profile? There is no clear documentation on how to get started with it! I would love to start using the modules that have already been built. Thanks for your help in advance!
Mr.Devopsalmost 5 years ago
hoping someone can help. I'm using the azurerm_role_assignment resource. What I'd like to be able to do is have a list of resources I can scope to. I did something using map types, but using it in this way:
variable "role" {
type = map(any)
description = "The permission block for roles assignment"
default = {
"default_assignment" = {
scope = ""
role_definition_name = "Reader"
principal_id = ""
}
}
}
would result in me setting my inputs as:
role = {
"scope-001" = {
scope = "/subscriptions/${local.sub_id}"
role_definition_name = "Contributor"
principal_id = dependency.identity.outputs
},
"scope-002" = {
scope = "/subscriptions/${local.sub_id}"
role_definition_name = "Reader"
principal_id = dependency.identity
}
}
where instead I would like to use something like this:
role = {
"scope-001" = {
scope = "/subscriptions/${local.sub_id}"
role_definition_name = "Contributor"
principal_id = dependency.identity.outputs
},
{
scope = "/subscriptions/${local.sub_id}"
role_definition_name = "Reader"
principal_id = dependency.identity
}
}
Adrian almost 5 years ago
hey, I used terraform-aws-elastic-beanstalk-environment to create an Elastic Beanstalk env.
I want to upload a new Docker image. Is there a bucket for this? I didn't see any reference for it.
Saichovskyalmost 5 years ago
I have set up security hub using terraform and part of the resources include a lambda which gets triggered by an EventBridge rule. So whenever I run terraform apply, a new
aws_cloudwatch_event_target resource is created as a trigger attached to the existing lambda. So we have a duplication of triggers to the lambda, with the latest one being the active one and the former being disabled. Both triggers have the same ARN, but they have separate IDs:
resource "aws_cloudwatch_event_target" "event_target" {
arn = "arn:aws:lambda:eu-west-1:123456789012:function:service-security_hub_to_jira"
event_bus_name = "default"
id = "eng-security_hub_to_jira_rule-terraform-20210505102803432000000001"
rule = "eng-security_hub_to_jira_rule"
target_id = "terraform-20210505102803432000000001"
}
This is the output from terraform state show. It only lists one resource when I provide the resource address, but in the Lambda console, under triggers, I have two EventBridge resources with the same ARN, but one is enabled and the other disabled.
1. Is this a bug in terraform?
2. Is there a way to have
terraform apply identify the event rule by ARN and not by ID, which is not even viewable in the AWS console?
Sergey Kvetko almost 5 years ago
Hi! Could somebody make a release of https://github.com/cloudposse/terraform-provider-utils with darwin_arm64 support?
Steve Wade (swade1987)almost 5 years ago
does anyone know the minimal IAM policy required to read SQS messages?
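For what it's worth, consuming from a queue generally needs sqs:ReceiveMessage plus sqs:DeleteMessage and sqs:GetQueueAttributes; a sketch (the queue ARN is a placeholder):

```hcl
# Minimal consumer policy sketch; the queue ARN below is a placeholder.
data "aws_iam_policy_document" "sqs_read" {
  statement {
    effect = "Allow"
    actions = [
      "sqs:ReceiveMessage",     # read messages off the queue
      "sqs:DeleteMessage",      # remove messages after processing
      "sqs:GetQueueAttributes", # required by most SDK consumers
    ]
    resources = ["arn:aws:sqs:us-east-1:123456789012:my-queue"]
  }
}
```

If the queue is encrypted with a customer-managed KMS key, the consumer will also need kms:Decrypt on that key.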
Mr.Devopsalmost 5 years ago
Reposting just in case anyone missed and willing to help 😌
rssalmost 5 years ago
v0.15.2
0.15.2 (May 05, 2021)
ENHANCEMENTS:
terraform plan and terraform apply: Both now support a new planning option -replace=... which takes the address of a resource instance already tracked in the state and forces Terraform to upgrade either an update or no-op plan for that instance into a "replace" (either destroy-then-create or create-then-destroy depending on configuration), to allow replacing a degraded object with a new object of the same configuration in a single action and preview the...
Dhaval Dedhiaalmost 5 years ago
Hi, I am trying to create multiple DataDog monitors via Terraform, and I am facing a weird issue. Can anyone please help me out here?
My resource block looks like this (which is inspired by Cloudposse's module):
resource "datadog_monitor" "monitor" {
for_each = var.datadog_monitors
name = each.value.name
type = each.value.type
query = each.value.query
message = format("%s%s", each.value.message, var.alert_tags)
.
.
.
.
.
.
.
}
And in my tfvars file, I have a map of monitor configs which I pass via tfvars:
datadog_monitors = {
high-error-logs = {
name = "[P2] [uds] [prod] [naea1] monitor name here"
type = "log alert"
query = "logs("service:service-name platform:platform-name environment:prod status:error env:env-name region:us-east-1").index("*").rollup("count").last("10m") > 50"
tags = ["managed_by:Terraform", "platform:platform-name", "environment:prod", "env:env-name", "service:service-name", "region:us-east-1"]
}
}
I am not able to pass in the query exactly like this because of the quotes in the query value (").
I tried to replace " with ', but that won't work because the query then becomes invalid.
I even tried to prefix the quotes in the middle with \", but that gives me errors as well. I am stuck. Has anybody else faced a similar issue before and can help me out please?
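For reference, HCL does accept backslash-escaped quotes inside a quoted string, as long as every embedded quote is escaped; a sketch of how the tfvars entry could look (all names and values are placeholders, not the real monitor config):

```hcl
# tfvars sketch: every embedded quote in the query must be escaped with \".
# All names/values here are placeholders.
datadog_monitors = {
  high-error-logs = {
    name  = "[P2] monitor name here"
    type  = "log alert"
    query = "logs(\"service:service-name status:error\").index(\"*\").rollup(\"count\").last(\"10m\") > 50"
  }
}
```

If the escaping gets unwieldy, another option is building the query in a .tf file with format() or a heredoc and passing only the simple pieces through tfvars.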
managedkaosalmost 5 years ago
Hey, Team! Question: When you encounter a catastrophic error in TF (crash or resource conflict), what’s the best way to find and/or recover any resources that have been created but not written to state?
Example, the TF config says create resource named X but resource X already exists (manually created, from another TF project, etc). So TF encounters an error and stops processing (at best) or crashes (at worst). The resources created up to that point may have not been written to state prior to the stop/crash.
On small projects, I’ve gone through the console or CLI and manually removed things. But I’m wondering if there’s a better way in the event a project contains hundreds (or more!) resources all over the place. TIA!
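One hedged approach for resources that were created but never recorded in state: compare what the state knows against what exists, then adopt the strays with terraform import (the resource address and ID below are placeholders):

```shell
# What does the state file currently track?
terraform state list

# A fresh plan will surface "already exists" errors, hinting at orphans.
terraform plan

# Adopt an orphaned resource into state (placeholder address and ID).
terraform import aws_s3_bucket.example my-bucket-name
```

At hundreds of resources this is still tedious; some teams script the import loop from the provider's CLI listing, but there is no built-in bulk recovery.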
SecOHalmost 5 years ago
Hello, guys. I have a question: is it mandatory that the AmazonMQ (RabbitMQ) security group's egress rule allows all outbound traffic? (egress 0.0.0.0/0)
https://github.com/cloudposse/terraform-aws-mq-broker/blob/3951c8e1cf4faf94c3c92b2b01d26b078bc60d88/sg.tf#L8
If I create security group for mq, the egress rule must be added to my SG.
Vlad Ionescu (he/him)almost 5 years ago
If any of y’all use GitHub and you want to see Dependabot support for Terraform, there is a way to help: https://github.com/dependabot/dependabot-core/issues/1176#issuecomment-833383992
Tl;dr: dependabot is working on HCL2/tf0.14/tf0.15 support and is asking for any people with public repos interested in testing
greg nalmost 5 years ago
Hello guys, I just ran into https://github.com/cloudposse/terraform-aws-multi-az-subnets not supporting
var vpc_default_route_table_id like https://github.com/cloudposse/terraform-aws-dynamic-subnets#input_vpc_default_route_table_id does. Could that be useful, and so worth raising a GH issue for?
Michael Dizon almost 5 years ago (edited)
anyone know of a way to conditionally create a resource (in my case an
aws_lambda_function) only if another resource exists (an aws_ecr_image that gets uploaded as a separate process outside of TF)?
Steve Wade (swade1987) almost 5 years ago
could this be an asymmetric routing issue? 🤔
rssalmost 5 years ago(edited)
v0.15.3
0.15.3 (May 06, 2021)
ENHANCEMENTS:
terraform show: Add data to the JSON plan output describing which changes caused a resource to be replaced (#28608)
BUG FIXES:
terraform show: Fix crash for JSON plan output of new resources with sensitive attributes in nested blocks (#28624)...
Paul Robinsonalmost 5 years ago
Hi @Matt Gowie I've just joined following a couple of PRs to the terraform-aws-multi-az-subnets module.
I have a question about this if you can explain please?
https://github.com/cloudposse/terraform-aws-multi-az-subnets/pull/48#pullrequestreview-649820152
We are not going to support the use case of private subnets without NAT gateways, at least not in this module.
I saw you reviewed the follow-up PR #50.
Is there any contextual design discussion that I can read up on please?
managedkaosalmost 5 years ago
anyone used this app to go from infra back to code? 🤔
I’ve used terraformer but this looks slicker since it generates TF, CFN, CDK, and even Pulumi 🤷♂️🏾 🤪
https://former2.com/
Sachin calmost 5 years ago(edited)
Hi Team, I was trying to use the latest version of the cloudposse/ec2-autoscale-group/aws module and found the CloudWatch alarm name is duplicated in the default alarms.
Expected behavior:
# module.autoscale_group.aws_cloudwatch_metric_alarm.all_alarms["cpu_high"] will be created
+ resource "aws_cloudwatch_metric_alarm" "all_alarms" {
+ actions_enabled = true
+ alarm_actions = (known after apply)
+ alarm_description = "Scale up if CPU utilization is above 70 for 120 seconds"
+ alarm_name = "appname-prod-backend-cpu-utilization-high"
Actual Result:
# module.autoscale_group.aws_cloudwatch_metric_alarm.all_alarms["cpu_high"] will be created
+ resource "aws_cloudwatch_metric_alarm" "all_alarms" {
+ actions_enabled = true
+ alarm_actions = (known after apply)
+ alarm_description = "Scale up if CPU utilization is above 70 for 120 seconds"
+ alarm_name = "appname-prod-backend-appname-prod-backend-cpu-utilization-high"
Alec Fong almost 5 years ago
Hello! What’s the differences between terraform-aws-multi-az-subnets and terraform-aws-dynamic-subnets? Why/when would I use one over the other?
Brij Salmost 5 years ago(edited)
Hey all, I’m trying to setup EKS with managed node groups with the following config
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "eks-vpc"
cidr = "172.21.0.0/16"
azs = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1]]
public_subnets = ["172.21.0.0/20", "172.21.16.0/20"]
enable_nat_gateway = false
enable_vpn_gateway = true
propagate_public_route_tables_vgw = true
tags = merge(var.tags, {
"kubernetes.io/cluster/eks" = "shared"
})
public_subnet_tags = merge(var.tags, {
"kubernetes.io/cluster/eks" = "shared"
})
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "15.2.0"
cluster_name = "eks"
cluster_version = "1.19"
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
cluster_enabled_log_types = ["scheduler"]
tags = var.tags
cluster_endpoint_private_access = true
node_groups_defaults = {
ami_type = "AL2_x86_64"
disk_size = 20
# subnets = module.vpc.private_subnets
}
node_groups = {
gitlab-eks = {
name = "gitlab-eks"
desired_capacity = 3
max_capacity = 5
min_capacity = 3
instance_types = ["t3.2xlarge"]
capacity_type = "ON_DEMAND"
}
}
}
However, I keep running into the following error:
Error: List shorter than MinItems
on .terraform/modules/eks/modules/node_groups/node_groups.tf line 8, in resource "aws_eks_node_group" "workers":
8: subnet_ids = each.value["subnets"]
Attribute supports 1 item minimum, config has 0 declared
Has anyone else run into this? I've looked in the eks module issues and haven't found anything, and I also tried adding/removing the subnet in the nodegroup defaults, with no success 😕
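One hedged reading of this error: the vpc module in the config only declares public_subnets, so module.vpc.private_subnets resolves to an empty list and the node group receives zero subnets. A sketch of one possible fix (the added CIDRs are examples):

```hcl
# Option A: actually create private subnets in the vpc module,
# so module.vpc.private_subnets is non-empty (example CIDRs):
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = "eks-vpc"
  cidr            = "172.21.0.0/16"
  azs             = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1]]
  public_subnets  = ["172.21.0.0/20", "172.21.16.0/20"]
  private_subnets = ["172.21.32.0/20", "172.21.48.0/20"] # added

  # ...rest of the settings unchanged...
}

# Option B: point the cluster at the subnets that do exist:
#   subnets = module.vpc.public_subnets
```

Private subnets for nodes would also typically need enable_nat_gateway = true so the nodes can reach the EKS endpoint.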
ROalmost 5 years ago
Hi everyone
Building a VPC with 3 subnets using Terraform. I have some more to add to it, but I want to know if anyone has a ready recipe so I can check for any mistakes I may have made.
François Davieralmost 5 years ago
Hi, I want to use https://registry.terraform.io/modules/cloudposse/cloudwatch-events/aws/latest. The target is to monitor whether an AWS Backup job is ok/ko when copying a restore point from the source region to the vault in the target region. Do you have some examples please? Thank you
Rhys Daviesalmost 5 years ago(edited)
Hey, can anyone recommend an article, series, or blog post about how to correctly structure the layers in a Terraform project?
David Morganalmost 5 years ago
hi
i am trying to use terraform-aws-modules/dynamodb-table/aws 0.13.0
when i specify
ttl_attribute = "ttl"
i get the following message:
An argument named "ttl_attribute" is not expected here
this is what i'm trying to run:
module "cache_dynamo_table_forum_post_count" {
source = "terraform-aws-modules/dynamodb-table/aws"
version = "0.13.0"
name = "mytable_name"
hash_key = "my_id"
billing_mode = "PAY_PER_REQUEST"
ttl_attribute = "ttl"
}
thanks
Jasonalmost 5 years ago
Hi all
I’m using terraform-aws-transit-gateway (https://github.com/cloudposse/terraform-aws-transit-gateway) to create TGW and share it with external principals.
I faced an issue when sharing TGW with external principal as below
Error: error reading EC2 Transit Gateway: InvalidTransitGatewayID.NotFound: Transit Gateway tgw-090ff1710310403a7 was deleted or does not exist.
status code: 400, request id: 836b5c87-7b76-44f9-b318-f1fbf47fa785
on ../../../modules/tgw/main.tf line 49, in data "aws_ec2_transit_gateway" "this":
49: data "aws_ec2_transit_gateway" "this" {
The reason is this module has a check:
data "aws_ec2_transit_gateway" "this" {
id = local.transit_gateway_id
}
As you may know, for an external principal, the share needs to be accepted from the second AWS account before the transit gateway can be seen and defined as a data_source.
My question is whether we have a timeout/delay in the data_source, or a dependency to wait for the accepter to accept the sharing, so it can proceed to the next steps?
Thanks everyone
Yoni Leitersdorf (Indeni Cloudrail)almost 5 years ago
There’s been some discussion here and other forums about third-party account access provided to vendors. Has anyone ever seen a collection of all of the known vendors’ IAM role requirements in IaC format (CloudFormation, Terraform, etc.)?
My thinking is this - let’s say you want to give a third party vendor access to your account through an IAM role, and you’d rather do it through your IaC process and not using a click-through-stack-deployment. It would be great to have the roles’ code just easily available for download and inclusion in your code repo.
This should be easy with CloudFormation (as the stack code is visible in click-through process), but more complicated for Terraform.
Darren Phamalmost 5 years ago
Issue with Terraform 0.15 and https://github.com/cloudposse/terraform-aws-ecs-container-definition
https://github.com/cloudposse/terraform-aws-ecs-container-definition/issues/136
David Morganalmost 5 years ago
is there a cloudposse way to specify attributes to ignore aka terraform lifecycle ignore_changes?
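For context, lifecycle meta-arguments in Terraform must be literal and cannot be driven by variables, which is why modules generally cannot expose ignore_changes as an input; on a resource you own it looks like this (resource values are placeholders):

```hcl
# lifecycle arguments must be hard-coded; they cannot come from variables,
# so a module has to bake them in. Example on a resource you control:
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  lifecycle {
    ignore_changes = [tags] # drift in tags will not trigger updates
  }
}
```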
Michael Warkentinalmost 5 years ago
Looking to get some feedback on this issue: https://github.com/cloudposse/terraform-aws-dynamodb/issues/84
Not sure if I misunderstood how it should be configured or ran into a bug
lorenalmost 5 years ago
in case you're using the ram share accepter resource and have started getting failures when destroying the accepter from the member account: https://github.com/hashicorp/terraform-provider-aws/issues/19319
Steffanalmost 5 years ago(edited)
Hi guys - wondering if someone can help me out here. I don't see any data source config in the TF docs to grab an existing user's key and secret so they can be used in another module. Is this something that can be done? How can I achieve this?
so like
data "aws_iam_access_key" "example" {
name = "example"
}
Almondovar almost 5 years ago (edited)
Hi all, glad to find you!
we are using module terraform-aws-vpc-peering-multi-account v0.5.0 with Terraform v0.11 and we need to upgrade our Terraform to v0.14; may I ask some questions please?
1. how do i know which version of terraform matches each version of the module?
2. What is the optimal path to upgrade the module from 0.5.0 to 0.14.0? - do i need to upgrade one by one the versions or i can "merge" few versions together?
3. i don't want in any case to run tf apply in the production system; is it possible to upgrade the module and tf version without ever running tf apply?
4. if the need for terraform import comes, is it supported? i dont see any import examples on github 🙂
Thank you!
Steve Wade (swade1987)almost 5 years ago
is there a way in terraform 0.13 to perform a data lookup for region using an aliased provider?
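For what it's worth, data sources accept the provider meta-argument just like resources do, so a region lookup through an alias should work in 0.13; a sketch (the alias name is an example):

```hcl
# Provider alias "secondary" is an example name.
provider "aws" {
  alias  = "secondary"
  region = "eu-west-1"
}

# Data sources take the provider meta-argument just like resources.
data "aws_region" "secondary" {
  provider = aws.secondary
}

output "secondary_region" {
  value = data.aws_region.secondary.name
}
```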
Steve Wade (swade1987)almost 5 years ago
quick guardduty question if thats ok ... i have configured the following for region X ...
resource "aws_guardduty_organization_admin_account" "this" {
provider = aws.master-account
admin_account_id = data.aws_organizations_organization.this.master_account_id
}
resource "aws_guardduty_detector" "this" {
provider = aws.master-account
enable = true
}
resource "aws_guardduty_organization_configuration" "this" {
provider = aws.master-account
auto_enable = true
detector_id = aws_guardduty_detector.this.id
depends_on = [aws_guardduty_organization_admin_account.this]
}
however, when going to guardduty i can see all the accounts but they aren't listed as members. do I need to execute something else as well, or leave it for X hours?
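One possibility, hedged: auto_enable only covers accounts that join the organization after it is set, so accounts that already existed typically need explicit aws_guardduty_member resources (account details below are placeholders):

```hcl
# Existing org accounts usually need to be enrolled explicitly;
# the account_id and email below are placeholders.
resource "aws_guardduty_member" "existing" {
  provider    = aws.master-account
  detector_id = aws_guardduty_detector.this.id
  account_id  = "111122223333"
  email       = "security@example.com"
  invite      = false # org-managed members don't need an invitation
}
```

In practice you would for_each this over the account list from the aws_organizations_organization data source.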
Erik Osterman (Cloud Posse)almost 5 years ago
Up vote please! PR by @RB https://github.com/hashicorp/terraform/pull/28686
Brij Salmost 5 years ago
are you able to store locals in a .tfvars file?
Evgenii Prokofev almost 5 years ago
Hi. Can someone give a clue what the purpose of the
cloudposse/route53-cluster-hostname/aws module is? How can it be used?
Albert Balinski almost 5 years ago (edited)
Hello, I have seen that yesterday a PR
Greater control over Access Logging was merged in https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/161 (btw, thank you very much for maintaining it, great stuff!). I am not sure if this is a coincidence, but today I got an error:
│ Error: Error putting S3 logging: InvalidTargetBucketForLogging: The owner for the bucket to be logged and the target bucket must be the same.
│ status code: 400, request id: VGEG9C37YWX8BH14, host id: abc=
│
│ with module.qa.aws_s3_bucket.origin[0],
│ on .terraform\modules\qa\main.tf line 200, in resource "aws_s3_bucket" "origin":
│ 200: resource "aws_s3_bucket" "origin" {
After I have downgraded the version from
0.65.0 to 0.64.0, it works correctly.
Jacob Tran almost 5 years ago
Hello all, I'm using terraform-aws-ec2-instance to provision an ec2, but I got an error:
╷
│ Error: ConflictsWith
│
│ with module.ec2_instance.module.default_sg.aws_security_group_rule.default["ingress-tcp-443-443-ipv4-no_ipv6-no_ssg-no_pli-no_self-no_desc"],
│ on .terraform/modules/ec2_instance.default_sg/main.tf line 58, in resource "aws_security_group_rule" "default":
│ 58: self = lookup(each.value, "self", null) == null ? false : each.value.self
│
│ "self": conflicts with cidr_blocks
Could someone help me?
Slackbotalmost 5 years ago
This message was deleted.
ememalmost 5 years ago
i have my terraform.state file created but it's not being used
GitRepository Gitalmost 5 years ago
Good Morning To All
GitRepository Gitalmost 5 years ago
on main.tf line 160, in resource "aws_security_group" "websg":
│ 160: ingress = {
│ 161: cidr_blocks = [ local.anywhere ]
│ 162: description = "open ssh prt"
│ 163: from_port = 22
│ 164: protocol = "tcp"
│ 165: to_port = 22
│ 166: #security_groups = [ "value" ]
│ 167: #self = false
│ 168: #ipv6_cidr_blocks = [ "value" ]
│ 169: #prefix_list_ids = [ "value" ]
│ 170:
│ 171:
│ 172: }
│ ├────────────────
│ │ local.anywhere is "0.0.0.0/0"
│
│ Inappropriate value for attribute "ingress": set of object required.
╵
GitRepository Gitalmost 5 years ago
i got this error
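The error above says it all: when ingress is assigned as an argument (rather than repeated ingress { } blocks), Terraform expects a set of objects, and each object must set every attribute of the type. A sketch against the snippet above (surrounding security group omitted):

```hcl
# ingress as an argument must be a set (list) of objects, and each object
# must set every attribute: use [] / false rather than omitting them.
ingress = [
  {
    description      = "open ssh port"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = [local.anywhere]
    ipv6_cidr_blocks = []
    prefix_list_ids  = []
    security_groups  = []
    self             = false
  }
]
```

The alternative is the block syntax (ingress { ... } without =), which does allow omitting attributes.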
lorenalmost 5 years ago
dependabot has landed support for HCL2... https://github.com/dependabot/dependabot-core/issues/1176#issuecomment-841239564
Zachalmost 5 years ago
oooo
Heath Snowalmost 5 years ago
nice
David Morganalmost 5 years ago
hello, i am using “cloudposse/elasticache-redis/aws”
i am using terraform 0.13.7
and specifying version “0.13.0"
however i get the following errors when running init:
Module module.cache_redis.module.redis.module.dns (from
git::https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.3.0)
does not support Terraform version 0.13.7
and
Module module.cache_redis.module.redis.module.label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.14.1)
does not support Terraform version 0.13.7
i'm not sure how to resolve the version incompatibility
thank you…
GitRepository Gitalmost 5 years ago
i'm using Terraform v0.15.1
michael sewalmost 5 years ago
Q: I've accidentally tried to rename an AWS RDS subnet group with an RDS instance still attached. Unfortunately, terraform complained I needed to move the DB to a diff subnet group.
Error: Error modifying DB Instance dev-db-01: InvalidVPCNetworkStateFault: You cannot move DB instance dev-mpa-spa-db-01 to subnet group dev-db-01-dbsg. The specified DB subnet group and DB instance are in the same VPC. Choose a DB subnet group in different VPC than the specified DB instance and try again.
.. now it seems that the terraform state is a little messed up. Does anybody have suggestions to unmuck this? Is the usual approach to manually fix it and then re-import the resource? or
untaint something?
Brandon Metcalf almost 5 years ago
hello everyone. with version 0.17.2 of cloudposse/terraform-aws-s3-bucket and terraform 0.14.5, the policy that gets generated and to be applied to the newly created bucket looks like
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AWSCloudTrailAclCheck",
"Effect": "Allow",
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::dev-gov-test-us-gov-west-1-cloudtrail",
"Principal": {
"Service": "cloudtrail.amazonaws.com"
}
},
{
"Sid": "AWSCloudTrailWrite",
"Effect": "Allow",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::dev-gov-test-us-gov-west-1-cloudtrail/*",
"Principal": {
"Service": [
"config.amazonaws.com",
s.com"
]
},
"Condition": {
"StringEquals": {
"s3:x-amz-acl": [
"bucket-owner-full-control"
]
}
}
}
]
}
Brandon Metcalf almost 5 years ago
notice the s.com on a line by itself. i believe "cloudtrail.amazonaws.com" is getting truncated resulting in this. and when terraform tries to apply the policy, the following error occurs:
Error putting S3 policy: MalformedPolicy: Policy has invalid resource
Brandon Metcalf almost 5 years ago
plan output shows the policy as
+ policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "s3:GetBucketAcl"
+ Effect = "Allow"
+ Principal = {
+ Service = "cloudtrail.amazonaws.com"
}
+ Resource = "arn:aws:s3:::dev-gov-test-us-gov-west-1-cloudtrail"
+ Sid = "AWSCloudTrailAclCheck"
},
+ {
+ Action = "s3:PutObject"
+ Condition = {
+ StringEquals = {
+ s3:x-amz-acl = [
+ "bucket-owner-full-control",
]
}
}
+ Effect = "Allow"
+ Principal = {
+ Service = [
+ "config.amazonaws.com",
+ "cloudtrail.amazonaws.com",
]
}
+ Resource = "arn:aws:s3:::dev-gov-test-us-gov-west-1-cloudtrail/*"
+ Sid = "AWSCloudTrailWrite"
},
]
+ Version = "2012-10-17"
}Brandon Metcalfalmost 5 years ago
it turns out the debug output seems to be a red herring. this is actually occurring in govcloud, so the arn partition is incorrect: instead of aws it should be aws-us-gov. i'll look into submitting a PR.
michael sewalmost 5 years ago
Question about structuring terraform variables (as I'm learning). Do you prefer putting variables (aka *.tfvars variables)
• A) in terraform cloud workspaces variables section?
OR
• B) in your git repo alongside your .tf code?
main.tf
/env
  /dev
    dev.auto.tfvars
  /prd
    prd.auto.tfvars
With the latter option, it seems like I need a separate batch file or cmd to properly point to the environment's -var-file, and that this wouldn't work with terraform cloud unless I'm missing something.
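For option B outside Terraform Cloud, the pointer is usually just an explicit -var-file per environment (a sketch following the layout above; note that *.auto.tfvars files are only auto-loaded from the working directory, not from subdirectories):

```
# dev
terraform plan -var-file=env/dev/dev.auto.tfvars

# prd
terraform plan -var-file=env/prd/prd.auto.tfvars
```

Terraform Cloud doesn't run wrapper scripts like this for you, which is why option A (workspace variables) tends to win there.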
lorenalmost 5 years ago(edited)
How long until Hashicorp open sources sentinel? https://aws.amazon.com/about-aws/whats-new/2021/05/aws-cloudformation-guard-2-0-is-now-generally-available/
Steve Wade (swade1987)almost 5 years ago
does anyone know the appropriate IAM policy to allow users to change their password and set up MFA ?
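For reference, a minimal sketch along the lines of AWS's documented "allow users to manage their own credentials and MFA" pattern (action list trimmed; verify against your console requirements before relying on it):

```hcl
data "aws_iam_policy_document" "self_service" {
  statement {
    sid = "ManageOwnPasswordAndMFA"
    actions = [
      "iam:ChangePassword",
      "iam:GetUser",
      "iam:CreateVirtualMFADevice",
      "iam:EnableMFADevice",
      "iam:ResyncMFADevice",
      "iam:ListMFADevices",
    ]
    # $${...} escapes the IAM policy variable so Terraform passes ${aws:username} through literally
    resources = [
      "arn:aws:iam::*:user/$${aws:username}",
      "arn:aws:iam::*:mfa/$${aws:username}",
    ]
  }
}
```

Some console flows may also need a broader statement (e.g. iam:ListVirtualMFADevices on "*") to render the MFA setup page.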
lorenalmost 5 years ago
i forget, which of the TACOS other than atlantis supported self-hosted deployments?
Marko Sustarsicalmost 5 years ago
Hi there, I have a question about terraform-aws-cloudfront-s3-cdn which we're using to provide images to a number of our client applications. Currently if an image is unavailable on Cloudfront the clients will receive a 404 response with no content, and we'd like to change that so that we return a fallback image, whilst still maintaining the 404 http code for backwards compatibility.
Looking at the docs of this module, if we set website_enabled = true we would be able to provide a custom error response. However, presumably the "error document" that is passed in should be an html file, not an alternative asset to be served.
Does anyone know if there's an easy way of achieving what we need using this module?
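On a raw distribution this is done with a custom_error_response block; whether the module exposes it I'm not certain, so this is a sketch against aws_cloudfront_distribution directly (the fallback object path is hypothetical):

```hcl
resource "aws_cloudfront_distribution" "cdn" {
  # ... origins, cache behaviors, etc. elided ...

  custom_error_response {
    error_code         = 404
    response_code      = 404                  # keep returning 404 to clients
    response_page_path = "/fallback/404.png"  # any object in the origin, not just HTML
  }
}
```

CloudFront serves the object at response_page_path with its stored Content-Type, so an image can be the error body while the status code stays 404.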
rssalmost 5 years ago
v0.15.4
0.15.4 (May 19, 2021)
NEW FEATURES:
Noting changes made outside of Terraform: Terraform has always, by default, made a point during the planning operation of reading the current state of remote objects in order to detect any changes made outside of Terraform, to make sure the plan will take those into account.
Terraform will now report those detected changes as part of the plan result, in order to give additional context about the planned changes. We've often heard that people find it...
lorenalmost 5 years ago
pretty cool release there, lot going on
Michael Warkentinalmost 5 years ago
Is there a way to disable tags entirely for the cloudposse modules? I’m trying to reuse some config to configure DynamoDB local tables, and it doesn’t support tagging..
WCalmost 5 years ago
Hi, I got this error when creating the efs_file_system resource:
Error: error reading EFS FileSystem: empty output
on main.tf line 33, in data "aws_efs_file_system" "tf-efs-fs":
33: data "aws_efs_file_system" "tf-efs-fs" {
WCalmost 5 years ago
the resource is like this:
data "aws_efs_file_system" "tf-efs-fs" {
  creation_token = "my-efs-file-system"
}
WCalmost 5 years ago
I've spent a whole day trying to fix this, but still no progress. Please help me figure out what is wrong. Thank you.
greg nover 4 years ago
You said you're trying to create an efs filesystem, but the snippet you posted is a data source for looking up an existing resource.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/efs_file_system
vs
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/efs_file_system
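The difference in HCL terms (a sketch reusing the creation token from the snippet above):

```hcl
# resource: creates a new filesystem and manages its lifecycle
resource "aws_efs_file_system" "tf_efs_fs" {
  creation_token = "my-efs-file-system"
}

# data source: only *reads* a filesystem that must already exist --
# "empty output" means nothing matched the creation_token
data "aws_efs_file_system" "existing" {
  creation_token = "my-efs-file-system"
}
```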
Jeff Dykeover 4 years ago(edited)
Dumb question time. I'm building new vpcs to replace those created by console. Each time I create a new state folder, with a backend, the first plan doesn't have a remote state. Is there a better way than -lock=false? It's only me building this so I'm not worried about it... just more interested in the workflow. First time working on completely new infra with terraform.
Alex Jurkiewiczover 4 years ago
RFC for a cloudposse null label feature request
We use this module a lot, but some of the tag names don't match our internal standard. For example, we use "environment" rather than "stage". As a workaround, we set additional_tag_map = { environment = "dev" }, but this means we have a Stage tag which duplicates the value.
In AWS this is a problem since there is a limit of 50 tags per resource, and we add other tags globally for cost/security control.
So, I'd like to enhance this module so you can disable certain default tags or override their name. For example:
module "context" {
  ...
  stage_tag_name     = "environment" # use 'environment' instead of 'stage'
  namespace_tag_name = ""            # don't create a namespace tag at all
}
Thoughts?
adebola olowoseover 4 years ago
Hello guys, please I need your advice on how to alter a baseline. We want to move our CloudTrail logs from the master account to a centralized audit account. The question is: we have CloudTrail and s3 buckets on the master account which collect logs for all the other accounts, and now we want to move that to the audit account. What's the best approach to do this?
adebola olowoseover 4 years ago
Thank you @Yoni Leitersdorf (Indeni Cloudrail) any idea on how to use terraform to do this?
Karl Websterover 4 years ago
Is there a maintainer of terraform-aws-dynamic-subnets here? The following PR (created by the renovate bot) fixes my issues:
• The PR: https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/128
• The Issue it fixes: https://github.com/cloudposse/terraform-aws-dynamic-subnets/issues/133
I would @ but I have no idea who the right person is..
Andrew Nazarovover 4 years ago
Hah, thanks for Terrateam, will check it out for sure. It's funny, I came to this channel to ask about best practices of running TF in a CI/CD system and the last message is kinda related to this :)
Nonetheless, let me ask the question anyway to collect some valuable feedback. Say we have a repo with TF code that creates a piece of infrastructure. What would be the safest and most convenient way to make changes and run the code? A natural thing for people with a dev background is to leverage PR/MRs to review changes: to do this we would run terraform plan in a feature branch (MR), and we can even make plan results easily visible via comments or MR widgets. However, things might go wrong - somebody else could make changes to the code in his/her branch and that MR will be reviewed separately. It doesn't include the changes of the first one, and if it's merged first then the plan of the first, probably already approved, MR will be obsolete. That means reviewing the MR doesn't have that much value. We also need to take a look at the plan result in the mainline, therefore we have to apply manually then. And somehow we need a mechanism to ask peers to check out changes. What are the better workflows with and (which is even more interesting:) without additional tooling? How are you running TF code and keeping the confidence?
Hah, Terrateam even uses the word "confidence" in their slogan :)
Aumkar Prajapatiover 4 years ago(edited)
Hey all, does anyone know a good way to migrate worker_groups to node_groups with the terraform-aws-modules/eks/aws Terraform module in an EKS environment? We're looking to move to managed nodes from our unmanaged setup.
Sean Turnerover 4 years ago(edited)
Would someone be willing to test drive the deployment of a serverless single page application I built that deploys via terraform module? Please have a look at the code first as well so you know what is being deployed. github.com/seanturner026/moot.git
One would need go, yarn, and the awscli as the module builds golang lambdas (latest version is better as it uses go modules), builds a vuejs bundle with yarn, and also aws s3 syncs the bundle to the s3 bucket built by the module.
It works on my machine currently (lol), but I want to share this around and want to iron out any deployment kinks first.
It deploys api gateway, ssm parameters, iam roles, dynamodb, cloudfront, cognito. If you uncomment fqdn_alias and hosted_zone_name, you'll also get an ACM cert and can access cloudfront via custom DNS (moot.link in my example as I bought a cheap domain).
module "moot" {
  source                         = "github.com/seanturner026/moot.git"
  name                           = "moot"
  aws_profile                    = "default" // whatever profile you want to use in .aws/config
  admin_user_email               = "email@example.com" // or your email if you want the email with cognito creds
  enable_delete_admin_user       = false
  github_token                   = "42"
  gitlab_token                   = "42"
  slack_webhook_url              = "42"
  # fqdn_alias                   = "moot.link"
  # hosted_zone_name             = "moot.link"
  enable_api_gateway_access_logs = true
  tags                           = {}
}
marc slaytonover 4 years ago
Provider crash. Hey all -- I've recently been working with the latest versions of yaml_stack_config. I've been trying to wire up the account-map component to perform its tfstate.tf lookups using the remote_state modules that come with yaml_stack_config. When I perform the lookups, I get the following message, fairly consistently:
Error: rpc error: code = Unavailable desc = transport is closing
Error: 1 error occurred:
* step "plan cmd": job "terraform subcommand": command "terraform plan -out gbl-master-account-map.planfile -var-file gbl-master-account-map.terraform.tfvars.json" in "./components/terraform/account-map": exit status 1
marc slaytonover 4 years ago(edited)
It feels like the aws provider might have something to do with it. I've tried various combinations of the cloudposse/utils provider: 0.4.3, 0.4.4, v0.6.0 and v0.8.0 with no change in symptoms. The backtraces mostly show healthy operation, but when control is relinquished, the aws provider appears to be dead.
Just curious if anyone else here has seen this phenomenon and knows a workaround, or a better way to find the cause. I've also tried the cloudposse/utils provider and installing from scratch. They all seem to work reasonably well. Alas, debugging terraform execution isn't my strong suit.
2021/05/26 02:06:04 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2021/05/26 02:06:04 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021/05/26 02:06:04 [INFO] backend/local: plan operation completed
2021/05/26 02:06:04 [TRACE] statemgr.Filesystem: removing lock metadata file terraform.tfstate.d/gbl-master/.terraform.tfstate.lock.info
2021/05/26 02:06:04 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate.d/gbl-master/terraform.tfstate using fcntl flock
2021-05-26T02:06:04.172Z [DEBUG] plugin: plugin exited
Error: rpc error: code = Unavailable desc = transport is closing
Michaelover 4 years ago
Hi there. I've got a beginner's question 🙂 I structured my Terraform set up in modules. How can I output data (e.g. the public IP of my EC2 instance) after the deployment with terraform console?
$ terraform apply -auto-approve
...
Plan: 14 to add, 0 to change, 0 to destroy.
...
module.default.module.aws_ec2.aws_instance.debian: Creation complete after 13s [id=i-0000aaaabbbbccccd]
$ terraform console
> module.default.module.aws_ec2.aws_instance.debian.public_ip
╷
│ Error: Unsupported attribute
│
│ on <console-input> line 1:
│ (source code not available)
│
│ This object does not have an attribute named "module".
Mohammed Yahyaover 4 years ago
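Terraform only exposes child-module values that are declared as outputs at each level (the console can't reach into a module's internals), so the usual fix is to chain the value up to the root (a sketch; module and resource names follow the console session above):

```hcl
# inside the aws_ec2 module
output "public_ip" {
  value = aws_instance.debian.public_ip
}

# inside the default module, re-exporting the child's output
output "public_ip" {
  value = module.aws_ec2.public_ip
}

# root module
output "public_ip" {
  value = module.default.public_ip
}
```

After apply, terraform output public_ip (or module.default.public_ip in the console) should resolve.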
Terraformers, anyone test/use Scalr agent on-premise?
marc slaytonover 4 years ago(edited)
Hey all -- quick question about importing resources. I noticed that atmos supports the 'import' command, but I'm not completely sure if it can be used to import resources into a stack component. Is this possible?
mfridhover 4 years ago
When the tools are fighting the engineer's explicit intentions, example #4568:
│ Error: Output refers to sensitive values
│
│ on outputs.tf line 13:
│ 13: output recommended_ecs_ami {
│
│ To reduce the risk of accidentally exporting sensitive data that was intended to be only internal, Terraform requires that any root module output containing sensitive data be explicitly marked as
│ sensitive, to confirm your intent.
│
│ If you do intend to export this data, annotate the output value as sensitive by adding the following argument:
│ sensitive = true
╵
Yes, I want to output it... it's actually not sensitive, just because the source happens to be a publicly available ssm parameter, which I've even explicitly marked as sensitive = false in the sub-module. 😄
ememover 4 years ago
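On Terraform 0.15+, the nonsensitive() function is the escape hatch for exactly this (a sketch; the data source name is assumed from the error above):

```hcl
output "recommended_ecs_ami" {
  # strip the sensitivity marking inherited from the SSM data source --
  # only do this when the value is genuinely public
  value = nonsensitive(data.aws_ssm_parameter.recommended_ecs_ami.value)
}
```

The alternative is to set sensitive = true and live with the value being redacted in plan output.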
hi, has anyone experienced this while creating a zone with the cloudflare module?
Enter a value: yes
module.zone.cloudflare_zone.default[0]: Destroying... [id=68ecf4a68af4f9ee970ce00a6f275064]
module.zone.cloudflare_zone.default[0]: Destruction complete after 1s
module.zone.cloudflare_zone.default[0]: Creating...
Error: Error setting plan free for zone "68ecf4a68af4f9ee970ce00a6f275064": HTTP status 403: Authentication error (10000)
on ../../cloudflare/modules/main.tf line 26, in resource "cloudflare_zone" "default":
26: resource "cloudflare_zone" "default" {
Fabianover 4 years ago(edited)
Hi. We have had terraform validations break twice in the last few months without us changing anything. This is very frustrating. The first time some AWS thing changed somehow. Still not sure about the second time. Has anyone else had similar situations? Are you just spending time fixing or has anyone considered moving away from TF? I think there are benefits to Terraform with AWS, but I don't want to spend engineering time fixing these issues.
Barak Schosterover 4 years ago
Hi everyone, Happy to introduce a new open-source tool to tag and trace IaC (Terraform, Cloudformation, Serverless). We are using it in our CI to have a consistent owner, cost center, trace from code to cloud and other tags that are automatically added to each IaC resource.
Feedback & GitHub ⭐️ are highly welcome 🙌!
Github Repo: https://github.com/bridgecrewio/yor
Blog: https://bridgecrew.io/blog/announcing-yor-open-source-iac-tag-trace-cloud-resources/
Pierre-Yvesover 4 years ago
Hello, is there a way to do some math in terraform? I would like to check if my server name contains an even or odd number, to dynamically set an azure zone (1 if odd, 2 if even).
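Terraform's arithmetic and regex functions cover this (a sketch; var.server_name is assumed to end in digits, e.g. "web03"):

```hcl
locals {
  # pull the trailing digits out of the name
  server_number = tonumber(regex("[0-9]+$", var.server_name))

  # zone 1 for odd numbers, zone 2 for even
  azure_zone = local.server_number % 2 == 0 ? 2 : 1
}
```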
david hoangover 4 years ago
GM! Trying to update a dynamic subnet and having some issues.
Goal - update the current wide open nacl to a stricter nacl
I added resources for creating a public and private network acl.
Then added public_network_acl_id (pointing to the resource id) to the dynamic_subnets module.
Getting error:
count = local.public_network_acl_enabled
any examples on using a created network acl and associating them to the subnets?
Alex Kagnoover 4 years ago
Hi all, trying to leverage this AWS ES module and I can't seem to get access to it... Is there a way from this module to enable open access? I'm trying to secure it with AWS Cognito and it creates without error but I can't seem to turn open access on.
https://github.com/cloudposse/terraform-aws-elasticsearch
Thanks for all the great modules
Alex Kagnoover 4 years ago
module "elasticsearch" {
source = "cloudposse/elasticsearch/aws"
namespace = "main"
stage = "development"
name = "logging"
zone_awareness_enabled = "true"
vpc_enabled = "false"
cognito_authentication_enabled = "true"
cognito_identity_pool_id = aws_cognito_identity_pool.main.id
cognito_user_pool_id = aws_cognito_user_pool.main.id
cognito_iam_role_arn = aws_iam_role.cognito.arn
elasticsearch_version = "7.10"
instance_type = "t3.medium.elasticsearch"
instance_count = 4
ebs_volume_size = 10
encrypt_at_rest_enabled = true
create_iam_service_linked_role = true
domain_hostname_enabled = true
dns_zone_id = data.aws_route53_zone.main.zone_id
domain_endpoint_options_enforce_https = true
custom_endpoint = "aes.${local.r53_zone_name}"
custom_endpoint_certificate_arn = module.acm_aes.acm_certificate_arn
custom_endpoint_enabled = true
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
tags = local.common_tags
}
DevOpsGuyover 4 years ago
Guys, I am trying to use s3 as a backend for storing terraform state files. We have only one account (for QA, STG and PROD) in our aws. I am using GitLab for CICD. Not sure how to store environment-specific state files. Can someone please explain how to use the terraform workspace concept when we have only one aws service account for all the environments?
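With a single account, workspaces just namespace the state keys inside one bucket (a sketch; bucket, table, and key names here are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket               = "my-company-tf-state"  # one bucket for all envs
    key                  = "app/terraform.tfstate"
    region               = "us-east-1"
    dynamodb_table       = "tf-locks"             # optional state locking
    workspace_key_prefix = "env"                  # state lands at env/<workspace>/app/terraform.tfstate
  }
}
```

Then in each GitLab job: terraform workspace select qa || terraform workspace new qa, and reference terraform.workspace in the config to vary resource names per environment.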
marcoscbover 4 years ago
Hello all, I am using the terraform-aws-eks-cluster module 0.38.0 with TF 0.14.11 and kubernetes provider 2.1.0. Although the initial from-scratch deployment works fine, trying to plan the same deployment from another environment (same code and config) fails with the error
"Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused".
Using TF 0.13.7 works fine from everywhere every time. I think it's related to TF 0.14 managing the kubernetes provider dependency on the eks cluster (created in the same apply through data resources) differently than TF 0.13.
Has someone experienced this before?
Is there any plan to split the eks cluster and dependent resources into separate stacks to avoid this in terraform-aws-eks-cluster modules?
Thanks.