joshmyers about 4 years ago
What are people doing to get around https://github.com/hashicorp/terraform/issues/28803 ? "Objects have changed outside of Terraform" in > 1.0.X
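One workaround people use (a sketch, not from the thread; the resource and attribute here are hypothetical) is to ignore the attributes that get mutated out-of-band, or to plan with -refresh=false:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # hypothetical
  instance_type = "t3.micro"

  lifecycle {
    # Suppress "Objects have changed outside of Terraform" noise for
    # attributes changed out-of-band (e.g. by an external tagging process).
    ignore_changes = [tags]
  }
}
```

The blunter alternative is `terraform plan -refresh=false`, which skips the refresh step entirely at the cost of planning against stale state.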
DaniC (he/him) about 4 years ago
Just bumped into https://github.com/boltops-tools/terraspace and thought I should share it in case folks are not aware of it.
Sebastian Macarescu about 4 years ago
Hi team. I've opened a bug here https://github.com/cloudposse/terraform-provider-awsutils/issues/26
For me the awsutils_default_vpc_deletion resource deletes an unknown VPC, then reports that no default VPC was found.
Sebastian Macarescu about 4 years ago
Anybody actually used that resource?
rss about 4 years ago (edited)
v1.1.0
1.1.0 (December 08, 2021)
If you are using Terraform CLI v1.1.0 or v1.1.1, please upgrade to the latest version as soon as possible.
Terraform CLI v1.1.0 and v1.1.1 both have a bug where a failure to construct the apply-time graph can cause Terraform to incorrectly report success and save an empty state, effectively "forgetting" all existing infrastructure. Although configurations that already worked on previous releases should not encounter this problem, it's possible that incorrect future...
rss about 4 years ago (edited)
v1.1.1
1.1.1 (December 15, 2021)
If you are using Terraform CLI v1.1.0 or v1.1.1, please upgrade to the latest version as soon as possible.
Terraform CLI v1.1.0 and v1.1.1 both have a bug where a failure to construct the apply-time graph can cause Terraform to incorrectly report success and save an empty state, effectively "forgetting" all existing infrastructure. Although configurations that already worked on previous releases should not encounter this problem, it's possible that incorrect future...
Alex Jurkiewicz about 4 years ago
hm, v1.1.2 already came out. I guess they edited the release notes to mention the major bug
Brad McCoy about 4 years ago
Hey folks, we have a DevSecOps webinar coming up this week about Terraform and how you can use it safely in pipelines https://www.meetup.com/sydney-hashicorp-user-group/events/283063949/
Almondovar about 4 years ago
Hi colleagues, I need to make AppStream work via Terraform. Do I understand correctly that the only way to do that is to use a 3rd-party provider that is not the official AWS one?
appstream = {
  source  = "arnvid/appstream"
  version = "2.0.0"
}
David about 4 years ago
resource "aws_acm_certificate" "this" {
  for_each                  = toset(var.vpn_certificate_urls)
  domain_name               = each.value
  subject_alternative_names = ["*.${each.value}"]
  certificate_authority_arn = var.certificate_authority_arn
  tags = {
    Name = each.value
  }
  options {
    certificate_transparency_logging_preference = "ENABLED"
  }
  lifecycle {
    create_before_destroy = true
  }
  provider = aws.so
}
Any tips on this?
Error message on certificate in console: ‘The signing certificate for the CA you specified in the request has expired.’
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/acm_certificate
According to the docs above, you can create a cert signed by a private CA by passing the CA arn
Creating a private CA issued certificate
domain_name - (Required) A domain name for which the certificate should be issued
certificate_authority_arn - (Required) ARN of an ACM PCA
subject_alternative_names - (Optional) Set of domains that should be SANs in the issued certificate. To remove all elements of a previously configured list, set this value equal to an empty list ([]) or use the terraform taint command to trigger recreation.
Aziz about 4 years ago
Hello guys - I used the bastion incubator Helm chart from here, deployed it on K8s, and tried to connect to it using the commands below, but I always get permission denied.
• I checked that the GitHub API key is correct
• I checked that the SSH key is in GitHub as well
• I also checked the users created in the github-authorized-keys & bastion containers, from the GitHub team that is configured in the values.yaml file
Is there anything missing from my end? Can you point me somewhere to fix the issue?
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app=bastion-bastion" -o jsonpath="{.items[0].metadata.name}")
echo "Run 'ssh -p 2222 127.0.0.1' to use your application"
kubectl port-forward $POD_NAME 2222:22
➜ ssh -p 2211 azizzoaib786@127.0.0.1
The authenticity of host '[127.0.0.1]:2211 ([127.0.0.1]:2211)' can't be established.
RSA key fingerprint is SHA256:S44NDDfev4x8NCJHMVJgYXrhx4OS/SoYGer5TMGUgqg.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:2211' (RSA) to the list of known hosts.
azizzoaib786@127.0.0.1: Permission denied (publickey).
Frank about 4 years ago
Does anyone know how I can do this using Terraform? I have deployed a Lambda function, created a CF distribution, and associated the origin-request with it. But right now it's giving me a 503 because the "function is invalid or doesn't have the required permissions".
OliverS about 4 years ago
Does anyone use, or has anyone used Ansible enough to shed some light on when (what types of tasks) Ansible would definitely be better than Terraform? Context: cloud, not on-prem.
Basically wondering if I should invest some time learning Ansible. It's yet another DSL and architecture and system to manage (master etc) so there should be a significantly sized set of tasks that are significantly easier to do with it than with Terraform, in order to justify it.
rss about 4 years ago (edited)
v1.1.3
1.1.3 (January 06, 2022)
BUG FIXES:
terraform init: Will now remove from the dependency lock file entries for providers not used in the current configuration. Previously it would leave formerly-used providers behind in the lock file, leading to "missing or corrupted provider plugins" errors when other commands verified the consistency of the installed plugins against the locked plugins. (<a...
Jim Park about 4 years ago
I recently took a stand that our teams should avoid using Terraform to configure Datadog monitors and dashboards (despite having written a module to configure the Datadog - AWS integration).
I'll say more about why in the thread, but the relevant TL;DR is that part of the workflow for configuring Datadog is to contextualize the monitors and dashboards with historical data. Doing so via a manifest doesn't make sense.
What do you think? Do you agree? Have you seen examples where Datadog via code is super useful?
Eric Berg about 4 years ago
I just got this error when trying to plan the complete example of the Spacelift cloud-infrastructure-automation module. What's the preferred way to report this?
│ Error: Plugin did not respond
│
│ with module.example.module.yaml_stack_config.data.utils_spacelift_stack_config.spacelift_stacks,
│ on .terraform/modules/example.yaml_stack_config/modules/spacelift/main.tf line 1, in data "utils_spacelift_stack_config" "spacelift_stacks":
│ 1: data "utils_spacelift_stack_config" "spacelift_stacks" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain
│ more details.
╵
Stack trace from the terraform-provider-utils_v0.17.10 plugin:
panic: interface conversion: interface {} is nil, not map[interface {}]interface {}
goroutine 55 [running]:
github.com/cloudposse/atmos/pkg/stack.ProcessConfig(0xc000120dd0, 0x6, 0xc00011e348, 0x17, 0xc000616690, 0x100, 0x0, 0x0, 0xc00034bcf0, 0xc00034bd20, ...)
	github.com/cloudposse/atmos@v1.3.12/pkg/stack/stack_processor.go:276 +0x42ad
github.com/cloudposse/atmos/pkg/stack.ProcessYAMLConfigFiles.func1(0xc000120dc0, 0x0, 0x0, 0xc0005130e0, 0x1040100, 0xc0005130d0, 0x1, 0x1, 0xc00050d0e0, 0x0, ...)
	github.com/cloudposse/atmos@v1.3.12/pkg/stack/stack_processor.go:72 +0x3f9
created by github.com/cloudposse/atmos/pkg/stack.ProcessYAMLConfigFiles
	github.com/cloudposse/atmos@v1.3.12/pkg/stack/stack_processor.go:39 +0x1a5
Error: The terraform-provider-utils_v0.17.10 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
jonjitsu about 4 years ago
Anyone know of a working json2hcl2 tool? I've tried kvx/json2hcl but it's hcl1.
Chris Fowles about 4 years ago (edited)
jsondecode() ?
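For example, a minimal sketch of the jsondecode() route (the config.json path and key are made up):

```hcl
locals {
  # Parse an arbitrary JSON file into native Terraform values.
  settings = jsondecode(file("${path.module}/config.json"))
}

output "instance_type" {
  # Assumes config.json contains something like {"instance_type": "t3.micro"}
  value = local.settings.instance_type
}
```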
loren about 4 years ago
Heh, pretty much, just don't. If it's in json, and you want tf to process it, just save the file extension as .tf.json...
Almondovar about 4 years ago (edited)
Hi colleagues, I am working on terraforming AppStream on AWS, but I can't find anywhere in the Terraform code the lines for the AppStream image registry. Am I missing something?
michael sew about 4 years ago (edited)
I have a terraform workspace upgrade question. Using self-hosted Terraform Enterprise. My workspace is tied to Github VCS (no jenkins or other CI), and set to remote execution, not local. That means the workspace will use whatever tf version is config'd (ie. 0.12.31).
I'm trying to upgrade to 0.13
However, I'm trying to test whether the plan using the new (0.13) binary runs clean or not. Am I doing this right?
tfenv use 0.12.31
terraform plan
# it makes the workspace run the plan.. using version 0.12.31
tfenv list-remote | grep 0.13
0.13.6
tfenv install 0.13.6
tfenv use 0.13.6
terraform init
terraform 0.13upgrade
terraform plan
# it STILL runs the workspace's terraform version, 0.12.31!! NOT my local 0.13 binary.
^^ terraform plans whatever version is set by the workspace. Do I have any other options to test the upgrade locally?
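With remote execution the CLI always defers to the workspace's pinned version, so one hedged option, assuming you manage workspaces with the tfe provider (the names here are placeholders), is to point a throwaway workspace at 0.13, or flip it to local execution:

```hcl
resource "tfe_workspace" "upgrade_test" {
  name              = "my-app-tf013-test" # hypothetical clone of the real workspace
  organization      = "my-org"            # placeholder
  terraform_version = "0.13.6"

  # Alternatively, execution_mode = "local" keeps state in TFE but lets
  # your local binary run the plan.
}
```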
Florin Andrei about 4 years ago
Using git::git@github.com:cloudposse/terraform-aws-msk-apache-kafka-cluster.git?ref=tags/0.6.0
I need to enable monitoring on Kafka (AWS MSK). While running
terragrunt plan I noticed the server_properties from the aws_msk_configuration resource was going to be deleted. I will try to figure out whether the properties were added later manually, or whether those are some defaults by the module or by AWS itself, etc.
But if anyone knows what’s the default behavior of this module, and what are the best practices for the server properties, that would be useful to know.
Question 2 (related): Let’s say I decide it’s best to freeze server_properties in our Gruntwork templates. AWS MSK appears to store that variable as a plain text file, one key/value per line, no spaces, which I can retrieve with AWS CLI and base64 --decode it:
auto.create.topics.enable=true
default.replication.factor=3
min.insync.replicas=2
...
I’ve tried to pass all that as a Terraform map into inputs / properties for the Kafka cluster module, but it gets deleted / rewritten with spaces by terragrunt plan:
auto.create.topics.enable = true
default.replication.factor = 3
min.insync.replicas = 2
...
I’ve tried to include it as a template file, with the contents of the template just literally the properties file from AWS:
properties = templatefile("server.properties.tpl", { })
But then I get this error:
Error: Extra characters after expression
on <value for var.properties> line 1:
(source code not available)
An expression was successfully parsed, but extra characters were found after it.
What is a good way to force terragrunt to inject that variable into AWS exactly as I want it?
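One hedged idea, since the underlying aws_msk_configuration resource takes server_properties as a raw string rather than a map: pass the file contents through untouched. The file name here is an assumption:

```hcl
resource "aws_msk_configuration" "this" {
  name           = "example" # hypothetical
  kafka_versions = ["2.8.1"]

  # file() preserves the key=value lines byte-for-byte, so nothing gets
  # rewritten with spaces the way a rendered Terraform map would be.
  server_properties = file("${path.module}/server.properties")
}
```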
Ryan about 4 years ago
Most organizations have at least one of these infrastructure problems. How are you solving them?
-Broken Modules Tearing Down Your Configurations
-Drifting Away From What You Had Defined
-Lack of Security & Compliance
-Troublesome Collaboration
-Budgets Out of Hand
Ben Dubuisson about 4 years ago
Hi ! Using https://github.com/cloudposse/terraform-aws-sso/ .
It creates IAM roles through permission sets, and I wonder if anybody has figured out how to get access to the IAM role name (I want to save it as an SSM parameter).
It seems to follow the pattern:
AWSReservedSSO_{permissionSetName}_{someRandomHash}
Jens Lauterbach about 4 years ago
Hi 👋 ,
I started looking at the Terraform AWS EC2 Client VPN module and got everything deployed based on the complete example. That worked well so far. I downloaded the client configuration and imported it in the OpenVPN client (which should be supported based on the AWS documentation).
But that’s when my luck runs out. I can’t connect to the VPN and the client provides following error:
Transport Error: DNS resolve error on 'cvpn-endpoint-.....prod.clientvpn.eu-central-1.amazonaws.com' for UDP session: Host not found.
So this appears to be a “networking issue”. My computer can’t resolve the endpoint address. So it appears I missed something in my VPN setup?
Any suggestions what I might be doing wrong?
Matt Gowie about 4 years ago
Pretty interesting email that just lit up all of my inboxes from AWS on a Terraform provider fix… I’m very surprised they went this far to announce a small fix. I wonder if it is bogging down their servers for some reason.
Hello,
You are receiving this message because we identified that your account uses Hashicorp Terraform to create and update Lambda functions. If you are using the V2.x release with version V2.70.1, or V3.x release with version V3.41.0 or newer of the AWS Provider for Terraform you can stop reading now.
As notified in July 2021, AWS Lambda has extended the capability to track the state of a function through its lifecycle to all functions [1] as of November 23, 2021 in all AWS public, GovCloud and China regions. Originally, we informed you that the minimum version of the AWS Provider for Terraform that supports states (by waiting until a Lambda function enters an Active state) is V2.40.0. We recently identified that this version had an issue where Terraform was not waiting until the function enters an Active state after the function code is updated. Hashicorp released a fix for this issue in May 2021 via V3.41.0 [2] and back-ported it to V2.70.1 [3] on December 14, 2021.
If you are using V2.x release of AWS Provider for Terraform, please use V2.70.1, or update to the latest release. If you are using V3.x version, please use V3.41.0 or update to the latest release. Failing to use the minimum supported version or latest can result in a ‘ResourceConflictException’ error when calling Lambda APIs without waiting for the function to become Active.
If you need additional time to make the suggested changes, you can delay states change for your functions until January 31, 2022 using a special string (aws:states:opt-out) in the description field when creating or updating the function. Starting February 1, 2022, the delay mechanism expires and all customers see the Lambda states lifecycle applied during function create or update. If you need additional time beyond January 31, 2022, please contact your enterprise support representative or AWS Support [4].
To learn more about this change refer to the blog post [5]. If you have any questions, please contact AWS Support [4].
[1] https://docs.aws.amazon.com/lambda/latest/dg/functions-states.html
[2] https://newreleases.io/project/github/hashicorp/terraform-provider-aws/release/v3.41.0
[3] https://github.com/hashicorp/terraform-provider-aws/releases/tag/v2.70.1
[4] https://aws.amazon.com/support
[5] https://aws.amazon.com/blogs/compute/coming-soon-expansion-of-aws-lambda-states-to-all-functions
Sincerely,
Amazon Web Services
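For reference, the minimum-version guidance in the email maps to a version constraint like this (a sketch; pin whatever exact version you have tested):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # v3.41.0 is the first 3.x release that waits for Lambda functions
      # to reach the Active state, per the email above.
      version = ">= 3.41.0"
    }
  }
}
```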
J Norment about 4 years ago
Does anyone know how I would be able to determine what version of TLS is used by TF when making calls to AWS APIs?
Raim about 4 years ago
Hello everyone and good evening.
I'm trying to set up https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account for a VPC peering between two regions in a single AWS account.
module "vpc_peering" {
  source  = "cloudposse/vpc-peering-multi-account/aws"
  version = "0.17.1"

  namespace = var.namespace
  stage     = var.stage
  name      = var.name

  requester_vpc_id                          = var.requester_vpc_id
  requester_region                          = var.requester_region
  requester_allow_remote_vpc_dns_resolution = true
  requester_aws_assume_role_arn             = aws_iam_role.vpc_peering_requester_role.arn
  requester_aws_profile                     = var.requester_profile

  accepter_enabled                         = true
  accepter_vpc_id                          = var.accepter_vpc
  accepter_region                          = var.accepter_region
  accepter_allow_remote_vpc_dns_resolution = true
  accepter_aws_profile                     = var.accepter_profile

  requester_vpc_tags = {
    "Primary" = false
  }
  accepter_vpc_tags = {
    Primary = true
  }
}
This is how I'm defining the module right now.
I've run terraform init with no problems, but when I try to create a plan I get:
│ Error: no matching VPC found
│
│ with module.vpc_peering_west_east.module.vpc_peering.data.aws_vpc.accepter[0],
│ on .terraform/modules/vpc_peering_west_east.vpc_peering/accepter.tf line 43, in data "aws_vpc" "accepter":
│ 43: data "aws_vpc" "accepter" {
╷
│ Error: no matching VPC found
│
│ with module.vpc_peering_west_east.module.vpc_peering.data.aws_vpc.requester[0],
│ on .terraform/modules/vpc_peering_west_east.vpc_peering/requester.tf line 99, in data "aws_vpc" "requester":
│ 99: data "aws_vpc" "requester" {
The module is:
module "vpc_peering_west_east" {
  source = "../modules/vpc_peering"

  namespace = "valley"
  stage     = terraform.workspace
  name      = "valley-us-east-2-to-us-west-2-${terraform.workspace}"

  accepter_vpc     = "vpc-id-1"
  accepter_region  = "us-west-2"
  accepter_profile = "valley-prod-us-west-2"

  requester_vpc_id  = "vpc-id-2"
  requester_region  = "us-east-2"
  requester_profile = "valley-prod-us-east-2"

  vpc_peering_requester_role_name = "valley-us-west-2-to-us-east-2-${terraform.workspace}"
}
terraform version output is:
Terraform v1.1.3
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.68.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
Both VPCs exist, and if I try a simple data block they are detected with the IDs. What am I missing, and perhaps what have I not read about this? Thank you beforehand for any help.
The referenced profiles do exist, and they are the ones under which the existing infrastructure was created in the respective regions.
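"no matching VPC found" usually means the data source ran with the wrong credentials or region, so one way to narrow it down (a sketch; the alias and variables are placeholders) is to reproduce the module's lookup with an explicitly aliased provider:

```hcl
provider "aws" {
  alias   = "accepter"
  region  = var.accepter_region
  profile = var.accepter_profile
}

# If this standalone lookup succeeds where the module's does not, the module
# is likely resolving a different provider configuration (for example the
# assume-role ARN rather than the profile you expect).
data "aws_vpc" "accepter_check" {
  provider = aws.accepter
  id       = var.accepter_vpc
}
```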
DevOpsGuy about 4 years ago
Hi all, I am trying to install a MySQL database on Windows Server 2016 (64-bit) using Terraform. This is not going to be RDS. I am not sure where to start on how to install MySQL on a Windows Server 2016 (64-bit) EC2 instance in AWS using Terraform. Can someone provide me some insight?
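Terraform itself only provisions the instance; the MySQL install has to happen through user data, a pre-baked AMI, or SSM. A rough sketch using PowerShell user data (the AMI ID is a placeholder, and bootstrapping Chocolatey first is an assumption):

```hcl
resource "aws_instance" "mysql" {
  ami           = "ami-0123456789abcdef0" # placeholder Windows Server 2016 AMI
  instance_type = "t3.large"

  # Runs once at first boot: installs Chocolatey, then MySQL.
  user_data = <<-EOT
    <powershell>
    Set-ExecutionPolicy Bypass -Scope Process -Force
    iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
    choco install mysql -y
    </powershell>
  EOT
}
```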
Jas Rowinski about 4 years ago
Hi, I was wondering what the official process is to add enhancements to Cloudposse git repos? I wanted to enable ebs_optimized on your eks_node_group, but it seems you require a fork, branch -> PR? First time wanting to contribute to a public repo so not sure if this is a standard way of doing it.
Is it possible to become a contributor or is this the only way to handle updates from the community?
Matt Gowie about 4 years ago
Sad to see this conversation in the sops repo, considering it's such an essential tool for at least my own Terraform workflow. Wanted to bring it up here to get more eyes on it, and to see if anyone knows folks at Mozilla so they can bug them.
https://github.com/mozilla/sops/discussions/927
Shilpa about 4 years ago
Hi everyone, I'm a newbie to Terraform and have started doing GitHub administration through Terraform. I am creating repos and setting all the required config. Now I want to upload files present in my working directory, or fetched from a different repo/S3, into the newly created repo. Any pointers to achieve this? Thank you 🙂
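For the local-files case, a sketch using the GitHub provider's github_repository_file resource (the repo name and paths are made up):

```hcl
resource "github_repository" "example" {
  name = "my-new-repo" # hypothetical
}

# Pushes a local file into the newly created repo.
resource "github_repository_file" "readme" {
  repository = github_repository.example.name
  branch     = "main"
  file       = "README.md"
  content    = file("${path.module}/files/README.md")
}
```

For the S3 case you could feed content from an S3 object data source instead of file().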
Jonás Márquez about 4 years ago
Hi everyone! I am trying to use the terraform-null-label module and I get an error with the map function. Terraform recommends the use of tomap, but I have been doing tests passing the keys and values in various ways and I can't get it to work. Has anyone had the same problem? I leave an example of the error; thanks in advance to all!
│ Error: Error in function call
│
│ on .terraform/modules/subnets/private.tf line 8, in module "private_label":
│ 8: map(var.subnet_type_tag_key, format(var.subnet_type_tag_value_format, "private"))
│ ├────────────────
│ │ var.subnet_type_tag_key is a string, known only after apply
│ │ var.subnet_type_tag_value_format is a string, known only after apply
│
│ Call to function "map" failed: the "map" function was deprecated in Terraform v0.12 and is no longer available; use tomap({ ...
│ }) syntax to write a literal map.
╵
╷
│ Error: Error in function call
│
│ on .terraform/modules/subnets/public.tf line 8, in module "public_label":
│ 8: map(var.subnet_type_tag_key, format(var.subnet_type_tag_value_format, "public"))
│ ├────────────────
│ │ var.subnet_type_tag_key is a string, known only after apply
│ │ var.subnet_type_tag_value_format is a string, known only after apply
│
│ Call to function "map" failed: the "map" function was deprecated in Terraform v0.12 and is no longer available; use tomap({ ...
│ }) syntax to write a literal map.
Jim Park about 4 years ago (edited)
Did you start a terragrunt run, but then spam Ctrl + C out of cowardice, like me?
Of course you didn't, because you know you'd end up with a bajillion locks. If you need some guidance to clearing those locks on dynamodb, reference this gist.
greg n about 4 years ago
Heya, we’ve got all our terraform in a
Does pre-commit does support passing args to tflint like:
Looking at this, I’m guessing not ?
https://github.com/gruntwork-io/pre-commit/blob/master/hooks/tflint.sh#L14
Thanks
./terraform repo subdirectory.Does pre-commit does support passing args to tflint like:
repos:
  - repo: https://github.com/gruntwork-io/pre-commit
    rev: v0.1.17 # Get the latest from: https://github.com/gruntwork-io/pre-commit/releases
    hooks:
      - id: tflint
        args:
          - "--config ./terraform/.tflint.hcl"
          - "./terraform"
Looking at this, I’m guessing not ?
https://github.com/gruntwork-io/pre-commit/blob/master/hooks/tflint.sh#L14
Thanks
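If the hook script ignores args, one fallback (a sketch; nothing here is from the Gruntwork hook itself) is a .tflint.hcl placed where tflint will find it, so no CLI flags are needed:

```hcl
# ./terraform/.tflint.hcl -- hypothetical config; tflint reads this file
# from the directory it runs in, avoiding the need to pass --config.
config {
  module = true
}

rule "terraform_unused_declarations" {
  enabled = true
}
```

Running tflint from inside ./terraform (e.g. via a custom local pre-commit hook) then picks the config up automatically.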
Zeeshan S about 4 years ago
What's the best CI/CD pipeline for Terraform these days?
Zeeshan S about 4 years ago
mostly for provisioning aws
Jas Rowinski about 4 years ago
Was wondering how people have set up their EKS clusters when it comes to node groups (EKS managed or self-managed). I'm running EKS managed, but trying to find a way to achieve mixed_instances_policy when it comes to SPOT & ON_DEMAND instance types.
Using the Cloudposse node group module currently. But after reviewing others, it seems that mixed_instances_policy can only be done via self-managed. Is that correct or am I missing something?
Looking at this module, they offer 3 different strategies when it comes to node groups and mixed instances. But like I said before, it seems to be only self-managed. Anyone else manage to get this to work with EKS managed node groups?
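As far as I can tell that's right: managed node groups expose a single capacity_type rather than a mixed_instances_policy, so the common workaround is one managed group per capacity type. A sketch with the raw resource (all names, ARNs and IDs are placeholders):

```hcl
resource "aws_eks_node_group" "spot" {
  cluster_name    = "my-cluster" # placeholder
  node_group_name = "workers-spot"
  node_role_arn   = "arn:aws:iam::123456789012:role/eks-node" # placeholder
  subnet_ids      = ["subnet-aaa", "subnet-bbb"]              # placeholders

  capacity_type  = "SPOT" # the managed-node-group knob; no mixed policy
  instance_types = ["m5.large", "m5a.large"]

  scaling_config {
    desired_size = 2
    max_size     = 5
    min_size     = 1
  }
}
```

A second aws_eks_node_group with capacity_type = "ON_DEMAND" covers the baseline capacity.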
aimbotd about 4 years ago (edited)
Hey friends. I'm running into an odd issue. I can run a terraform plan successfully from my user. I cannot run it from the user in our pipeline, who has the same permissions/policies. That pipeline user keeps hitting this error, but I have no idea why. This is deployed with cloudposse/eks-cluster/aws@0.43.2.
This cluster was originally deployed with this pipeline user as well.
module.eks_cluster.aws_iam_openid_connect_provider.default[0]: Refreshing state... [id=arn:aws:iam::<REDACTED>:oidc-provider/oidc.eks.<REDACTED>.amazonaws.com/id/<REDACTED>]
module.eks_cluster.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
╷
│ Error: Get "https://<REDACTED>.gr7.<REDACTED>.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps/aws-auth": getting credentials: exec: executable aws failed with exit code 255
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth[0],
│ on .terraform/modules/eks_cluster/auth.tf line 132, in resource "kubernetes_config_map" "aws_auth":
│ 132: resource "kubernetes_config_map" "aws_auth" {
Kartik V about 4 years ago (edited)
Hello everyone, I am trying to achieve VPC cross-region peering using Terragrunt and need some ideas/suggestions on how to use a provider alias in the root terragrunt.hcl.
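One common pattern (a sketch; the regions and file name are placeholders) is to have the root terragrunt.hcl generate the aliased providers for each module:

```hcl
# root terragrunt.hcl
generate "providers" {
  path      = "providers_gen.tf"
  if_exists = "overwrite"
  contents  = <<-EOT
    provider "aws" {
      region = "us-east-1"
    }

    # Aliased provider for the peer region; peering resources reference
    # it as provider = aws.peer
    provider "aws" {
      alias  = "peer"
      region = "us-west-2"
    }
  EOT
}
```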
chris about 4 years ago (edited)
I am trying to get my copy of reference-architectures up and running again — I know it is outdated but we already have 1 architecture built using it and we need another, and the team wants them to be consistent — I believe I have everything resolved except that template_file is not available for the M1 (at least as far as I can find), so I need to update:
data "template_file" "data" {
  count = "${length(keys(var.users))}"
  # this path is relative to repos/$image_name
  template = "${file("${var.templates_dir}/conf/users/user.tf")}"
  vars = {
    resource_name    = "${replace(element(keys(var.users), count.index), local.unsafe_characters, "_")}"
    username         = "${element(keys(var.users), count.index)}"
    keybase_username = "${element(values(var.users), count.index)}"
  }
}
resource "local_file" "data" {
  count    = "${length(keys(var.users))}"
  content  = "${element(data.template_file.data.*.rendered, count.index)}"
  filename = "${var.output_dir}/overrides/${replace(element(keys(var.users), count.index), local.unsafe_characters, "_")}.tf"
}
I think I have to use templatefile() but can't figure out how to re-write it. Thanks in advance.
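A possible rewrite, assuming var.users is a map of username => keybase_username: templatefile() can render the old template directly inside the local_file resource, so the data source goes away entirely.

```hcl
resource "local_file" "data" {
  for_each = var.users

  # templatefile() renders the same template the template_file data
  # source used, with each map entry supplying the vars.
  content = templatefile("${var.templates_dir}/conf/users/user.tf", {
    resource_name    = replace(each.key, local.unsafe_characters, "_")
    username         = each.key
    keybase_username = each.value
  })

  filename = "${var.output_dir}/overrides/${replace(each.key, local.unsafe_characters, "_")}.tf"
}
```

for_each over the map replaces the count/element() indexing, which also makes the plan stable when users are added or removed.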
aimbotd about 4 years ago
How does one use the kubernetes_taints variable? I'm trying the following but to no success:
│ The given value is not suitable for child module variable "kubernetes_taints" defined at .terraform/modules/eks_node_group/variables.tf:142,1-29: map of string required.
kubernetes_taints = [
  {
    key    = "foo.gg/emr"
    effect = "NO_SCHEDULE"
    value  = null
  }
]
Danish Kazi about 4 years ago (edited)
Hello, I am trying to create custom modules for mongodbatlas using the verified mongodbatlas provider https://registry.terraform.io/providers/mongodb/mongodbatlas/1.2.0 but I get the below error on "terraform init":
Initializing provider plugins...
- Finding latest version of hashicorp/mongodbatlas...
- Reusing previous version of mongodb/mongodbatlas from the dependency lock file
- Using previously-installed mongodb/mongodbatlas v1.2.0
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/mongodbatlas: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/mongodbatlas
│
│ Did you intend to use mongodb/mongodbatlas? If so, you must specify that source address in each module which requires that provider. To see which modules are currently
│ depending on hashicorp/mongodbatlas, run the following command:
│ terraform providers
Is this because mongodbatlas provider is a "verified" provider and not a "published" provider ?
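Verified vs. published is likely not the cause: the "hashicorp/mongodbatlas" in the error means at least one module references the provider without a source address, so Terraform falls back to the hashicorp namespace. A sketch of the usual fix, added to every module (including the custom ones) that uses the provider:

```hcl
terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~> 1.2"
    }
  }
}
```

Provider requirements are not inherited by child modules, which is why each module needs its own block.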
rssabout 4 years ago(edited)
v1.1.4
1.1.4 (January 19, 2022)
BUG FIXES:
config: Non-nullable variables with null inputs were not given default values when checking validation statements (#30330)
config: Terraform will no longer incorrectly report "Cross-package move statement" when an external package has changed a resource from no count to using count, or...
Brij Sabout 4 years ago
Hi all, has anyone attempted to set WARM_IP_TARGET when using the terraform-eks module? I'm having a tough time finding a way to set that up
Almondovarabout 4 years ago(edited)
hi colleagues, a few weeks ago I found a tool somewhere that detects infrastructure missing from Terraform and gives output like "50% terraformed" - but I can't recall its name, any ideas? (it's not driftctl)
andylampabout 4 years ago
hi there! I am trying to create an additional authorisation rule when using the terraform-aws-ec2-client-vpn repo. I try to create an authorization rule as per the example, like so:

```hcl
authorization_rules = [{
  name                 = "Authorise ingress traffic"
  access_group_id      = "-"
  authorize_all_groups = true
  description          = "Authorisation for traffic using this VPN to VPC resources"
  target_network_cidr  = "0.0.0.0/0"
}]
```

However, this creates an error saying "access_group_id": only one of `access_group_id,authorize_all_groups` can be specified, but `access_group_id,authorize_all_groups` were specified. If I remove access_group_id it complains that it is required 🙂 - does anyone know how to resolve this issue?
Isaacabout 4 years ago
Has anyone used the control tower account factory for terraform? I'm about to explore setting up multi-account AWS environments. HashiCorp Teams with AWS on New Control Tower Account Factory for Terraform
Vucomir Ianculovabout 4 years ago
Hey, I'm using terraform-aws-cloudfront-cdn and trying to use an Origin Access Identity for my default origin, is it possible?
andylampabout 4 years ago
hi there again 🙂 - are there any concrete examples of ConfigMaps to use in https://registry.terraform.io/modules/cloudposse/eks-cluster/aws? like for example adding new users etc?
Tyler Pickettabout 4 years ago
Howdy everyone, I opened a PR against terraform-aws-ecs-web-app and realized that there was probably some process that I missed. Is there a process for contributing documented somewhere?
Tyler Pickettabout 4 years ago
It looks like my contributing question has been asked/answered before here but that is old enough that the history has been truncated
michael sewabout 4 years ago
RDS question: what does everybody include for their lifecycle ignore_changes block? Some considerations:
• DB engine version: we don't enable auto minor version upgrade, as we want to control outages.
• Storage allocation: we enable auto scaling, so… don't manage size with Terraform?
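For reference, a minimal sketch of a lifecycle block matching those two bullets (attribute names are from the aws_db_instance resource; adjust to your own resource and policy):

```hcl
resource "aws_db_instance" "example" {
  # ... instance configuration ...

  lifecycle {
    ignore_changes = [
      engine_version,    # minor upgrades applied out-of-band, not by Terraform
      allocated_storage, # storage autoscaling manages this
    ]
  }
}
```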
Mike Croweabout 4 years ago
Can somebody point me to how to use an http backend state with atmos? I'm trying:

```yaml
terraform:
  vars: {}
  remote_state_backend_type: http
  remote_state_backend:
    http:
      config:
        address: https://gitlab.com/api/v4/projects/...
        lock_address: https://gitlab.com/api/v4/projects/.../lock
        unlock_address: https://gitlab.com/api/v4/projects/.../lock
        username: ...
        password: ...
        lock_method: POST
        unlock_method: DELETE
        retry_wait_min: 5
```

but that doesn't seem to work
Kevin Kennyabout 4 years ago(edited)
I am new to TFE; I am currently working on automating TFE workspaces, including adding the env variables. Any source code I can take as an example?
I saw https://github.com/cloudposse/terraform-tfe-cloud-infrastructure-automation, where they mention how to add env vs regular variables to the workspace, but I'm not sure about it.
michael sewabout 4 years ago
Terraform question: how do I get the first/single item out of a set? I see the docs show the one() function, but that's 0.15+ only. I was hoping for a more general/backwards-compatible method (I'm on 0.14)
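One backwards-compatible option that works on 0.14: convert the set to a list and index it (var.my_set below is a placeholder name):

```hcl
locals {
  # tolist() + [0] predates one(); unlike one(), it does NOT error
  # when the set has more than one element - it just takes the first.
  first_item = tolist(var.my_set)[0]
}
```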
JBabout 4 years ago
Having some issues reconciling the account-map and the account components from the terraform-aws-components repo. It looks like PR #363 updated account-map to a version not compatible with the currently published account component. In particular I am trying to work out how to reverse engineer the required account_info_map output and would greatly appreciate a nudge in the right direction ;)
Bhavik Patelabout 4 years ago
anyone have any examples on how to make certain resources within a module optional?
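The usual pattern is a boolean variable gating the resource via count; a sketch (the variable and resource names here are made up for illustration):

```hcl
variable "create_bucket" {
  type    = bool
  default = true
}

# Created only when create_bucket is true. Downstream references must
# use the index form, e.g. aws_s3_bucket.this[0].id, or a splat.
resource "aws_s3_bucket" "this" {
  count  = var.create_bucket ? 1 : 0
  bucket = var.bucket_name
}
```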
Zachary Loeberabout 4 years ago
Taking a peek at Atmos after a long break (in IT time, so like a few months heh) and I'm totally impressed by a very focused and well thought out implementation of a declarative manifest processing engine. I'm curious, was there ever any thought of using a config library like Cuelang to better manage schema level updates and changes?
tamskyabout 4 years ago
What is Cloudposse's current "top pick" for centralized terraform operations/collaboration?
Atlantis / Scalr / Spacelift / TF Cloud/Enterprise / Something-else?
Muhammad Badawyabout 4 years ago
Hello, I'm trying to understand Cloudposse Terraform components and how they are connected/invoked: the modules, labels, contexts, etc.
Any high-level explanation/architecture?
Thanks in advance
Mike Croweabout 4 years ago
How would you integrate a root Route53 domain where you purchased the domain in AWS (and thus the Route53 zone was auto-created)? I'm trying to understand how to wire that up to dns-primary (or more specifically dns-delegated).
IKabout 4 years ago
Hey guys.. does anyone have a way to propagate the TGW attachment Name tag to the TGW owner account? We are creating TGW attachments using TF in spoke accounts, which is fine, however it would be great to have the attachment in the TGW owner account also named and tagged appropriately. Cheers!
Nick Kocharhookabout 4 years ago(edited)
I’m attempting to use ecs-web-app to set up a Fargate instance which handles traffic from the internet. When I attempt to apply on Terraform Cloud, I get this error:

```
Error: error creating ECS service (ocs-staging-myproj): InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-east-2:xxxxxxxxxxxx:targetgroup/ocs-staging-osprey/721e8ee36076b407 does not have an associated load balancer.

  with module.project_module.module.web_app.module.ecs_alb_service_task.aws_ecs_service.ignore_changes_task_definition[0]
  on .terraform/modules/project_module.web_app.ecs_alb_service_task/main.tf line 326, in resource "aws_ecs_service" "ignore_changes_task_definition":
```

And when I look on AWS, I can see two target groups: ocs-staging-default and ocs-staging-myproj. The first has a load balancer of “ocs-staging”, and the second indeed says “None associated.” The code I’m using is in the thread >>
Marceloabout 4 years ago
Hi Guys … this is my first message here… Please I would like to know if what I am trying to do is possible using just Terraform…
I am trying to create an ECR module and get the repository_url output to use as a variable in the Task Definition module. The task_definition container_image variable trying to get the ECR output :/

```hcl
container_image = module.ecr_container_tiktok_api.ecr_tiktok_api.*.repository_url
```

I am using an AWS environment.
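If the ECR module declares repository_url as an output, the wiring is a plain module-output reference; a sketch (module paths and names below are assumptions based on the message):

```hcl
# modules/ecr/outputs.tf (inside the ECR module)
output "repository_url" {
  value = aws_ecr_repository.this.repository_url
}

# root module: pass the output straight into the task-definition module
module "task_definition" {
  source          = "./modules/task_definition"
  container_image = module.ecr_container_tiktok_api.repository_url
}
```

The `.*` splat form in the original is only needed if the ECR repository uses count; for a single repository a direct output reference is simpler.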
mrwackyabout 4 years ago
Is it correct to say that Terraform state format hasn't changed between 0.14 and 1.1? The 0.14 upgrade guide says:
Terraform v0.14 does not support legacy Terraform state snapshot formats from prior to Terraform v0.13, so before upgrading to Terraform v0.14 you must have successfully run terraform apply at least once with Terraform v0.13 so that it can complete its state format upgrades.
But newer versions are silent on this issue.