terraform
M
Maksym Vlasov42 minutes ago
Hi, can anyone take a look? Simple one
https://github.com/cloudposse/terraform-aws-rds-cluster/pull/284
R
RB9 days ago
Is there a private ca terraform module/component? I swear I had found one before in the cloudposse/ or cloudposse-terraform-components/ orgs and now I cannot find it
T
Tamilkumar16 days ago(edited)
Thanks for connecting me
Can anybody help me figure out why RabbitMQ version 3.13 is not listing the instance type t3.micro in the console and AWS CLI, and instead we need to choose mq.m5.large, even though the AWS documentation says the instance type is supported?
M
Marcin Brański19 days ago
supply chain attack targeting the official checkmarx/kics Docker Hub repository
M
Marcin Brański19 days ago
watch out if you use KICS
https://cybersecuritynews.com/checkmarx-kics-compromised/
M
managedkaos20 days ago
Email received today… 🤔
Your account has now been migrated from the legacy Free plan to HCP Terraform Free. This transition is complete, and you can continue using the platform on the Free tier at no cost.
HCP Terraform Free is built to support modern infrastructure teams with stronger security, governance, and collaboration capabilities, while incurring no cost to you for up to 500 managed resources per month.
Now that you're on HCP Terraform Free, you can take advantage of:
• Unlimited users - Invite your entire team and collaborate without seat limits
• Single sign-on (SSO) - Secure access using your organization's identity provider
• Policy enforcement - Maintain governance and consistency across infrastructure changes
• Run tasks - Integrate security, compliance, and other tools directly into your Terraform workflows
• Stacks - Simplify provisioning and managing resources at scale, reducing the time and overhead of managing infrastructure
G
Gerry Laracuente23 days ago
Hey folks 👋
I'm working on an interesting problem and I feel like someone out there must have run into this before:
Problem/Background
• I'm currently assuming an AWS role to access an S3 bucket for backend state as follows:
terraform {
  backend "s3" {
    ...
    assume_role = {
      role_arn = var.backend_assume_role_arn
    }
    ...
  }
}
• This role has permissions to read and write to the S3 bucket, and I'd like to scope down permissions so that I use a read-only role during init and plan phases, and switch to a read-write role for the apply phase.
Where I'm stuck
• I can swap in the read-write role ARN before running the apply phase, but the problem I run into is that since the backend config changes, an init -reconfigure is required here.
• If I run tofu init -reconfigure before the apply, this leads to Error: Inconsistent dependency lock file
• I've also attempted tofu init -reconfigure -lockfile=readonly, but that leads to Error: Provider dependency changes detected
• At this point, my only option is to run another tofu plan, which is out of the question.
An alternative
• I can remove the assume_role from the backend block entirely.
• This allows me to assume a read-only role to begin with (in my terminal, pipeline, etc), and I can init and plan with it
• Then I can assume a read-write role before running the apply
• This does work, but I'm reaching out for other ideas here. Ideally, I don't want to have to switch roles in my terminal or pipeline. The pattern I'm trying to maintain is a single AWS role that can assume the read or write roles to the S3 bucket backend during init/plan vs apply.
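One hedged option for the setup described above (a sketch, not a tested pattern; the file names, bucket, and role ARN are all hypothetical): keep assume_role out of the versioned backend block and supply it per phase via partial backend configuration. This doesn't remove the need for an init -reconfigure when switching roles, but it keeps role selection out of code and out of the shell environment:

```hcl
# backend.tf -- no role baked into versioned config
terraform {
  backend "s3" {
    bucket = "my-state-bucket"       # hypothetical
    key    = "env/terraform.tfstate" # hypothetical
    region = "us-east-1"
  }
}

# ro.s3.tfbackend (separate file, hypothetical name) -- read-only role for init/plan:
#   assume_role = {
#     role_arn = "arn:aws:iam::111111111111:role/state-read-only"
#   }
#
# rw.s3.tfbackend -- same shape, read-write role for apply.
#
# Per phase:
#   tofu init -backend-config=ro.s3.tfbackend && tofu plan -out=tfplan
#   tofu init -reconfigure -backend-config=rw.s3.tfbackend && tofu apply tfplan
```
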
Stanislava Racheva26 days ago(edited)
Hi everyone 👋
I hope you're doing well. I wanted to kindly ask if someone might be able to take a look at the following issue when they have time: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/271
It's about making the random "pet name" suffix configurable (or optionally disabled) for instance names. I'm not entirely sure if there's already a recommended approach for this, so I'd really appreciate any guidance or help.
I understand everyone has a lot on their plate, so thank you very much in advance for your time and consideration - it's genuinely appreciated 🙏
Please let me know if I can provide any additional details.
Thanks again!
update: we opened a PR making that suffix optional: https://github.com/cloudposse/terraform-aws-rds-cluster/pull/282
J
JP Pakalapatiabout 1 month ago
Hello! I'm looking for a way to switch identity depending on the component in a stack.
I have a default identity which I use to store the Terraform state file, but the component that I'm trying to create should be in another account. I have tried using the component-level identity selection, but it isn't working: it tries to create the component with the default identity no matter what config I tried.
C
cam72camabout 1 month ago
Sharing for visibility: https://github.com/goreleaser/goreleaser/issues/6514. Newer GoReleaser versions are breaking provider signing.
D
davidabout 1 month ago
What's the reasoning behind having so many of the components pinned to < 6 for the AWS provider? Lambdas need provider version >= 6.21 to utilize Python 3.14. We're particularly hitting this issue with the aws-datadog-lambda-forwarder component, which also has a requirement on the aws-datadog-credentials component.
I understand I can pull the latter locally and reference it that way, but would like to see the upstream be operational if it can support it.
R
Roman Orlovskiyabout 2 months ago
Hello. Are there any good workarounds for depends_on in data resources and cases like https://github.com/hashicorp/terraform-provider-aws/issues/29421 ? In my setup, I am trying to create a terraform aws-sso module, which not only creates AWS PS and assigns to accounts, but also creates an AD group, triggers AWS SCIM sync, and then using data resource finds the corresponding AWS SSO group id via its name. The issue is that data resource needs to depend on the AD SCIM trigger resource, which results in constant "known after apply" AWS Permission Set account assignment resource recreation due to how data resources work. I know that I can use ignore_changes lifecycle, or even separate the module into two technically, but just curious if anyone faced something similar and has some other approaches.
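For reference, a minimal sketch of the ignore_changes workaround mentioned above (the SCIM trigger resource, locals, group name, and account ID are hypothetical placeholders, not the asker's actual module):

```hcl
data "aws_identitystore_group" "this" {
  identity_store_id = local.identity_store_id # hypothetical local

  alternate_identifier {
    unique_attribute {
      attribute_path  = "DisplayName"
      attribute_value = "my-ad-group" # hypothetical group name
    }
  }

  # The dependency that makes group_id "known after apply" on every plan:
  depends_on = [null_resource.scim_sync_trigger]
}

resource "aws_ssoadmin_account_assignment" "this" {
  instance_arn       = local.sso_instance_arn # hypothetical local
  permission_set_arn = aws_ssoadmin_permission_set.this.arn
  principal_id       = data.aws_identitystore_group.this.group_id
  principal_type     = "GROUP"
  target_id          = "111111111111" # hypothetical account id
  target_type        = "AWS_ACCOUNT"

  # principal_id forces replacement, so a "known after apply" value replans
  # the assignment on every run; ignoring it stops the churn.
  lifecycle {
    ignore_changes = [principal_id]
  }
}
```

The trade-off is that a genuine group change would then be ignored too, which is why splitting the module in two remains the cleaner (if heavier) option.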
J
Jonathan Roseabout 2 months ago
J
Joe Perezabout 2 months ago
Lessons learned from scaling Infrastructure as Code from 5 to 1000+ workspaces https://www.ordisi.us/posts/2026_1_scaling/
J
johncblandii2 months ago
@James Humphries how long does it generally take for the registry to update now that #3808 merged and CI ran?
J
johncblandii2 months ago
yeah, just trying to get this issue over the line
J
James Humphries2 months ago
@johncblandii, you shouldn't ever have to do a manual PR to bump versions. I think someone is putting together a fix now 🙂
J
johncblandii2 months ago
Add a PR with the missing values https://github.com/opentofu/registry/pull/3806
J
johncblandii2 months ago
We're seeing OpenTofu not sync with the repo releases. I requested access to the CNCF slack (seems that requires a human review).
I confirmed the GitHub RSS feed shows v1.32.0 but https://search.opentofu.org/provider/cloudposse/utils/latest does not.
Any thoughts?
CC @James Humphries
M
Michal Tomaszek2 months ago
Hey, is there anything against implementing import of secrets and rulesets in this component?
https://github.com/cloudposse-terraform-components/aws-github-repository
M
Maksym Vlasov2 months ago
Hi,
does anyone know if https://github.com/cloudposse/terraform-provider-context is maintained or already deprecated?
Docs are mostly nonexistent and there's been no activity for half a year 🤔
G
Gaurav Gupta2 months ago
I'm new to this channel.
G
Gaurav Gupta2 months ago
Hi @everyone
C
cricketsc2 months ago(edited)
Are folks here reliably able to use manifest rendering via the terraform helm provider? What version of the provider are you using?
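For anyone comparing notes: manifest rendering in the Helm provider v2.x is switched on via the experiments block. A minimal sketch (assuming local kubeconfig auth; the version pin is illustrative — the 3.x provider changed this syntax):

```hcl
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.17" # experiments block applies to the v2.x provider
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumption: local kubeconfig auth
  }

  # Stores the rendered manifest in state and diffs it at plan time,
  # instead of showing an opaque metadata-only change.
  experiments {
    manifest = true
  }
}
```
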
T
Tyler Rankin2 months ago
We make heavy use of remote-state, generally v1.8.0. Recently we've encountered an issue with our eks component. We believe there is a piece of this on Spacelift's end that is causing the issue, but was curious if anyone else using remote-state might've observed this behavior.
All Spacelift UI outputs for each of our eks stacks (>20) recently started to display sandbox values. We don't have any corresponding runs that show these being set. The stacks are failing to apply due to a lineage mismatch.
Throughout the planning phase we have multiple remote-state calls reading cross-account values, and we observe the workspace changing by checking terraform workspace show. I assume this might be normal for the remote-state calls to actually look up the state values.
It seems like the last remote-state call might be to sandbox, and Spacelift isn't reading .terraform/environment a final time after it switches back to the workspace of the actual stack we are planning. This causes Spacelift to pull the sandbox state and attempt an apply, which fails. While Spacelift UI outputs are incorrect, the state files for each of our eks components are intact and don't have erroneous sandbox values.
All that said, our local atmos applies succeed. So I guess the question to the group is: has anyone seen an issue with remote-state ever targeting the incorrect workspace at the end of a plan/apply?
R
RickA3 months ago
terraform-provider-utils plugin error
A
Ashwini Manoj3 months ago(edited)
Hi, I am new to atmos and I was wondering what is the difference between the module vpc-peering and the component vpc-peering?
I thought the component would be building on top of the module, but that doesn't seem to be the case.
Does that mean they are maintained in parallel with the same features available?
What should I consider using in my project, the component or the module?
https://github.com/cloudposse/terraform-aws-vpc-peering
https://github.com/cloudposse-terraform-components/aws-vpc-peering
J
Jonathan Rose3 months ago
I am trying to understand how I can use cloudposse/terraform-github-repository at v1.1.0 to create sample "projects" (e.g. dotnet, python, java, HCL) with an expected directory structure
E
erik3 months ago
@James Humphries any insights on what the OpenTofu provider rate limits are?
P
Prasanna3 months ago
@Prasanna has joined the channel
J
JS3 months ago
@JS has joined the channel
S
Salman Shaik3 months ago
@Salman Shaik has joined the channel
D
Deep3 months ago
@Deep has joined the channel
B
brandonvin3 months ago(edited)
Does anyone have a reusable tool or scripts handy for bulk-moving resources from one root module to another?
I could build one, but figured I'd ask before reinventing the wheel
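Not a ready-made tool, but one declarative way to sketch a cross-root-module move with plan-reviewable steps, assuming Terraform >= 1.7 (the resource address and bucket name are hypothetical):

```hcl
# In the OLD root module (Terraform >= 1.7), after deleting the resource block:
# drop the object from state on the next apply without destroying it.
removed {
  from = aws_s3_bucket.logs # hypothetical address

  lifecycle {
    destroy = false
  }
}

# In the NEW root module (Terraform >= 1.5), next to the new resource block:
# adopt the existing object by its real-world ID at the next plan/apply.
import {
  to = aws_s3_bucket.logs
  id = "my-logs-bucket" # hypothetical bucket name
}
```

For genuinely bulk moves, generating these blocks (or plain terraform state mv commands) from terraform state list output is probably the scriptable part.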
J
M
Mujahid this side4 months ago
👋 Hello, team!
Y
Yurii5 months ago
very close to crossplane
R
A
akirata5 months ago
Heya, do you guys use anything else with the Cloud Posse YAML of Datadog monitors? Like Kustomize, yq, etc?
C
Cyberjesus5 months ago
is there any way to use templates in workflows? I tried using a !template yaml function to apply a sprig template but it doesn't seem to work. The following command failed to execute:
atmos terraform plan aws_federated_access -s !template entau-{{ env STAGE }}-{{ env BRAND }}-awsapse2
M
Michael5 months ago
Happy Friday! Thought I'd share a little blog post on some of my favorite Terraform techniques that I've picked up from the Cloud Posse community over the years. It's nothing revolutionary, but some of these tricks aren't widely used from what I've seen in the wild:
https://rosesecurity.dev/2025/12/04/terraform-tips-and-tricks.html
S
Sean Nguyen5 months ago
Hi all, looking for feedback on this PR here 🙏
https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/pull/143
M
Marat Bakeev5 months ago
Hi guys, what is the future of the account-map component? We've noticed some mention of it being deprecated?
Jonathan6 months ago
Hey folks, I built a new Kubernetes Terraform provider that might be interesting to you.
It solves a long-standing Terraform limitation: you can't create a cluster and deploy to it in the same apply. Providers are configured at the root, before resources exist, so you can't use a cluster's endpoint as provider config.
Most people work around this with two separate applies, some use null_resource hacks, others split everything into multiple stacks. After being frustrated by this for many years, I realized the only solution was to build a provider that sidesteps the whole problem with inline connections.
Example:
resource "k8sconnect_object" "app" {
  cluster = {
    host  = aws_eks_cluster.main.endpoint
    token = data.aws_eks_cluster_auth.main.token
  }
  yaml_body = file("app.yaml")
}
Create cluster → deploy workloads → single apply. No provider configuration needed.
Building with Server-Side Apply from the ground up (rather than bolting it on) opened doors to fix other persistent community issues with existing providers.
• Accurate diffs - Server-side apply dry-run projections show actual changes, not client-side guesses
• YAML + validation - K8s strict schema validation catches typos at plan time
• CRD+CR same apply - Auto-retry handles eventual consistency (no more time_sleep)
• Patch resources - Modify EKS/GKE defaults without taking ownership
• Non-destructive waits - Timeouts don't force resource recreation
300+ tests, runnable examples for everything.
GitHub: https://github.com/jmorris0x0/terraform-provider-k8sconnect
Registry: https://registry.terraform.io/providers/jmorris0x0/k8sconnect/latest
Would love feedback if you've hit this pain point.
MrAtheist6 months ago
anyone know how to go about destroying a specific resource deep in the modules without making a mess...?
in this case i would like to destroy module.service_b.module.ec2:
module "service_a" {
  source = "../modules/stuff"
  ...
}
module "service_b" {
  source = "../modules/stuff"
  ...
}
...
# modules/stuff
module "ec2" {
  source = "../modules/ec2"
}
...
... some more stuff
i thought this was pretty trivial until i stepped thru the tf plan, but i dont think this is doable by messing with hcl itself, instead...
terraform destroy --target module.service_b.module.ec2
terraform state rm module.service_b.module.ec2
any other suggestions...?
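One hedged alternative to --target/state rm for this shape of problem: make the nested module conditional so only one caller drops it, and let a normal plan do the destroy (the create_ec2 variable name is hypothetical; count on modules needs Terraform >= 0.13, moved blocks >= 1.1):

```hcl
# modules/stuff/variables.tf -- hypothetical toggle
variable "create_ec2" {
  type    = bool
  default = true
}

# modules/stuff/main.tf -- count moves the module address to module.ec2[0]
module "ec2" {
  source = "../modules/ec2"
  count  = var.create_ec2 ? 1 : 0
}

# modules/stuff/main.tf -- keep existing instances (e.g. service_a's) in
# place instead of destroy-and-recreate after the address change
moved {
  from = module.ec2
  to   = module.ec2[0]
}

# root module -- only service_b plans a destroy of module.service_b.module.ec2
module "service_b" {
  source     = "../modules/stuff"
  create_ec2 = false
}
```

This keeps the destroy visible in a reviewable plan, whereas --target followed by state rm leaves config and state disagreeing until the next apply recreates the instance.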
Alek6 months ago
Hello team! 👋
I'm hitting a perpetual diff on various resources originating from the GitHub Provider, used in the aws-argocd-github-repo component. Specifically, the etag property is constantly changing on the GitHub API side, creating ever-changing plans. Those plans are failing to apply via gitops with "plan files have differences".
I found out that recently, this PR was merged, which directly addresses handling of etags on the GH provider. Is my understanding correct that the issue should resolve on its own once the change gets released (currently it is not)? Are you aware of any other workaround here? (fyi. ignore_changes on etag does not work)
Prateek kumar6 months ago
I'm trying to build a tool which requires terraform core's connectivity, using RPC
!!not building a plugin, it's like a standalone software that imports terraform core and compares files,
but i didn't find any content on youtube, i really don't even know how to initiate this project.
i am an intern BTW!