terraform
brandon20 days ago(edited)
Does anyone have a reusable tool or scripts handy for bulk-moving resources from one root module to another?
I could build one, but figured I'd ask before reinventing the wheel
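In case it helps anyone roll their own: bulk moves between two root modules' states can be scripted around terraform state mv with -state/-state-out. A minimal, hedged sketch (file names and addresses are hypothetical) that only generates the commands so you can review them before running anything:

```shell
#!/bin/sh
# Hypothetical sketch: emit "terraform state mv" commands that move every
# listed resource address from one root module's pulled state file into
# another's. Assumes you've run "terraform state pull > source.tfstate" in
# the old root and "terraform state pull > dest.tfstate" in the new one.
SRC_STATE="source.tfstate"
DST_STATE="dest.tfstate"

gen_mv_commands() {
  # Read one resource address per line on stdin; print one mv command each.
  while read -r addr; do
    echo "terraform state mv -state=$SRC_STATE -state-out=$DST_STATE '$addr' '$addr'"
  done
}

# Addresses would normally come from: terraform state list | grep module.old_app
cmds=$(printf '%s\n' \
  "module.old_app.aws_s3_bucket.logs" \
  "module.old_app.aws_iam_role.task" | gen_mv_commands)

echo "$cmds"
```

After eyeballing the output you'd pipe it to sh, then terraform state push each file back to its backend.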
Mujahid this sideabout 1 month ago
👋 Hello, team!
Yuriiabout 2 months ago
very close to crossplane
akirata2 months ago
Heya, do you guys use anything else with the cloudposse yaml of datadog monitors? Like Kustomize, yq, etc?
Cyberjesus2 months ago
is there any way to use templates in workflows? I tried using a !template yaml function to apply a sprig template but it doesn't seem to work. The following command failed to execute:
atmos terraform plan aws_federated_access -s !template entau-{{ env STAGE }}-{{ env BRAND }}-awsapse2
Michael3 months ago
Happy Friday! Thought I'd share a little blog post on some of my favorite Terraform techniques that I've picked up from the Cloud Posse community over the years. It's nothing revolutionary, but some of these tricks aren't widely used from what I've seen in the wild:
https://rosesecurity.dev/2025/12/04/terraform-tips-and-tricks.html
Sean Nguyen3 months ago
Hi all, looking for feedback on this PR here 🙂
https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/pull/143
Marat Bakeev3 months ago
Hi guys, what is the future of the account-map component? We've noticed some mention of it being deprecated?
Jonathan4 months ago
Hey folks, I built a new Kubernetes Terraform provider that might be interesting to you.
It solves a long-standing Terraform limitation: you can't create a cluster and deploy to it in the same apply. Providers are configured at the root, before resources exist, so you can't use a cluster's endpoint as provider config.
Most people work around this with two separate applies, some use null_resource hacks, others split everything into multiple stacks. After being frustrated by this for many years, I realized the only solution was to build a provider that sidesteps the whole problem with inline connections.
Example:
resource "k8sconnect_object" "app" {
  cluster = {
    host  = aws_eks_cluster.main.endpoint
    token = data.aws_eks_cluster_auth.main.token
  }
  yaml_body = file("app.yaml")
}
Create cluster → deploy workloads → single apply. No provider configuration needed.
Building with Server-Side Apply from the ground up (rather than bolting it on) opened doors to fix other persistent community issues with existing providers.
• Accurate diffs - Server-side apply dry-run projections show actual changes, not client-side guesses
• YAML + validation - K8s strict schema validation catches typos at plan time
• CRD+CR same apply - Auto-retry handles eventual consistency (no more time_sleep)
• Patch resources - Modify EKS/GKE defaults without taking ownership
• Non-destructive waits - Timeouts don't force resource recreation
300+ tests, runnable examples for everything.
GitHub: https://github.com/jmorris0x0/terraform-provider-k8sconnect
Registry: https://registry.terraform.io/providers/jmorris0x0/k8sconnect/latest
Would love feedback if you've hit this pain point.
MrAtheist4 months ago
anyone know how to go about destroying a specific resource deep in the modules without making a mess...?
in this case i would like to destroy module.service_b.module.ec2

module "service_a" {
  source = "../modules/stuff"
  ...
}
module "service_b" {
  source = "../modules/stuff"
  ...
}
...
# modules/stuff
module "ec2" {
  source = "../modules/ec2"
}
...
... some more stuff

i thought this was pretty trivial until i step thru the tf plan, but i dont think this is doable by messing with hcl itself, instead...

terraform destroy --target module.service_b.module.ec2
terraform state rm module.service_b.module.ec2

any other suggestions...?
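One more option worth weighing: on Terraform 1.7+ there is a declarative removed block. A hedged sketch reusing the addresses from the message above — note that since modules/stuff is shared by both service calls, you'd need to delete or conditionally gate the inner module call first, and it's worth verifying nested-module addresses behave as expected in your Terraform version:

```hcl
# Sketch: after removing (or gating off) the "ec2" module call for
# service_b, declare the removal in the root module so the next plain
# "terraform apply" destroys just those resources -- no -target needed.
removed {
  from = module.service_b.module.ec2

  lifecycle {
    # true  => destroy the real infrastructure
    # false => only forget it from state (equivalent to "terraform state rm")
    destroy = true
  }
}
```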
Alek4 months ago
Hello team! 👋
I'm hitting a perpetual diff on various resources originating from the GitHub Provider, used in the aws-argocd-github-repo component. Specifically, the etag property is constantly changing on the GitHub API side, creating ever-changing plans. Those plans are failing to apply via gitops with "plan files have differences".
I found out that recently this PR was merged, which directly addresses handling of etags on the GH provider. Is my understanding correct that the issue should resolve on its own once the change gets released (currently it is not)? Are you aware of any other workaround here? (fyi, ignore_changes on etag does not work)
Prateek kumar4 months ago
I'm trying to build a tool which requires terraform core's connectivity, using RPC
!!not building a plugin, it's like a standalone software that imports terraform core and compares files,
but didn't find any content on youtube, i really don't even know how to initiate this project.
i am an intern BTW!
Craig4 months ago
👋 I'm trying to figure out what I am doing incorrectly when using the default_security_group_deny_all variable with the terraform-aws-vpc module.
I have several VPCs already created from this module and am working towards removing the default VPC security group default egress & ingress rules. I thought I would be able to do this by simply adding the default_security_group_deny_all variable to my existing Terraform with a value of true and just redeploying my Terraform, however when I make a PR with those changes, my Terraform plan shows 0 changes to be made.
If I set the value to false I see the default security group being removed (I imagine by setting this to false I'll need to make a moved block indicating that I am now managing this security group as part of a different Terraform resource), but that's not what i want to do.
Why does setting this value to true not seem to do anything for already created default VPC security groups?
Mark Johnson4 months ago(edited)
Hi CloudPosse team - Any chance we can get an issue to update Terraform awsutils - https://github.com/cloudposse/terraform-provider-awsutils
Updated such that the corresponding awsutils resources support a region parameter? Basically, similar to the AWS 6.0 Terraform provider?
---
Use Case: We now pass in ~15 awsutils providers each with separate regions to delete VPCs for all these regions. It would be amazing to loop over with a region parameter.
Drew Fulton4 months ago(edited)
Good morning, I've been a longtime fan of the CloudPosse architecture as we used it at one of my former roles. While I was overseeing our architecture at the time, I was not the person that actually set up the original accounts a few years ago. As a result, I'm taking some time to go through the process myself so I can set things up in the future. I'm making really solid progress but seem to have run into a wall and could really use some help.
I've been working through the foundation documents on my own. I'm currently in the Deploy Accounts (https://docs.cloudposse.com/layers/accounts/deploy-accounts/) stage. I've run everything through Step 6 deploying the accounts and account map.
I'm now trying to apply the account-settings module and it's failing with two instances of the "The given key does not identify an element in this collection value." error. The docs mention that this is usually due to a mismatch of the root_account_aws_name in the account-map. I've confirmed that multiple times and have it set to root. For this troubleshooting, let's assume we are trying to apply the account-settings for the audit account which is called core-audit. The account-settings module appears to be looking for the audit index instead of core-audit.
I've tried setting audit_account_account_name to both core-audit and audit, neither of which are working. I believe the value should be core-audit. Where else could I be going wrong?
FWIW, I've confirmed I'm using the latest versions of all the modules.
Thanks for any suggestions!
Craig4 months ago
I imagine I could create a x-account trust policy like this:

data "aws_iam_policy_document" "xaccount_trust_policy" {
  provider = aws.destination

  statement {
    actions = [
      "sts:AssumeRole",
      "sts:TagSession",
      "sts:SetSourceIdentity",
    ]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.source.account_id}:root"]
    }
  }
}

but I don't think you can apply it to the permissionset that is being created on the AWS destination account side
Craig4 months ago(edited)
👋 I am using the https://github.com/cloudposse/terraform-aws-sso/ module to create permission_sets and assign them to AWS accounts, pretty standard stuff.
I would like to try and customize the trust policy associated with a permissionset to allow for assuming the role in one AWS account, from another AWS account within the same Org, but I'm not finding much to go on as far as examples go in this repo.
I am trying to setup something that would allow users that have been assigned a role in AWS permissions to copy items from an S3 bucket in Account A to an S3 bucket in Account B, within the same region, similar to what's going on here: https://stackoverflow.com/questions/73639007/allow-user-to-assume-an-iam-role-with-sso-login
the problem I am running into is I am finding nowhere to actually configure the contents of the PermissionSet Trust Policy, is that just something that is outside of the scope of the terraform-aws-sso module?
Erik Osterman (Cloud Posse)4 months ago
Would it be interesting if Cloud Posse offered something like a commercial "Bug Fix Insurance" across our module ecosystem?
Gustavo4 months ago
Hi! Is there an open source SQS module from cloudposse out there? I was checking the sqs-queue one but it's not listed in their modules library and I couldn't use it directly in tf
Marat Bakeev4 months ago
Hey guys, what is the procedure to add or update components in https://github.com/cloudposse-terraform-components ?
For example, if we want to add some features to a component, or we have a completely new component - do we need to ask and discuss somewhere first? Or just send PRs? or..?
will4 months ago
Hi, I'm using the ECR aws module (https://registry.terraform.io/modules/cloudposse/ecr/aws/latest). I would like some clarification on the max_image_count and protected_tags_keep_count parameters.
1. Does the max_image_count exclude the images with protected tags?
2. Is the protected_tags_keep_count per unique tag?
We've had some issues with deployed tags being cleaned up and I want to make sure I fully understand these 2 settings. Thanks.
paulm4 months ago
Office Hours ran long (always worthwhile, thank you to CloudPosse and everyone who contributes!), so I didn't get to ask this question…
What Terraform version adoption statistics have people found? I want to balance compatibility and modernity when distributing open-source modules. Would I be shooting myself in the foot with minima of v1.10 of Terraform from November, 2024 (S3 state without DynamoDB) and v6.0 of the AWS provider from June, 2025 (multi-region without repetition), because typical users are slow to upgrade?
At work, in multiple, fairly large shops, I'm ecstatic 🥲 when I see Terraform v1.x and AWS provider v5.
Thanks for any advice!
MichaelM4 months ago
Has anyone found a way to destroy/terminate namespaces created by the Terraform resource kubernetes_namespace when they get stuck in the Terminating state?
Right now, the only thing that seems to work is manually clearing the finalizers, like this:

kubectl get ns "$ns" -o json | jq 'del(.spec.finalizers)' | kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -

Just wondering if anyone's found a cleaner or automated way to handle this by terraform?
Michael Galey5 months ago
Hey Guys, haven't updated my terraform for a few years, whats the best practice for this:
using terragrunt, trying to get clean tflint output etc.
directories are like:
domains/domain1.com/main.tf
domains/terraform.tfvars
production/productionapp1/main.tf
production/terraform.tfvars
modules/
I am currently using domains/terraform.tfvars to define various security rules/whitelist IPs, that are used in some domains, but not all, So if I don't define/use them in each domain folder, terraform or tflint show a warning about it.
Should I just disable the linting rules for that? or else I'm considering a modules/shared_variables and then domain1/main.tf can use shared_variables.whitelist_ips if it needs to, and it shouldn't throw warnings about unused outputs.
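The shared-variables-module idea above can be sketched like this (all names and values hypothetical) — a data-only module with no resources, just outputs, so each domain pulls only what it needs and unused values never trip lint warnings:

```hcl
# modules/shared_variables/outputs.tf -- hypothetical data-only module
output "whitelist_ips" {
  description = "CIDR blocks allowed through the shared security rules"
  value = [
    "203.0.113.0/24",  # documentation-range examples only
    "198.51.100.7/32",
  ]
}

# domains/domain1.com/main.tf -- only domains that need the IPs call it:
#
# module "shared_variables" {
#   source = "../../modules/shared_variables"
# }
#
# ...and reference module.shared_variables.whitelist_ips where needed.
```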
Nayeem Mohammed5 months ago(edited)
Hey guys, looking to get some help with this terraform module
https://github.com/cloudposse/terraform-aws-codebuild/tree/main
I am creating codebuild projects using atmos and the above module
it's creating the env vars by default which i have not defined and i'm unable to exempt them. any ideas??
the env vars that it currently adds
I want to exempt IMAGE_REPO_NAME and IMAGE_TAG variables
AWS_REGION = us-east-1 (PLAINTEXT)
AWS_ACCOUNT_ID = 11111111 (PLAINTEXT)
IMAGE_REPO_NAME = UNSET (PLAINTEXT)
IMAGE_TAG = latest (PLAINTEXT)

I want to exempt IMAGE_REPO_NAME and IMAGE_TAG variables
shannon agarwal5 months ago
If anyone able to provide any guidance?
shannon agarwal5 months ago
Need some help here, I have never used Spacelift, I have my GitHub repo added to it and the stack created, but I am trying to create a test s3 bucket using terraform. Any guides would be appreciated.
mloskot5 months ago(edited)
Anyone infra'ing AKS with Terraform and AzureRM 3.x and unexpectedly experiencing their Windows nodes being forcibly replaced despite no changes to config or code? I just went into panic mode and reported this https://github.com/hashicorp/terraform-provider-azurerm/issues/30757
MichaelM5 months ago
Hi all, I need some clarification around licensing with Terraform (HashiCorp’s BUSL-1.1) and our own Terraform code.
I’m trying to understand which parts are covered by HashiCorp’s license vs. what’s governed by how we license our own code.
Specifically:
1. If we give a client access (via GitLab token) so they can run our Terraform code to create AWS resources in their account — do we need any special licensing from HashiCorp?
2. If the client clones our Terraform code and uploads it into their own repo (to run infra for themselves), does that raise any BUSL issues, or is it purely about how we choose to license our own Terraform modules?
3. If we don’t want clients to freely reuse or redistribute our Terraform code outside of the engagement, should we explicitly add a proprietary/custom license to our repo, or is “no license file” enough protection?
Would love some guidance so we make sure we’re both compliant with HashiCorp’s BUSL and clear about our own IP boundaries. 🙏
I’m trying to understand which parts are covered by HashiCorp’s license vs. what’s governed by how we license our own code.
Specifically:
1. If we give a client access (via GitLab token) so they can run our Terraform code to create AWS resources in their account — do we need any special licensing from HashiCorp?
2. If the client clones our Terraform code and uploads it into their own repo (to run infra for themselves), does that raise any BUSL issues, or is it purely about how we choose to license our own Terraform modules?
3. If we don’t want clients to freely reuse or redistribute our Terraform code outside of the engagement, should we explicitly add a proprietary/custom license to our repo, or is “no license file” enough protection?
Would love some guidance so we make sure we’re both compliant with HashiCorp’s BUSL and clear about our own IP boundaries. 🙏
Zapier5 months ago
Join us for "Office Hours" every Wednesday 01:30PM (PST, GMT-7) via Zoom. This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 08, 2025 01:30PM. 👉️ Register for Webinar
#office-hours (our channel)
Jackie Virgo5 months ago(edited)
Super random question, why do some CloudPosse modules support passing a permissions boundary but not a path? I don't want to act like I have a ton of knowledge here, but in my corporate experience if a permissions boundary is required, so is a path. I have run into this with both the EC2 & Lambda modules
idanl lodzki5 months ago
Hi everyone, I’m Idan. I’m working on an open-source project that helps monitor and control everything in an organization, with integrations to third-party tools.
We’re looking for someone with Terraform experience to contribute code and help automate a demo environment so users can try it out quickly.
Stars are of course very welcome ⭐️
Check it out here:
https://github.com/OpsiMate/OpsiMate
We’re looking for someone with Terraform experience to contribute code and help automate a demo environment so users can try it out quickly.
Stars are of course very welcome ⭐️
Check it out here:
https://github.com/OpsiMate/OpsiMate
Yangci Ou5 months ago(edited)
Hey all! We're working through an IAM role delegation pattern for a central primary role (for Spacelift/Terraform executions), which would then assume into downstream account roles.
The setup:
• Primary role in the "Identity" account (Spacelift or any automation system like GHA assumes this)
• The primary role can then assume into delegated admin roles in downstream accounts (trust policy allow)
• The delegated roles have admin permission in their respective accounts
BUT, now how do we, if we want to perform Terraform locally, assume into the Primary role in the Identity account?
1. Chain role
a. Users authenticate via AWS SSO -> Primary TF admin role Identity account
b. How do we do this? Leapp, we can do this via Chained Roles... but local CLI, we'd have to do an additional step to assume role via AWS CLI
2. We don't assume into primary role, Delegated roles directly have trust policy to allow the AWS SSO admin role in the Identity account.
This is very similar to the CloudPosse's architecture guide, https://docs.cloudposse.com/layers/identity/centralized-terraform-access/ but from the Permission Set -> intermediary Primary role in the Identity account, how is that assumption usually done? Is an additional AWS CLI command the best option? I'm not sure which is the best path.
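For the local-CLI case in option 1, one common trick is to let the AWS CLI/SDK do the chaining declaratively via profiles, so no extra assume-role command is needed. A sketch of ~/.aws/config (all names, ARNs, and account IDs hypothetical):

```ini
# SSO profile: what you authenticate as (aws sso login --profile identity-sso)
[profile identity-sso]
sso_start_url  = https://example.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111111111111
sso_role_name  = AdministratorAccess
region         = us-east-1

# Primary TF role in the Identity account, chained off the SSO profile
[profile tf-primary]
role_arn       = arn:aws:iam::111111111111:role/primary-terraform
source_profile = identity-sso
region         = us-east-1

# Optional second hop into a downstream account's delegated admin role
[profile dev-admin]
role_arn       = arn:aws:iam::222222222222:role/delegated-admin
source_profile = tf-primary
```

With that in place, AWS_PROFILE=tf-primary terraform plan handles the hop transparently, since the SDK resolves source_profile chains itself.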
Robert5 months ago(edited)
I am trying to upgrade terraform-aws-msk-apache-kafka-cluster from v1.4.0 to v2.5.0
The plan shows that the whole msk cluster needs a replacement.
Is there any guideline how to do or avoid that?
# module.kafka.module.kafka.aws_msk_cluster.default[0] must be replaced
-/+ resource "aws_msk_cluster" "default" {

there is a guide for older releases that looks similar
https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/blob/main/docs/migration-0.7.x-0.8.x+.md
looks like this issue has some guideline
https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/issues/93
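If the forced replacement turns out to be purely an address change between module versions (rather than a changed immutable argument like the cluster name), a moved block or terraform state mv can sometimes retarget the existing cluster. A hedged sketch with made-up from/to addresses — pull the real ones from your plan output:

```hcl
# Sketch only: tell Terraform the cluster merely moved to a new address
# in v2.x instead of needing to be destroyed and recreated.
moved {
  from = module.kafka.module.kafka.aws_msk_cluster.default
  to   = module.kafka.aws_msk_cluster.default
}
```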
Cyberjesus5 months ago
I would love to understand your reasoning around provider pinning. I know Hashicorp's recommendations are currently
1. use minimum constraints >= for modules
2. use pessimistic semver constraints ~> for root modules
but which of those does an Atmos terraform component fit into?
We are building our own components for a brownfields deployment and have based all of our components on the cloudposse example module template which uses the cloudposse test-harness to ensure that provider versions are pinned with only minimum constraints. However, there are cases like https://medium.com/@mr.ryanflynn/why-hard-pinning-terraform-provider-versions-is-essential-a-lesson-from-an-aws-eks-issue-a03928ae410f and recommendations from seasoned terraform users on reddit that suggest versions should always be hard-pinned with =.
I can also see the test-harness did allow pessimistic semver constraints at some point, I just can't see why it was allowed or why it was changed.
We are also exploring the idea of using a component repo as either a component (root module) or a module (eg. an EKS component that includes the generic IAM component as a module to add roles using the cluster's own OIDC provider so we don't have to call the IAM component from atmos a second time)
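For concreteness, the two constraint styles in question look like this (sketch, version numbers illustrative) — and note that whichever style a root uses, its .terraform.lock.hcl already hard-pins the exact provider version that gets installed:

```hcl
# Reusable child module: minimum constraint only, so the consuming
# root module stays free to choose the actual version.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

# Root module: pessimistic constraint, e.g. any 5.x release from 5.60 up.
# (A hard pin would be version = "= 5.60.0".)
#
# terraform {
#   required_providers {
#     aws = {
#       source  = "hashicorp/aws"
#       version = "~> 5.60"
#     }
#   }
# }
```

A component used both ways straddles the two cases, which is presumably where the tension with the test-harness rule comes from.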
Jonathan Rose5 months ago
Is there any appetite for Feature Request for supporting custom rules · Issue #131 · cloudposse/terraform-aws-config?
James Stocker6 months ago(edited)
Hi, I raised this PR a few weeks ago to fix an issue with the module (as a new version of the helm provider has broken this module)
Is there anything I need to do to get it reviewed? Or should I just fork and start using a personal release?
https://github.com/cloudposse/terraform-aws-helm-release/pull/78