39 messages
Nitin J · 7 months ago
I'm currently using Atmos for managing our infrastructure as code. I have a catalog component named eks-ops which provisions the EKS cluster and its associated components. Over time, the name eks-ops has caused some confusion, especially with regard to associating it with an "ops" AWS account, even though it's used for all our environments, including dev.
Now that we're still early in our rollout (with only dev environments deployed), I'd like to take the opportunity to rename the component from eks-ops to simply eks, which is more intuitive and aligned with the standard naming convention.
Here's what I've done so far:
1. Created a copy of the catalog/eks-ops folder and renamed it to catalog/eks, cleaning up all eks-ops references in the manifest YAMLs.
2. Followed the Atmos component migration tutorial to migrate Terraform state from the old component name to the new one.
Steps taken:
cd components/terraform/eks/cluster/
terraform init
terraform workspace list
terraform workspace new plat-use1-dev-eks-cluster
terraform workspace select plat-use1-dev-eks-cluster
terraform state pull > plat-use1-dev-eks.tfstate
terraform state push plat-use1-dev-eks.tfstate
Additionally, I compared the state between the two workspaces:
terraform workspace select plat-use1-dev-eks-cluster
terraform state list
terraform workspace select plat-use1-dev-eks-cluster-ops
terraform state list
The resource lists in both workspaces are identical, confirming no drift.
However, when I run:
atmos terraform plan eks/cluster --stack plat-use1-dev
the plan still shows all 54 EKS-related resources as if they are going to be recreated.
My question: is there a supported way to change a catalog component name (in this case, from eks-ops to eks) in Atmos without causing Terraform to interpret it as a new component and attempt to recreate all the associated infrastructure?
I'm assuming this has something to do with the backend configuration or how the component identity is derived, but I couldn't find specific guidance on how to remap the state to avoid a full re-creation.
Any insight into what I might be missing, or a recommended process to cleanly rename components without destroying and recreating resources, would be greatly appreciated.
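For reference, one knob that exists in this area is metadata.terraform_workspace, which decouples the Terraform workspace name from the component name. A minimal sketch of the renamed component pinned to the workspace that already holds the state; which of the two workspaces that actually is, and whether this alone resolves the recreate plan, are assumptions:

components:
  terraform:
    eks/cluster:
      metadata:
        # Terraform component folder is unchanged by the catalog rename (assumption)
        component: eks/cluster
        # pin the workspace that already contains the migrated state (assumption)
        terraform_workspace: plat-use1-dev-eks-cluster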
Jonathan Rose · 7 months ago
I am doing a POC for GitHub Actions for Atmos and trying to figure out which actions would best showcase how well Atmos + GHA work together?
Jonathan Rose · 7 months ago (edited)
I am tasked with making an Atmos component for cloudposse/terraform-aws-service-control-policies at v0.15.1 and trying to understand:
1. Best practice
2. If one already exists
3. How to leverage terraform-aws-service-control-policies/catalog at v0.15.1 · cloudposse/terraform-aws-service-control-policies
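For context, a sketch of how a module like this is commonly vendored into a local component folder with Atmos vendoring; the component name and target path here are assumptions, and {{.Version}} is the vendor config's templating token:

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: scp-vendor
spec:
  sources:
    # pull the upstream module at the pinned tag into a local Terraform component (paths are assumptions)
    - component: service-control-policies
      source: "github.com/cloudposse/terraform-aws-service-control-policies.git?ref={{.Version}}"
      version: "0.15.1"
      targets:
        - "components/terraform/service-control-policies"

From there, a stack manifest would point at components/terraform/service-control-policies via metadata.component and supply the module's variables under vars.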
Sean · 7 months ago
Trying to make an adjustment to stack/catalog/eks/mixins/k8s-1-32.yaml so that it applies some dynamic values from a running plan in the addons config values. I've tried a lot of variations but cannot get the vars to populate. Example using Go templating syntax:
configuration_values: |
  {
    "ExampleAttr1": "static-prefix-{{.vars.environment}}-{{.vars.tenant}}-{{.vars.stage}}",
    "ExampleAttr2": ...
  }
Is there a way to get this to work?
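One thing an example like this depends on is that Go template processing of stack manifests is an opt-in Atmos feature; a minimal sketch of the atmos.yaml toggle, assuming it is not already enabled in this setup:

templates:
  settings:
    # render {{ ... }} expressions in stack manifests before merging
    enabled: true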
Dan Hansen · 7 months ago
I'm curious if there are any established patterns for managing per-component deployment service accounts?
I'd like each terraform apply to be run in GitHub Actions with least-privilege permissions. Pike could be useful here for determining the permissions needed to apply a component.
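For reference, one building block this could use: the settings section of an Atmos stack manifest is free-form, so a per-component deploy role could be recorded there and read back in CI with atmos describe component. The deploy_role_arn key and the ARN below are hypothetical, not Atmos built-ins:

components:
  terraform:
    vpc:
      settings:
        # hypothetical key: the role the GitHub Actions job should assume to apply this component
        deploy_role_arn: "arn:aws:iam::111111111111:role/vpc-apply"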
PePe Amengual · 7 months ago
Looks like I found another bug: atmos describe affected --include-settings=false --include-dependents=true --stack=dev-wus3 --logs-level=Debug will process all stacks. --include-dependents is not respecting the --stack option.
Petr Dondukov · 7 months ago
Hi all! Please help. How can I determine dependencies in another stack if I have a custom name_template in atmos.yaml:
name_template: "{{.settings.context.tenant}}-{{.settings.context.env}}-{{.settings.context.resource.name}}-{{.settings.context.region}}-{{.settings.context.resource.number}}"
Igor Rodionov · 7 months ago
Hello @Matt Calhoun, do you know whether the action https://github.com/cloudposse/github-action-setup-atmos/ supports installing Atmos from nightly builds and feature releases?
Ian Cornett · 7 months ago
I'm having a bear of a time trying to query for specific outputs using !store. This worked before using !terraform.output. What am I missing here?
!store dev/ssm vpc-networking "public_subnet_objects | query select(.tags.Name == ""bastion_subnet_us_east_2c"") | .id"
Jonathan Rose · 7 months ago (edited)
I had a random thought... what if there was an atmos command that could be used to install binaries as part of the workflow? For example, atmos install terraform would install the version of terraform referenced in atmos.yaml. This would eliminate our need for tools like terraform-switcher for managing this.
I would be more than happy to make a ticket for this, but curious if there was any appetite for it first.
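For reference, something close to this can already be expressed as an Atmos custom command in atmos.yaml; a minimal sketch, assuming tenv as the installer and a hard-coded version (reading the version out of atmos.yaml itself would need extra wiring):

commands:
  - name: install-terraform
    description: "Install the Terraform version this repo is pinned to"
    steps:
      # the tool and version here are assumptions; any installer CLI would work
      - tenv tf install 1.9.8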
Jonathan Rose · 7 months ago
Does Atmos have a function similar to Terraform's concat (Functions - Configuration Language | Terraform | HashiCorp Developer) that would allow me to consolidate multiple lists?
Ian Cornett7 months ago(edited)
Has anybody else seen an issue when using stores where you get duplicate "dummy" values pushed to the store when doing a component deployment? e.g. I have a public ip stored at this path
/atmos/dev/copay/aws/use2/dev/connectivity/bastion_public_ip but I get null values (because it doesn't live in these components) at /atmos/dev/copay/aws/use2/dev/persistence/bastion_public_dns, /atmos/dev/copay/aws/use2/dev/vpc-networking/bastion_public_dns, ... I'm running ๐ฝ๏ธ Atmos 1.185.0 on darwin/arm64Jonathan Rose6 months ago
Does anyone have experience with Atmos workflows to import terraform resources (e.g. atmos terraform import)?
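For reference, a minimal sketch of what such a workflow could look like, assuming a vpc component in a plat-use1-dev stack; the resource address and ID are placeholders:

workflows:
  import-vpc:
    description: Import an existing VPC into the vpc component
    stack: plat-use1-dev
    steps:
      # atmos passes import through to terraform for the given component and stack
      - command: terraform import vpc 'aws_vpc.default' vpc-0123456789abcdef0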
Jonathan Rose · 6 months ago (edited)
Random Question No. 4,587: I would like to develop an automated system that checks Releases · cloudposse/atmos for stable releases (e.g. v1.187.0) and creates a Jira ticket when it finds one. I'm thinking of a Python script running in a Lambda or something similar. Any suggestions? The goal is to ensure we don't lose track of stable releases that we should be using in the Docker image we ship our IaC Service Catalog with.
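As one possible shape for this, a sketch of a scheduled GitHub Actions job (instead of a Lambda) that reads the latest release tag with the preinstalled gh CLI; the schedule is arbitrary and the Jira step is left as a placeholder:

name: atmos-release-check
on:
  schedule:
    - cron: "0 8 * * 1"   # weekly, arbitrary
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - name: Get latest Atmos release tag
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: gh api repos/cloudposse/atmos/releases/latest --jq .tag_name
      # a follow-up step would compare the tag to the pinned version and open the Jira ticket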
cromega · 6 months ago
Is it possible to use !terraform.state with a yq expression to pass a list of values to a Terraform variable that expects a list?
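For reference, a minimal sketch of the pattern being asked about, assuming a vpc component with a private_subnet_ids list output; whether a yq expression preserves the list type end to end is the open question:

vars:
  subnet_ids: !terraform.state vpc private_subnet_ids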
Ed · 6 months ago
Hi everyone, I'm looking into Atmos Pro, and one question I couldn't answer from the docs is: do I have to use the Cloud Posse GitHub Actions workflows?
For context, I have forked the affected stacks, terraform plan, and terraform apply Cloud Posse workflows to support my particular use case. This involves splitting the affected stacks matrix output by org and then executing the plan/apply workflows in different jobs (using GitHub environments to inject the appropriate creds for authenticating to the relevant org's cloud).
The functionality of my forked versions is largely equivalent to the originals, just with different handling of auth.
Jonathan Rose · 6 months ago
How do I perform Atmos overrides for a specific component? I need to override a schema validation that targets a specific component and not the whole stack.
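For reference, a sketch of validation settings scoped to a single component rather than the whole stack, assuming JSON Schema validation; the component name and schema path are placeholders:

components:
  terraform:
    vpc:
      settings:
        validation:
          validate-vpc-vars:
            schema_type: jsonschema
            # path relative to the configured schemas base path (placeholder)
            schema_path: "vpc/validate-vpc-vars.json"
            description: Validate the vpc component variables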
Jonathan Rose · 6 months ago
Having some trouble with my workflow:
config:
  plan: &plan
    - command: terraform generate planfile transit-gateway -f "{{ /atmos/tenable/{{ .vars.stage }}-transit-gateway.tfplan.json }}"
    - command: terraform generate planfile tgw-routing -f "{{ /atmos/tenable/{{ .vars.stage }}-tgw-routing.tfplan.json }}"
workflows:
  plan-ue1:
    description: |
      The steps in this workflow are run when a pull request is opened or updated.
      This workflow will plan the component with mock outputs for dependencies.
      It will not apply any changes.
    stack: cfsb-it-network-ue1
    steps: *plan
The goal is to leverage existing variables when noting the location of the planfile. Here is the error I am getting:
The following command failed to execute:
atmos terraform generate planfile transit-gateway -f "{{ /atmos/tenable/{{ .vars.stage }}-transit-gateway.tfplan.json }}" -s cfsb-it-network-ue1
To resume the workflow from this step, run:
atmos workflow plan-ue1 -f run --from-step step1 -s cfsb-it-network-ue1
Not sure if it's a spacing, templating, or some other issue altogether.
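One thing worth noting: as written, the outer {{ }} wraps the entire path, so the argument is a template action containing a nested action, which is not valid Go template syntax. A sketch with only the variable reference inside the braces (whether workflow commands are templated with .vars at all in this setup is an assumption):

plan: &plan
  - command: terraform generate planfile transit-gateway -f "/atmos/tenable/{{ .vars.stage }}-transit-gateway.tfplan.json"
  - command: terraform generate planfile tgw-routing -f "/atmos/tenable/{{ .vars.stage }}-tgw-routing.tfplan.json"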
Michael · 6 months ago
This question might not even make sense, but is there a best practice for configuring Atmos stores to use each reference architecture account's SSM Parameter Store rather than a centralized store? For example, if I have dev-use1-log, dev-usw2-log, stage-use1-iam, and others along that pattern, can I do something like this in my atmos.yaml:
# Atmos Stores
# https://atmos.tools/core-concepts/projects/configuration/stores/
stores:
  use1/ssm:
    type: aws-ssm-parameter-store
    options:
      region: us-east-1
  usw2/ssm:
    type: aws-ssm-parameter-store
    options:
      region: us-west-2
and then reference the per-account store via:
- name: "AWS-AWSManagedRulesBotControlRuleSet"
  priority: 100
  statement:
    # https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-bot.html
    name: AWSManagedRulesBotControlRuleSet
    vendor_name: AWS
    scope_down_not_statement_enabled: true
    scope_down_statement:
      byte_match_statement:
        field_to_match:
          single_header:
            name: "secret-token"
        positional_constraint: "EXACTLY"
        search_string: !store use1/ssm waf/defaults /waf/defaults/secret_token
        text_transformation:
          - priority: 101
            type: "NONE"
My goal is to be able to directly pull from each account's store into my stacks (if that makes sense).
Erik Osterman (Cloud Posse) · 6 months ago
any votes for this? https://github.com/cloudposse/atmos/issues/1402
PePe Amengual · 6 months ago (edited)
For visibility: I was talking to @Igor Rodionov, and I brought this up a while ago but it looks like my message is gone. Basically I'm having these messages in all plan and apply GitHub Actions:
==> Binaries will be located at: /opt/hostedtoolcache/opentofu/opentofu/v1.8.4/linux-x64
Warning: Failed to restore: Cache service responded with 400
==> File extension matching disabled
searching for tofu_1.8.4_386.apk with (linux|x86_64|x64|amd64).*(linux|x86_64|x64|amd64).*
(from cloudposse/github-action-atmos-terraform-apply@048b02e4659a25250ea801915b922f8ccb7b25c4, for example)
And it looks like the problem is: the 400 error from the cache service is occurring because GitHub migrated to a new cache service on February 1st, 2025, and deprecated older versions. The cloudposse-github-actions/install-gh-releases@v1 action is using an outdated cache implementation.
Bruce · 6 months ago
Hello, need some help with syntax. I'm looking to combine two TF outputs from separate components, something similar to the below, which does not seem to work:
vars:
  connection_string: |
    printf "postgres://username:%s@%s:5432/dbname"
    !terraform.output rds-postgres-resources .additional_users | .username.db_user_password
    !terraform.output rds-postgres rds_address
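For reference, a sketch of one way this kind of interpolation is often expressed, using the atmos.Component template function instead of a YAML function; this requires Go templating to be enabled for stack manifests, and the exact output attribute paths on the two components are assumptions loosely copied from the question:

vars:
  # attribute paths after .outputs are guesses; only the component names and rds_address come from the question
  connection_string: 'postgres://username:{{ (atmos.Component "rds-postgres-resources" .atmos_stack).outputs.additional_users.db_user_password }}@{{ (atmos.Component "rds-postgres" .atmos_stack).outputs.rds_address }}:5432/dbname'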
Erik Osterman (Cloud Posse) · 6 months ago (edited)
You can now manage all your GitHub repositories with Atmos:
https://github.com/cloudposse-terraform-components/aws-github-repository
components:
  terraform:
    myrepo:
      metadata:
        component: aws-github-repository
      vars:
        enabled: true
        owner: acme-github-org
        repository:
          name: "my-repository"
          description: "My repository"
          homepage_url: "http://example.com/"
          topics:
            - terraform
            - github
            - test
          default_branch: "main"
          secrets:
            MY_SECRET: "my-secret-value"
            MY_SECRET_2: "nacl:dGVzdC12YWx1ZS0yCg=="
            MY_SECRET_3: "ssm://my/secret/path"
            MY_SECRET_4: "sm:secret-name"
          variables:
            MY_VARIABLE: "my-variable-value"
            MY_VARIABLE_2: "ssm://my/variable/path"
            MY_VARIABLE_3: "sm:variable-name"
Daniel Booth · 6 months ago
I tend to get a lot of these when running my plans/applies because of my imports. Is there a way to suppress them without declaring a load of arbitrary variables in my Terraform?
Jonathan Rose · 6 months ago (edited)
Running into an issue where I need to concat lists:
vpc-attachments:
  vars:
    vpc_attachments:
      vpc:
        subnet_ids: !terraform.state vpc "intra_subnets // ""[MOCK__subnet_0, MOCK__subnet_1, MOCK__subnet_2]"""
        # Atmos supports YQ expressions on state outputs https://atmos.tools/core-concepts/stacks/yaml-functions/terraform.state#using-yq-expressions-to-retrieve-items-from-complex-output-types
        # See YQ merge for concatenating multiple lists https://mikefarah.gitbook.io/yq/operators/add#concatenate-arrays
        vpc_route_table_ids: !terraform.state vpc ".private_route_table_ids + .intra_route_table_ids + .database_route_table_ids" // ""[MOCK__route_table_id_0, MOCK__route_table_id_1, MOCK__route_table_id_2]"""
        tgw_vpc_attachment_tags:
          Name: "{{ .atmos_stack }}-tgw-attachment"
But I am getting the following error:
ProcessYAMLConfigFile: Merge: Deep-merge the stack manifest and all the imports: Error: cannot override two slices with different type ([]interface {}, string)
Here is the documentation I am using as a reference:
https://mikefarah.gitbook.io/yq/operators/add#concatenate-arrays
Any thoughts? Not sure if it's a missing quote or whether list concatenation won't work when getting values from !terraform.state.
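For reference on the yq side alone, the + operator the expression relies on does concatenate arrays; a minimal sketch independent of Atmos, with placeholder data:

# given this document, the yq expression ".a + .b" evaluates to [1, 2, 3]
a: [1, 2]
b: [3]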
cromega · 6 months ago
Hello! I have an instance of a vpc component in stack1 and it works exactly as expected. I'm just adding an instance to a new stack; if I run apply, I get "A change in the backend configuration has been detected, which may require migrating existing state." (migrate state or reconfigure, etc.).
Is this normal, or am I fundamentally misunderstanding how component instances should work?
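For reference, this prompt comes from terraform init when the backend config generated for the new stack differs from what is cached in the component's .terraform directory; atmos.yaml has a setting (also visible in another atmos.yaml later in this log) that re-runs init with -reconfigure, sketched below. Whether that is the appropriate fix for this particular case is an assumption:

components:
  terraform:
    # automatically run `terraform init -reconfigure` so the cached backend is refreshed per stack
    init_run_reconfigure: true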
PePe Amengual · 6 months ago
Can I vendor the vendor.yaml?
Shruti Tirpude · 6 months ago (edited)
Hello, has anyone worked with Atmos on a Windows machine? I have set the environment variables ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH and am getting the following error: no matches found for the import. Has anyone seen this error while running atmos terraform plan <component> -s <stack>?
Jase Koonce · 6 months ago
Hi! I am currently running into some issues with an OCI artifact that I am referencing in my vendor.yaml. The package is hosted in a private registry, and my understanding was that it would utilize my ~/.docker/config.json for authentication, but I am still getting the error 401 Unauthorized: Invalid token. I have validated that I can use oras to pull the package, so my token should be good. Should I be putting the token somewhere else?
Nitin J · 6 months ago
I'm running into issues with creating an ArgoCD GitHub repository using Terraform.
One of the repositories was created earlier by a colleague, but when I try to create a new one I hit errors. I've seen both cases:
• When creating a fresh repo, Terraform shows:
github_repository.default[0]: Creating...
Error: PATCH https://api.github.com/repos/redacted/argocd-prod: 404 Not Found []
The repo does get created in GitHub, but the apply fails.
• On re-running, Terraform tries to create again and fails with:
Error: POST https://api.github.com/orgs/redacted/repos: 422 Repository creation failed.
[{Resource:Repository Field:name Code:custom Message:name already exists on this account}]
I'm using the Cloud Posse aws-argocd-github-repo module. What am I missing here?
Sam · 6 months ago
Hi all! This is most likely me doing something stupid since I'm new to Atmos, but I cannot figure out how to set a default S3 backend config (in stacks/environment/_defaults.yaml) to be used in my networking.yaml stacks. I am using the auto-generated backend file in my atmos.yaml.
This is my folder structure:
├── atmos.yaml
├── components
│   └── terraform
│       └── vpc
└── stacks
    ├── environment
    │   ├── _defaults.yaml
    │   ├── sandbox
    │   │   └── networking.yaml
    │   └── staging
    │       └── networking.yaml
    └── mixins
        └── environment
            ├── sandbox.yaml
            └── staging.yaml
And this is my atmos.yaml:
base_path: "./"
components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
stacks:
  base_path: "stacks"
  included_paths:
    - "environment/**/*"
  excluded_paths:
    - "environment/_defaults.yaml"
  name_pattern: "{environment}"
logs:
  file: "/dev/stderr"
  level: Info
One of my networking stacks:
imports:
  - mixins/environment/sandbox
  - environment/_defaults
vars:
  environment: sandbox
components:
  terraform:
    vpc:
      backend:
        s3:
          workspace_key_prefix: sandbox
      vars:
        vpc_ipv4_cidr: "10.1.0.0/16"
I always get an error saying my backend.tf.json is empty, since it looks like this:
{
  "terraform": {
    "backend": {
      "": {}
    }
  }
}
I would really appreciate any help since I've spent a few hours now trying to get this working to no avail.
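For reference, a backend.tf.json with an empty "" block is what auto_generate_backend_file appears to produce when no backend_type/backend settings are defined for the component; a sketch of the kind of defaults stacks/environment/_defaults.yaml would typically carry (the bucket, lock table, and region are placeholders):

terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "my-tfstate-bucket"        # placeholder
      key: "terraform.tfstate"
      region: "us-east-1"                # placeholder
      dynamodb_table: "my-tfstate-lock"  # placeholder
      encrypt: true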
Michael · 6 months ago
Is there a way to use Atmos with Terraform/OpenTofu's test framework? For example, atmos terraform test <component> -s <stack>, and then have the tests populate with Atmos' deep-merged YAML. Was just curious if anyone had a workaround or a way to achieve this.
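For reference, a sketch of one possible workaround: an Atmos custom command that generates the deep-merged varfile for a component/stack and hands it to terraform test. The output path, component layout, and flag wiring below are assumptions, not a built-in Atmos feature:

commands:
  - name: terraform-test
    description: "Run terraform test for a component with Atmos-merged vars"
    arguments:
      - name: component
    flags:
      - name: stack
        shorthand: s
        required: true
    steps:
      # generate the deep-merged varfile, then run the native test framework against it
      - atmos terraform generate varfile {{ .Arguments.component }} -s {{ .Flags.stack }} -f /tmp/{{ .Flags.stack }}-{{ .Arguments.component }}.tfvars.json
      - terraform -chdir=components/terraform/{{ .Arguments.component }} test -var-file=/tmp/{{ .Flags.stack }}-{{ .Arguments.component }}.tfvars.json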
RickA · 6 months ago
Good afternoon. A coworker ran into a scenario where values are merged rather than replaced. Trying to confirm whether it should be expected or not due to how this ends up typed. 🧵
RickA · 6 months ago
We have our Atmos config in a different repository than our components. Most of us copy/paste or use softlinks to get the stacks in with the components. Saw today that commands were added that alluded to being able to influence behavior via the CLI for redirection.
If I change stacks.base_path in our atmos.yaml in the root of the components repository, I can use ../stack-repo and that works. But the docs seem to suggest I could do atmos describe <all the things> --config my_new_config.yaml, and that could be any other configuration to move around. I'm misunderstanding, since it's not overriding the default file. To test, I broke the atmos.yaml in the root of the repo and used --config to point to a correct file in a subdirectory in the same repo. (When I say repo, I mean my local folder structure for that repo.)
Daniel Booth · 6 months ago
Hi, when using abstract components, are they still expected to come up when using the atmos list command?
Daniel Booth · 6 months ago
…/home/IaC (main*) ❯ atmos list components -s prod-oci-k8s
capacity_report
instance_pool
kubeconfig
…/home/IaC (main*) ❯ atmos list components -s prod-oci-k8s
argocd_gitops_repo
capacity_report
cloudflared_tunnel
instance_pool
kubeconfig
namespaces
privileged_namespaces
Daniel Booth · 6 months ago
For example, I have imported from the catalog, but in my prod-oci-k8s stack resources like argocd-gitops-repo remain abstract, so I did not expect to see them in the output.
Daniel Booth · 6 months ago
they do not appear in the TUI
Daniel Booth · 6 months ago
which is as expected