atmos
erik · about 21 hours ago
We just launched MCP support for Atmos Pro.
Atmos Pro continuously tracks stack health, deployment failures, Terraform drift, and historical changes across your infrastructure — and now Claude has API access to all of that operational context directly inside the conversation. You can ask things like “what’s drifting?”, “when did this start failing?”, “show me the logs for the last deployment”, or even “open a PR to fix this failing component”, without digging through dashboards, GitHub Actions runs, or Terraform output manually. Claude can reason about issues using full awareness of your Atmos stacks, components, environments, deployment history, and live operational state.
https://atmos-pro.com/changelog/2026-05-09-mcp-server
Marat Bakeev · 1 day ago
Hi team, could you have a look at this PR (https://github.com/cloudposse/atmos/pull/2402)? It suppresses the repeated messages about updating kubeconfigs when the kubeconfig didn't change.
Zack · 2 days ago
❓️ What's the recommended way to have Atmos populate stack vars with Secrets Manager secrets?
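(For anyone searching later: Atmos's store mechanism is the documented way to pull external values into stack configs. A minimal sketch, assuming the `aws-ssm-parameter-store` store type with illustrative store/key names; verify the exact `!store` argument order, and whether a Secrets Manager-backed store type exists in your version, against the Atmos stores docs.)

```yaml
# atmos.yaml -- declare a store (sketch; names and region are illustrative)
stores:
  app-secrets:
    type: aws-ssm-parameter-store
    options:
      region: us-east-1

# stack manifest -- pull a value into a component var via the !store YAML function
components:
  terraform:
    my-app:
      vars:
        db_password: !store app-secrets /platform/dev/db_password
```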
Jonathan Rose · 6 days ago
@Erik Osterman (Cloud Posse) what would the level of effort be to update https://atmos.tools/cli/commands/terraform/generate/planfile to work with atmos CI to provide plan changes in GitHub Summaries?
Zack · 8 days ago (edited)
One place we've had issues with using !terraform.state is inside of YAML:

policy: |
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "ExternalSecretsAccess",
        "Effect": "Allow",
        "Action": [
          "secretsmanager:GetSecretValue",
          "secretsmanager:DescribeSecret",
          "kms:Decrypt",
          "kms:GenerateDataKey",
          "kms:DescribeKey"
        ],
        "Resource": [
          !terraform.state rds {{ printf "demo-%s" .vars.deployment_requirements.stage }} .rds_properties.db_instance_master_user_secret_arn,
          !terraform.state kms {{ printf "demo-%s" .vars.deployment_requirements.stage }} kms_key_arn,
        ]
      }
    ]
  }
Jeremy G · 12 days ago
@Erik Osterman (Cloud Posse) Has anyone tried using Atmos to generate ArgoCD Application manifests or Crossplane Claims?
Miguel Zablah · 14 days ago
Hey guys, I think I found a bug with Atmos stores and the identity field. I've created an issue with more explanation, but figured I should post it here too:
https://github.com/cloudposse/atmos/issues/2377
Jonathan Rose · 15 days ago
Question. I'm using atmos terraform plan --all in a GitHub Action, but noticing the following in the GitHub Summary. Is this expected?
Jonathan Rose · 18 days ago (edited)
Question, does anyone here use atmos + checkov together in GitHub actions?
Jonathan Rose · 20 days ago
atmos docs generate is a game changer! Used it for a new project and it works amazingly!
Sean · 21 days ago
Following the documentation, it seems like !literal should be working here:

field_template: !literal |
  {%- if alertDef.entityLabels.channel is defined -%}
  {{ alertDef.entityLabels.channel }}
  {%- else -%}
  _sandbox
  {%- endif -%}

but I get this error:

Error: template: templates-all-atmos-sections:624: function "alertDef" not defined

Is there something wrong with my usage, or is this a bug? I tried with 1.210.0 & 1.216.0.
Yota · 22 days ago (edited)
I just upgraded to 1.115 and saw this: "Preserve Directory Hierarchy in Terraform State Buckets". Yeah! For weeks, I've been telling myself I need to report that I have to override all components manually because it doesn't preserve the /. A small feature, but very valuable. Thanks :)
Paavo Pokkinen · 22 days ago
Has anyone run into an issue that prevents using Atmos locals? More in the thread.
Leonardo Agueci · 26 days ago
Hi all, I have a question on datasources/templates.
I have this datasource defined:

settings:
  templates:
    settings:
      gomplate:
        datasources:
          env:
            url: "stacks/data/{{ .vars.environment }}.yaml"

And I use it as described in the documentation:

ami: '{{{{ (index (datasource "env").ami .vars.aws_region) }}}}'

It works great if aws_region directly contains a string value, but it's not working if it references another variable, for example aws_region: {{ .vars.region }}.
I have template evaluations set to 3, and as far as I understood, in the first evaluation aws_region should be resolved to the actual value of region, and ami to '{{ (index (datasource "env").ami .vars.aws_region) }}', so in the second evaluation it should be evaluated correctly with the value originally contained in region.
Am I assuming something wrong? Is there any way to make it work if I'm doing something wrong? Thanks in advance.
Charles Smith · 27 days ago (edited)
Hey, I just feel like I stumbled upon a very cool configuration that will help us migrate from static traditional Terraform dirs in a progressive way.
In our current static Terraform we only use a default workspace and set specific backend keys for each dir. Eventually we'll migrate these static root modules into components, but until they are all migrated in, I was worried it would be difficult to leverage the atmos !terraform.state YAML function to look up outputs in the older Terraform root modules.
Then I had a crazy idea: if I could create a dummy component that essentially only contains a context.tf, could I then set up a stack that points to the older root modules, where each component inherits from dummy but then specifies its backend to make !terraform.state work? To my surprise, this actually works amazingly well!
Here's what one of these dummy instances actually looks like (bogus account id swapped in):

components:
  terraform:
    vpc/us-east-1-development:
      metadata:
        enabled: false
        terraform_workspace: "default"
        component: dummy
      name: vpc/us-east-1-development
      backend:
        s3:
          key: infra/terraform/123456789012/vpc/us-east-1-development/terraform.tfstate

Then I can use this in another real component in the same stack:

vars:
  vpc_id: !terraform.state vpc/us-east-1-development vpc_id

Thought I would share in case this helps anyone else move into atmos. The metadata.enabled is very cool because that ensures my CI/CD does not try to plan/apply this dummy component as well!
Mateusz Osiński · 27 days ago
Hi guys!
Apologies for the next rookie question: is the Makefile deprecated?
toka · 27 days ago
Hello, I'm looking at how to tackle atmos terraform destroy with GitHub Actions.
With vanilla Terraform, resources are deleted when source code files are removed from the codebase.
With Atmos components that is not the case, as we need the deep-merged YAML to even run Terraform in the first place.
I'm looking for ways to carry out deletions when a PR with deleted YAML files gets created.
Charles Smith · 29 days ago
Hello. I've been looking into atmos generate and it seems it can't be specified under the global terraform key (like backend or providers). Am I missing something, or is there a reason it must only be specified under a component?
I guess if I wanted a file more globally generated, I could try having all my components inherit from one that does the generation...
erik · 29 days ago
Have you ever wanted to auto-commit fixes like terraform fmt and realized that GitHub Actions won't run on those changes? GitHub does this as a circuit-breaker to prevent commit loops.
The workaround used to suck: create a static PAT or create a new GitHub App, then distribute/manage/rotate those secrets.
Well, not anymore.
Now you can use atmos pro commit: https://atmos.tools/changelog/pro-commit
Brandon · 29 days ago
Hi there, just confirming that planfile upload to GitHub Artifacts in native CI is not implemented in 215, right?
🧪 ci is an experimental feature. Learn more <https://atmos.tools/experimental>
WARN CI hook handler failed event=after.terraform.plan
error=
│ failed to upload planfile: failed to upload planfile: not implemented: GitHub Artifacts upload requires running within GitHub Actions (ACTIONS_RUNTIME_TOKEN and ACTIONS_RESULTS_URL must be set)
│ (1) forced error mark
│ | "failed to upload planfile"
│ | errors/*errors.errorString::
│ Wraps: (2) key=dev-primary/argocd-apps/0509682841e9b66922a344abd606567815703ad3.tfplan.tar store=github/artifacts
│ Wraps: (3) Failed to upload planfile to store
│ Wraps: (4) failed to upload planfile: failed to upload planfile: not implemented: GitHub Artifacts upload requires running within GitHub Actions (ACTIONS_RUNTIME_TOKEN and ACTIONS_RESULTS_URL must be set)
│ └─ Wraps: (5) failed to upload planfile: not implemented: GitHub Artifacts upload requires running within GitHub Actions (ACTIONS_RUNTIME_TOKEN and ACTIONS_RESULTS_URL must be set)
│ └─ Wraps: (6) not implemented: GitHub Artifacts upload requires running within GitHub Actions (ACTIONS_RUNTIME_TOKEN and ACTIONS_RESULTS_URL must be set)
│ └─ Wraps: (7) not implemented
│ └─ Wraps: (8) failed to upload planfile
│ └─ Wraps: (9) failed to upload planfile
│ Error types: (1) *markers.withMark (2) *safedetails.withSafeDetails (3) *hintdetail.withDetail (4) *fmt.wrapErrors (5) *fmt.wrapErrors (6) *fmt.wrapError (7) *errors.errorString (8) *errors.errorString (9) *errors.errorString
ℹ Executing command: atmos terraform apply argocd-apps --from-plan -auto-approve
🧪 ci is an experimental feature. Learn more <https://atmos.tools/experimental>
Gerry Laracuente · 29 days ago (edited)
Hello 👋 I'm working on getting aws-ssosync set up, and I see you all recently released v2.0.0.
I can't find this version published to the OpenTofu registry. Is this version only meant to be usable with Atmos?
Eunice Bello · 29 days ago
Hello guys! I'm trying to use github-action-atmos-terraform-plan in a GitHub Actions pipeline that deploys resources to Google Cloud, but it fails because it seems to only support AWS authentication:

Run aws-actions/configure-aws-credentials@v4.0.2
Error: Source Account ID is needed if the Role Name is provided and not the Role Arn.

Is there a way to skip the AWS credentials configuration action and make it work with GCP instead? Or is there another atmos-terraform-plan action for GCP?
Miguel Zablah · about 1 month ago
Hey guys, I have a question about the atmos toolchain. I know we have the .tool-versions file, but do we need to set the version again when declaring the tools as dependencies?
For example, in org/_default I have this:

dependencies:
  tools:
    atmos: 1.214.0
    terraform: 1.14.2
    tflint: 0.61.0

But I would prefer to manage versions in one place and just declare the tools that are needed; that way I don't have to remember to update this too hahaha
erik · about 1 month ago
Just another friendly reminder: if you are waiting on a resolution to an issue you have opened, please check if there's an associated PR. If there is, it's probably held up on validation, and any help we can get to validate those pull requests will expedite merging and cutting a release.
Yota · about 1 month ago (edited)
Hello,
I'm asking because I didn't see anything about this in the Atmos documentation: is it possible to add the Git commit and branch to the variables? This would allow us to tag resources and thus track the code that created or modified them. Do you think this is a good idea? And then, would you consider adding this feature to the Atmos core?
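One possible approach today (a sketch, not an official feature; it assumes your Atmos version supports the `!exec` YAML function, and the tag names are illustrative):

```yaml
# Sketch: shell out to git at stack-processing time via !exec and surface
# the commit/branch as ordinary vars, which can then feed provider default_tags.
vars:
  tags:
    git_commit: !exec git rev-parse --short HEAD
    git_branch: !exec git rev-parse --abbrev-ref HEAD
```

One caveat with this pattern: the values reflect whatever the working tree looks like when Atmos runs, so in CI you'd want a full checkout with branch refs available.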
Mateusz Osiński · about 1 month ago
Hi guys!
Is there any article/documentation on how to move from Terraform to OpenTofu? (tf 1.5.7)
erik · about 1 month ago
We added a batch deployment dashboard to Atmos Pro which is useful when PRs touch a lot of components and the normal mermaid-style diagrams cannot be shown due to comment-size limitations.
JP Pakalapati · about 1 month ago
Hello! I'm looking for a way to switch identity depending on the component in a stack.
I have a default identity which I use to store the Terraform state file, but the component that I'm trying to create should be in another account. I have tried using the component-level identity selection, but it isn't working: it tries to create the component with the default identity no matter the config I tried.
Zack · about 1 month ago
❓️ Is there a way to combine JIT and generating override files like this?
https://developer.hashicorp.com/terraform/language/files/override#merging-locals-blocks
Starting to dig into this feature:
https://atmos.tools/stacks/generate
erik · about 1 month ago
We've also updated the Atmos Native CI examples to use containerized steps rather than running github-action-setup-atmos, since it's way more performant.
https://github.com/cloudposse-examples/atmos-native-ci
erik · about 1 month ago
We've added an example of using Atmos Native CI with GitHub job matrixes:
https://github.com/cloudposse-examples/atmos-native-ci-advanced
Miguel Zablah · about 1 month ago
Hey guys, I have a question about ci.comments: is there an ENV to disable this?
https://atmos.tools/cli/configuration/ci/comments
I like it and want it for most of my CI/CD workflows, but I'm trying to do a custom drift-detection workflow where I would not like this to be triggered, and I don't see a way to disable it using an ENV like ATMOS_COMMENTS_ENABLED=FALSE
erik · about 1 month ago
Yes, we are moving away from account map for all new engagements. For the next year, we need to straddle a world where both work, before we fully yank it out. We therefore use a providers.tf mixin that doesn't use account map.
Elon · about 1 month ago
Hi, I have a quick question about components under https://github.com/cloudposse-terraform-components
Why do they require the account-map component to be present? Isn't account-map deprecated?
Elon · about 1 month ago
Hi,
Is there any issue with cloudposse/utils? Why can't I fetch it?
Miguel Zablah · about 1 month ago
Hey guys!
I have a question about Atmos MCP configuration that came up during the latest office-hours meeting.
I've reviewed the docs at https://atmos.tools/cli/configuration/mcp and the example at https://atmos.tools/gists/mcp-with-aws where ATMOS_PROFILE is set for authentication.
Currently, I'm defining multiple MCP servers and need to set ATMOS_PROFILE and identity for each one individually. For example:

mcp:
  servers:
    server-one:
      env:
        ATMOS_PROFILE: 'managers'
        identity: "core-root/terraform"
    server-two:
      env:
        ATMOS_PROFILE: 'managers'
        identity: "core-root/terraform"

Is there a way to configure ATMOS_PROFILE and identity globally so they apply to all MCP servers by default, rather than repeating them in each server definition?
PS: I think the links for the last office hour are missing on the channel.
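(Not an Atmos feature per se, but since the config is plain YAML, standard anchors can at least remove the repetition until/unless a global MCP default exists. A sketch with the same illustrative values:)

```yaml
mcp:
  servers:
    server-one:
      env: &mcp-env            # define the shared env once as an anchor
        ATMOS_PROFILE: 'managers'
        identity: "core-root/terraform"
    server-two:
      env: *mcp-env            # reuse it verbatim via the alias
```

Anchors only work within a single YAML document, which is fine here since all the servers live in one atmos.yaml.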
Mateusz Osiński · about 1 month ago
Hi guys! I have two more quick questions:
1. How do you manage the Terraform version in the repo? Is there any chance to define the TF version only in the atmos.yaml file (so there is no need to define required_version everywhere)?
2. How do you format the .tf files? Is there any standard at CloudPosse? Or just terraform fmt -recursive? Or should we use https://github.com/cloudposse/github-action-auto-format?
erik · about 1 month ago
We've just published the Atmos Pro changelog here: https://atmos-pro.com/changelog
Miguel Zablah · about 1 month ago
Hey guys! I have a question about the atmos describe dependents <component> -s <stack> command.
It takes 4m when I run it to find 1 dependency. Is this normal? Maybe I'm doing something wrong?
Mateusz Osiński · about 1 month ago
Hi! Quick question about Atmos config setup.
In our repo we currently have two copies of atmos.yaml:
• one at the repo root (devops/atmos.yaml)
• another baked into our container image at rootfs/usr/local/etc/atmos/atmos.yaml
Right now these two files are 1:1 identical (I diffed them).
In CI, our GitHub workflows pass atmos-config-path: ${{ vars.ATMOS_CONFIG_PATH }}, so Atmos should be using the config file from the checked-out repo.
I'd like to simplify this and move to a single atmos.yaml as the source of truth. Is it safe (and idiomatic in CloudPosse setups) to:
• keep only the root atmos.yaml,
• have the Docker image copy that file into /usr/local/etc/atmos/atmos.yaml, and point CI (ATMOS_CONFIG_PATH) at the root file?
Or do you recommend keeping a separate, baked-in config for the image in addition to the repo one? Any best-practice guidance (and pitfalls to avoid) would be really appreciated.
Paavo Pokkinen · about 1 month ago
Can Atmos support collecting different outputs across stacks from a particular component, and injecting them into another?
For example: I am deploying Argo CD once to a management cluster. There can potentially be X number of workload clusters (a separate component). For Argo CD I need to create a secret on the management cluster to make it aware of the workload clusters that exist.
The terraform.state function can only get the output of one component in the current or a different stack; it can't combine across a range of stacks.
How should I solve this issue of collecting outputs and merging them? There are plenty of potential use cases where this pattern emerges.
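One workaround people reach for (a sketch; it assumes Go-template processing is enabled, uses the atmos.Component template function, and hard-codes an illustrative stack list, since nothing enumerates stacks for you):

```yaml
vars:
  # Join one output from each known workload stack into a single value.
  # Stack names and the "eks/cluster" component are illustrative.
  cluster_endpoints: >-
    {{ range $s := list "plat-use1-dev" "plat-use1-staging" "plat-use1-prod" }}
    {{- (atmos.Component "eks/cluster" $s).outputs.cluster_endpoint }},
    {{- end }}
```

The obvious downside is the static stack list; whenever a workload cluster is added, this list has to be updated too.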
Alexandre Feblot · about 2 months ago (edited)
Hi,
Atmos Pro: 403. What can I be doing wrong?
I added a new repo to Atmos Pro (and removed the first one just in case), so only one repo is installed.
Running atmos describe affected ... --upload does retrieve and print the stack information, including the workspace_id, but fails uploading:

ERRO Pro API Error operation=UploadAffectedStacks request="" status=403 success=false trace_id=b11636f3f0e1e158947bd5abfe7a4d35 error_message="" context=map[]
# Error
**Error:** failed to upload stacks API response error: API request failed with status 403 (trace_id: b11636f3f0e1e158947bd5abfe7a4d35)
Martin Bornhold · about 2 months ago
Hi all, that's my first contact, happy to get in touch with you 🙂
I am doing a PoC with Atmos and spacelift.io. I read about the integration between Atmos and Spacelift and tried to follow the docs. I was able to create Spacelift spaces and stacks with your provided TF modules: https://github.com/cloudposse-terraform-components/aws-spacelift-admin-stack and https://github.com/cloudposse-terraform-components/aws-spacelift-spaces.
But now I am blocked following the documentation, because it mentions the scripts below to be used in the Spacelift stack hooks. For example:

before_init:
  - spacelift-configure-paths
  - spacelift-write-vars
  - spacelift-tf-workspace

Example copied from https://github.com/cloudposse-terraform-components/aws-spacelift-admin-stack?tab=readme-ov-file#usage
Obviously these scripts do not exist in the default Spacelift runner image, and the stack run fails. Just commenting them out makes the default OpenTofu workflow fail. I searched for hours but was not able to find examples or more information about these scripts. Maybe someone can point me in the right direction and help me find these scripts? Thanks a lot 🙂
Charles Smith · about 2 months ago
Question. I'm trying out using name_template: "{{ .vars.stage }}-{{ .vars.region }}" and it's working everywhere. Stacks validate fine. I can plan all components and stacks as expected, but when I use atmos describe affected it returns the error:

Error: template: describe-stacks-name-template:1:26: executing "describe-stacks-name-template" at <.vars.region>: map has no entry for key "region"

Am I missing something or is this a bug with atmos describe affected?
I have even run atmos describe stacks and I can see the region var in all component instances in all stacks.
Charles · about 2 months ago
Is it possible to set a default identity for a specific stack?
Brandon · about 2 months ago
Hello, I have a deploy-layers workflow in which I run a plan between applying each layer, and I wanted to know if there is a way to have Atmos skip a workflow step if the plan step didn't detect any changes?

deploy-layers:
  description: "Deploy infrastructure layers with plans between each layer"
  steps:
    - command: terraform plan network
    - command: terraform apply network --from-plan -auto-approve
    - command: terraform plan security-groups
    - command: terraform apply security-groups --from-plan -auto-approve
    - command: terraform plan eks-cluster
    - command: terraform apply eks-cluster --from-plan -auto-approve
    - command: terraform plan eks-bootstrap
    - command: terraform apply eks-bootstrap --from-plan -auto-approve
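As far as I know there's no built-in conditional step, but one workaround to sketch (assumptions: your Atmos version supports `type: shell` workflow steps and passing extra Terraform flags after `--`; the stack name `dev` is illustrative): collapse each plan/apply pair into a shell step and use Terraform's `-detailed-exitcode`, which makes plan exit 0 when there are no changes and 2 when a diff exists.

```yaml
deploy-layers:
  description: "Apply each layer only when its plan detects changes (sketch)"
  steps:
    - type: shell
      command: |
        # -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes present
        rc=0
        atmos terraform plan network -s dev -- -detailed-exitcode || rc=$?
        if [ "$rc" -eq 2 ]; then
          atmos terraform apply network -s dev --from-plan -auto-approve
        elif [ "$rc" -ne 0 ]; then
          exit "$rc"
        fi
```

The same block would be repeated per layer (security-groups, eks-cluster, eks-bootstrap).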
Alexandre Feblot · about 2 months ago (edited)
Atmos Pro with Native CI
Hi,
Is it already possible to use Atmos Pro without cloudposse/github-action-atmos-*, just with Atmos native CI support?
The 2 Atmos Pro examples I could find still rely on those actions.
If already possible, full example pipelines would be very welcome, mostly how to deal with atmos describe affected and pass the info to Atmos Pro.
Would it be enough to just run atmos describe affected --upload=true ?
Elon · about 2 months ago
Hi,
atmos terraform apply and plan have an --all flag to deploy all components in a stack. However, I did not see that init and destroy have an --all flag as well. Will it be added in the future? Because if I want to delete all components in a stack, I have to do it one by one.
If I'm using:

provision:
  backend:
    enabled: true

Atmos would create the S3 bucket for the backend. It only works with init first, so apply and/or plan do not work and throw an error that the backend does not exist. In this case, I also have to init each component in the stack, so adding an --all flag to init would be beneficial.
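In the meantime, one workaround is a small Atmos workflow that runs init per component, so at least it's a single command to invoke (a sketch; the component names are illustrative):

```yaml
workflows:
  init-all:
    description: "Run terraform init for every component in the stack, one by one"
    steps:
      - command: terraform init vpc
      - command: terraform init eks-cluster
      - command: terraform init argocd-apps
```

Run with something like `atmos workflow init-all -f <workflow-file> -s <stack>`; the list still has to be maintained by hand, which is exactly why a native `--all` would be nicer.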
Zack · about 2 months ago
Atmos v1.112.0. Playing with JIT 0s TTL, and it looks like it's not generating the varfiles or backend files 😭
Alexandre Feblot · about 2 months ago
Hi,
I don't understand what is wrong here. Running atmos list affected --ref refs/heads/main fails to authenticate to the state bucket.
• I'm authenticated using "atmos auth" with an account admin identity set as the default one
• atmos tf plan ... works fine
• using atmos 1.210.0

Atmos.yaml auth section:

auth:
  providers:
    my-sso:
      kind: aws/iam-identity-center
      start_url: https://mycorp.awsapps.com/start
      region: eu-west-1
      default: true
  identities:
    account-admin:
      kind: aws/permission-set
      via:
        provider: my-sso
      principal:
        name: AdministratorAccess
        account:
          id: "000000000000"
      default: true

The state bucket declaration:

terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: false
      use_lockfile: true
      bucket: "my-state-bucket"
      key: "terraform.tfstate"
      region: eu-west-1
  provision:
    backend:
      enabled: true
    workdir:
      enabled: true

Debug log:
DEBU Found component 'mycomponent' in the stack 'mystack' in the stack manifest 'path/to/mycomponent'
DEBU Resolved component path type=terraform component=mycomponent resolved_path=.../components/terraform/mycomponent base_path=.../components/terraform env_override=false
DEBU Component has auth config with default identity, creating component-specific AuthManager component=mycomponent stack=mystack
DEBU CreateAndAuthenticateManager called identityName="" hasAuthConfig=true
DEBU Loading stack configs for auth identity defaults
DEBU Loading stack files for auth defaults count=6
DEBU No default identities found in stack configs
DEBU System keyring not available, using no-op keyring (will use credentials from files/environment) error="system keyring not available: The name org.freedesktop.secrets was not provided by any .service files"
DEBU Auth realm computed realm=1ad85013bbd5d0d4 source=config
DEBU System keyring not available, using no-op keyring (will use credentials from files/environment) error="system keyring not available: The name org.freedesktop.secrets was not provided by any .service files"
DEBU Auth realm computed realm=1ad85013bbd5d0d4 source=config
DEBU Authentication chain discovered identity=account-admin chainLength=2 chain="[my-sso account-admin]"
DEBU Checking cached credentials chainIndex=1 identityName=account-admin
DEBU Noop keyring: credentials managed externally alias=account-admin realm=1ad85013bbd5d0d4
DEBU Credentials not in keyring, trying identity storage identity=account-admin
DEBU Loading AWS credentials from files credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config profile=account-admin region=eu-west-1
DEBU Loaded expiration from credentials file metadata profile=account-admin expiration=2026-03-25T15:37:02+01:00
DEBU Successfully loaded AWS credentials from files profile=account-admin region=eu-west-1 has_session_token=true has_expiration=true
DEBU Successfully loaded credentials from identity storage identity=account-admin
DEBU Found valid cached credentials chainIndex=1 identityName=account-admin expiration="2026-03-25 15:37:02 +0100 CET"
DEBU Found valid cached credentials validFromIndex=1 chainStep=account-admin
DEBU Noop keyring: credentials managed externally alias=account-admin realm=1ad85013bbd5d0d4
DEBU Credentials not in keyring, trying identity storage identity=account-admin
DEBU Loading AWS credentials from files credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config profile=account-admin region=eu-west-1
DEBU Loaded expiration from credentials file metadata profile=account-admin expiration=2026-03-25T15:37:02+01:00
DEBU Successfully loaded AWS credentials from files profile=account-admin region=eu-west-1 has_session_token=true has_expiration=true
DEBU Successfully loaded credentials from identity storage identity=account-admin
DEBU Starting authentication from cached credentials startIndex=1 identity=account-admin
DEBU Authenticating identity chain chainLength=2 startIndex=2 chain="[my-sso account-admin]"
DEBU Writing AWS credentials provider=my-sso identity=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials has_session_token=true
DEBU Acquired file lock lock_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials.lock
DEBU Writing credentials to file profile=account-admin access_key_prefix=ASIA... has_session_token=true expiration=2026-03-25T15:37:02+01:00
DEBU Successfully wrote AWS credentials provider=my-sso identity=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials
DEBU Writing AWS config provider=my-sso identity=account-admin config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config region=eu-west-1 output_format=""
DEBU Acquired file lock lock_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config.lock
DEBU Successfully wrote AWS config provider=my-sso identity=account-admin config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config
DEBU Set AWS auth context profile=account-admin credentials=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config region=eu-west-1
DEBU Preparing AWS environment for Atmos-managed credentials profile=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config
DEBU Prepared AWS environment profile=account-admin
DEBU Deleted legacy keyring entry (pre-realm) alias=my-sso
DEBU Deleted legacy keyring entry (pre-realm) alias=account-admin
DEBU Skipping keyring cache for session tokens in WhoamiInfo identity=account-admin
DEBU Created component-specific AuthManager component=mycomponent stack=mystack identityChain="[my-sso account-admin]"
DEBU Populated AuthContext from AuthManager for template functions
DEBU Found component 'mycomponent' in the stack 'mystack' in the stack manifest 'path/to/mycomponent'
DEBU Resolved component path type=terraform component=mycomponent resolved_path=.../components/terraform/mycomponent base_path=.../components/terraform env_override=false
DEBU Using standard AWS SDK credential resolution (no auth context provided)
DEBU Using explicit region region=eu-west-1
DEBU Loading AWS SDK config num_options=1
DEBU Successfully loaded AWS SDK config region=eu-west-1
DEBU Failed to read Terraform state file from the S3 bucket attempt=1 file=mycomponent/mystack/terraform.tfstate bucket=my-state-bucket error="operation error S3: GetObject, get identity: get credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request canceled, context deadline exceeded" backoff=1s