atmos
Jonathan Rose about 2 hours ago
atmos docs generate is a game changer! Used it for a new project and it works amazingly!
Sean about 22 hours ago
Following the documentation, it seems like !literal should be working here:

field_template: !literal |
  {%- if alertDef.entityLabels.channel is defined -%}
  {{ alertDef.entityLabels.channel }}
  {%- else -%}
  _sandbox
  {%- endif -%}

but I get this error:

*Error:* template: templates-all-atmos-sections:624: function "alertDef" not defined

Is there something wrong with my usage, or is it a bug? I tried with 1.210.0 and 1.216.0.
Yota 2 days ago (edited)
I just upgraded to 1.115 and saw this: Preserve Directory Hierarchy in Terraform State Buckets. Yeah! For weeks I've been telling myself I need to report that I had to override all components manually because it didn't preserve the /. A small feature, but very valuable. Thanks :).
Paavo Pokkinen 2 days ago
Has anyone run into an issue that prevents using atmos locals? More in the thread.
Leonardo Agueci 6 days ago
Hi all, I have a question on datasources/templates.
I have this datasource defined:

settings:
  templates:
    settings:
      gomplate:
        datasources:
          env:
            url: "stacks/data/{{ .vars.environment }}.yaml"

And I use it as described in the documentation:

ami: '{{{{ (index (datasource "env").ami .vars.aws_region) }}}}'

It works great if aws_region contains a string value directly, but it does not work if it references another variable, for example aws_region: {{ .vars.region }}.
I have template evaluations set to 3 and, as far as I understand, in the first evaluation aws_region should be resolved to the actual value of region, and ami to '{{ (index (datasource "env").ami .vars.aws_region) }}', so in the second evaluation it should be evaluated correctly with the value originally contained in region.
Am I assuming something wrong? Is there any way to make it work if I'm doing something wrong? Thanks in advance.
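To illustrate what I expect, here is a worked sketch (hypothetical values: assume region resolves to eu-west-1 and the data file contains an ami map keyed by region):

```yaml
# stacks/data/dev.yaml (hypothetical data file backing the "env" datasource)
ami:
  eu-west-1: ami-0abc123
  us-east-1: ami-0def456

# Expected pass 1: inner vars resolve, escaped braces unwrap
#   aws_region: {{ .vars.region }}  ->  aws_region: eu-west-1
#   ami: '{{{{ (index (datasource "env").ami .vars.aws_region) }}}}'
#     ->  ami: '{{ (index (datasource "env").ami .vars.aws_region) }}'
# Expected pass 2: the datasource lookup runs with the resolved value
#   ami: '{{ (index (datasource "env").ami .vars.aws_region) }}'
#     ->  ami: ami-0abc123
```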
Charles Smith 7 days ago (edited)
Hey, I feel like I stumbled upon a very cool configuration that will help us migrate from static, traditional Terraform dirs in a progressive way.
In our current static Terraform we only use a default workspace and set specific backend keys for each dir. Eventually we'll migrate these static root modules into components, but until they are all migrated in, I was worried it would be difficult to leverage the atmos !terraform.state YAML function to look up outputs in the older Terraform root modules.
Then I had a crazy idea: if I could create a dummy component that essentially only contains a context.tf, could I then set up a stack that points to the older root modules, where each component inherits from dummy but then specifies its backend to make !terraform.state work? To my surprise, this actually works amazingly well!
Here's what one of these dummy instances actually looks like (bogus account id swapped in):

components:
  terraform:
    vpc/us-east-1-development:
      metadata:
        enabled: false
        terraform_workspace: "default"
        component: dummy
      name: vpc/us-east-1-development
      backend:
        s3:
          key: infra/terraform/123456789012/vpc/us-east-1-development/terraform.tfstate

Then I can use this in another real component in the same stack:

vars:
  vpc_id: !terraform.state vpc/us-east-1-development vpc_id

Thought I would share in case this helps anyone else move into atmos.
The metadata.enabled is very cool because it ensures my CI/CD does not try to plan/apply this dummy component as well!
Mateusz Osiński 7 days ago
Hi guys!
Apologies for the next rookie question: is the Makefile deprecated?
toka 7 days ago
Hello, I'm looking at how to tackle atmos terraform destroy with GitHub Actions.
With vanilla Terraform, resources are deleted when source code files are removed from the codebase. With Atmos components, that is not the case, as we need the deep-merged YAML to even run Terraform in the first place.
I'm looking for ways to carry out deletions when a PR with deleted yaml files gets created.
Charles Smith 9 days ago
Hello. I've been looking into atmos generate and it seems it can't be specified under the global terraform key (like backend or providers). Am I missing something, or is there a reason it must only be specified under a component?
I guess if I wanted a file generated more globally, I could try having all my components inherit from one that does the generation...
erik 9 days ago
Have you ever wanted to auto-commit fixes like terraform fmt and realized that GitHub Actions won't run on those changes? GitHub does this as a circuit-breaker to prevent commit loops.
The workaround used to suck: create a static PAT, or create a new GitHub App, then distribute/manage/rotate those secrets.
Well, not anymore. Now you can use atmos pro commit: https://atmos.tools/changelog/pro-commit
Brandon 9 days ago
Hi there, just confirming: uploading planfiles to GitHub Artifacts in native CI is not implemented in 215, right?
🧪 ci is an experimental feature. Learn more <https://atmos.tools/experimental>
WARN CI hook handler failed event=after.terraform.plan
error=
│ failed to upload planfile: failed to upload planfile: not implemented: GitHub Artifacts upload requires running within GitHub Actions (ACTIONS_RUNTIME_TOKEN and ACTIONS_RESULTS_URL must be set)
│ (1) forced error mark
│ | "failed to upload planfile"
│ | errors/*errors.errorString::
│ Wraps: (2) key=dev-primary/argocd-apps/0509682841e9b66922a344abd606567815703ad3.tfplan.tar store=github/artifacts
│ Wraps: (3) Failed to upload planfile to store
│ Wraps: (4) failed to upload planfile: failed to upload planfile: not implemented: GitHub Artifacts upload requires running within GitHub Actions (ACTIONS_RUNTIME_TOKEN and ACTIONS_RESULTS_URL must be set)
│ └─ Wraps: (5) failed to upload planfile: not implemented: GitHub Artifacts upload requires running within GitHub Actions (ACTIONS_RUNTIME_TOKEN and ACTIONS_RESULTS_URL must be set)
│ └─ Wraps: (6) not implemented: GitHub Artifacts upload requires running within GitHub Actions (ACTIONS_RUNTIME_TOKEN and ACTIONS_RESULTS_URL must be set)
│ └─ Wraps: (7) not implemented
│ └─ Wraps: (8) failed to upload planfile
│ └─ Wraps: (9) failed to upload planfile
│ Error types: (1) *markers.withMark (2) *safedetails.withSafeDetails (3) *hintdetail.withDetail (4) *fmt.wrapErrors (5) *fmt.wrapErrors (6) *fmt.wrapError (7) *errors.errorString (8) *errors.errorString (9) *errors.errorString
ℹ Executing command: atmos terraform apply argocd-apps --from-plan -auto-approve
🧪 ci is an experimental feature. Learn more <https://atmos.tools/experimental>
Gerry Laracuente 9 days ago (edited)
Hello 👋 I'm working on getting aws-ssosync set up, and I see you all recently released v2.0.0.
I can't find this version published to the OpenTofu registry. Is this version only meant to be usable with Atmos?
Eunice Bello 9 days ago
Hello guys! I'm trying to use github-action-atmos-terraform-plan in a GitHub Actions pipeline that deploys resources to Google Cloud, but it fails because it seems to only support AWS authentication:

Run aws-actions/configure-aws-credentials@v4.0.2
Error: Source Account ID is needed if the Role Name is provided and not the Role Arn.

Is there a way to skip the AWS credentials configuration action and make it work with GCP instead? Or is there another atmos-terraform-plan action for GCP?
Miguel Zablah 10 days ago
Hey guys, I have a question about the atmos toolchain. I know we have the .tool-versions file, but do we need to set the version again when declaring them as dependencies?
For example, on the org/_default I have this:

dependencies:
  tools:
    atmos: 1.214.0
    terraform: 1.14.2
    tflint: 0.61.0

But I would prefer to manage versions in one place and just declare the tool that is needed, so that I don't have to remember to update this too hahaha
erik 13 days ago
Just another friendly reminder: if you are waiting on a resolution to an issue you have opened, please check if there's an associated PR. If there is, it's probably held up on validation, and any help we can get validating those pull requests will expedite merging and cutting a release.
Yota 13 days ago (edited)
Hello,
I'm asking because I didn't see anything about this in the Atmos documentation: is it possible to add the Git commit and branch to the variables? This would allow us to tag resources and thus track the code that created or modified them. Do you think this is a good idea? And then, would you consider adding this feature to the Atmos core?
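In the meantime, a workaround sketch I'm considering (assumptions: that the Sprig env template function is available in Atmos stack templates, and that GIT_COMMIT and GIT_BRANCH are environment variables I export myself before invoking atmos):

```yaml
# Hypothetical: run e.g.
#   export GIT_COMMIT=$(git rev-parse HEAD)
#   export GIT_BRANCH=$(git branch --show-current)
# before atmos, then surface them as tags via template env lookups.
terraform:
  vars:
    tags:
      GitCommit: '{{ env "GIT_COMMIT" }}'
      GitBranch: '{{ env "GIT_BRANCH" }}'
```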
Mateusz Osiński 13 days ago
Hi guys!
Is there any article/documentation on how to move from Terraform to OpenTofu? (tf 1.5.7)
erik 14 days ago
We added a batch deployment dashboard to Atmos Pro, which is useful when PRs touch a lot of components and the normal mermaid-style diagrams cannot be shown due to comment-size limitations.
JP Pakalapati 15 days ago
Hello! I'm looking for a way to switch identity depending on the component in a stack.
I have a default identity which I use to store the Terraform state file, but the component that I'm trying to create should be in another account. I have tried using the component-level identity selection, but it isn't working: it tries to create the component with the default identity no matter what config I tried.
Zack 15 days ago
❓️ Is there a way to combine JIT and generating override files like this?
https://developer.hashicorp.com/terraform/language/files/override#merging-locals-blocks
Starting to dig into this feature:
https://atmos.tools/stacks/generate
erik 16 days ago
We've updated the Atmos Native CI examples to use containerized steps rather than github-action-setup-atmos, since it's way more performant.
https://github.com/cloudposse-examples/atmos-native-ci
erik 16 days ago
We've added an example of using Atmos Native CI with GitHub job matrices:
https://github.com/cloudposse-examples/atmos-native-ci-advanced
Miguel Zablah 16 days ago
Hey guys, I have a question about ci.comments: is there an ENV to disable it?
https://atmos.tools/cli/configuration/ci/comments
I like it and want it for most of my CI/CD workflows, but I'm building a custom drift-detection workflow where I would not like this to be triggered, and I don't see a way to disable it using an ENV like ATMOS_COMMENTS_ENABLED=FALSE
erik 18 days ago
Yes, we are moving away from account map for all new engagements. For the next year, we need to straddle a world where both work, before we fully yank it out. We therefore use a providers.tf mixin that doesn't use account map.
Elon 18 days ago
Hi, I have a quick question about components under https://github.com/cloudposse-terraform-components
Why do they require the account-map component to be present? Isn't account-map deprecated?
Elon 18 days ago
Hi,
Is there any issue with cloudposse/utils? Why can't I fetch it?
Miguel Zablah 22 days ago
Hey guys!
I have a question about Atmos MCP configuration that came up during the latest office-hours meeting.
I've reviewed the docs at https://atmos.tools/cli/configuration/mcp and the example at https://atmos.tools/gists/mcp-with-aws, where ATMOS_PROFILE is set for authentication.
Currently, I'm defining multiple MCP servers and need to set ATMOS_PROFILE and identity for each one individually. For example:

mcp:
  servers:
    server-one:
      env:
        ATMOS_PROFILE: 'managers'
      identity: "core-root/terraform"
    server-two:
      env:
        ATMOS_PROFILE: 'managers'
      identity: "core-root/terraform"

Is there a way to configure ATMOS_PROFILE and identity globally so they apply to all MCP servers by default, rather than repeating them in each server definition?
PS: I think the links for the last office hour are missing on the channel.
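In the meantime, plain YAML anchors can at least de-duplicate the repeated settings within a single file (standard YAML, not an Atmos feature; server names and values mirror my example above):

```yaml
mcp:
  servers:
    server-one:
      env: &mcp-env
        ATMOS_PROFILE: 'managers'
      identity: &mcp-identity "core-root/terraform"
    server-two:
      env: *mcp-env
      identity: *mcp-identity
```

Note that anchors only work within one YAML document, so this doesn't help if the servers are defined across several files.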
Mateusz Osiński 23 days ago
Hi guys! I have two more quick questions:
1. How do you manage the Terraform version in the repo? Is there any chance to define the TF version only in the atmos.yaml file, so there is no need to define required_version everywhere?
2. How do you format the .tf files? Is there any standard at CloudPosse, or just terraform fmt -recursive? Or should we use https://github.com/cloudposse/github-action-auto-format?
Miguel Zablah 24 days ago
Hey guys! I have a question about the atmos describe dependents <component> -s <stack> cmd.
It takes 4m when I run it to find 1 dependency. Is this normal, or am I doing something wrong?
Mateusz Osiński 24 days ago
Hi! Quick question about Atmos config setup.
In our repo we currently have two copies of atmos.yaml:
• one at the repo root (devops/atmos.yaml)
• another baked into our container image at rootfs/usr/local/etc/atmos/atmos.yaml
Right now these two files are 1:1 identical (I diffed them).
In CI, our GitHub workflows pass atmos-config-path: ${{ vars.ATMOS_CONFIG_PATH }}, so Atmos should be using the config file from the checked-out repo.
I'd like to simplify this and move to a single atmos.yaml as the source of truth. Is it safe (and idiomatic in CloudPosse setups) to:
• keep only the root atmos.yaml,
• have the Docker image copy that file into /usr/local/etc/atmos/atmos.yaml, and
• point CI (ATMOS_CONFIG_PATH) at the root file?
Or do you recommend keeping a separate, baked-in config for the image in addition to the repo one? Any best-practice guidance (and pitfalls to avoid) would be really appreciated.
Paavo Pokkinen 24 days ago
Can Atmos support collecting different outputs from a particular component across stacks, and injecting them into another?
For example: I am deploying Argo CD once to a management cluster. There can potentially be X number of workload clusters (a separate component). For Argo CD, I need to create a secret on the management cluster to make it aware of the workload clusters that exist.
The terraform.state function can only get an output of one component in the current or a different stack; it cannot combine across a range of stacks.
How should I solve collecting outputs and merging them? There are plenty of potential use cases where this pattern emerges.
Alexandre Feblot 27 days ago (edited)
Hi,
Atmos Pro: 403. What could I be doing wrong?
I added a new repo to Atmos Pro (and removed the first one just in case), so only one repo is installed.
Running atmos describe affected ... --upload does retrieve and print the stack information, including the workspace_id, but fails when uploading:

ERRO Pro API Error operation=UploadAffectedStacks request="" status=403 success=false trace_id=b11636f3f0e1e158947bd5abfe7a4d35 error_message="" context=map[]
# Error
**Error:** failed to upload stacks API response error: API request failed with
status 403 (trace_id: b11636f3f0e1e158947bd5abfe7a4d35)
Martin Bornhold 27 days ago
Hi all, this is my first contact, happy to get in touch with you 🙂
I am doing a PoC with atmos and spacelift.io. I read about the integration between atmos and Spacelift and tried to follow the docs. I was able to create Spacelift spaces and stacks with your provided TF modules: https://github.com/cloudposse-terraform-components/aws-spacelift-admin-stack and https://github.com/cloudposse-terraform-components/aws-spacelift-spaces.
But now I am blocked following the documentation, because it mentions the below scripts to be used in the Spacelift stack hooks. For example:

before_init:
  - spacelift-configure-paths
  - spacelift-write-vars
  - spacelift-tf-workspace

Example copied from https://github.com/cloudposse-terraform-components/aws-spacelift-admin-stack?tab=readme-ov-file#usage
Obviously these scripts do not exist in the default Spacelift runner image, and the stack run fails. Just commenting them out makes the default OpenTofu workflow fail. I searched for hours but was not able to find examples or more information about these scripts. Maybe someone can point me in the right direction and help me find these scripts? Thanks a lot 🙂
Charles Smith 28 days ago
Question. I'm trying out using name_template: "{{ .vars.stage }}-{{ .vars.region }}" and it's working everywhere. Stacks validate fine. I can plan all components and stacks as expected, but when I use atmos describe affected it returns the error:

Error: template: describe-stacks-name-template:1:26: executing "describe-stacks-name-template" at <.vars.region>: map has no entry for key "region"

Am I missing something, or is this a bug with atmos describe affected?
I have even run atmos describe stacks, and I can see the region var in all component instances in all stacks.
Charles 28 days ago
Is it possible to set a default identity for a specific stack?
Brandon 28 days ago
Hello, I have a deploy-layers workflow in which I run a plan before applying each layer, and I wanted to know if there is a way to have atmos skip a workflow step if the plan step didn't detect any changes?

deploy-layers:
  description: "Deploy infrastructure layers with plans between each layer"
  steps:
    - command: terraform plan network
    - command: terraform apply network --from-plan -auto-approve
    - command: terraform plan security-groups
    - command: terraform apply security-groups --from-plan -auto-approve
    - command: terraform plan eks-cluster
    - command: terraform apply eks-cluster --from-plan -auto-approve
    - command: terraform plan eks-bootstrap
    - command: terraform apply eks-bootstrap --from-plan -auto-approve
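One idea worth sketching (assumptions: that workflow steps support type: shell, that extra flags after the component are passed through to terraform, and that mystack stands in for the real stack; -detailed-exitcode is standard terraform and exits 2 when the plan contains changes):

```yaml
deploy-layers:
  description: "Apply each layer only if its plan detected changes"
  steps:
    - type: shell
      name: plan-apply-network
      command: |
        # terraform plan -detailed-exitcode: 0 = no changes, 2 = changes, 1 = error
        atmos terraform plan network -s mystack -- -detailed-exitcode
        rc=$?
        if [ "$rc" -eq 2 ]; then
          atmos terraform apply network -s mystack --from-plan -auto-approve
        elif [ "$rc" -ne 0 ]; then
          exit "$rc"
        fi
```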
Alexandre Feblot 28 days ago (edited)
Atmos Pro with Native CI
Hi,
Is it already possible to use Atmos Pro without cloudposse/github-action-atmos-*, just with atmos native CI support? The two Atmos Pro examples I could find still rely on those actions.
If it is already possible, full example pipelines would be very welcome, mostly how to deal with atmos describe affected and pass the info to Atmos Pro.
Would it be enough to just run atmos describe affected --upload=true?
Elon 28 days ago
Hi,
atmos terraform apply and plan have an --all flag to deploy all components in a stack. However, I did not see that init and destroy have an --all flag as well. Will it be added in the future? Because if I want to delete all components in a stack, I have to do it one by one.
If I'm using:

provision:
  backend:
    enabled: true

Atmos would create the S3 bucket for the backend. It only works with init first, so apply and/or plan do not work and throw an error that the backend does not exist. In this case, I also have to init each component in the stack, so adding an --all flag to init would be beneficial.
Zack 29 days ago
Atmos v1.112.0: playing with JIT and a 0s TTL, and it looks like it's not generating the varfiles or backend files 😭
Alexandre Feblot 29 days ago
Hi,
I don't understand what is wrong here. Running atmos list affected --ref refs/heads/main fails to authenticate to the state bucket.
• I'm authenticated using "atmos auth" with an account-admin identity set as the default one
• atmos tf plan ... works fine
• using atmos 1.210.0

Atmos.yaml auth section:

auth:
  providers:
    my-sso:
      kind: aws/iam-identity-center
      start_url: https://mycorp.awsapps.com/start
      region: eu-west-1
      default: true
  identities:
    account-admin:
      kind: aws/permission-set
      via:
        provider: my-sso
      principal:
        name: AdministratorAccess
      account:
        id: "000000000000"
      default: true

The state bucket declaration:

terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: false
      use_lockfile: true
      bucket: "my-state-bucket"
      key: "terraform.tfstate"
      region: eu-west-1
  provision:
    backend:
      enabled: true
    workdir:
      enabled: true

Debug log:
DEBU Found component 'mycomponent' in the stack 'mystack' in the stack manifest 'path/to/mycomponent'
DEBU Resolved component path type=terraform component=mycomponent resolved_path=.../components/terraform/mycomponent base_path=.../components/terraform env_override=false
DEBU Component has auth config with default identity, creating component-specific AuthManager component=mycomponent stack=mystack
DEBU CreateAndAuthenticateManager called identityName="" hasAuthConfig=true
DEBU Loading stack configs for auth identity defaults
DEBU Loading stack files for auth defaults count=6
DEBU No default identities found in stack configs
DEBU System keyring not available, using no-op keyring (will use credentials from files/environment) error="system keyring not available: The name org.freedesktop.secrets was not provided by any .service files"
DEBU Auth realm computed realm=1ad85013bbd5d0d4 source=config
DEBU System keyring not available, using no-op keyring (will use credentials from files/environment) error="system keyring not available: The name org.freedesktop.secrets was not provided by any .service files"
DEBU Auth realm computed realm=1ad85013bbd5d0d4 source=config
DEBU Authentication chain discovered identity=account-admin chainLength=2 chain="[my-sso account-admin]"
DEBU Checking cached credentials chainIndex=1 identityName=account-admin
DEBU Noop keyring: credentials managed externally alias=account-admin realm=1ad85013bbd5d0d4
DEBU Credentials not in keyring, trying identity storage identity=account-admin
DEBU Loading AWS credentials from files credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config profile=account-admin region=eu-west-1
DEBU Loaded expiration from credentials file metadata profile=account-admin expiration=2026-03-25T15:37:02+01:00
DEBU Successfully loaded AWS credentials from files profile=account-admin region=eu-west-1 has_session_token=true has_expiration=true
DEBU Successfully loaded credentials from identity storage identity=account-admin
DEBU Found valid cached credentials chainIndex=1 identityName=account-admin expiration="2026-03-25 15:37:02 +0100 CET"
DEBU Found valid cached credentials validFromIndex=1 chainStep=account-admin
DEBU Noop keyring: credentials managed externally alias=account-admin realm=1ad85013bbd5d0d4
DEBU Credentials not in keyring, trying identity storage identity=account-admin
DEBU Loading AWS credentials from files credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config profile=account-admin region=eu-west-1
DEBU Loaded expiration from credentials file metadata profile=account-admin expiration=2026-03-25T15:37:02+01:00
DEBU Successfully loaded AWS credentials from files profile=account-admin region=eu-west-1 has_session_token=true has_expiration=true
DEBU Successfully loaded credentials from identity storage identity=account-admin
DEBU Starting authentication from cached credentials startIndex=1 identity=account-admin
DEBU Authenticating identity chain chainLength=2 startIndex=2 chain="[my-sso account-admin]"
DEBU Writing AWS credentials provider=my-sso identity=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials has_session_token=true
DEBU Acquired file lock lock_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials.lock
DEBU Writing credentials to file profile=account-admin access_key_prefix=ASIA... has_session_token=true expiration=2026-03-25T15:37:02+01:00
DEBU Successfully wrote AWS credentials provider=my-sso identity=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials
DEBU Writing AWS config provider=my-sso identity=account-admin config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config region=eu-west-1 output_format=""
DEBU Acquired file lock lock_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config.lock
DEBU Successfully wrote AWS config provider=my-sso identity=account-admin config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config
DEBU Set AWS auth context profile=account-admin credentials=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config region=eu-west-1
DEBU Preparing AWS environment for Atmos-managed credentials profile=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config
DEBU Prepared AWS environment profile=account-admin
DEBU Deleted legacy keyring entry (pre-realm) alias=my-sso
DEBU Deleted legacy keyring entry (pre-realm) alias=account-admin
DEBU Skipping keyring cache for session tokens in WhoamiInfo identity=account-admin
DEBU Created component-specific AuthManager component=mycomponent stack=mystack identityChain="[my-sso account-admin]"
DEBU Populated AuthContext from AuthManager for template functions
DEBU Found component 'mycomponent' in the stack 'mystack' in the stack manifest 'path/to/mycomponent'
DEBU Resolved component path type=terraform component=mycomponent resolved_path=.../components/terraform/mycomponent base_path=.../components/terraform env_override=false
DEBU Using standard AWS SDK credential resolution (no auth context provided)
DEBU Using explicit region region=eu-west-1
DEBU Loading AWS SDK config num_options=1
DEBU Successfully loaded AWS SDK config region=eu-west-1
DEBU Failed to read Terraform state file from the S3 bucket attempt=1 file=mycomponent/mystack/terraform.tfstate bucket=my-state-bucket error="operation error S3: GetObject, get identity: get credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request canceled, context deadline exceeded" backoff=1s
Brandon 30 days ago
Anyone using the native CI feature with atmos workflows? I'm not getting the plan artifacts I'd expect when running my workflow from Github actions
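For context on how workflows like this are usually defined: a minimal Atmos workflow manifest looks roughly like the sketch below (the component, stack, and workflow names here are hypothetical, not taken from Brandon's setup); each step defaults to running an Atmos subcommand.

```yaml
# stacks/workflows/networking.yaml — hypothetical workflow manifest
workflows:
  plan-networking:
    description: "Plan the core networking components"
    steps:
      # steps default to the 'atmos' type, so each command is an Atmos subcommand
      - command: terraform plan vpc -s plat-ue2-dev
      - command: terraform plan subnets -s plat-ue2-dev
```

Invoked as `atmos workflow plan-networking -f networking`; whether plan artifacts are produced is then down to how the CI integration wraps these steps.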
Jonathan Rose 30 days ago (edited)
anyone use semantic-release (semantic-release/semantic-release: fully automated version management and package publishing) for auto-versioning their service catalog? I'm trying to figure out how I move away from defining a version manifest while preventing tagging every commit
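For anyone exploring the same route: semantic-release is configured per repo, and a minimal config might look like the sketch below (the plugin set is an assumption; adjust to taste). Because releases are only cut for releasable commits (e.g. `feat:`/`fix:` under conventional commits), it avoids tagging every commit.

```yaml
# .releaserc.yaml — minimal sketch; plugin choice is an assumption
branches:
  - main
plugins:
  - "@semantic-release/commit-analyzer"         # derives the next version from commit messages
  - "@semantic-release/release-notes-generator" # builds the changelog text
  - "@semantic-release/github"                  # creates the GitHub release and tag
```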
Elon about 1 month ago
Hi, I have a quick question.
I use !terraform.state and dependencies.components between components in the same stack, for example, VPC and subnets.
When I run atmos terraform apply --all -s stack-name, it does not work as expected and throws an error while trying to create the subnets after the VPC is created. The subnets provision successfully only after a second deployment.
Is this expected behavior? I assumed using dependencies and terraform.state would create all components in a stack in order without errors.
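For reference, Atmos component dependencies are declared under settings.depends_on; a sketch of that shape is below (component names and the output name are hypothetical). Whether a given Atmos version honors this ordering during `terraform apply --all` is exactly the open question here.

```yaml
# Stack manifest sketch: declare that 'subnets' depends on 'vpc' in the same stack
components:
  terraform:
    subnets:
      settings:
        depends_on:
          1:
            component: vpc
      vars:
        # read the VPC id from the vpc component's Terraform state
        vpc_id: !terraform.state vpc vpc_id
```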
Alek about 1 month ago
Hi!
There's been a change to providers.tf rolled out to many AWS Security Suite components (AWS Config/GuardDuty/SecurityHub), which broke my legacy GitOps Atmos workflow. More details here:
• https://github.com/cloudposse-terraform-components/aws-guardduty/issues/56
I did not make a PR as I'd like to know if it's my stack that is outdated with respect to the newest Atmos best practices, or if that is a genuine regression.
Thanks so much 🙏
cc: @Andriy Knysh (Cloud Posse)
Charles Smith about 1 month ago
Hello so many new things added to atmos!
As I'm setting up a new atmos repo for my org I was putting a lot of thought into whether we would adopt your null-label or context.tf mixin in all our components. I figured it was a must-have until I stumbled on this new recommendation:
https://atmos.tools/cli/configuration/stacks/#stack-naming-with-name_template-recommended
If I understand that correctly, then if we make sure we include a common set of our own variables in all components, we can actually define/name our stacks using our own naming vars instead of just [namespace, tenant, stage, environment]?
Are there any major gotchas to heading down this path?
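For reference, the name_template the linked docs describe lives in atmos.yaml and can reference any component variables; a sketch with hypothetical variable names:

```yaml
# atmos.yaml sketch — stack names built from your own naming vars (names hypothetical)
stacks:
  name_template: "{{ .vars.org }}-{{ .vars.region }}-{{ .vars.env }}"
```

One consequence worth noting: every component in every stack must define the variables the template references, or stack-name rendering fails.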
Jonathan Rose about 1 month ago (edited)
I am using abstract components with inheritance. Is the expectation that all components for a given stack are now written to a single remote state? Or is it because I need these defined to avoid drift?

vars:
  labels_as_tags: []
  label_value_case: none
  regex_replace_chars: "/[^a-zA-Z0-9\\s-]/"

Terraform has been successfully initialized!
Switched to workspace "cfsb-aws-org-units".
aws_organizations_organizational_unit.this[0]: Preparing import... [id=ou-8wia-m8s04a9h]
aws_organizations_organizational_unit.this[0]: Refreshing state... [id=ou-8wia-m8s04a9h]

Normally, I would expect the workspace to be formatted as {{ .atmos_stacks }}-{{ .atmos_components }}
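For what it's worth, the generated Terraform workspace name can be overridden per component via metadata; a sketch of that shape is below (the component name is hypothetical, and the exact token convention should be checked against the Atmos docs for your version).

```yaml
# Stack manifest sketch — override the generated workspace name for one component
components:
  terraform:
    aws-org-units:
      metadata:
        # tokens follow the same {token} convention as stacks.name_pattern
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}-{component}"
```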
Charles Smith about 1 month ago
Hello just coming back to atmos after about a year away. Amazing new additions to the tool BTW!
I'm working on setting up atmos to improve the terraform reuse and organization in my current role/org and I have a question about managing the terraform s3 state key.
In my previous role/org we explicitly used workspace_key_prefix all over our stacks to ensure state keys were what we expected. I can see now, however, there is new guidance and a name element in component metadata. Basically I'm talking about this recommendation:
https://atmos.tools/design-patterns/version-management/folder-based-versioning#recommended-use-metadataname
I like this idea as it's very clean and readable in the stack config.
Sorry, now to my actual question. Is it true that if you rely on the default atmos behaviour of using the component (or its new name) that your state bucket will be a flat listing of all your deployed components? I like to have a little hierarchy in my components to add some organization, and I liked that when I set workspace_key_prefix this path structure can be mirrored in the state bucket. But when I don't set workspace_key_prefix it seems that atmos always replaces all / in the component (or name) with -.
Is there any way to control this or do I need to keep specifying workspace_key_prefix if I want a less flat state organization?
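For reference, the per-component backend override being discussed looks roughly like the sketch below (the component path is hypothetical); workspace_key_prefix is a standard Terraform S3 backend argument, so Atmos passes it through to the generated backend config.

```yaml
# Stack manifest sketch — keep a folder-like hierarchy in the S3 state bucket
components:
  terraform:
    network/vpc:
      backend:
        s3:
          # mirrors the component's folder path instead of the flattened default
          workspace_key_prefix: network/vpc
```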
Jonathan Rose about 1 month ago
I'm reviewing Account Management | The Cloud Posse Reference Architecture and trying to understand how to establish an account factory that also ensures IAM management roles are deployed when an account is created
Paavo Pokkinen about 1 month ago
However, in the Cloud Posse AWS component catalog it looks like some components are extremely tiny, like just one S3 bucket or one IAM role. To me it feels a bit crazy to have just one bucket in a single state file.