atmos
Miguel Zablah · 1 day ago
Hey guys!
I have a question about Atmos MCP configuration that came up during
the latest office-hours meeting.
I've reviewed the docs at https://atmos.tools/cli/configuration/mcp and the
example at https://atmos.tools/gists/mcp-with-aws where ATMOS_PROFILE is set
for authentication.
Currently, I'm defining multiple MCP servers and need to set ATMOS_PROFILE and
identity for each one individually. For example:
mcp:
  servers:
    server-one:
      env:
        ATMOS_PROFILE: 'managers'
        identity: "core-root/terraform"
    server-two:
      env:
        ATMOS_PROFILE: 'managers'
        identity: "core-root/terraform"
Is there a way to configure ATMOS_PROFILE and identity globally so they apply to all MCP servers by default, rather than repeating them in each server definition?
PS: I think the links for the last office hour are missing on the channel.
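Not an official answer, but until a global option exists, standard YAML anchors and aliases may be enough to deduplicate the repeated block, assuming the config file is parsed by a YAML library that honors them (most do). A sketch reusing the values from the example above:

```yaml
# Hedged workaround: define the shared env once as an anchor (&shared_env)
# and alias it (*shared_env) into every other server.
mcp:
  servers:
    server-one:
      env: &shared_env
        ATMOS_PROFILE: 'managers'
        identity: "core-root/terraform"
    server-two:
      env: *shared_env
```

Note that anchors are resolved at parse time, so this only deduplicates the source file; each server still ends up with its own copy of the values.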
Mateusz Osiński · 3 days ago
Hi guys! I have two more quick questions:
1. How do you manage the Terraform version in the repo? Is there any chance to define the TF version only in the atmos.yaml file, so there is no need to define required_version everywhere?
2. How do you format the .tf files? Is there any standard at CloudPosse, or just terraform fmt -recursive? Or should we use https://github.com/cloudposse/github-action-auto-format?
Miguel Zablah · 3 days ago
Hey guys! I have a question about the atmos describe dependents <component> -s <stack> command.
It takes 4 minutes when I run it to find one dependency. Is this normal, or am I doing something wrong?
Mateusz Osiński · 4 days ago
Hi! Quick question about Atmos config setup.
In our repo we currently have two copies of atmos.yaml:
• one at the repo root (devops/atmos.yaml)
• another baked into our container image at rootfs/usr/local/etc/atmos/atmos.yaml
Right now these two files are 1:1 identical (I diffed them).
In CI, our GitHub workflows pass atmos-config-path: ${{ vars.ATMOS_CONFIG_PATH }}, so Atmos should be using the config file from the checked-out repo.
I’d like to simplify this and move to a single atmos.yaml as the source of truth. Is it safe (and idiomatic in CloudPosse setups) to:
• keep only the root atmos.yaml,
• have the Docker image copy that file into /usr/local/etc/atmos/atmos.yaml, and
• point CI (ATMOS_CONFIG_PATH) at the root file?
Or do you recommend keeping a separate, baked-in config for the image in addition to the repo one? Any best-practice guidance (and pitfalls to avoid) would be really appreciated.
Paavo Pokkinen · 4 days ago
Can Atmos support collecting different outputs across stacks from particular component, and injecting them to another?
For example: I am deploying Argo-CD once to a management cluster. There can potentially be X workload clusters (a separate component). For Argo-CD I need to create a secret on the management cluster to make it aware of the workload clusters that exist.
The terraform.state function can only get the output of one component in the current or a different stack; it cannot combine outputs across a range of stacks.
How should I solve the issue of collecting outputs and merging them? There are plenty of potential use cases where this pattern emerges.
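One workaround that stays within the documented one-component-one-stack shape of !terraform.state is to enumerate the known workload stacks explicitly and merge them into a single map var on the Argo-CD component. A hedged sketch; the component, stack, and output names below are all illustrative:

```yaml
# Hypothetical names throughout. Each !terraform.state call still targets a
# single component in a single stack, but the results are collected into one map
# that the Argo-CD component can iterate over to build its cluster secret.
components:
  terraform:
    argocd:
      vars:
        workload_clusters:
          plat-dev: !terraform.state eks-cluster plat-dev cluster_endpoint
          plat-prod: !terraform.state eks-cluster plat-prod cluster_endpoint
```

The obvious downside is that the list of stacks is hard-coded, so adding a workload cluster means touching this manifest too.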
Alexandre Feblot · 7 days ago (edited)
Hi,
Atmos Pro: 403. What can I be doing wrong?
I added a new repo to Atmos Pro (and removed the first one just in case), so only one repo is installed.
Running atmos describe affected ... --upload does retrieve and print the stack information, including the workspace_id, but fails uploading:
ERRO Pro API Error operation=UploadAffectedStacks request="" status=403 success=false trace_id=b11636f3f0e1e158947bd5abfe7a4d35 error_message="" context=map[]
# Error
**Error:** failed to upload stacks API response error: API request failed with
status 403 (trace_id: b11636f3f0e1e158947bd5abfe7a4d35)
Martin Bornhold · 7 days ago
Hi all, that's my first contact, happy to get in touch with you 🙂
I am doing a PoC with Atmos and spacelift.io. I read about the integration between Atmos and Spacelift and tried to follow the docs. I was able to create Spacelift spaces and stacks with your provided TF modules: https://github.com/cloudposse-terraform-components/aws-spacelift-admin-stack and https://github.com/cloudposse-terraform-components/aws-spacelift-spaces.
But now I am blocked following the documentation because it mentions the below scripts to be used in the Spacelift stack hooks. For example:
before_init:
  - spacelift-configure-paths
  - spacelift-write-vars
  - spacelift-tf-workspace
Example copied from https://github.com/cloudposse-terraform-components/aws-spacelift-admin-stack?tab=readme-ov-file#usage
Obviously these scripts do not exist in the default Spacelift runner image, and the stack run fails. Just commenting them out makes the default OpenTofu workflow fail. I searched for hours but was not able to find examples or more information about these scripts. Maybe someone can point me in the right direction and help me find these scripts? Thanks a lot 🙂
Charles Smith · 7 days ago
Question. I'm trying out using name_template: "{{ .vars.stage }}-{{ .vars.region }}" and it's working everywhere. Stacks validate fine. I can plan all components and stacks as expected, but when I use atmos describe affected it returns the error:
Error: template: describe-stacks-name-template:1:26: executing "describe-stacks-name-template" at <.vars.region>: map has no entry for key "region"
Am I missing something, or is this a bug with atmos describe affected?
I have even run atmos describe stacks and I can see the region var in all component instances in all stacks.
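One diagnostic worth trying: if Sprig functions are available in name_template (they are in most Atmos template contexts, though I have not verified this one specifically), a default guard would show whether describe affected is evaluating the template against some component instance that genuinely lacks region:

```yaml
# Hedged sketch; "global" is an arbitrary fallback value, and the Sprig
# `default` function is an assumption for this particular template context.
stacks:
  name_template: '{{ .vars.stage }}-{{ .vars.region | default "global" }}'
```

If the error disappears and a stack named *-global shows up, the template is being evaluated against an instance without region, rather than failing on your real stacks.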
Charles · 8 days ago
Is it possible to set a default identity for a specific stack?
Brandon · 8 days ago
Hello, I have a deploy-layers workflow in which I run a plan between applying each layer, and I wanted to know if there is a way to have Atmos skip a workflow step if the plan step didn't detect any changes?
deploy-layers:
  description: "Deploy infrastructure layers with plans between each layer"
  steps:
    - command: terraform plan network
    - command: terraform apply network --from-plan -auto-approve
    - command: terraform plan security-groups
    - command: terraform apply security-groups --from-plan -auto-approve
    - command: terraform plan eks-cluster
    - command: terraform apply eks-cluster --from-plan -auto-approve
    - command: terraform plan eks-bootstrap
    - command: terraform apply eks-bootstrap --from-plan -auto-approve
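As far as I know there is no built-in conditional step in Atmos workflows, but Terraform's -detailed-exitcode (exit 0 = no changes, 2 = changes, other non-zero = error) lets each plan/apply pair collapse into one shell step that decides for itself. A hedged sketch for the first layer only, assuming shell-type workflow steps and that flags after -- are passed through to Terraform; the -s dev stack name is illustrative:

```yaml
deploy-layers:
  description: "Deploy a layer, applying only when the plan found changes"
  steps:
    - type: shell
      command: |
        # exit 0: no changes -> skip apply; exit 2: changes -> apply; else fail
        code=0
        atmos terraform plan network -s dev -- -detailed-exitcode || code=$?
        if [ "$code" -eq 2 ]; then
          atmos terraform apply network -s dev --from-plan -auto-approve
        elif [ "$code" -ne 0 ]; then
          exit "$code"
        fi
```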
Alexandre Feblot · 8 days ago (edited)
Atmos Pro with Native CI
Hi,
Is it already possible to use Atmos Pro without cloudposse/github-action-atmos-*, just with Atmos native CI support? The two Atmos Pro examples I could find still rely on those actions.
If already possible, full example pipelines would be very welcome, mostly how to deal with atmos describe affected and pass the info to Atmos Pro.
Would it be enough to just run atmos describe affected --upload=true ?
Elon · 8 days ago
Hi,
atmos terraform apply and plan have an --all flag to deploy all components in a stack. However, I did not see that init and destroy have an --all flag as well. Will it be added in the future? Because if I want to delete all components in a stack, I have to do it one by one.
If I'm using:
provision:
  backend:
    enabled: true
Atmos would create the S3 bucket for the backend. It only works with init first, so apply and/or plan do not work and throw an error that the backend does not exist. In this case, I also have to init each component in the stack, so adding an --all flag to init would be beneficial.
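Until such a flag exists, a custom command can approximate init --all by looping over the stack's components. A hedged sketch for atmos.yaml; the command name is invented, and it assumes atmos list components -s <stack> prints one component per line:

```yaml
commands:
  - name: "tf-init-all"
    description: "Run terraform init for every component in a stack"
    arguments:
      - name: stack
        description: "Stack name"
    steps:
      - |
        for c in $(atmos list components -s {{ .Arguments.stack }}); do
          atmos terraform init "$c" -s {{ .Arguments.stack }}
        done
```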
Zack · 9 days ago
Atmos v1.112.0: playing with JIT 0s TTL, and it looks like it's not generating the varfiles or backend files 😭
Alexandre Feblot · 9 days ago
Hi,
I don't understand what is wrong here. Running atmos list affected --ref refs/heads/main fails to authenticate to the state bucket.
• I'm authenticated using "atmos auth" with an account-admin identity set as the default one
• atmos tf plan ... works fine
• using atmos 1.210.0
The atmos.yaml auth section:
auth:
  providers:
    my-sso:
      kind: aws/iam-identity-center
      start_url: https://mycorp.awsapps.com/start
      region: eu-west-1
      default: true
  identities:
    account-admin:
      kind: aws/permission-set
      via:
        provider: my-sso
      principal:
        name: AdministratorAccess
        account:
          id: "000000000000"
      default: true
The state bucket declaration:
terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: false
      use_lockfile: true
      bucket: "my-state-bucket"
      key: "terraform.tfstate"
      region: eu-west-1
  provision:
    backend:
      enabled: true
    workdir:
      enabled: true
Debug log:
DEBU Found component 'mycomponent' in the stack 'mystack' in the stack manifest 'path/to/mycomponent'
DEBU Resolved component path type=terraform component=mycomponent resolved_path=.../components/terraform/mycomponent base_path=.../components/terraform env_override=false
DEBU Component has auth config with default identity, creating component-specific AuthManager component=mycomponent stack=mystack
DEBU CreateAndAuthenticateManager called identityName="" hasAuthConfig=true
DEBU Loading stack configs for auth identity defaults
DEBU Loading stack files for auth defaults count=6
DEBU No default identities found in stack configs
DEBU System keyring not available, using no-op keyring (will use credentials from files/environment) error="system keyring not available: The name org.freedesktop.secrets was not provided by any .service files"
DEBU Auth realm computed realm=1ad85013bbd5d0d4 source=config
DEBU System keyring not available, using no-op keyring (will use credentials from files/environment) error="system keyring not available: The name org.freedesktop.secrets was not provided by any .service files"
DEBU Auth realm computed realm=1ad85013bbd5d0d4 source=config
DEBU Authentication chain discovered identity=account-admin chainLength=2 chain="[my-sso account-admin]"
DEBU Checking cached credentials chainIndex=1 identityName=account-admin
DEBU Noop keyring: credentials managed externally alias=account-admin realm=1ad85013bbd5d0d4
DEBU Credentials not in keyring, trying identity storage identity=account-admin
DEBU Loading AWS credentials from files credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config profile=account-admin region=eu-west-1
DEBU Loaded expiration from credentials file metadata profile=account-admin expiration=2026-03-25T15:37:02+01:00
DEBU Successfully loaded AWS credentials from files profile=account-admin region=eu-west-1 has_session_token=true has_expiration=true
DEBU Successfully loaded credentials from identity storage identity=account-admin
DEBU Found valid cached credentials chainIndex=1 identityName=account-admin expiration="2026-03-25 15:37:02 +0100 CET"
DEBU Found valid cached credentials validFromIndex=1 chainStep=account-admin
DEBU Noop keyring: credentials managed externally alias=account-admin realm=1ad85013bbd5d0d4
DEBU Credentials not in keyring, trying identity storage identity=account-admin
DEBU Loading AWS credentials from files credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config profile=account-admin region=eu-west-1
DEBU Loaded expiration from credentials file metadata profile=account-admin expiration=2026-03-25T15:37:02+01:00
DEBU Successfully loaded AWS credentials from files profile=account-admin region=eu-west-1 has_session_token=true has_expiration=true
DEBU Successfully loaded credentials from identity storage identity=account-admin
DEBU Starting authentication from cached credentials startIndex=1 identity=account-admin
DEBU Authenticating identity chain chainLength=2 startIndex=2 chain="[my-sso account-admin]"
DEBU Writing AWS credentials provider=my-sso identity=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials has_session_token=true
DEBU Acquired file lock lock_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials.lock
DEBU Writing credentials to file profile=account-admin access_key_prefix=ASIA... has_session_token=true expiration=2026-03-25T15:37:02+01:00
DEBU Successfully wrote AWS credentials provider=my-sso identity=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials
DEBU Writing AWS config provider=my-sso identity=account-admin config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config region=eu-west-1 output_format=""
DEBU Acquired file lock lock_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config.lock
DEBU Successfully wrote AWS config provider=my-sso identity=account-admin config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config
DEBU Set AWS auth context profile=account-admin credentials=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config region=eu-west-1
DEBU Preparing AWS environment for Atmos-managed credentials profile=account-admin credentials_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/credentials config_file=~/.config/atmos/1ad85013bbd5d0d4/aws/my-sso/config
DEBU Prepared AWS environment profile=account-admin
DEBU Deleted legacy keyring entry (pre-realm) alias=my-sso
DEBU Deleted legacy keyring entry (pre-realm) alias=account-admin
DEBU Skipping keyring cache for session tokens in WhoamiInfo identity=account-admin
DEBU Created component-specific AuthManager component=mycomponent stack=mystack identityChain="[my-sso account-admin]"
DEBU Populated AuthContext from AuthManager for template functions
DEBU Found component 'mycomponent' in the stack 'mystack' in the stack manifest 'path/to/mycomponent'
DEBU Resolved component path type=terraform component=mycomponent resolved_path=.../components/terraform/mycomponent base_path=.../components/terraform env_override=false
DEBU Using standard AWS SDK credential resolution (no auth context provided)
DEBU Using explicit region region=eu-west-1
DEBU Loading AWS SDK config num_options=1
DEBU Successfully loaded AWS SDK config region=eu-west-1
DEBU Failed to read Terraform state file from the S3 bucket attempt=1 file=mycomponent/mystack/terraform.tfstate bucket=my-state-bucket error="operation error S3: GetObject, get identity: get credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request canceled, context deadline exceeded" backoff=1s
Brandon · 10 days ago
Anyone using the native CI feature with Atmos workflows? I'm not getting the plan artifacts I'd expect when running my workflow from GitHub Actions.
Jonathan Rose · 10 days ago (edited)
Anyone use semantic-release (semantic-release/semantic-release) for auto-versioning their service catalog? I'm trying to figure out how to move away from defining a version manifest while preventing tagging every commit.
Elon · 10 days ago
Hi, I have a quick question.
I use !terraform.state and dependencies.components between components in the same stack, for example, VPC and subnets.
When I run atmos terraform apply --all -s stack-name, it does not work as expected and throws an error while trying to create the subnets after the VPC is created. The subnets provision successfully only after a second deployment.
Is this expected behavior? I assumed using dependencies and terraform.state would create all components in a stack in order without errors.
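For comparison, the documented place to declare ordering in stack manifests is settings.depends_on; whether apply --all derives ordering from !terraform.state references alone is exactly the open question here. A hedged sketch using the component names from the question:

```yaml
# Explicit ordering declaration: subnets depends on vpc in the same stack.
components:
  terraform:
    subnets:
      settings:
        depends_on:
          1:
            component: vpc
```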
Alek · 10 days ago
Hi!
There's been a change to providers.tf rolled out to many AWS Security Suite components (AWS Config/GuardDuty/SecurityHub), which broke my legacy GitOps Atmos workflow. More details here:
• https://github.com/cloudposse-terraform-components/aws-guardduty/issues/56
I did not make a PR, as I'd like to know if it's my stack that is outdated with respect to the newest Atmos best practices, or if this is a genuine regression.
Thanks so much 🙏
cc: @Andriy Knysh (Cloud Posse)
Charles Smith · 10 days ago
Hello, so many new things added to Atmos!
As I'm setting up a new Atmos repo for my org, I put a lot of thought into whether we would adopt your null-label or context.tf mixin in all our components. I figured it was a must-have until I stumbled on this new recommendation:
https://atmos.tools/cli/configuration/stacks/#stack-naming-with-name_template-recommended
If I understand that correctly, then if we make sure we include a common set of our own variables in all components, we can actually define/name our stacks using our own naming vars instead of just [namespace, tenant, stage, environment]?
Are there any major gotchas to heading down this path?
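For what it's worth, the idea described in the question would look something like this; all variable names here are invented, and every component would need to define them:

```yaml
# Hedged sketch: stacks named from your own common vars instead of the
# null-label set [namespace, tenant, environment, stage].
stacks:
  name_template: "{{ .vars.org }}-{{ .vars.platform }}-{{ .vars.env }}"
```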
Jonathan Rose · 11 days ago (edited)
I am using abstract components with inheritance. Is the expectation that all components for a given stack are now written to a single remote state? Or is it because I need these defined to avoid drift?
Normally, I would expect the workspace to be formatted as {{ .atmos_stacks }}-{{ .atmos_components }}.
vars:
  labels_as_tags: []
  label_value_case: none
  regex_replace_chars: "/[^a-zA-Z0-9\\s-]/"
Terraform has been successfully initialized!
Switched to workspace "cfsb-aws-org-units".
aws_organizations_organizational_unit.this[0]: Preparing import... [id=ou-8wia-m8s04a9h]
aws_organizations_organizational_unit.this[0]: Refreshing state... [id=ou-8wia-m8s04a9h]
Charles Smith · 11 days ago
Hello just coming back to atmos after about a year away. Amazing new additions to the tool BTW!
I'm working on setting up atmos to improve the terraform reuse and organization in my current role/org and I have a question about managing the terraform s3 state key.
In my previous role/org we explicitly used workspace_key_prefix all over our stacks to ensure state keys were what we expected. I can see now, however, there is new guidance and a name element in component metadata. Basically I'm talking about this recommendation:
https://atmos.tools/design-patterns/version-management/folder-based-versioning#recommended-use-metadataname
I like this idea, as it's very clean and readable in the stack config.
Sorry, now to my actual question. Is it true that if you rely on the default Atmos behaviour of using the component (or its new name) that your state bucket will be a flat listing of all your deployed components? I like to have a little hierarchy in my components to add some organization, and I liked that when I set workspace_key_prefix this path structure could be mirrored in the state bucket. But when I don't set workspace_key_prefix, it seems that Atmos always replaces all / in the component (or name) with -.
Is there any way to control this, or do I need to keep specifying workspace_key_prefix if I want a less flat state organization?
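The explicit approach from the question, for reference: a hedged sketch that keeps hierarchy in the bucket by pinning the prefix per component (all names illustrative):

```yaml
components:
  terraform:
    vpc:
      metadata:
        component: networking/vpc            # folder under components/terraform
      backend:
        s3:
          workspace_key_prefix: networking/vpc   # preserves the / hierarchy in S3
```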
Jonathan Rose · 11 days ago
I'm reviewing "Account Management" in the Cloud Posse Reference Architecture and trying to understand how to establish an account factory that also ensures IAM management roles are deployed when an account is created.
Paavo Pokkinen · 11 days ago
However, in the Cloud Posse AWS component catalog it looks like some components are extremely tiny, like just one S3 bucket or one IAM role. To me it feels a bit crazy to have just one bucket within a single state file.
Paavo Pokkinen · 11 days ago
I have a bit of a difficult time scoping how large or small the components should be.
There’s a general recommendation to avoid factories (e.g. for_each) inside a component. Now, if I take managing Cloudflare DNS records as an example, this would mean every single DNS record should be its own component instance.
Perhaps in this case it would be better to put that DNS record directly within the other components where it is needed, but on the other hand, there’s also a recommendation to limit providers to one or two per component.
For example: I now have a component to create a GKE cluster. I have a separate component for enabling the GKE gateway, which includes the google provider, the kubernetes provider (to create the gateway manifest), the TLS provider for provisioning a custom origin cert, and the cloudflare provider to create a record that points to the gateway IP and set the origin cert there. Is this too small, too big, or the right size? 😄
Another requirement I have is the ability to provision API keys (like Google Maps, Vertex, etc.). Currently I am planning to have them right in the component that does the entire project creation, as having one component to create API keys, or worse, just one type of API key, sounds a bit silly.
Elon · 12 days ago
Hi,
General question, what's the difference between aws modules and aws components?
For example:
https://docs.cloudposse.com/modules/library/aws/s3-bucket/
Vs
https://docs.cloudposse.com/components/library/aws/s3-bucket/
Which one should I use?
Kyle Decot · 14 days ago (edited)
Is there a way to bypass / disable the auto "JSON-ification" of strings obtained via !terraform.state? For example:
terraform:
  secretsmanager:
    vars:
      hello_world: !terraform.state hello .world # string containing: "{\"hello\": \"world\"}"
Here I want the "raw" string, however Atmos is trying to help me and instead gives me a map 😢
Jonathan Rose · 15 days ago
Does cloudposse/terraform-github-repository at v1.0.0 not support configuring copilot_code_review under rulesets?
erik · 15 days ago
I spent some time learning how to use the AWS MCP servers with Atmos. TL;DR: it works very well.
I created some custom commands, atmos mcp aws install to install them, and then configured my .mcp.json to run the MCP using atmos auth.
DE · 15 days ago (edited)
Question... we are implementing Versioning for our components as well as Resource Templates. Following documentation, but expanding the versions to Semantic Versioning as shown below. Can we assume that Vendoring Components would also work under this Versioning schema?
Component Versioning:
components/terraform/component-1
├── v1.0.0
│ └── tf-files.tf
└── v2.0.0
└── tf-files.tf
Resource Template Versioning:
catalog/resource_templates/component-1
├── v1.0.0.yaml
└── v2.0.0.yaml
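For the vendoring half of the question: vendor manifests can template the target path with {{ .Version }}, which maps cleanly onto the per-version folders above. A hedged sketch; the source URL and repo path are illustrative:

```yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: components
spec:
  sources:
    - component: "component-1"
      # Hypothetical upstream repo; {{ .Version }} resolves to the value below.
      source: "github.com/acme/terraform-components.git//component-1?ref={{ .Version }}"
      version: "v2.0.0"
      targets:
        - "components/terraform/component-1/{{ .Version }}"
```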
erik · 15 days ago
Really loving these status checks in atmos native CI
Jonathan Rose · 15 days ago
I am getting a weird bug in the latest version of my service catalog.
atmos vendor pull works fine locally, but yields the following error in CI:
+ atmos vendor pull
# Error
**Error:** the '--everything' flag is set, but vendor config file does not exist
## Explanation
vendor
Jonathan Rose · 15 days ago
Any idea how to resolve?
Run atmos describe affected --format=matrix
  shell: /usr/bin/bash -e {0}
  env:
    ATMOS_PROFILE: ci
    ATMOS_IDENTITY: cfsb-aws-config-acct
INFO Starting GitHub OIDC authentication provider=github-oidc
INFO GitHub OIDC authentication successful provider=github-oidc
# Error
**Error:** git reference not found on local filesystem: exit status 128
## Hints
💡 Make sure the ref is correct and was cloned by Git from the remote, or use
the '--clone-target-ref=true' flag to clone it.
💡 Refer to https://atmos.tools/cli/commands/describe/affected for more details.
Error: Process completed with exit code 128.
Paavo Pokkinen · 15 days ago
Is there any command like “run all (components) in a stack, in correct dependency order?”
Jonathan Rose · 15 days ago
@Erik Osterman (Cloud Posse) absolutely loving v1.210.0 so far. Only one thing I need to fix: I need the CI templates to hide the Terraform warnings.
Dan Hansen · 16 days ago
Checking to see if this behavior change in v1.210.0 was intentional. I'm guessing it's not? hooks.store-output.name evaluates to staging, but since we now don't process YAML functions when hydrating and running hooks, it evaluates to !template staging, raising **Error:** store "!template staging" not found in configuration
hooks:
  store-outputs:
    # Determine where we store outputs based on the project_id
    name: !template "{{ index .settings.context.project_to_store .settings.context.project_id }}"
Paavo Pokkinen · 16 days ago
Does Atmos make any assumptions about the directory structure under components/terraform? I’ve been placing components in sub-directories according to the service they mostly interact with, e.g. “gcp”, “aws”, “github”.
At least the terraform.output YAML function seems to make assumptions:
failed to generate backend file: open /Users/paveq/modernpath/live-infrastructure/components/terraform/gke/backend.tf.json: no such file or directory
The terraform.state function works, but I wonder if I should fix my structure.
erik · 17 days ago (edited)
In 1.210.0 we released Native CI support in atmos. It's 🧪 experimental in terms of maturity, but this is what will soon replace our github actions.
✅️ GitHub Status Checks
✅️ GitHub Job Summaries
✅️ GitHub Outputs
✅️ Atmos Toolchain (automatically install opentofu or almost any tool you need)
✅️ GitHub OIDC
That means this one command:
atmos terraform plan mycomponent -s dev-us-east-1
Can do the following:
1. Authenticate with OIDC
2. Install all tools like opentofu
3. Initialize the backend
4. Run Terraform
5. Post Status Checks
6. Post Job Summary
Coming soon:
• Planfile storage & retrieval: GitHub Artifacts
• Comment Support
https://atmos.tools/changelog/native-ci-integration
https://atmos.tools/cli/configuration/ci
E
erik17 days ago
Anyone here using GitLab?
Paavo Pokkinen18 days ago
I am a bit confused on Helmfile integration on Atmos. What is the use case vs Terraform provider Helmfile?
There seems to be some integration to EKS. Has anyone gotten this to work with GKE, so that clusters created on Terraform side are handed over to Helmfile side? I am not sure how to approach kubeconfig, seems like it must be written to a file somewhere? Or maybe try somehow fetching it from outputs, and putting it to ENV variable before Helmfile invocation? Example would be great. 🙂
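One pattern for GKE (a sketch, not an official Atmos integration — the cluster name, region, project, and paths below are all hypothetical) is to write the kubeconfig to a file with gcloud before helmfile runs, for example via an Atmos custom command:

```yaml
# Hypothetical atmos.yaml custom command: fetch GKE credentials into a
# throwaway kubeconfig file, then point helmfile at it via KUBECONFIG.
commands:
  - name: gke-helmfile
    description: "Fetch GKE kubeconfig, then run helmfile"
    steps:
      - KUBECONFIG=/tmp/gke-kubeconfig gcloud container clusters get-credentials my-cluster --region us-central1 --project my-project
      - KUBECONFIG=/tmp/gke-kubeconfig helmfile --file components/helmfile/helmfile.yaml apply
```

gcloud respects the KUBECONFIG environment variable when writing credentials, so the handover from the Terraform-created cluster happens entirely through that file.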
Tyler Rankin19 days ago
G’day - we are experiencing a fatal error: stack overflow when using atmos > v1.200.0. Are we up against a stack count limit? An import limit? We do make use of Go templates and functions (!terraform.state).
Seemingly all atmos commands (describe component, validate stacks, validate schema, etc.) complete successfully, with the exception of atmos describe stacks -s <stack>.
A recent change in cloudposse/utils v1.32.0 bumped atmos above v1.200.0, and now all of our remote-state stacks are failing with a generic terraform error: failed to find import. Pinning utils to v1.31.0 resolves this, which I believe uses an older atmos version < 1.200.0.
Any tips for the best path to resolution? We’d like to use the latest version of atmos. TIA!
Elon19 days ago
Hi everyone.
I'm new to atmos and there is one quirk I cannot figure out and didn't find a straight answer to in the docs.
I'm mainly working with public modules from the terraform registry, that aren't cloud posse modules.
That means they don't have the context variables: namespace, tenant, environment, stage.
What is the best practice of adding those variables to the module automatically, without causing a drift to the existing module if I pull a new version?
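One common approach (a sketch; the component and variable names below are hypothetical) is to keep the context variables at the stack level and map them into whatever inputs the public module actually declares — typically labels or tags — so the vendored module source stays untouched and upgrades cause no drift:

```yaml
# Hypothetical stack manifest: context vars live at the stack level;
# only inputs the upstream module declares (e.g. labels) are passed down.
vars:
  namespace: acme
  tenant: platform
  environment: use1
  stage: prod

components:
  terraform:
    network:   # thin wrapper around a public registry module
      vars:
        # map context into the module's own inputs instead of
        # patching the module to accept namespace/tenant/etc.
        labels:
          namespace: acme
          stage: prod
```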
Paavo Pokkinen19 days ago
Hey everyone!
I’ve been looking into Atmos couple of days now, and it looks really good for our use case. Something I am struggling with is how small I want my stacks to be split.
We’re still relatively tiny SaaS company, but as we aim to be hosted on hyperscaler marketplaces, we probably need multi-cloud strategy (GCP being main one). We also do some consulting, and might host something outside of our core product, so I’ve utilized now “namespace-tenant” split, like in atmos docs.
My name_template is currently:
"{{.vars.namespace}}-{{.vars.tenant}}-{{.vars.environment}}-{{.vars.stage}}-{{.vars.layer}}"
This leads to stack names such as:
mp-plat-gcp-non-prod-meta
mp-plat-gcp-non-prod-network
mp-plat-gcp-non-prod-cluster
mp-plat-gcp-shared-meta
So far the environment has no indication of cloud region, only the vendor. Not sure if I should have it? Environments will be only a “prod”/“non-prod” split; we’ll handle app-specific staging, dynamic preview envs, etc. with Argo CD inside the clusters.
I’m heavily utilizing Google’s excellent terraform modules for project creation, shared vpc and K8S cluster management. These are already on quite high level / opinionated, so components right now are really lightweight.
I’ve split layers like this:
• meta: things like org settings, folder creation and project creation
◦ why: my thinking is project creation should be split off from actual resource stacks, project rarely if at all changes after creation, and it can be slow to validate everything
• network -> shared VPC stuff
• cluster -> GKE cluster creation
• cluster-addons -> what necessary stuff goes into the Kube cluster, like Argo CD
As to applications, I don’t know yet. We’ll probably aim to have multi-app clusters, apps could be deployed totally outside of Atmos as well. Definitely considering Argo-CD there, which I’ve had good experiences.
Any feedback would be appreciated here! Am I having too small stacks?
Jonathan Rose20 days ago
cloudposse-terraform-components/aws-github-repository at v0.3.0 states that required_code_scanning is unsupported due to permadrift. It looks like the underlying issue has been resolved as of v6.9.0 (refs: https://github.com/integrations/terraform-provider-github/pull/2701). Are there plans to reintroduce this feature?
Kyle Avery20 days ago
Trying to implement gcp-wif with GHA, seems there may be a mistake in the docs - https://atmos.tools/cli/configuration/auth/providers#workload-identity-federation
I see this error in GH:
**Error:** parse gcp/wif spec: invalid provider config invalid auth config: spec is nil
# Initialize Providers
**Error:** failed to initialize providers: invalid provider config: provider=gcp-wif: parse gcp/wif spec: invalid provider config invalid auth config: spec is nil
Seems as though the configuration should look like this?
auth:
  providers:
    gcp-wif:
      kind: gcp/workload-identity-federation
      spec:
        project_id: my-gcp-project
        project_number: "123456789012"
        workload_identity_pool_id: github-pool
        workload_identity_provider_id: github-provider
        service_account_email: ci-sa@my-project.iam.gserviceaccount.com
Alexandre Feblot21 days ago
FYI: Error on doc example: https://atmos.tools/cli/configuration/stacks/#advanced-template-with-validation
With templating enabled, the advanced name_pattern example works fine for atmos list stacks but fails with a templating error for atmos tf plan some_component.
What works for me is this:
name_template: |-
  {{ $ns := .vars.namespace -}}
  {{- $tenant := .vars.tenant -}}
  ....
  {{- $stack_name }}
Note the removed "-" at the beginning of the first line and at the end of the last line.
Michael Dizon21 days ago
Running 1.209.0. I get this error when trying to use helmfile:
WARN helm_aws_profile_pattern is deprecated, use --identity flag instead
The config profile (core--gbl-tooling-helm) could not be found
I'm not using helm_aws_profile_pattern, but it seems to be picking it up from somewhere. Am I missing a config?
components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
  helmfile:
    base_path: "components/helmfile"
    cluster_name_template: "core-{{ .vars.stage }}-{{ .vars.environment }}-app"
    helm_aws_profile_pattern: null
    use_eks: true
    kubeconfig_path: /tmp/{stage}-{environment}-kubeconfig
erik21 days ago
Got an idea for something Atmos should do?
Instead of filling out a long, boring feature template, just submit an AI prompt.
Describe the workflow, command, or problem you want Atmos to solve.
If it resonates with us, we might just implement it.
👉️ https://github.com/cloudposse/atmos/issues
Leonardo Agueci21 days ago(edited)
Hi all, I'm testing version 1.209.0 and I think I found an issue in the authentication realm.
If I run a plan for a component:
atmos terraform plan <component> -s <stack>, credentials are stored into .config/atmos/<realm>
Instead, if I run a plan for the entire stack:
atmos terraform plan -s <stack> --all, I get the following warning:
WARN Credentials found under realm "b1fe90cf620cb71a" but current realm is "(no realm)". This typically happens after changing auth.realm in config or ATMOS_AUTH_REALM. Run 'atmos auth login' to re-authenticate under the current realm.
Then looking into the .config/atmos/ directory I can see two different folders:
ls -l atmos/
drwx------ 3 root root 4096 Mar 13 10:30 aws
drwx------ 3 root root 4096 Mar 13 10:30 b1fe90cf620cb71a
It seems that with --all, credentials are stored without the realm in the path (I actually haven't been asked to re-authenticate).
I'm using AWS SSO as the provider.
Jonathan Rose21 days ago
Trying to understand what "import" is being referenced in the error below. I tried setting logs to both debug and trace and couldn't figure it out 🤔
$ atmos describe affected
Error: failed to find import

$ atmos tf plan --affected
Error: failed to find import