atmos
Orion7X · about 3 hours ago
Is there a way to get a component DAG view of all components for a stack? I'm aware of atmos describe dependents, but I'm looking for a command that operates at the stack level.
Johan · about 6 hours ago
Hi! Looking forward to using the new AI integration! I love how Atmos is evolving 🙂
Did anyone already succeed in using LiteLLM as an AI backend?
RB · about 15 hours ago
How do you pin providers across all components?
We ran into an issue today due to the newest https://github.com/cloudposse/terraform-provider-utils/releases/tag/v1.32.0 provider and need to pin across all of our components to v1.31.0.
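For what it's worth, one possible approach (a sketch, not a definitive answer) is Atmos's terraform providers section, which generates a providers_override.tf.json per component; imported as a global mixin it would pin every component that inherits it. The file path and the assumption that a global mixin is imported everywhere are hypothetical:

```yaml
# stacks/mixins/provider-pins.yaml (hypothetical path, imported by all stacks)
terraform:
  providers:
    # pin cloudposse/utils everywhere until the v1.32.0 issue is resolved
    utils:
      version: "1.31.0"
```

Note this emits the version inside a provider block, which newer Terraform flags as deprecated, so it may be a stopgap rather than a long-term fix.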
RB · 1 day ago
Do we have a way of doing terraform state migrations per component-stack slug yet?
Ideally, we'd like to do this in a migrations.tf style HCL, but do it per component-stack, so migrations-redis-banana-ue1-dev.tf won't be applied to redis-banana-ue2-prod.
Brandon · 2 days ago (edited)
hello, wondering what happens if you're using !terraform.state and the component/stack you're querying for doesn't exist yet, or hasn't been applied yet?
RB · 3 days ago (edited)
RB · 3 days ago
Re: upgrading from atmos 1.168.0 to latest version, the ignore_missing_template_values (link) seems to be the method @Pablo Paez found. Is there any way to set this at a more global level than on every imported catalog? e.g.
import:
  - path: "<path_to_atmos_manifest2>"
    context: {}
    skip_templates_processing: false
    ignore_missing_template_values: true
RB · 3 days ago
Does atmos pro allow applying each commit like in spacelift? Or does atmos pro only allow applying on the default branch?
Thomas Spear · 3 days ago
Hi team, we have some stacks that we want to rename. They have a typo in them, while other non-terraform code has the correct spelling, and we want to fix it.
The problem is: doing so seems to cause atmos to treat them as a different stack, and it tries to create them fresh. We would like to avoid adding import blocks for this, but we also don't have direct access to manually move the state files, so we need atmos to do it via our CI pipeline if possible, because the stack name is used across multiple environments including production. Is there any way currently?
I see this page talks about renaming components, but what about renaming stacks?
Brandon · 3 days ago
hi there, when taking a layered approach to stacks, how does atmos recommend dealing with "layer x must exist before planning layer y"? Is it atmos workflows, should github workflows determine the affected layers, or a combo of both? If there are any examples, it would be a huge help.
MB · 3 days ago (edited)
hey all, any idea how vendoring should be part of the pipeline? What's your vision on it?
Jonathan Rose · 6 days ago
any insight into when chore(deps): update terraform cloudposse/repository/github to v1.1.0 by renovate[bot] · Pull Request #29 · cloudposse-terraform-components/aws-github-repository will be released?
Orion7X · 6 days ago
👋 Hey all! We're evaluating Atmos (alongside some other tools/frameworks) for a pretty complex existing Terraform setup. We manage many deployments across cloud providers, accounts, regions, and tenancy models (multi/single-tenant). Each has a "type" with its own blueprint of Terraform code + a tfvars file per deployment. Works fine today, but only for monostates - so we are looking for something more flexible.
We'd like a pattern like this:
- A top-level stack per deployment (e.g. mt-prod-us, lt-prod-us-customer1)
- Each top-level stack only needs two things: 1) an import for its deployment type, and 2) deployment-specific vars
- Blueprints are composable - a blueprint imports layers, layers import components
Example top-level stack:
# stacks/mt/prod-us.yml
name: mt-prod-us
import:
  - catalog/blueprints/mt
vars:
  stage: prod
  region: us-east-1
  cidr_block: "1.1.1.0/24"
  rds_instance_type: "db.r5.12xlarge"
Blueprint stack:
# catalog/blueprints/mt.yml
import:
  - catalog/layers/aws/networking
  - catalog/layers/aws/compute
  - catalog/layers/aws/storage
Each Layer would import some number of Components, such as vpc, subnets, etc. for the networking layer.
This looks similar to the Layered Stack Configuration pattern in the docs. My main question is around var precedence: how do I set a default (e.g. cidr_block: 192.168.0.0/24) in a blueprint/layer/component, but allow a top-level stack to override it?
Two options I see:
1. Drop the blueprint layer and list all layer imports directly in each top-level stack - but this risks drift between deployments over time, so probably a no-go for us
2. Use overrides.vars in the top-level stack instead of vars - seems to work in testing, but not sure if this is an anti-pattern
Is option 2 the right approach here, or is there a better pattern for this? Thanks! 🙏
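A possible sketch for the precedence question, assuming Atmos's usual deep-merge behavior where the importing (top-level) manifest wins over imported defaults; names and paths follow the example in the question above:

```yaml
# catalog/layers/aws/networking.yml -- the layer ships a default
components:
  terraform:
    vpc:
      vars:
        cidr_block: "192.168.0.0/24"

# stacks/mt/prod-us.yml -- the top-level stack overrides it on deep-merge
import:
  - catalog/blueprints/mt
components:
  terraform:
    vpc:
      vars:
        cidr_block: "1.1.1.0/24"
```

Scoping the override to the component (rather than only setting global vars) avoids the case where a component-level default in a layer outranks a global var set in the top-level stack.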
Jonathan Rose · 7 days ago (edited)
Anyone else seeing this warning?
│ Warning: Incomplete lock file information for providers
│
│ Due to your customized provider installation methods, Terraform was forced
│ to calculate lock file checksums locally for the following providers:
│   - hashicorp/aws
│   - hashicorp/tls
│
│ The current .terraform.lock.hcl file only includes checksums for
│ linux_amd64, so Terraform running on another platform will fail to install
│ these providers.
│
│ To calculate additional checksums for another platform, run:
│   terraform providers lock -platform=linux_amd64
│ (where linux_amd64 is the platform to generate)
Any chance it's related to this warning?
│ Warning: Version constraints inside provider configuration blocks are deprecated
│
│ on providers_override.tf.json line 16, in provider.aws:
│   16:         "version": "5.100.0"
│
│ Terraform 0.13 and earlier allowed provider version constraints inside the
│ provider configuration block, but that is now deprecated and will be
│ removed in a future version of Terraform. To silence this warning, move the
│ provider version constraint into the required_providers block.
It seems to be specific to vendored components
RB · 7 days ago
What's a good way in the stack yaml to reference the aws account id in an arn without hardcoding it? Our devs created a new hardcoded input called var.aws_account_id in the stage mixin, but there must be a better way.
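A hedged sketch of one alternative, assuming Go-template processing is enabled in atmos.yaml and that some component already exposes the account id as an output (the account component and account_id output below are hypothetical placeholders):

```yaml
components:
  terraform:
    my-service:
      vars:
        # fetch the account id from another component's outputs at render time;
        # substitute whichever component actually knows it in your stacks
        deploy_role_arn: 'arn:aws:iam::{{ (atmos.Component "account" .stack).outputs.account_id }}:role/deploy'
```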
RB · 7 days ago
Is there any way to cache the deep-merged stack yaml to save time when running subsequent commands like plan/apply? Hoping to run something like atmos describe components > deepmerge.yaml and then run atmos plan xyz -s ue1-dev --cached-deepmerge-yaml deepmerge.yaml or similar to save time.
MB · 8 days ago
hey all! I see the following in the documentation:
https://atmos.tools/design-patterns/version-management/vendoring-components#advanced-vendor-configuration
specifically:
sources:
  # Vendor multiple versions of the same component
  - component: vpc
    source: "github.com/cloudposse/terraform-aws-vpc.git///?ref={{.Version}}"
    targets:
      - path: "vpc/{{.Version}}"
        version: "2.1.0"
      - path: "vpc/latest"
        version: "2.2.0"
Is this still supported?
Error: yaml: unmarshal errors:
  line 27: cannot unmarshal !!map into string
  line 29: cannot unmarshal !!map into string
  line 31: cannot unmarshal !!map into string
Sean Nguyen · 9 days ago (edited)
Is there guidance for using !terraform.state vs atmos.Component? I'm struggling to understand the differences based on the documentation.
This week, we uncovered that atmos.Component was breaking one of our stacks (it was causing terraform workspace to flap during plans, which ultimately caused the apply step in Spacelift to fail).
When we changed the offending catalog files to use the !terraform.state syntax instead, things began working again.
Given component_A and component_B, the spacelift stacks for component_B were failing when:
• catalog/component_A/defaults.yaml -> had atmos.Component templating to fetch outputs from component_B
• component_B -> uses the remote-state module to query outputs from component_A
This seems to not be an issue after we switched to the !terraform.state syntax.
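To make the two syntaxes concrete, a minimal hedged sketch (component and output names are invented): atmos.Component is a Go-template function that resolves another component's outputs, while !terraform.state is a YAML function that reads the value straight from the state backend:

```yaml
vars:
  # template function -- evaluated during template processing
  vpc_id_a: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
  # YAML function -- read directly from the backend state; omitting the
  # stack argument uses the current stack
  vpc_id_b: !terraform.state vpc vpc_id
```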
erik · 9 days ago
We're looking for feedback on our new agent skills. @Andriy Knysh (Cloud Posse) put these together.
https://atmos.tools/changelog/ai-agent-skills
Zack · 9 days ago
❓️ Wanted to also confirm that JIT is still not smart enough to overwrite its own workdir if the source.version has not changed
Zack · 9 days ago
❓️ I have atmos creating two different workdirs for the same component using JIT 🧵
Stewart · 9 days ago (edited)
lol, sorry...fat fingered
Alexandre Feblot · 10 days ago
How to handle "sensitive terraform output"?
Hi, sorry to come back to this, but after having read https://github.com/cloudposse/atmos/blob/osterman/secrets-management/docs/prd/secrets-management.md, I don't understand how to deal with terraform sensitive outputs and pass them properly to
• other terraform components
• other helm or ansible components
The secrets-management PRD does not seem to address this, as it focuses on manually defined secrets, using the atmos cli.
I would expect a store-like mechanism that allows "connecting" stack outputs to stack inputs for sensitive information, leveraging some "safe intermediate storage".
I feel like (am I wrong?) SSM Parameter Store, for example, could be an easy win, as it supports SecureStrings and is supported as an atmos store already. Setting the parameter type to SecureString if the output is marked as sensitive might be possible?
Sure, it would change the definition/scope of atmos stores, but again, there is a gap here that secrets-management does not fill.
Bruce · 14 days ago
when migrating to account-map, if a component has multiple aws providers with aliases but uses account-map/iam-roles to do role assumption... how should we handle auth?
for example: https://github.com/cloudposse-terraform-components/aws-tgw-spoke/blob/main/src/provider-hub.tf#L8
Ryan Johnson · 14 days ago
Not sure if anyone else mentioned this, but it would be really nice if the !store function could tell you what key it can't find in your store. When you are dealing with multiple keys, figuring out which is missing is cumbersome.
Bart Palmowski · 14 days ago
Hi, I'm getting the following error:
$ atmos terraform apply foo/bar -s quux
cannot override two slices with different type ([]interface {}, map[string]interface {})
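For what it's worth, this error usually means two stack manifests define the same key with incompatible shapes, so the deep-merge cannot reconcile them. A hypothetical illustration (key names invented):

```yaml
# one imported manifest defines the key as a list...
vars:
  subnets:
    - "10.0.0.0/24"
    - "10.0.1.0/24"

# ...while another defines the same key as a map, which can trigger
# "cannot override two slices with different type"
vars:
  subnets:
    public: "10.0.0.0/24"
    private: "10.0.1.0/24"
```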
brandonvin · 14 days ago
Hey folks! I'd be really happy to use this command:
atmos terraform apply --stack a-stack --all
There was a github issue and PR to fix, I believe. I recently updated atmos and it seems like the bug fix hasn't been released
Jonathan Rose · 15 days ago (edited)
For those that vendor their service catalog with atmos, does the following workflow align with what you do?
atmos vendor pull        # download IAC service catalog
cd atmos                 # this is where the service catalog is pulled
atmos vendor pull        # download dependencies of the service catalog
atmos toolchain install  # install binaries
atmos terraform plan --all
I am starting to abstract docker from the service catalog so IAC developers can use atmos locally
Girish Maddineni · 16 days ago (edited)
Anyone experience this issue? We are currently testing a new feature of describe-affected in atmos version 1.206.0-rc.4, coming from version 1.198.0, and experiencing errors with the base path. Tried to follow the documentation on the updated base-path behavior in newer versions, https://atmos.tools/changelog/base-path-behavior-change, and also tried the following env variables per documentation: ATMOS_CLI_CONFIG_PATH: "./config", ATMOS_BASE_PATH: ".." and ATMOS_BASE_PATH: "". Running in GitHub Workflows, and atmos.yaml is located in a subdirectory called config in the git root directory (which is git-iac-repo in the error below).
Relative paths for the components and stacks from the runner are:
Components: /__w/git-iac-repo/git-iac-repo/components/terraform
Stacks: /__w/git-iac-repo/git-iac-repo/stacks
@PePe Amengual
# Error
**Error:** directory for Atmos stacks does not exist
## Hints
:bulb: Stacks directory not found: /__w/git-iac-repo/stacks
Error appears
Run if [[ -f "/__w/git-iac-repo/git-iac-repo/components/terraform/applicationgateway/stg-wus3-appgateway_be-aa699e0682be123e8c1a5beb008662e6b476ad12.planfile" ]]; then
Run if [[ "false" == "true" ]]; then
/__w/git-iac-repo/git-iac-repo/metadata/step-summary-stg-wus3-appgateway_be.md found
Drift detection mode disabled
Run echo "rand=$(openssl rand -hex 5)" >> "$GITHUB_OUTPUT"
Jonathan Rose · 16 days ago
Anyone else seeing similar issues with the toolchain?
$ atmos toolchain install
✗ Install failed aquasecurity/trivy@0.69.1: HTTP request failed: download failed
⡿ ████████████░░░░░░░░░░░░░░░░░░░░░░░ 33% ✗ Install failed bridgecrewio/checkov@3.2.505: HTTP request failed: download failed
⡿ ███████████████████████░░░░░░░░░░░░ 67% ✓ Installed hashicorp/terraform@1.11.4
⡿ ███████████████████████████████████ 100%
✗ Installed 1 tools, failed 2
Here is my .tool-versions file:
aquasecurity/trivy 0.69.1
bridgecrewio/checkov 3.2.505
hashicorp/terraform 1.11.4
PePe Amengual · 17 days ago
is there a concept of custom yaml functions? where I can do some crazy stuff to use it to create strings (inputs for tf) dynamically
Zack · 17 days ago
❓️ Can you JIT a local directory? Say this is for a local example directory of a module. I'd like to include an atmos example.
Aaron · 17 days ago
In the docs I see that we can use custom commands to override an existing Terraform command, but this doesn't seem to work for me. Am I doing something wrong?
https://atmos.tools/cli/configuration/commands#override-an-existing-terraform-command
In my atmos.yaml:
commands:
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: plan
        description: This command executes 'terraform plan' on terraform components, passing in the current commit hash
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        steps:
          - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }} -- -var "commit=$(git rev-parse HEAD)"
It works if I don't attempt to override terraform plan -- if I change the command name to foo then I can run atmos terraform foo $component -s $stack and the commit gets passed in like I want. But I can't figure out how to override terraform plan.
PePe Amengual · 19 days ago (edited)
.locals do not support go templates?
Jonathan Rose · 21 days ago
Was AWS Secrets Manager removed as a supported store? Stores Configuration | atmos
Hassan Khan · 21 days ago
Hey Team,
quick question: I have been going over the dns-primary component. How can we create a new certificate, if an already existing certificate has expired, without destroying the dns-primary component? Destroying it would mess up the route53 entries as well.
Miguel Zablah · 22 days ago
Hey guys! I think I found a new bug in the atmos custom commands that invoke terraform. It looks like there is an issue with how atmos sets the PATH when using toolchain.
I have created an issue explaining it a little bit better here:
https://github.com/cloudposse/atmos/issues/2089
RB · 22 days ago
Related question.
1. We also have the ec2-autoscale-group module vendored as a component, and it doesn't contain a component.yaml file, so atmos-component-updater doesn't pick it up. Does this mean we should create our own, or is there a way to generate this file on the fly?
2. Also, we have to manually copy the providers.tf mixin into this. Just checking if this is the "correct" way to do this.
RB · 22 days ago (edited)
What’s the correct format to vendor the path in without grabbing any unnecessary files? For example, I have this setup which omitted the
component.yaml which prevented the atmos-component-updater from bumping this dep. Are these the correct excluded_paths ? How should I do this for each component and is it possible to have a default set (perhaps in atmos itself) ?J
Jonathan Rose · 22 days ago
I am reviewing atmos/examples/demo-vendoring at main · cloudposse/atmos and trying to understand how I can improve my vendor manifest to permit developers to side-load a vendor manifest in their stack, while still maintaining the vendor manifest that I include in my service catalog
Miguel Zablah · 23 days ago
Hey I have a question about atmos and terraform workspaces. Did the naming change in v1.206.2? It looks like now the terraform workspace by default is not including the component name and namespace. Is this intentional, or did I miss something?
So before, the calculated tf workspace would be like this:
{tenant}-{environment}-{stage}-{component}-{namespace}
and now the calculated tf workspace is:
{tenant}-{environment}-{stage}
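If pinning the old naming per component is acceptable, Atmos component metadata supports an explicit workspace pattern; a sketch, with my-component as a placeholder name and the old ordering from the question:

```yaml
components:
  terraform:
    my-component:   # hypothetical component name
      metadata:
        # pin the terraform workspace naming explicitly instead of
        # relying on the default derived from the stack name pattern
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}-{component}-{namespace}"
```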
RB · 23 days ago
How do you use renovate (or similar) to keep cloudposse vendored components up to date? Looking for something that would
1. Set the version to be the latest github-release
2. Run atmos vendor pull and commit any additional files
Johan · 23 days ago
Question: I have this in my atmos.yaml:
vendor:
  base_path: "./vendor"
And these files:
vendor
├── aws
│   └── eks.yaml
└── azure
    └── aks.yaml
When I do a atmos vendor list, I get:
  Component        Type             Tags   Manifest  Folder
  ───────────────────────────────────────────────────────────────────────────────────────────────
  aks-cluster-dev  Vendor Manifest  azure  aks.yaml  components/terraform/aks-cluster-dev/atmos
  eks-cluster-dev  Vendor Manifest  aws    eks.yaml  components/terraform/eks-cluster-dev/atmos
But, when I try to pull one of them, it can't find it:
➜ atmos vendor pull --tags azure
Error
Error: no YAML configuration files found in directory 'vendor'
What am I missing here?
Michael Dizon · 24 days ago
was there a breaking change introduced in 1.202.0 that would affect credentials?
Miguel Zablah · 24 days ago
Hey, quick question about Stores: can one use atmos auth identity to read from them instead of using roles as the docs suggest? Maybe there is an option to specify an AWS identity for the SSM store, for example, or is this not supported?
Miguel Zablah · 24 days ago
Hey guys, I think I found a bug with !terraform.state when used inside locals. It fails specifically if we use it without setting the stack.
I have created an issue for this:
https://github.com/cloudposse/atmos/issues/2080
Hassan Khan · 24 days ago
Hey guys,
I have been going over this component:
https://docs.cloudposse.com/components/library/aws/eks/karpenter-node-pool/#option-2-using-atmos-terraformstate-recommended
It mentions to use Option 2, Using Atmos !terraform.state, as the recommended way. Is there any specific reason not to use remote-state as the recommended way? I see that the remote-state way will be deprecated in the newest release. Did you guys experience some issues with remote-state, or did you just decide to move away from remote-state entirely for all the other components as well?
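For reference, a minimal sketch of the recommended syntax, assuming a component named eks/cluster with a hypothetical eks_cluster_id output:

```yaml
components:
  terraform:
    karpenter-node-pool:
      vars:
        # !terraform.state <component> <output> -- omitting the stack
        # argument reads from the current stack
        eks_cluster_id: !terraform.state eks/cluster eks_cluster_id
```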
Mike Rowehl · 25 days ago (edited)
I had asked a question in #general: https://sweetops.slack.com/archives/CQCDCLA1M/p1771189798862939 But it appears this is the more proper place to ask. I just realized we're running an older version of Atmos for our actions, 1.200.0. I run a more updated version locally; I can check the plan-diff with my version and see if it ignores the change, if it's likely that's the issue.
Marat Bakeev · 27 days ago
Hey guys, a question about using atmos with geodesic containers. We currently are using Leapp, and want to replace it.
Am I right that having atmos installed in the host system is a requirement?
What if we want to have fully isolated geodesic containers, and to only have atmos inside of them? So containers know nothing about the authentication setup of other containers? I think we tried this, but terraform was failing to get credentials if atmos auth is configured only inside the geodesic container.