33 messages
A
Alexandre Feblot12 days ago
How to handle "sensitive terraform output" ?
Hi, sorry to come back to this, but after having read https://github.com/cloudposse/atmos/blob/osterman/secrets-management/docs/prd/secrets-management.md, I don't understand how to deal with terraform sensitive outputs and pass them properly to
• other terraform components
• other helm or ansible components
The secrets-management PRD does not seem to address this, as it focuses on manually defined secrets, using the atmos cli.
I would expect a store-like mechanism that allows "connecting" stack outputs to stack inputs for sensitive information, leveraging some "safe intermediate storage".
I feel (am I wrong?) that SSM Parameter Store, for example, could be an easy win, as it supports SecureString values and is already supported as an atmos store. Setting the parameter type to SecureString when the output is marked as sensitive might be possible?
Sure, it would change the definition/scope of atmos stores, but again, there is a gap here that secrets-management does not fill.
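To make the idea concrete, here's a rough sketch using today's store syntax (the store name, stack, component, and key below are illustrative, and writing sensitive outputs to the store as SecureString automatically is the proposed behavior, not something atmos does today):

```yaml
# atmos.yaml (sketch): declare an SSM-backed store
stores:
  ssm:
    type: aws-ssm-parameter-store
    options:
      region: us-east-1

# stack config (sketch): a consumer reads a value the producer wrote to the store
components:
  terraform:
    app:
      vars:
        db_password: !store ssm plat-ue1-dev rds master_password
```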
S
Stewart11 days ago(edited)
lol, sorry...fat fingered
Z
Zack11 days ago
❓️ I have atmos creating two different workdirs for the same component using JIT 🧵
Z
Zack11 days ago
❓️ Wanted to also confirm that JIT is still not smart enough to overwrite its own workdir if the source.version has not changed.
E
erik11 days ago
We're looking for feedback on our new agent skills. @Andriy Knysh (Cloud Posse) put these together.
https://atmos.tools/changelog/ai-agent-skills
S
Sean Nguyen11 days ago(edited)
Is there guidance for using !terraform.state vs atmos.Component? I’m struggling to understand the differences based on the documentation.
This week, we uncovered that atmos.Component was breaking one of our stacks (it was causing terraform workspace to flap during plans, which ultimately caused the apply step in Spacelift to fail).
When we changed the offending catalog files to use !terraform.state syntax instead, things began working again.
Given component_A and component_B, the spacelift stacks for component_B were failing when:
• catalog/component_A/defaults.yaml -> had atmos.Component templating to fetch outputs from component_B
• component_B -> uses remote-state module to query outputs from component_A
This seems to not be an issue after we switched to !terraform.state syntax.
M
MB10 days ago
hey all! I see the following in the documentation.
https://atmos.tools/design-patterns/version-management/vendoring-components#advanced-vendor-configuration
specifically:
sources:
  # Vendor multiple versions of the same component
  - component: vpc
    source: "github.com/cloudposse/terraform-aws-vpc.git///?ref={{.Version}}"
    targets:
      - path: "vpc/{{.Version}}"
        version: "2.1.0"
      - path: "vpc/latest"
        version: "2.2.0"
Is this still supported?
Error: yaml: unmarshal errors:
  line 27: cannot unmarshal !!map into string
  line 29: cannot unmarshal !!map into string
  line 31: cannot unmarshal !!map into string
R
RB10 days ago
Is there any way to cache the deep-merged stack YAML to save time when running subsequent commands like plan/apply? Hoping to run something like
atmos describe components > deepmerge.yaml and then run atmos plan xyz -s ue1-dev --cached-deepmerge-yaml deepmerge.yaml or similar to save time.
R
RB9 days ago
What's a good way in the stack yaml to reference the AWS account id in an ARN without hardcoding it? Our devs created a new hardcoded input called var.aws_account_id in the stage mixin, but there must be a better way.
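For reference, here's roughly the kind of thing I'm imagining with the !terraform.output YAML function (a sketch only; the account-map component and account_id output names are illustrative, not necessarily present in our setup):

```yaml
# stack config (sketch): derive the account id from a component output
# instead of hardcoding it in a mixin
components:
  terraform:
    my-component:
      vars:
        aws_account_id: !terraform.output account-map account_id
```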
J
Jonathan Rose9 days ago(edited)
Anyone else seeing this warning?
│ Warning: Incomplete lock file information for providers
│
│ Due to your customized provider installation methods, Terraform was forced
│ to calculate lock file checksums locally for the following providers:
│   - hashicorp/aws
│   - hashicorp/tls
│
│ The current .terraform.lock.hcl file only includes checksums for
│ linux_amd64, so Terraform running on another platform will fail to install
│ these providers.
│
│ To calculate additional checksums for another platform, run:
│   terraform providers lock -platform=linux_amd64
│   (where linux_amd64 is the platform to generate)
Any chance it's related to this warning?
│ Warning: Version constraints inside provider configuration blocks are deprecated
│
│ on providers_override.tf.json line 16, in provider.aws:
│   16: "version": "5.100.0"
│
│ Terraform 0.13 and earlier allowed provider version constraints inside the
│ provider configuration block, but that is now deprecated and will be
│ removed in a future version of Terraform. To silence this warning, move the
│ provider version constraint into the required_providers block.
It seems to be specific to vendored components.
O
Orion7X8 days ago
👋 Hey all! We're evaluating Atmos (alongside some other tools/frameworks) for a pretty complex existing Terraform setup. We manage many deployments across cloud providers, accounts, regions, and tenancy models (multi/single-tenant). Each has a "type" with its own blueprint of Terraform code + a tfvars file per deployment. Works fine today, but only for monostates - so we are looking for something more flexible.
We'd like a pattern like this:
- A top-level stack per deployment (e.g. mt-prod-us, lt-prod-us-customer1)
- Each top-level stack only needs two things: 1) an import for its deployment type, and 2) deployment-specific vars
- Blueprints are composable - a blueprint imports layers, layers import components
Example top-level stack:
# stacks/mt/prod-us.yml
name: mt-prod-us
import:
  - catalog/blueprints/mt
vars:
  stage: prod
  region: us-east-1
  cidr_block: "1.1.1.0/24"
  rds_instance_type: "db.r5.12xlarge"
Blueprint stack:
# catalog/blueprints/mt.yml
import:
  - catalog/layers/aws/networking
  - catalog/layers/aws/compute
  - catalog/layers/aws/storage
Each layer would import some number of components, such as vpc, subnets, etc. for the networking layer.
This looks similar to the Layered Stack Configuration pattern in the docs. My main question is around var precedence: how do I set a default (e.g. cidr_block: 192.168.0.0/24) in a blueprint/layer/component, but allow a top-level stack to override it?
Two options I see:
1. Drop the blueprint layer and list all layer imports directly in each top-level stack - but this risks drift between deployments over time, so probably a no-go for us
2. Use overrides.vars in the top-level stack instead of vars - seems to work in testing, but not sure if this is an anti-pattern
Is option 2 the right approach here, or is there a better pattern for this? Thanks! 🙏
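For reference, here's roughly what I mean by option 2's alternative (a sketch using the example names from above; I haven't verified the deep-merge precedence behavior, i.e. whether plain vars in the importing stack already win over imported layer defaults):

```yaml
# catalog/layers/aws/networking.yml (sketch): layer-level default
components:
  terraform:
    vpc:
      vars:
        cidr_block: "192.168.0.0/24"

# stacks/mt/prod-us.yml (sketch): does this stack-level value override
# the imported default in the deep-merge?
import:
  - catalog/blueprints/mt
components:
  terraform:
    vpc:
      vars:
        cidr_block: "1.1.1.0/24"
```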
J
Jonathan Rose8 days ago
any insight into when chore(deps): update terraform cloudposse/repository/github to v1.1.0 by renovate[bot] · Pull Request #29 · cloudposse-terraform-components/aws-github-repository will be released?
M
MB5 days ago(edited)
hey all, any idea how vendoring should be part of the pipeline? What's your vision on it?
B
Brandon5 days ago
hi there, when taking a layered approach to stacks, how does atmos recommend handling "layer X must exist before planning layer Y"? Is it atmos workflows, should GitHub workflows determine the affected layers, or a combo of both? If there are any examples, it would be a huge help.
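For example, I'm imagining something like declaring cross-layer dependencies on components so tooling can order the plans (a sketch only; settings.depends_on exists in atmos, but the component names here are illustrative):

```yaml
components:
  terraform:
    eks-cluster:
      settings:
        depends_on:
          1:
            component: vpc
```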
T
Thomas Spear5 days ago
Hi team, we have some stacks that we want to rename. They have a typo in them while other non-terraform code has the correct spelling and we want to fix it.
The problem is: doing so seems to cause atmos to treat them as a different stack, and tries to create them fresh. We would like to avoid adding import blocks for this but we also don't have direct access to manually move the state files and so we need atmos to do it via our CI pipeline if possible because the stack name is used across multiple environments including production. Is there any way currently?
I see this page talks about renaming components, but what about renaming stacks?
R
RB5 days ago
Does atmos pro allow applying each commit like in Spacelift? Or does atmos pro only allow applying on the default branch?
R
RB5 days ago
Re: upgrading from atmos 1.168.0 to the latest version, ignore_missing_template_values (link) seems to be the method @Pablo Paez found. Is there any way to set this at a more global level than on every imported catalog?
e.g.
import:
  - path: "<path_to_atmos_manifest2>"
    context: {}
    skip_templates_processing: false
    ignore_missing_template_values: true
R
RB5 days ago(edited)
B
Brandon4 days ago(edited)
hello, wondering what happens if you're using !terraform.state and the component/stack you're querying doesn't exist yet, or hasn't been applied yet?
R
RB4 days ago
Do we have a way of doing terraform state migrations per component-stack slug yet ?
Ideally, we’d like to do this in a migrations.tf-style HCL file, but per component-stack, so migrations-redis-banana-ue1-dev.tf won’t be applied to redis-banana-ue2-prod.
R
RB3 days ago
How do you pin providers across all components ?
We ran into an issue today due to the newest https://github.com/cloudposse/terraform-provider-utils/releases/tag/v1.32.0 provider and need to pin all of our components to v1.31.0.
J
Johan3 days ago
Hi! Looking forward to using the new AI integration! I love how Atmos is evolving 🙂
Did anyone already succeed in using LiteLLM as an AI backend?
O
Orion7X2 days ago
Is there a way to get a component DAG view of all components for a stack? I'm aware of atmos describe dependents, but I'm looking for a command that operates at the stack level.
O
Orion7X2 days ago
It appears that components.terraform.base_path (in atmos.yaml) doesn't support a nested directory structure - can someone confirm? I would like to organize components, at the very least, by cloud provider: /components/terraform/aws/..., but I can't get this to work.
J
Jonathan Rose2 days ago
question, I am doing a POC for GitHub Actions | atmos and trying to understand how to get up and running quickly without the requirements noted (e.g. S3 bucket, DynamoDB table)
S
Syio2 days ago(edited)
Hello, I’m trying to understand the correct way to use workdir and source provisioning with the Terraform plan GitHub Action.
I’m currently using cloudposse/github-action-atmos-terraform-plan with my components, which have workdir + source provisioning enabled, because I want to avoid cache collisions when running multiple stacks in parallel that share the same component.
The issue I’m encountering is that the action attempts to write the Terraform plan file to:
terraform/components/<component>/
However, when workdir is enabled, Atmos provisions the component to:
.workdir/<stack>-<component>/
This causes a mismatch between where the action expects the component to exist and where Atmos actually provisions it.
For context: My component base path is terraform/components
My current workaround is to pre-create the expected component folder in the pipeline, while still running Terraform from the component workdir.
Before continuing with this approach, I wanted to ask: Is there a recommended way to use workdir + source provisioning with the plan action? Or is there a configuration option I might be missing so the action uses the Atmos workdir path instead?
J
Jonathan Rose2 days ago
Trying to understand what "import" is being referenced in the error below. I tried setting logs to both debug and trace and couldn't figure it out 🤔
$ atmos describe affected
Error
Error: failed to find import
$ atmos tf plan --affected
Error
Error: failed to find import
L
Leonardo Agueci1 day ago(edited)
Hi all, I'm testing version 1.209.0 and I think I found an issue in the authentication realm handling.
If I run a plan for a component: atmos terraform plan <component> -s <stack>, credentials are stored in .config/atmos/<realm>.
Instead, if I run a plan for the entire stack: atmos terraform plan -s <stack> --all, I get the following warning:
WARN Credentials found under realm "b1fe90cf620cb71a" but current realm is "(no realm)". This typically happens after changing auth.realm in config or ATMOS_AUTH_REALM. Run 'atmos auth login' to re-authenticate under the current realm.
Then looking into the .config/atmos/ directory I can see two different folders:
ls -l atmos/
drwx------ 3 root root 4096 Mar 13 10:30 aws
drwx------ 3 root root 4096 Mar 13 10:30 b1fe90cf620cb71a
It seems that with --all, credentials are stored without the realm in the path (I actually haven’t been asked to reauthenticate).
I'm using AWS SSO as provider.
E
erik1 day ago
Got an idea for something Atmos should do?
Instead of filling out a long, boring feature template, just submit an AI prompt.
Describe the workflow, command, or problem you want Atmos to solve.
If it resonates with us, we might just implement it.
👉️ https://github.com/cloudposse/atmos/issues
M
Michael Dizon1 day ago
Running 1.209.0. I get this error when trying to use helmfile:
WARN helm_aws_profile_pattern is deprecated, use --identity flag instead
The config profile (core--gbl-tooling-helm) could not be found
i’m not using helm_aws_profile_pattern, but it seems to be picking it up from somewhere. Am I missing a config?
components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
  helmfile:
    base_path: "components/helmfile"
    cluster_name_template: "core-{{ .vars.stage }}-{{ .vars.environment }}-app"
    helm_aws_profile_pattern: null
    use_eks: true
    kubeconfig_path: /tmp/{stage}-{environment}-kubeconfig
A
Alexandre Feblot1 day ago
FYI: Error on doc example: https://atmos.tools/cli/configuration/stacks/#advanced-template-with-validation
With templating enabled, the advanced name_pattern example works fine for atmos list stacks but fails with a templating error for atmos tf plan some_component.
What works for me is this:
name_template: |-
  {{ $ns := .vars.namespace -}}
  {{- $tenant := .vars.tenant -}}
  ....
  {{- $stack_name }}
Note the removed "-" at the beginning of the first line and at the end of the last line.
K
Kyle Averyabout 20 hours ago
Trying to implement gcp-wif with GHA, seems there may be a mistake in the docs - https://atmos.tools/cli/configuration/auth/providers#workload-identity-federation
I see this error in GH:
**Error:** parse gcp/wif spec: invalid provider config invalid auth config: spec is nil
# Initialize Providers
**Error:** failed to initialize providers: invalid provider config: provider=gcp-wif: parse gcp/wif spec: invalid provider config invalid auth config: spec is nil
Seems as though the configuration should look like this?
auth:
  providers:
    gcp-wif:
      kind: gcp/workload-identity-federation
      spec:
        project_id: my-gcp-project
        project_number: "123456789012"
        workload_identity_pool_id: github-pool
        workload_identity_provider_id: github-provider
        service_account_email: ci-sa@my-project.iam.gserviceaccount.com
J
Jonathan Roseabout 5 hours ago
cloudposse-terraform-components/aws-github-repository at v0.3.0 states that required_code_scanning is unsupported due to permadrift. It looks like the underlying issue has been resolved as of v6.9.0 (ref: https://github.com/integrations/terraform-provider-github/pull/2701). Are there plans to reintroduce this feature?