Hassan Khan · about 2 months ago
Hey guys,
https://atmos.tools/tutorials/azure-authentication/
Can you update the docs for this? I was going over simple device-code based authentication, and according to the docs, tenant_id, subscription_id, and location go under the provider block alongside the kind:

    auth:
      providers:
        azure-dev:
          kind: azure/device-code
          tenant_id: "xyz"
          subscription_id: "abc"
          location: "eastus"

whereas in the version of atmos I am using (1.202.1), it says they need to be under spec:

    auth:
      providers:
        azure-dev:
          kind: azure/device-code
          spec:
            tenant_id: "xyz"
            subscription_id: "abc"
            location: "eastus"

brandon · about 2 months ago
Hey all, first post. I'm trying to learn more about how atmos stacks work. You can target a list of one or more components in a stack, or you can target --all components in a stack. Does atmos "understand" the dependencies between components and apply things in dependency order? Can atmos automatically detect which other components need to be applied when another component's outputs change?
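
For context on where such relationships live: Atmos stack manifests can declare ordering hints under settings.depends_on. A minimal sketch, with illustrative component names; whether bulk operations honor this ordering, and whether output changes retrigger dependents, is exactly the question above:

    components:
      terraform:
        eks:
          settings:
            depends_on:
              1:
                component: vpc   # declare that eks depends on vpc
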
Justin · about 2 months ago (edited)
Just getting back into the swing of things after break and rolled across this announcement. I feel like I just received a bonus Christmas present from the CloudPosse team. I'm super excited to leverage this functionality and am already processing the significant improvements this will bring to my organization. Thank you to everyone who helped contribute to this release.
Mike · about 2 months ago
For values such as organization_config in the aws-account component, the documentation lists the type as any. Is there any additional documentation to figure out the structure of organization_config?

    organization_config:
      root_account:      # <- How do you know this exists in organization_config?
        name: core-root
        stage: root
        tenant: core
        tags:
          eks: false
      accounts: []       # <- or this one?

Kyle Decot · about 2 months ago
Is https://atmos.tools down for anyone else? I'm getting an S3/CloudFront error.
brandon · about 1 month ago
That was very cool: in just an hour or so of reading docs and a little bit of scripting, I created an Atmos stack on top of our existing terraform codebase, which is several dozen root modules with workspaces per env. Pretty magical to run atmos terraform plan and see everything plan cleanly without a single change to the existing terraform. This is the kind of higher-level tooling I've been missing.
brandon · about 1 month ago
One small question: in the Atmos docs for backend config, the examples with S3 use {{ .component }}. What is that exactly? Is this a special feature of the S3 backend?
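
For reference, the pattern in question looks roughly like this. A minimal sketch: the bucket name is illustrative, and the token is an Atmos Go-template expression expanded when the backend config is generated, not a feature of the S3 backend itself:

    terraform:
      backend_type: s3
      backend:
        s3:
          bucket: acme-tfstate                      # illustrative name
          workspace_key_prefix: "{{ .component }}"  # replaced by Atmos with the component name
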
erik · about 1 month ago
We rolled out conversational search on atmos.tools (via Algolia + GPT-5):
https://atmos.tools/changelog/ask-ai-search
Let us know what's working or hallucinated 🤡
Esteban Angee · about 1 month ago
@Erik Osterman (Cloud Posse) Quick question: as our implementations grow in components, build times increase significantly, since each component requires state initialization, state reads, etc. on each plan. Any recommendations for keeping builds performant as the number of components grows?
brandon · about 1 month ago
Is this a bug? I want to plan all components in a stack, so I'm running:

    atmos tf plan --all --stack <stack>

Then atmos shows a list of components and prompts me to select one. If I select one, it errors out:

    ℹ Selected component mycomponent
    Error: component mycomponent: the component argument can't be used with the multi-component (bulk operations) flags --affected, --all, --query and --components

PePe Amengual · about 1 month ago
Can I source/vendor-pull stack files? Like remote imports of some kind?
PePe Amengual · about 1 month ago
Can I pass a global var from the command line, to append to the other vars in vars: {}?
Miguel Zablah · about 1 month ago
Hey guys, quick question: if I use atmos auth and set the region there, does atmos set this region as an ENV var or something I can use in the orgs YAML files? Kind of like .atmos_component?
Miguel Zablah · about 1 month ago
I have another question about atmos auth. If I log in using atmos auth login, it will log in with the default and all, and that is cool, but what if I use 2 aws/iam-identity-center roles to run my tf? One will get the S3 bucket, since that is in another AWS account, and the other AWS role will be used to log into the dev account to do the changes.
Is this not supported? Or is there a way to set the profile at the orgs/ level?
I figure this might not be the atmos way of doing things, right? Is the atmos way to have an S3 bucket per account, and handle the permissions at the role level?
If so, that is okay, but I still might have a different scenario where logging into more than 1 account is necessary, so I would love to know if this is possible.
PS: I did check for a similar question or docs that might talk about this and did not find any, so sorry about that.
erik · about 1 month ago
> If I log in using atmos auth login, it will log in with the default and all, and that is cool, but what if I use 2 aws/iam-identity-center roles to run my tf?
You can set multiple as default, and then it will present you with a selector. However, the better way would be to use atmos profiles:
https://atmos.tools/cli/configuration/auth/providers#using-profiles-for-different-environments
Profiles are a superpower. It's how we would define identities once, then change the provider, for example switching to GitHub OIDC. That way you define identities once, and swap out the provider based on a profile.
erik · about 1 month ago
> So one will get the S3 bucket, since that is in another AWS account, and the other AWS role will be used to log into the dev account to do the changes.
I am not tracking the architecture here. We would typically treat this as 2 separate components, because it sounds like they do different things.
erik · about 1 month ago
And if they are 2 separate components, they can each use a different identity.
Chris Harden · about 1 month ago (edited)
Is there any timeline for adding support for CUE?

    // ValidateWithCue validates the data structure using the provided CUE document
    // https://cuelang.org/docs/integrations/go/#processing-cue-in-go
    func ValidateWithCue(data any, schemaName string, schemaText string) (bool, error) {
        return false, errors.New("validation using CUE is not supported yet")
    }

Chris Harden · about 1 month ago
Is it possible to create a Rego policy that checks values on an import? E.g., I have mix-ins with AWS-account-specific values, terraform.backend and vars. I want a policy that verifies that any *-dev stack must contain an import of mixin/account-a.yaml and not contain an import of mixin/account-b.yaml.
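
For what it's worth, OPA policies in Atmos are attached per component via validation settings in the stack manifest. A minimal sketch of the wiring only (the component name, policy file name, and description are illustrative; the Rego policy itself would still need access to the stack's imports to express the check above):

    components:
      terraform:
        my-component:
          settings:
            validation:
              check-account-mixins:
                schema_type: opa
                schema_path: "my-component/check-account-mixins.rego"
                description: Dev stacks must import account-a and not account-b
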
erik · about 1 month ago
Heads up: if you're still using cloudposse/packages and cloudposse/build-harness, we recommend you migrate those things to Atmos.
https://sweetops.slack.com/archives/CB3579ZM3/p1768404502671929
Nick Dunn · about 1 month ago
Hi, everyone. I'm just seeking some quick help regarding GitHub Actions & Atmos, because I feel as though I'm missing something very basic. I'm able to run Atmos using GitHub Actions thanks to your documentation. The runs appear to complete successfully without issue. However, I never see any job summary show up. Is there possibly a simple 'gotcha' somewhere that I missed? Perhaps some configuration value I need to set?
Brian · about 1 month ago (edited)
The new atmos features (such as auth, devcontainer, and toolset, along with custom commands) are starting to make it an attractive tool for general development use. Has there been any thought given to adding a mode (configurable via a repo's atmos.yaml) that isn't centered around typical devops/platform-engineering workflows (e.g., stacks and components)? This could help replace the need for a Makefile in application repos.
Angel Bermudez · about 1 month ago
Hi everyone. Maybe I'm missing something, but here we go. In a feature branch of my atmos project (using version 1.204.0), I'm attempting to integrate the atmos auth approach to authentication. I'm doing this via the service-principal provider as per the docs (https://atmos.tools/tutorials/azure-authentication#service-principal-client-credentials).
It errors out with the following:

    Initialize Providers
    Error: invalid provider kind
    ## Explanation
    unsupported provider kind: azure/service-principal

    Initialize Providers
    Error: failed to initialize providers: invalid provider config: provider=azure-auth: invalid provider kind: unsupported provider kind: azure/service-principal

    Error
    Error: failed to initialize auth manager: failed to initialize providers: invalid provider config: provider=azure-auth: invalid provider kind: unsupported provider kind: azure/service-principal

My atmos.yaml auth section:

    auth:
      providers:
        azure-auth:
          kind: azure/service-principal
          spec:
            client_id: "CIxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
            #client_secret: "" exported via environment variable
            subscription_id: "SIxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
            tenant_id: "TIxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
            location: "eastus"
      identities:
        azure-sbx:
          default: true # Default subscription
          kind: azure/subscription
          via:
            provider: azure-auth
          principal:
            subscription_id: "SIxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

The command: atmos auth login --identity azure-sbx
Leonardo Agueci · about 1 month ago
Hello again. I'm trying the latest atmos version (1.204.0), but the stack plan (and I guess also other stack-related commands) seems to not work properly:

    sh-5.1# atmos terraform plan -s ci-central-1
    Choose a component
    Press ctrl+c or esc to cancel
    > alb-api
      alb-defaults/v1
      alb/api

It is prompting me to choose a component, and moreover the listed components are not only the ones belonging to the stack (even abstract ones are displayed).
Zack · about 1 month ago
atmos terraform source list doesn't seem to be implemented yet.
Matt Searle · about 1 month ago
Hey, I'm new to atmos. I'm trying to set the template prefix for my terraform state files:

    terraform:
      backend_type: gcs
      backend:
        gcs:
          bucket: terraform-state
          prefix: "{{ .vars.environment }}/{{ .component }}"

and I'm getting a backend.tf.json of:

    {
      "terraform": {
        "backend": {
          "gcs": {
            "bucket": "interviews-terraform-state",
            "prefix": "{{ .vars.environment }}/{{ .component }}"
          }
        }
      }
    }

Any idea why the templated values aren't being filled in?
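
One thing worth checking (a hedged guess, not a confirmed diagnosis): Go-template processing of stack manifests is gated by the templating settings in atmos.yaml, and tokens pass through to the generated files literally when it is disabled:

    # atmos.yaml
    templates:
      settings:
        enabled: true   # required for {{ .vars.environment }} / {{ .component }} to be expanded
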
Zack · about 1 month ago
It looks like atmos terraform shell doesn't init or generate the backend.tf file anymore. Using version 1.203.0; this is with my atmos.yaml having those settings set to true.
Jonathan Rose · about 1 month ago (edited)
According to the "YAML in Atmos" docs, lists cannot be appended, but I have a use case where I need to append. I have not found a YAML or Go templating solution so far. I have a mixin and a stack; both define ingress_cidr_blocks, and the end result needs to be a merge of the lists. Example:

    ========
    dev.yaml
    ========
    sg/fsx/windows:
      vars:
        ingress_cidr_blocks+:
          - 172.22.46.0/23

    ======================
    mixin/fsx/windows.yaml
    ======================
    sg/fsx/windows:
      vars:
        ingress_cidr_blocks:
          - 172.20.200.0/22
          - 172.20.208.0/22
          - 172.16.200.0/22
          - 172.16.208.0/22
          - 172.25.200.0/22
          - 172.25.208.0/22
          - 172.17.10.0/24
          - 172.17.19.0/24
          - 172.17.18.0/24
          - 172.20.10.0/24
          - 172.25.10.0/24
          - 172.17.10.110/32
          - 172.16.100.0/24

When I run atmos describe component sg/fsx/windows -s dev, I am expecting that ingress_cidr_blocks contains all values from both lists.
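
One knob that may be relevant here (hedged; check the current docs for the supported values and whether it covers this case): Atmos exposes a global list merge strategy in atmos.yaml, rather than the per-key + suffix attempted in dev.yaml above:

    # atmos.yaml
    settings:
      list_merge_strategy: append   # other documented strategies: replace, merge
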
Aleks · 30 days ago (edited)
Hi! I am trying to set up atmos-pro on a repo; however, the atmos describe affected --upload command produces a JSON of 37k+ lines, and the upload fails with a 500 error:

    ERRO Pro API Error operation=UploadAffectedStacks request="" status=500 success=false trace_id=fa386ec064afff75e80146eda390c25e error_message="" context=map[]
    # Error
    failed to upload stacks API response error: API request failed with status 500 (trace_id: fa386ec064afff75e80146eda390c25e)

Is this due to the JSON output being too large, or is it something else?
Zack · 29 days ago
Is there any way in atmos to set a variable to null, not the string "null"?
Zack · 29 days ago
Is there a way with the JIT vendoring to force pulling from the remote location instead of ℹ Copying local component:...?
brandon · 29 days ago (edited)
Is this an appropriate use case for catalogs? Our app has a pile of 40 microservices, and correspondingly in TF we have 40 small root modules, one per microservice, that provision things like the service's access to its database. If we're deploying the same pile of ~40 microservices to multiple envs, and the set of services and configs is largely copy-and-paste between envs, does it make sense to create a catalog with the 40 services and include that in the stacks as a "bundle" instead?
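
That pattern would look roughly like this (a sketch; the paths and service names are illustrative):

    # stacks/catalog/microservices.yaml -- the "bundle", defined once
    components:
      terraform:
        svc-orders:
          vars:
            service_name: orders
        svc-billing:
          vars:
            service_name: billing
        # ...and so on for the remaining services

    # stacks/deploy/dev.yaml -- each env imports the bundle and overrides only what differs
    import:
      - catalog/microservices
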
Zack · 28 days ago
atmos terraform generate varfile only works on directories inside components/terraform, not with JIT vendoring 🤷
Jonathan Rose · 28 days ago
Trying to understand how atmos tf plan-diff is used. How do others use it to generate human-readable output? How is the original plan created?
Miguel Zablah · 28 days ago
Hey, is there a way to run the terraform providers lock command?

    terraform providers lock \
      -platform=windows_amd64 \
      -platform=darwin_amd64 \
      -platform=linux_amd64
    # windows_amd64: 64-bit Windows; darwin_amd64: 64-bit macOS; linux_amd64: 64-bit Linux

I used to do this:

    commands:
      - name: tf-lock
        description: Execute 'terraform lock' command for all OS platforms
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - atmos terraform "providers lock" $ATMOS_COMPONENT -s $ATMOS_STACK -- -platform=windows_amd64 -platform=darwin_amd64 -platform=linux_amd64

but this no longer works; it seems that atmos validates the cmd, and providers lock is not a valid one.
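
One possible workaround (a hedged sketch, not a confirmed fix: it bypasses the atmos terraform wrapper and assumes the component lives directly under components/terraform/<name>, so backend generation and workspace selection are not handled):

    commands:
      - name: tf-lock
        description: Run 'terraform providers lock' for all OS platforms
        arguments:
          - name: component
            description: Name of the component
        steps:
          - cd components/terraform/{{ .Arguments.component }} && terraform providers lock -platform=windows_amd64 -platform=darwin_amd64 -platform=linux_amd64
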
Miguel Zablah · 25 days ago
Hey! I have a question about atmos auth. I'm starting to use it and it's awesome, but I'm wondering if with it there is an Atmos way to authenticate in tf. I usually do something really close to the Org Hierarchy Configuration design pattern, but in orgs/acme/core/_defaults.yaml and orgs/acme/core/root/_defaults.yaml I have the S3 backend role and the providers' role to assume. This works really well, but with atmos auth and profiles it feels like maybe there is a better way to do this. Is there a design pattern to follow for tf authentication?
Leonardo Agueci · 24 days ago (edited)
Hi all, I'm trying the new version 1.205.0. I think I've found an issue with authentication (I use Atmos internal authentication with AWS SSO).
When I run a plan for the entire stack:

    atmos terraform plan --all -s core-network

I get an error reading the backend on S3:

    Error: failed to read Terraform state for component vpc in stack core-network
    in YAML function: !terraform.state vpc ".vpc_id // ""vpc"""

It seems the !terraform.state YAML function, in this scenario, is not using the internal authentication. This does not happen if I plan the component individually, and it does not happen with the !terraform.output YAML function.
Alexandre Feblot · 24 days ago (edited)
Hi, regression with atmos 1.205? The following atmos pro config (working fine with 1.204) now fails with:

    Error: failed to execute describe stacks: invalid stack manifest: template: mixins/atmos-pro.yaml:4:21: executing "mixins/atmos-pro.yaml" at <.atmos_component>: map has no entry for key "atmos_component"

It does not work better with {{ .component }}.

    plan-wf-config: &plan-wf-config
      atmos-terraform-plan.yaml:
        inputs:
          component: "{{ .atmos_component }}"
          stack: "{{ .atmos_stack }}"

    apply-wf-config: &apply-wf-config
      atmos-terraform-apply.yaml:
        inputs:
          component: "{{ .atmos_component }}"
          stack: "{{ .atmos_stack }}"
          github_environment: "{{ .atmos_stack }}"

    settings:
      pro:
        enabled: true
        pull_request:
          opened:
            workflows: *plan-wf-config
          synchronize:
            workflows: *plan-wf-config
          reopened:
            workflows: *plan-wf-config
          merged:
            workflows: *apply-wf-config

Alexandre Feblot · 24 days ago (edited)
Question about passing info from one place to another:
Let's say I have a "network" account where atmos is set up to build VPCs and subnets, and to store state in a bucket in this account. Subnets are shared with multiple accounts, including a "product" one. In "product", atmos is configured to use a state bucket in its own account. How, in this "product" atmos config, should we obtain the subnet IDs, for example?
• I suppose !terraform.state can't be used, as we have no idea here where the "network" state is stored?
• Does this leave us with the obligation to use stores?
Thanks
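
For the store route, the shape would be roughly this (a hedged sketch: the store name, region, stack, and key are illustrative, and the exact !store argument order should be checked against the docs):

    # atmos.yaml in the "product" repo: declare a store pointing at shared SSM parameters
    stores:
      network:
        type: aws-ssm-parameter-store
        options:
          region: us-east-1

    # stack manifest: read a value the "network" side previously wrote to the store
    components:
      terraform:
        product-app:
          vars:
            subnet_ids: !store network core-network vpc subnet_ids
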
PePe Amengual · 24 days ago
I think I asked this question before but in another context: how do you destroy atmos components/resources in your GitHub pipelines? From what I have seen, I will have to create a specific job to run destroy plans against my resources. I was hoping that somehow I could use the CloudPosse plan action to create a destroy plan, but I do not think describe affected can do that today.
Miguel Zablah · 23 days ago
Hey guys, I think I found a bug with !terraform.state: the YAML tag incorrectly parses string values ending with colons. I think I mentioned this in the past but never created an issue, so I created a bug report with more details here:
https://github.com/cloudposse/atmos/issues/2031
I tried using quotes to tell YAML this is a string, but it did not work 😢
Has anyone mentioned this?
Pedro · 23 days ago
Hey all, I was hoping someone could help me think through the best approach to a particular situation: how best to version our mixins and components. Let me set the stage.
We currently have a lot of "internal clients" that each have multiple environments with a lot of churn. We have an infra template with mixins that we're using to deploy internal client projects (basic structure below).

    ├── atmos.yaml                  # Main Atmos configuration
    ├── components                  # Vendored OpenTofu components
    ├── stacks
    │   ├── deploy
    │   │   ├── _defaults.yaml      # Default variables for all deploy stacks
    │   │   └── dev                 # Example deploy stack (add prod, staging, etc. here)
    │   └── mixins
    │       ├── agnostic            # Platform-agnostic mixins
    │       ├── aws                 # AWS-specific mixins
    │       └── gcp                 # GCP-specific mixins
    ├── vendor
    │   ├── agnostic.yaml           # Vendor config for platform-agnostic components
    │   ├── aws.yaml                # Vendor config for AWS components
    │   └── gcp.yaml                # Vendor config for GCP components
    └── vendor.yaml                 # Top-level vendor configuration (imports the above)

The plan is then to have a pipeline that will import this template into each internal client's repo and apply from there. This way we have:
1. A central place where we can prepare mixins, stack examples, etc. (the infra-template repo).
2. A distinct repo with vendored components and a "tweaked" stack for each internal client.
Now, the issue is that client 1 could have a prod env on a specific version of the template, while the dev env may need to be on a newer version of the template (updated components, newer or updated mixins, etc.).
I'm struggling to wrap my head around the best approach for this. Components are relatively easy, as they're versioned and that is reflected in the vendor config (we can keep a certain number of older versions), but what do you all feel would be the best approach for mixins, for instance?
Thanks in advance for any help!
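
On the component side, versioned vendoring in a vendor manifest looks roughly like this (a hedged sketch; the source, version, and target paths are illustrative). Keeping the version in the target path is one way to let different envs pin different versions side by side:

    apiVersion: atmos/v1
    kind: AtmosVendorConfig
    metadata:
      name: aws-components
    spec:
      sources:
        - component: vpc
          source: github.com/cloudposse-terraform-components/aws-vpc.git//src?ref={{ .Version }}
          version: 1.2.3
          targets:
            - components/terraform/vpc/{{ .Version }}   # versioned target directory
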
Alexandre Feblot · 23 days ago
The doc AI search is awesome, but it can't do magic: "I couldn't find any mention in the Atmos docs specifying which JFrog Artifactory repository type to use for the Artifactory store backend. The docs note that Artifactory is a supported store but don't provide repository-type guidance (!store, atmos.yaml stores → Artifactory, Using External Stores)."
Any guidance on that topic would be welcome. Thanks
Zack · 23 days ago
Wanted to confirm that retry doesn't actually work for workflows?
Zack · 22 days ago
Believe there's a bug in this ATMOS_GIT_ROOT_BASEPATH environment variable.
erik · 21 days ago
> I don't have any go env installed, but I should be able to download your CI build
@Alexandre Feblot great idea! I like it so much, we're adding native support for this: https://github.com/cloudposse/atmos/pull/2040
Atmos already supports running any version:

    atmos ... --use-version 1.204.0
    # e.g.
    atmos terraform plan ... --use-version 1.204.0

So we should support running any artifact attached to a commit or PR:

    # download and run the artifact attached to PR #2040
    atmos ... --use-version 2040

Or a SHA:

    # download and run the artifact attached to a commit SHA
    atmos ... --use-version abc1234

Johan · 21 days ago
Anyone found a way to use the AWS aws_eks_cluster_auth datasource with a generated provider definition? I cannot use the awscli on the Terraform runner (Terraform Cloud...), but I still need to generate a token like so:

    components:
      terraform:
        eks-addons:
          providers:
            aws:
              region: us-east-1
            kubernetes:
              host: "{{ .vars.eks_cluster_endpoint }}"
              cluster_ca_certificate: "{{ .vars.eks_cluster_ca_certificate }}"
              exec:
                api_version: "client.authentication.k8s.io/v1beta1"
                command: "aws"
                args:
                  - "eks"
                  - "get-token"
                  - "--cluster-name"
                  - "{{ .vars.eks_cluster_name }}"

Mihai I · 21 days ago (edited)
Question: does this feature exist? Because initing the stack and checking the YAML schema tells me otherwise:

    Error
    Error: invalid component dependencies section
    ## Explanation
    'components.terraform.azure/aks.dependencies' in the file 'orgs/[redacted]'

    Error
    Error: exit status 1

RB · 20 days ago
Thoughts on the following?
1. Throwing a warning when retrieving components from the archived cloudposse/terraform-aws-modules instead of the cloudposse-terraform-components org.
2. Throwing a warning when creating a component from a cloudposse/ module when a cloudposse-terraform-components equivalent exists.
   a. E.g., creating a component from the cloudposse/terraform-aws-rds module instead of the cloudposse-terraform-components/aws-rds component.
brandon · 20 days ago