atmos
Jonathan Rose (about 4 hours ago)
Was AWS Secrets Manager removed as a supported store? (Stores Configuration | atmos)
Hassan Khan (about 5 hours ago)
Hey Team,
Quick question: I have been going over the dns-primary component. How can we create a new certificate when the existing certificate has expired, without destroying the dns-primary component? Destroying it would break the Route 53 entries as well.
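(One possible approach, sketched under assumptions: Terraform's -replace flag, passed through atmos, can recreate a single resource without touching the rest of the component. The resource address below is hypothetical; check the real one first.)

```shell
# Find the actual certificate resource address managed by dns-primary:
atmos terraform state list dns-primary -s <stack>

# Then recreate only that resource, leaving the Route 53 zone/records intact.
# "aws_acm_certificate.default" is an assumed address -- substitute the real one.
atmos terraform plan dns-primary -s <stack> -- -replace="aws_acm_certificate.default"
atmos terraform apply dns-primary -s <stack> -- -replace="aws_acm_certificate.default"
```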
Miguel Zablah (1 day ago)
Hey guys! I think I found a new bug in atmos custom commands that invoke terraform: it looks like there is an issue with how atmos sets the PATH when using a toolchain. I have created an issue explaining it a little better here:
https://github.com/cloudposse/atmos/issues/2089
RB (1 day ago)
Related question.
1. We also have the ec2-autoscale-group module vendored as a component, and it doesn't contain a component.yaml file, so atmos-component-updater doesn't pick it up. Does this mean we should create our own, or is there a way to generate this file on the fly?
2. Also, we have to manually copy the providers.tf mixin into this. Just checking if this is the "correct" way to do this.
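(If a hand-written component.yaml turns out to be the way to go, a minimal vendoring manifest might look like the sketch below. The upstream URI and version are assumptions; point them at whatever repo you actually vendor from.)

```yaml
# components/terraform/ec2-autoscale-group/component.yaml (sketch)
# Gives atmos-component-updater an upstream + version to track.
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: ec2-autoscale-group
spec:
  source:
    # Assumed upstream -- replace with the real source repo/ref.
    uri: github.com/cloudposse/terraform-aws-ec2-autoscale-group.git//?ref={{ .Version }}
    version: 0.35.0
    included_paths:
      - "**/*.tf"
```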
RB (1 day ago, edited)
What’s the correct format to vendor the path in without grabbing any unnecessary files? For example, I have this setup which omitted the component.yaml, which prevented the atmos-component-updater from bumping this dep. Are these the correct excluded_paths? How should I do this for each component, and is it possible to have a default set (perhaps in atmos itself)?
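(For illustration, a hedged sketch of a vendor entry that pulls only Terraform sources while keeping component.yaml out of the excluded set so the updater can still see it. The glob patterns here are made up, not an official default list:)

```yaml
# Sketch of one vendored-component entry (names/globs are illustrative).
spec:
  sources:
    - component: ec2-autoscale-group
      source: github.com/cloudposse/terraform-aws-ec2-autoscale-group.git//?ref={{ .Version }}
      version: 0.35.0
      included_paths:
        - "**/*.tf"
        - "**/component.yaml"   # keep this so the updater can bump the dep
      excluded_paths:
        - "**/test/**"
        - "**/docs/**"
        - "**/examples/**"
```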
Jonathan Rose (1 day ago)
I am reviewing atmos/examples/demo-vendoring at main · cloudposse/atmos and trying to understand how I can improve my vendor manifest to permit developers to side-load a vendor manifest in their stack, while still maintaining the vendor manifest that I include in my service catalog.
Miguel Zablah (2 days ago)
Hey, I have a question about atmos and Terraform workspaces: did the naming change in v1.206.2? It looks like the Terraform workspace by default no longer includes the component name and namespace. Is this intentional, or did I miss something?
Before, the calculated TF workspace was:
{tenant}-{environment}-{stage}-{component}-{namespace}
and now the calculated TF workspace is:
{tenant}-{environment}-{stage}
RB (2 days ago)
How do you use Renovate (or similar) to keep Cloud Posse vendored components up to date? Looking for something that would:
1. Set the version to the latest GitHub release
2. Run atmos vendor pull and commit any additional files
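(One possible shape for step 2 is a scheduled workflow that re-runs the vendor pull and opens a PR with any drift. This is a sketch, not a tested pipeline -- the setup action and its version are assumptions:)

```yaml
# .github/workflows/vendor-sync.yml (sketch)
name: vendor-sync
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly
  workflow_dispatch: {}
jobs:
  vendor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cloudposse/github-action-setup-atmos@v2   # assumed action/version
      - run: atmos vendor pull
      - uses: peter-evans/create-pull-request@v6        # opens a PR if files changed
        with:
          title: "chore: atmos vendor pull"
          commit-message: "chore: atmos vendor pull"
```

Renovate's regex manager could then handle step 1 by bumping the `version:` fields in the vendor manifests.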
Johan (2 days ago)
Question: I have this in my atmos.yaml:
vendor:
  base_path: "./vendor"
And these files:
vendor
├── aws
│   └── eks.yaml
└── azure
    └── aks.yaml
When I do an atmos vendor list, I get:
Component        Type             Tags   Manifest  Folder
───────────────────────────────────────────────────────────────
aks-cluster-dev  Vendor Manifest  azure  aks.yaml  components/terraform/aks-cluster-dev/atmos
eks-cluster-dev  Vendor Manifest  aws    eks.yaml  components/terraform/eks-cluster-dev/atmos
But when I try to pull one of them, it can't find it:
➜ atmos vendor pull --tags azure
Error: no YAML configuration files found in directory 'vendor'
What am I missing here?
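(A guess, not a confirmed diagnosis: atmos may expect a single entry-point manifest in the vendor base_path that imports the per-cloud files, rather than scanning subdirectories on pull. A sketch, assuming the vendor manifest supports spec.imports:)

```yaml
# vendor/vendor.yaml (sketch) -- single entry point that pulls in
# the per-cloud manifests; paths are relative to the vendor base_path.
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: vendor
spec:
  imports:
    - aws/eks.yaml
    - azure/aks.yaml
```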
Michael Dizon (3 days ago)
was there a breaking change introduced in 1.202.0 that would affect credentials?
Miguel Zablah (3 days ago)
Hey, quick question about Stores: can one use atmos auth identity to read from them instead of using roles as the docs suggest? Maybe there is an option to specify an AWS identity for the SSM store, for example, or is this not supported?
Miguel Zablah (3 days ago)
Hey guys, I think I found a bug with !terraform.state: when used inside locals it fails, specifically if we use it without setting the stack. I have created an issue for this:
https://github.com/cloudposse/atmos/issues/2080
Hassan Khan (4 days ago)
Hey guys,
I have been going over this component:
https://docs.cloudposse.com/components/library/aws/eks/karpenter-node-pool/#option-2-using-atmos-terraformstate-recommended
It mentions using Option 2 (Atmos !terraform.state) as the recommended way. Is there any specific reason not to use remote-state as the recommended way? I see that the remote-state approach will be deprecated in the newest release. Did you experience some issues with remote-state, or did you just decide to move away from remote-state entirely for all the other components as well?
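(For context, the Option 2 style being discussed looks roughly like this in a stack manifest -- the component and output names below are illustrative, not taken from the docs page:)

```yaml
# Read another component's output straight from its Terraform state
# via the YAML function, instead of a remote-state data source.
components:
  terraform:
    karpenter-node-pool:
      vars:
        # !terraform.state <component> <output> reads from the same stack;
        # an optional stack argument can be inserted between them.
        eks_cluster_id: !terraform.state eks/cluster eks_cluster_id
```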
Mike Rowehl (4 days ago, edited)
I had asked a question in #general: https://sweetops.slack.com/archives/CQCDCLA1M/p1771189798862939, but it appears this is the more proper place to ask. I just realized we're running an older version of Atmos for our actions, 1.200.0. I run a more up-to-date version locally; I can check the plan-diff with my version and see if it ignores the change, if it's likely that's the issue.
Marat Bakeev (6 days ago)
Hey guys, a question about using atmos with geodesic containers. We are currently using Leapp and want to replace it.
Am I right that having atmos installed on the host system is a requirement?
What if we want fully isolated geodesic containers, with atmos only inside them, so containers know nothing about the authentication setup of other containers? I think we tried this, but Terraform was failing to get credentials when atmos auth is configured only inside the geodesic container.
david (6 days ago)
Unsure if the person who opened this ticket is in here, but we're seeing this same issue with 1.206.1.
Prasanna (6 days ago)
@Prasanna has joined the channel
JS (6 days ago)
@JS has joined the channel
Stephen (6 days ago)
I appear to be getting the following error from atmos pro, running 1.206.1 via GitHub Actions:
ERRO Pro API Error operation=UploadAffectedStacks request="" status=403 success=false trace_id=d0dd655ec5cbc795467e1aa6248c174e error_message="" context=map[]
Error: failed to upload stacks API response error: API request failed with status 403 (trace_id: d0dd655ec5cbc795467e1aa6248c174e)
Slackbot (7 days ago)
@Salman Shaik joined #atmos. They’re also new to SweetOps.
Slackbot (7 days ago)
@Deep joined #atmos. They’re also new to SweetOps.
Bruce (7 days ago)
Hmm, so the account-settings module has been upgraded to v2. It looks like the provider config has changed from a dynamic assume_role (which pulled the role from account-map) to an empty provider. Does that mean that in order to upgrade we'll need to set up atmos profiles for our CI?
Kyle Decot (7 days ago, edited)
Hello, I'm a bit confused on how to get atmos to automatically use the appropriate auth identity when applying a stack/component.
I have two stacks: organization and staging.
First I defined all of my auth identities in atmos.yaml:
auth:
  providers:
    aws:
      kind: aws/iam-identity-center
      region: us-east-2
      start_url: XXXXXX
  identities:
    organization:
      kind: aws/permission-set
      via:
        provider: aws
      principal:
        account:
          id: "XXXXXX"
        name: developer
    staging:
      kind: aws/permission-set
      via:
        provider: aws
      principal:
        account:
          id: "XXXXXX"
        name: developer
Then in each of my stacks I specified the appropriate identity as the default. So for stacks/organization.yaml I have:
auth:
  identities:
    organization:
      default: true
and in stacks/staging.yaml I have:
auth:
  identities:
    staging:
      default: true
This, however, does not work: atmos states that there are multiple default identities when running atmos terraform apply network -s staging:
┃ Multiple default identities found. Please choose one:
┃ Press ctrl+c or esc to exit
┃ > organization
┃   staging
It seems like atmos is loading all stack files even though I'm only attempting to apply staging via the -s flag.
What am I misunderstanding here, and what is the correct way to automatically pick the correct identity for all components within a stack?
Thanks so much as always for your help
el (7 days ago, edited)
Hello again! I'm encountering some other small issues as I get up to speed with atmos again - please let me know if I should create proper GitHub issues or if dropping them in here is sufficient.
Using this guide with a minimal atmos.yaml file, I get this misleading error. The fix is adding base_path: "./" to the config - was this a default setting at some point?
edit: also looks like stacks in the atmos config needs included_paths: "**/*" to work properly
Miguel Zablah (7 days ago)
Hey guys! I think there's a bug in v1.206.0 related to config loading and profile validation.
After upgrading to 1.206.0, any atmos command (including atmos version) fails with: Error: profile not found. In my case it looks like the issue is that my default identity doesn't match any profile, so it fails.
Should I open an issue, or is there a config change that I missed?
el (8 days ago)
Hello! I'm revisiting atmos after last using it 2+ years ago, using version 1.205.1. I'm trying to figure out how to use auth.identities with AWS SSO so I'm not prompted to pick an identity each time. First, here's the relevant code in atmos.yaml:
auth:
  providers:
    aws-sso:
      kind: aws/iam-identity-center
      region: us-west-2
      start_url: <redacted>
      auto_provision_identities: true
  identities:
    staging-us/awsadministratoraccess:
      # default: true - works if I uncomment this line, but I don't want to use a default identity
      kind: aws/permission-set
      via:
        provider: aws-sso
      principal:
        name: AWSAdministratorAccess
        account:
          name: staging-us
If I run atmos terraform plan eks --stack app-uw2-staging-us --identity="staging-us/awsadministratoraccess", I get the following error and it prompts me to pick an identity: No default identity configured. Please choose an identity:. Similarly, setting auth.identity on a component within a stack does not work as expected either:
auth:
  identity: staging-us/awsadministratoraccess
Just as a sanity check, is there anything obvious that I'm missing or misunderstanding here? Thanks!
brandon (9 days ago, edited)
What's the correct way to combine atmos and a Terraform version manager like tfenv / tenv?
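(One knob that may help, sketched under assumptions: atmos lets you override the binary it shells out to, so pointing it at a version-manager shim on your PATH should be enough.)

```yaml
# atmos.yaml (sketch)
components:
  terraform:
    # atmos resolves this via PATH, so a tfenv/tenv shim named "terraform"
    # is picked up automatically; an explicit path to the shim works too.
    command: terraform
```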
RB (9 days ago)
Can this repo be archived, https://github.com/cloudposse/terraform-aws-components, in favor of the new cloudposse-terraform-components GitHub organization?
Rafael Herrero (10 days ago)
Hello team, I am trying to use the atmos helmfile command pointing to our internal cloud k8s cluster, but I cannot set the flag use_eks.
When deploying Helmfile components to non-AWS Kubernetes clusters (GKE, AKS, k3d, self-hosted), setting use_eks: false doesn't work because validation still requires AWS-specific patterns:
$ atmos helmfile diff my-component -s my-stack
Error: helm_aws_profile_pattern is required
The validation in checkHelmfileConfig() runs before stack-level configuration is loaded. It only sees the global default (use_eks: true from atmos.yaml), never stack-level overrides. Even though users set use_eks: false in their stack configuration, the validation has already failed by the time that override is processed.
Can I open a PR to accept the option to set use_eks: false, keeping the default as true so it doesn't break for users who are not explicitly using this flag? The idea is to move the AWS pattern validation from early initialization (checkHelmfileConfig()) to runtime execution (inside the if UseEKS block in ExecuteHelmfile()).
Marat Bakeev (13 days ago)
I've vibe-coded a PR that adds support for SSE-C encryption in atmos's !terraform.state lookups. Would you guys consider it? I know that AWS is burying SSE-C, but on our current project we are forced to use Hetzner Cloud, and it only supports SSE-C.
The implementation is based on how OpenTofu does it in its S3 backend, and this was something I was comfortable vibe-coding. I've tested it locally with encrypted state, and lookups worked fine. I don't think it's feasible for me to vibe-code OpenTofu's native state encryption...
https://github.com/cloudposse/atmos/pull/2060
MP (13 days ago, edited)
Hello all,
I'm having a case when using the Terraform Cloud backend (Enterprise) with components that depend on each other via !terraform.output (+ the depends_on key): the auto-generated backend.tf.json file has an incorrect structure. Instead of using terraform.cloud at the top level, it's generating terraform.backend.cloud (see the diff snippet below). This only happens when calling components in a stack that depend on the !terraform.output function.
< "backend": {            # WRONG
<   "cloud": {
<     "organization": "<org-name>",
<     "workspaces": {
<       "name": "terraform_workspace_name"   # injected with {terraform_workspace}
<     }
---
> "cloud": {               # GOOD
>   "organization": "<org-name>",
>   "workspaces": {
>     "name": "terraform_workspace_name"
After looking at the atmos code, it looks like the TF Cloud backend is not supported for outputs.
I've created a patch for this in my fork (https://github.com/gitbluf/atmos/commit/96fb649e6f0918bb3dc5960b73b3349646ddd092) and it fixes the issue.
Let me know if there is something I overlooked or misunderstood.
Thanks!
Sean (13 days ago)
Using atmos v1.205.1, I've tried to get locals working with !terraform.state, but it has been unsuccessful so far.
Marat Bakeev (14 days ago)
Hey guys... Am I right that the Atmos function !terraform.state won't work if we enable OpenTofu state encryption? https://opentofu.org/docs/language/state/encryption/#rolling-back-encryption
Are there any plans to add it? Some nervous Nellies do not want to store plaintext state files in S3 buckets...
Josh Simmonds (14 days ago, edited)
Related to my question in ☝️, but orthogonally: when converting a lot of atmos.Component calls to !terraform.state calls, it looks like !terraform.state may only honor the terraform.backend.s3.assume_role.role_arn setting and not the terraform.backend.s3.profile property. Is that correct?
MB (15 days ago)
Hey all! A question on the atmos GHA: is there a plan to support TFC for the plan and apply jobs? I see it's now tied to stored plan files.
Jonathan Rose (15 days ago)
Can !aws.account_id be used as part of a mock output? For example:
infrastructure_configuration_arn: 'arn:aws:imagebuilder:us-east-1:{{ !aws.account_id }}:infrastructure-configuration/{{ .atmos_stack }}-infra'
RB (15 days ago)
If we create another org under the stacks/orgs directory but it's the same AWS organization, is that wise? The goal is to change the namespace, as it was acting as the tenant and we removed the tenant. I recall that we were not supposed to change the namespace unless it's a different org, but I'm unsure if that's still true.
RB (15 days ago)
I want to avoid running CI/CD on sensitive stacks, like any of the root components. Is there an atmos-y way of doing this in our pipeline? Or do we need to create a separate configuration file to mark these stages, components, or stacks, check those prior to running atmos, fail if it's one of the marked ones, and proceed if it's not?
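(One possible approach, a sketch rather than an established convention: atmos's free-form settings section travels with each component, so a flag there can be read back in the pipeline instead of keeping a separate config file. The settings key below is made up, not a built-in atmos key:)

```yaml
# In the stack manifest: mark the sensitive component.
components:
  terraform:
    account:                  # illustrative root-component name
      settings:
        ci:
          protected: true     # hypothetical flag
```

The pipeline could then gate on it with something like `atmos describe component account -s <stack> --format json | jq -e '.settings.ci.protected != true'`, which exits non-zero for protected components so the job can skip or fail.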
Miguel Zablah (16 days ago)
Hey! I have a question about atmos auth: do we need to always use unique names for providers? I know atmos auth will store the providers in the keyring, but I thought it would maybe store them prefixed with the namespace or something; it looks like they're stored as-is.
This is fine, but I think we should maybe add some docs about this, because it's easy to miss hahaha.
Also, is this only for providers, or do identities also need to be unique?
Dan Hansen (16 days ago)
I spent some time trying to figure this out about 6 months ago but never could, and am revisiting now. atmos commands seem to run much, much slower when run inside an nx target subprocess. Has anyone else encountered this? I suppose I could use the new atmos profiling/tracing to try and gather diagnostics?
Matt Searle (16 days ago)
Hey, is it possible to dynamically configure the terraform state file prefix from user input? Either as an input parameter to the atmos command or as an environment variable?
Arthur (16 days ago, edited)
Hi, I’m trying to migrate to Atmos Auth from aws-teams / aws-team-roles with Leapp. My goal is to use Atmos Auth without aws-teams / aws-team-roles (only Atmos Auth with AWS SSO).
I’m stuck on how to configure tfstate-backend access for multiple permission sets across many accounts.
If I list all accounts/permission sets in the trust policy (allowed_permission_sets), I hit the IAM ACLSizePerRole limit of 4096. I’m using a centralized S3 bucket for all Terraform states.
What’s the recommended pattern to allow SSO permission sets from multiple accounts to access the tfstate role without exceeding the limit? Did I miss something in the documentation? I don’t quite understand how each permission set in each account can access the tfstate role if we don’t list them all in the trust policy of the tfstate role.
Thanks for your help!
Zack (17 days ago)
Do you guys keep updated atmos container images anywhere for the latest release that we should be using? We had been using this in some CI here and there, but it's running a pretty old version at this point. If not, no big deal - we need to make our own anyway.
Love Eklund (17 days ago)
Hey, we are using atmos and having some issues with how to handle removing a component. We use atmos describe affected in our CI to check which components to plan and apply. The problem is that if we remove a component from a stack, it doesn't seem to get destroyed; it just gets ignored.
Anyone have any good ideas on how to handle this?
brandon (20 days ago)
RB (20 days ago)
Thoughts on the following?
1. Throwing a warning when retrieving components from the archived cloudposse/terraform-aws-components repo instead of the cloudposse-terraform-components org
2. Throwing a warning when creating a component from a cloudposse/ module when a cloudposse-terraform-components equivalent exists
   a. e.g. creating a component from the cloudposse/terraform-aws-rds module instead of the cloudposse-terraform-components/aws-rds component
Mihai I (20 days ago, edited)
Question: does this feature exist? Because initing the stack and checking the YAML schema tells me otherwise:
Error: invalid component dependencies section
'components.terraform.azure/aks.dependencies' in the file 'orgs/[redacted]'
Error: exit status 1
Johan (21 days ago)
Anyone found a way to use the AWS aws_eks_cluster_auth datasource with a generated provider definition? I cannot use the awscli on the Terraform runner (Terraform Cloud..), but I still need to generate a token like so:
components:
  terraform:
    eks-addons:
      providers:
        aws:
          region: us-east-1
        kubernetes:
          host: "{{ .vars.eks_cluster_endpoint }}"
          cluster_ca_certificate: "{{ .vars.eks_cluster_ca_certificate }}"
          exec:
            api_version: "client.authentication.k8s.io/v1beta1"
            command: "aws"
            args:
              - "eks"
              - "get-token"
              - "--cluster-name"
              - "{{ .vars.eks_cluster_name }}"