Love Eklund · 14 days ago
Hey, we are using atmos and are having some issues with how to handle removing a component. We use atmos describe affected in our CI to check which components to plan and apply. The problem is that if we remove a component from a stack, it doesn't seem to get destroyed; it just gets ignored. Anyone have any good ideas on how to handle this?
Zack · 13 days ago
Do you guys keep updated atmos container images anywhere for the latest release that we should be using? We had been using this in some CI here and there, but it's running a pretty old version at this point. If not, no big deal - we need to make our own anyway.
Arthur · 13 days ago (edited)
Hi, I'm trying to migrate to Atmos Auth from aws-teams / aws-team-roles with Leapp. My goal is to use Atmos Auth without aws-teams / aws-team-roles (only Atmos Auth with AWS SSO). I'm stuck on how to configure tfstate-backend access for multiple permission sets across many accounts.
If I list all accounts/permission sets in the trust policy (allowed_permission_sets), I hit the IAM ACLSizePerRole limit of 4096. I'm using a centralized S3 bucket for all Terraform states.
What's the recommended pattern to allow SSO permission sets from multiple accounts to access the tfstate role without exceeding the limit? Did I miss something in the documentation? I don't quite understand how each permission set in each account can access the tfstate role if we don't list them all in the trust policy of the tfstate role.
Thanks for your help!
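For reference, one pattern sometimes used to avoid enumerating every SSO role in the trust policy (hedged: not confirmed as the Atmos-recommended approach, and the org ID and the TerraformAccess permission-set name below are placeholders) is to trust any principal in the AWS Organization and restrict by role-name pattern with an ArnLike condition:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "aws:PrincipalOrgID": "o-exampleorgid" },
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_TerraformAccess_*"
        }
      }
    }
  ]
}
```

Because matching is done by condition rather than by listing principals, the trust policy stays a constant size no matter how many accounts or permission sets exist.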
Matt Searle · 13 days ago
Hey, is it possible to dynamically configure the terraform state file prefix from user input? Either as an input parameter to the atmos command or as an environment variable?
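In case it helps, one possible direction (a sketch only, assuming Atmos's `!env` YAML function; the `STATE_PREFIX` variable and bucket name here are hypothetical) is to feed an environment variable into the backend configuration in a stack manifest:

```yaml
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: my-tfstate-bucket        # hypothetical bucket name
      region: us-east-1
      # Pull the state key prefix from an environment variable when the
      # stack manifest is processed
      workspace_key_prefix: !env STATE_PREFIX
```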
Dan Hansen · 12 days ago
I spent some time trying to figure this out about 6 months ago but never could, and am revisiting it now. atmos commands seem to run much, much slower when run inside an nx target subprocess. Has anyone else encountered this? I suppose I could use the new atmos profiling/tracing to try to gather diagnostics?
Miguel Zablah · 12 days ago
Hey! I have a question about atmos auth: do we always need to use unique names for providers? I know atmos auth will store the providers in the keyring, but I thought maybe it would store them prefixed by namespace or something; it looks like they're stored as-is. This is fine, but I think we should add some docs about this, because it's easy to miss hahaha
Also, is this only for providers, or do identities also need to be unique?
RB · 11 days ago
I want to avoid running CI/CD on sensitive stacks, like any of the root components. Is there an atmos-y way of doing this in our pipeline? Or do we need to create a separate configuration file to mark these stages/components/stacks, check it prior to running atmos, fail if the target is one of the ones we marked, and proceed if it's not?
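For what it's worth, a minimal sketch of the "separate configuration file" approach (all names here are hypothetical: a protected-stacks.txt file listing stacks CI must never touch, checked before invoking atmos):

```shell
#!/bin/sh
# Guard script: refuse to run on stacks listed in protected-stacks.txt.
PROTECTED_FILE="${PROTECTED_FILE:-protected-stacks.txt}"

is_protected() {
  # Exact, whole-line match against the protected list
  grep -qx -- "$1" "$PROTECTED_FILE"
}

# In CI, before running atmos (STACK is the hypothetical target stack):
#   if is_protected "$STACK"; then
#     echo "refusing to run CI/CD on protected stack: $STACK" >&2
#     exit 1
#   fi
```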
RB · 11 days ago
If we create another org under the stacks/orgs directory but it's the same AWS organization, is that wise? The goal is to change the namespace, as it was acting as the tenant and we removed the tenant. I recall that we weren't supposed to change the namespace unless it's a different org, but I'm unsure if that's still true.
Jonathan Rose · 11 days ago
Can !aws.account_id be used in atmos as part of a mock output? For example:
infrastructure_configuration_arn: 'arn:aws:imagebuilder:us-east-1:{{ !aws.account_id }}:infrastructure-configuration/{{ .atmos_stack }}-infra'
MB · 11 days ago
hey all! question on the atmos GHA. Is there a plan to support TFC for the plan and apply jobs? I see it's now tied to stored plan files.
Josh Simmonds · 11 days ago (edited)
Related to my question in ☝️ but orthogonally: when converting a lot of atmos.Component calls to !terraform.state calls, it looks like !terraform.state may only honor the terraform.backend.s3.assume_role.role_arn setting and not the terraform.backend.s3.profile property. Is that correct?
Marat Bakeev · 10 days ago
Hey guys... Am I right that the Atmos function !terraform.state won't work if we enable OpenTofu state encryption? https://opentofu.org/docs/language/state/encryption/#rolling-back-encryption
Are there any plans to add support for it? Some nervous Nellies do not want to store plaintext state files in S3 buckets...
Sean · 10 days ago
Using atmos v1.205.1, I've tried to get locals working with !terraform.state, but it has been unsuccessful so far.
MP · 9 days ago (edited)
Hello all,
I'm having a case when using the Terraform Cloud backend (Enterprise) with components that depend on each other via !terraform.output (plus the depends_on key): the auto-generated backend.tf.json file has an incorrect structure. Instead of using terraform.cloud at the top level, it's generating terraform.backend.cloud (see the diff below). This only happens when calling components in a stack that depend on the !terraform.output function.

< "backend": {                              # WRONG
<   "cloud": {
<     "organization": "<org-name>",
<     "workspaces": {
<       "name": "terraform_workspace_name"  # injected with {terraform_workspace}
<     }
---
> "cloud": {                                # GOOD
>   "organization": "<org-name>",
>   "workspaces": {
>     "name": "terraform_workspace_name"

After looking at the atmos code, it looks like the TF Cloud backend is not supported for outputs.
I've created a patch for this in my fork (https://github.com/gitbluf/atmos/commit/96fb649e6f0918bb3dc5960b73b3349646ddd092), and it fixes the issue.
Let me know if there is something I overlooked or misunderstood.
Thanks!
Marat Bakeev · 9 days ago
I've vibe-coded a PR that adds support for SSE-C encryption in atmos's !terraform.state lookups. Would you guys consider it? I know that AWS is burying SSE-C, but on our current project we are forced to use Hetzner Cloud, and it only supports SSE-C.
The implementation is based on how OpenTofu does it in its S3 backend, and this was something I was comfortable vibe-coding. I've tested it locally with encrypted state, and lookups worked fine. I don't think it's feasible for me to vibe-code OpenTofu's native state encryption, though...
https://github.com/cloudposse/atmos/pull/2060
Rafael Herrero · 6 days ago
Hello team, I am trying to use the atmos helmfile command pointing to our internal cloud k8s cluster, but I cannot set the flag use_eks. When deploying Helmfile components to non-AWS Kubernetes clusters (GKE, AKS, k3d, self-hosted), setting use_eks: false doesn't work, because validation still requires AWS-specific patterns:

$ atmos helmfile diff my-component -s my-stack
Error: helm_aws_profile_pattern is required

The validation in checkHelmfileConfig() runs before stack-level configuration is loaded. It only sees the global default (use_eks: true from atmos.yaml), never stack-level overrides. Even though users set use_eks: false in their stack configuration, the validation has already failed by the time that override is processed.
Can I open a PR to accept the option to set use_eks: false, while keeping the default as true so it doesn't break for users who are not explicitly setting this flag? The idea is to move the AWS pattern validation from early initialization (checkHelmfileConfig()) to runtime execution (inside the if UseEKS block in ExecuteHelmfile()).
RB · 5 days ago
Can this repo be archived (https://github.com/cloudposse/terraform-aws-components) in favor of the new cloudposse-terraform-components GitHub organization?
brandon · 5 days ago (edited)
What's the correct way to combine atmos and a terraform version manager like tfenv / tenv?
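One common setup (a sketch, not a confirmed recommendation) is to let the version manager own the terraform shim on PATH and point atmos at it via components.terraform.command in atmos.yaml, so tenv/tfenv resolves the version from .terraform-version as usual:

```yaml
components:
  terraform:
    # atmos shells out to this binary; with tenv/tfenv on PATH this hits the
    # shim, which picks the version from .terraform-version / .opentofu-version
    command: terraform
```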
el · 4 days ago
hello! I'm revisiting atmos after last using it 2+ years ago, using version 1.205.1. I'm trying to figure out how to use auth.identities with AWS SSO so I'm not prompted to pick an identity each time. First, here's the relevant config in atmos.yaml:

auth:
  providers:
    aws-sso:
      kind: aws/iam-identity-center
      region: us-west-2
      start_url: <redacted>
      auto_provision_identities: true
  identities:
    staging-us/awsadministratoraccess:
      # default: true - works if I uncomment this line, but I don't want to use a default identity
      kind: aws/permission-set
      via:
        provider: aws-sso
      principal:
        name: AWSAdministratorAccess
        account:
          name: staging-us

If I run atmos terraform plan eks --stack app-uw2-staging-us --identity="staging-us/awsadministratoraccess", I get the following error and it prompts me to pick an identity: No default identity configured. Please choose an identity:. Similarly, setting auth.identity on a component within a stack does not work as expected either:

auth:
  identity: staging-us/awsadministratoraccess

Just as a sanity check, is there anything obvious that I'm missing or misunderstanding here? Thanks!
Miguel Zablah · 4 days ago
Hey guys! I think there's a bug in v1.206.0 related to config loading and profile validation. After upgrading to 1.206.0, any atmos command (including atmos version) fails with: Error: profile not found
In my case, it looks like the issue is that my default identity doesn't match any profile, so it fails. Should I open an issue, or is there a config change that I missed?
el · 4 days ago (edited)
Hello again! I'm encountering some other small issues as I get up to speed with atmos again - please let me know if I should create proper GitHub issues or if dropping them in here is sufficient. Using this guide with a minimal atmos.yaml file, I get this misleading error. The fix is adding base_path: "./" to the config - was this a default setting at some point?
edit: it also looks like stacks in the atmos config needs included_paths: "**/*" to work properly
Kyle Decot · 4 days ago (edited)
Hello, I'm a bit confused on how to get atmos to automatically use the appropriate auth identity when applying a stack/component. I have two stacks: organization and staging.
First, I defined all of my auth identities in atmos.yaml:

auth:
  providers:
    aws:
      kind: aws/iam-identity-center
      region: us-east-2
      start_url: XXXXXX
  identities:
    organization:
      kind: aws/permission-set
      via:
        provider: aws
      principal:
        account:
          id: "XXXXXX"
        name: developer
    staging:
      kind: aws/permission-set
      via:
        provider: aws
      principal:
        account:
          id: "XXXXXX"
        name: developer

Then, in each of my stacks, I specified the appropriate identity as the default. For stacks/organization.yaml I have:

auth:
  identities:
    organization:
      default: true

and in stacks/staging.yaml I have:

auth:
  identities:
    staging:
      default: true

This, however, does not work, as atmos will state that there are multiple default identities when running atmos terraform apply network -s staging:

┃ Multiple default identities found. Please choose one:
┃ Press ctrl+c or esc to exit
┃ > organization
┃   staging

It seems like atmos is loading all stack files even though I'm only attempting to apply staging via the -s flag. What am I misunderstanding here, and what is the correct way to automatically pick the correct identity for all components within a stack?
Thanks so much as always for your help
Bruce · 3 days ago
hmm, so the account-settings module has been upgraded to v2. It looks like the provider config has changed from a dynamic assume_role (which pulled the role from account-map) to an empty provider. Does that mean that in order to upgrade, we'll need to set up atmos profiles for our CI?
Stephen · 3 days ago
I appear to be getting the following error from atmos pro, running 1.206.1 via GitHub Actions:

ERRO Pro API Error operation=UploadAffectedStacks request="" status=403 success=false trace_id=d0dd655ec5cbc795467e1aa6248c174e error_message="" context=map[]
# Error
**Error:** failed to upload stacks API response error: API request failed with status 403 (trace_id: d0dd655ec5cbc795467e1aa6248c174e)
david · 2 days ago
Unsure if the person who opened this ticket is in here, but we're seeing this same issue with 1.206.1.
Marat Bakeev · 2 days ago
Hey guys, a question about using atmos with geodesic containers. We are currently using Leapp and want to replace it.
Am I right that having atmos installed on the host system is a requirement?
What if we want fully isolated geodesic containers, with atmos only inside them, so that containers know nothing about the authentication setup of other containers? I think we tried this, but terraform was failing to get credentials when atmos auth is configured only inside the geodesic container.
U0AF36NB6S2 · about 7 hours ago (edited)
I had asked a question in #general (https://sweetops.slack.com/archives/CQCDCLA1M/p1771189798862939), but it appears this is the more appropriate place to ask. I just realized we're running an older version of Atmos for our actions, 1.200.0. I run a more up-to-date version locally; I can check the plan-diff with my version and see if it ignores the change, if that's likely the issue.