Ian Cornett (5 months ago)
I'm trying to use !store with SSM and it's generally working. My team notices a ton of extra null values that are being stashed under our various component paths. For example, the values I'm storing for lambda should be:
hooks:
  store-outputs:
    events:
      - after-terraform-apply
    command: store
    name: dev/ssm
    outputs:
      lambda_function_name: .lambda_function_name
      lambda_function_arn: .lambda_function_arn
      lambda_function_role_arn: .lambda_function_role_arn
      lambda_function_url: .lambda_function_url
      sns_topic_arn: .sns_topic_arn
      lambda_ssm_parameter_name: .lambda_ssm_parameter_name
      kms_key_id: .kms_key_id
      kms_key_alias: .kms_key_alias
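Note: a hook like this assumes a matching store is defined in atmos.yaml. A minimal sketch of that shape (the store definition pattern appears later in this thread; the region here is a placeholder):
stores:
  dev/ssm:
    type: aws-ssm-parameter-store
    options:
      region: us-east-1  # placeholder; use the region your parameters live in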
Ed (5 months ago)
Hi all,
I'm finding that metadata.locked: true and metadata.enabled: false do not prevent the describe affected command from including components. Is that expected behaviour?
For more information, I'm using atmos v1.192.0 and running the following command:
atmos describe affected --verbose=true --repo-path=base-ref --include-dependents=true --include-settings=true --exclude-locked=true >> foo.json
My base-ref is just a clone of my repo one commit behind my main (the commit involves changing a var in a single stack manifest).
In my stack I have:
components:
  terraform:
    federated_identity_credentials:
      metadata:
        enabled: false
        locked: true
        component: federated_identity_credentials
But I find that in the output of describe affected I get:
{
  "component": "federated_identity_credential",
  "component_type": "terraform",
  "component_path": "components/terraform/federated_identity_credential",
  "stack": "example-default-dev",
  "stack_slug": "example-default-dev-federated_identity_credential",
  "affected": "stack.vars",
  "affected_all": [
    "stack.vars"
  ],
  "dependents": [],
  "included_in_dependents": true,
  "settings": {
    "depends_on": {
      "0": {
        "component": "resource_group"
      },
      "1": {
        "component": "user_assigned_identity"
      },
      "2": {
        "component": "application"
      }
    },
    "github": {
      "actions_enabled": true
    },
    "integrations": {
      "github": {
        "gitops": {
          "artifact-storage": {
            ...
          },
          "infracost-enabled": false,
          "matrix": {
            "group-by": ".stack_slug | split(\"-\") | [.[0], .[2]] | join(\"-\")",
            "sort-by": ".stack_slug"
          },
          "terraform-version": "1.9.8"
        }
      }
    }
  }
},
Which shouldn't be there, right? In any case, it ends up being included in plans and apply jobs, which is not what I'm after! 🙂
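Note: one quick way to confirm what Atmos actually resolves for the component's metadata in that stack is describe component; a sketch built from the names shown above:
atmos describe component federated_identity_credentials -s example-default-dev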
Duffy Gillman (5 months ago)
I have attempted a basic setup of !store functionality and am not having luck getting things written to the store for later retrieval. I've tried a stripped-down configuration:
1. In atmos.yaml, I have a store defined:
stores:
  ssm:
    type: aws-ssm-parameter-store
    options:
      region: us-west-2
2. In my vpc component I have defined a hook:
components:
  terraform:
    vpc:
      metadata:
        component: vpc
      vars:
        ipv4_primary_cidr_block: "10.0.0.0/16"
        assign_generated_ipv6_cidr_block: false
      hooks:
        store-outputs:
          events:
            - after-terraform-apply
          command: store
          name: ssm
          outputs:
            vpc_cidr_block: .vpc_cidr_block
            vpc_id: .vpc_id
I'm running with AdministratorAccess, so I have not defined read/write roles. When I apply, nothing ends up in Parameter Store. Am I missing another piece to the puzzle?
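Note: for what it's worth, after-terraform-apply hooks fire when the apply runs through Atmos itself, not through a bare terraform apply; a usage sketch with a hypothetical stack name:
atmos terraform apply vpc -s plat-use1-dev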
PePe Amengual (5 months ago)
How do you guys deploy manifests in k8s with Atmos? Or do you do that in a pipeline?
Ed (5 months ago, edited)
Hi guys,
Is it possible to "scope" variables to a subset of components without them ending up in the top-level stack manifest vars block?
For context, I have a decent number of components in a stack which share a tag input, so obviously I'd like to specify this once. The issue is that it's a big stack, so if I put it in the stack vars and change it, every component will run terraform plan/apply. Is there a middle ground or some pattern to avoid this? I guess I could put all the components into a single component catalog configuration, but then I'd lose the ability to configure them separately.
Cheers!
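Note: one common Atmos pattern for this (a sketch; all names here are hypothetical) is to hang the shared value on an abstract component and have only the relevant components inherit it, so it never lands in the stack-level vars block:
components:
  terraform:
    tagged/defaults:
      metadata:
        type: abstract
      vars:
        tags:
          team: platform  # hypothetical shared tag
    my-component:
      metadata:
        inherits:
          - tagged/defaults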
PePe Amengual (5 months ago)
So I'm using helm/helmfile with Atmos now and I can deploy some charts, BUT some other charts have input vars like environment, namespace, etc., and since that is how our name_pattern works, we need to use them. So how can I avoid those variables being passed to helmfile?
Duffy Gillman (5 months ago)
Is there a handy way to concatenate or interpolate outputs from two !store calls?
Nayeem Mohammed (5 months ago)
Hey guys, looking to get some help with this Terraform module: https://github.com/cloudposse/terraform-aws-codebuild/tree/main
I am creating CodeBuild projects using Atmos and the above module. It's creating env vars by default which I have not defined, and I'm unable to exempt them. Any ideas?
The env vars that it currently adds:
  AWS_REGION = us-east-1 (PLAINTEXT)
  AWS_ACCOUNT_ID = 11111111 (PLAINTEXT)
  IMAGE_REPO_NAME = UNSET (PLAINTEXT)
  IMAGE_TAG = latest (PLAINTEXT)
I want to exempt the IMAGE_REPO_NAME and IMAGE_TAG variables.
Love Eklund (5 months ago)
Hey!
I have a question about list merging and reusing components.
Let's say I have a component for creating a container job (Cloud Run job, Container Apps job, etc.). I might have an abstract component that defines a bunch of env variables and also a default command (which is also a list):
components:
  terraform:
    container_job/defaults:
      metadata:
        type: abstract
      vars:
        env:
          - name: foo
            value: bar
          ...
          - name: foo100
            value: bar100
        command: ["echo", "baz"]
Now if I want to reuse this component and add some extra env variables in one version of the job, that works fine if I have the list_merge_strategy = append. However, if I also want to override the command to instead be ["echo", "foo"], it becomes ["echo", "baz", "echo", "foo"] (due to the append). Is there a way around this or a better way to handle it?
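Note: one workaround to consider (a sketch, not from the thread): maps deep-merge per key while lists follow the global list merge strategy, so modeling env as a map keyed by name gives append-like behavior for env entries, while command, left as a list under the default replace strategy, overrides cleanly. The env_map var and its shape are hypothetical; the component code would convert the map back to a list:
components:
  terraform:
    container_job/defaults:
      metadata:
        type: abstract
      vars:
        env_map:
          foo: bar
        command: ["echo", "baz"]
    my_job:
      metadata:
        inherits:
          - container_job/defaults
      vars:
        env_map:
          foo100: bar100  # merges in per key, without append strategy
        command: ["echo", "foo"]  # replaces the default list wholesale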
Jonathan Rose (5 months ago)
Noticing that mock outputs are intermittently overlooked / not honored. For example, if I run the same test 10 times in a row, it'll fail once or twice. Anyone else notice that behavior on Atmos 1.188.0?
Dan Hansen (5 months ago)
I'm working on getting atmos describe affected integrated into some of our workflows. It currently reports that every single component/stack has changed all of their properties. How should I go about debugging this?
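Note: a first debugging step (a sketch reusing flags shown earlier in this thread) is to run against an explicit base clone with verbose output and inspect the affected_all field per component to see which property Atmos thinks changed:
atmos describe affected --verbose=true --repo-path=<path-to-base-clone>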
RB (5 months ago)
I'm having some trouble deploying the aws-team-roles component. The default providers from the upstream component try to use the OrganizationAccountAccessRole, but this role doesn't seem to be modified to allow that role assumption. Is there a Terraform component that modifies this existing role, or does this role need to be modified manually for each account?
Nayeem Mohammed (5 months ago, edited)
Hey guys,
I'm looking for some basic understanding on creating stacks.
I have started creating resources in AWS using components: vpc, ec2, eks, etc.
If I want to create a component and use vpc, how can I use the component IDs directly instead of hard-coding them (which I am doing)? I know it can be done in a better way, but I'm currently unaware of it.
My stack YAML:
vpc:
  vpc:
    # Backend configuration for environment-based key structure
    backend:
      s3:
        workspace_key_prefix: "dev/terraform/vpc"
    vars:
      # CloudPosse label configuration
      enabled: true
      environment: "dev" # ← Environment
      name: "vpc" # ← Component name
      vpc:
        name: "vpc-dev" # ← User-defined VPC name (used directly)
        cidr_block: "172.22.0.0/21"
        availability_zones:
          - "us-west-2a"
          - "us-west-2b"
          - "us-west-2c"
        # Subnet Configuration
        subnets:
          private:
            - "172.22.0.0/24"
            - "172.22.1.0/24"
            - "172.22.2.0/24"
          public:
            - "172.22.3.0/24"
            - "172.22.4.0/24"
            - "172.22.5.0/24"
eks:
  eks-cluster:
    # Backend configuration for environment-based key structure
    backend:
      s3:
        workspace_key_prefix: "dev/terraform/eks-cluster"
    metadata:
      component: "eks-cluster"
    settings:
      depends_on:
        vpc:
          component: "vpc"
    vars:
      # CloudPosse Context Configuration
      eks_cluster_name: "eks-cluster-dev"
      # Development environment specific overrides
      cluster_version: "1.33"
      # VPC Configuration
      vpc_name: "vpc-dev"
      vpc_id: "vpc-3jd78hh83"
The vpc_name here in the eks cluster I have hard-coded. Can I use something like {component.vpc.vpc_id}?
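Note: Atmos has a YAML function for exactly this, which also appears later in this thread; a sketch (the stack name is hypothetical):
vars:
  vpc_id: !terraform.output vpc dev .vpc_id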
Erik Osterman (Cloud Posse) (4 months ago)
This introduces atmos auth, the single best way to handle authentication and identities across all clouds. It supports SSO, SAML, OIDC, and IAM users. Manage all your IAM configuration in atmos.yaml; no more messing around with AWS profiles. Centralize all your identity configuration in one place. Makes it easy for all users to authenticate without any manual configuration. https://atmos.tools/cli/commands/auth/usage
Erik Osterman (Cloud Posse) (4 months ago)
☝️ replaces our earlier endorsements of Leapp (deprecated).
PePe Amengual (4 months ago)
We are using helmfile in Atmos and we are doing it this way:
---
repositories:
  - name: prometheus
    url: https://prometheus-community.github.io/helm-charts
releases:
  - name: prometheus
    chart: prometheus/prometheus
    version: 27.39.0
    namespace: monitoring
    createNamespace: true
    force: false
    values:
      - mgmt-wus3-prometheus.helmfile.vars.yaml
But we want to avoid passing the filename mgmt-wus3-prometheus.helmfile.vars.yaml, or at least have a way to templatize the name so it can be environment agnostic. We tried many things from the helmfile docs and they just do not work. Is there any magic in Atmos that helps resolve this? At the end of the day, it is Atmos creating the mgmt-wus3-prometheus.helmfile.vars.yaml file.
Alek (4 months ago)
Hello team! I'm sorry if this question is basic; I've recently inherited a large Atmos-based IaC codebase and am still trying to wrap my head around it. It is less than a year old.
I made some changes that touch multiple components, and would like to apply my PR changes to all affected stacks in our staging stage. For context, our stack naming uses the general best practice: {tenant}-{environment}-{stage} (i.e. plat-use1-staging).
To do that, I want to write a simple GHA workflow that applies changes to all stacks suffixed with -{stage} (i.e. staging) on demand. I come from a traditional Terraform background, where we used TF workspaces as environments (prod, stage, dev).
Just wanted to double-check with you whether that idea is common amongst Atmos users, and whether I'm thinking about Atmos stacks correctly. Thank you for any tips. 🙏
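Note: a sketch of the selection step for such a workflow (the jq filter is hypothetical; the .stack field appears in the describe affected output shown earlier in this thread):
atmos describe affected --repo-path=<path-to-base-clone> | jq '[.[] | select(.stack | endswith("-staging"))]'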
PePe Amengual (4 months ago, edited)
In Atmos inheritance, do maps get merged and lists do not? For example, I have a list in a catalog file component that looks like this:
subnets:
  - a
  - b
  - c
In the same file we have a map:
role_assignments:
  "pepe":
    - "super admin"
  john:
    - "not admin"
And in my component I have:
pepe:
  metadata: ...(inherits the catalog file mentioned)
  vars:
    subnets: []
    role_assignments: {}
And when I do describe affected, I see the subnets as an empty list but the role_assignments map gets merged instead.
Was this a bug in a version or was this always the behavior? I'm a bit confused because I always thought you could overwrite anything on a component.
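Note: for context, a sketch of the relevant knob as I understand it: list handling is controlled globally in atmos.yaml via the list merge strategy, while maps always deep-merge during inheritance:
settings:
  list_merge_strategy: replace  # or: append, merge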
toka (4 months ago)
Hey guys!
Trying to use the Atmos GitHub Actions for my project and I'm stuck. Working right now on migrating slap-glued TF code (the usual stuff we all love) into the Atmos framework for a GCP project.
I'm working on Atmos Terraform Plan at the moment. I'm using the Determine Affected Stacks job to generate the job matrix for components. It generates Plan jobs, but it seems the underlying Terraform code is not executed in Plan at all, despite the jobs passing all green.
I've added a simple step, Atmos plan (raw), that executes atmos terraform plan directly, and then I get to see TF plan output:
- name: Atmos plan (raw)
  run: |
    set -eo pipefail
    echo "Component=${{ matrix.component }} | Stack=${{ matrix.stack }}"
    atmos terraform plan '${{ matrix.component }}' -s '${{ matrix.stack }}' -- -input=false -no-color | tee plan.txt
First time trying to reuse the Atmos actions, so I'm assuming it is some kind of misconfig on my end.
The catch is I'm using GCP, so I cannot simply set integrations.github.gitops with artifact-storage just yet. The GCP storage API is compatible with S3, but I guess the authentication flow is a bit different (no AWS ARN roles and such). Not sure how much it affects the action I'm trying to use, though.
Hoping for some direction! Snippet from my atmos.yaml:
integrations:
  github:
    terraform-version: "1.13.1"
    infracost-enabled: false
    artifact-storage:
      plan-repository-type: "s3"
      metadata-repository-type: ""
      region: "us-xxxx"
      bucket: "toka-xxxxxxxx-bucket"
      table: ""
      role: ""
    role:
      plan: ""
      apply: ""
    matrix:
      sort-by: .stack_slug
      group-by: .stack_slug
components:
  terraform:
    [...]
    settings:
      github:
        actions_enabled: true
Alek (4 months ago, edited)
Hello team! I received a recommendation from the AWS EKS team to run the Karpenter and CoreDNS addons on Fargate profiles, but today I found a contradictory recommendation from you in the EKS cluster component documentation. In the same docs, I see that running addons on Fargate is also not recommended.
Do I understand correctly that you recommend running Karpenter+CoreDNS deployments on a small, isolated set of MNGs, and everything else on the Karpenter-managed nodes? Or do you also allow other cluster tools onto these nodes, and run cluster-autoscaler to scale these node groups up/down?
In the past, I migrated away from MNGs and CA to Fargate profiles + Karpenter with success (granted, with a bit of pain as well). Could someone elaborate on the recommendations being given here, just for my understanding?
Sorry if this feels more like it should belong in #aws; I inherited a large Atmos codebase and want to follow established patterns while wrapping my head around it.
Ricky Fontaine (4 months ago, edited)
Hi all! I'm having some trouble using the yq join function in Atmos. I am creating a Kubernetes ingress and want an annotation like:
alb.ingress.kubernetes.io/subnets: subnet1, subnet2, subnet3
using the Terraform output function. I'm trying something like:
alb.ingress.kubernetes.io/subnets: !terraform.output vpc plat-ue1-prod ".public_subnet_ids[] | join("","")"
but haven't gotten it to work. I've also tried something like '.public_subnet_ids[] | join(",")' but that doesn't work either.
Has anyone here successfully joined an array through the !terraform.output function? Let me know if you want more info.
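Note: in plain yq, .public_subnet_ids[] explodes the array into a stream, leaving join nothing to operate on; keeping the array intact may be all that's needed (a sketch, untested against this setup):
alb.ingress.kubernetes.io/subnets: !terraform.output vpc plat-ue1-prod '.public_subnet_ids | join(",")'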
PePe Amengual (4 months ago)
What is wrong with this vendor.yaml in 1.194.1? The only change was the source format, and the update from 1.170.0 to 1.194.1.
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: vendor iac repo
  description: Atmos vendoring manifest
spec:
  imports: []
  sources:
    - source: "git::https://{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git?ref={{.Version}}"
      version: "main"
      targets:
        - "./"
      included_paths:
        - "**/components/**"
        - "**/*.md"
        - "**/stacks/**"
From the docs:
The component attribute in each source is optional. It's used in the atmos vendor pull --component <component> command if the component is passed in. In this case, Atmos will vendor only the specified component instead of vendoring all the artifacts configured in the vendor.yaml manifest.
Error:
yaml: line 8: did not find expected key
Error: Process completed with exit code 1.
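Note: one likely culprit (a guess, not confirmed in the thread): the inner double quotes around GITHUB_TOKEN terminate the double-quoted YAML scalar early, which matches a parse error on that line. Single-quoting the whole value is one way to keep it valid YAML:
- source: 'git::https://{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git?ref={{.Version}}'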
Thomas Spear (4 months ago)
I received what I consider to be a spam message in DM here. How can I report this?
Erik Osterman (Cloud Posse) (4 months ago)
⚡️ Heads up! Atmos v1.195.0 is coming soon with 450% performance gains across the board.
This release reworks some of Atmos’s internal plumbing. We’ve greatly expanded test coverage and validated it with several large customers, but as always, there’s a small chance of regressions.
Jonathan Rose (4 months ago, edited)
Is the replace directive in go.mod introduced in v1.195.0 a temporary workaround? We used to install Atmos with go install but can't due to this change (re: https://github.com/cloudposse/atmos/pull/1631).
Kumar (4 months ago)
👋 Hello, Atmos team!
I recently came across Atmos and it looks interesting how it handles the DRY principle with Terraform. I'm currently evaluating Terragrunt vs Atmos for restructuring our infrastructure and have a few questions.
Context:
- Multi-cloud setup: GCP (primary) and AWS (secondary)
- Currently using a monolithic (modules and environments structure) Terraform template across ~40 environments
- Key requirements: DRY, state isolation per resource type, YAML configuration preference (can be HCL too)
- 120+ workspaces with significant configuration duplication
- Current pain points: 120 duplicated backend.tf files, 120 duplicated provider.tf files, and no mechanism for sharing default configurations across environments
- What attracted us to Atmos: the catalog + org + product + environment inheritance model - it seems to solve our duplication problem much better than Terragrunt's single-layer _envcommon approach
- State coupling: changing the database tier = terraform refresh of all 7 resource types even though only 1 changed
Questions for the team:
1. Multi-cloud Support (GCP + AWS):
- I understand Atmos sits on top of Terraform, but there are still a few things I have to go through in the documentation (it's overwhelming for a newcomer)
- Are there any GCP-specific limitations we should be aware of?
- Do most Atmos users primarily use AWS, or is GCP equally well-supported?
2. Exit Strategy / Migration Back to Native Terraform:
- If we need to move away from Atmos in the future, how difficult is the migration back to native Terraform?
- I understand Atmos generates .tfvars files - can we simply use those generated files and remove Atmos, or is there more involved?
3. YAML Validation & Type Safety:
I would love to use YAML rather than HCL since it's developer friendly, but I understand that YAML-based approaches can have challenges compared to native HCL:
- Type safety: HCL provides compile-time type checking, while YAML is untyped
- Validation timing: errors may only surface during terraform plan rather than when editing (for example, HCL tfvars can give suggestions in VS Code, which makes it easy for devs to supply the right values)
- Custom error handling: need to build validation ourselves
- Does Atmos provide JSON Schema validation for stack files?
- Are there pre-flight validation tools (atmos validate) to catch errors before Terraform execution?
- How does the team recommend handling IDE support and autocomplete for YAML configs?
- Any best practices for preventing runtime errors due to YAML structure issues?
4. Atlantis Integration:
- We use Atlantis for CI/CD - I saw Atmos has atmos atlantis generate for auto-generating atlantis.yaml
- How well does this work in practice with large-scale setups (120+ workspaces)?
- Any known issues or best practices?
Would love to hear from anyone using Atmos with GCP! Really appreciate your insights.
Thanks in advance! 🙏
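Note: on the validation questions, Atmos does ship pre-flight checks; a sketch of the two commands (the component and stack names are hypothetical):
atmos validate stacks
atmos validate component vpc -s plat-use1-dev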
Jonathan Rose (4 months ago)
Trying to provision FSx for Windows File Server using self-managed Active Directory, which requires an explicit username and password. The values are stored in Secrets Manager, but attempts to parse the value appear to fail with one or more errors:
password: !terraform.state secrets-manager "wrapper.domainjoin.secret_string[password] // ""MOCK__password"""
username: !terraform.state secrets-manager "wrapper.domainjoin.secret_string[username] // ""MOCK__username"""
Error message:
failed to evaluate terraform backend variable wrapper.domainjoin.secret_string[username] // "MOCK__username" for component secrets-manager in stack cfsb-infra-fsx-ue1-dev
in YAML function: !terraform.state secrets-manager "wrapper.domainjoin.secret_string[username] // ""MOCK__username"""
EvaluateYqExpression: failed to evaluate YQ expression '.wrapper.domainjoin.secret_string[username] // "MOCK__username"': 1:35: lexer: invalid input text "username] // \"MO..."
I've also tried putting the values in quotes, but that doesn't appear to work either.
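Note: the lexer error points at the bare word inside the brackets; yq string indexing needs quoted keys (or dot notation). A sketch of the quoting, untested here:
username: !terraform.state secrets-manager '.wrapper.domainjoin.secret_string["username"] // "MOCK__username"'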
Sam (4 months ago, edited)
Hey everyone, I hit an issue with using the !terraform.output function in a circular manner. It caused a recursive loop. FWIW I'm using the most recent version of the atmos binary:
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0x14044e80b60 stack=[0x14044e80000, 0x14064e80000]
fatal error: stack overflow
So I was wondering what the recommended approach was for sharing state information. My situation is as follows:
I've got 2 stacks, core and staging, and 1 component, vpc.
The vpc component in core uses an output from the vpc component in staging which contains transit gateway attachment IDs and VPC CIDR ranges.
The vpc component in staging uses an output from the vpc component in core containing a transit gateway ID.
So now when I plan the component in either stack, it errors with the above (and a massive stack trace).
I would appreciate any guidance on how to do this in a better way, as I am new to atmos.
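Note: one way to break the cycle (a sketch; the transit-gateway component and output names are hypothetical) is to hoist the shared value into a third component so both vpc instances read in one direction only:
components:
  terraform:
    vpc:
      vars:
        transit_gateway_id: !terraform.output transit-gateway core .transit_gateway_id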
Yangci Ou (4 months ago)
Hey Atmos team!
Just to begin, I watched the demo of atmos auth and I've been playing around with it in a couple of projects. I love it. I'm also really excited for the auth console and the auth shell (I've been primarily using auth exec --, so that'll be a significant improvement).
In a vanilla Terraform project (or any other use case of exclusively using atmos auth without an Atmos project), is it possible to boot without the stacks.base_path and stacks.included_paths configs in atmos.yaml? Right now, I have to add empty, or rather dummy, values in those attributes so it can pass, or else it'll fail.
Ana Nikolic (4 months ago)
Hi team,
I'm doing a POC for Atmos; we are trying to step away from Terragrunt. I've managed to deploy my first resource. However, I'm struggling to understand how Terraform state management works with Atmos, or more specifically, how to make it behave the way I want.
For our setup, the main "stack unit" will be isolation by GCP project: ideally, each project should have its own state file / workspace. But I can't seem to get this to work: when I run terraform workspace list, I see only default.
I've put the following in atmos.yaml and stacks/_default.yaml; I wanted to start with the local backend.
Any help with what I need to change/add is appreciated!
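Note: as I understand it, Atmos computes and selects the Terraform workspace itself when a component runs through it, so a bare terraform workspace list outside an Atmos run only shows default. A sketch of checking the calculated workspace (names hypothetical):
atmos terraform workspace my-component -s my-stack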
Yota (4 months ago, edited)
Hello everyone 👋,
I am currently looking for tools to create and manage an AWS organization that complies with the best practices required by standards (SOC2, HIPAA, etc.). I am studying LZA, AFT, ADF, CfCT, Gruntwork Account Factory, orgFormation, and of course Atmos (which is why I am here 🙂).
After reading your “Reference Architecture for AWS” documentation, I want to try it out on a new AWS account. I've read the docs, responded to the design decisions, but unfortunately, I'm stuck here: https://docs.cloudposse.com/layers/accounts/prepare-aws-organization/
More specifically, at “This requires the prior creation of an AWS organization, which we will create with Terraform in the Deploy Accounts guide: https://docs.cloudposse.com/layers/accounts/deploy-accounts/#-prepare-account-deployment.”
The guide says “Check the configuration of the ‘account’ in the stack catalog.” I assume this step will create the AWS organization with Atmos, but I've never used it before. Everywhere, the documentation assumes that you are familiar with Atmos. But it seems like I'm stuck in a loop: to learn Atmos, I need a new organization to create accounts and play around with. But to do that, I need to know Atmos first.
Would you be kind enough to give me an example of a complete repository structure (files and directories) so that I can deploy an architecture?
Thanks!
erik (4 months ago)
Atmos 1.196.0 will show your active identities and time remaining:
atmos auth list
Alberto Rojas (4 months ago, edited)
Hello!
We use Atlantis and we are trying to install Atmos in the Alpine Atlantis image. I followed the documentation, but it fails with:
apk add atmos@cloudposse
WARNING: The repository tag for world dependency 'atmos@cloudposse' does not exist
ERROR: Not committing changes due to missing repository tags. Use --force-broken-world to override.
If I search for it, it is not there:
apk search --repository https://dl.cloudsmith.io/public/cloudposse/packages/alpine/v3.21/main | grep atmos
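Note: for what it's worth, apk's @tag syntax only resolves if a tagged repository line exists first; a sketch of standard apk mechanics (URL taken from the message above):
echo "@cloudposse https://dl.cloudsmith.io/public/cloudposse/packages/alpine/v3.21/main" >> /etc/apk/repositories
apk update
apk add atmos@cloudposse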
Alberto Rojas (4 months ago)
Hello, me again. On the journey to onboard Atmos into our pipeline with Atlantis, we see that the documentation mostly has you use pure Terraform commands instead of the Atmos CLI; is there a reason behind that?
Another question: can the backend configuration be automatically generated? We have some IAM roles for some S3 buckets, so we want to compute the role name depending on the stack used.
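Note: on the second question, Atmos can generate the backend file per component; a sketch (the bucket name and the templated role ARN are hypothetical, and templating inside backend config assumes Go templates are enabled in the project):
# atmos.yaml
components:
  terraform:
    auto_generate_backend_file: true
# stack manifest
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: my-tfstate-bucket
      role_arn: "arn:aws:iam::123456789012:role/tfstate-{{ .vars.stage }}"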
Erik Osterman (Cloud Posse) (4 months ago)
Atmos 1.196.0 has a regression: it prompts for auth identities even when none are configured. A fix is en route.