51 messages
Alexandre Feblot (20 days ago)
How to handle "sensitive terraform output"?
Hi, sorry to come back to this, but after having read https://github.com/cloudposse/atmos/blob/osterman/secrets-management/docs/prd/secrets-management.md , I don't understand how to deal with terraform sensitive outputs and pass them properly to
• other terraform components
• other helm or ansible components
The secrets-management PRD does not seem to address this, as it focuses on manually defined secrets using the atmos cli.
I would expect a store-like mechanism that allows "connecting" stack outputs to stack inputs for sensitive information, leveraging some "safe intermediate storage".
I feel (am I wrong?) that SSM Parameter Store, for example, could be an easy win: it supports SecureString and is already supported as an atmos store. Setting the parameter type to SecureString when the output is marked sensitive might be possible?
Sure, it would change the definition/scope of atmos stores, but again, there is a gap here that secrets-management does not fill.
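[Editor's note] The pieces the message describes mostly exist today as separate features; a hedged sketch of wiring them together follows. Store name, component names, and output keys are hypothetical, and automatic SecureString handling for sensitive outputs is the proposed gap, not confirmed current behavior:

```yaml
# atmos.yaml: an SSM-backed store (supported today)
stores:
  ssm:
    type: aws-ssm-parameter-store
    options:
      region: us-east-1

# Stack manifest: push a producer's output to the store after apply,
# then read it from a consumer via the !store YAML function.
# Whether the parameter would be written as a SecureString when the
# terraform output is marked sensitive is the open question here.
components:
  terraform:
    producer:
      hooks:
        store-outputs:
          events:
            - after-terraform-apply
          command: store
          name: ssm
          outputs:
            db_password: .db_password
    consumer:
      vars:
        db_password: !store ssm producer db_password
```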
Stewart (19 days ago, edited)
lol, sorry...fat fingered
Zack (18 days ago)
❓️ I have atmos creating two different workdirs for the same component using JIT 🧵
Zack (18 days ago)
❓️ Wanted to also confirm that JIT is still not smart enough to overwrite its own workdir if the source.version has not changed
erik (18 days ago)
We're looking for feedback on our new agent skills. @Andriy Knysh (Cloud Posse) put these together.
https://atmos.tools/changelog/ai-agent-skills
Sean Nguyen (18 days ago, edited)
Is there guidance for using !terraform.state vs atmos.Component? I’m struggling to understand the differences based on the documentation.
This week, we uncovered that atmos.Component was breaking one of our stacks (it was causing the terraform workspace to flap during plans, which ultimately caused the apply step in Spacelift to fail). When we changed the offending catalog files to use the !terraform.state syntax instead, things began working again.
Given component_A and component_B, the Spacelift stacks for component_B were failing when:
• catalog/component_A/defaults.yaml -> had atmos.Component templating to fetch outputs from component_B
• component_B -> uses the remote-state module to query outputs from component_A
This no longer seems to be an issue after we switched to the !terraform.state syntax.
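[Editor's note] For readers comparing the two forms, a minimal sketch (component and output names taken from the message; confirm exact semantics in the Atmos docs). atmos.Component is a Go template function that can invoke terraform to fetch outputs, while !terraform.state is a YAML function that reads the state backend directly without running terraform:

```yaml
vars:
  # Go template function, evaluated during template processing:
  vpc_id: '{{ (atmos.Component "component_B" .stack).outputs.vpc_id }}'

  # YAML function, reads the backend state directly:
  vpc_id_alt: !terraform.state component_B vpc_id
```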
MB (18 days ago)
hey all! I see the following in the documentation.
https://atmos.tools/design-patterns/version-management/vendoring-components#advanced-vendor-configuration
specifically:
sources:
  # Vendor multiple versions of the same component
  - component: vpc
    source: "github.com/cloudposse/terraform-aws-vpc.git///?ref={{.Version}}"
    targets:
      - path: "vpc/{{.Version}}"
        version: "2.1.0"
      - path: "vpc/latest"
        version: "2.2.0"
Is this still supported?
Error: yaml: unmarshal errors:
  line 27: cannot unmarshal !!map into string
  line 29: cannot unmarshal !!map into string
  line 31: cannot unmarshal !!map into string
RB (17 days ago)
Is there any way to cache the deep-merged stack yaml to save time when running subsequent commands like plan/apply? Hoping to run something like atmos describe components > deepmerge.yaml and then run atmos plan xyz -s ue1-dev --cached-deepmerge-yaml deepmerge.yaml or similar to save time
RB (17 days ago)
What's a good way in the stack yaml to reference the AWS account id in an ARN without hardcoding it? Our devs created a new hardcoded input called var.aws_account_id in the stage mixin, but there must be a better way.
Jonathan Rose (16 days ago, edited)
Anyone else seeing this warning?
│ Warning: Incomplete lock file information for providers
│
│ Due to your customized provider installation methods, Terraform was forced
│ to calculate lock file checksums locally for the following providers:
│ - hashicorp/aws
│ - hashicorp/tls
│
│ The current .terraform.lock.hcl file only includes checksums for
│ linux_amd64, so Terraform running on another platform will fail to install
│ these providers.
│
│ To calculate additional checksums for another platform, run:
│ terraform providers lock -platform=linux_amd64
│ (where linux_amd64 is the platform to generate)
Any chance it's related to this warning?
│ Warning: Version constraints inside provider configuration blocks are deprecated
│
│ on providers_override.tf.json line 16, in provider.aws:
│ 16: "version": "5.100.0"
│
│ Terraform 0.13 and earlier allowed provider version constraints inside the
│ provider configuration block, but that is now deprecated and will be
│ removed in a future version of Terraform. To silence this warning, move the
│ provider version constraint into the required_providers block.
It seems to be specific to vendored components
Orion7X (16 days ago)
👋 Hey all! We're evaluating Atmos (alongside some other tools/frameworks) for a pretty complex existing Terraform setup. We manage many deployments across cloud providers, accounts, regions, and tenancy models (multi/single-tenant). Each has a "type" with its own blueprint of Terraform code + a tfvars file per deployment. Works fine today, but only for monostates - so we are looking for something more flexible.
We'd like a pattern like this:
- A top-level stack per deployment (e.g. mt-prod-us, lt-prod-us-customer1)
- Each top-level stack only needs two things: 1) an import for its deployment type, and 2) deployment-specific vars
- Blueprints are composable: a blueprint imports layers, layers import components
Example top-level stack:
# stacks/mt/prod-us.yml
name: mt-prod-us
import:
  - catalog/blueprints/mt
vars:
  stage: prod
  region: us-east-1
  cidr_block: "1.1.1.0/24"
  rds_instance_type: "db.r5.12xlarge"
Blueprint stack:
# catalog/blueprints/mt.yml
import:
  - catalog/layers/aws/networking
  - catalog/layers/aws/compute
  - catalog/layers/aws/storage
Each layer would import some number of components, such as vpc, subnets, etc. for the networking layer.
This looks similar to the Layered Stack Configuration pattern in the docs. My main question is around var precedence: how do I set a default (e.g. cidr_block: 192.168.0.0/24) in a blueprint/layer/component, but allow a top-level stack to override it?
Two options I see:
1. Drop the blueprint layer and list all layer imports directly in each top-level stack - but this risks drift between deployments over time, so probably a no-go for us
2. Use overrides.vars in the top-level stack instead of vars - seems to work in testing, but not sure if this is an anti-pattern
Is option 2 the right approach here, or is there a better pattern for this? Thanks! 🙏
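[Editor's note] On the precedence question: Atmos deep-merges imports, and values defined in the importing (top-level) manifest take precedence over imported ones, so a plain vars entry in the top-level stack should already override a blueprint default. A sketch under that assumption, using the file paths from the message:

```yaml
# catalog/blueprints/mt.yml: the blueprint carries the default
vars:
  cidr_block: "192.168.0.0/24"

# stacks/mt/prod-us.yml: the importing stack's own value wins the deep merge
import:
  - catalog/blueprints/mt
vars:
  cidr_block: "1.1.1.0/24"
```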
Jonathan Rose (15 days ago)
any insight into when chore(deps): update terraform cloudposse/repository/github to v1.1.0 by renovate[bot] · Pull Request #29 · cloudposse-terraform-components/aws-github-repository will be released?
MB (13 days ago, edited)
hey all, any idea how vendoring should be part of the pipeline? What's your vision on it?
Brandon (13 days ago)
hi there, when taking a layered approach to stacks, how does atmos recommend dealing with the "layer x must exist before planning layer y" problem? Is it atmos workflows, or should github workflows determine the affected layers, or a combo of both? If there are any examples, it would be a huge help.
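[Editor's note] One building block worth knowing here: a component can declare ordering via settings.depends_on, which atmos describe dependents and the CI integrations can use when computing what to plan first. A sketch with hypothetical component names:

```yaml
components:
  terraform:
    compute-layer:
      settings:
        depends_on:
          1:
            # Hypothetical name: the component that must be applied first
            component: network-layer
```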
Thomas Spear (13 days ago)
Hi team, we have some stacks that we want to rename. They have a typo in them while other non-terraform code has the correct spelling and we want to fix it.
The problem is: doing so seems to cause atmos to treat them as a different stack, and tries to create them fresh. We would like to avoid adding import blocks for this but we also don't have direct access to manually move the state files and so we need atmos to do it via our CI pipeline if possible because the stack name is used across multiple environments including production. Is there any way currently?
I see this page talks about renaming components, but what about renaming stacks?
RB (13 days ago)
Does atmos pro allow applying each commit like in spacelift? Or does atmos pro only allow applying on the default branch?
RB (12 days ago)
Re: upgrading from atmos 1.168.0 to the latest version, ignore_missing_template_values (link) seems to be the method @Pablo Paez found. Is there any way to set this at a more global level than on every imported catalog?
e.g.
import:
  - path: "<path_to_atmos_manifest2>"
    context: {}
    skip_templates_processing: false
    ignore_missing_template_values: true
RB (12 days ago, edited)
Brandon (12 days ago, edited)
hello, wondering what happens if you're using !terraform.state and the component/stack you're querying doesn't exist yet, or hasn't been applied yet?
RB (11 days ago)
Do we have a way of doing terraform state migrations per component-stack slug yet?
Ideally, we’d like to do this in a migrations.tf-style HCL file, but per component-stack, so migrations-redis-banana-ue1-dev.tf won’t be applied to redis-banana-ue2-prod
RB (10 days ago)
How do you pin providers across all components?
We ran into an issue today due to the newest https://github.com/cloudposse/terraform-provider-utils/releases/tag/v1.32.0 provider and need to pin all of our components to v1.31.0
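[Editor's note] One conventional Terraform-level approach (not Atmos-specific) is an override file dropped into each component, since Terraform merges *_override.tf files into the configuration. The provider address comes from the linked release; the rest is a sketch:

```hcl
# versions_override.tf: pin the utils provider in a component
terraform {
  required_providers {
    utils = {
      source  = "cloudposse/utils"
      version = "1.31.0"
    }
  }
}
```

Distributing this file to every component (via vendoring, a generator, or a script) is left to the reader's setup.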
Johan (10 days ago)
Hi! Looking forward to using the new AI integration! I love how Atmos is evolving 🙂
Did anyone already succeed in using litellm as an AI backend?
Orion7X (10 days ago)
Is there a way to get a component DAG view of all components for a stack? I'm aware of atmos describe dependents, but I'm looking for a command that operates at the stack level.
Orion7X (10 days ago)
It appears that components.terraform.base_path (in atmos.yaml) doesn't support a nested directory structure - can someone confirm? I would like to organize components, at the very least, by cloud provider: /components/terraform/aws/..., but I can't get this to work.
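[Editor's note] Nested directories under base_path generally work when the stack manifest points at the relative path explicitly via metadata.component; a sketch (component name and path are illustrative):

```yaml
components:
  terraform:
    vpc:
      metadata:
        # Path relative to components.terraform.base_path
        component: aws/vpc
```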
Jonathan Rose (9 days ago)
question, I am doing a POC for GitHub Actions | atmos and trying to understand how to get up and running quickly without the noted requirements (e.g. S3 bucket, DynamoDB table)
Syio (9 days ago, edited)
Hello, I’m trying to understand the correct way to use workdir and source provisioning with the Terraform plan GitHub Action.
I’m currently using cloudposse/github-action-atmos-terraform-plan with my components, which use workdir + source provisioning enabled, because I want to avoid cache collisions when running multiple stacks in parallel that share the same component.
The issue I’m encountering is that the action attempts to write the Terraform plan file to:
terraform/components/<component>/
However, when workdir is enabled, Atmos provisions the component to:
.workdir/<stack>-<component>/
This causes a mismatch between where the action expects the component to exist and where Atmos actually provisions it.
For context: My component base path is terraform/components
My current workaround is to pre-create the expected component folder in the pipeline, while still running Terraform from the component workdir.
Before continuing with this approach, I wanted to ask: Is there a recommended way to use workdir + source provisioning with the plan action? Or is there a configuration option I might be missing so the action uses the Atmos workdir path instead?
Jonathan Rose (9 days ago)
Trying to understand what "import" is being referenced in the error below. I tried setting logs to both debug and trace and couldn't figure it out 🤔
$ atmos describe affected
Error
Error: failed to find import
$ atmos tf plan --affected
Error
Error: failed to find import
Leonardo Agueci (9 days ago, edited)
Hi all, I'm testing version 1.209.0 and I think I found an issue with the authentication realm.
If I run a plan for a component: atmos terraform plan <component> -s <stack>, credentials are stored in .config/atmos/<realm>.
Instead, if I run a plan for the entire stack: atmos terraform plan -s <stack> --all, I get the following warning:
WARN Credentials found under realm "b1fe90cf620cb71a" but current realm is "(no realm)". This typically happens after changing auth.realm in config or ATMOS_AUTH_REALM. Run 'atmos auth login' to re-authenticate under the current realm.
Then looking into the .config/atmos/ directory I can see two different folders:
ls -l atmos/
drwx------ 3 root root 4096 Mar 13 10:30 aws
drwx------ 3 root root 4096 Mar 13 10:30 b1fe90cf620cb71a
It seems that with --all, credentials are stored without the realm in the path (I actually haven’t been asked to reauthenticate).
I'm using AWS SSO as provider.
erik (9 days ago)
Got an idea for something Atmos should do?
Instead of filling out a long, boring feature template, just submit an AI prompt.
Describe the workflow, command, or problem you want Atmos to solve.
If it resonates with us, we might just implement it.
👉️ https://github.com/cloudposse/atmos/issues
Michael Dizon (9 days ago)
Running 1.209.0. I get this error when trying to use helmfile:
WARN helm_aws_profile_pattern is deprecated, use --identity flag instead
The config profile (core--gbl-tooling-helm) could not be found
i’m not using helm_aws_profile_pattern, but it seems to be picking it up from somewhere. Am I missing a config?
components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
  helmfile:
    base_path: "components/helmfile"
    cluster_name_template: "core-{{ .vars.stage }}-{{ .vars.environment }}-app"
    helm_aws_profile_pattern: null
    use_eks: true
    kubeconfig_path: /tmp/{stage}-{environment}-kubeconfig
Alexandre Feblot (9 days ago)
FYI: Error on doc example: https://atmos.tools/cli/configuration/stacks/#advanced-template-with-validation
With templating enabled, the advanced name_pattern example works fine for atmos list stacks but fails with a templating error for atmos tf plan some_component.
What works for me is this:
name_template: |-
  {{ $ns := .vars.namespace -}}
  {{- $tenant := .vars.tenant -}}
  ....
  {{- $stack_name }}
Note the removed "-" at the beginning of the first line and at the end of the last line.
Kyle Avery (8 days ago)
Trying to implement gcp-wif with GHA, seems there may be a mistake in the docs - https://atmos.tools/cli/configuration/auth/providers#workload-identity-federation
I see this error in GH:
**Error:** parse gcp/wif spec: invalid provider config invalid auth config: spec is nil
# Initialize Providers
**Error:** failed to initialize providers: invalid provider config: provider=gcp-wif: parse gcp/wif spec: invalid provider config invalid auth config: spec is nil
Seems as though the configuration should look like this?
auth:
  providers:
    gcp-wif:
      kind: gcp/workload-identity-federation
      spec:
        project_id: my-gcp-project
        project_number: "123456789012"
        workload_identity_pool_id: github-pool
        workload_identity_provider_id: github-provider
        service_account_email: ci-sa@my-project.iam.gserviceaccount.com
Jonathan Rose (8 days ago)
cloudposse-terraform-components/aws-github-repository at v0.3.0 states that required_code_scanning is unsupported due to permadrift. It looks like the underlying issue has been resolved as of v6.9.0 (refs: https://github.com/integrations/terraform-provider-github/pull/2701). Are there plans to reintroduce this feature?
Paavo Pokkinen (7 days ago)
Hey everyone!
I’ve been looking into Atmos for a couple of days now, and it looks really good for our use case. Something I am struggling with is how small I want my stacks to be split.
We’re still a relatively tiny SaaS company, but as we aim to be hosted on hyperscaler marketplaces, we probably need a multi-cloud strategy (GCP being the main one). We also do some consulting, and might host something outside of our core product, so I’ve utilized a “namespace-tenant” split, like in the atmos docs.
My name_template is currently:
"{{.vars.namespace}}-{{.vars.tenant}}-{{.vars.environment}}-{{.vars.stage}}-{{.vars.layer}}"
This leads to stack names such as:
mp-plat-gcp-non-prod-meta
mp-plat-gcp-non-prod-network
mp-plat-gcp-non-prod-cluster
mp-plat-gcp-shared-meta
So far in the environment I don’t have an indication of cloud region, only the vendor. Not sure if I should have it? Environments will be only a “prod”/“non-prod” split; we’ll handle app-specific staging, dynamic preview envs, etc. with Argo CD inside the clusters.
I’m heavily utilizing Google’s excellent terraform modules for project creation, shared VPC, and K8s cluster management. These are already quite high-level / opinionated, so components right now are really lightweight.
I’ve split layers like this:
• meta: things like org settings, folder creation and project creation
◦ why: my thinking is project creation should be split off from actual resource stacks; a project rarely if at all changes after creation, and it can be slow to validate everything
• network -> shared VPC stuff
• cluster -> GKE cluster creation
• cluster-addons -> necessary stuff that goes into the Kube cluster, like Argo CD
As for applications, I don’t know yet. We’ll probably aim to have multi-app clusters; apps could be deployed totally outside of Atmos as well. Definitely considering Argo CD there, which I’ve had good experiences with.
Any feedback would be appreciated here! Am I having too small stacks?
Elon (7 days ago)
Hi everyone.
I'm new to atmos and there is one quirk I cannot figure out, and I didn't find a straight answer in the docs.
I'm mainly working with public modules from the terraform registry that aren't Cloud Posse modules.
That means they don't have the context variables: namespace, tenant, environment, stage.
What is the best practice for adding those variables to the module automatically, without causing drift to the existing module if I pull a new version?
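[Editor's note] A common approach (not from the thread) is a thin wrapper component: the registry module stays vendored and untouched, and a small root module declares the context variables and maps them onto the module's own inputs. A sketch, with the module source, version, and input mapping as illustrative placeholders:

```hcl
# components/terraform/vpc/main.tf (illustrative wrapper)
variable "namespace" { type = string }
variable "tenant" {
  type    = string
  default = null
}
variable "environment" { type = string }
variable "stage" { type = string }

module "vpc" {
  # Placeholder registry module; pin the version you actually use
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  # Map the Atmos context onto the module's own inputs
  name = join("-", compact([var.namespace, var.tenant, var.environment, var.stage]))
  tags = {
    Namespace   = var.namespace
    Environment = var.environment
    Stage       = var.stage
  }
}
```

Upgrading the registry module then never conflicts with the context wiring, since the wrapper is the only file you own.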
Tyler Rankin (7 days ago)
G’day - we are experiencing a fatal error: stack overflow when using atmos > v1.200.0. Are we up against a stack count limit? An import limit? We do make use of go templates and functions (!terraform.state).
Seemingly all atmos commands (describe component, validate stacks, validate schema, etc.) complete successfully, with the exception of atmos describe stacks -s <stack>.
A recent change to cloudposse/utils v1.32.0 bumped atmos above v1.200.0, and now all of our remote-state stacks are failing with a generic terraform error: failed to find import. Pinning utils to v1.31.0 resolves this, which I believe uses an older atmos version < 1.200.0.
Any tips for the best path to resolution? We’d like to use the latest version of atmos. TIA!
Paavo Pokkinen (5 days ago)
I am a bit confused about the Helmfile integration in Atmos. What is the use case vs the Helmfile Terraform provider?
There seems to be some integration with EKS. Has anyone gotten this to work with GKE, so that clusters created on the Terraform side are handed over to the Helmfile side? I am not sure how to approach kubeconfig; it seems like it must be written to a file somewhere? Or maybe try somehow fetching it from outputs and putting it into an ENV variable before the Helmfile invocation? An example would be great. 🙂
erik (5 days ago)
Anyone here using Gitlab?
erik (5 days ago, edited)
In 1.210.0 we released Native CI support in atmos. It's 🧪 experimental in terms of maturity, but this is what will soon replace our github actions.
✅️ GitHub Status Checks
✅️ GitHub Job Summaries
✅️ GitHub Outputs
✅️ Atmos Toolchain (automatically install opentofu or almost any tool you need)
✅️ GitHub OIDC
That means this one command:
atmos terraform plan mycomponent -s dev-us-east-1
Can do the following:
1. Authenticate with OIDC
2. Install all tools like opentofu
3. Initialize the backend
4. Run Terraform
5. Post Status Checks
6. Post Job Summary
Coming soon:
• Planfile storage & retrieval: GitHub Artifacts
• Comment Support
https://atmos.tools/changelog/native-ci-integration
https://atmos.tools/cli/configuration/ci
Paavo Pokkinen (4 days ago)
Does Atmos make any assumptions about the directory structure under components/terraform? I’ve been placing components in sub-directories according to the service they mostly interact with, e.g. “gcp”, “aws”, “github”.
At least the terraform.output yaml function seems to make assumptions:
failed to generate backend file: open /Users/paveq/modernpath/live-infrastructure/components/terraform/gke/backend.tf.json: no such file or directory
The terraform.state function works, but I wonder if I should fix my structure.
Dan Hansen (3 days ago)
Checking to see if this behavior change in v1.210.0 was intentional. I'm guessing it's not? hooks.store-outputs.name evaluates to staging, but since now we don't process yaml functions when hydrating and running hooks, it evaluates to !template staging, raising **Error:** store "!template staging" not found in configuration
hooks:
  store-outputs:
    # Determine where we store outputs based on the project_id
    name: !template "{{ index .settings.context.project_to_store .settings.context.project_id }}"
Jonathan Rose (3 days ago)
@Erik Osterman (Cloud Posse) absolutely loving v1.210.0 so far. Only one thing I need to fix: I need the CI templates to hide the terraform warnings.
Paavo Pokkinen (3 days ago)
Is there any command like “run all (components) in a stack, in correct dependency order?”
Jonathan Rose (3 days ago)
Any idea how to resolve?
Run atmos describe affected --format=matrix
shell: /usr/bin/bash -e {0}
env:
  ATMOS_PROFILE: ci
  ATMOS_IDENTITY: cfsb-aws-config-acct
INFO Starting GitHub OIDC authentication provider=github-oidc
INFO GitHub OIDC authentication successful provider=github-oidc
# Error
**Error:** git reference not found on local filesystem: exit status 128
## Hints
💡 Make sure the ref is correct and was cloned by Git from the remote, or use
the '--clone-target-ref=true' flag to clone it.
💡 Refer to https://atmos.tools/cli/commands/describe/affected for more details.
Error: Process completed with exit code 128.
Jonathan Rose (3 days ago)
I am getting a weird bug in the latest version of my service catalog.
atmos vendor pull works fine locally, but yields the following error in CI:
+ atmos vendor pull
# Error
**Error:** the '--everything' flag is set, but vendor config file does not exist
## Explanation
vendor
erik (3 days ago)
Really loving these status checks in atmos native CI
DE (3 days ago, edited)
Question... we are implementing Versioning for our components as well as Resource Templates. Following documentation, but expanding the versions to Semantic Versioning as shown below. Can we assume that Vendoring Components would also work under this Versioning schema?
Component Versioning:
components/terraform/component-1
├── v1.0.0
│   └── tf-files.tf
└── v2.0.0
    └── tf-files.tf
Resource Template Versioning:
catalog/resource_templates/component-1
├── v1.0.0.yaml
└── v2.0.0.yaml
erik2 days ago
I spent some time learning how to use the AWS MCP servers with Atmos. TL;DR: it works very well.
I created some custom commands, atmos mcp aws install to install them, and then configured my .mcp.json to run the MCP using atmos auth
Jonathan Rose (2 days ago)
Does cloudposse/terraform-github-repository at v1.0.0 not support configuring copilot_code_review under rulesets?
Kyle Decot (1 day ago, edited)
Is there a way to bypass / disable the auto "json-ification" of strings obtained via !terraform.state? For example:
terraform:
  secretsmanager:
    vars:
      hello_world: !terraform.state hello .world # string containing: "{\"hello\": \"world\"}"
Here I want the "raw" string, however atmos is trying to help me and instead gives me a map 😢
Elon (about 8 hours ago)
Hi,
General question, what's the difference between aws modules and aws components?
For example:
https://docs.cloudposse.com/modules/library/aws/s3-bucket/
Vs
https://docs.cloudposse.com/components/library/aws/s3-bucket/
Which one should I use?