51 messages
Miguel Zablah over 1 year ago
hey, I'm debugging some templates and I noticed that atmos is not updating the error correctly. Does it cache the template result? If so, is there a way to clear this?
Dennis DeMarco over 1 year ago
I'm having an issue: I'm using atmos to autogenerate the backend and I'm getting Error: Backend configuration changed. I am not quite sure why, as I don't see Atmos making the backend
Dennis DeMarco over 1 year ago
hmm, atmos terraform generate backend is not making backend files. Any tips?
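(For reference, a minimal sketch of the two places backend generation is configured; the bucket, region, and paths below are placeholders, not Dennis's actual setup. The backend block lives in the stack manifests, and the atmos.yaml flag makes Atmos write backend.tf.json automatically on plan/apply.)
# stacks/orgs/acme/_defaults.yaml  (hypothetical path)
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "acme-terraform-state"   # placeholder
      key: "terraform.tfstate"
      region: "us-east-1"              # placeholder
      encrypt: true
# atmos.yaml
components:
  terraform:
    auto_generate_backend_file: true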
Miguel Zablah over 1 year ago
Hi!
I'm having an issue with this workflow:
cloudposse/github-action-atmos-affected-stacks
where it will do a plan for all stacks even when I have some disabled stacks in the CI/CD, and since one of the components is dependent on another it fails.
I get the same error when running this locally:
atmos describe affected --include-settings=false --verbose=true
is there a way to skip a stack or mark it as ignore?
this is the error:
template: describe-stacks-all-sections:74:26: executing "describe-stacks-all-sections" at <concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets)>: error calling concat: runtime error: invalid memory address or nil pointer dereference
it's complaining about this concat I do:
'{{ concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets | default (list)) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets | default (list)) | toRawJson }}'
but this works when vpc is applied; since this stack is not being used at the moment, it fails
any idea how to fix this?
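(One possible workaround, as a sketch rather than a confirmed fix: guard the whole expression on the vpc component's outputs existing, so stacks where vpc has never been applied render an empty JSON list instead of dereferencing nil. The consumer component name below is made up.)
components:
  terraform:
    my-component:   # hypothetical consumer of the vpc outputs
      vars:
        subnet_ids: '{{ if (atmos.Component "vpc" .stack).outputs }}{{ concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets | default (list)) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets | default (list)) | toRawJson }}{{ else }}[]{{ end }}'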
Drew Fulton over 1 year ago
Good morning, I'm new here but this looks like a great community. I've been using various Cloud Posse modules for terraform for a while but am now trying to set up a new AWS account from scratch to learn the patterns for the higher-level setup. I've run into a problem and am hoping for some help. I feel like it's probably just a setting somewhere but for the life of me I can't find it.
So I have been working through the Cold Start and have gotten through the account setup successfully, but running the account-map commands is resulting in errors. I'll walk through the steps I've tried in case my tweaks have confused the root issue... For reference, I am using all the latest versions of the various components mentioned and pulled them in again just before posting this.
1. When I first ran the atmos terraform deploy account-map -s core-gbl-root command, I got an error that it was unable to find a stack file in the /stacks/orgs folder. That was fine as I wasn't using that folder, but the error message made it clear that it was using a default atmos.yaml (this one) that includes orgs/**/* in the include_paths, and not the one that I have been using on my machine. I spent a long time trying to get it to use my local yaml and finally gave up and just added an empty file in the orgs folder to get past that error. Then I get to a new error...
2. Now if I run the plan for account-map I get what looks like a correct full plan and then a new error at the end:
╷
│ Error:
│ Could not find the component 'account' in the stack 'core-gbl-root'.
│ Check that all the context variables are correctly defined in the stack manifests.
│ Are the component and stack names correct? Did you forget an import?
│
│
│ with module.accounts.data.utils_component_config.config[0],
│ on .terraform/modules/accounts/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
exit status 1
If I run atmos validate component account -s core-gbl-root I get successful validations, and the same with validating account-map.
I've tried deleting the .terraform folders from both the accounts and account-map components and re-run the applies, but get the same thing.
I've run with both Debug and Trace logs and am not seeing anything that points to where this error may be coming from.
I've been at this for hours yesterday and a few more hours this morning and decided it was time to seek some help.
Thanks for any advice!
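(For context, which atmos.yaml Atmos loads is controlled by the base path and CLI config path; a sketch of the relevant settings, with placeholder values, plus the environment variables that can force them.)
# atmos.yaml at the repo root
base_path: "."              # or export ATMOS_BASE_PATH=/path/to/repo
stacks:
  base_path: "stacks"
  included_paths:
    - "orgs/**/*"           # the default config expects stacks/orgs; adjust to the layout actually in use
  excluded_paths:
    - "**/_defaults.yaml"
# ATMOS_CLI_CONFIG_PATH can point Atmos at the directory containing this atmos.yaml explicitly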
Samuel Than over 1 year ago (edited)
hi all, I'm brand new to Atmos, and I was quite blown away that I hadn't discovered this tool yet 🙂
While still figuring my way through the documentation, I have a question about components and stacks.
If I, as an ops engineer, were to create and standardise my own components in a repository that stores all the standard "libraries" of components, is it advisable for the stacks and atmos.yaml to be in a separate repository?
Meaning, a developer would only need to declare the various components inside a stacks folder of their own project's repository, i.e. only write yaml files and not have to deal with/write terraform code.
During execution we would then have a GitHub workflow that clones the core component repository into the project repository and completes the infra-related deployment. Is that something that's supported?
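(This split is commonly handled with vendoring: the project repository keeps only stacks and atmos.yaml and pulls the shared components in with atmos vendor pull. A sketch of a vendor.yaml doing that; the repository URL and component name are made up.)
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: project-vendor-config
spec:
  sources:
    - component: "vpc"
      source: "github.com/acme/core-components.git//components/terraform/vpc?ref={{.Version}}"   # hypothetical repo
      version: "1.0.0"
      targets:
        - "components/terraform/vpc"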
Alcp over 1 year ago
atlantis integration question, I have this config in atmos.yaml
project_templates:
project-1:
# generate a project entry for each component in every stack
name: "{tenant}-{stage}-{environment}-{component}"
workspace: "{workspace}"
dir: "./components/terraform/{component}"
terraform_version: v1.6.3
delete_source_branch_on_merge: true
plan_requirements: [undiverged]
apply_requirements: [mergeable,undiverged]
autoplan:
enabled: true
when_modified:
- '**/*.tf'
- "varfiles/{tenant}-{stage}-{environment}-{component}.tfvars.json"
- "backends/{tenant}-{stage}-{environment}-{component}.tf"the plan_requirements field doesn't seem to have any effect on the generated atlantis.yaml
John Polansky over 1 year ago
trying to create a super simple example of atmos + cloudposse/modules to see if they will work for our needs. I'm using the s3-bucket component, but when I do a plan on it, it prints out the terraform plan but then shows
│ Error: failed to find a match for the import '/opt/test/components/terraform/s3-bucket/stacks/orgs/**/*.yaml' ('/opt/test/components/terraform/s3-bucket/stacks/orgs' + '**/*.yaml')
I can't make heads or tails of this error.. there are no stacks/orgs under the s3-bucket module when I pulled it in with atmos vendor pull
Thanks in advance
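(The path in the error suggests the stacks directory is being resolved relative to the component folder rather than the project root; a short sketch of the settings that pin it down, reusing the /opt/test path from the error as the project root.)
# atmos.yaml
base_path: "/opt/test"        # or export ATMOS_BASE_PATH=/opt/test
stacks:
  base_path: "stacks"
  included_paths:
    - "orgs/**/*"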
Samuel Than over 1 year ago (edited)
Continuing from my understanding of the design pattern, based on this screenshot, can I ask a few questions:
1. The infrastructure repository holds all the atmos components, stacks, and modules?
2. The application repository only needs to provide the taskdef.json for deployment into ECS?
3. If there is additional infrastructure the application needs, for example S3, DynamoDB, etc., is the approach to get the developer to write a PR to the infrastructure repository first with the necessary "stack" information, prior to performing any application-type deployment?
Erik Osterman (Cloud Posse) over 1 year ago
tretinha over 1 year ago
Hey, on a setup like this:
import:
- catalog/keycloak/defaults
components:
terraform:
keycloak_route53_zones:
vars:
zones:
"[redacted]":
comment: "zone made for the keycloak sso"
keycloak_acm:
vars:
domain_name: [redacted]
zone_id: '{{ (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id }}'
My keycloak_acm component is failing to actually get the output of the one above it. Am I doing this fundamentally wrongly?
The defaults.yaml being imported looks like this:
components:
terraform:
keycloak_route53_zones:
backend:
s3:
workspace_key_prefix: keycloak-route53-zones
metadata:
component: route53-zones
keycloak_acm:
backend:
s3:
workspace_key_prefix: keycloak-acm
metadata:
component: acm
depends_on:
- keycloak_route53_zones
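(One thing worth noting as an aside, not a confirmed root cause: the documented place for component dependencies in Atmos is settings.depends_on rather than a bare depends_on key; a sketch of that form.)
components:
  terraform:
    keycloak_acm:
      settings:
        depends_on:
          1:
            component: "keycloak_route53_zones"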
PePe Amengual over 1 year ago (edited)
Hello, me again....... are component settings arbitrary keys?
settings:
pepe:
does-not-like-sushi: true
can I do that?
PePe Amengual over 1 year ago
Is there a way to inherit metadata for all components without having to create an abstract component? Something that all components should have.
toka over 1 year ago
Hi 👋 calling for help with configuring gcs backend.
I'm bootstrapping a GCP organization. I have a module that created a seed project with initial bits, including a GCS bucket that I would like to use for storing tfstate files.
I've run atmos configured with a local tf backend for the init.
Now I'd like to move my backend from local to the bucket and go from there. I've added bucket configuration to _defaults.yaml for my org:
backend_type: gcs
backend:
gcs:
bucket: "bucket_name"Unfortunately atmos says that this bucket doesn't exist, despite I have copied a test file into the bucket 😕
╷
│ Error: Error inspecting states in the "local" backend:
│ querying Cloud Storage failed: storage: bucket doesn't exist
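(For comparison, a sketch of the nesting Atmos expects for the backend section in a stack _defaults.yaml; the bucket name and prefix are placeholders. Note also that moving existing local state into the bucket is a Terraform state migration, e.g. terraform init -migrate-state, not something the backend block alone performs.)
terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: "bucket_name"          # placeholder
      prefix: "terraform/state"      # placeholder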
PePe Amengual over 1 year ago (edited)
can I vendor all the components using vendor.yaml? or do I have to set every component that I want to vendor?
RB over 1 year ago
what’s the best way to distinguish between custom components and vendored components from cloudposse?
Erik Osterman (Cloud Posse) over 1 year ago
@burnzy is this working for you? https://github.com/cloudposse/terraform-yaml-stack-config/pull/95 @Jeremy G (Cloud Posse) is looking into something similar and we'll likely get this merged. Sorry it fell through the cracks.
PePe Amengual over 1 year ago
is it possible to vendor pull from a different repo?
PePe Amengual over 1 year ago
I got a brand new error with version 1.90.0 with my vendor file:
ATMOS_LOGS_LEVEL=Trace atmos vendor pull
ls -l
shell: /usr/bin/bash -e {0}
env:
ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos/config
ATMOS_BASE_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos
GITHUB_TOKEN: ***
Processing vendor config file '/home/runner/work/pepe-iac/pepe-iac/atmos/vendor.yaml'
template: source-0:1: function "secrets" not defined
Ryan over 1 year ago
Morning everyone. Is there a switch with atmos terraform plan to get it to -out to a readable file? I see it with straight terraform, but wasn't sure how the command would look as part of an atmos command like terraform plan -s mystack-stack
PePe Amengual over 1 year ago (edited)
Continuing my vendoring journey with Atmos, I would like to avoid having to do this:
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/**"
excluded_paths:
- "**/production/**"
- "**/qa/**"
- "**/development/**"
- "**/staging/**"
- "**/management/**"
PePe Amengual over 1 year ago
When do you think we can get https://github.com/cloudposse/github-action-atmos-affected-stacks updated to use atmos 1.92.0? Are you guys ok if I push a PR?
PePe Amengual over 1 year ago
Is there a way to always terraform apply all components of a stack?
PePe Amengual over 1 year ago (edited)
I have an idea: the Plan and Apply actions support grabbing the config from atmos.yaml at .integrations.github.gitops.*, and since .integrations is a free map, I was wondering if we could add an option to pass a scope to .integrations so that we can do something like:
github:
sandbox:
gitops:
role:
plan: sandboxrole
gitops:
role:
plan: generalrole
By adding a new input to the action, something like this:
atmos-integrations-scope:
description: The scope for the integrations config in the atmos.yaml
required: false
default: ".integrations.github.gitops"we can allow a more flexible and backwards compatible solution for people that needs to have integrations per stack
Markus over 1 year ago (edited)
I'm having some weird issues with the github-action-atmos-terraform-apply action, where it seemingly forgets which command it should run. I'm using OpenTofu and have set
components.terraform.command: tofu, which the ...-plan action picks up perfectly fine (and it also works locally), but ...-apply ignores that setting and tries to use terraform, which isn't installed. Using ATMOS_COMPONENTS_TERRAFORM_COMMAND works, which makes me believe it's an issue with how the config is read (although it's the only thing that is being ignored).
I'm using the GitHub actions exactly as described in the docs.
I've combed through both actions to try and figure out what the difference is, but I've got no clue (I did find a discrepancy in how the cache is loaded which led to the cache key not being found, as the path is different). For context, it's defined in atmos.yaml in the workspace root, not in /rootfs/usr/local/etc/atmos/atmos.yaml, which is the config-path (the folder, not the file) that's defined when running the action.
Is there anything I should look out for to make this work? Happy to post config files, they are pretty standard.
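(For reference, the setting in question as it normally appears in atmos.yaml, plus the environment-variable override that Markus found to work; a sketch, not his actual file.)
# atmos.yaml
components:
  terraform:
    command: "tofu"
# equivalent override in CI: ATMOS_COMPONENTS_TERRAFORM_COMMAND=tofu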
PePe Amengual over 1 year ago
and yet another interesting opportunity: since github-action-atmos-terraform-plan uses actions/checkout inside the action, if you vendor files but do not commit them, the actions/cache step wipes them out.
Dennis Bernardy over 1 year ago
Hey, I'm not sure if I'm getting this right. https://atmos.tools/core-concepts/components/terraform/backends/#terraform-backend-inheritance states that if I want to manage multiple accounts I need a separate bucket, DynamoDB table, and IAM role. So far so good, but if I run
atmos describe stacks it seems like the first stack is running fine, but as soon as it tries to fetch the second stack I get Error: Backend configuration changed. I use init_run_reconfigure: true in atmos.yaml.
My backend configuration looks like this:
terraform:
backend_type: s3
backend:
s3:
profile: "{{ .vars.account_profile }}"
encrypt: true
key: "terraform.tfstate"
bucket: "kn-terraform-backend-{{ .vars.account_id }}-{{ .vars.region }}"
dynamodb_table: "TerraformBackendLock"
The profile is switched based on the stack.
Ryan over 1 year ago
I'm trying to decide something similar with my backend. I have everything right now in GovCloud West, but was hoping to monorepo East as part of the same deployment repository. Unsure if anyone's done that here.
tretinha over 1 year ago
Is it expected to get "<no value>" when I describe a stack that has one of its values coming from a datasource like Vault, or does that mean that I'm not grabbing the value correctly? Thanks!
tretinha over 1 year ago
I tried it without specifying an actual secret too and got "map[]" in the value field of my variable. This is why I think I'm not correctly grabbing the variable, but I just wanted to confirm.
tretinha over 1 year ago
For context, I'm trying to grab the value like this:
- name: KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY
value: '{{ (datasource "vault-dev" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'
tretinha over 1 year ago
Ah, also wanted to mention that when I do something like:
gomplate -d vault=vault+http:/// -i '{{(datasource "vault" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'
I do receive a value. Also, I've set up a custom command just to see if I'm getting the VAULT_TOKEN correctly inside atmos and it seems that I am:
$ atmos validate-token
Executing command:
echo "VAULT_TOKEN is: $VAULT_TOKEN"
VAULT_TOKEN is: [[redacted]]
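(For context, the datasource itself is declared in atmos.yaml under the templating settings; a sketch of what that usually looks like. The URL is a placeholder, and gomplate reads VAULT_ADDR/VAULT_TOKEN from the environment.)
# atmos.yaml
templates:
  settings:
    enabled: true
    gomplate:
      enabled: true
      datasources:
        vault-dev:
          url: "vault+https://vault.example.com"   # placeholder address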
PePe Amengual over 1 year ago (edited)
is it possible to use the cloudposse/github-action-atmos-get-setting@v2 action to retrieve a setting that is not at the component level and is outside terraform/components? (like a var or anything else)
Kalman Speier over 1 year ago
hey folks, I would like to use secrets from 1Password. Is there a way to replace the terraform command with op run --env-file=.env -- tofu?
Kalman Speier over 1 year ago (edited)
it seems I can create a custom command in atmos.yaml, but overriding terraform apply is not working; the command just hangs.
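(One shape that might work here, sketched from the custom-commands schema rather than a tested solution: wrap the whole atmos terraform apply call inside op run instead of trying to override terraform itself. The command name is made up.)
# atmos.yaml
commands:
  - name: "op-apply"
    description: "terraform apply with secrets injected by 1Password"
    arguments:
      - name: component
        description: "Component to apply"
    flags:
      - name: stack
        shorthand: s
        description: "Stack name"
        required: true
    steps:
      - op run --env-file=.env -- atmos terraform apply {{ .Arguments.component }} -s {{ .Flags.stack }}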
PePe Amengual over 1 year ago
can you do this?
- path: "sandbox/service"
context: {}
skip_templates_processing: false
ignore_missing_template_values: false
skip_if_missing: true
to then import the file after it is created?
PePe Amengual over 1 year ago
do you guys allow GH action to create caches? https://github.com/cloudposse/github-action-atmos-terraform-plan/actions/runs/11432993177/job/31804302214
Run actions/cache@v4
with:
path: ./
key: atmos
enableCrossOsArchive: false
fail-on-cache-miss: false
lookup-only: false
save-always: false
env:
AWS_REGION: us-east-2
Cache not found for input keys: atmos
Kalman Speier over 1 year ago
is there any simple example regarding what's the best practice to create a kubernetes cluster (let's say GKE, but the provider doesn't really matter) and deploy some kubernetes manifests to the newly provisioned cluster?
Kalman Speier over 1 year ago
so what I'm after really is how to provide the kube host, token and cert from the cluster component to another component.
Kalman Speier over 1 year ago
what is the recommended way, the atmos.Component template function or something else maybe?
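(A sketch of what that could look like with atmos.Component; the component and output names below are made up and depend on what the cluster component actually exports.)
components:
  terraform:
    k8s-manifests:              # hypothetical consumer component
      vars:
        kube_host: '{{ (atmos.Component "gke-cluster" .stack).outputs.endpoint }}'
        kube_ca_certificate: '{{ (atmos.Component "gke-cluster" .stack).outputs.ca_certificate }}'
        kube_token: '{{ (atmos.Component "gke-cluster" .stack).outputs.access_token }}'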
MP over 1 year ago
Hi all - how do you usually handle replacing an older version of an atmos component with a new one? For example, I have an existing production EKS cluster defined in the "cluster" component in my "acme-production.yaml" stack file. I want to replace this with a new cluster component with different attributes.
I looked into using the same stack file and adding a new component called "cluster_1", but that breaks some of my reusable catalog files that have the component name set to "cluster". I know I can also just create a new stack file but that approach also seems not ideal.
Any advice is appreciated!
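(One pattern that may help, as a sketch with hypothetical names: keep the Terraform component folder as-is and add a second component instance that points back at it via metadata.component, optionally inheriting the existing catalog defaults if they are defined as a base.)
components:
  terraform:
    cluster-blue:                 # new instance, named freely
      metadata:
        component: cluster        # reuses components/terraform/cluster
        inherits:
          - cluster               # only if "cluster" is defined as a base/abstract component
      vars:
        ...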
Kalman Speier over 1 year ago
is it possible to reference a list with the atmos.Component function? It seems the generated tfvars will be a string instead of a list:
...
components:
terraform:
foo:
vars:
bar: '{{ (atmos.Component "baz" .stack).outputs.mylist }}'
...
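(One pattern that appears earlier in this thread for list outputs is piping through toRawJson so the rendered value is JSON rather than Go's default string form; whether Atmos turns that back into a real list in the generated tfvars may depend on the version, so treat this as an assumption to verify.)
components:
  terraform:
    foo:
      vars:
        bar: '{{ (atmos.Component "baz" .stack).outputs.mylist | toRawJson }}'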
Hamza Olol over 1 year ago
hey all,
is there a way to edit where terraform init -reconfigure looks for AWS credentials? I want to select an AWS profile name dynamically via my CLI flags and without hardcoding it into the terraform source code.
My current non-dynamic solution is setting a local env variable with the AWS profile name, and atmos picks that up fine and finds my credentials. But is there a way to configure the atmos.yaml so that terraform init -reconfigure looks for the AWS profile in a flag in my CLI command such as -s <stackname>, where the stackname matches my AWS profile name?
so far it doesn't look to have an option like that.
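(One way to make the profile follow the stack without touching the Terraform code, sketched with a placeholder path and profile name: set it through the env section of each stack's defaults, which Atmos exports to the terraform process.)
# stacks/orgs/acme/dev/_defaults.yaml   (hypothetical path; one per stack)
env:
  AWS_PROFILE: acme-dev                 # placeholder profile name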
Patrick McDonald over 1 year ago
When working in a monorepo for Atmos with various teams organized under stacks/orgs/acme/team1, team2, etc., will the affected stacks GitHub Action detect changes in other teams' stacks? At times, we only want to plan/apply changes to our team's stacks and not those of other teams.
Kalman Speier over 1 year ago
is it possible to get the path of a stack? for example:
...
components:
terraform:
kubeconfig:
vars:
filename: '{{ .stacks.base_path }}/{{ .stack }}/kubeconfig'
...
RB over 1 year ago
Anything on the roadmap for org wide stacksets natively supported as an upstream component? These are very useful because the stackset auto deploys infrastructure when a new account is created.
For example,
aws-team-roles component could be converted to a stackset and get deployed without manually needing to provision the component.
Ryan over 1 year ago
Just a question as a more junior atmos/terraformer. I am prepping to deploy some terraform stacks to GovCloud East. My backend sits in West right now. I took this morning to create an East "global" yaml that references my West backend and ARNs, but configures the East region. I tested a small dev S3 module and it deployed in the org, in East, in the account it's supposed to be in. I'm wondering, from a design perspective, whether these backends should get split off. It would seem easier if I can just leverage what I already have, but I'm not sure what the negatives of doing this are. Hope this question makes sense, just trying to strategize with regards to my backend and atmos.
Michal Tomaszek over 1 year ago
Hi, I'm trying to deploy Spacelift components following these instructions:
https://docs.cloudposse.com/layers/spacelift/?space=tenant-specific&admin-stack=managed
Everything seems to be ok until I try to deploy plat-gbl-spacelift stack. I get:
│ Error:
│ Could not find the component 'spacelift/spaces' in the stack 'plat-gbl-spacelift'.
│ Check that all the context variables are correctly defined in the stack manifests.
│ Are the component and stack names correct? Did you forget an import?
│
│
│ with module.spaces.data.utils_component_config.config[0],
│ on .terraform/modules/spaces/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
Any ideas?
Patrick McDonald over 1 year ago
Hello, our setup is each AWS account has its own state bucket and DynamoDB table. I’m using a role in our identity account that authenticates via GitHub OIDC and can assume roles in target accounts. My challenge is with the GitHub Action for "affected stacks", how can I configure Atmos to assume the correct role in each target account when it runs? Any guidance would be much appreciated!
Vitalii over 1 year ago
hello guys
I am playing with Atmos and GitHub for my project
currently, I am having a problem with posting comments to GitHub pull requests with
atmos terraform plan <stack> -s #####
I can't parse the output into something readable, relative to terraform -no-color
my question is:
can I run
atmos terraform plan <stack> -s ##### -no-color or any other argument equivalent to -no-color?
In the documentation, I didn't find anything about it
If you know any other way I can post comments to pull requests in a readable way via atmos, or some way to parse it, please help
appreciate any help
Michael Dizon over 1 year ago
hey guys! just a quick PR for the terraform-aws-config module: https://github.com/cloudposse/terraform-aws-config/pull/124