31 messages
Alberto Rojas4 months ago
Hello, not sure if this is expected behaviour (thread)
Carter Danko4 months ago
Question about specifying stores in the atmos.yaml, if anyone has any guidance:
Instead of specifying each store for each account individually, like
stores:
  sandbox/use1/devops:
    type: aws-ssm-parameter-store
    options:
      region: us-east-1
      read_role_arn: arn:aws:iam::123456789:role/sandbox-gbl-devops-terraform
      write_role_arn: arn:aws:iam::123456789:role/sandbox-gbl-devops-terraform
is there any option to dynamically create those store entries based on the vars (like tenant-environment-stage), dynamically build them in the atmos.yaml, and reference them in the downstream stacks? IE something like:
stores:
  {{tenant}}/{{stage}}:
    type: aws-ssm-parameter-store
    options:
      region: us-east-1
      read_role_arn: arn:aws:iam::{{account_id}}:role/{{tenant}}-gbl-{{stage}}-terraform
      write_role_arn: arn:aws:iam::{{account_id}}:role/{{tenant}}-gbl-{{stage}}-terraform
Doing that would allow us to just infer the right store when the hooks run, vs hardcoding all the different stores for each of our accounts.
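If templating in the atmos.yaml stores section isn't supported, one workaround is to generate the hardcoded entries out-of-band. A minimal sketch (hypothetical helper script; the account list, region, and role-naming convention are placeholders, not taken from a real config):

```python
import json

# Placeholder account inventory; in practice this could come from your
# account-map or an AWS Organizations listing.
accounts = [
    {"tenant": "sandbox", "stage": "devops", "account_id": "123456789"},
    {"tenant": "staging", "stage": "devops", "account_id": "987654321"},
]

def build_stores(accounts):
    """Build the 'stores' mapping that would otherwise be hand-written."""
    stores = {}
    for a in accounts:
        role = "arn:aws:iam::{account_id}:role/{tenant}-gbl-{stage}-terraform".format(**a)
        stores["{tenant}/use1/{stage}".format(**a)] = {
            "type": "aws-ssm-parameter-store",
            "options": {
                "region": "us-east-1",
                "read_role_arn": role,
                "write_role_arn": role,
            },
        }
    return stores

if __name__ == "__main__":
    # Emit the mapping; converting to YAML and splicing into atmos.yaml
    # is left to whatever tooling you already use.
    print(json.dumps(build_stores(accounts), indent=2))
```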
Jonathan Rose4 months ago
I have, what I perceive to be, an odd request. Could Atmos vendoring be used to "bootstrap" an Atmos project? If so, is there a way to leverage variables/templates so that, when running atmos vendor pull, it templates select files?
Alberto Rojas4 months ago
Hello, is the --chdir flag working as expected?
pwd: repository-name/components/terraform/dynamodb
atmos --chdir full/path/to/repository-name terraform generate varfile -s foo-stack foo-component -> works
atmos --chdir full/path/to/repository-name terraform plan -s foo-stack foo-component -> fails with: The atmos.yaml CLI config file was not found.
ls -ls full/path/to/repository-name -> atmos.yaml is here
Andrew Beveridge4 months ago
Hey folks, trying to learn atmos rapidly, do y'all have any training courses or videos or anything you recommend (including paid)?
I know I can walk myself through https://atmos.tools/introduction/ (and I'm working on that already) but I'd love to accelerate my learning with something guided or instructed, if it exists!
Erik Osterman (Cloud Posse)4 months ago
Esteban Angee4 months ago
Hi all,
I have the following configuration:
s3-bucket:
  vars:
    defaults:
      server_side_encryption_configuration:
        rule:
          - apply_server_side_encryption_by_default:
              kms_master_key_id: !terraform.state kms "wrapper.s3.key_arn // ""MOCK__kms_master_key_id"""
              sse_algorithm: 'aws:kms'
            bucket_key_enabled: true
Atmos is applying my config literally: it is applying the key id as-is, without executing the !terraform.state function. It only happens on my laptop, not on our automation server, which leads me to think it's a config issue; however, the atmos.yaml file is the same.
Would kindly appreciate any idea of what this could be.
PePe Amengual4 months ago
Does atmos auth support the use of terraform.output functions?
Erik Osterman (Cloud Posse)4 months ago
RFC: support atmos version constraints
https://github.com/cloudposse/atmos/pull/1759
This is a PRD that aims to implement something similar to Terraform's version constraint setting, to ensure that users are using the appropriate version of Atmos for a given repository. Feedback welcome.
Erik Osterman (Cloud Posse)4 months ago
RFC: Atmos profiles
https://github.com/cloudposse/atmos/pull/1752
This is a PRD aiming to implement the concept of Atmos profiles.
Have you ever had the problem where in CI you need a certain set of configuration values, but locally you use a different set? Developers might want settings that reduce some of the output in Atmos so it's less overwhelming, while core platform engineers might prefer a different set of configuration values.
Maybe you're using our new setting for Atmos auth. Different user groups will have different default identities. How can we configure that?
Atmos Profiles aims to solve this by allowing you to easily set a profile which overrides any settings.
Erik Osterman (Cloud Posse)4 months ago
Feel free to comment on any of the above pull requests and add your ideal requirements.
Alberto Rojas3 months ago(edited)
Hello, how do you decommission/destroy something using the catalog?
Example: I have two dynamodb definitions within a catalog. I applied them, so they exist; later on I want to get rid of one of the dynamodb tables.
How does atmos notice that? I ran atmos describe affected, but that does not catch the change.
I think I have a related question, but about terraform import: how is it handled?
Zack3 months ago
@Erik Osterman (Cloud Posse), didn't you post something about context7 and atmos? Trying to look at it now, but it got taken out by free Slack
Yangci Ou3 months ago
Is this a known bug / upcoming fix with atmos auth console, where it seems like the --identity flag is not passed in? I've confirmed that neither the $IDENTITY nor the $ATMOS_IDENTITY env vars are set.
• I think I've noticed that the docs webpage is updated before the releases are out (e.g. the auth console commands were on the docs page even when the latest release was 1.195.0), so wanted to make sure and report this.
Yangci Ou3 months ago
Have y'all considered something to "discover" or generate SSO configurations of the available accounts and permission set roles?
As in, in atmos.yaml, consumers/repositories have to manually write out the different identities. Nothing wrong with that, and I actually like that it works that way, since specific codebases should only use these identities/roles.
But when you log into the AWS Identity Center SSO start portal, there are multiple accounts, and multiple roles under each, to choose from. It'd be nice to have a way to automatically generate the configs for all the available roles.
• It could be something like atmos auth *?* --provider company-sso --generate, and it returns all the available identities to put into the atmos.yaml:
accountName/roleName:
  kind: aws/permission-set
  via:
    provider: company-sso
  principal:
    name: AdministratorAccess
    account:
      name: account-name/ID
To be honest, I don't think this is a pain point, just a small quality-of-life thing. Manual configuration is fine, as it's a one-time thing and it's explicit.
Rob parker3 months ago(edited)
Has anyone paired AWS Harmonix (open-source Backstage IDP that scaffolds IaC/files & commits via GitHub API) with Atmos?
Since Harmonix decouples CI/CD & supports any IaC in templates, seems ~80% plug-and-play:
• Devs scaffold Atmos components via Harmonix UI (choose atmos module; inputs are added to stack yaml)
• Harmonix natively triggers GitHub commits on actions via its API (scaffolding, new service instances or an app, environments; inputs just added to stacks yaml)
• Atmos handles execution via GitOps (no real changes needed on Atmos side)
Harmonix concepts align well: Environment -> Environment Provider (AWS account) -> Workload/Apps maps to Atmos stacks, components & context linkage.
PePe Amengual3 months ago
@Igor Rodionov https://github.com/cloudposse/github-action-atmos-terraform-apply/pull/89 this action does not run on self-hosted runners in kubernetes mode if the github.workspace context variable is used
Ana Nikolic3 months ago(edited)
Question: How to handle unapplied merged changes with atmos describe affected?
Scenario:
• Merged a PR to main that adds a new component to a stack
• Changes were never applied to infrastructure
• Opened a new PR to trigger plan again
• atmos describe affected returns [] because both branches have identical configs
How do you handle cases where config is in main but not yet applied?
Coming from Terragrunt, I'm used to a whitespace commit re-triggering planning on the affected file. What's the Atmos pattern for this?
Note: I am referencing https://github.com/cloudposse/github-action-atmos-affected-stacks/ as inspiration; however, I am not using it out of the box.
PePe Amengual3 months ago
what is the reasoning to add this step onto the apply action? https://github.com/cloudposse/github-action-atmos-terraform-apply/blob/replace-workspace-to-env-variable/action.yml#L346
PePe Amengual3 months ago
any reason why the plan and apply actions do not use the same path? I think this is preventing the plan action from finding available caches
Love Eklund3 months ago
I was looking at the atmos schema for the manifest files
https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
and noticed it supports an env property. What does that env property do? I assume I could define values there that would be available as env variables when calling the stack, like
env:
  foo: "bar"
and then access them like !env foo in my catalog files. But that does not seem to be the case.
I also cannot find any documentation on it on the website.
Does anyone here know what the env block does?
Alek3 months ago(edited)
Hello team! It is likely a super basic question.
Is vendoring strictly versioned components in a folder-based structure supported using the atmos/v1/ComponentVendorConfig?
I am performing a breaking-change upgrade to the aurora-postgres component, and to that end I would like to pull in a temporary, canary version of the component:
# ./stacks/catalog/aurora-postgres/staging.yaml
import:
  - catalog/aurora-postgres/defaults
components:
  terraform:
    aurora-postgres:
      backend:
        s3:
          workspace_key_prefix: aurora-postgres
      metadata:
        component: aurora-postgres/v1.540.4 # changed from "aurora-postgres"
        inherits:
          - aurora-postgres/defaults
      vars:
        enabled: true
        ...
# ./components/terraform/aurora-postgres
.
├── .terraform/
│   ├── environment
│   ├── modules/
│   ├── providers/
│   └── terraform.tfstate
├── backend.tf.json
├── cluster-regional.tf
├── component.yaml
├── context.tf
├── kms.tf
├── main.tf
├── outputs.tf
├── plat-euw1-staging-aurora-postgres.planfile
├── plat-euw1-staging-aurora-postgres.terraform.tfvars.json
├── providers.tf
├── remote-state.tf
├── ssm.tf
├── v1.540.4/
│   ├── .terraform/
│   ├── README.md
│   ├── backend.tf.json
│   ├── cluster-regional.tf
│   ├── component.yaml
│   ├── context.tf
│   ├── kms.tf
│   ├── main.tf
│   ├── outputs.tf
│   ├── plat-euw1-staging-aurora-postgres-aurora-postgres.terraform.tfvars.json
│   ├── plat-euw1-staging-aurora-postgres.planfile
│   ├── plat-euw1-staging-aurora-postgres.terraform.tfvars.json
│   ├── providers.tf
│   ├── remote-state.tf
│   ├── ssm.tf
│   ├── variables.tf
│   └── versions.tf
├── variables.tf
└── versions.tf
This doesn't seem to work; the vendored component is mis-referencing the modules:
> atmos tf plan aurora-postgres -s plat-euw1-staging
Initializing the backend...
Successfully configured the backend "s3"! OpenTofu will automatically
use this backend unless the backend configuration changes.
Initializing modules...
...
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map: no such file or directory
╵
╷
│ Error: Unreadable module directory
│
│ The directory could not be read for module "iam_roles" at providers.tf:16.
╵
Error
exit status 1
Am I on the right path here?
Love Eklund3 months ago
Hey,
I'm looking into this:
https://atmos.tools/functions/yaml/template#advanced-examples
Seems like if I have one file that defines a value using this pattern
# catalog/blob/defaults
components:
  terraform:
    blob:
      metadata:
        component: blob
      vars:
        foo: !template "{{ toJson .settings.bar }}" # this parses to a list
and another that imports this and tries to merge the value (or override it, in this case with list overwrite)
import:
  - catalog/blob/defaults
components:
  terraform:
    blob:
      metadata:
        component: blob
      vars:
        foo: []
I get an error like this:
Error: merge error: mergo merge failed: cannot override two slices with different type ([]interface {}, string)
Am I doing something wrong here?
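One plausible reading of the error (a sketch of the mechanics, not Atmos's actual merge code): the !template value is still an unrendered string at the time the stacks are deep-merged, so the merge sees a list trying to override a string. A toy merge reproduces the same kind of type conflict:

```python
# Toy deep-merge that, like mergo, refuses to override a value with one of a
# different type. The base 'foo' is still the raw template string when the
# merge runs, while the override is already a list.
def deep_merge(base, override):
    out = dict(base)
    for k, v in override.items():
        if k in out and isinstance(out[k], dict) and isinstance(v, dict):
            out[k] = deep_merge(out[k], v)
        elif k in out and type(out[k]) is not type(v):
            raise TypeError(
                f"cannot override {type(out[k]).__name__} with {type(v).__name__} for {k!r}"
            )
        else:
            out[k] = v
    return out

base = {"vars": {"foo": "{{ toJson .settings.bar }}"}}  # unrendered template -> str
override = {"vars": {"foo": []}}                        # list override

try:
    deep_merge(base, override)
except TypeError as e:
    print("merge failed:", e)
```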
toka3 months ago
Hey, looking for advice on what approach to take.
During component plan/apply I face this error:
Error: yaml: line 23: did not find expected comment or line break
However, atmos validate component and atmos describe component work just fine. Seems like some quirk I cannot find, for two days now lol.
I searched my yamls for any kind of special characters, non-ascii characters, indentation issues, etc.
What kind of approach could you recommend to pinpoint the issue?
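One way to narrow this down is a brute-force character scan of the stack files, since tabs and invisible characters are common causes of that parser error. A stdlib-only sketch (it won't catch every YAML quirk, but these are the usual suspects):

```python
import sys

def suspicious_lines(text):
    """Return (line_no, column, reason) for characters YAML parsers often choke on."""
    findings = []
    for i, line in enumerate(text.splitlines(), start=1):
        for j, ch in enumerate(line, start=1):
            if ch == "\t":
                findings.append((i, j, "tab character"))
            elif not ch.isprintable():
                findings.append((i, j, f"non-printable U+{ord(ch):04X}"))
            elif ord(ch) > 127:
                findings.append((i, j, f"non-ascii {ch!r}"))
    return findings

if __name__ == "__main__":
    # Usage: python scan.py stacks/**/*.yaml
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            for line_no, col, reason in suspicious_lines(f.read()):
                print(f"{path}:{line_no}:{col}: {reason}")
```

If that turns up nothing, bisecting the file (commenting out half the document at a time until the error disappears) is the other reliable way to localize line 23's real culprit, since the reported line number can be off when block scalars are involved.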
Thomas Spear3 months ago(edited)
Hi team, looking for some guidance on how to minimize warnings from tf about values for undefined variables.
When building out our stacks, we have some variables that are global in scope because they are used by multiple components, but they aren't used by every component in a given stack. When running atmos terraform deploy, these globally defined variables' values cause terraform to throw the "Value for undefined variable" warning on the components that don't use them. We want to silence these warnings, and terraform says to define those vars as TF_VAR_* variables:
"To silence this warning, the recommended approach is to use environment variables prefixed with TF_VAR_. Terraform treats these environment variables differently from variables specified in .tfvars files or on the command line. Environment variables are presumed to be set with the intent of ignoring them when not relevant to a specific configuration, thus avoiding the 'undeclared variable' warning."
Atmos, on the other hand, wants us to define the variables via !env VAR_NAME as input values in the stack yaml (component_name.vars.var_name) instead of using TF_VAR_ variables. I don't know if it's still the case, because I've avoided using TF_VAR_-style variables, but atmos at one point even threw warnings about using them.
We currently work around this by adding variable blocks that go unused to the components that don't need those variables, and it feels unclean to me, so I thought I'd ask: is there anything we're doing incorrectly by doing it the way we are, is there a better way that feels cleaner, or can something be done in atmos to improve this?
toka3 months ago
Hello gents,
I'm working on deploying a GKE cluster via Atmos GH actions and bootstrapping Argo with it within a single GH PR.
So far I have three atmos components: cluster, argocd deployment, and initial k8s secrets deployment.
They are all dependent on each other.
It's easy to handle during a local Atmos apply by executing the components in the right order (cluster -> argocd bootstrap -> ...), but what about Atmos GH actions? The produced job matrix executes them independently, so the context of depends_on is not available.
How do you handle that? Atmos Pro maybe?
Mike3 months ago
I'm reviewing the atmos stack concept and inheritance.
In the sample layout, the idea is that _default.yaml, say under the acme OU, will contain default configs such as the OU name: acme. You can continue defining variables and grouping them under a folder structure which reflects some hierarchy. For the stacks that will actually be deployed, you'll need a stack yaml file that is "made" of the assorted mixins (config snippets), defaults yaml, etc. So essentially you'll have a ton of yaml files with small config snippets, and the actual stack you use to deploy a service (say, a generic webserver) will likely contain a ton of imports?
Mike3 months ago
1. Is that more or less the concept, that you can scale config management via folder structuring?
2. You have to build out all of the reusable config snippet yamls?
3. Then you constitute a final manifest via imports for a stack that represents a staging deployment for an OU (finance) in a region (us-east-2)?
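If I understand the docs right, that is roughly the model: imports pull in small snippets, which are deep-merged in order into the final stack manifest. A toy illustration of that flow (a sketch only, not Atmos internals; the snippet names and values are made up):

```python
# Toy version of "final manifest via imports": each imported snippet is
# deep-merged over the previous ones, later imports winning on conflicts.
def deep_merge(base, override):
    out = dict(base)
    for k, v in override.items():
        if isinstance(out.get(k), dict) and isinstance(v, dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out

org_defaults = {"vars": {"namespace": "acme", "region": "us-east-2"}}
ou_defaults = {"vars": {"tenant": "finance"}}
stage_mixin = {"vars": {"stage": "staging"}}

# The "final" stack manifest is just the snippets merged in import order.
manifest = {}
for snippet in (org_defaults, ou_defaults, stage_mixin):
    manifest = deep_merge(manifest, snippet)

print(manifest["vars"])
```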
Pedro3 months ago(edited)
Hi all, hoping someone can help me out with a doubt. I want to provide a variable/input to Component A that is part of an output of Component B. Unfortunately, its type is only known after apply, so the plan fails with:
The "for_each" value includes keys or set values from resource attributes that cannot be determined until apply, and so OpenTofu cannot determine what will identify the instances of this resource.
Now, obviously I could resolve/handle this in the root module with a try(), but I would prefer to avoid having to deal with that module and instead learn the "correct" approach to handle this with Atmos. Essentially, I want to provide a fallback/default to be used during plan, similar to what Terragrunt does with their mock_outputs (forgive the necessary comparison with Terragrunt).
Right now I have something like:
role: !terraform.state gcp/service-accounts .custom_roles.subscriber-{{ .settings.context.environment }}.id
I've tried a mix of:
role: "{{ coalesce (terraform.State gcp/service-accounts .custom_roles.publisher-{{ .settings.context.environment }}.id) (dict) }}"
# Errors with: "template: templates-all-atmos-sections:638: function "terraform" not defined"
role: "{{ coalesce (!terraform.State gcp/service-accounts .custom_roles.publisher-{{ .settings.context.environment }}.id) (dict) }}"
# Errors with: template: templates-all-atmos-sections:638: unexpected "!" in parenthesized pipeline (sort of expected this one)
Others...
Now, I'm sure this is a solved problem, but I still haven't found any blessed and documented approach (at least none clear to me), so I would appreciate a pointer in the right direction.
Pedro3 months ago
Hey all, me again, this time with a quick question regarding workflows. I'm attempting to have a workflow that triggers a multi-component apply using --query, but that's failing with the following error:
Error: component == : the component argument can't be used with the multi-component (bulk operations) flags --affected, --all, --query and --components.
The command works just fine if I call it outside a workflow, so I'm wondering if this is expected behaviour or just the way workflows work.
Mihai I3 months ago
Hey, I've been trying to set up Microsoft Azure OIDC auth according to this tutorial: https://atmos.tools/cli/commands/auth/tutorials/azure-authentication/
I'm encountering this error, any advice?
Initialize Providers
Error: failed to initialize providers: invalid provider config: provider=azure-oidc: invalid provider kind: unsupported provider kind: azure/oidc
Error
Error: failed to initialize auth manager
failed to initialize providers: invalid provider config: provider=azure-oidc: invalid provider kind: unsupported provider kind: azure/oidc
Just used this demo setup:
auth:
  providers:
    azure-oidc:
      kind: azure/oidc
      tenant_id: .
      client_id: .
      subscription_id: .
  identities:
    azure-ci:
      kind: azure/subscription
      via:
        provider: azure-oidc
      principal:
        subscription_id: ...
On atmos 1.200.0