PePe Amengual · over 1 year ago
with the atmos github actions I can plan/apply no problem, but now I want to destroy and I do not have a count = module.this.enabled ? 1 : 0. How do you guys destroy? I'd like to be able to find the deleted yaml from the stack and destroy that one if possible.
Kalman Speier · over 1 year ago
hey folks, for some reason atmos generates the wrong terraform workspace name. i have a component named nats and a stack named dev, and instead of dev-nats the workspace is simply dev. what could cause that?
Kalman Speier · over 1 year ago
possible to organize a few smaller components into one catalog?
Kalman Speier · over 1 year ago
whats the best way to share vars between some components but not all of them?
if i place them in the stack yaml vars section, i get warnings from the components which are not using them:
Warning: Value for undeclared variable
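One pattern that avoids the warning is to scope the shared values to just the components that declare them, for example with a YAML anchor inside the stack manifest instead of the global vars section (a sketch; component and variable names here are hypothetical):

```yaml
components:
  terraform:
    app:
      vars: &shared_vars        # anchor on the first component that needs them
        db_host: db.example.internal
        db_port: 5432
    worker:
      vars:
        <<: *shared_vars        # merge the shared vars only where declared
    cron:
      vars: {}                  # never receives them, so no undeclared-variable warning
```

Components that don't merge the anchor never receive the variables, so Terraform has nothing undeclared to warn about.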
Dennis Bernardy · over 1 year ago
Hey, when using helmfile with atmos it requires having helm_aws_profile_pattern and cluster_name_pattern set. Is there a way to use a name_template like in the stack configuration?
Hao Wang · over 1 year ago
Atmos may need a RAG application for QA 🙂
Hao Wang · over 1 year ago
I’m looking into RAG recently, and it is not hard to write one
Ryan · over 1 year ago
Good morning gents, just checking in here - is this the module usually used to automate remote backend stand-up - https://github.com/cloudposse/terraform-aws-tfstate-backend
Derrick Hammer · over 1 year ago
Hello, I just found this project and am trying to plan how I'm going to design things. I'd like input on how atmos expects git repos to be structured with respect to monorepo vs multirepo. I intend to create modules in one repo and create environments in another. Also curious about submitting to the terraform/opentofu registry; I think they require one repo per module or something. Would appreciate input from others with experience!
Kudos.
PePe Amengual · over 1 year ago
Hello,
has atmos describe affected always output the component that changed, or is that a somewhat recent change?
Andriy Knysh (Cloud Posse) · over 1 year ago
@PePe Amengual I believe you were asking for this functionality (metadata.enabled) ^
NotWillFarrell · over 1 year ago
Hi, I've been reading up on your documentation on Atmos and the reference architecture for the past 1 or 2 weeks and some things are not clicking in my head. I hope you can give me some pointers in the right direction because it is not easy to find this on the Internet.
For AWS, I see references in the mixins to regions, tenants and stages, but the sample info only gives me a name, and I'm not seeing how it relates to, let's say, an Account ID. This for example:
vars:
  stage: sandbox
  # Other defaults for the `sandbox` stage/account
Am I overlooking some documentation part where I can see what can be a default for OUs/Accounts? There should be some relationship to AWS terminology right?
I hope you can give me a hint. Thanks in advance!
tretinha · over 1 year ago
does anybody have thoughts on how to manage ECS image tags? we are used to creating different image tags whenever something is ready to be tested or to go to production on each project's pipeline. So let's say I have an application that just generated a new docker tag corresponding to some new changes, and that this tag is now saved in ECR, how can I reflect this image tag in my atmos/infrastructure repository? at first I thought about the app opening a PR to atmos, changing the line that corresponds to the image, but I'm unsure if this is the best way. How do you typically deal with this?
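If opening a PR against the infrastructure repo feels heavy, one alternative (a sketch; the parameter path, datasource name, and component name are hypothetical) is to have the app pipeline write the new tag to SSM Parameter Store and let the stack resolve it at plan time through a gomplate datasource, with templating enabled in atmos.yaml:

```yaml
settings:
  templates:
    settings:
      gomplate:
        datasources:
          image_tag:
            url: "aws+smp:///myapp/ecs/image-tag"   # hypothetical SSM parameter written by the app pipeline

components:
  terraform:
    ecs-service:                                    # hypothetical component
      vars:
        image_tag: '{{ (datasource "image_tag").Value }}'
```

The trade-off is that plans are no longer fully reproducible from git alone, which is why some teams still prefer the PR-based approach for production.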
Stephan Helas · over 1 year ago
Hi,
i've found that the validate output differs between 1.98 and 1.99 (and ever since). I don't know if i am doing anything wrong or if it's a bug:
old behavior (1.98.0)
❯ atmos validate component wms-base -s wms-xe02-sandbox
component 'wms-base' in stack 'wms-xe02-sandbox' validated successfully
new behavior (1.99.0)
❯ atmos validate component wms-base -s wms-xe02-sandbox
'atmos' supports native ' wms-base' command with all the options, arguments and flags.
In addition, 'component' and 'stack' are required in order to generate variables for the component in the stack.
atmos wms-base <component> -s <stack> [options]
atmos wms-base <component> --stack <stack> [options]
component 'wms-base' in stack 'wms-xe02-sandbox' validated successfully
RB · over 1 year ago
Is there already prior art using github actions and atmos to use a readonly role for the plan and an admin role for the apply?
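A minimal sketch of that split (role ARNs, job names, and region are hypothetical; the atmos invocations are elided to comments): use GitHub OIDC and assume a different role in each job with aws-actions/configure-aws-credentials:

```yaml
jobs:
  plan:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111111111111:role/gha-readonly  # hypothetical read-only role
          aws-region: us-east-1
      # ... run `atmos terraform plan <component> -s <stack>` here
  apply:
    runs-on: ubuntu-latest
    needs: plan
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111111111111:role/gha-admin     # hypothetical admin role
          aws-region: us-east-1
      # ... run `atmos terraform apply <component> -s <stack>` here
```

Gating the apply job behind an environment approval keeps the admin role out of reach of unreviewed PRs.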
Erik Osterman (Cloud Posse) · over 1 year ago
toka · over 1 year ago
I need to share my local submodules/child modules with every component that I will define in atmos.
I'd like to move to atmos, but I have a codebase that I need to migrate with many modules. At this point I cannot afford to rewrite each small module into a component, but I'd like to move my root modules into atmos components as a starting point. I'd like to build my components out of the existing modules.
Any advice on how to approach this?
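Since an atmos component is just a Terraform root module, a thin wrapper component that only instantiates the existing child modules can be a starting point (a sketch; the module paths, variables, and outputs are hypothetical):

```hcl
# components/terraform/network/main.tf (sketch)
# A thin root module that reuses the existing child modules as-is.
module "vpc" {
  source     = "../../../modules/vpc"      # hypothetical relative path to the shared module
  cidr_block = var.cidr_block
}

module "subnets" {
  source = "../../../modules/subnets"      # hypothetical
  vpc_id = module.vpc.vpc_id
}

variable "cidr_block" {
  type = string
}

output "vpc_id" {
  value = module.vpc.vpc_id
}
```

The child modules stay where they are; only the root modules move under components/terraform and get their vars from stack manifests.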
RB · over 1 year ago
Hi all, if you folks have a second, i have a couple questions on the component migration. Very excited for the component testing 🥳
https://github.com/cloudposse/terraform-aws-components/issues/1177#issuecomment-2474148290
Kalman Speier · over 1 year ago (edited)
hey folks, is there a way to generate kubernetes provider blocks for different cloud providers?
scenario: stack-1 is ecs, stack-2 is gke, and i'd like to use the same components for kubernetes resources.
i can output host, cert and token from the cluster components, however i'd like to configure the kubernetes provider with an oauth2 access token, using the google_client_config data source for example.
Miguel Zablah · over 1 year ago
Hey guys I was looking for a way to read secrets from 1password and saw this PR:
https://github.com/cloudposse/atmos/pull/762
What are the plans for this?
This will actually solve a lot of issues and simplify my work on some projects hehe
shirkevich · over 1 year ago
Hey guys, thanks for the awesome project! Trying to recreate a multi-workspace project in terraform.io with atmos.
My use case is provisioning pretty much the same infra for multiple tenants. Need your advice on how to properly organise variables.
Each component in a tenant shares a list of variables like project_id and region, which I put in a mixin with the same name as the tenant. Then for each component I'm passing project_number with atmos.Component (tenants are named as pokemons):
deploy/bulbasaur-stg.yaml
vars:
  tenant: bulbasaur
  stage: stg
import:
  - path: "deploy/_defaults.yaml.tmpl"
    context:
      stack: bulbasaur
components:
  terraform:
    tenant:
      vars:
        foo: bar
    db:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}' <-- this I also want to DRY somehow
    cloudrun:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}'
    jobs:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}'
All good for now. Then for the cloudrun and jobs components I need the same list of ENV variables that are used to provision the docker image. The problem here is that I want to use the previously defined project_id and pokemon_name in templating those envs...
I tried to create a mixin and thought that it could be templated like this:
mixins/tenants/bulbasaur-stg.yaml
vars:
  region: europe-west3
  env_vars:
    TENANT: '{{ .vars.tenant }}'
    DATABASE_USER: 'user@{{ .vars.project_id }}.iam'
    BIGQUERY_PROJECTID: '{{ .vars.project_id }}'
  ...
deploy/_defaults.yaml.tmpl
import:
  - mixins/tenants/{{ .stack }}
terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: "tf-state"
It is not working, giving me <no-value> for TENANT. Clearly I'm doing it wrong. Should I create a component that just outputs env_vars instead and pass it to cloudrun and jobs?
P.S. I have name_pattern: "{tenant}-{stage}" in atmos.yaml
RB · over 1 year ago
Opentofu is considering deprecating workspaces:
https://github.com/opentofu/opentofu/issues/2160
Is it possible to use atmos without workspaces, and instead use unique keys per stack instead of unique workspaces per stack?
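Atmos generates backend configuration per component, so one direction (a sketch, not verified against a workspace-free setup; bucket, key, and component names are hypothetical) would be to give each stack/component pair a unique state key in the stack manifest rather than relying on workspace separation:

```yaml
components:
  terraform:
    vpc:
      backend:
        s3:
          bucket: my-tfstate-bucket                  # hypothetical
          key: plat-ue2-dev/vpc/terraform.tfstate    # unique key per stack/component
```

Whether atmos would still try to select a workspace on top of that is the open question the issue raises.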
Junk · over 1 year ago
For root modules that do not use the terraform-null-label module (e.g., modules from terraform-aws-modules instead of CloudPosse), I find it challenging to maintain a consistent naming convention for resources. Specifically, I use a mix of CloudPosse-provided modules and other third-party modules as needed, but ensuring uniformity in the naming and tagging of provisioned resources (not just the stack’s name_pattern, but the actual resource names) is difficult.
I’ve tried using the Component Vendor’s Mixin feature to blend context.tf, but it proved to be inconvenient.
Does anyone have ideas or alternative methods for achieving a uniform naming and tagging convention across all resources? Any suggestions would be greatly appreciated!
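One option is to instantiate terraform-null-label directly inside each non-CloudPosse root module and feed its outputs into the third-party modules (a sketch; the s3 example and the "assets" name are hypothetical):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = var.namespace
  stage     = var.stage
  name      = "assets"   # hypothetical component name
}

# Feed the consistent name and tags into a third-party module
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 4.0"

  bucket = module.label.id    # e.g. acme-dev-assets, depending on label_order
  tags   = module.label.tags
}
```

This keeps the null-label naming/tagging convention without vendoring context.tf into every module.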
John Seekins · over 1 year ago (edited)
👋 We're experimenting with Atmos and seeing a strange behavior with templating:
$ atmos describe stacks --process-templates | grep Component
vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
It seems like templates just...aren't being processed and I'm not really sure how to debug this...
The docs imply this should "just work". I'm clearly missing something obvious, and would love some help.
(Atmos 1.107.1 on darwin/arm64)
Karel · over 1 year ago
Hi, I'm assessing Atmos in a use case that needs to provision a set of infrastructure components that are deployed separately (each its own TF root module) in different AWS accounts. I also have to use Atlantis to apply changes. To simplify: I have three components C1, C2 and C3, where C2 and C3 depend on the result of applying C1. I want to orchestrate the plan/apply flow of C1, C2 and C3 in that order. With Atmos, do I need to define a Workflow, and how would it be used from Atlantis?
Samuel Than · about 1 year ago (edited)
Hi, i'm at the stage of initialising the tfstate-backend in a brownfield environment context.
I was successful in creating the s3 and dynamodb part and migrating all the workspaces to s3.
However, i hit a wall when it comes to the process of enabling the access_roles_enabled flag.
The following is the error i received:
Error:
│ Could not find the component 'account-map' in the stack 'cs-core-gbl-root'.
│ Check that all the context variables are correctly defined in the stack manifests.
│ Are the component and stack names correct? Did you forget an import?
My namespace is cs, however the stack name -core-gbl-root is foreign to me as i've not declared any of that. Not sure how that came about.
This is the stack yaml i'm using to deploy the tfstate-backend. I used the output of the iam role created by the access_role flag and passed it into the role_arn prior to turning on the access_roles_enabled flag.
Is there some sort of mapping i have misconfigured?
tfstate-backend:
  backend:
    s3:
      role_arn: null
  vars:
    access_roles_enabled: true # Set to false initially, and only used for cold start.
    enable_server_side_encryption: true
    enabled: true
    force_destroy: false
    name: terraformstate
    prevent_unencrypted_uploads: true
    label_order: ["namespace", "tenant", "environment", "stage", "name"]
    access_roles:
      default: &tfstate-access-template
        write_enabled: true
        allowed_roles: {}
        denied_roles: {}
        allowed_permission_sets: {}
        denied_permission_sets: {}
        allowed_principal_arns: [
          "arn:aws:iam::XXXXXXXX:role/XXXXXXXX"
        ]
        denied_principal_arns: []
    tags:
      component: "tfstate-backend"
      expense-class: "storage"
My folder structure is stacks/orgs/cs/xxx/dev/ap-southeast-2/tfstate-backend.yaml and my stack name is cs-xxx-apse2-dev.
Erik Osterman (Cloud Posse) · about 1 year ago
Cross posting here for visibility
https://sweetops.slack.com/archives/CB6GHNLG0/p1732048971984679
Erik Osterman (Cloud Posse) · about 1 year ago
Anyone in opposition to changing the default behavior of running “atmos” to displaying help, rather than entering the UI mode? Then moving the UI to “atmos ui”?
Raymond Schippers · about 1 year ago (edited)
Apologies for the stupid question, but I am using the aws-teams, aws-teams-roles, and aws-saml modules with the REF architecture. There are a few roles like gitops that won't be accessible via SSO but only via assume role. However, when trying to apply the IAM policies for these roles the following IAM condition is generated:
+ Principal = {
    + Federated = ""
  }
Which results in an AWS API error as it's not a valid ARN or domain. Has anyone worked around this?
shirkevich · about 1 year ago
Guys, we've decided to adopt atmos, but we're on Google Cloud.
Here are several PRs to create a usable workflow in GitHub. GCS is used to store/retrieve the plan and firestore for metadata.
https://github.com/cloudposse/github-action-terraform-plan-storage/pull/35
https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/93
https://github.com/cloudposse/github-action-atmos-terraform-apply/pull/64
https://github.com/cloudposse/github-action-atmos-affected-stacks/pull/55
It is also using GitHub OIDC with a workload identity provider on the google side. Those PRs are tested with one of our tenants that we've migrated already.
If you find it reasonable to integrate this, please help me with the naming of the fields for metadata.
Miguel Zablah · about 1 year ago
Hey guys, I have a question: I need a way to get some creds from 1password and I'd like to use the cli for this. Is there a way to do this with atmos before a component run? Like a pre-hook where I can set the ENV? I know we have datasources, but can those run cli cmds? Or is there a better way to do this?
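Until a built-in hook exists, one workaround (a sketch; the env file, component, and stack names are hypothetical) is to wrap the atmos invocation with the 1Password CLI so the secrets land in the environment before the component runs:

```shell
# .env maps variables to 1Password secret references,
# e.g. MY_TOKEN="op://vault/item/field"
op run --env-file=.env -- atmos terraform plan my-component -s my-stack
```

`op run` resolves the references at launch and scrubs them from the child process output, so nothing is written to disk.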
Samuel Than · about 1 year ago (edited)
After some mucking around + head banging,
I think I've come to understand that if I did not deploy the account-map component using the {namespace}-core-gbl-root naming convention as the stack name, a lot of my deployments of components that rely on the account-map component behind the scenes, e.g. s3-buckets, will fail, as they try to look up the account-map component based on the default values (ref here https://github.com/cloudposse-terraform-components/aws-s3-bucket/blob/e19d3e7adb38805553246e740627fbef6c8b52a6/src/variables.tf#L6).
Coming from a brownfield implementation, if the account-map component is deployed outside of the {namespace}-core-gbl-root naming convention (say, for example, I deployed the account-map component to stack name cs-abc-apse2-prod), I should then need to declare variables like the following to override the default values, in this example for the aws-s3-bucket component?
"account_map_environment_name"
"account_map_stage_name"
"account_map_tenant_name"
Unless I start writing my own components instead of using cloudposse's https://github.com/cloudposse-terraform-components, am I right to assume that, to reduce any headache, it is better to deploy the account-map component under the {namespace}-core-gbl-root stack name?
deniz · about 1 year ago
Hi All!
Atmos newbie here. I'm working on adopting the quickstart-advanced tutorial for my organization's AWS structure, and I have some questions about providers and backend configuration during the initial bootstrapping phase. I would appreciate any guidance as it is quite a lot to take in 🙂
1. Provider Configuration: During the initial bootstrapping, what's the recommended approach for declaring provider configurations to maximize reusability and minimize duplication? I noticed the awsutils component needs providers, but I'm unsure about the best way to structure this.
2. State Management: I want to maintain separate state buckets per stage (account). What's the recommended approach for creating and managing the tfstate-backend component in this scenario? (which account/region)
3. Backend Configuration: How should the backend configuration be structured across different accounts and stages? Is it possible to create the backend configuration with parameters instead of hardcoded bucket names/dynamo table names?
4. Account Configuration: I saw the account component which is used to define the OUs etc. I believe this needs to be deployed to the management account?
Current Atmos Repo structure:
.
├── README.md
├── atmos.yaml
├── components
│ └── terraform
│ ├── account
│ │ ├── context.tf
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ ├── variables.tf
│ │ └── versions.tf
│ ├── account-map
│ │ ├── context.tf
│ │ ├── dynamic-roles.tf
│ │ ├── main.tf
│ │ ├── modules
│ │ │ ├── iam-roles
│ │ │ │ ├── README.md
│ │ │ │ ├── context.tf
│ │ │ │ ├── main.tf
│ │ │ │ ├── outputs.tf
│ │ │ │ ├── variables.tf
│ │ │ │ └── versions.tf
│ │ │ ├── roles-to-principals
│ │ │ │ ├── README.md
│ │ │ │ ├── context.tf
│ │ │ │ ├── main.tf
│ │ │ │ ├── outputs.tf
│ │ │ │ └── variables.tf
│ │ │ └── team-assume-role-policy
│ │ │ ├── README.md
│ │ │ ├── context.tf
│ │ │ ├── github-assume-role-policy.mixin.tf
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
│ │ ├── outputs.tf
│ │ ├── remote-state.tf
│ │ ├── variables.tf
│ │ └── versions.tf
│ └── tfstate-backend
│ ├── context.tf
│ ├── iam.tf
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
│ └── versions.tf
├── stacks
│ ├── catalog
│ │ └── tfstate-backend.yaml
│ ├── mixins
│ │ ├── region
│ │ │ ├── eu-west-1.yaml
│ │ │ └── global-region.yaml
│ │ ├── stage
│ │ │ └── root.yaml
│ │ ├── tenant
│ │ │ └── core.yaml
│ │ └── tfstate-backend.yaml
│ └── orgs
│ └── dunder-mifflin
│ ├── _defaults.yaml
│ └── core
│ ├── _defaults.yaml
│ └── root
│ ├── _defaults.yaml
│ └── eu-west-1.yaml
└── vendor.yaml
19 directories, 48 files
Desired AWS Organization Structure
[ACC] Root/Management Account (account name: dunder-mifflin-root)
│
├── [OU] Security
│ ├── [ACC] Log-Archive
│ └── [ACC] Security-Tooling
│
├── [OU] Core
│ ├── [ACC] Monitoring
│ └── [ACC] Shared-Services
│
└── [OU] Workloads
├── [OU] Production
│ └── [ACC] Prod
│
└── [OU] Non-Production
    └── [ACC] Non-Prod
Dennis Bernardy · about 1 year ago
Hey, I'm currently running into problems with the gomplate datasources. I defined this:
settings:
  templates:
    settings:
      env:
        AWS_PROFILE: "{{ .vars.aws_profile }}"
      gomplate:
        timeout: 5
        datasources:
          certificate:
            url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url"
and enabled templating in atmos.yaml. (It's kind of confusing that it's just "templates.settings" in atmos.yaml, but "settings.templates.settings" in the stack configuration, while atmos.yaml also has a "settings" key for merge behaviour.)
But my problem is that the templating does not get evaluated here:
template: all-atmos-sections:151:30: executing "all-atmos-sections" at <datasource "certificate">: error calling datasource: Couldn't read datasource 'certificate': Error reading aws+smp from AWS using GetParameter with input {
  Name: "/ai/infra/acm/certificate/{{ .vars.account }}.url",
  WithDecryption: true
}: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
SharedCredsLoad: failed to load profile, {{ .vars.aws_profile }}.
EC2RoleRequestError: no EC2 instance role found
caused by: RequestError: send request failed
caused by: Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": read tcp 127.0.0.1:56391->127.0.0.1:10011: read: connection reset by peer
I tried adding the env in the templates settings in atmos.yaml like specified in the documentation, but there it is simply ignored.
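Since the error shows the literal, un-rendered {{ .vars.aws_profile }} and {{ .vars.account }} reaching the AWS SDK, a blunt interim workaround (a sketch; the profile name and hardcoded path are hypothetical) is to take templating out of the datasource entirely, per stack:

```yaml
settings:
  templates:
    settings:
      gomplate:
        datasources:
          certificate:
            url: "aws+smp:///ai/infra/acm/certificate/sandbox.url"   # hardcoded per stack
```

with AWS_PROFILE exported in the shell before running atmos, so the SDK's normal credential chain applies instead of the templated env.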
Bob · about 1 year ago
Hello!
Wondering if there's a way to dynamically generate stack names based on directory and file name? Been trying templating, but been lost. For context, I am just following the simple tutorial, and have the following structure:
├── atmos.yaml
├── components
│   └── terraform
│       └── weather
│           ├── main.tf
│           ├── outputs.tf
│           ├── variables.tf
│           └── versions.tf
└── stacks
    ├── catalog
    │   ├── station.yaml
    ├── deploy
    │   └── core
    │       └── dev-eus2.yaml
I have the following name_template in atmos.yaml:
name_template: '{{ .vars.namespace }}-{{ .vars.stage }}-{{ .vars.region }}'
And dev-eus2.yaml contains:
vars:
  stage: dev
  namespace: core
  region: eus2
import:
  - catalog/station
components:
  terraform:
    station:
      vars:
        location: Stockholm
        lang: se
For dev-eus2.yaml, I just want to be able to have the stack name "core-dev-eus2" auto-generated somehow without having to define the vars for each yaml. I started looking into "templating/sprig" and imports. I can do imports, but that looked like so many duplicated variables that could result in human error, so I want to figure out a way to do it dynamically somehow. Thanks!
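One way to cut the duplication (a sketch using atmos import context; the mixin path is hypothetical) is a single template that every deploy file imports with its three values, so the vars block is defined in one place:

```yaml
# stacks/deploy/core/dev-eus2.yaml
import:
  - path: "mixins/stack-vars.yaml.tmpl"   # hypothetical mixin
    context:
      namespace: core
      stage: dev
      region: eus2
  - catalog/station
```

```yaml
# stacks/mixins/stack-vars.yaml.tmpl
vars:
  namespace: {{ .namespace }}
  stage: {{ .stage }}
  region: {{ .region }}
```

It still names the three values per file, but only once and in one shape, which shrinks the copy/paste surface compared to repeating the whole vars section.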