Michael (about 1 year ago)
Is there any documentation for writing Atmos integrations or is this not a recommended practice? https://atmos.tools/terms/integration/
cricketsc (about 1 year ago, edited)
Is there a best practice users have figured out, or one recommended by the authors, for using the atmos.Component template function across different (AWS) accounts? It seems a bit tricky with role assumption. For example, if a user or an automation runner assumes role A in account A but has an atmos.Component template function that references state in stack B, then it seems that stack B's state bucket would need to allow cross-account access from role A. Is there another way? It seems like you could end up with a lot of access config following this strategy.
kofi (about 1 year ago)
Hello!
I am new to atmos and I am trying to implement it for our new infrastructure (I hope I am asking in the right place; sorry if not).
I am currently blocked trying to get atmos working with the http backend (interfacing with GitLab). Unfortunately the http backend doesn't support workspaces (terraform doc), and therefore atmos crashes when trying to select the workspace.
In your documentation (link) the http backend is listed as supported. Is there a way to get it working? Maybe I can use environment variables to force a workspace, like here? Am I on the right path, or am I missing something?
Thanks in advance for your help!
Georgi Angelov (about 1 year ago)
Hey guys, is there a way to control bucket ACL through the
cloudfront-s3-cdn module? I saw that the module in question depends on s3-log-storage, which depends on the s3-buckets module, all of which are managed by CloudPosse. I saw that there is an input grants which I believe controls this. I'm talking about the settings here.
kofi (about 1 year ago)
Hey!
I would like to be able to pass all extra args of a custom CLI command to the underlying command without defining them in the flag section. Is it possible?
My use case would be to give ansible-playbook some extra flags without defining all of them in the custom CLI of atmos. Do you think it makes sense?
Erik Osterman (Cloud Posse) (about 1 year ago)
Should we move the release notifications to a new channel? We will probably have 20 or more releases this month.
John Seekins (about 1 year ago)
Having some state problems with atmos that I'm struggling to resolve...
$ atmos terraform plan msk -s staging-data-msk
exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
$ atmos terraform plan msk -s staging-data-msk -- -reconfigure
exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
Any tips on how to actually fix this problem?
Dan Hansen (about 1 year ago)
Is it common practice to use Terraform child modules inside components? The docs encourage component scope not to be too small, so I was wondering where sensible defaults for more resource-level configurations across stacks should live.
We've typically used child modules for this purpose, but I'm unsure if the intention behind some of the atmos abstractions is to flatten the root graph?
RB (about 1 year ago)
If we want to create new subnets on an existing vpc, is there an upstream component that can be used?
I looked at this component https://github.com/cloudposse-terraform-components/aws-vpc and couldn’t find a way to add more subnets such as a subnet specifically for databases, for example.
RB (about 1 year ago)
I saw a peer hard-code directly in the vpc component: one block per unique subnet, like mqtt and rds subnets, directly in the component instead of making it based on inputs. I skimmed through the component best practices and didn't see anything regarding this. I'd assume that we want to move stuff more into yaml and less into terraform if we can help it, and if it's in terraform it should be multi-region and multi-account. I suppose if we're so used to atmos this may be obvious, but it may not be obvious to newcomers.
Kyle Decot (about 1 year ago)
👋 Hello! I'm just getting started w/ Atmos and I'm running into an issue when attempting to use
atmos.Component in combination with a key that contains a - . When doing (atmos.Component "vpc" .stack).outputs.foo-bar I'm getting bad character U+002D '-' . I then attempted to use index (atmos.Component "vpc" .stack) "outputs" "foo-bar", however that gives index error calling index: index of nil pointer . Any suggestions?
David Elston (about 1 year ago)
Hi everyone, enjoying using Atmos ❤️ I just had a quick clarification question regarding setting the remote_state_backend configuration. Reading the backend configuration docs, it says:
When working with Terraform backends and writing/updating the state, the terraform-backend-read-write role will be used. But when reading the remote state of components, the terraform-backend-read-only role will be used.
Could someone clarify: does this refer to using the remote_state terraform module only, and not, say, if I ran
atmos terraform output my_component -s my_stack
or if I referenced an output in a stack via a yaml function such as
!terraform.output my_component my_stack my_output_value
This is the behavior I'm seeing; just wanting to know if I'm not doing something wrong.
Dan Hansen (about 1 year ago)
I'm having some trouble figuring out why vendoring isn't working. I can't get any related logs other than
Failed to vendor 1 components.
Petr Dondukov (about 1 year ago, edited)
Is it possible to generate
required_providers block via atmos?
Dave (about 1 year ago)
Hey!
I am pretty new to atmos, so sorry for the beginner questions 😄 Anyhow I am a little confused about how templating works for the yaml files. So first of all, I do not seem to be able to have them evaluated:
I got a pretty simple stack config like this:
import:
  - deploy/dev/_defaults.yaml
  - catalog/tfstate-backend.yaml
vars:
  tags:
    Stack: tfstate-backend
    test: !template '{{ .stack }}'
    test2: !exec echo value2
I would expect to have the test key in the tags variable templated (somehow, I am not exactly sure what I would see there), but it does not get evaluated. On the other hand the !exec function is evaluated nicely. This is the output for "atmos describe stacks --components tfstate-backend":
dev:
  components:
    terraform:
      tfstate-backend:
        ...
        stack: dev
        vars:
          enable_server_side_encryption: true
          enabled: true
          force_destroy: false
          name: tfstate
          prevent_unencrypted_uploads: true
          region: us-east-1
          stage: dev
          tags:
            Managed-By: Terraform
            Stack: tfstate-backend
            test: '{{ .stack }}'
            test2: |
              value2
        workspace: dev
So can you help me figure out how I can make these templates evaluated? Or a complete example would be nice, where I could see something like this work; unfortunately I did not find anything similar in the provided examples.
Also, what I would actually like to figure out is how I could reference variables defined at any level (global, or any other level) and use them to construct values, e.g. to pass to the component (like define some prefix at an upper level and a list of items at some lower level, and pass the combined values to the component) in the stack configuration. I was previously using terragrunt a lot, and there it is very straightforward to do so, because you can just reference any object and use the same functions as I would use in terraform (and some more). Can you help me with some example of how atmos is designed to handle such a use case? I found this in the docs, which seems to be similar to what I am looking for: https://atmos.tools/core-concepts/stacks/yaml-functions/template#advanced-examples But this example is not very complete, and with not being able to evaluate the templates I am just stuck on how to move forward. Also it would be nice to have a reference of what exactly the context of this template evaluation is (e.g. this was the only place I found in the documentation where .settings are referenced in a template, so I wonder what else is available).
Thanks for the attention and for reading all of this ^^ :)
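One likely cause of the un-evaluated {{ .stack }} above (an assumption on my part rather than something confirmed in this thread) is that Go template processing of stack manifests is disabled by default and has to be switched on in atmos.yaml. The keys below are recalled from the atmos templating docs, so verify them there:

```yaml
# atmos.yaml (fragment) - enable Go template evaluation in stack manifests.
# Keys recalled from the atmos docs; verify against the current schema.
templates:
  settings:
    enabled: true
    sprig:
      enabled: true
    gomplate:
      enabled: true
```

The !exec function working while {{ .stack }} stays literal is consistent with this: YAML functions are evaluated regardless, while Go templating is a separately gated feature.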
Dave (about 1 year ago)
Also one more thing, I find a little strange.
Previously, when I was working with terraform and terragrunt, one of the things we thought important was being able to explicitly define the list of variables that are passed to a specific terraform root module (called components in Atmos's case). The reason is that I want to see in the configuration which components depend on which value, so I can be sure about the blast radius of a change: when I change a variable (or delete it, for that matter) I know which components reference it (and finding them is really just a cmd+shift+f in my editor).
As I see it, with Atmos doing deep merging of variables it was not designed this way: any common vars coming from a _defaults file or a mixin are passed to every single component, so it is hard to see the exact dependencies (not to mention issues with two components that expect some input in a different way under the same variable name). I could probably devise a setup using templates to handle this, but I would feel like I am not using Atmos the way it was intended. So I would love some clarification on this, or to hear some opinions on how you think this should be handled.
thanks again :)
RB (about 1 year ago)
Hi all. I saw some questions related to remote state. Are there docs on the decision to use remote state vs terraform data sources? I skimmed this document and the data sources that it references are related to stack yaml instead of terraform.
https://atmos.tools/core-concepts/share-data/
Dave (about 1 year ago)
next thing I am struggling with: I am trying to use the vpc and the vpc-flow-log-bucket components, somehow I cannot make it work. I was following the example (in simplified way, I have a pretty simple structure), but I get the following error:
╷
│ Error: stack name pattern '{stage}' includes '{stage}', but stage is not provided
│
│ with module.vpc_flow_logs_bucket[0].data.utils_component_config.config[0],
│ on .terraform/modules/vpc_flow_logs_bucket/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
Unfortunately I can't seem to figure out what is wrong, or what I should be checking... So if anyone could give a hint that would be great :)
cricketsc (about 1 year ago)
I'm not 100% sure if this is conceptually a good question, but is there a notion of reserved/official/already used by Atmos keys in a component's
settings? Is there a systematic way to find out what these keys are? For example, it looks to me like github and integrations might be examples of such keys?
Pablo Paez (about 1 year ago)
Hello 👋🏻
I'd appreciate your advice on how to organize the components pulled via atmos vendor pull in different folders.
Context
We are pulling different components using atmos vendor pull and the number of components is rapidly growing. Our first approach to improve this situation is to group them in different folders. For example:
components
  terraform
    account-map
    route53
      dns-primary
      delegated-zone
Challenge
In the above example, we are grouping the route53-related components. This approach doesn't work out of the box. It requires updating the relative module references (e.g., https://github.com/cloudposse-terraform-components/aws-dns-delegated/blob/main/src/providers.tf#L17).
Updating the path is trivial; instead of ../account-map/modules/iam-roles we set ../../account-map/modules/iam-roles.
The issue is that the next time we run atmos vendor pull to update the component, we need to update the relative paths again.
Questions
• Is there a better way of doing this?
• I had in mind to add a CI/CD pipeline that will verify the components pulled using atmos vendor pull have not drifted from upstream. This gets difficult, given the situation described above.
Thanks in advance for your guidance!
Jesse Gonzalez (about 1 year ago)
I also have a question about
dns-primary and dns-delegated, both vendored from cloudposse-terraform-components.
Part 1:
Our current DNS architecture consists of 2 primary top-level domains:
• example.com - vanity domain
• example.internal - service discovery domain, only applicable to the stage; no need to share these records ever. If we need to expose a service we use the vanity domain. This zone is always private in our environment.
Each stage owns a hosted zone for both domains. For the dev stage this would be dev.example.com and dev.example.internal.
Using account-map we have defined dns_account_account_name (btw, why not just name this var dns_account_name 🙂), which forces creation of dns-primary zones in the dns account. This works well for publicly routable TLD domains we require delegation for, but is not strictly necessary for our example.internal domain. As a solution we define example.internal in the stage dns-primary component:
components:
  terraform:
    dns-primary:
      metadata:
        type: real
      vars:
        domain_names:
          - example.com
          - example.internal # this creates a public zone but not a valid TLD
Delegation for example.internal is not required, because records in this zone are never shared outside the stage. Chalk this up to "good enough; move on".
Questions
• Is there a way to provision zones using dns-primary outside of the dns account?
• If you want delegation for a subdomain (for reasons), for example a zone marketing.dev.example.com, what's the recommended approach?
Jesse Gonzalez (about 1 year ago)
Part 2:
Moving on to dns-delegated for both the vanity and service discovery subdomains. The module interface accepts a map as zone_config. If I use this component as follows:
components:
  terraform:
    dns-delegated:
      metadata:
        type: real
      vars:
        zone_config:
          - subdomain: stg
            zone_name: example.com
          - subdomain: stg
            zone_name: example.internal
        request_acm_certificate: true
        dns_private_zone_enabled: false
Observation 1:
The dns-delegated interface does not allow for creation of both private and public zones with the available zone_config interface.
Observation 2:
The module does not appear to properly handle the above zone_config input, using the zipmap function in locals:
zone_map = zipmap(var.zone_config[*].subdomain, var.zone_config[*].zone_name)
## this becomes
## zipmap(["stg", "stg"], ["example.com", "example.internal"])
##
## and ultimately
## {
##   "stg" = "example.internal"
## }
Observation 3:
There are a handful of references in cloudposse-terraform-components using hardcoded remote_state references to dns-delegated: https://github.com/search?q=org%3Acloudposse-terraform-components+dns-delegated+path%3Aremote-state.tf&type=code
But as you pointed out in the post by @Pablo Paez, you are working to eliminate these this year.
Questions:
• Since we do need both private and public hosted zones, is the recommended approach here to use the multiple component instances pattern? Something like:
components:
  terraform:
    dns-delegated/private:
      metadata:
        type: real
        component: dns-delegated
      vars:
        zone_config:
          - subdomain: stg
            zone_name: example.internal
        request_acm_certificate: true
        dns_private_zone_enabled: true
    dns-delegated/public:
      metadata:
        type: real
        component: dns-delegated
      vars:
        zone_config:
          - subdomain: stg
            zone_name: example.com
        request_acm_certificate: true
        dns_private_zone_enabled: false
Ismael PR (about 1 year ago, edited)
👋 hello, we're doing a POC with atmos + spacelift using the spacelift components to create spaces and admin stacks. I was able to create the spaces successfully, but I'm not able to create the admin stacks. I thought it was an issue with my own implementation of stacks in atmos, so I decided to try with the quick-start-advanced from the atmos repo, just adding the spacelift stuff, but I still have the same issue...
The error I'm facing is:
╷
│ Error: failed to find a match for the import '**/spacelift-ismatest/atmos-example/components/terraform/spacelift/admin-stack/stacks/orgs//.yaml' ('**/spacelift-ismatest/atmos-example/components/terraform/spacelift/admin-stack/stacks/orgs' + '/.yaml')
│
│ with module.all_admin_stacks_config.module.spacelift_config.data.utils_spacelift_stack_config.spacelift_stacks,
│ on .terraform/modules/all_admin_stacks_config.spacelift_config/modules/spacelift/main.tf line 1, in data "utils_spacelift_stack_config" "spacelift_stacks":
│ 1: data "utils_spacelift_stack_config" "spacelift_stacks" {
│
╵
╷
│ Error: failed to find a match for the import '**/spacelift-ismatest/atmos-example/components/terraform/spacelift/admin-stack/stacks/orgs//.yaml' ('**/spacelift-ismatest/atmos-example/components/terraform/spacelift/admin-stack/stacks/orgs' + '/.yaml')
│
│ with module.spaces.data.utils_component_config.config[0],
│ on .terraform/modules/spaces/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
EDIT: The error happens when running
atmos terraform apply admin-stack -s root-gbl-spacelift
I'm going crazy; I reviewed it multiple times, I redid it from scratch twice following the guide from the repo... but no luck... can someone maybe give me some light on this? 🙂
thanks in advance!
erik (about 1 year ago)
Since we have a lot of newcomers, I thought I would start a canvas!
https://sweetops.slack.com/canvas/C031919U8A0
let me know what we should add.
shirkevich (about 1 year ago)
Stupid question:
After refactoring and deletion of component with
atmos terraform destroy foo
when I try to remove this component from my deployment, atmos is now failing on
atmos describe affected -s bar --verbose with:
ProcessTmplWithDatasources(): processed template 'describe-stacks-all-sections'
ProcessTmplWithDatasources(): processing template 'describe-stacks-all-sections'
ProcessTmplWithDatasources(): template 'describe-stacks-all-sections' - evaluation 1
Executing template function 'atmos.Component(foo, bar)'
template: describe-stacks-all-sections:47:35: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component:
Could not find the component 'foo' in the stack 'bar'.
Check that all the context variables are correctly defined in the stack manifests.
Are the component and stack names correct? Did you forget an import?
How can I find where leftovers of this component are present? 🤔
Zack (about 1 year ago, edited)
Hi - I’m trying to do a simple “brownfield” PoC for atmos and moving some root modules into components that have a remote s3 backend already. Our existing remote state doesn’t have any
workspace_key_prefix defined, so during an atmos plan it wants to create everything instead of reading what's already there. What am I supposed to do here? It doesn't appear that I can null out workspace_key_prefix.
Andrew Chemis (about 1 year ago, edited)
@Andriy Knysh (Cloud Posse) I've got another !terraform.output question for you. I'm on
1.148.1 and I'm currently stuck in an infinite loop. I set my log level to trace and it's running the same set of commands forever. I'm doing a lookup cross-component and cross-stack. The values are all exported from the landing-zone stack.
components:
  terraform:
    databricks-environment:
      vars:
        s3_vpce_id: !terraform.output databricks-landing-zone cards-dev-uw2 vpc_endpoint_id_s3
        sts_vpce_id: !terraform.output databricks-landing-zone cards-dev-uw2 vpc_endpoint_id_sts
        kinesis_vpce_id: !terraform.output databricks-landing-zone ards-dev-uw2 vpc_endpoint_id_kinesis
        secretsmanager_vpce_id: !terraform.output databricks-landing-zone cards-dev-uw2 vpc_endpoint_id_secretsmanager
        backend_relay_id: !terraform.output databricks-landing-zone cards-dev-uw2 mws_endpoint_backend_relay
        backend_rest_id: !terraform.output databricks-landing-zone cards-dev-uw2 mws_endpoint_backend_rest
        vcpe_backend_relay_id: !terraform.output databricks-landing-zone cards-dev-uw2 vpc_endpoint_id_relay
        vcpe_backend_rest_id: !terraform.output databricks-landing-zone cards-dev-uw2 vpc_endpoint_id_rest
...
Found stack manifests:
- cards/cards-dev/dev.yaml
- cards/cards-dev/landing-zone.yaml
- cards/cards-dev/sandbox.yaml
- cards/cards-dev/test.yaml
- cards/cards-prod/landing-zone.yaml
Found component 'databricks-landing-zone' in the stack 'cards-dev-uw2' in the stack manifest 'cards/cards-dev/landing-zone'
ProcessTmplWithDatasources(): processing template 'all-atmos-sections'
ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 1
ProcessTmplWithDatasources(): processed template 'all-atmos-sections'
Executing Atmos YAML function: !terraform.output databricks-landing-zone cards-dev-uw2 vpc_endpoint_id_secretsmanager
...infinitely
Gheorghe Casian (about 1 year ago)
Hello,
I need some help to debug one of the issues I am having when updating atmos version from 1.70.0 to 1.148.1. I am running atmos using geodesic docker image 3.4.2.
After the update I get the following error:
atmos describe affected
no matches found for the import 'mixins/services/cms/catalog/terraform/mq-broker/default' in the file 'mixins/services/cms/mq-broker-tmpl-gbl.yaml'
Error: failed to find a match for the import '/tmp/1737103375364882584/stacks/mixins/services/cms/catalog/terraform/mq-broker/default.yaml' ('/tmp/1737103375364882584/stacks/mixins/services/cms/catalog/terraform/mq-broker' + 'default.yaml')
I checked the mq-broker-tmpl-gbl.yaml file; it looks good to me, and I can't figure out what is wrong with it.
• stacks/catalog/terraform/mq-broker/default.yaml
components:
  terraform:
    mq-broker/default:
      metadata:
        type: abstract
      vars:
        enabled: true
        apply_immediately: true
        auto_minor_version_upgrade: true
        deployment_mode: ACTIVE_STANDBY_MULTI_AZ
        engine_type: ActiveMQ
        engine_version: 5.15.16
        host_instance_type: mq.t2.micro
        publicly_accessible: false
        general_log_enabled: true
        audit_log_enabled: true
        encryption_enabled: true
        use_aws_owned_key: true
• stacks/mixins/services/cms/mq-broker-tmpl-gbl.yaml
import:
  - catalog/terraform/mq-broker/default
components:
  terraform:
    ui-cms-mq-broker:
      metadata:
        component: mq-broker
        inherits:
          - mq-broker/default
      vars:
        deployment_mode: SINGLE_INSTANCE
        name: ui-cms
        ssm_enabled: true
        ssm_parameter_name_format: "/{{ .stage }}/%s/%s"
Any ideas how to debug this issue?
Michael (about 1 year ago)
Loving the new !store YAML function. Are there any plans to implement a !secretstore function that redacts a retrieved secret in stdout when describing stacks or components?
https://atmos.tools/core-concepts/stacks/yaml-functions/store/
Rasit (about 1 year ago)
Hello,
I'm very new to atmos. I'm trying to implement the s3-bucket component by adding it to the vendor list, and I configured its catalog variables.
I'm getting the below error when I try to atmos terraform plan the s3 resource that I wanted to create. I managed to create the vpc resource; all looks well for vpc. What would be the issue with s3-bucket?
PePe Amengual (about 1 year ago)
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) @Matt Calhoun, looking at this https://atmos.tools/core-concepts/projects/configuration/stores, how do we add a new store, specifically Key Vault?
PePe Amengual (about 1 year ago, edited)
Can we do conditional imports?
---
import:
- sandbox/stack_defaults
  - catalog/*/*
• I'm having a problem with describe-affected: even after passing --stack, it tries to validate all the stacks anyhow, and if for some reason there is a stack that was not vendored, it fails since it can't find the imports.
Zack (about 1 year ago)
I can't really have source <(atmos completion zsh) in my .zshrc because that command requires an atmos config. I'm assuming that's because atmos will also add completion for custom commands defined in that file, right?
PePe Amengual (about 1 year ago, edited)
is it possible to use something like this?
components:
  terraform:
    cosmos/pepe/{{ (datasource "prnumber") }}
Or using some of the new yaml template functions?
I'm trying to avoid using :
terraform_workspace_pattern: '{namespace}-{environemnt}-{component}-{{ (datasource "prnumber") }}'Josh Simmondsabout 1 year ago
Can workflow steps in atmos be parallelized today?
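As a hypothetical shell-level stopgap while workflow steps run serially, independent atmos invocations can be fanned out as background jobs (the component and stack names below are made up):

```shell
# Hypothetical fan-out: run independent atmos commands as background jobs
# and block until all of them finish. Only safe when the components have
# no dependencies on each other.
atmos terraform apply vpc -s plat-ue2-dev -auto-approve &
atmos terraform apply dns -s plat-ue2-dev -auto-approve &
wait  # plain `wait` blocks for all jobs and returns 0
```

This loses per-step ordering and interleaves output, so it only fits genuinely independent steps.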
Zackabout 1 year ago
Is there a sane way to debug this error?
I have a bunch of templates in my stack
atmos describe component zacks-tgw-route-table-is-cool --stack cattle-demo-rdte
Found component 'zacks-tgw-route-table-is-cool' in the stack 'cattle-demo-rdte' in the stack manifest 'deploy/cattle-demo-rdte'
Found component 'uds-vpc' in the stack 'cattle-demo-rdte' in the stack manifest 'deploy/cattle-demo-rdte'
template: all-atmos-sections:174: unexpected "}" in operand
RBabout 1 year ago
We've been trying to craft the Atmos GitHub Actions and it's turned into quite a lot of code. Is there a complete, easy, copy-pastable Atmos GitHub Action YAML for plan/apply/destroy workflows?
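For what it's worth, a hypothetical minimal PR plan job, assuming Cloud Posse's github-action-atmos-terraform-plan action (the component, stack, paths, and version pins below are made up, and the action needs its own repo settings per its README):

```yaml
name: atmos-plan
on: pull_request
permissions:
  id-token: write   # for cloud OIDC auth, if used
  contents: read
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cloudposse/github-action-atmos-terraform-plan@v3
        with:
          component: vpc           # hypothetical component
          stack: plat-ue2-dev      # hypothetical stack
          atmos-config-path: ./    # hypothetical config location
          atmos-version: 1.144.0   # hypothetical pin
```

There are sibling actions for apply and drift detection in the same org; stitching them together is where most of the code tends to accumulate.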
Thomas Spearabout 1 year ago
Hi team, reading through the Why Atmos doc, I read Stage 2: Write Terraliths, then clicked to go to Stage 3: Reinvent the Wheel. The link for this page is https://atmos.tools/introduction/why-atmos/stage-3, but when viewing the page I'm shown "Stage 4: Move Towards Modularization". Note the URL itself indicates stage 3. On the left nav menu, and when changing the above URL to end in "stage-4", stage 4 is titled "Stage 4: Adoption of Open Source Modules", and I can't find anything about moving toward modularization or reinventing the wheel. Help please?
Garyabout 1 year ago
I have a couple of questions about execution and organization:
I want to apply 30, 100, 200 instances of a component root module. They're all the same, each w/ different inputs like name, ownership, etcetera. I have another component root module that depends on the first. It takes inputs from the outputs of the first.
A first approach would be one stack for each of these two component instances. This turns into 30x2, 50x2, 100x2, etc., atmos component[1-2] -s stack[1-200] runs. It's too many plans/applies.
I can put these into workflows, but the workflows run serially. Something like this could take hours to plan/apply depending on the number of stacks of components.
Is there an atmos structure or construct to let me run these more efficiently?
Is the simplest option to just make a root module and instantiate my module 30, 100, 200 times and call it from an atmos stack?
I guess I'm looking for an equivalent to
terragrunt run-all <cmd> on subfolders, which is simple aside from complicating plan outputs.
Zackabout 1 year ago
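One pattern that may help, as a hypothetical sketch (the module path, variable shape, and output are all made up): collapse the N copies into one root module with for_each, so a single atmos component covers all instances and its outputs feed the dependent component.

```hcl
# Hypothetical wrapper root module: one atmos component instantiates the
# child module once per entry in var.instances.
variable "instances" {
  type = map(object({
    name  = string
    owner = string
  }))
}

module "thing" {
  source   = "./modules/thing" # hypothetical child module path
  for_each = var.instances

  name  = each.value.name
  owner = each.value.owner
}

# Hypothetical pass-through so the dependent component can consume outputs.
output "thing_ids" {
  value = { for k, m in module.thing : k => m.id }
}
```

The trade-off is a single state and a single large plan, which is roughly what terragrunt run-all sidesteps, so this fits best when the instances are truly homogeneous.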
❓️ Where's the best place to find the most up-to-date (and easy-to-understand) schema for atmos things? I found this
but it’s not very exhaustive
Thomas Spearabout 1 year ago
I've found a broken link and submitted PR 967 to remove the link; though please do see my comments on the PR - if needed I can revert this and just copy the file back to the examples in the right spot so that the link will work properly.
PePe Amengualabout 1 year ago(edited)
I can do
name: 'pepe{{ (datasource "prnumber") }}', but how could I format this so I can add a - after pepe? (Only if prnumber is set.)
Zackabout 1 year ago(edited)
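A hypothetical sketch using the Go template with action, assuming the prnumber datasource resolves to an empty string when unset:

```yaml
# `with` only renders its body when the value is non-empty, so the dash
# appears only when prnumber is set.
name: 'pepe{{ with (datasource "prnumber") }}-{{ . }}{{ end }}'
```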
Do you guys have a program execution flowchart somewhere? I'm not sure if that's the right term; ChatGPT spit that out.
This is a tool that I use that has something I’m thinking about
if you click on lifecycle diagram here:
https://docs.zarf.dev/ref/create/
this would be useful for debugging how atmos works as a user and in what stages things are processed
Garyabout 1 year ago(edited)
Question about atmos rendered files:
components/terraform/<module>/*.terraform.tfvars.json. Should these be committed to the git repo? If a stack has an input for a credential (a Terraform output from another component), the sensitive value will leak into the repo. Is it best to just gitignore these and let atmos render them on execution only?
Weston Platterabout 1 year ago
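If ignoring them is the route you take, a hypothetical .gitignore sketch (paths assume the default component layout):

```
# Hypothetical: keep atmos-rendered inputs and local terraform cruft out of git
components/terraform/**/*.terraform.tfvars.json
components/terraform/**/.terraform/
components/terraform/**/*.planfile
```

Rendered tfvars still land on disk at run time, so sensitive values can also show up in CI logs and artifacts even when untracked.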
🧵 I have a high level atmos question. Is there a place in the atmos manifest json schema to provide a "Human Description" of what the components do?
Use case: I'm thinking of how to support someone reading the file who isn't steeped in Atmos knowledge and is trying to search through the YAML to find the infra they need to update or modify. In other similar situations, I've seen teams add a description field where people write a human description that makes sense to someone like a business analyst.
Thomas Spearabout 1 year ago
Hey guys, I found that tenv from https://github.com/tofuutils/tenv is able to install atmos, and by configuring either a .tool-versions file or an .atmos-version file you can pick the right atmos version per directory or per project when users/teams are using tenv. WDYT about adding something to https://atmos.tools/install guiding users to install atmos via tenv?
Liam McTagueabout 1 year ago
Hey guys, anyone know why I might be getting the following error on certain modules?
panic: assignment to entry in nil map
goroutine 1 [running]:
github.com/cloudposse/atmos/internal/exec.ProcessStacks({{0xc000da4c68, 0x2}, {{{0xc000c917a0, 0x14}, 0x0, {0xc0013192c0, 0x31}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
/home/runner/work/atmos/atmos/source/internal/exec/utils.go:438 +0x113d
github.com/cloudposse/atmos/internal/exec.ExecuteDescribeComponent({0xc001318912?, 0x28fc440?}, {0xc001318916?, 0x2?}, 0x1)
/home/runner/work/atmos/atmos/source/internal/exec/describe_component.go:76 +0x1f8
github.com/cloudposse/atmos/internal/exec.processTagTerraformOutput({{0xc000c63d08, 0x2}, {{{0xc000fb88a0, 0x14}, 0x0, {0xc0006ce240, 0x31}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
/home/runner/work/atmos/atmos/source/internal/exec/yaml_func_terraform_output.go:47 +0x5dc
github.com/cloudposse/atmos/internal/exec.processCustomTags({{0xc000c63d08, 0x2}, {{{0xc000fb88a0, 0x14}, 0x0, {0xc0006ce240, 0x31}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
/home/runner/work/atmos/atmos/source/internal/exec/yaml_func_utils.go:56 +0x128
github.com/cloudposse/atmos/internal/exec.processNodes.func1({0x28fc440?, 0xc000f0e230?})
/home/runner/work/atmos/atmos/source/internal/exec/yaml_func_utils.go:22 +0x178
github.com/cloudposse/atmos/internal/exec.processNodes.func1({0x2ad2960?, 0xc001181d40})
/home/runner/work/atmos/atmos/source/internal/exec/yaml_func_utils.go:27 +0x1e3
github.com/cloudposse/atmos/internal/exec.processNodes.func1({0x2ad2960?, 0xc0011819b0})
/home/runner/work/atmos/atmos/source/internal/exec/yaml_func_utils.go:27 +0x1e3
github.com/cloudposse/atmos/internal/exec.processNodes.func1({0x2ad2960?, 0xc0011812c0})
/home/runner/work/atmos/atmos/source/internal/exec/yaml_func_utils.go:27 +0x1e3
github.com/cloudposse/atmos/internal/exec.processNodes({{0xc000c63d08, 0x2}, {{{0xc000fb88a0, 0x14}, 0x0, {0xc0006ce240, 0x31}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
/home/runner/work/atmos/atmos/source/internal/exec/yaml_func_utils.go:44 +0xf1
github.com/cloudposse/atmos/internal/exec.ProcessCustomYamlTags(...)
/home/runner/work/atmos/atmos/source/internal/exec/yaml_func_utils.go:12
github.com/cloudposse/atmos/internal/exec.ProcessStacks({{0xc000c63d08, 0x2}, {{{0xc000fb88a0, 0x14}, 0x0, {0xc0006ce240, 0x31}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
/home/runner/work/atmos/atmos/source/internal/exec/utils.go:524 +0x1d78
github.com/cloudposse/atmos/internal/exec.ExecuteTerraform({{0x0, 0x0}, {0xc000d3e8d0, 0x15}, {0x0, 0x0}, {0x3029931, 0x9}, {0xc000472010, 0xa}, ...})
/home/runner/work/atmos/atmos/source/internal/exec/terraform.go:90 +0x3e8
github.com/cloudposse/atmos/internal/exec.ExecuteAtmosCmd()
/home/runner/work/atmos/atmos/source/internal/exec/atmos.go:150 +0xd33
github.com/cloudposse/atmos/cmd.init.func18(0x55673a0?, {0x30105ff?, 0x4?, 0x3010607?})
/home/runner/work/atmos/atmos/source/cmd/root.go:58 +0x8a
github.com/spf13/cobra.(*Command).execute(0x55673a0, {0xc000050090, 0x0, 0x0})
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xa91
github.com/spf13/cobra.(*Command).ExecuteC(0x55673a0)
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041
github.com/cloudposse/atmos/cmd.Execute()
/home/runner/work/atmos/atmos/source/cmd/root.go:120 +0x4c5
main.main()
/home/runner/work/atmos/atmos/source/main.go:10 +0x1e
Petr Dondukovabout 1 year ago
How do I pass different variables for different charts to helmfile from a stack?
Thomas Spearabout 1 year ago
I've started seeing this error after upgrading from atmos 1.115.0 to 1.144.0:
We're about to try upgrading to the latest release in hopes it's already been seen and fixed but I thought I'd check here in the meantime.
panic: interface conversion: io.Writer is *os.File, not *term.TerminalWriter
goroutine 1 [running]:
github.com/cloudposse/atmos/cmd.renderError({{0x38165d8, 0x17}, {0x0, 0x0}, {0x0, 0x0}})
/home/runner/work/atmos/atmos/source/cmd/workflow.go:37 +0x2df
github.com/cloudposse/atmos/cmd.init.func37(0x6b20840, {0xc0009b8090, 0x1, 0x3})
/home/runner/work/atmos/atmos/source/cmd/workflow.go:165 +0x50f
github.com/spf13/cobra.(*Command).execute(0x6b20840, {0xc0009b8030, 0x3, 0x3})
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xa91
github.com/spf13/cobra.(*Command).ExecuteC(0x6b1dd20)
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041
github.com/cloudposse/atmos/cmd.Execute()
/home/runner/work/atmos/atmos/source/cmd/root.go:99 +0x331
main.main()
/home/runner/work/atmos/atmos/source/main.go:10 +0x25
Thomas Spearabout 1 year ago
What is the right way to handle sensitive inputs with atmos? We currently put these in an environment variable but get the below warning when using one:
[WORKFLOW INFO]terraform apply 20-frontdoor-endpoint -s dashboard-test2-eastus-sbx
detected 'TF_VAR_certificate_password' set in the environment; this may interfere with Atmos's control of Terraform.
Zachary Loeberabout 1 year ago
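One option to sketch, assuming atmos's !env YAML function (the environment variable name here is made up): map the secret through the stack config instead of a TF_VAR_* variable.

```yaml
components:
  terraform:
    20-frontdoor-endpoint:
      vars:
        # hypothetical: read from a non-TF_VAR environment variable
        certificate_password: !env CERT_PASSWORD
```

The value still ends up in the rendered tfvars file on disk, so a !store-backed secrets manager may be the better long-term answer.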
I was screwing around with opentofu's encrypted state with local age keys and decided to retrofit things to use atmos. It seems to work out pretty well -> https://github.com/zloeber/tofu-exploration/tree/tofu-encrypted-atmos
Dan Hansenabout 1 year ago
I've started using atmos.Component in some of my stack configurations, which causes atmos describe to break permissions boundaries (i.e., requesting storage resources in another GCP project). I can work around this by modifying ATMOS_STACKS_INCLUDED_PATHS to only include stacks that the current runtime has access to, but was wondering if there was a better way?
Petr Dondukovabout 1 year ago
How do I deploy a whole stack at once without specifying components? If there is no such functionality, is it possible to implement it?
Erik Osterman (Cloud Posse)about 1 year ago
Zachary Loeberabout 1 year ago
I probably should fix any misunderstandings if you have a sec to point them out --> https://dev.to/zloeber/atmos-wield-terraform-like-a-boss-3bfc
RBabout 1 year ago
Hi atmos team. What's the strategy to add a new TXT record to an existing r53 hosted zone?
Erik Osterman (Cloud Posse)about 1 year ago
• It felt quite difficult to get my existing deployment working with atmos
🧵 @Zachary Loeber and anyone else who has transitioned existing code to atmos, can you please share the specifics? I want to write a migration guide, but want to know some real-world problems that were faced and how they would be done with atmos.
Erik Osterman (Cloud Posse)about 1 year ago
Pablo Paezabout 1 year ago
Hi 👋🏻
We are using the cloudposse/terraform-aws-dynamic-subnets component to define subnets and are struggling to customize their names. The challenge is that with the default name set by Atmos, we have multiple subnets with the exact same name. Looking at the variables, we thought that availability_zone_attribute_style or subnets_per_az_names would allow us to customize their names, but they don't seem to have any effect. Could you please point us in the right direction on how to achieve this?
James Leeabout 1 year ago
Hello everyone, I am a beginner with atmos, but I am really impressed by it.
I have a quick question: does atmos use Terraform, or is it something different from Terraform?
I think Terraform is an IaC tool, like OpenTofu. I read some articles about atmos saying it makes building infrastructure simple using Terraform.