53 messages
deniz (about 1 year ago)
Hi All! Getting Started with Atmos and have some questions about the bootstrapping. I appreciate some guidance.
I am a little bit confused about the deployment of the tfstate-backend component with the temporary SuperAdmin IAM user, before setting up the organization. I am using the {tenant}-{environment}-{stage} naming convention.
• If I want to have a separate state bucket/DynamoDB table for each account, where should I deploy the tfstate-backend component? Currently it is deployed at core-gbl-root, but that doesn't quite make sense to me, since the region of the DynamoDB table is eu-west-1.
• Where should I deploy the account and account-map components? The management account? If so, what are the tenant, environment, and stage for the management account?
Thanks!
Current Directory Structure:
.
├── README.md
├── atmos.yaml
├── components
│ └── terraform
│ ├── account
│ │ ├── context.tf
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ ├── providers.tf
│ │ ├── variables.tf
│ │ └── versions.tf
│ ├── account-map
│ │ ├── account-info.tftmpl
│ │ ├── context.tf
│ │ ├── dynamic-roles.tf
│ │ ├── main.tf
│ │ ├── modules
│ │ │ ├── iam-roles
│ │ │ │ ├── README.md
│ │ │ │ ├── context.tf
│ │ │ │ ├── main.tf
│ │ │ │ ├── outputs.tf
│ │ │ │ ├── providers.tf
│ │ │ │ ├── variables.tf
│ │ │ │ └── versions.tf
│ │ │ ├── roles-to-principals
│ │ │ │ ├── README.md
│ │ │ │ ├── context.tf
│ │ │ │ ├── main.tf
│ │ │ │ ├── outputs.tf
│ │ │ │ └── variables.tf
│ │ │ └── team-assume-role-policy
│ │ │ ├── README.md
│ │ │ ├── context.tf
│ │ │ ├── github-assume-role-policy.mixin.tf
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
│ │ ├── outputs.tf
│ │ ├── providers.tf
│ │ ├── remote-state.tf
│ │ ├── variables.tf
│ │ └── versions.tf
│ └── tfstate-backend
│ ├── context.tf
│ ├── iam.tf
│ ├── main.tf
│ ├── outputs.tf
│ ├── providers.tf
│ ├── variables.tf
│ └── versions.tf
├── stacks
│ ├── catalog
│ │ ├── account
│ │ │ └── defaults.yaml
│ │ ├── account-map
│ │ │ └── defaults.yaml
│ │ └── tfstate-backend
│ │ └── defaults.yaml
│ ├── mixins
│ │ ├── region
│ │ │ ├── eu-west-1.yaml
│ │ │ └── global.yaml
│ │ ├── stage
│ │ │ └── root.yaml
│ │ ├── tenant
│ │ │ └── core.yaml
│ │ └── tfstate-backend.yaml
│ └── orgs
│ └── dunder-mifflin
│ ├── _defaults.yaml
│ └── core
│ ├── _defaults.yaml
│ └── root
│ ├── _defaults.yaml
│ └── eu-west-1.yaml
└── vendor.yaml
22 directories, 55 files
Desired Organization Structure
[ACC] Root/Management Account (account name: dunder-mifflin-root)
│
├── [OU] Security
│ ├── [ACC] security-log-archive
│
├── [OU] Core
│ ├── [ACC] core-monitoring
│ └── [ACC] core-shared-services
│
└── [OU] Workloads
├── [OU] Production
│ └── [ACC] workloads-prod
│
└── [OU] Non-Production
    └── [ACC] workloads-non-prod
Michal Tomaszek (about 1 year ago, edited)
Hey! FYI, if anyone uses Mise/asdf to manage the Atmos version plus Renovate to handle dependency updates: I recently created a PR adding support for Atmos in the respective Renovate managers. It was merged and has already been deployed by Renovate. I see it works for my repos, so if you use the same setup, it should take care of the updates too.
See docs:
https://docs.renovatebot.com/modules/manager/mise/
https://docs.renovatebot.com/modules/manager/asdf/
Miguel Zablah (about 1 year ago)
I have a question about abstract components: should they be ignored in the UI, since they are not real components?
I have a defaults.yaml file in the catalog that is abstract, and I'm using it in some real components that inherit from it, but I see both of them in the Atmos UI. Is this intended? Shouldn't abstract components be hidden from this UI?
Andrew Chemis (about 1 year ago, edited)
Question about remote-state and atmos.Component for sharing data between components/stacks. Basically, trying to share VPC and subnet IDs.
I see this comment saying template functions should rarely be used https://sweetops.slack.com/archives/C031919U8A0/p1732031315369959?thread_ts=1732030781.552169&cid=C031919U8A0 but then in the documentation template functions are the "new and improved" way to share data. So which is it? https://atmos.tools/core-concepts/components/terraform/remote-state/
I'm hitting strange errors with atmos.Component where values are grabbed correctly locally, but in TFC the outputs are just null, despite existing in state. Or the first run grabs the IDs, but then subsequent runs lose them. All troubleshooting has failed. I'm looking at remote-state, but it doesn't quite seem like the best option for passing values.
How are others passing values between stacks or components? I have several hundred VPCs, so hardcoding values is not an option. I could use data calls against the VPC Name tag, but I'm trying to avoid them.
Josh Simmonds (about 1 year ago)
👋 Question about the params outlined in the Brownfield considerations doc. Am I right in thinking that this can be used to mimic the outputs of a module run, such that I could bypass applying certain modules and just statically define the outputs of, say, the account-map module without actually applying it, thereby leveraging your shared module logic without having to reverse engineer everything to completion?
Example: could I define the following under my account-map component invocation and have other modules (which use the account-map state lookup) read it in?
remote_state_backend_type: static
remote_state_backend:
  static:
    artifacts_account_account_name: "artifacts"
    audit_account_account_name: "myaccount"
    aws_partition: "aws"
    dns_account_account_name: "myaccount"
    full_account_map:
      dev-data: "1111111111111"
      myaccount: "1111111111111"
      sandbox: "1111111111111"
      security: "1111111111111"
      sre: "1111111111111"
Josh Simmonds (about 1 year ago)
Related to the use of templating and gomplate, what's the recommended way to combine two different outputs into a single value? I have two list(string) values I need to combine, both of which are outputs from the vpc module.
['{{ coll.Flatten (atmos.Component "vpc" .stack).outputs.private_subnet_ids .outputs.public_subnet_ids }}']
was my initial thought, but that syntax doesn't work for pulling multiple values out of the outputs from the component.
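One pattern that may help (a sketch, not verified against this setup): gomplate's coll.Flatten takes a single nested list, so the two outputs first need to be wrapped into one list with coll.Slice, roughly like this:

```yaml
# Sketch: wrap both lists with coll.Slice, then flatten the result into one list.
# Assumes gomplate functions are enabled in the Atmos template settings.
subnet_ids: '{{ toJson (coll.Flatten (coll.Slice ((atmos.Component "vpc" .stack).outputs.private_subnet_ids) ((atmos.Component "vpc" .stack).outputs.public_subnet_ids))) }}'
```

The toJson at the end serializes the merged list back into a form YAML can carry; whether that round-trip is needed depends on how the var is consumed.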
Junk (about 1 year ago)
I’d like to hear your thoughts on how people are approaching GitOps in Kubernetes these days. What are some of the popular practices, how are you handling it, and what do you think are the pros and cons of your methods?
Currently, I’m using Argo CD’s App of Apps strategy to manage different ecosystems through GitOps. It’s been working well overall, but there are a few pain points I’ve run into:
• Drift detection for Kubernetes resources with Terraform hasn’t been easy to manage.
• Automating the deployment of the same setup across multiple clusters worked well a year or two ago but now feels a bit outdated or clunky.
• Adding cluster-specific prefixes or suffixes to Sub App names created by Bootstrap Apps is tedious and adds unnecessary complexity.
• As the number of clusters increases, Argo CD seems to struggle with the load, which makes scaling a challenge.
I’m curious about how others have solved these kinds of issues or whether there are better ways to do what I’m doing. What’s been working for you, and what do you think are the strengths and weaknesses of your approach? Looking forward to hearing everyone’s ideas and learning from your experiences. Thanks!
RB (about 1 year ago)
Very cool updates. Is there a plan to unveil Atmos on Hacker News and other platforms?
PePe Amengual (about 1 year ago)
Is the easiest way to use an ENV variable, or to override a stack global variable value, to use templates? https://atmos.tools/core-concepts/stacks/templates/datasources/#environment-variables
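For reference, with templating enabled, an environment variable can also be read with gomplate's built-in env functions rather than a full datasource; a sketch (the variable name and default are illustrative):

```yaml
vars:
  # Falls back to "eu-west-1" when AWS_REGION is unset (illustrative values).
  region: '{{ env.Getenv "AWS_REGION" "eu-west-1" }}'
```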
PePe Amengual (about 1 year ago)
why....
atmos vendor pull -stack sandbox
Atmos supports native 'terraform' commands with all the options, arguments and flags.
In addition, 'component' and 'stack' are required in order to generate variables for the component in the stack.
atmos terraform <subcommand> <component> -s <stack> [options]
atmos terraform <subcommand> <component> --stack <stack> [options]
For more details, execute 'terraform --help'
command 'atmos vendor pull --stack <stack>' is not supported yet
Jesus Fernandez (about 1 year ago)
👋 hello, newbie here experimenting with Atmos!
I've been following the examples, and now, trying to align to our own organization layout, I came across a problem with the workspace auto-generation, which apparently fails to find the variables from the pattern.
If I explicitly set:
terraform_workspace_pattern: "{tenant}--{stage}--{region}--{component}"
it works as expected (resolves to e.g. foo--prod--eu-west-1--eks), but if I add another variable (in my case domain) it doesn't (resolves to foo--prod--eu-west-1--{domain}--eks):
terraform_workspace_pattern: "{tenant}--{stage}--{region}--{domain}--{component}"
I might be missing something very obvious, but cannot find what. Any hints?
Thanks in advance!
Erik Osterman (Cloud Posse) (about 1 year ago)
For those interested, here's the current Atmos development roadmap: https://github.com/orgs/cloudposse/projects/34/views/1
Roman Orlovskiy (about 1 year ago)
Hi guys. Using atmos describe affected, I implemented a pipeline that runs on PRs to find affected stacks and run plan/apply on them. In that PR I had to change some values in the output of account-map/modules/iam-roles/outputs.tf, and as a result, atmos describe affected included literally all of the stacks instead of just a couple, due to affected: component.module as documented here, because all of the components reference the account-map component as a local module to get the IAM roles for providers and other stuff.
For now, I just committed and pushed this account-map/modules/iam-roles/outputs.tf change to the main branch first and reran the PR pipeline as a workaround, which helped. However, is there a better approach (or a plan to implement one) for filtering out some of the triggers for atmos describe affected? I did not find anything regarding this in the docs.
deniz gökçin (about 1 year ago)
Hello again 🙂
Getting close to setting up my imaginary organization dunder-mifflin and wanted to ask something. I deployed tfstate-backend and accounts for core-gbl-root and am now working on the account-map component. The issue I am having is: if I do not export ATMOS_CLI_CONFIG_PATH=/Users/denizgokcin/codes/infrastructure and ATMOS_BASE_PATH=/Users/denizgokcin/codes/infrastructure before executing my atmos commands, then although the plan for account-map succeeds, I get the following long error. For some reason, one of the dependencies of account-map is looking for the stacks in the wrong place. Interestingly, this did not happen with tfstate-backend or the account component. Does anyone have an idea how to get past this problem without hardcoding paths from my local machine?
... successful plan
You can apply this plan to save these new output values to the Terraform state, without changing any real
infrastructure.
╷
│ Error: failed to find a match for the import '/Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks/orgs/**/*.yaml' ('/Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks/orgs' + '**/*.yaml')
│
│
│ CLI config:
│
│ base_path: .
│ components:
│ terraform:
│ base_path: components/terraform
│ apply_auto_approve: false
│ deploy_run_init: true
│ init_run_reconfigure: true
│ auto_generate_backend_file: true
│ command: ""
│ helmfile:
│ base_path: components/helmfile
│ use_eks: true
│ kubeconfig_path: /dev/shm
│ helm_aws_profile_pattern: '{namespace}-{tenant}-gbl-{stage}-helm'
│ cluster_name_pattern: '{namespace}-{tenant}-{environment}-{stage}-eks-cluster'
│ command: ""
│ stacks:
│ base_path: stacks
│ included_paths:
│ - orgs/**/*
│ excluded_paths:
│ - '**/_defaults.yaml'
│ name_pattern: '{tenant}-{environment}-{stage}'
│ name_template: ""
│ workflows:
│ base_path: stacks/workflows
│ logs:
│ file: /dev/stdout
│ level: Debug
│ schemas:
│ jsonschema:
│ base_path: stacks/schemas/jsonschema
│ opa:
│ base_path: stacks/schemas/opa
│ templates:
│ settings:
│ enabled: true
│ sprig:
│ enabled: true
│ gomplate:
│ enabled: true
│ timeout: 0
│ datasources: {}
│ initialized: true
│ stacksBaseAbsolutePath: /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks
│ includeStackAbsolutePaths:
│ - /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks/orgs/**/*
│ excludeStackAbsolutePaths:
│ - /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks/**/_defaults.yaml
│ terraformDirAbsolutePath: /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/components/terraform
│ helmfileDirAbsolutePath: /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/components/helmfile
│ default: true
│
│
│ with module.accounts.data.utils_component_config.config[0],
│ on .terraform/modules/accounts/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
exit status 1
PePe Amengual (about 1 year ago, edited)
Question: based on this apply workflow:
jobs:
  pr:
    name: PR Context
    runs-on: ubuntu-latest
    steps:
      - uses: cloudposse-github-actions/get-pr@v2
        id: pr
    outputs:
      base: ${{ fromJSON(steps.pr.outputs.json).base.sha }}
      head: ${{ fromJSON(steps.pr.outputs.json).head.sha }}
      auto-apply: ${{ contains( fromJSON(steps.pr.outputs.json).labels.*.name, 'auto-apply') }}
      no-apply: ${{ contains( fromJSON(steps.pr.outputs.json).labels.*.name, 'no-apply') }}
  atmos-affected:
    name: Determine Affected Stacks
    if: needs.pr.outputs.no-apply == 'false'
    needs: ["pr"]
    runs-on: ubuntu-latest
    steps:
      - id: affected
        uses: cloudposse/github-action-atmos-affected-stacks@v6.0.0
        with:
          base-ref: ${{ needs.pr.outputs.base }}
          head-ref: ${{ needs.pr.outputs.head }}
          atmos-config-path: ${{ inputs.atmos_cli_config_path }}
          atmos-version: ${{ inputs.atmos_version }}
    outputs:
      stacks: ${{ steps.affected.outputs.matrix }}
      has-affected-stacks: ${{ steps.affected.outputs.has-affected-stacks }}
Do you guys filter the content of the PR to decide which stacks and components were changed before you do the apply? Now that describe affected supports --stack, it is possible to filter the stack to detect changes on and apply only that stack.
Federico Nicolelli (about 1 year ago)
Hello team! 👋
I've just started experimenting with Atmos and I've also set up the GH actions to plan and apply.
Everything's working more or less as expected, except destroying a resource.
If I set enabled: false in the resource YAML file, the GitHub action detects that there's a change in that stack, but the plan shows zero to change... Is it possible/supported to destroy a resource via the GH action? (It is very possible that I'm missing something 🙂)
Junk (about 1 year ago)
Hello team! I have a question about Atmos GitHub Actions workflows.
I’m trying to make some improvements based on the example provided in the Atmos Terraform Dispatch Workflow. I’ve set up and stored reusable workflows as shown in the example. When I push changes and execute the dispatch workflow, it runs successfully. However, the workflow doesn’t seem to track changes from the plan step or produce any meaningful summary of those changes. All required configurations, such as OIDC settings, GitOps S3 bucket, DynamoDB, and Terraform backend, are correctly configured. Do you have any hints on why this might be happening? 😇
Stephan Helas (about 1 year ago, edited)
Hi,
I have a question regarding the new !template function. Since it handles outputs containing maps and lists, I can use it to pass YAML lists to a component, but what I can't do is use a subkey of that object in the context of the component. I can only use it as-is.
Example: a high-level component that imports other components.
Suppose I define this in the stack component:
settings:
  ami_filters:
    owner_id: "redhat"
    name: "foobar"
component:
  terraform:
    foo:
      ....
catalog/high_level.yaml:
import:
  - path: catalog/components/low_level
    context:
      ami_filters: !template '{{ .settings.ami_filters | toJson }}'
which I then can use in catalog/components/low_level.yaml:
components:
  terraform:
    foo:
      vars:
        ami_filters: '{{ .ami_filters }}' <--- this will work
        ami_owner: '{{ .ami_filters.owner_id }}' <--- this will fail, at <.ami_filters.owner_id>: can't evaluate field owner_id in type interface {}
Stephan Helas (about 1 year ago)
Can I disable the atmos update check?
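If memory serves, atmos.yaml has a knob for this; treat the exact path as an assumption and verify it against the current Atmos CLI configuration docs:

```yaml
# atmos.yaml -- assumed shape of the version-check setting; verify against
# the current Atmos CLI configuration schema before relying on it.
version:
  check:
    enabled: false
```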
PePe Amengual (about 1 year ago)
When is this one going to get merged? https://github.com/cloudposse/atmos/pull/762 Sadly, gomplate does not support Azure Key Vault as a datasource.
PePe Amengual (about 1 year ago)
@Igor Rodionov is this possible:
- name: Get atmos settings
  id: atmos-settings
  uses: cloudposse/github-action-atmos-get-setting@v2
  with:
    settings: |
      - stack: ${{ inputs.stack }}
        settingsPath: settings.integrations.github.gitops.azure.ARM_CLIENT_ID
        outputPath: ARM_CLIENT_ID
Alcp (about 1 year ago)
With terraform version 1.10.1, I am getting this error for atmos plan or apply:
│ Error: Extraneous JSON object property
│
│ on backend.tf.json line 11, in terraform.backend.s3:
│ 11: "role_arn": "arn:aws:iam::00000:role/abc-usw2-e1",
│
│ No argument or block type is named "role_arn".
╵
How to resolve this?
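Context that may explain this: Terraform 1.6 deprecated the top-level role_arn argument of the s3 backend in favor of a nested assume_role block, and the error above shows 1.10 rejecting the old argument outright. A sketch of the newer shape for the generated backend.tf.json (bucket, key, and region values are illustrative):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "bucket": "example-tfstate-bucket",
        "key": "terraform.tfstate",
        "region": "us-west-2",
        "assume_role": {
          "role_arn": "arn:aws:iam::000000000000:role/abc-usw2-e1"
        }
      }
    }
  }
}
```

Since Atmos generates backend.tf.json from the backend settings in the stack config, the fix is typically moving role_arn under assume_role there.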
Boris Dyga (about 1 year ago)
Hello!
Anybody had an error like this one? When I add a new context value (‘environment’ in my particular case) to an existing stack and try to deploy, I get:
… template: … executing … at <.environment>: map has no entry for key “environment”.
I’ve checked the import parameters - they are OK. Somehow the problem persists when I try to introduce new values to context
Stephan Helas (about 1 year ago)
FYI, the Atmos JSON schema is missing the new metadata.enabled field:
"metadata": {
  ....
  "component": {
    "type": "string"
  },
  "enabled": {
    "type": "boolean"
  },
  ^^^^^^^^^^^^
  "inherits": {
    "type": "array",
    "uniqueItems": true,
    "items": {
      "type": "string"
    }
  }
  ....
Matt Schmidt (about 1 year ago, edited)
Question about using static remote state.
https://atmos.tools/core-concepts/components/terraform/brownfield#hacking-remote-state-with-static-backends
I have a very basic MVP based on the simple tutorial.
• Terraform component wttr
I have set up abstract inheritance with static remote_state.
• Atmos abstract component weather-report-abstract, based on the wttr component, with static remote state overriding an output location
• Atmos component weather-report-disneyland, which inherits from weather-report-abstract
I am using the terraform.output function to reference the output from the static component.
• Atmos component hello-world, which uses the weather-report-disneyland output to do some string concatenation
But when run, it's not using the static values at all.
Rat (about 1 year ago)
Hi there. I've got a question about the Go templating abilities. I'm trying to construct a JSON object to pass into a module, and through testing I've pared it down to just a range loop:
records: '{{ range $i, $fqdn := ((atmos.Component "nproxy" .stack).outputs.fqdns) }}{{$fqdn}}{{end}}'
However, any time I try to run atmos, it gives:
template: all-atmos-sections:233: unexpected "}" in operand
Could anyone point me in the right direction? Thanks
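For what it's worth, the range construct itself is valid Go text/template syntax; a minimal standalone check (plain Go, outside Atmos, with a literal string slice standing in for the component outputs) renders it fine, which may suggest the error comes from elsewhere in the merged stack config, e.g. quoting or what fqdns actually resolves to:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderRecords renders the same range loop as the Atmos snippet,
// with a plain string slice standing in for the component outputs.
func renderRecords(fqdns []string) string {
	tmpl := template.Must(template.New("records").Parse(
		`{{ range $i, $fqdn := .fqdns }}{{ $fqdn }}{{ end }}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]any{"fqdns": fqdns}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// prints: a.example.comb.example.com
	fmt.Println(renderRecords([]string{"a.example.com", "b.example.com"}))
}
```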
cricketsc (about 1 year ago)
Is there a way to narrow the scope of Atmos describe affected to a particular stack?
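(Per the note further up in this thread, newer Atmos releases added a stack filter to describe affected; a sketch with an illustrative stack name — verify the flag against your Atmos version:)

```
atmos describe affected --stack plat-ue2-prod
```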
cricketsc (about 1 year ago)
Since updating my Atmos, I get JSON schema validation errors when I run atmos describe stacks (I don't know the previous behavior). For example, one error seems to indicate that I should have components under definitions/, not at the root of the file. Could I get some broader context on what may be going on here?
Stephan Helas (about 1 year ago)
Hello. I have a quick design question: can I use the remote-state provider with the GitLab Terraform HTTP backend?
Dennis Bernardy (about 1 year ago)
Hey, is it intended that atmos terraform show <stack_name>.planfile deletes the planfile afterwards? If yes, can I stop Atmos from doing this?
Josh Simmonds (about 1 year ago)
Thanks for fielding our questions today @Erik Osterman (Cloud Posse)! Just so @Matt Schmidt and I know, when (or with whom) should we follow up for the example code y'all offered?
Raymond Schippers (about 1 year ago)
A minor nitpick: it would appear that only the 1.0.1 release of cloudflare-zone has a ref starting with v, which is inconsistent with the other releases (and other components).
https://github.com/cloudposse/terraform-cloudflare-zone/releases
Raymond Schippers (about 1 year ago)
Has anyone used Atmos to manage an existing Cloudflare zone? When you register a domain via Cloudflare, the zone is automatically created. I assume I need to import it somehow, but tips on how to go about that in Atmos would be appreciated.
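Since Atmos passes native terraform subcommands through, an import would generally take this shape; the resource address here is hypothetical (check the actual address in the component's plan output first), and the zone ID is a placeholder:

```
atmos terraform import cloudflare-zone -s <stack> 'cloudflare_zone.default' <zone-id>
```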
Dennis Bernardy (about 1 year ago)
Hey, the Atmos documentation for helmfile states that "deploy" runs "diff" and then "apply", but the code looks like it just runs "sync"?
https://github.com/cloudposse/atmos/blob/v1.123.0/internal/exec/helmfile.go#L128
This is a bit misleading, as I was wondering why diff did not show any changes but my chart was deployed anyway.
Josh Simmonds (about 1 year ago)
Does Atmos have any known incompatibility with Terraform 1.10? Trying to upgrade from 1.9.8 -> 1.10.2 to use the new ephemeral resource type, and Atmos complains about the backend configuration changing, but won't let me reinit or reconfigure to migrate state, even with this in my config:
components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
Example:
$ atmos terraform deploy aurora -s stackname
template: all-atmos-sections:452:52: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure"
It doesn't work even if I explicitly call atmos terraform init <component> --stack <stack> --args="-reconfigure".
Samuel Than (about 1 year ago)
I'm supposed to report this to the developers of Atmos, so I'm pasting the error here for help. This happens when I'm deploying the ecs-service module.
Planning failed. Terraform encountered an error while generating this plan.
╷
│ Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "public_lb_enabled" but a value was found in file
│ "cs-ai-apse2-dev-aws-project-tofu.terraform.tfvars.json". If you meant to use this value, add a "variable" block to the configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide certain "global" settings to all configurations in your
│ organization. To reduce the verbosity of these warnings, use the -compact-warnings option.
╵
╷
│ Warning: Deprecated Resource
│
│ with aws_s3_bucket_object.task_definition_template,
│ on main.tf line 570, in resource "aws_s3_bucket_object" "task_definition_template":
│ 570: resource "aws_s3_bucket_object" "task_definition_template" {
│
│ use the aws_s3_object resource instead
╵
╷
│ Warning: Argument is deprecated
│
│ with aws_ssm_parameter.full_urls,
│ on systems-manager.tf line 56, in resource "aws_ssm_parameter" "full_urls":
│ 56: overwrite = true
│
│ this attribute has been deprecated
╵
╷
│ Error: Request cancelled
│
│ with module.alb[0].data.utils_component_config.config[0],
│ on .terraform/modules/alb/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin.(*GRPCProvider).ReadDataSource request was cancelled.
╵
╷
│ Error: Request cancelled
│
│ with module.ecs_cluster.data.utils_component_config.config[0],
│ on .terraform/modules/ecs_cluster/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin.(*GRPCProvider).ReadDataSource request was cancelled.
╵
╷
│ Error: Plugin did not respond
│
│ with module.roles_to_principals.module.account_map.data.utils_component_config.config[0],
│ on .terraform/modules/roles_to_principals.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more
│ details.
╵
╷
│ Error: Request cancelled
│
│ with module.vpc.data.utils_component_config.config[0],
│ on .terraform/modules/vpc/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin.(*GRPCProvider).ReadDataSource request was cancelled.
╵
Stack trace from the terraform-provider-utils plugin:
panic: assignment to entry in nil map
goroutine 62 [running]:
github.com/cloudposse/atmos/internal/exec.ProcessStacks({{0x14000072050, 0x2c}, {{{0x14000b05d40, 0x14}, 0x0, {0x14000f07170, 0x2f}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
	github.com/cloudposse/atmos@v1.122.0/internal/exec/utils.go:438 +0xdac
github.com/cloudposse/atmos/pkg/component.ProcessComponentInStack({0x1400010e3c0?, 0x2?}, {0x140009df870?, 0x2?}, {0x0?, 0x5?}, {0x0?, 0x3?})
	github.com/cloudposse/atmos@v1.122.0/pkg/component/component_processor.go:33 +0x180
github.com/cloudposse/atmos/pkg/component.ProcessComponentFromContext({0x1400010e3c0, 0x1b}, {0x140003985fa, 0x2}, {0x1400039862a, 0x2}, {0x1400039837b, 0x5}, {0x14000398620, 0x3}, ...)
	github.com/cloudposse/atmos@v1.122.0/pkg/component/component_processor.go:80 +0x294
github.com/cloudposse/terraform-provider-utils/internal/provider.dataSourceComponentConfigRead({0x1043b9728?, 0x14000570c60?}, 0x140001a0f00, {0x0?, 0x0?})
	github.com/cloudposse/terraform-provider-utils/internal/provider/data_source_component_config.go:121 +0x2f8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x14000825ea0, {0x1043b9728, 0x14000570c60}, 0x140001a0f00, {0x0, 0x0})
	github.com/hashicorp/terraform-plugin-sdk/v2@v2.35.0/helper/schema/resource.go:823 +0xe4
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0x14000825ea0, {0x1043b9728, 0x14000570c60}, 0x140001a0380, {0x0, 0x0})
	github.com/hashicorp/terraform-plugin-sdk/v2@v2.35.0/helper/schema/resource.go:1043 +0x110
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0x14000bfc150, {0x1043b9728?, 0x14000570ba0?}, 0x14000570b40)
	github.com/hashicorp/terraform-plugin-sdk/v2@v2.35.0/helper/schema/grpc_provider.go:1436 +0x5a0
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0x140009c5860, {0x1043b9728?, 0x140005702a0?}, 0x140004ae460)
	github.com/hashicorp/terraform-plugin-go@v0.25.0/tfprotov5/tf5server/server.go:688 +0x1cc
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x1042e36a0, 0x140009c5860}, {0x1043b9728, 0x140005702a0}, 0x140001a0180, 0x0)
	github.com/hashicorp/terraform-plugin-go@v0.25.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:665 +0x1c0
google.golang.org/grpc.(*Server).processUnaryRPC(0x14000813800, {0x1043b9728, 0x14000570210}, {0x1043c6cc0, 0x140007a8340}, 0x14000c399e0, 0x14000bfe960, 0x106130700, 0x0)
	google.golang.org/grpc@v1.67.1/server.go:1394 +0xb64
google.golang.org/grpc.(*Server).handleStream(0x14000813800, {0x1043c6cc0, 0x140007a8340}, 0x14000c399e0)
	google.golang.org/grpc@v1.67.1/server.go:1805 +0xb20
google.golang.org/grpc.(*Server).serveStreams.func2.1()
	google.golang.org/grpc@v1.67.1/server.go:1029 +0x84
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 56
	google.golang.org/grpc@v1.67.1/server.go:1040 +0x13c
Error: The terraform-provider-utils plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
exit status 1
Weston Platter about 1 year ago (edited)
Has anyone run into an issue where the component name is interpreted by atmos/terraform as a command-line flag?
Error parsing command-line flags: flag provided but not defined: -rds-key
I am experimenting with conditionally setting the name_pattern via name_template and wondering if my config adjustments are causing the error. [I am using atmos v1.123.0.]
cricketsc about 1 year ago
Why might I get an error that states no argument or block type is named role_arn for a role_arn under terraform > backend > s3 in a backend.tf.json file?
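One possibility worth checking (this is an assumption, not a confirmed diagnosis): recent Terraform versions moved the S3 backend's role_arn into a nested assume_role block, so a generated backend.tf.json may need roughly this shape (bucket, key, and role values below are placeholders):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "bucket": "example-tfstate-bucket",
        "key": "terraform.tfstate",
        "region": "eu-west-1",
        "assume_role": {
          "role_arn": "arn:aws:iam::111111111111:role/example-terraform"
        }
      }
    }
  }
}
```

If the role_arn sits directly under "s3" on a Terraform version that has dropped the top-level argument, an "unsupported argument" style error is what you'd expect to see.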
Mike Hennessy about 1 year ago
I'm currently PoC'ing atmos and I'm running into something that I can't quite square; hoping someone can shed some light. I'm using a stack name template {{.vars.environment}}-{{.vars.short_region}}, where environment is defined in a _defaults.yaml for the respective environments (dev, prod, etc.) and short_region is defined as a regional default (e.g. use1). This works just fine; however, when running terraform it complains about an extra undeclared variable short_region. I searched and found this thread where the recommendation was to not use global variables. I'm not sure how to square that with the need to declare these variables in a relatively "global" sense. Am I missing something obvious?
Erik Osterman (Cloud Posse) about 1 year ago
Did you know we have a context provider for Terraform? It's a magical way to enforce naming conventions, like our null-label module does. Since it is a provider, context is automatically available to all child modules of the root module. This is the best way to implement naming and tagging conventions and then validate that they meet your requirements. Stop using variables, and start using context.
https://github.com/cloudposse/atmos/tree/main/examples/demo-context
Haitham about 1 year ago
Yevgen (md2k) about 1 year ago (edited)
Hi all, I found that Atmos uses the mergo library to provide merge capabilities for structures, but it also looks like mergo is no longer actively maintained. The author, it seems, is not accepting any new PRs and tells people to wait for mergo/v2, but nothing has moved forward with v2 for more than a year. What do you think about cloudposse forking mergo to maintain it?
One of the problems is that mergo is missing some nice features, like removing duplicate items from a slice on append, and deep copy for slices also doesn't work perfectly. Most of this could be solved by a few PRs, but given the above, they probably won't land.
deniz gökçin about 1 year ago (edited)
Hi all 👋
In my previous job, we heavily used cloudposse modules in our terragrunt setup. The examples/complete folder helped a lot in understanding how a root module worked. Now that you have migrated to atmos, I am a little lost about how to use your modules in a non-atmos, pure terraform environment. For instance, I am trying to deploy the karpenter controller to an eks cluster (cloudposse/eks-cluster/aws), and copying the *.tf files from cloudposse-terraform-components/aws-eks-karpenter-controller is not working, as there are some dependencies on account-map and iam-roles (I think).
Am I missing anything? Any tips on deploying karpenter using cloudposse modules?
(Note: as this is not directly related to atmos, but somewhat is, I was not sure if this channel was the correct place to ask.)
Thanks!
Ismael PR about 1 year ago
Hi all! I am making a PoC with atmos to use it together with Atlantis, and I have some doubts that I am not able to resolve with the documentation.
According to the documentation here, it looks like the atmos+atlantis integration requires "pre-rendering" the atlantis projects, backends, and variables, which is not a big problem for me. The problem is that plenty of features are not supported by that integration, like Terraform provider overrides or the dependencies between components.
First, I would like to know whether I am right about this or have misunderstood something, and whether there is a way to use all the features of Atmos with atlantis. I guess that would require atlantis to launch atmos instead of terraform to generate the plan. Am I right? Is this feasible? Has anyone tried it?
Thanks in advance!
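For what it's worth, a rough, untested sketch of that direction using Atlantis custom workflows, where each step shells out to atmos instead of terraform (the component and stack names below are placeholders, and this assumes the projects have been pre-rendered so Atlantis knows which directories to plan):

```yaml
# Server-side repos.yaml (or repo-level atlantis.yaml) -- hedged sketch
workflows:
  atmos:
    plan:
      steps:
        # component and stack are hypothetical; in practice they would be
        # templated per pre-rendered atlantis project
        - run: atmos terraform plan vpc -s plat-use1-dev
    apply:
      steps:
        - run: atmos terraform apply vpc -s plat-use1-dev -auto-approve
```

The open question is how to map each Atlantis project back to its atmos component/stack pair, which the pre-rendering step would have to encode.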
Jesus Fernandez about 1 year ago (edited)
Hi there! I'm trying to do a small PoC with Atmos + GitHub Actions where one component (cluster) depends on another (network). I have several questions:
1- I see no way to guarantee ordering other than explicitly writing a workflow or playing with the atmos describe affected --include-dependents output.
2- When I try to use the atmos.Component function in a component, I get an error in the atmos-settings step, which I guess tries to resolve the output but doesn't yet have tofu installed or credentials set up. Looks like a chicken-and-egg problem to me, unless I'm missing something very obvious?
Thanks!
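On question 1, the usual pattern (a sketch; component names are placeholders) is to declare the dependency in the stack settings so that atmos describe affected --include-dependents can pick up the ordering:

```yaml
components:
  terraform:
    cluster:
      settings:
        depends_on:
          1:
            component: "network"   # cluster is considered dependent on network
```

This still leaves the actual execution ordering to whatever consumes the describe affected output (e.g. a workflow), but it makes the dependency graph explicit.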
Michael about 1 year ago
Has anyone here ever used Atmos custom commands to extend functionality to support deploying Ansible playbooks? I recently had a use case for this and was curious whether anyone else has been running a similar setup.
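I haven't run this exact setup, but since Atmos custom commands can shell out to anything, a minimal sketch might look like this (the playbook argument and inventory path are hypothetical):

```yaml
# atmos.yaml -- hedged sketch of a custom command wrapping ansible-playbook
commands:
  - name: ansible
    description: "Run an Ansible playbook (hypothetical example)"
    arguments:
      - name: playbook
        description: "Path to the playbook to run"
    steps:
      # the argument is interpolated via Go templating
      - ansible-playbook -i inventories/example/hosts {{ .Arguments.playbook }}
```

Invoked as something like `atmos ansible site.yml`, assuming ansible-playbook is on the PATH.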
Andrew Chemis about 1 year ago
I am trying to run atmos describe affected using a self-hosted GitHub Enterprise. I'm getting the error repository host is not supported.
Is there a work-around? Any plans on supporting this?
athlonz about 1 year ago (edited)
Hello team, I am currently trying to use atmos on Windows (against my free will) on a project, but I get the following path error and cannot pinpoint the issue:
failed to find a match for the import 'C:\Users\andrea.zoli\projects\dbridge\infrastructure\stacks\orgs\dbridge\**\*.yaml' ('.' + 'C:\Users\andrea.zoli\projects\dbridge\infrastructure\stacks\orgs\dbridge\**\*.yaml')
The trace logs seem fine, and the stack files are on that path. The same error occurs in PowerShell, cmd, and mingw64 (git bash for Windows). There is no issue under linux running the same commands.
What am I missing?
Nick Dunn about 1 year ago
Hey, everyone. I'm trying to PoC GitHub Actions with Atmos. I have a very simple job set up, taken straight from the example in the action's repository. The job technically succeeds, but nothing happens. I've created everything needed in AWS and added the integration settings to my atmos.yaml. But there's no output shown, no plan file in the S3 bucket, and no job summary, despite the fact that the job succeeds. I'll include more details in a 🧵 here, but I'm hoping someone can point out the obvious thing I'm missing.
PePe Amengual about 1 year ago (edited)
Looks like something changed in 1.130.0 with atmos vendor; now it can't parse the vendor file:
Run cd atmos/
Vendoring from 'vendor.yaml'
unsupported URI scheme: git::https
Error: Process completed with exit code 1.
1.129.0 works @Andriy Knysh (Cloud Posse)
PePe Amengual about 1 year ago (edited)
And 1.123.1 has a bug too, where the apply fails with Error: invalid character 'â' looking for beginning of value
Andrew Chemis about 1 year ago (edited)
@Erik Osterman (Cloud Posse) I'm using the new YAML templating functions (!terraform.output) and hitting some errors with the Terraform Cloud backend. I am getting different results when running locally vs pushing changes to cloud.
Atmos version 1.130.0
The plan in TFC is a string literal.
Here is my stack definition:
vars:
  enabled: true
  name: vpc
  transit_gateway_id: '!terraform.output tgw transit_gateway_id'
  network_firewall_tgw_route_table_id: '!terraform.output tgw network_firewall_tgw_rt_id'
  network_ingress_egress_tgw_route_table_id: '!terraform.output tgw network_ingress_egress_tgw_rt_id'
When I run locally, it is correctly resolved to a value. When I run in cloud, it is the literal string "tgw transit_gateway_id".
Thomas Spear about 1 year ago
Hi, I am new to atmos. Our team is trying it out so I apologize in advance if I mix up any terms. My colleague has a stack and catalog setup which creates resources in different environments based on various components.
I see in one stack, as an example, we have:
components:
  terraform:
    key-vault:
      settings:
        depends_on:
          1:
            component: "platform-elements"
and this ensures that platform-elements has run for each environment before we create a keyvault.
Now I want to add a global resource related to DNS. This only needs to be created one time because it is global, rather than once per environment. So I thought I would create this from my lowest environment, sandbox, by including it here, as so:
components:
  terraform:
    key-vault:
      settings:
        depends_on:
          1:
            file: "stacks/orgs/frontend/azure/eastus/sandbox/jenkins-private-dns-zone-virtual-network-links.yaml"
          2:
            component: "platform-elements"
But it doesn't seem to do what I expected. How is the file dependency intended to operate? Is there any difference between that and simply doing component: private-dns-zone-virtual-network-links?
My file above includes a catalog file:
components:
  terraform:
    private-dns-zone-virtual-network-links:
      metadata:
        component: private-dns-zone-virtual-network-links
      vars:
        [-snip-]
        iam:
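On the depends_on question above: as I read the docs, a file dependency marks the component as affected when that file changes, which is not the same as ordering it after another component's run. A hedged alternative is to depend on the component itself and pin the stack context, so every environment's key-vault waits on the single global instance (the environment/stage keys below are assumptions based on this setup's naming):

```yaml
components:
  terraform:
    key-vault:
      settings:
        depends_on:
          1:
            component: "private-dns-zone-virtual-network-links"
            # hypothetical: pin the dependency to the sandbox stack
            # where the one global instance is deployed
            environment: "eastus"
            stage: "sandbox"
          2:
            component: "platform-elements"
```

Without the context keys, the dependency would resolve per-stack, which is why the global resource needs to be pinned explicitly.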