55 messages
Petr Dondukov12 months ago
How to pass a value from one helmfile component to another?
Daniel Booth12 months ago
So, I am trying to pass outputs from one component to another. The issue I am having is they are constantly being passed into the following component as a string. Say for example, I have a list of IPs I am trying to pass into a separate component:
vars:
enabled: true
nodes: '{{ (atmos.Component "k8s_masters" .stack).outputs.ips }}'
Daniel Booth12 months ago
The original output is a list, but once I define it as a variable for the second component it is being passed in as a string again.
Daniel Booth12 months ago
throwing the following error:
The given value is not suitable for var.nodes declared at variables.tf:11,1-17: list of string required.
Daniel Booth12 months ago
I can't tell if this is a Terraform limitation or an Atmos limitation, or perhaps I don't understand how the template functions work.
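For what it's worth, Go templates always render to strings, which would explain the list arriving flattened; Atmos's !terraform.output YAML function is documented to preserve the output's original type. A hedged sketch using the names from the message above:

```yaml
vars:
  enabled: true
  # !terraform.output <component> <stack> <output> returns the value
  # with its original type (here, a list of IPs) instead of a string
  nodes: !terraform.output k8s_masters {{ .stack }} ips
```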
Olivier12 months ago
Hi everyone,
I'm having some issues understanding the documentation.
Here's my filesystem layout:
./
  components/
    tofu/
      ecs/
      vpc/
      secrets/
  stacks/
    catalog/
      vault.yaml
      network.yaml
      secrets.yaml
    deploy/
      stage-env.yaml
      _defaults.yaml
I'm trying to deploy Vault, which is effectively an ECS cluster with some apps on it. It should depend on the network and secrets stacks.
However, I'm not sure how to share data between stacks. I've tried to do it as follows (I think that is how the docs intend us to do it?).
In my catalog/vault.yaml I've defined dependencies on network and secrets. In the same file I've tried to use the atmos.Component template function to reference outputs of these two stacks, but I get an error:
"private_subnets_ids": "{{ (atmos.Component \"network\" .stack).outputs.private_subnets_ids }}"
The given value is not suitable for var.private_subnets_ids declared at variables.tf:10,1-31: list of string required
I've also tried to set this in the deploy/stage-env.yaml file, but I get the same issue. I don't understand where I should put these functions and what params I should pass to them.
"{{ (atmos.Component \"vpc\" .stack).outputs.private_subnets_ids }}" also doesn't work.
I think I'm missing some information, as the _defaults.yaml settings are not automatically imported unless I import them with
import:
  - deploy/_defaults.yaml
in my deploy/stage-env.yaml.
Am I doing something wrong?
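A hedged sketch of what catalog/vault.yaml might look like, combining the depends_on settings with the type-preserving !terraform.output function (component and output names are taken from the message above and may not match the real config):

```yaml
components:
  terraform:
    vault:
      settings:
        depends_on:
          1:
            component: network
          2:
            component: secrets
      vars:
        # !terraform.output preserves the list type; a Go template
        # ({{ ... }}) always renders to a string
        private_subnets_ids: !terraform.output network {{ .stack }} private_subnets_ids
```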
Matt Parkes12 months ago
Is Atmos going to support the S3 backend without DynamoDB pattern that Terraform, and now OpenTofu, are adopting: https://github.com/opentofu/opentofu/issues/599
cricketsc12 months ago
Bit of a vague question, but are there any "go to" Atmos based techniques for helping with providers that use a secret for auth? I was considering using a data source to acquire the info from a remote source like AWS Secrets Manager or Vault and then using a TF variable to get the secret from Atmos to Terraform. Does that sound like a reasonable approach?
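The approach described above can be sketched with a gomplate datasource (a hedged example; the aws+sm URL scheme follows gomplate's AWS Secrets Manager datasource, and the secret path and variable name are illustrative assumptions):

```yaml
settings:
  templates:
    settings:
      gomplate:
        datasources:
          # AWS Secrets Manager via gomplate's aws+sm scheme
          provider_secret:
            url: "aws+sm:///myapp/provider-token"
vars:
  # plain Terraform variable that carries the secret into the provider
  provider_api_token: '{{ datasource "provider_secret" }}'
```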
PePe Amengual12 months ago
Is there a way to disable a stack? like
enabled: false in the vars section or something? I was thinking of doing provider upgrades, and I thought about creating a stack in sandbox that gets enabled by a flag; then the pipeline can pick it up and plan and apply all the components defined at once.
PePe Amengual12 months ago
Is there an atmos emoji? 👽️ ?
PePe Amengual12 months ago
what is the difference between
--process-functions=false and --skip for describe affected? Is --skip supposed to disable a specific function type?
PePe Amengual12 months ago
With the new fixes in vendor is it possible now to use
"stacks/**/myfile.yaml" ?
Carter Danko12 months ago
Question regarding the new feature that I can't seem to get working. It seems like when I log atmos with
ATMOS_LOGS_LEVEL=Info atmos terraform plan, which is the default log level we have set up, I don't get any response from that new function on the spinner; but when I do the same command with a different log level -> ATMOS_LOGS_LEVEL=Debug atmos terraform plan, I get the "Fetching output" (along with other debug logs). I don't think that's expected, but I also can't quite track down why it would only show with Debug.
Dan Hansen12 months ago
Is there any documented process around how state files should be maintained as root module state shifts around an Atmos setup?
For example, if a component is inlined & imported into a different root module, a component instance is renamed, etc
burnzy12 months ago
Anyone know how to load a JSON file as a string? Trying to do it with the following:
!include myfile.json @json
But not having any luck.
cricketsc12 months ago
For Atmos helmfile "commands": is the path of a state values file, if specified via flag, relative to the Atmos helmfile base_path?
cricketsc12 months ago
If I run
atmos version I see an indication that it is possible to go up to 1.166.0-rc.2. Is it intentional to have a release candidate as the version it's possible to upgrade to?
mirageAlchemy12 months ago
Is it possible to inherit providers? Say I defined an abstract resource in my catalog and marked it with name: foo in the provider; because I am using the context provider, can I further inherit this resource and mark it with tenant: Mark? My gut feeling tells me that I can, but somehow in the final description name: foo is missing.
Kane11 months ago(edited)
Hi, Could anyone point me in correct direction to resolve this remote state error please:
│ Error: failed to find a match for the import '/infrastructure/environments/odp-prod-azure/components/terraform/azure/keyvault/stacks/orgs/*/.yaml' ('/infrastructure/environments/odp-prod-azure/components/terraform/azure/keyvault/stacks/orgs' + '*/.yaml')
│
│ with module.vnet-simple.data.utils_component_config.config,
│ on .terraform/modules/vnet-simple/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
module "vnet-simple" {
  source    = "cloudposse/stack-config/yaml//modules/remote-state"
  version   = "1.8.0"
  component = "vnet-simple"
  stack     = "azure/prod/uksouth" # << have also tried just "uksouth"
  context   = module.this.context
}
-- trying to use in
virtual_network_subnet_ids = [module.vnet-simple.outputs.public_subnet_id]
-- output in vnet-simple
output "public_subnet_id" {
  description = "The ID of the public subnet"
  value       = element([for s in azurerm_subnet.snet : s.id if length(regexall("public", lower(s.name))) > 0], 0)
}
Daniel Booth11 months ago
When creating a workflow, is there a way I can access the current stack it is being executed on, via an environment variable or something similar?
for example:
echo:
  steps:
    - type: shell
      command: echo $STACK
Daniel Booth11 months ago
as when using a step with type atmos the stack is automatically inherited from the workflow command:
- name: Create network
  command: terraform deploy k8s_network
Daniel Booth11 months ago
I just need a way to access my stack name through shell steps
Joost Pluijmers11 months ago
Hey SweetOps! Good morning from the Netherlands. I've been struggling with my terragrunt/terraform setup for a while now - a not unknown story here I guess. I've been working over the weekend to get a POC working with atmos and tofu. With the last stumbling block being: allowing alternate module sources for local development. People have been search/replacing source urls for local development. Those of course get committed back sometimes, a hassle.
The plan I have, with my newbie understanding, is to use OpenTofu's support for variables in module source parameters. This does work nicely, BUT atmos does not pass the varsfile to the init function. I reckon in hindsight that this is to remain compatible with Terraform. But is there some way to get this vars file to tofu init dynamically?
Or perhaps I've totally missed the point and there is another much better way to do this. Please help me, I've been wrecking my head on this for hours now 🙂
PS, yes I'm also that guy (https://github.com/cloudposse/atmos/issues/1154) :P
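One hedged workaround, assuming OpenTofu's static evaluation accepts -var-file on init and that atmos terraform generate varfile supports writing to a chosen file: wrap both steps in a custom Atmos command (command name and paths are illustrative):

```yaml
commands:
  - name: init-with-vars
    description: Generate the stack varfile, then pass it to tofu init
    arguments:
      - name: component
    flags:
      - name: stack
        shorthand: s
        description: Stack name
        required: true
    steps:
      # write the deep-merged stack vars next to the component
      - atmos terraform generate varfile {{ .Arguments.component }} -s {{ .Flags.stack }} -f local.tfvars.json
      # assumes the default components/terraform base path
      - cd components/terraform/{{ .Arguments.component }} && tofu init -var-file=local.tfvars.json
```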
Nitzan Frock11 months ago
Hello! I'm running into an issue running atmos on a Mac with Apple silicon. This may be a "me" issue, but it prevents me from using it. I am also running on a locked-down system and only have access to my own user space, in case that makes a difference.
Daniel Booth11 months ago
'{{ (atmos.Component "zerossl_eba_credentials" .stack).outputs.kid }}'
Can I use this to access vars from another component? Like:
'{{ (atmos.Component "zerossl_eba_credentials" .stack).vars.kid }}'
Erik Osterman (Cloud Posse)11 months ago
We've just rolled out some new a la carte support offerings! Curious to get your feedback (good or bad — please DM).
https://docs.cloudposse.com/support/
PePe Amengual11 months ago
any ETA on when this could get resolved ? https://github.com/cloudposse/github-action-atmos-get-setting/issues/62 the
cloudposse/github-action-atmos-terraform-plan does not work anymore when using !terraform.output
Ryan11 months ago
Making a new atmos thread. Is there anything on 1.165 that might prevent a stack from being read? I have copied and pasted detected stacks into new stack files with a new stage name and it’s just not seeing it in atmos or when called via CLI at all.
Nitzan Frock11 months ago
Hello! I had a question about how stacks and deployments relate together. Maybe this is me still trying to wrap my head around the organization and structure of designing these stack components, so maybe this is a dumb question, but I was wondering how one would go about creating a stack for an entire "architecture" rather than individual components. For example, I would want to create an entire stack that is something like "request-driven-server", and that will define all the necessary subcomponents it needs; the final atmos terraform deploy command would be atmos terraform deploy request-driven-server -s <stack-name>.
Maybe the question can be approached from the other direction: if I have a workflow that deploys all of these components, each time a new component is added, it will also need to be added to the appropriate stack and workflow? I guess I'm looking for a way to say I want to deploy all components for the dev environment, or be able to quickly spin up a new server with some predetermined architecture/configuration. Hopefully this question makes sense, I appreciate the help!
Pablo Paez11 months ago
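One common way to bundle an "architecture" like the one described above is an Atmos workflow. A hedged sketch with illustrative component names:

```yaml
workflows:
  deploy-request-driven-server:
    description: Deploy every component making up the architecture
    steps:
      # each step inherits the stack passed on the command line,
      # e.g. atmos workflow deploy-request-driven-server -f <file> -s dev
      - command: terraform deploy vpc
      - command: terraform deploy ecs-cluster
      - command: terraform deploy app
```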
Hi 👋🏻
I'm trying to use Gomplate datasources to read secrets from AWS Secrets Manager, but I can't get the plan working due to a permissions issue.
From the error I'm getting, it seems that atmos plan is not using the provider.tf of the component to select the appropriate AWS role of the target account. Instead, it is using the role of the Identity account. I can reproduce the same behavior with the SNS and DynamoDB components.
The docs (https://atmos.tools/core-concepts/stacks/templates/datasources) mention:
env:
  # AWS profile with permissions to access the S3 bucket
  AWS_PROFILE: "<AWS profile>"
Is this necessary? Or is there a way to make the datasource use the same credentials as the component where it is being used?
Thanks in advance for your help!
Zack11 months ago
Is it true that this type of templating is relative to the current file only?
🧵
Andrew Chemis11 months ago
Appears to be a regression in the latest version of atmos, 1.166.0 (confirmed working on 1.165.3): .settings are not correctly being parsed as values.
│ Error: "assume_role.0.role_arn" (arn:aws:iam::{{ .settings.account_id }}:role/terraformcloud_apply_role) is an invalid ARN: invalid account ID value (expecting to match regular expression: ^(aws|aws-managed|third-party|\d{12}|cw.{10})$)
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on provider.tf line 20, in provider "aws":
│ 20: role_arn = format("arn:aws:iam::%s:role/%s", var.account_id, var.automation_role_name)
│
╵
Failed generating plan JSON
Exit code: 1
Failed to marshal plan to json: error marshaling prior state: unsupported attribute "has_secret_string_wo"
Operation failed: 2 errors occurred:
* failed running terraform plan (exit 1)
* failed generating plan JSON: failed running command (exit 1)
Erik Osterman (Cloud Posse)11 months ago
Hey all! We would love it if you could submit your advanced stack configurations to us so we can implement regression tests. These configs should contain nothing you wouldn't want public in the atmos repository. You can submit them to @Andriy Knysh (Cloud Posse) or myself as a DM. This will be invaluable to us for understanding how atmos is used, as well as to others who would benefit from the advanced examples.
Daniel Booth11 months ago
Hi guys, do you have the atmos logo (the little green alien) somewhere I can download?
Daniel Booth11 months ago
for an architecture diagram
Miguel Zablah11 months ago
I was waiting on this PR: https://github.com/cloudposse/atmos/pull/984
but just saw that it got split with this PR: https://github.com/cloudposse/atmos/pull/1149
Do we know when this will get merged? I would like to update Atmos, but I use git SSH and I'd rather wait for this to merge hehe
RB11 months ago
Any common footguns that cause atmos to take a long time processing all the YAML deep merging before starting the terraform workflows?
Dhruv Tiwari11 months ago
Is the new remote state sharing tf function !terraform.output incompatible with the GitHub Actions cloudposse/github-action-atmos-affected-stacks@v6 and cloudposse/github-action-atmos-terraform-plan@v4? Both actions are unable to find the outputs:
affected stacks:
Fetching vpc_cidr_block output from vpc-main in qa-qa-deploy
✗ Fetching vpc_cidr_block output from vpc-main in qa-qa-deploy
Error: Process completed with exit code 1.
I am able to move forward with these settings for affected-stacks:
with:
  atmos-config-path: .
  atmos-include-settings: true
  # atmos-include-dependents: true
  skip-atmos-functions: true
  atmos-version: ${{ env.ATMOS_VERSION }}
  nested-matrices-count: 1
But these fail at atmos plan:
Run cloudposse/github-action-atmos-get-setting@v2
✗ Fetching vpc_cidr_block output from vpc-main in qa-qa-deploy
Error: Error: Command failed: atmos describe component vpc-main-endpoints -s qa-qa-deploy --format=json --process-templates=false
    at genericNodeError (node:internal/errors:984:15)
    at wrappedFn (node:internal/errors:538:14)
    at checkExecSyncError (node:child_process:891:11)
    at execSync (node:child_process:963:15)
    at runAtmosDescribeComponent (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/lib/atmos.ts:8:1)
    at getSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/lib/settings.ts:27:1)
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:40:1
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:38:1
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:38:1
Both vpc-main and vpc-main-endpoints components are in the same stack, and vpc-main has been deployed (using s3 and dynamodb for remote state).
Daniel Booth11 months ago
I appear to be having an issue with the latest update.
I have defined a cluster issuer in my stack like so, using helmfile:
zerossl-cluster-issuer:
  metadata:
    component: cert-manager/cluster-issuer
  settings:
    depends_on:
      1:
        component: "test/cert-manager"
      2:
        component: "test/zerossl_eba_credentials"
  vars:
    enabled: true
    name: zerossl-issuer
    email: EMAIL
    zeroSSLKeyID: '{{ (atmos.Component "zerossl_eba_credentials" .stack).outputs.kid }}'
    zeroSSLKey: '{{ (atmos.Component "zerossl_eba_credentials" .stack).outputs.hmac_key }}'
    cloudflareAPIToken: !env CLOUDFLARE_API_TOKEN
Previously this worked, passing the output from the terraform component zerossl_eba_credentials, although now with the latest update the values are empty when passed into the helmfile. These values are marked as sensitive.
The following is successful:
atmos_1.166.0_darwin_arm64 atmos helmfile apply zerossl-cluster-issuer -s test
Although with the latest version I get this:
STDERR:
Error: 1 error occurred:
* ClusterIssuer.cert-manager.io "zerossl-issuer" is invalid: spec.acme.externalAccountBinding.keyID: Required value
Daniel Booth11 months ago
I can raise an issue on github if you'd like
Daniel Booth11 months ago
In fact, when I apply with the latest version of atmos and then switch back to 1.166.0, I can even see the key being populated correctly in the helmfile diff:
spec:
acme:
externalAccountBinding:
- keyID:
+ keyID: KEY
Daniel Booth11 months ago
Okay, so essentially this only occurs when I use this:
'{{ (atmos.Component "zerossl_eba_credentials" .stack).outputs.kid }}'
when passing tf outputs into helmfile; switching to the !terraform.output function fixed the issue.
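The fix described can be sketched like this (a hedged example using the names from the thread; !terraform.output takes the component, stack, and output name):

```yaml
vars:
  # YAML function instead of a Go template, so the sensitive outputs
  # keep their values and types
  zeroSSLKeyID: !terraform.output zerossl_eba_credentials {{ .stack }} kid
  zeroSSLKey: !terraform.output zerossl_eba_credentials {{ .stack }} hmac_key
```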
Jonathan Rose11 months ago
Hello! I am trying to implement component validation using jsonschema, as seen below:
{
  "$id": "vpc-component",
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "vpc component validation",
  "description": "JSON Schema for the 'vpc' Atmos component.",
  "type": "object",
  "properties": {
    "vars": {
      "type": "object",
      "properties": {
        "create_igw": {
          "type": "boolean",
          "additionalProperties": false
        },
        "enable_dhcp_options": {
          "type": "boolean",
          "additionalProperties": true
        },
        "enable_nat_gateway": {
          "type": "boolean",
          "additionalProperties": false
        }
      }
    }
  }
}
As part of my component, I want to ensure IGW and NATGW cannot be created, but when I test by adding create_igw: true in my dev stack, it still validates correctly. Thoughts?
Igor M11 months ago
How do you guys manage creating CRDs when using Terraform/Helm with Atmos for managing the cluster?
CRDs are typically recommended to be installed separately from the Helm chart.
Jonathan Rose11 months ago
I have a small handful of terraform modules hosted in a private Bitbucket workspace. I bundle atmos (along with terraform tools) in a Docker image. What is the best approach to be able to fetch the privately hosted modules using atmos vendor pull?
Michael Dizon11 months ago
I'm wondering if anyone has ever encountered a scenario where list items for a variable get duplicated:
variable "cluster_parameters" {
  default     = []
  description = "A list of parameters that will be added to the cluster parameter group."
  type = list(object({
    apply_method = string
    name         = string
    value        = any
  }))
}
from yaml config:
vars:
  cluster_parameters:
    - apply_method: "immediate"
      name: "max_connections"
      value: 2000
    - apply_method: "immediate"
      name: "log_output"
      value: "table"
from atmos terraform plan:
cluster_parameters:
  - apply_method: "immediate"
    name: "max_connections"
    value: 2000
  - apply_method: "immediate"
    name: "log_output"
    value: "table"
  - apply_method: "immediate"
    name: "max_connections"
    value: 2000
  - apply_method: "immediate"
    name: "log_output"
    value: "table"
Mathias Betancurt11 months ago
Hello!
I've just migrated to Atmos v1.167.0 (incidentally, because a colleague ran into a bug shipped with it -> https://sweetops.slack.com/archives/C031919U8A0/p1742935112860699) and there was a change in behaviour somewhere so now, the unskippable terraform init that gets executed when using !terraform.output in a component always fails on a local backend (because it always will want to try to migrate from local to local)
Setting TF_INPUT does not seem to help.
Error executing command: Command failed: atmos describe component namespaces -s dev-platform --format json
...
DEBU Found component 'namespaces' in the stack 'dev-platform' in the stack manifest 'environments/dev/platform'
....
DEBU Executing Atmos YAML function: !terraform.output minikube dev-infra kubernetes_provider.client_certificate
DEBU No TTY detected. Falling back to basic output. This can happen when no terminal is attached or when commands are pipelined.
...
DEBU Found component 'minikube' in the stack 'dev-infra' in the stack manifest 'environments/dev/infra'
...
DEBU Executing 'terraform init minikube -s dev-infra'
✗ Fetching kubernetes_provider.client_certificate output from minikube in dev-infra
FATA Failed to execute terraform output component=minikube stack=dev-infra
error=
│ exit status 1
│
│ Error: Error asking for state migration action: input is disabled
Jonathan Rose11 months ago
What is the recommended way to save planfiles as JSON? I would like to include them in Checkov scanning!
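One possible approach (a hedged sketch, not an official recommendation): plan to a file and convert it with terraform show -json, wrapped as a custom Atmos command. The command name and paths are illustrative, and terraform show must run in the initialized component directory:

```yaml
commands:
  - name: plan-json
    description: Write a JSON planfile (e.g. for Checkov)
    arguments:
      - name: component
    flags:
      - name: stack
        shorthand: s
        description: Stack name
        required: true
    steps:
      - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }} -out=plan.tfplan
      # assumes the planfile lands in the default component base path
      - cd components/terraform/{{ .Arguments.component }} && terraform show -json plan.tfplan > plan.json
```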
Patrick McDonald11 months ago
Hey all, I updated to Atmos v1.167.0, and now my custom terraform subcommands are breaking. For example:
atmos terraform generate-cataglog vpc
Incorrect Usage
Unknown command generate-cataglog for atmos terraform
Looks like it's no longer recognizing the custom subcommand. Anyone else run into this or know if something changed around custom command support in the latest version?
Pablo Paez11 months ago
Hi 👋🏻
We are using Atmos with Terraform (and AWS), and we are getting more drift than desired due to some tools adding extra tags to Atmos-created resources.
Is there "an easy way" to set the lifecycle ignore_tags globally?
RB11 months ago
Silly question: where is fixed or short defined in the yaml?
Petr Dondukov11 months ago
Hi all! I have a question. If I want to use terraform module source code, for example from GitHub, do I need to create a vendor config to pull it into the repository? Does components.terraform.*.metadata.component only support a directory?
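A hedged sketch of a vendor manifest that pulls a module from GitHub into a local component directory (the source, version, and target path are illustrative):

```yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: example-vendor-config
spec:
  sources:
    # pulls the remote module into a local directory that
    # metadata.component can then point at
    - component: vpc
      source: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{ .Version }}
      version: 1.0.0   # illustrative version
      targets:
        - components/terraform/vpc
```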
Sean Nguyen11 months ago(edited)
Hey all.
Maybe this belongs in #terraform, but I was wondering if there is a canonical way in Terraform to list all the stacks a component has been deployed to?
We have a certain application which is deployed via a component (let's call this APP) in many stacks. We have another component which manages our identity provider (ID_PROVIDER), from which we centrally manage many user groups. The idea is that ID_PROVIDER would be able to find all real instantiations of APP and generate a list of user groups to manage.
We've considered simply updating APP to define the user groups there, but didn't want to mix another TF provider and configuration into an already complex component.
Miguel Zablah11 months ago
Hi! I have a question: when a component inherits an abstract one, should it pass along the custom provider settings? Or do settings not pass through component inheritance?
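A hedged sketch of the pattern being asked about; only metadata.type: abstract and metadata.inherits are standard Atmos here, and the shape of the provider block is an illustrative assumption:

```yaml
components:
  terraform:
    base-resource:
      metadata:
        type: abstract        # never deployed directly
      providers:
        context:
          values:
            name: foo
    my-resource:
      metadata:
        inherits:
          - base-resource     # expectation: providers deep-merge from the base
      providers:
        context:
          values:
            tenant: Mark
```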