37 messages
Gheorghe Casian · 6 months ago
Hello,
The atmos describe affected --format yaml --file components.yaml --repo-path tmp/ command is running slower than usual today. It now takes about 13 minutes to complete, whereas it previously took only 1-2 minutes. I also noticed it has started to fetch the outputs configured with the !terraform.output function, as shown in the screenshot. The atmos version is 1.178.0. Any ideas what happened with the atmos describe affected command?

Prajeesh Chandran · 6 months ago
Hi! There's a note on the terraform-aws-components repo about upgrading to use the components from the new individual repos. Should we start using the latest versions of each component directly from the individual repos, or is there a recommended approach to avoid version incompatibilities across components?
Jonathan Rose · 6 months ago
Hello! I have a use case where I need to define two aws providers, one of which has an alias (e.g. second_account). When I define the following in the stack:

terraform:
  providers:
    aws:
      alias: second_account
      region: "us-east-1"
      version: "5.100.0"

The result I get is:

{
  "provider": {
    "aws": {
      "alias": "second_account",
      "default_tags": {
        "tags": {
          "Environment": "platform",
          "Namespace": "cfsb",
          "Stage": "dev",
          "Tenant": "it"
        }
      },
      "region": "us-east-1",
      "version": "5.100.0"
    }
  }
}

I'm guessing this is because it sees two providers called aws and tries to consolidate the configs?

Eric Skaggs · 6 months ago
Not sure if this is the correct channel or not. In our hub/spoke setup we are currently only in us-east-1 and want to add another region. Where would I define the new region within the stacks/catalog/components?
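The usual Atmos layout, sketched below with illustrative file paths and names (not taken from this thread), is a per-region mixin that stacks import, so adding a region means adding one mixin plus one stack file that imports it alongside the existing catalog entries:

# stacks/mixins/region/us-west-2.yaml (illustrative)
vars:
  region: us-west-2

# stacks/orgs/acme/hub/us-west-2.yaml (illustrative)
import:
  - mixins/region/us-west-2
  - catalog/vpc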
Miguel Zablah · 5 months ago
Hey! I'm trying to get the GitHub workflow Component Updater to work, but it's not working for me, and I think the docs may be a bit outdated. Are there any examples of this working?
Here is my workflow:

name: 👽 Component Updater
on:
  workflow_dispatch:
  schedule:
    - cron: '0 8 * * *'
permissions:
  contents: write
  issues: write
  pull-requests: write
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Update Atmos Components
        uses: cloudposse/github-action-atmos-component-updater@v2
        with:
          max-number-of-prs: 5
          github-access-token: ${{ secrets.GITHUB_TOKEN }}
          include: |
            vendor/*
            cloudflare/*
            eks/*
          exclude: |
            aws/*

The components that I use are under the vendor/ directory inside the components path (components/terraform/), and my vendor manifest is called vendor.yaml. It sits at the repo root and is set in the atmos.yaml file as follows:

vendor:
  base_path: "./vendor.yaml"

It's also set the same way in rootfs/usr/local/etc/atmos/atmos.yaml, but when I run this action it does not find any components. Any idea why this is not working for me?
Bart Palmowski · 5 months ago
Can I somehow dump the "describe" output generated by atmos, even when it is invalid?
Jonathan Rose · 5 months ago
I have a question around local development. We currently distribute our Atmos IaC service catalog as a Docker image; however, some developers are resistant to this approach because they are unable to use Docker due to constraints (e.g. using VDIs that don't permit Docker). Has anyone else run into this issue? If so, what was the solution?
Phani Kotcharlakota · 5 months ago
Hey all 👋
I have a stack manifest that imports an account mixin:

import:
  - path: mixins/account/prod1
  - catalog/aurora-cluster/prod

components:
  terraform:
    aurora-cluster:
      metadata:
        inherits:
          - aurora-cluster/prod
      vars:
        var1: value1
        cluster_role: "arn:aws:iam::<account-id>:role/rds-s3-import-role"

In my mixin (mixins/account/prod1.yaml) I've defined:

vars:
  account_id: 123456789

What's the right way in Atmos to substitute account_id here so that cluster_role automatically expands to use the account from the mixin? Is it as simple as referencing vars.account_id, or is there a more standard pattern for this in Atmos configs?
Thanks in advance!
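One documented pattern, sketched here and assuming templating is enabled in atmos.yaml (not a verified answer for this exact setup), is Go template interpolation so the mixin's account_id expands inside cluster_role:

# atmos.yaml — templating must be enabled for {{ ... }} to be processed
templates:
  settings:
    enabled: true

# stack manifest
components:
  terraform:
    aurora-cluster:
      vars:
        cluster_role: "arn:aws:iam::{{ .vars.account_id }}:role/rds-s3-import-role"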
Miguel Zablah · 5 months ago
Hey, I have a question: does atmos !store or !store.get support AWS Secrets Manager, or is it only SSM?
https://atmos.tools/core-concepts/stacks/yaml-functions/store.get
PePe Amengual · 5 months ago
This might be another bug, but I'm not sure:

# Error component is locked: component 'servicebus' cannot be modified (metadata.locked: true)

But the component is called:

servicebus/pepe:
  settings:
    depends_on:
      1:
        component: "servicebus"
  metadata:
    component: servicebus

We do have a servicebus component deployed with the name servicebus, and that one is locked, so I wonder if Atmos is getting confused.

Thomas Spear · 5 months ago
Hi, we have 68 stacks to run through in our dev environment and most of them use the azurerm provider, for instance, so it would be great to not download the provider 68 times. If the first stack downloads the azurerm provider, is there a way to make the later stacks reuse that already downloaded provider?
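Terraform's own plugin cache addresses this; one way to wire it through Atmos, sketched here (the cache directory is illustrative and must already exist on the runner), is a global env section so every component's terraform invocation reuses already-downloaded providers:

# any globally imported stack manifest (illustrative)
env:
  TF_PLUGIN_CACHE_DIR: "/home/runner/.terraform.d/plugin-cache"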
Sam · 5 months ago
Hey Atmos gurus.
I have tried reading as much documentation as I can, and scouring through this forum, but I cannot seem to find a fix for my team's specific use case.
• We recently deployed a component with required_providers set to aws version >= 6.0. The tech that deployed the component did so using aws 6.08.
• Another tech came along 2 weeks later to fix something else in that codebase and was using aws 6.11. This had a change to how some default variable was set, and they ran an atmos terraform apply that made some changes. Tech #1 came back and his plan was no longer clean, due to his older provider wanting to change back the 6.11 change.
  ◦ Normally we would use terraform init --upgrade to avoid this, OR set specific version pinning such as required_providers = aws 6.08 (however your doco suggests against this), OR commit the .terraform.lock.hcl file.
Is there an argument in Atmos to help us manage this, without having to lock our required_providers to specific versions rather than a >=?

Sean · 5 months ago
Trying to test out atmos-pro. Workflows trigger when merging to the pro-flow branch instead of main, so that we can continue to manage our environment without any conflict. We run GitHub Actions with a GitHub Actions runner in our EKS environment. The "Trigger Affected Workflows" step errors out with:

# Error
failed to upload stacks: 500 Internal Server Error

Not sure what I should be trying to troubleshoot for this. Is this the action trying to push to a workspace in atmos-pro.com?
Thomas Spear · 5 months ago
Some of the code examples on the website seem to be missing (pictured is the GHA integrations Terraform Apply page at link)
I've tried Chrome, Brave, and Safari but all have the same issue so I don't think this is a browser or client side issue.
Dan Hansen · 5 months ago (edited)
We're looking at starting to use Atmos workflows. Are folks using TF_VAR_ environment variables to pass apply-time values to the workflow? For example, specifying a version tag (var.version_tag) that is used in the Terraform components' resource definitions.
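One option for this, sketched with an illustrative component name and tag value, is to set the TF_VAR_ variable through the component's env section, so every terraform command Atmos runs for that component sees it:

components:
  terraform:
    myapp:   # illustrative component
      env:
        TF_VAR_version_tag: "v1.2.3"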
Bruce · 5 months ago
For the AWS support tier in each account, do we change that manually, or is that a setting configured in the account components somewhere?
Ian Cornett · 5 months ago (edited)
I'm trying to run github-action-atmos-terraform-plan@v5.2.1 and running into an issue late in the run, at the step named Publish Summary or Generate GitHub Issue Description for Drift Detection, where it generates an invalid path for the working directory:

Error: An error occurred trying to start process '/usr/bin/bash' with working directory '/home/runner/work/terraform-<redacted>/terraform-<redacted>/.//home/runner/work/terraform-<redacted>/terraform-<redacted>/components/terraform/aws/api-gateway'. No such file or directory

Given that the path is baked in with working-directory: ./${{ steps.vars.outputs.component_path }}, how do I resolve this? I would imagine more people would be hitting it if it were a bug.

Duffy Gillman · 5 months ago (edited)
I'm trying to configure a GitHub OIDC provider and several roles to use with the provider, using atmos and two modules from cloudposse-terraform-components. I'm running into a snag when I try to pass the OIDC provider ARN to the OIDC role module. I'm not sure if this is an atmos bug or a configuration problem on my end. Here are the details:

Two vendored modules: aws-github-oidc-provider and aws-github-oidc-role.

I have included these in my dev stack with these configurations:

./stacks/dev/dev-github-oidc-provider.yaml:

import:
  - ./_defaults
components:
  terraform:
    dev-github-oidc-provider:
      metadata:
        component: "github-oidc-provider"
      vars:
        name: "dev-github-oidc-provider"

./stacks/dev/cicd-platform-oidc-role.yaml:

import:
  - ./_defaults
components:
  terraform:
    cicd-platform-oidc-role:
      metadata:
        component: "github-oidc-role"
      vars:
        github_oidc_provider_id: !terraform.output dev-github-oidc-provider oidc_provider_arn
        organization: "REDACTED"
        github_actions_allowed_repos: ["REDACTED"]
        iam_policy:
          - version: "2012-10-17"
            statements:
              - effect: "Allow"
                actions:
                  - "ecr:CompleteLayerUpload"
                  - "ecr:GetAuthorizationToken"
                  - "ecr:UploadLayerPart"
                  - "ecr:InitiateLayerUpload"
                  - "ecr:BatchCheckLayerAvailability"
                  - "ecr:PutImage"
                resources:
                  - !terraform.output ecr-platform-api global repository_arn

When I try to run atmos terraform apply cicd-platform-oidc-role -s dev, the Terraform apply fails with this message:

Terraform has been successfully initialized!
Switched to workspace "dev-cicd-platform-oidc-role".
╷
│ Error: Failed to read variables file
│
│ Given variables file dev-github-oidc-provider.terraform.tfvars.json does not exist.

Jonathan H · 5 months ago
Hey all, I'm new to atmos as my team has recently adopted it and we are migrating our legacy terraform over. I've read quite a lot of the docs but still learning to think in the atmos way.
What I'd like to do is have a stack containing my resources, my-resources.yaml, and that set of resources needs secrets. Our standard as a team is to encrypt secrets in the same git repo as our IaC using sops. So ideally, in the same directory as my-resources.yaml, I want to have my-resources.sops.yaml (excluded in the atmos.yaml configuration) which contains the secrets. Every different my-resources.yaml in different regions/environments would have a different sops file.

In order to do this, I would like to reference ./my-resources.sops.yaml in my-resources.yaml and pass it as a var. What I DON'T want is to have to reference the relative path from the components directory in the stack, i.e. ../../stacks/my-account/my-region/my-env/my-resources.sops.yaml, because that's too easy to goof up when copy/pasting examples and such. I'm getting stuck here because every yaml function seems to run from the context of the terraform component directory, like !exec. I couldn't figure out any trickery that gets me a variable with the absolute path to the stack directory without including the relative path somewhere.

I was able to get this to sort of work using !include. Including the file directly, unfortunately, parses the yaml that the file contains. I need to convert it back into a string, but in that process the contents are changed slightly (probably whitespace), and sops doesn't like that because a MAC signature is wrong. I tried converting the sops file on disk into a base64-encoded version, and then !include doesn't parse it because it just sees text. I'm able to base64-decode the string in my terraform module, and then I can decrypt the sops content successfully. Of course doing that is jumping through a major hoop of base64 instead of just committing my sops file. I wish there was a way to NOT have !include parse the data, and just keep it as a string of bytes.

So I have a couple questions:
• How are other people doing secrets like this? Is anybody using sops in this way?
• Am I thinking in a non-atmos way about this? If so, what's the atmos way to think about it?
• Any other suggestions for how to handle a sops-encrypted file smoothly?
Armon · 5 months ago
Hey all,
Does someone have tips on how to debug which file(s) is causing this?

> atmos validate stacks
DEBU Set logs-level=debug logs-file=/dev/stderr
cannot override two slices with different type ([]interface {}, string)
cannot override two slices with different type ([]interface {}, string)
cannot override two slices with different type ([]interface {}, string)
[...the same line repeated ~15 times in total...]
DEBU Telemetry is disabled, skipping capture
Error
merge error: mergo merge failed: cannot override two slices with different type ([]interface {}, string)
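A likely culprit, sketched below with hypothetical file names, is the same key defined as a list in one manifest and as a string in another; bisecting imports until the error disappears is one way to find the offending pair:

# stacks/catalog/app.yaml (hypothetical)
components:
  terraform:
    app:
      settings:
        depends_on: ["vpc"]   # a list here...

# stacks/dev/app.yaml (hypothetical)
components:
  terraform:
    app:
      settings:
        depends_on: "vpc"     # ...but a string here: mergo cannot merge the two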
Love Eklund · 5 months ago
Hey, I recently joined a company that uses Atmos. I'm wondering how people are versioning their modules per stack. At the moment our stack might look like this, for example:

components:
  terraform:
    postgresql:
      metadata:
        component: postgresql
      vars:
        auto_grow_enabled: false
        high_availability: null
        sku_name: "B_Standard_B2s"

But now if I change the component, all stacks that use that component (for example, dev, staging, prod) will get that change, but I might only want to deploy it to staging first.
All I can find about this in the documentation is:
https://atmos.tools/best-practices/components/#version-components-for-breaking-changes
But I suspect someone smart has come up with a better way using less copying of code, maybe using commit hashes or something. If you have a smarter idea, please let me know, thanks!
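The linked best practice can be made less copy-heavy by vendoring each version into its own target directory and repointing metadata.component per stack; a sketch, with an illustrative source repo and versions:

# vendor.yaml (repo and versions illustrative)
spec:
  sources:
    - component: "postgresql/v1"
      source: "github.com/acme/terraform-azure-postgresql.git?ref={{.Version}}"
      version: "v1.0.0"
      targets: ["components/terraform/postgresql/v1"]
    - component: "postgresql/v2"
      source: "github.com/acme/terraform-azure-postgresql.git?ref={{.Version}}"
      version: "v2.0.0"
      targets: ["components/terraform/postgresql/v2"]

# staging stack: points at v2 while prod keeps v1
components:
  terraform:
    postgresql:
      metadata:
        component: postgresql/v2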
Junk · 5 months ago
I upgraded Atmos from version 1.180.0 to 1.190.0 (latest at the moment). After the upgrade, I noticed some unexpected behavior when running atmos vendor pull. It may be a mistake on my side or something I overlooked in the update notes, but the behavior differs from the previous version.

With the following vendor configuration file, Atmos 1.180.0 correctly reads the vendor declaration, generates the Terraform root component in the specified directory, and includes the expected contents. However, in 1.190.0, only the directory gets created, and it remains empty without any of the contents that should be included.

When I set the atmos log level to debug, I see warnings about glob expression matching, but I don't fully understand what they mean.

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: vendor-config
  description: Atmos terraform component vendoring manifest
spec:
  imports: []
  sources:
    - component: s3-bucket
      source: github.com/terraform-aws-modules/terraform-aws-s3-bucket.git///?ref={{.Version}}
      version: v5.7.0
      targets: ["components/terraform/s3-bucket"]
      included_paths:
        - "**/modules/**"
        - "**/*.tf"
        - "**/README.md"
        - "**/CHANGELOG.md"
        - "**/LICENSE"
      tags:
        - aws
        - storage

Is this a bug, or could it be a misconfiguration on my side? I tested this on three different devices and reproduced the same behavior in all cases.
Duffy Gillman · 5 months ago (edited)
I am getting an unexpected error using !terraform.state and !terraform.output. I am trying to gather ARNs from a set of ECR repositories to add them to a policy allowing a CI/CD user to push images. I cannot find a syntax that works. Here is the original attempt (with names redacted/changed to protect the innocent):

import:
  - ./_defaults
  - path: catalog/ecr-deployer-role
    context:
      ecr_repository_arns:
        - !terraform.state image1-ecr global repository_arn
        - !terraform.state image2-ecr global repository_arn
      git_repository: bogus_repo_name

When atmos processes this file, I get:

"invalid number of arguments in the Atmos YAML function !terraform.state image1-ecr global repository_arn !terraform.state image2-ecr global repository_arn"
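One untested workaround sketch (component name illustrative): move the !terraform.state calls out of the import context and into the component's own vars, where YAML functions are processed per list item — whether import-context lists get that same treatment is exactly what the error calls into question:

components:
  terraform:
    cicd-ecr-deployer-role:   # illustrative component name
      vars:
        ecr_repository_arns:
          - !terraform.state image1-ecr global repository_arn
          - !terraform.state image2-ecr global repository_arn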
Ricky Fontaine · 5 months ago
Hey all! I have a question about the aws-github-repository component.

I'm trying to get started with the component by importing a couple of repos. When I run atmos terraform plan on my local machine, where I'm logged into GitHub, the plan runs without issue. For CI/CD, we use the GitHub Actions provided by atmos. When I raise a PR, the workflow fails with the following error:

Planning failed. Terraform encountered an error while generating this plan.

Error: Cannot import non-existent remote object

While attempting to import an existing object to
"module.repository.github_repository.default[0]", the provider detected that
no object exists with the given id. Only pre-existing objects can be
imported; check that the id is correct and that it is associated with the
provider's configured region or endpoint, or use "terraform apply" to create
a new remote object for this resource.

# Error
exit status 1

Looking elsewhere online, I think this is due to how the GitHub token is configured. Since I started getting this error, I've tried passing in the GITHUB_TOKEN secret as an environment variable to the action. Is there any guidance on how to authenticate the Atmos GitHub Actions workflows with the Terraform GitHub provider? Let me know if you need more info, config files, etc. Thanks!
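The Terraform GitHub provider reads the GITHUB_TOKEN environment variable, and the default workflow token often lacks the repo-administration scope that managing repositories needs; a sketch (secret name, action version, and inputs are illustrative) of passing a PAT or GitHub App token into the plan step:

- name: Plan Atmos Component
  uses: cloudposse/github-action-atmos-terraform-plan@v5   # version illustrative
  with:
    component: "aws-github-repository"
    stack: "dev"
  env:
    GITHUB_TOKEN: ${{ secrets.GH_ORG_ADMIN_TOKEN }}   # illustrative PAT with repo administration scope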
Thomas Spear · 5 months ago (edited)
Hi all, in atmos 1.182.0 when running
atmos terraform clean sometimes we see this output where it tries twice to delete the files, and errors out the second time since the first attempt was successful. There seems to be no rhyme nor reason for this behavior, as it happens across different components on different runs of our workflow, and sometimes doesn't happen. So I wanted to ask about it. It doesn't seem to have any ill-effects, but any ideas what's causing this?✓ Deleted project-name/components/terraform/vnet-elements/.terraform/
✓ Deleted project-name/components/terraform/vnet-elements/vnet-elements-eastus-prd-vnet-elements.terraform.tfvars.json
✓ Deleted project-name/components/terraform/vnet-elements/vnet-elements-eastus-prd-vnet-elements.planfile
✓ Deleted project-name/components/terraform/vnet-elements/.terraform.lock.hcl
✓ Deleted project-name/components/terraform/vnet-elements/backend.tf.json
x Cannot delete project-name/components/terraform/vnet-elements/.terraform/: path does not exist
x Cannot delete project-name/components/terraform/vnet-elements/vnet-elements-eastus-prd-vnet-elements.terraform.tfvars.json: path does not exist
x Cannot delete project-name/components/terraform/vnet-elements/vnet-elements-eastus-prd-vnet-elements.planfile: path does not exist
x Cannot delete project-name/components/terraform/vnet-elements/.terraform.lock.hcl: path does not exist
x Cannot delete project-name/components/terraform/vnet-elements/backend.tf.json: path does not existJunk5 months ago
Would it be difficult to use the components in cloudposse-terraform-components with CloudPosse Atmos if we don't structure them into an AWS Organizations configuration like account-map?

Alex K · 5 months ago
hey everyone
Ran into an issue with the aws-team-roles component. When planning changes for the component, it throws this error:

│ Error: template: catalog/services/<service>/_defaults.yaml.tmpl:3:57: executing "catalog/services/<service>/_defaults.yaml.tmpl" at <.settings.config.short_stage>: map has no entry for key "settings"
│
│ with module.iam_roles.module.account_map.data.utils_component_config.config[0],
│ on .terraform/modules/iam_roles.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│

It looks like the component tries to read stack yamls, but does not consider the file a template and does not render it.
atmos v1.168.0, aws-team-roles v1.526.0. Please advise how to debug the issue.
Junk · 5 months ago
I'm trying to migrate from the old null-label module to the new context provider/data sources.
Here's an example of what I'm attempting:

data "context_config" "this" {}
data "context_label" "this" {}

module "this" {
  source  = "cloudposse/vpc/aws"
  version = "~> 2.2.0"

  context = data.context_label.this

  ipv4_primary_cidr_block          = "10.16.0.0/20"
  assign_generated_ipv6_cidr_block = false
}

Is this an officially supported usage pattern? Can I inject data.context_label.this (or even data.context_config.this) directly into the context input of the VPC module? Are there any pitfalls, edge cases, or recommendations for making this work smoothly? I want to ensure I benefit from all the new provider's features and maintain compatibility with official CloudPosse modules with Atmos.
Cyberjesus · 5 months ago
Has the tpl function (tmpl.Inline) from gomplate been disabled in Atmos? We have tried using both, but it complains that the function does not exist, so we had to resort to this abomination that renders a template to be rendered on the second pass:

{{- print `{{ toJson (data.YAML ` "`" }}
#build a template to render on the second pass
{{- print "`" `) }}` }}

The fact that data.YAML works proves that we have gomplate enabled, as well as sprig's toJson.

Jonathan Rose · 5 months ago
Can't wait for Support Atmos Toolchain for 3rd Party Tools by samtholiya · Pull Request #1466 · cloudposse/atmos to be ready lol
Jonathan H · 5 months ago
Do the off-the-shelf atmos github actions only handle the terraform stacks, or can they do other supported things like helmfile? I've got a bunch of helm charts that are deployed using homegrown scripts, but migration to atmos-managed might make sense if it's not too big of a lift.
If the actions don't do helmfile, what parts are not supported that I'd have to script in gha if I wanted to deploy them with atmos?
Jonathan H · 5 months ago (edited)
I think the most recent atmos release may have broken some things. Our pipelines that use the GHA actions are getting this near the end:

Error: An error occurred trying to start process '/usr/bin/bash' with working directory '/home/runner/_work/infrastructure/infrastructure/.//home/runner/_work/infrastructure/infrastructure/atmos/components/iam-role-legacy'. No such file or directory

"Normalize path separators" sounds suspiciously related to the error I'm seeing there. When I pin it to 0.191.0 it succeeds.

Thomas Spear · 5 months ago
Hi, does atmos have any ability to migrate component resources from one workspace to another? Since atmos terraform destroy works at the component level, I need to split up a module used by one component into two separate modules used by two separate components, so that we can have a workflow that destroys some of the provisioned resources without destroying all of them. I'm looking for a method that doesn't involve creating the new workspaces and running through a lot of import {} blocks, if at all possible.

RB · 5 months ago (edited)
I can't seem to grab submodules when I try to do this in my vendor.yaml file.

✗ atmos vendor pull --component account-map
✗ ls components/terraform/account-map/modules
ls: cannot access 'components/terraform/account-map/modules': No such file or directory

- component: "account-map"
  source: "https://github.com/cloudposse-terraform-components/aws-account-map//src?ref=v{{.Version}}"
  version: "1.535.2"
  targets:
    - "components/terraform/{{.Component}}"
  included_paths:
    - "**/*"
    - "**/modules/**"

Thomas Spear · 5 months ago
Is there any way to set an expiration date on an Azure KeyVault secret created by a store configuration in Atmos? Our org has an Azure Policy enabled that doesn't allow secrets to be created without an expiration date, and it seems atmos hangs with no error. At least that's what @Angel Bermudez mentioned in testing just now.
Duffy Gillman · 5 months ago
I am trying to use gomplate functions to select a single value out of a list and am having no luck. Are general gomplate functions supported? I'm trying variations on this:

components:
  terraform:
    foo:
      vars:
        subnets: !template '{{ toJson ((atmos.Component "subnets" .stack).outputs.private_subnet_ids | coll.Index 0) }}'

All coll.* functions I attempt to use yield a Go error message like "can't evaluate field Index in type interface {}".
I'm using this as a source for gomplate functions: https://docs.gomplate.ca/functions/
Are these unsupported? And if so, is there advice on slicing or selecting from output lists?
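One untested variation: Go's built-in index function (always available in text/template, unlike the gomplate coll namespace) instead of coll.Index, which sidesteps the interface{} typing issue in the pipeline:

components:
  terraform:
    foo:
      vars:
        subnets: !template '{{ toJson (index ((atmos.Component "subnets" .stack).outputs.private_subnet_ids) 0) }}'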
Thomas Spear · 5 months ago (edited)
Any tips on why this:

components:
  terraform:
    postgres-database-base:
      metadata:
        type: abstract
      vars:
        resource_names:
          application_schema_name: "{{ .vars.application | replace "-" "_" }}"

Produces:

invalid stack manifest 'catalog/postgres-database-base.yaml'
yaml: line X: did not find expected key

.vars.application is properly defined; removing the | replace "-" "_" avoids the error, but then produces the .vars.application value as expected, containing dashes instead of underscores. Atmos 1.182.0
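The parse error is consistent with YAML quoting rather than the template itself: the inner double quotes around "-" and "_" terminate the double-quoted scalar before the YAML parser reaches the end of the line. A sketch of the usual fix is single-quoting the whole scalar:

components:
  terraform:
    postgres-database-base:
      metadata:
        type: abstract
      vars:
        resource_names:
          # single quotes let the inner double quotes pass through to the template engine
          application_schema_name: '{{ .vars.application | replace "-" "_" }}'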