45 messages
kofi 10 months ago (edited)
Hello!
We are trying to use the !exec function to execute a script that returns a JSON-encoded dict, but we are hitting multiple issues/questions:
1. If we try to include a whole file using vars: !exec ./my_script args, the command is not executed at all (no mention of "INFO Executing Atmos YAML function input=" and no vars in tfvars.json).
2. If we assign to a single var, it gets executed (and we can see it in the tfvars.json), but if we try to fetch the var later on, it fails (then no tfvars.json at all; templating error: can't evaluate field my_var in type interface {}).
Is that clear enough? Do you have some ideas why this would happen?
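For context, a hedged sketch of the second pattern described above (assigning !exec to a single var rather than to the whole vars map); the component name is a placeholder:

```yaml
components:
  terraform:
    my-component:
      vars:
        # !exec runs the command and uses its stdout as the value;
        # this per-variable form is the one reported as executing.
        my_var: !exec ./my_script args
```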
Phil Christensen 10 months ago (edited)
Hi folks! I've been using Atmos for a while now and love it, but I'm running into a small issue trying to update to Terraform 1.11. It seems that Atmos is still using the deprecated backend assume_role syntax, so now that it's been completely removed, I get this error:
│ Error: Unsupported argument
│
│   on backends/eaze-staging-roles.tf line 7:
│    7:   role_arn = "arn:aws:iam::273354670955:role/eaze-atlantis-admin"
│
│ An argument named "role_arn" is not expected here.
What is the preferred way to deal with this issue?
Thanks in advance for any help...
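Since Terraform moved the s3 backend's role settings into a nested assume_role block, one likely fix is to nest the role in the stack-level backend config so the generated backend file uses the new syntax. A sketch, assuming an Atmos-managed s3 backend; the bucket name is a placeholder and the role ARN is taken from the error above:

```yaml
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "my-state-bucket"  # placeholder
      # nested form replaces the removed top-level `role_arn` argument
      assume_role:
        role_arn: "arn:aws:iam::273354670955:role/eaze-atlantis-admin"
```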
RB 10 months ago
I'm using the Atmos-Atlantis integration, and the Terraform steps at the bottom of the docs do not seem to mention the backend configs. If you run it as per the docs and you affect multiple stack YAMLs in the same component, it will fail because it generates multiple backends.
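One hedged workaround for the multiple-backends problem: regenerate the backend for the current Atlantis project right before planning, so each project writes only its own backend config. A sketch of an Atlantis custom workflow; PROJECT_NAME and WORKSPACE are env vars Atlantis exposes to custom workflows, and the assumption that they map directly to the Atmos component and stack depends on how the integration names projects:

```yaml
workflows:
  atmos:
    plan:
      steps:
        # regenerate the backend for this project only
        - run: atmos terraform generate backend "$PROJECT_NAME" -s "$WORKSPACE"
        - init
        - plan
```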
RB 10 months ago
Also, using the Atmos Atlantis integration and not committing the atlantis.yaml, tfvars, or the backend: I cannot just run the atmos atlantis build-all command in the pre-workflow run, because it only affects the default workspace (each workspace is a unique clone of the repo in Atlantis). So the atmos.yaml and atlantis.yaml are only created in the default/ directory, meaning the atmos.yaml file is missing from the ue1-prod/ directory and similar.
Bart Palmowski 10 months ago (edited)
Hi, I'm learning Atmos, and my use case is that I'm creating a lot of RDS databases (in a single AWS account), but I'm not sure how to express that in Atmos. Should I have:
1. multiple stacks, each with the name of the database I'm creating (one stack per RDS, one component each), or
2. one stack with many components (one component per RDS, so one stack, many components)?
Bart Palmowski 10 months ago
One more question: I could not find a proper reference/spec document for stacks and components. For example, I can't find terraform_workspace documented anywhere outside of the schema or source code: https://github.com/cloudposse/atmos/blob/v1.174.0/pkg/datafetcher/schema/config/global/1.0.json#L433
Bart Palmowski 10 months ago
Hi, I'm trying to use vendor.yaml to pull Terraform modules from a private registry, but nothing seems to work. Any idea how I could make it work?
$ atmos vendor pull
INFO Vendoring from 'vendor.yaml'
x myregistry.tld/platform/mymodule/default (1.2) Failed to vendor myregistry.tld/platform/mymodule/default: error : myregistry.tld/platform/mymodule/default: failed to download package: relative paths require a module with a pwd
Vendored 0 components. Failed to vendor 1 components.
Error
failed to vendor components: 1
marksie1988 10 months ago (edited)
Hi All, I have just started to use Atmos for our infrastructure after many issues with Terragrunt. We use GCP, and I have set up our stacks in this project structure:
deploy/
deploy/global-roles.yaml
deploy/projects/infra/bootstrap.yaml
deploy/projects/infra/github.yaml
Each stack will be for a specific tool/app, e.g. GitHub WIF.
There are a few things I don't understand, and I was wondering if there are some videos explaining them or if someone could help me:
1. How do I use the outputs from the bootstrap in the github stack? I need the project_id from bootstrap.
2. I'm not sure of the best way to do my stack name_pattern. I thought about {project}-{stack}, but global-roles isn't in a project.
3. I see a lot about tagging with the context provider. I'm not sure what the best tagging structure is, as we don't have "environments" like dev/prod etc., which tends to be the documented way to tag. All I know is that everything should have a "managed_by" tag for Terraform so that people don't mess with it.
4. When I try to create a workspace I just get: "" is not a valid state name. I assume I need to do something other than just atmos tofu workspace new <comp> -s xxx. My backend is GCS, if that makes a difference.
Some help on this would be appreciated; we are a startup software house, so a lot of this is new territory 🙂
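For question 1, Atmos has a !terraform.output YAML function that reads another component's outputs at plan time; a minimal sketch for the github stack (the component and output names follow the message):

```yaml
components:
  terraform:
    github:
      vars:
        # reads the `project_id` output of the `bootstrap` component
        # in the current stack
        project_id: !terraform.output bootstrap {{ .stack }} project_id
```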
Jonathan Rose 10 months ago
Hello! I have been tasked with making a security group component using terraform-aws-security-group/wrappers at v5.3.0 · terraform-aws-modules/terraform-aws-security-group with the following validation requirements:
• ingress cannot use 0.0.0.0/0
• egress cannot use 0.0.0.0/0
Desired validations:
1. Forbid egress to 0.0.0.0/0, any port
2. Forbid ingress from 0.0.0.0/0, any port
3. Forbid port 22 from any IP address
4. Block port 3389 from 0.0.0.0/0
I have the following JSON Schema so far, but I am getting errors:
{
  "$id": "security-group-component",
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "security-group component validation",
  "description": "JSON Schema for the 'security-group' Atmos component.",
  "type": "object",
  "properties": {
    "vars": {
      "type": "object",
      "properties": {
        "items": {
          "type": "object",
          "patternProperties": {
            "^.*$": {
              "type": "object",
              "properties": {
                "egress_cidr_blocks": {
                  "type": "array",
                  "items": {
                    "type": "string",
                    "pattern": "^((?!0\\.0\\.0\\.0\\/0).)*$"
                  },
                  "minItems": 1,
                  "uniqueItems": true
                },
                "ingress_cidr_blocks": {
                  "type": "array",
                  "items": {
                    "type": "string",
                    "pattern": "^((?!0\\.0\\.0\\.0\\/0).)*$"
                  },
                  "minItems": 1,
                  "uniqueItems": true
                }
              }
            }
          }
        }
      }
    }
  }
}
Error message:
jsonschema file:///atmos/stacks/schemas/jsonschema/security-group.json compilation failed: '/properties/vars/properties/items/patternProperties/%5E.%2A$/properties/ingress_cidr_blocks/items/pattern' does not validate with https://json-schema.org/draft/2020-12/schema#/allOf/1/$ref/properties/properties/additionalProperties/$dynamicRef/allOf/1/$ref/properties/properties/additionalProperties/$dynamicRef/allOf/1/$ref/properties/patternProperties/additionalProperties/$dynamicRef/allOf/1/$ref/properties/properties/additionalProperties/$dynamicRef/allOf/1/$ref/properties/items/$dynamicRef/allOf/3/$ref/properties/pattern/format: '^((?!0\.0\.0\.0\/0).)*$' is not valid 'regex'
marksie1988 10 months ago (edited)
Hi All, I managed to get Atmos working (yippee). I just have one question: I used a custom name_template:
name_template: "{{.vars.atmos_project}}-{{.vars.atmos_region}}-{{.vars.atmos_stack}}"
The issue is that these vars are only used for the name_template, and they appear as warnings for undeclared values by Terraform, which is annoying.
How do I get around this? We mainly use 3rd-party modules, but I'm thinking the only way around this would be to have a "wrapper" module that includes the 3rd-party module; then I could also use the context provider too. Is that a good way to do this? Then I no longer need them in the vars section and can move them out to the context.
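One possible workaround, sketched under the assumption that name_template can reference sections other than vars (worth verifying against the stack-naming docs): keep the naming inputs in settings, which Atmos does not write into the generated tfvars, so Terraform never sees them as undeclared variables:

```yaml
# atmos.yaml (sketch)
stacks:
  name_template: "{{.settings.atmos_project}}-{{.settings.atmos_region}}-{{.settings.atmos_stack}}"

# in a stack manifest (sketch) — settings are not passed to Terraform:
# settings:
#   atmos_project: "myproj"
#   atmos_region: "euw2"
```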
Slackbot 10 months ago
This message was deleted.
RB 10 months ago (edited)
Another issue with the Atmos integration in Atlantis: if you don't commit tfvars, how would you auto-plan Atmos projects in Atlantis?
Bart Palmowski 10 months ago
Hi, I'm trying to use the action https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L104, but the problem is that the setting settings.github.actions_enabled just cannot be set?
Steve 9 months ago
Is using vendor.yaml the recommended way to use https://github.com/cloudposse-terraform-components/aws-account ?
If so, are there any tricks to have the component at /components/terraform/aws-account instead of /components/terraform/aws-account/src?
Jonathan Rose 9 months ago
Does Atmos support mock outputs when running terraform plan for components that depend on outputs from other components?
Nitzan Frock 9 months ago (edited)
Hello! I'm not entirely sure if this should go here or somewhere else, but I'm attempting to set up a CI/CD pipeline. The Atmos docs for setting up GHA are relatively straightforward, but I'm struggling to figure out how to update an ECS container's image with a deployment. I'm using the Cloud Posse AWS ECS service component, so I'm wondering how the ecr_image property would be updated. Would it make sense to use the remote store, set a parameter there to the appropriately tagged image, and then simply redeploy?
The "whole picture" is having images built on push to main in the app repo for the dev environment, which will trigger a deployment in the infra repo with the new image tag passed in. For production a similar approach would happen, but the process would be initiated on a release-please PR being merged to main.
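A hedged sketch of the remote-store idea: the app-repo pipeline writes the new tag into a configured Atmos store (SSM-backed here), and the component reads it with the !store YAML function at plan time. The store name, key, and exact argument order are assumptions to check against the !store docs for your Atmos version:

```yaml
components:
  terraform:
    ecs-service:
      vars:
        # store name ("prod-ssm"), component, and key are placeholders
        ecr_image: !store prod-ssm ecs-service image_tag
```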
Cyrus Dukart 9 months ago
Hi folks,
Going through the setup process. I don't know the best place to put this, but the documentation for setting up the Component Updater Action is missing a required repository permission.
Currently:
5. Assign only the following Repository permissions:
+ Contents: Read and write
+ Pull Requests: Read and write
+ Metadata: Read-only
Should be:
5. Assign only the following Repository permissions:
+ Contents: Read and write
+ Issues: Read and write  <--- MISSING
+ Pull Requests: Read and write
+ Metadata: Read-only
Cheers
PePe Amengual 9 months ago (edited)
Is it possible to use import on abstract components? I want to import some values from a stack, but it needs to use the stack name to import the right file, so I can keep the catalog file agnostic of the stack.
Erik Osterman (Cloud Posse) 9 months ago
@Jonathan Rose starting a new message since the last one got so long.
"Is there documentation on using !store with terraform outputs?"
https://atmos.tools/core-concepts/projects/configuration/stores
https://atmos.tools/core-concepts/stacks/yaml-functions/store
Steve 9 months ago
Is anyone successfully using the RenovateBot to manage vendor module dependencies in atmos configurations?
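As far as I know Renovate has no native manager for Atmos vendor files, but a custom regex manager can often cover them. A sketch, assuming each vendor.yaml source pins a GitHub repo with an adjacent version: line; the match pattern will need adjusting to your actual vendor.yaml layout:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": ["vendor\\.ya?ml$"],
      "matchStrings": [
        "source: .*github\\.com/(?<depName>[^/]+/[^/\"]+).*\\n\\s*version: \"(?<currentValue>[^\"]+)\""
      ],
      "datasourceTemplate": "github-tags"
    }
  ]
}
```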
Matt Parkes 9 months ago
Is anyone using SOPS with Atmos? I found this: https://github.com/cloudposse/atmos/issues/592 but it's a bit stale now. I have a few awful ideas using the exec feature, but something more native would be lovely.
Jonathan Rose 9 months ago
Random question... As I'm building out my service catalog, I'm learning that I need to group together some of the tests (run plan/apply/destroy to ensure components work as expected), but there are some dependencies that need to be established. For example:
• "network" test to validate components for VPC, VPC endpoints, and security groups
• "secure" test to validate KMS, ECR, S3
• "compute" test to validate EC2
Here is the question: if the three tests are running in parallel and there's a dependency in "compute" on outputs from network (e.g. !terraform.output vpc vpc_id) and secure (e.g. !terraform.output kms wrapper.ebs.key_arn), is there a way to "connect the dots", if that makes sense?
Cristian 9 months ago
Hi guys,
I am getting errors on CI when applying Terraform changes. The error is:
Failed to obtain provider schema: Could not load the schema for provider
registry.opentofu.org/hashicorp/aws: failed to instantiate provider
"registry.opentofu.org/hashicorp/aws" to obtain schema: unavailable provider
"registry.opentofu.org/hashicorp/aws".
On Stack Overflow they mentioned that this is fixed by running terraform init, which is managed by Atmos. Any suggestions on how to fix this?
Matt Parkes 9 months ago
I don't understand some parts of the "Using Templates in the URLs of datasources" docs, and I don't think it's just me:
s3-tags:
  # The `url` uses a `Go` template with the delimiters `${ }`,
  # which is processed as the first step in the template processing pipeline
  url: "s3://mybucket/{{ .vars.stage }}/tags.json"
1. It mentions a "delimiter" of ${ } but then those symbols aren't used in the example??
Tobias Habermann 9 months ago
Hi, is there a good way to use the remote-state module together with the "context" provider instead of the "context" module? Do I need to copy over the environment/stage/tenant from a data.context_config data source?
Chris Harden 9 months ago (edited)
Hi,
I'm running into an issue where I can't plan a second stack because Terraform wants to migrate the state.
ERRO template: templates-all-atmos-sections:197:42: executing "templates-all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "tofu init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "tofu init -reconfigure".
When a component is planned/applied, a corresponding tfvars.json and planfile with the stack name as a prefix is created in the component's directory.
Doesn't this imply that when I plan/apply another stack, any existing tfvars.json and planfile will persist, so I can reference state from one stack in the other? E.g., !terraform.output <component> {{ .stack }} <output>
Thanks in advance!
Jonathan Rose 9 months ago (edited)
I was reviewing the Cloud Posse Terraform Components and wondering if there is any roadmap for creating a component for SSM documents or EBS volumes?
Slackbot 9 months ago
This message was deleted.
Samuel Than 9 months ago
I'm migrating to use the provider context. One of the few changes I'm making is to my atmos.yaml: I've replaced my name_pattern with
name_template: "{{.providers.context.values.namespace}}-{{.providers.context.values.tenant}}-{{.providers.context.values.environment}}-{{.providers.context.values.stage}}"
When I just run an atmos command I get this error:
ERRO template: describe-stacks-name-template:1:52: executing "describe-stacks-name-template" at <.providers.context.values.tenant>: map has no entry for key "tenant"
Samuel Than 9 months ago
this is my default.yaml content
# yaml-language-server: $schema=https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
terraform:
  providers:
    # Configure the context provider - https://github.com/cloudposse/terraform-provider-context/blob/main/docs/index.md
    context:
      enabled: false # A boolean value to enable or disable the provider
      delimiter: "-" # The default delimiter to use for labels created by the provider
      property_order: # The default order of properties to use for labels created by the provider
        - namespace
        - tenant
        - environment
        - stage
        - name
        - attributes
      properties: # A map of properties to use for labels created by the provider - https://github.com/cloudposse/terraform-provider-context/blob/main/docs/index.md#nested-schema-for-properties
        namespace:
          required: true
          max_length: 4
          tags_value_case: lower
        tenant:
          required: true
          max_length: 4
          tags_value_case: lower
        environment:
          required: true
          tags_value_case: lower
        stage:
          required: true
          tags_value_case: lower
          validation_regex: "^(dev|prod)$"
        name:
          required: true
          tags_value_case: lower
        attributes: {}
        instance: {}
        component: {}
        type: {}
        repo: {}
        stack: {}
        stack_file: {}
        managed_by: {}
      tags_key_case: lower
      values:
        component: "{{.atmos_component}}"
        stack: "{{.atmos_stack}}"
        stack_file: "{{.atmos_stack_file}}"
Samuel Than 9 months ago
and it's arranged with an inheritance-type approach like so:
stacks/
├── default.yaml
├── catalog/
├── mixins/
│   ├── region/
│   │   ├── ap-southeast-2.yaml
│   │   └── us-east-1.yaml
│   ├── stage/
│   │   ├── dev.yaml
│   │   └── prod.yaml
│   └── tenant/
│       ├── tenant-1.yaml
│       └── tenant-2.yaml
└── orgs/
    ├── acme/
    └── bdnf/
Samuel Than 9 months ago
where the tenant value is stored in the mixins tenant-1.yaml:
terraform:
  providers:
    context:
      values:
        tenant: "tenant-1"
Samuel Than 9 months ago
does the provider context work with a layered stack configuration approach like https://atmos.tools/design-patterns/layered-stack-configuration ?
Samuel Than 9 months ago
as I'm trying to figure out how to work with the provider context
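For what it's worth, provider settings under terraform.providers deep-merge across imported files like any other section, so a layered setup can contribute one context value per mixin; a sketch (file paths follow the tree shown earlier):

```yaml
# mixins/stage/dev.yaml (sketch) — merges with the tenant mixin's values
terraform:
  providers:
    context:
      values:
        stage: "dev"

# mixins/region/us-east-1.yaml could contribute `environment` the same way;
# the final stack sees the union of all imported context values.
```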
Zack 9 months ago
❓ Is there a way to tell Atmos to run -upgrade when tofu init-ing, similar to the reconfigure switch, or do I need to do that myself manually?
Jonathan Rose 9 months ago
Is terraform-aws-components/modules/tfstate-backend at main · cloudposse/terraform-aws-components still the recommended approach to establishing a remote state backend?
Michael 9 months ago
Hey team, quick question. I've been working through the Configuring OpenTofu guide and have modified my atmos.yaml to reflect the following:
components:
  terraform:
    # Reference: https://atmos.tools/core-concepts/projects/configuration/opentofu/
    # Use the `tofu` command when calling "terraform" in Atmos
    command: "/usr/bin/tofu"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
    apply_auto_approve: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
    deploy_run_init: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
    init_run_reconfigure: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: true
When I attempt to run tofu commands, I am met with the following:
√ . [infra] (HOST) workspace ⨠ atmos terraform version
DEBU Set logs-level=debug logs-file=/dev/stdout
DEBU Found ENV variable ATMOS_BASE_PATH=/workspace
DEBU processStoreConfig atmosConfig.StoresConfig=map[]
DEBU Found ENV variable ATMOS_BASE_PATH=/workspace
DEBU processStoreConfig atmosConfig.StoresConfig=map[]
DEBU Found ENV variable ATMOS_BASE_PATH=/workspace
DEBU processStoreConfig atmosConfig.StoresConfig=map[]
DEBU Executing command="/usr/bin/terraform version"
Terraform v1.5.6
on linux_arm64
Your version of Terraform is out of date! The latest version
is 1.12.1. You can update by downloading from https://www.terraform.io/downloads.html
Am I missing another piece?
PePe Amengual 9 months ago (edited)
Why is Atmos executing !terraform.output functions in abstract components that are not being used in a stack?
kofi 9 months ago
Hello! I am curious about the distinction between workflows and custom CLI commands. Both of them achieve really similar goals. I like defining workflows in split files and having the possibility to follow steps, but I am missing the arguments/flags that are only available for custom CLI commands. Are there any plans to add arguments/flags to workflows, or to merge both?
Matt Parkes 9 months ago
I have an issue which I'm 90% sure I've debugged down to this: running
atmos terraform console -s org-stage-tenant-environment my_component
does not automatically set the appropriate --var-file arg or environment variable so that terraform/tofu loads the org-stage-tenant-environment-my_component.tfvars.json file (though Atmos does create it), such that if I try to "query" a variable that appears in the .tfvars.json file via the console, I get:
> var.matt
null
but if I run the command and give it the tfvars file myself:
atmos terraform console -s org-stage-tenant-environment my_component -- --var-file=org-stage-tenant-environment-my_component.tfvars.json
I instead get:
> var.matt
"lol"
Stephan Helas 9 months ago (edited)
HTTP Backend question
This seems to be related to: https://github.com/cloudposse/atmos/issues/1171
If I use the http backend (GitLab in this case), I get an error while running atmos terraform plan:
❯ atmos terraform plan hello-world -s plat-acme-main
Initializing the backend...
Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/null v3.2.4
Terraform has been successfully initialized!
workspaces not supported
Failed to get configured named states: workspaces not supported
Error
exit status 1
This behavior does not reflect the documentation, which says that workspaces are disabled when the http backend is used. That seems to be the case:
❯ atmos describe component hello-world -s plat-acme-main | grep backend_type
backend_type: http
remote_state_backend_type: http
So I dug in and found this: https://github.com/cloudposse/atmos/pull/654
Now comes the part I don't understand. If I set TF_WORKSPACE, the error is gone. This means Atmos does something workspace-related when it should not (but it behaves correctly when the workspace is defined):
❯ env TF_WORKSPACE=default atmos terraform plan hello-world -s plat-acme-main
Initializing the backend...
Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/null v3.2.4
Terraform has been successfully initialized!
Acquiring state lock. This may take a few moments...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # module.base.null_resource.name[0] will be created
  + resource "null_resource" "name" {
      + id = (known after apply)
    }
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + lang     = "de"
  + location = "hh"
  + region   = "eu-central-1"
  + tags     = {
      + "atmos:component" = "hello-world/v0.1"
      + "atmos:manifest"  = "site/plat/hello-world/main"
      + "atmos:stack"     = "plat-acme-main"
      + "atmos:stage"     = "main"
      + "atmos:tenant"    = "plat"
    }
  + text     = "foo"
This workaround only works if the default workspace is used, though:
❯ env TF_WORKSPACE=foo atmos terraform plan hello-world -s plat-acme-main
Initializing the backend...
Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.
╷
│ Error: Error loading state: workspaces not supported
│
│
╵
Error
exit status 1
UPDATE:
If I put a check around terraform_outputs.go, the error goes away:
https://github.com/cloudposse/atmos/pull/1268
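For anyone reproducing this, a hedged sketch of the stack-level http backend config used with GitLab's Terraform state API (the hostname and project ID are placeholders; POST/DELETE are the lock/unlock methods GitLab documents):

```yaml
terraform:
  backend_type: http
  backend:
    http:
      address: "https://gitlab.example.com/api/v4/projects/123/terraform/state/plat-acme-main"
      lock_address: "https://gitlab.example.com/api/v4/projects/123/terraform/state/plat-acme-main/lock"
      unlock_address: "https://gitlab.example.com/api/v4/projects/123/terraform/state/plat-acme-main/lock"
      lock_method: "POST"
      unlock_method: "DELETE"
```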
Bart Palmowski 9 months ago
Hi, in order to get the action https://github.com/cloudposse/github-action-terraform-plan-storage working I need to provision a DynamoDB table, but there is zero documentation on what it should look like.
Igor M 9 months ago
I have a scenario where I'm trying to use "inherits" on a component, and it's saying that the component is not part of the stack (even though the import is there).
Just wanted to flag it here for reference, or in case it's a known issue. I'll just duplicate the component for now.
Michael Dizon 9 months ago
Running into an issue running atmos describe stacks when using datasources (SSM). I think it's skipping over the AWS_PROFILE env var. Has anyone encountered this?
template: describe-stacks-all-sections:84:25: executing "describe-stacks-all-sections" at <datasource "xxx__api_key">: error calling datasource: Couldn't read datasource 'xxx__api_key': Error reading aws+smp from AWS using GetParameter with input {
Name: "/atmos/dev/ssm/xxx/xxx__api_key",