Chris King-Parra (almost 2 years ago)
Where do I configure which accounts correspond to dev/stage/prod?
Ryan (almost 2 years ago)
Hopefully a small question this Monday morning. I'm trying to get atmos.exe working on Windows 11 with our current version (v1.4.25), and it looks like it wants to fire up, but it fails to find terraform in the %PATH%. I dropped it in the PATH and even updated PATH with terraform.exe; I'm not sure where atmos is searching for it. See what I mean here -
C:\path\>
Executing command:
terraform init -reconfigure
exec: "terraform": executable file not found in %PATH%
C:\path\>terraform
Usage: terraform [global options] <subcommand> [args]
The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.
alla2 (almost 2 years ago)
Hi, everyone. I'm evaluating Atmos for my company as an enhancement to Terraform. In Atmos everything revolves around a library of components, which is understandably where the majority of reusable modules should be stored. But I don't understand (and the docs don't help) whether I can define a single resource using plain Terraform syntax at the deepest level of a stack without creating a component for it.
For the following file tree:
.
├── atmos.yaml
├── components
│ └── terraform
│ └── aws-vpc
│ ├── main.tf
│ ├── outputs.tf
│ ├── versions.tf
├── stacks
│ ├── aws
│ │ ├── _defaults.yaml
│ │ └── general_12345678901
│ │ ├── core
│ │ │ ├── _defaults.yaml
│ │ │ ├── eks.tf
│ │ │ └── us-east-2.yaml
│ │ └── _defaults.yaml
│ ├── catalog
│ │ └── aws-vpc
│ │ └── defaults.yaml
└── vendor.yaml
I'd like to just drop eks.tf (a YAML version of HCL is also fine) into stacks/aws/general_12345678901/core and expect Atmos to include it in the deployment. Is it possible?
rss (almost 2 years ago, edited)
v1.67.0
Add Terraform Cloud backend. Add/update docs @aknysh (https://github.com/cloudposse/atmos/issues/572)
Shiv (almost 2 years ago)
So I am examining the output of atmos describe component <component-name> --stack <stack_name>.
I am trying to understand the output of the command:
1. What's the difference between deps and deps_all?
2. What does the imports section mean? I see catalog/account-dns and quite a lot of files under the catalog dir.
Roy (almost 2 years ago, edited)
hmmm
How do I avoid including catalog items when describing the stacks? All components have the type: abstract meta attribute.
> atmos describe stacks
the stack name pattern '{tenant}-{environment}-{stage}' specifies 'tenant', but the stack 'catalog/component/_defaults' does not have a tenant defined in the stack file 'catalog/component/_defaults'
Shiv (almost 2 years ago)
What are some recommended patterns to add / enforce tagging as part of workflows? So if a component is not tagged per standards, do not apply it - that sort of thing.
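One pattern (a hedged sketch, not from this thread) is Atmos's OPA validation: attach a policy to the component under settings.validation and run atmos validate component before apply. The component name, policy file path, and tag values below are all hypothetical.

```yaml
# Hypothetical sketch: fail `atmos validate component my-component -s <stack>`
# when required tags are missing, via an OPA policy. Paths/names are assumptions.
components:
  terraform:
    my-component:
      settings:
        validation:
          check-required-tags:
            schema_type: opa
            # relative to the schemas base path configured in atmos.yaml
            schema_path: validate-tags.rego
            description: Ensure required tags are present before apply
      vars:
        tags:
          Team: platform
          CostCenter: "1234"
```

The rego policy itself (validate-tags.rego) would assert that vars.tags contains the required keys; wiring the validate command into a workflow or CI step before apply gives the "do not apply" behavior.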
rss (almost 2 years ago, edited)
v1.68.0
Add gomplate templating engine to Atmos stack manifests. Update docs @aknysh (https://github.com/cloudposse/atmos/issues/578)
Andrew Ochsner (almost 2 years ago)
is there an easy way to have atmos describe stacks --stacks <stackname> skip abstract components? or have any jq handy, cause i just suck at jq
Justin (almost 2 years ago, edited)
Hi all,
I'm working through a project right now where I need to collect the security group ID for a security group.
vendored: https://github.com/cloudposse/terraform-aws-security-group into /components/terraform/networking/security_group/v2.2.0
vendored: https://github.com/cloudposse/terraform-aws-ecs-web-app into /components/terraform/ecs/web/v2.0.1
For my stack configuration, I'd like to create two security groups, and then provide the IDs to the ecs-web-app.
components:
  terraform:
    security_group/1:
      vars:
        ingress:
          - key: ingress
            type: ingress
            from_port: 0
            to_port: 8080
            protocol: tcp
            cidr_blocks: ["169.254.169.254/32"]
            self: null
            description: Apipa example group 1
        egress:
          - key: "egress"
            type: "egress"
            from_port: 0
            to_port: 0
            protocol: "-1"
            cidr_blocks: ["0.0.0.0/0"]
            self: null
            description: "All output traffic"
    security_group/2:
      vars:
        ingress:
          - key: ingress
            type: ingress
            from_port: 0
            to_port: 8009
            protocol: tcp
            cidr_blocks: ["169.254.169.254/32"]
            self: null
            description: Apipa example group 2
        egress:
          - key: "egress"
            type: "egress"
            from_port: 0
            to_port: 0
            protocol: "-1"
            cidr_blocks: ["0.0.0.0/0"]
            self: null
            description: "All output traffic"
    web/1:
      vars:
        name: sampleapp
        vpc_id: <reference remote state from core stack>
        ecs_security_group_ids:
          - remote_state_reference_security_group/1
          - remote_state_reference_security_group/2
If the security group and ecs modules have been vendored in, what is the best practice to get the CloudPosse remote_state file into place and configured so that I can reference each security group ID in the creation of my web/1 stack? Same with the VPC created in a completely different stack.
My thinking is that I'd like to keep everything versioned and vendored from separate repositories that have their own tests / QA passes performed on them, unique to Terraform, or vendored in from CloudPosse / Terraform.
I'm missing the connection of how to fetch the remote state from each security group and reference the ids in the web1 component.
Andrew Ochsner (almost 2 years ago)
any recommendations on how to pass output from 1 workflow command to another workflow command? right now just thinking of writing out/reading from a file.... just curious if there are other mechanisms that aren't as kludgy
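A file really is the usual hand-off mechanism. A hedged sketch of an Atmos workflow doing that with shell-type steps (the workflow name, stack, output name, and temp-file path are all assumptions, and this presumes the workflow steps support type: shell):

```yaml
# Hypothetical workflow sketch: capture a Terraform output in one step
# and read it back in a later step via a temp file.
workflows:
  deploy-with-handoff:
    description: Pass a value from one workflow command to the next
    steps:
      - type: shell
        command: atmos terraform output vpc -s plat-ue2-dev -- -raw vpc_id > /tmp/vpc_id
      - type: shell
        command: echo "using vpc $(cat /tmp/vpc_id)"
```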
rss (almost 2 years ago, edited)
v1.69.0 Restore Terraform workspace selection side effect
In Atmos v1.55 (PR #515) we switched to using the TF_WORKSPACE environment variable for selecting Terraform workspaces when issuing Terraform commands....
Chris King-Parra (almost 2 years ago)
What's the recommended approach to set up data dependencies between stacks (use the output of one stack as the input for another stack)? Data blocks?
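Two common approaches are the Cloud Posse remote-state Terraform module inside the consuming component, or template functions in the stack manifests. A hedged YAML sketch of the latter, assuming an Atmos version that provides the atmos.Component template function and that Go templating is enabled in atmos.yaml (component names here are hypothetical):

```yaml
# Hypothetical sketch: feed one component's output into another via the
# `atmos.Component` template function (requires templating to be enabled).
components:
  terraform:
    web-app:
      vars:
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
```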
Erik Osterman (Cloud Posse) (almost 2 years ago)
For anyone else working on GHA with Atmos
https://sweetops.slack.com/archives/CB6GHNLG0/p1712912684845849
Ben (almost 2 years ago)
hi all, i'm new to atmos and trying to follow along with the quick-start guide.
i got a weird issue where it seems like {{ .Version }} isn't rendered when running atmos vendor pull:
❯ atmos vendor pull
Processing vendor config file 'vendor.yaml'
Pulling sources for the component 'vpc' from 'github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}' into 'components/terraform/vpc'
error downloading 'https://github.com/cloudposse/terraform-aws-components.git?ref=%7B%7B.Version%7D%7D': /usr/bin/git exited with 1: error: pathspec '{{.Version}}' did not match any file(s) known to git
it works when I replace the templated version with a real one.
i've got the stock config from https://atmos.tools/quick-start/vendor-components and am running atmos 1.69.0:
❯ atmos version
█████ ████████ ███ ███ ██████ ███████
██ ██ ██ ████ ████ ██ ██ ██
███████ ██ ██ ████ ██ ██ ██ ███████
██ ██ ██ ██ ██ ██ ██ ██ ██
██ ██ ██ ██ ██ ██████ ███████
👽 Atmos 1.69.0 on darwin/arm64
TommyBaja (almost 2 years ago)
Hi All 👋,
I'm just getting started with Atmos and wanted to check if nested variable interpolation is possible? My example is creating an ECR repo, and I want the name to have a prefix.
I've put the prefix: in a _defaults.yaml file and used {{ .vars.prefix }} in a stack/stage.yaml, and it doesn't work:
vars:
  prefix: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}"
  repository_name: "{{ .vars.prefix }}-storybook"
How are others doing this resource prefixing?
Thank you
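Templates in stack manifests are generally rendered in a single pass against the merged configuration, so a templated var that references another templated var may not resolve. A hedged workaround sketch is to compose the full name from the base context vars in one expression (the "-storybook" suffix is carried over from the question):

```yaml
# Hypothetical sketch: avoid nesting one templated var inside another by
# building the final value directly from the context vars.
vars:
  repository_name: "{{ .vars.namespace }}-{{ .vars.environment }}-{{ .vars.stage }}-storybook"
```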
TommyBaja (almost 2 years ago)
Hi,
Is there a way to use the file path of the stack .yaml file as the workspace_key_prefix value?
Looking in the S3 bucket, the tfstate file prefix appears to be stored as bucket_name/component_name/stack_name-component_name.
Currently the repo structure is Stacks/app_name/env_name/stack_name, and I would prefer them to be aligned, but maybe they don't need to be.
Thank you
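The S3 backend's workspace_key_prefix (which defaults to the component name) can be overridden in the stack configuration under the backend settings. A hedged sketch, with the prefix value chosen to mirror a stack file path (the path itself is an assumption):

```yaml
# Hypothetical sketch: override workspace_key_prefix so state keys
# mirror the stack file layout. The prefix value is an assumption.
terraform:
  backend_type: s3
  backend:
    s3:
      workspace_key_prefix: "app_name/env_name"
```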
Erik Osterman (Cloud Posse) (almost 2 years ago)
toka (almost 2 years ago)
Hey guys, I'm looking to adopt Atmos for the whole organisation I'm currently working/building for.
I'm trying to plan the monorepo structure and how to organise things to avoid the pain of re-organising later on.
I see the tool is pretty flexible, so right now I'm looking at https://atmos.tools/design-patterns/organizational-structure-configuration to find some convention guidance.
Looking at atmos cli terraform commands, I understand that org, tenant, region and environment/stage/account are somehow merged into one:
atmos terraform apply vpc-flow-logs-bucket -s org1-plat-ue2-dev
but it's a bit hard to grasp how to approach the structure when my goal is to deploy the same infrastructure in every region - for the most part, in a multi-cloud setup. Any tips?
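The stack name used with -s is derived from context vars by stacks.name_pattern in atmos.yaml, so the logical stack name doesn't have to match the directory layout. A hedged atmos.yaml sketch (the included/excluded paths are assumptions):

```yaml
# Hypothetical atmos.yaml fragment: the `-s` stack name comes from
# name_pattern over context vars, independent of the file layout.
stacks:
  base_path: stacks
  included_paths:
    - "orgs/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{tenant}-{environment}-{stage}"
```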
Dave (almost 2 years ago)
What are best practices for handling the scenario when you haven't used a tenant in your naming scheme, to overcome null errors?
label_order: ["namespace", "stage", "environment", "name", "attributes"]
To date I have just been removing the code, realizing this is only a temporary solution.
EXAMPLE
https://github.com/cloudposse/terraform-aws-components/blob/add978eb5cf2c24a4de2ba080367bf0fdc97847d/modules/ecs-service/main.tf
map_environment = lookup(each.value, "map_environment", null) != null ? merge(
  { for k, v in local.env_map_subst : split(",", k)[1] => v if split(",", k)[0] == each.key },
  { "APP_ENV" = format("%s-%s-%s-%s", var.namespace, var.tenant, var.environment, var.stage) },
  { "RUNTIME_ENV" = format("%s-%s-%s", var.namespace, var.tenant, var.stage) },
  { "CLUSTER_NAME" = module.ecs_cluster.outputs.cluster_name },
  var.datadog_agent_sidecar_enabled ? {
    "DD_DOGSTATSD_PORT" = 8125,
    "DD_TRACING_ENABLED" = "true",
    "DD_SERVICE_NAME" = var.name,
    "DD_ENV" = var.stage,
    "DD_PROFILING_EXPORTERS" = "agent"
  } : {},
  lookup(each.value, "map_environment", null)
) : null
ERROR: Null Value for Tenant
╷
│ Error: Error in function call
│
│ on main.tf line 197, in module "container_definition":
│ 197: { "APP_ENV" = format("%s-%s-%s-%s", var.namespace, var.tenant, var.environment, var.stage) },
│ ├────────────────
│ │ while calling format(format, args...)
│ │ var.environment is "cc1"
│ │ var.namespace is "dhe"
│ │ var.stage is "dev"
│ │ var.tenant is null
│
│ Call to function "format" failed: unsupported value for "%s" at 3: null value cannot be formatted.
╵
RB (almost 2 years ago)
Besides renaming an account, what can be done if the account name is too long, causing module.this.id's to hit max character restrictions for AWS resources?
RB (almost 2 years ago, edited)
Are relative path imports for catalogs supported in atmos yaml?
import:
  # relative path from stacks/
  - path: catalog/services/echo-server/resources/*.yaml
  # relative path from the catalog itself
  - path: ./resources/*.yaml
pv (almost 2 years ago)
Is there anything that changed on the Atmos side regarding GHA pipelines? I have my workflow file that runs certain files that have commands for plan and apply. Last week, I was able to run multiple plan and apply commands in one file. Now, my workflow hangs in GHA. The only fix I have is to run one command at a time which takes a lot more time for me to deploy things
Ryan (almost 2 years ago)
Hey All, I'm confused as to the best way to handle this, though I'm sure I'll learn as I get into working with Cloudposse more. In a few of the small modules I've made, I build a main object variable (or a couple of them) with everything tied back to that master object var. Looking at terraform-aws-network-firewall, the variables are more of an any type, and I have to figure out the YAML data structures for input. Both ways make sense, especially because network firewalls have kind of a lot of settings to wrangle. I had most of my module built for network-firewall, but based on Erik's suggestion I figured I'd give your module a try. The examples are helping, but GPT is definitely helping me more than I'm helping myself in TF-to-YAML conversions lol.
Andrew Chemis (almost 2 years ago, edited)
Hello - was wondering if there are any updates on the refactoring of the account and account-map modules to enable brownfield / Control Tower deployments: https://sweetops.slack.com/archives/C031919U8A0/p1702136079102269?thread_ts=1702135734.967949&cid=C031919U8A0 . I see 2 related PRs that look like they will never be approved. Don't know if it matters, but this particular project is using TF Cloud as a backend.
Until then, what is the suggested workaround to enable using those modules in an existing organization? It seems to be creating a remote-state module that represents the output of account-map? Does anyone have an example of what this is supposed to look like? And if I go this approach, how will this change when the refactored modules become available?
pv (almost 2 years ago)
How would I pass this variable in a yaml?
variable "database_encryption" {
  description = "Application-layer Secrets Encryption settings. The object format is {state = string, key_name = string}. Valid values of state are: \"ENCRYPTED\"; \"DECRYPTED\". key_name is the name of a CloudKMS key."
  type        = list(object({ state = string, key_name = string }))
  default = [{
    state    = "DECRYPTED"
    key_name = ""
  }]
}
Stephan Helas (almost 2 years ago)
Hello,
i'm trying to pass values from settings into vars. This only works after components are processed. What I mean by that is:
this is working:
import:
  - accounts/_defaults
settings:
  account: '0'
vars:
  tenant: account
  environment: test
  stage: '0'
  tags:
    account: '{{ .settings.account }}'
This is not:
import:
  - accounts/_defaults
settings:
  account: '0'
vars:
  tenant: account
  environment: test
  stage: '{{ .settings.account }}'
Is there a way to pass settings to vars before the components are processed?
RB (almost 2 years ago)
I noticed in geodesic that this env var is set correctly: ATMOS_BASE_PATH. I didn't see ATMOS_CLI_CONFIG_PATH, and since that is unset, atmos cannot fully understand the stack yaml.
RB (almost 2 years ago)
How does the components/terraform/account-map/account-info/acme-gbl-root.sh get used by other scripts?
Shiv (almost 2 years ago, edited)
Is there a way to tag the names of the components, stacks, and stage onto my resources in a different scheme (mycompany-automation-stack: <stack name>, mycompany-automation-component: <component name>, etc.)? We use the context.tf drop-in file and pass it along from root modules to child modules.
Alcp (almost 2 years ago)
hi team,
We have a default.yaml parent config and a default-use2.yaml child config that inherits default.
default.yaml has a var principal: [user1,user2]
default-use2.yaml has a var principal: [user3,user4]
For the application stack app1-stack.yaml, which inherits default-use2.yaml, the value for principal is [user3,user4], which is expected behaviour. But if we wanted to append the lists (ex: [user1,user2,user3,user4]) rather than override, do we have a way to do it?
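Deep-merging in Atmos, as in most YAML merge implementations, replaces lists rather than concatenating them. One hedged workaround sketch is to split the value into two vars and concatenate inside the component; base_principals and extra_principals are hypothetical names:

```yaml
# Hypothetical sketch: lists are replaced on merge, so keep the base
# list and per-stack additions in separate vars.

# default.yaml (parent)
vars:
  base_principals: [user1, user2]
  extra_principals: []

# default-use2.yaml (child) - only overrides the "extra" list
vars:
  extra_principals: [user3, user4]
```

The component would then combine them in Terraform, e.g. concat(var.base_principals, var.extra_principals).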
Stephan Helas (almost 2 years ago)
Hi,
if i define a local backend:
terraform:
  backend_type: local
i'll get a schema validation error, which i fixed with this:
▶ git diff stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
diff --git a/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json b/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
index 959c7f2..f06fbfc 100644
--- a/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
+++ b/stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
@@ -437,6 +437,7 @@
       "backend_type": {
         "type": "string",
         "enum": [
+          "local",
           "s3",
           "remote",
           "vault",
Stephan Helas (almost 2 years ago, edited)
Hi,
is there an easy way to check which components of which stacks are in sync with the infrastructure? In terragrunt this was possible using run-all with terraform state list or terraform plan -destroy. Is there something similar where i can get an overview of which components in all stacks are deployed?
Stephan Helas (almost 2 years ago, edited)
sometimes (more often than not) when i switch components in a stack, i get a tf workspace message which i don't understand. it looks like this:
▶ kn_atmos_vault terraform destroy wms-wms -s wms-it03-test -- -auto-approve
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
...
...
No changes. No objects need to be destroyed.
Either you have not created any objects yet or the existing objects were already deleted outside of Terraform.
Destroy complete! Resources: 0 destroyed.
now the switch:
▶ kn_atmos_vault terraform destroy wms-base -s wms-it03-test -- -auto-approve
Initializing the backend...
The currently selected workspace (wms-apt01-test-wms-base) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:
1. default
2. wms-it03-test-wms-base
3. wms-it03-test-wms-wms
Enter a value: 1
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
it will work, but why is this happening?
kn_atmos_vault is just a shell script to get aws credentials from vault ..
RB (almost 2 years ago)
I’d like to use the cloudposse context, keep the name empty, set an attribute for the existing cluster name, and an attribute for the service name
This should create something like this
However, I’m hitting the 64 max char hard aws limit issue with iam role creation. 🧵
This should create something like this
{namespace}-{fixed_region}-{account_name}-{eks_cluster_name}-{service_name}
acme-ue1-somelongdevaccount-somelongclustername-servicename
acme-ue1-somelongdevaccount-somelongclustername-titan-echo-server
acme-ue1-somelongdevaccount-somelongclustername-bigpharmaceuticalorg
etcHowever, I’m hitting the 64 max char hard aws limit issue with iam role creation. 🧵
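If the components use the Cloud Posse null-label context, one hedged option is id_length_limit, which truncates the generated id and appends a short hash to keep it unique when it overflows. The component name below is hypothetical:

```yaml
# Hypothetical sketch: cap null-label ids at AWS's 64-char IAM role
# name limit; overlong ids are truncated and suffixed with a hash.
components:
  terraform:
    echo-server:
      vars:
        id_length_limit: 64
```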
RB (almost 2 years ago)
I saw atmos supports go templating for ssm params. Does atmos’ go templating also support generic aws data sources? For instance, can I retrieve a vpc id using an hcl-like data source and use it as an input to a component?
RB (almost 2 years ago)
I seem to be hitting an odd issue. I just provisioned the tfstate-backend, migrated the state, set the terraform.backend.s3.role_arn (delegated role) to the role outputted from the tfstate-backend, copied that role into terraform.remote_state_backend.s3.role_arn, and I'm running atmos terraform plan aws-teams --stack gbl-identity. I checked the roles in between and the role assumption works; even terraform init works, which shows the role assumption works... but this error seems to show that atmos is trying to use the primary role instead of the delegated role to access the s3 bucket for each remote state:
│ Error: Error loading state error
│
│ with module.iam_roles.module.account_map.data.terraform_remote_state.data_source[0],
│ on .terraform/modules/iam_roles.account_map/modules/remote-state/data-source.tf line 91, in data "terraform_remote_state" "data_source":
│ 91: backend = local.ds_backend
│
│ error loading the remote state: Unable to list objects in S3 bucket ... with prefix "account-map/": operation error S3: ListObjectsV2, https response error StatusCode: 403
RB (almost 2 years ago)
I know in the past we ran into issues with default_tags with the AWS v4 provider. In the v5 provider, default_tags seem to be fixed.
Is it acceptable to update the upstream components to use this feature so we can start dropping the tags = module.this.tags for each resource?
Kubhera (almost 2 years ago, edited)
Hey Guys,
I have a question if someone can help with the answer.
I have a few modules developed in a separate repository and I'm pulling them down to the atmos repo dynamically while running the pipeline, using the vendor pull command.
But when I bump up the version, atmos is unable to consider that a change in a component, and the atmos describe affected command gives me an empty response. Any idea what I'm missing here? Below is my code snippet - vendor.yaml:
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: accounts-vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    - component: "account_scp"
      source: git::https://module_token:{{env "MODULE_TOKEN"}}@gitlab.env.io/platform/terraform/modules/account-scp.git///?ref={{.Version}}
      version: "1.0.0"
      targets:
        - "components/terraform/account_scp"
      excluded_paths:
        - "**/tests"
        - "**/.gitlab-ci.yml"
      tags:
        - account_scp
pv (almost 2 years ago)
How do I pass
variable "service_project_names" {
  description = "list of service projects to connect with host vpc to share the network"
  type        = list(string)
  default     = []
}
in a yaml? When I try to use service_project_names: ["PROJECT_NAME"], my run does not show any changes to be made to my plan.
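A list(string) variable maps straight onto a YAML sequence under the component's vars; a hedged sketch (the component and project names are hypothetical), with the caveat that the var has to be set on the component actually being planned in that stack:

```yaml
# Hypothetical sketch: a Terraform list(string) is just a YAML list
# under the component's vars.
components:
  terraform:
    host-vpc:
      vars:
        service_project_names:
          - "service-project-one"
          - "service-project-two"
```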
rss (almost 2 years ago, edited)
v1.71.0
Update atmos describe affected command. Update docs @aknysh (https://github.com/cloudposse/atmos/issues/590)