rss (over 3 years ago, edited)
v1.9.0
what
Add atmos components validation using JSON Schema and OPA policies
why
Validate component config (vars, settings, backend, and other sections) using JSON Schema
Check if the component config (including relations between different component variables) is correct to allow or deny component provisioning using OPA/Rego policies
Implement validation via the atmos validate component command, and by adding a new settings.validation section to the component stack config to be used in other atmos...
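As a hedged illustration of the settings.validation section mentioned above (the key names, file paths, and layout here are assumptions for illustration, not taken from this note), a component's stack config might look like:

```yaml
components:
  terraform:
    vpc:
      settings:
        # Hypothetical validation config; names and paths are illustrative
        validation:
          validate-vpc-component-with-jsonschema:
            schema_type: jsonschema
            schema_path: vpc/validate-vpc-component.json
            description: Validate the 'vpc' component vars using JSON Schema
          check-vpc-component-with-opa-policy:
            schema_type: opa
            schema_path: vpc/validate-vpc-component.rego
            description: Check the 'vpc' component config using an OPA/Rego policy
```

With a config along these lines, atmos validate component vpc -s &lt;stack&gt; would run both validations before provisioning is allowed.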
rss (over 3 years ago, edited)
v1.9.1
what
Add atmos CLI config path and atmos base path parameters to the component processor to support components remote state from remote repos (Note that this does not affect atmos functionality, this is to be used in the utils provider which calls into the atmos code)
why
The component processor's code is used by the utils provider to get the component's remote state
We already supported the ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH ENV vars to specify the CLI config file (atmos.yaml) path...
el (over 3 years ago)
Is there a way to get atmos to use a previously generated Terraform plan? e.g. something like:
atmos terraform plan <component> --stack=uw2-sandbox -out planfile
atmos terraform apply planfile
rss (over 3 years ago, edited)
v1.10.0
what
Fix remote state for Terraform utils provider
Remove all global vars from Go code
Implement Logs.Verbose
Update terraform commands
Refactor
why
Remove all global vars from Go code - this fixes remote state for Terraform utils provider
Terraform executes a provider data source in a separate process and calls it using RPC
But there is only one such process per provider, so if we call the code to get the remote state of two different components, the same process is used
In the...
rss (over 3 years ago, edited)
v1.10.2
what
Update atmos describe stacks command
why
Output atmos stack names (logical stacks derived from the context variables) instead of stack file names
In the -s (--stack) filter, support both 1) atmos stack names (logical stacks derived from the context variables); 2) stack file names
test
atmos describe stacks --sections none --components vpc
tenant1-ue2-dev:
  components:
    terraform:
      vpc: {}
tenant1-ue2-prod:
  components:
    terraform:
      vpc: {}
tenant1-ue2-staging:...
rss (over 3 years ago, edited)
v1.10.3
what
Update atmos.yaml initialization
why
For some atmos commands (e.g. atmos version and atmos vendor), don't process stacks b/c stacks folder might not be present (e.g. during cold-start when using atmos vendor and atmos version in CI/CD systems)
Roman Orlovskiy (over 3 years ago)
Hi all. I would appreciate it if someone could help me understand the logic behind the dns-primary and dns-delegated components when it comes to branded zones, as mentioned here: https://github.com/cloudposse/terraform-aws-components/tree/master/modules/dns-primary#component-dns-primary
This component is responsible for provisioning the primary DNS zones into an AWS account. By convention, we typically provision the primary DNS zones in the dns account. The primary account for branded zones (e.g. example.com), however, would be in the prod account, while a staging zone (e.g. example.qa) might be in the staging account.
A couple of questions:
• What is the difference between the “primary” and “branded” zones?
• If a branded zone like example.com has to be in the prod AWS account, not the dns account, then which “primary” zones would one need to create in the dns account at all?
◦ Does the purpose of the dns account only come down to domain registration in this scenario?
• Is this a more preferred approach compared to managing example.com in the dns account and then using cross-account access to create additional records there?
OliverS (over 3 years ago)
One of my customers has following architecture and workflow, and I'm wondering if atmos could be a good way to automate this:
• there is a set of "component" git repos:
◦ each repo is a standalone "capability" tailored to the business, such as "provides a datalake" or "provides a static website" or "provides a VPC for all the other components";
◦ the git repo contains service code (eg Dockerfile that creates .net core service), PLUS infra code that provisions the resources required to run that component / module (eg an EC2 instance, RDS db, etc)
• there is a set of stack git repos:
◦ each stack git repo is a unique combination of the above "modules" and is standalone, ie it has its own VPC, domain names, certs, etc; it may be in separate or same AWS account as another stack depending on the situation
◦ the stack is created by pulling code from the individual component git repos and creating VM and docker images and terraform applying on each component infra code to provision infra that will use VM/docker images
◦ a stack for a specific environment (staging, prod etc) therefore consists of multiple tfstates (each component is a separate "terraform apply"), all stored in s3
◦ each stack git repo has a separate folder for each "instance" (ie environment) of that stack
• there is a set of aws account git repos:
◦ each account git repo specifies which stacks it contains, and has infra code to deploy basics like IAM roles for specific stacks and a bucket for stack provisioning
The git repos are not public, they are private repos in github. There are no definite plans to open-source any of these.
Based on what I'm seeing about atmos, it is plausible that atmos could be used to bring together the various pieces:
• yaml file for a stack would specify which modules a stack is made of, and their versions (eg git commit sha) for each environment (stack instance)
• "applying" atmos on that yaml for a specific environment would fetch the code from the component git repos into the stack git repo clone folder for the env, and run the right commands (packer, terraform etc) in the right order
• there would be a separate atmos yaml file for account level setup
Any gotchas that could prevent me from using atmos to do this?
rss (over 3 years ago, edited)
v1.10.4
what
Parse atmos.yaml CLI config when executing atmos vendor command
Improve OPA policy evaluation and error handling
why
When executing atmos vendor pull command, we need to parse atmos.yaml and calculate the paths to stacks and components folders to write the vendored files into the correct component folder
Add timeout to OPA policy evaluation (it will show a descriptive error message instead of hanging forever if Rego policy is not correctly defined/formatted or Regex in Rego is not...
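For context on the vendoring half of this note: a component pulled with atmos vendor pull is typically described by a component.yaml manifest in the component's folder. The sketch below is illustrative only; the URI, version, and glob patterns are made-up examples, not taken from this note:

```yaml
# Hypothetical component.yaml consumed by 'atmos vendor pull' (illustrative values)
apiVersion: atmos/v1
kind: ComponentVendorConfig
spec:
  source:
    # Example source; replace with a real module location and version
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{ .Version }}
    version: 0.196.1
    included_paths:
      - "**/*.tf"
      - "**/*.md"
```

The fix above means atmos now parses atmos.yaml first, so the vendored files land in the correct component folder under the configured components base path.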
elover 3 years ago
hey all 👋 I have an S3 backend already provisioned (using the CloudPosse tfstate-backend module), but I'm struggling to get
Running
Any idea what I might be missing here? Thanks in advance for the help 🙂
atmos to use it using the following configuration:import:
- test/shared/automation/_defaults
vars:
environment: uw2
namespace: test
region: us-west-2
terraform:
vars: {}
backend_type: s3
backend:
s3:
encrypt: true
bucket: "test-uw2-automation"
key: "terraform.tfstate"
dynamodb_table: "test-uw2-automation-lock"
acl: "bucket-owner-full-control"
region: "us-west-2"
profile: "automation"
role_arn: null
components:
terraform:
tfstate-backend:
vars:
profile: "automation"
logging_bucket_enabled: true
vpc:
vars:
enabled: true
ipv4_primary_cidr_block: 10.5.0.0/16
subnet_type_tag_key: test/subnet/type
nat_gateway_enabled: true
nat_instance_enabled: false
max_subnet_count: 3Running
atmos terraform plan vpc -s <stack> doesn't seem to pick up the S3 backend configuration; I'm running the plan after assuming a different role that doesn't even have access to the specified backend bucket and DynamoDB table.Any idea what I might be missing here? Thanks in advance for the help 🙂
rss (over 3 years ago, edited)
v1.10.5
what
In atmos helmfile commands, first check if the context ENV vars are already defined. If they are not, set them in the code
why
Some users of atmos define the context ENV vars (e.g. REGION) in the caller scripts, and atmos overrides them. This fix checks whether the ENV vars are already defined by the parent process before setting them
Gabe Maentzover 3 years ago
Working through tutorials for atmos - https://docs.cloudposse.com/tutorials/atmos-getting-started/ Has anybody came across this issue now that it appears a
Sample components (fetch-location, fetch-weather, etc.) are not validating. @Erik Osterman (Cloud Posse) - I think I can work through it, but wanted to inform your team that the tutorials will probably need some updates.
atmos.yaml is required for running atmos?Sample components (fetch-location, fetch-weather, etc.) are not validating. @Erik Osterman (Cloud Posse) - I think I can work through it, but wanted to inform your team that the tutorials will probably need some updates.
✗ . [none] (HOST) 02-atmos ⨠ atmos validate component fetch-location --stack example
Searched all stack YAML files, but could not find config for the component 'fetch-weather' in the stack 'example'.
Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the stack config files.
Are the component and stack names correct? Did you forget an import?