29 messages
Ryan · about 1 year ago
Hello, dropping an error in here, Atmos v1.66 on Windows 11. It happens when I try to run plans. I did a refresh of my environment and I think I have everything configured correctly, but figured I'd check in here. Thanks.
╷
│ Error: failed to find a match for the import 'C:\redacted\repo\stacks\*\.yaml' ('.' + 'C:\redacted\repo\stacks\*\.yaml')
│
│ with module.iam_roles.module.account_map.data.utils_component_config.config,
│ on .terraform\modules\iam_roles.account_map\modules\remote-state\main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
exit status 1
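For reference, the glob in that error is built from the stack search paths configured in atmos.yaml; a minimal sketch of that section (paths here are illustrative, not the reporter's actual config) looks like:
# atmos.yaml (sketch)
stacks:
  base_path: "stacks"
  included_paths:
    - "orgs/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{tenant}-{environment}-{stage}"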
Andrew Chemis · about 1 year ago (edited)
Hitting a stack overflow when using abstract components & multiple component instances. I found the "workaround" (aka, doing it correctly), but maybe this is a bug?
details in 🧵
$ atmos describe affected
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0x14022d04600 stack=[0x14022d04000, 0x14042d04000]
fatal error: stack overflow
James Lee · about 1 year ago
I am a new DevOps Engineer and I am enthusiastic about DevOps technology.
Nowadays, I am studying DevOps technologies such as Docker, Kubernetes, and Terraform from blogs and articles.
I would also like to work with a senior dev and improve my skills on a real project.
I am willing to collaborate for free. 😉
Zachary Loeber · about 1 year ago
When do we get an atmos init?
Zachary Loeber · about 1 year ago
Or do I have to get dirty with golang again? 🙂
burnzy · about 1 year ago (edited)
Hello, are there any troubleshooting tips for solving this error:
ERRO template: templates-all-atmos-sections:191:34: executing "templates-all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1
I know it's related to atmos.Component but I'm having trouble tracking down where the problem is. Running v1.160.1 of atmos and v1.7.2 of opentofu.
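For reference, a typical use of the atmos.Component template function in a stack manifest looks roughly like this (component, variable and output names are illustrative); "error calling Component: exit status 1" usually means the nested terraform output call for the referenced component itself failed:
components:
  terraform:
    my-app:                                     # hypothetical component
      vars:
        # pull an output from another component in the same stack (sketch)
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'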
Dan Hansen · about 1 year ago (edited)
Does atmos cache stack manifests? I'm seeing an error with the new release where if I run atmos describe affected --process-functions=false --process-templates=false, a subsequent atmos describe affected raises an error regarding a missing output config.
Samuel Than · about 1 year ago
Hi, I'm at the point of setting up the infrastructure repository to start using GitHub Actions to perform the plan, apply, etc.
Previously, I've been using my local computer to run atmos terraform plan and apply, where the backend is configured to write the state into the S3 bucket of the corresponding AWS account I'm deploying to.
My question is related to this documentation, https://atmos.tools/integrations/github-actions/#atmos-configuration, where it refers to setting up a completely separate S3 bucket dedicated to running Atmos on GitHub Actions. Can someone help clarify: do I still keep my previous setup, or is this a separate artifact storage process? I'm a bit confused.
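For reference, the per-account state backend described here is usually declared at the stack level roughly like this (bucket, table and region are placeholders); the bucket mentioned in the GitHub Actions docs is, as I understand it, a separate artifact-storage bucket for planfiles, not Terraform state, so the two setups can coexist:
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "acme-dev-terraform-state"              # existing per-account state bucket
      dynamodb_table: "acme-dev-terraform-state-lock"
      key: "terraform.tfstate"
      region: "ap-southeast-2"
      encrypt: true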
Marat Bakeev · about 1 year ago
Guys, would anyone be so kind as to give me a "moron's guide to atmos store"? I've configured a store in my atmos.yaml, but then... does that SSM store need to be configured and created somehow? If yes, then how?
And then, is there a way to store all outputs for a catalog entity, like catalog/vpc.yaml? And, if I don't have any changes, can I trigger a write somehow?
Thanks -_-
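For reference, a minimal sketch of how a store is declared in atmos.yaml, assuming the SSM Parameter Store backend (the store name, type string and option keys below are assumptions to double-check against the stores docs); the SSM parameters themselves only exist once something writes to the store, for example an after-apply hook on a component:
stores:
  dev/ssm:                          # hypothetical store name
    type: aws-ssm-parameter-store
    options:
      region: eu-central-1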
PePe Amengual · about 1 year ago
Is there a way to make a component in a stack always deploy when using describe affected?
tretinha · about 1 year ago (edited)
Hey! Any idea why all my atmos describe affected runs result in all of my components being displayed? It seems like it always notices a difference, even when using the same branch. Every time I run the affected command, everything's apparently "affected". I'm pretty sure I'm doing something wrong, I'm just not entirely sure what 😄
Miguel Zablah · about 1 year ago
Hey, so I just updated my Atmos and tried to update some vendor packages, but now I get this error:
INFO Vendoring from 'vendor.yaml'
ERRO parse "git@github.com:cloudposse/terraform-null-label.git?ref=0.25.0": first path segment in URL cannot contain colon
Can I not do ssh anymore?
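For reference, the error suggests the source string is being parsed as a URL, and scp-style git@host:path strings are not valid URLs; the explicit git::ssh:// form usually is. A sketch of a vendor.yaml entry in that form (component name, target path and version are illustrative):
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: example
spec:
  sources:
    - component: "terraform-null-label"
      source: "git::ssh://git@github.com/cloudposse/terraform-null-label.git?ref={{.Version}}"
      version: "0.25.0"
      targets:
        - "components/terraform/null-label"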
David Elston · about 1 year ago
Hi, filed a bug issue https://github.com/cloudposse/atmos/issues/1057 - fairly simple: the change applied in v1.160.0 does not include terraform state, which will break if someone is using Go templating as part of their backend configuration.
Michael · about 1 year ago (edited)
Can I use Atmos to test a locally-developed Terraform provider with an override in ~/.terraformrc? I can successfully plan with my local provider when in the directory with native Terraform, but Atmos queries the registry for the unpublished provider.
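For reference, the kind of CLI config override being described looks like this (provider address and path are illustrative); note that, as far as I know, terraform init does not honor dev_overrides, which may matter here since Atmos runs init automatically:
# ~/.terraformrc (sketch)
provider_installation {
  dev_overrides {
    # point the provider address at the directory containing the locally built binary
    "registry.terraform.io/acme/myprovider" = "/home/me/go/bin"
  }
  # install every other provider normally
  direct {}
}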
RB · about 1 year ago
Is it possible to vendor a component on the fly, without committing the component, to avoid accidental customization of a component?
Ricky Fontaine · about 1 year ago
Hi, I'm trying to set up the Terraform Apply GitHub Action so that it automatically runs apply for affected stacks upon merging to main:
https://atmos.tools/integrations/github-actions/atmos-terraform-apply#requirements
I'm a bit confused about the auto-apply label, because I don't see it documented anywhere. Do we need to manually add that label to the PR we want applied? Thanks
Miguel Zablah · about 1 year ago (edited)
Hi, did something change with atmos tf init <component> -s <stack>? Because now it looks like it evaluates differently than plan or apply; it's not substituting some values. It kind of looks like it's ignoring the evaluation: 2 in the atmos.yaml file, because when I do a dynamic role lookup it works on plan/apply but not on init, which says it does not find region.
Because of this it looks like my dependencies are broken, since it does an init when it runs a dependency.
Samuel Than · about 1 year ago
Hi, I'm using a go-template style to deploy terraform components via atmos. Example:
vars:
  alb_arn_suffix: !terraform.output {{ .app_name }}/ecs lb_arn_suffix
I'm using the GitHub Actions Atmos Terraform Plan; upon running the affected stacks, I was shown this error:
Fetching lb_arn_suffix output from dummy-api/ecs in cs-ai-apse2-dev
✗ Fetching lb_arn_suffix output from dummy-api/ecs in cs-ai-apse2-dev
FATA Failed to execute terraform output component=dummy-api/ecs stack=cs-ai-apse2-dev
error=
│ exit status 1
│
│ Error: No valid credential sources found
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: failed to refresh cached credentials, no EC2 IMDS role found,
│ operation error ec2imds: GetMetadata, failed to get API token, operation
│ error ec2imds: getToken, http response error StatusCode: 400, request to EC2
│ IMDS failed
I have been able to deploy my stacks via the GitHub Actions prior to using !terraform.output. Is it something I have to change in the GitHub workflow, like including some more authentication to perform this type of deployment?
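For reference, the IMDS/credentials error above means the terraform output call ran on the GitHub runner without AWS credentials; the usual fix is to authenticate the job before the plan step, e.g. via OIDC (role ARN and region below are placeholders):
permissions:
  id-token: write
  contents: read
steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::111111111111:role/github-actions-terraform
      aws-region: ap-southeast-2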
Matt Parkes · 12 months ago
I can't seem to get "Override an Existing Terraform Command" to work (but defining custom commands is working) 🧵
Matt Parkes · 12 months ago (edited)
Specifically, what I'm trying to do (in case there's a better way) is dynamically set some environment variables for the http backend, plus the environment-variable equivalent of setting a credentials block in .terraformrc 🧵
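For reference, assuming Terraform's built-in environment variables are acceptable, the http backend settings and the credentials-block equivalent can be supplied like this (all values are placeholders):
# http backend settings via environment variables
export TF_HTTP_ADDRESS="https://gitlab.example.com/api/v4/projects/123/terraform/state/my-state"
export TF_HTTP_LOCK_ADDRESS="${TF_HTTP_ADDRESS}/lock"
export TF_HTTP_UNLOCK_ADDRESS="${TF_HTTP_ADDRESS}/lock"
export TF_HTTP_USERNAME="ci-user"
export TF_HTTP_PASSWORD="${CI_JOB_TOKEN}"
# equivalent of a credentials block in .terraformrc: TF_TOKEN_<hostname>, dots replaced by underscores
export TF_TOKEN_app_terraform_io="${TFC_TOKEN}"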
Stephan Helas · 12 months ago (edited)
Hi,
there seems to be a new problem with vendoring. If I combine local and remote sources and run vendoring using tags, atmos fails with: failed to download package: relative paths require a module with a pwd. If I run two vendor pulls with each tag alone, it works, but if I vendor everything or use multiple tags, vendoring fails. This config worked back with atmos 1.105.
Working with only one tag at a time:
❯ rm -rf components/terraform
❯ atmos vendor pull --tags mixins
INFO Vendoring from 'vendor.yaml' for tags {mixins}
✓ mixins/context/v0.2 (v2.0.0)
✓ mixins/context/v1.0 (v2.4.1)
✓ mixins/context/v1.1 (v2.11.1)
✓ mixins/context/v1.2 (v2.20.2)
✓ mixins/context/v2.0 (v2.21.0)
✓ mixins/migration-vault-provider/v1.0 (v2.17.1)
✓ mixins/migration-account-global/v1.0 (v2.17.1)
Vendored 7 components.
❯ atmos vendor pull --tags svc-lb
INFO Vendoring from 'vendor.yaml' for tags {svc-lb}
✓ svc-lb/v0.1 (v2.25.0)
✓ svc-lb/mixins/context/v2.0
Vendored 2 components.
Not working when both tags combined:
❯ rm -rf components/terraform
❯ atmos vendor pull --tags mixins,svc-lb
INFO Vendoring from 'vendor.yaml' for tags {mixins, svc-lb}
✓ mixins/context/v0.2 (v2.0.0)
✓ mixins/context/v1.0 (v2.4.1)
✓ mixins/context/v1.1 (v2.11.1)
✓ mixins/context/v1.2 (v2.20.2)
✓ mixins/context/v2.0 (v2.21.0)
✓ mixins/migration-vault-provider/v1.0 (v2.17.1)
✓ mixins/migration-account-global/v1.0 (v2.17.1)
✓ svc-lb/v0.1 (v2.25.0)
x svc-lb/mixins/context/v2.0 Failed to vendor svc-lb/mixins/context/v2.0: error : failed to download package: relative paths require a module with a pwd
Vendored 8 components. Failed to vendor 1 components.
svc-lb has two sources: one is a git::https url, the second (the one that fails) is the local file from the mixin folder.
This problem seems to be related to some TUI changes.
https://github.com/cloudposse/atmos/blob/v1.163.0/internal/exec/vendor_utils.go#L418
PS: Please cut back on visual gimmicks that break stuff.
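For reference, a sketch of the mixed local/remote layout being described (component names, tags, URLs and paths below are illustrative, not the actual config):
spec:
  sources:
    - component: "svc-lb/v0.1"
      source: "git::https://git.example.com/infra/terraform-svc-lb.git?ref={{.Version}}"
      version: "v2.25.0"
      targets: ["components/terraform/svc-lb/v0.1"]
      tags: ["svc-lb"]
    - component: "svc-lb/mixins/context/v2.0"
      source: "mixins/context/v2.0"           # local path, relative to the repo root
      targets: ["components/terraform/svc-lb/v0.1/mixins"]
      tags: ["svc-lb"]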
RB · 12 months ago
While the visual gimmicks are easy to point to as the culprit, the main reason is a code cleanup and refactoring, coupled with more developers working in the codebase and insufficient test cases. While we are adding a lot more tests along the way, we sadly don't have all the possible scenarios covered.
Do you folks have code coverage thresholds? If not, would you be open to adding them to increase atmos stability?
E.g. if the code coverage threshold is 20% and current coverage is 22%, and a new dev proposed code without tests (or not enough tests) that dropped coverage to 19%, a required check would show up as a red X.
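For reference, a coverage floor of the kind described is typically enforced with something like this, assuming Codecov is the coverage tool (all numbers are illustrative):
# codecov.yml (sketch)
coverage:
  status:
    project:
      default:
        target: 20%     # fail the status check if total coverage drops below 20%
        threshold: 1%   # tolerate small per-PR fluctuations
    patch:
      default:
        target: 50%     # require new/changed lines to be at least 50% covered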
Michael · 12 months ago
For the !terraform.output YAML function, is it possible to replicate this functionality using the function?
firewall_subnet_ids = local.vpc_outputs.named_private_subnets_map[var.firewall_subnet_name]
For example, if the firewall_subnet_name is "firewall", would the YAML function in the stack look like:
firewall_subnet_ids: !terraform.output vpc .named_private_subnets_map.firewall
Erik Osterman (Cloud Posse) · 12 months ago
The last field is a YQ expression.
Erik Osterman (Cloud Posse) · 12 months ago
Yes that looks right to me
Erik Osterman (Cloud Posse) · 12 months ago
Note, using terraform outputs slows things down considerably. The recommendation is to use !store, which is lightning fast because it doesn't rely on access to terraform state.
Michael Dizon · 12 months ago
Just seeing if anyone else is running into this.
Something seems to have changed between v1.162.1 and v1.163.0 that is forcing me to pass in atmos_cli_config_path into any remote_state module, otherwise it fails with this error (I don't have an orgs subdir):
/Users/xxxx/Code/xxx/atmos-xxx/stacks/orgs/**/*.yaml
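For reference, the workaround being described looks roughly like this in HCL (module version and the relative path are illustrative):
# sketch: pass the atmos.yaml location explicitly to the remote-state module
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"                      # illustrative version

  component = "vpc"

  # workaround: point the module at the directory containing atmos.yaml
  atmos_cli_config_path = "${path.module}/../../.."

  context = module.this.context
}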
cricketsc · 12 months ago
Is it possible to look up backend info in the {{ }} template syntax?