13 messages
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
Brian Ojeda over 4 years ago
# stacks/wf.yaml
workflows:
  plan-all:
    description: Run 'terraform plan' and 'helmfile diff' on all components for all stacks
    steps:
      - job: terraform plan vpc
        stack: ue2-dev
      - job: terraform plan eks
        stack: ue2-dev
      - job: helmfile diff nginx-ingress
        stack: ue2-dev
      - job: terraform plan vpc
        stack: ue2-staging
      - job: terraform plan eks
        stack: ue2-staging
Should it be possible to run all jobs for a given workflow without passing the stack arg? e.g. run all defined jobs for all stacks: atmos workflow plan-all -f wf
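Newer atmos releases also document a stack override on the workflow command itself; a hedged sketch (whether -s applies to workflows in your version is an assumption to verify):

atmos workflow plan-all -f wf              # each step runs against the stack named in the step
atmos workflow plan-all -f wf -s ue2-dev   # assumption: -s forces every step onto one stack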
Brian Ojeda over 4 years ago
Are there any examples of using atmos with helm/helmfile? Something similar to the terraform and atmos tutorial?
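For context, a hedged sketch of what a helmfile component could look like in a stack file, following the same layout as the terraform components above (the component name and vars are assumptions):

components:
  helmfile:
    nginx-ingress:
      vars:
        installed: true

which would then be driven the same way as the terraform components, e.g. atmos helmfile diff nginx-ingress -s ue2-dev.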
Neeraj Mittal over 4 years ago
how do you suggest mounting the Keybase filesystem inside the geodesic container?
Alan Cox over 4 years ago
our authentication to AWS is managed through AWS SSO. our credentials have three parts:
• aws_access_key_id
• aws_secret_access_key
• aws_session_token
all the auth examples i've seen for geodesic and atmos rely on aws-vault, and (from what i can tell) aws-vault only supports authentication with aws_access_key_id and aws_secret_access_key.
is it possible to use cloudposse's tools in this case?
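For reference, a minimal sketch of a standard AWS SSO profile in ~/.aws/config, which AWS CLI v2 resolves natively (all values below are placeholders):

[profile dev-sso]
sso_start_url  = https://example.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111111111111
sso_role_name  = AdministratorAccess
region         = us-east-1

Newer aws-vault releases can read profiles defined this way as well, though whether that helps here depends on the aws-vault version shipped in the geodesic image.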
Alan Cox over 4 years ago (edited)
is geodesic expected to keep aws-vault credentials in between sessions?
i would think that geodesic would maintain the aws-vault credentials from one session to the next.
localhost $> geodesic
geodesic $> export AWS_VAULT_BACKEND=file
geodesic $> aws-vault list # as i expected, returns one profile that has no credentials and no sessions
geodesic $> aws-vault add wispmaster.root # Added credentials to profile "wispmaster.root" in vault
geodesic $> aws-vault list # as i expected, returns one profile that has credentials but no sessions
geodesic $> exit
localhost $> geodesic
geodesic $> export AWS_VAULT_BACKEND=file
geodesic $> aws-vault list # not what i expect ... returns one profile that has no credentials and no sessions
Erik Osterman (Cloud Posse) over 4 years ago
It mounts your home directory for caching
Erik Osterman (Cloud Posse) over 4 years ago
I would explore the .aws folder to see what’s there
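One way to check what actually persists across sessions (a sketch; the paths assume aws-vault's file backend default of ~/.awsvault/keys and may differ in your image):

geodesic $> ls -la ~/.awsvault/keys   # encrypted credential store used by the file backend
geodesic $> ls -la ~/.aws             # profiles and config
geodesic $> echo $HOME                # confirm which directory is bind-mounted from the host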
Erik Osterman (Cloud Posse) over 4 years ago
Geodesic is just a Docker image. However you run the container determines the behavior. So I would think about it more in terms of how you would accomplish it with Docker rather than how we do it in geodesic. It would be identical. We bind mount volumes into the container, and so long as all the paths are correct, it will work as expected.
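A minimal sketch of that idea applied to the Keybase question above, assuming Keybase is mounted at /keybase on the host (image tag and paths are assumptions):

localhost $> docker run -it --rm \
    --volume "$HOME:/localhost" \
    --volume /keybase:/keybase \
    cloudposse/geodesic:latest-debian --login

Anything written under a bind-mounted path survives the container; anything written anywhere else is gone on exit.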
Markus Muehlberger over 4 years ago
I think I’ve finally wrapped my head around the stack-based approach to SweetOps. The only thing I’m missing is the recommended way of having different versions of a component (e.g. for prod and dev) in the infrastructure repo (in a March thread, I read that this is possible). My understanding is that components don’t get imported on the fly by Terraform but are synced regularly with vendir (once that’s usable).
I want to avoid provisioning a component that I want to use in the dev/staging account in production because stacks trigger automatically.
Once again, thanks for all the amazing work you do around that topic! 🚀
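One pattern that fits this (a sketch under assumptions: the component: attribute for pointing a stack at a specific component folder should be verified against your atmos version, and the folder names are made up): vendor two copies of the component and have each environment's stack reference its own copy, so promoting a new version becomes an explicit per-stack change.

# dev stack file
components:
  terraform:
    eks:
      component: eks-v2   # newer vendored copy; only dev points here

# prod stack file
components:
  terraform:
    eks:
      component: eks      # existing vendored copy stays pinned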
Cody Halovich over 4 years ago
hey folks, how can I get an s3 backend configured using atmos? My stack file looks like the following. I do not have a provider or a terraform block defined in the tf files.
Cody Halovich over 4 years ago
terraform:
  vars:
    stage: "dev"
    namespace: "gfca"
    region: "ca-central-1"

components:
  terraform:
    setup:
      name: "terraform"
    vpc:
      backend:
        s3:
          bucket: "gfca-dev-terraform-state"
          key: "state/vpc"
          workspace_key_prefix: "vpc"
      vars:
        name: "vpc"
        availability_zones: ['ca-central-1a', 'ca-central-1b']
Cody Halovich over 4 years ago (edited)
whenever I run with atmos terraform apply -s ^^THISONE^^, it gives me local state. if i add terraform { backend "s3" } to my tf files, it won't honor the backend configuration and prompts me for a bucket name and a key.
Cody Halovich over 4 years ago (edited)
i've run the tfstate project and copy-pasted the backend.tf into each module as a workaround for now. The atmos docs say I can provide the backend inline in the stacks file; it would be great if somebody could help determine why that's not working for me.
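For anyone hitting the same wall, one common arrangement (a sketch, not a confirmed fix for this setup; whether atmos fills in the values automatically depends on the atmos version and its backend-generation settings): declare an empty s3 backend in the component so Terraform knows the backend type, and let the actual settings come from the stack file at init time.

terraform {
  backend "s3" {}
}

With the block left empty, terraform init expects bucket/key/region to arrive as backend configuration, which is what the stack-level backend section above is meant to supply.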