53 messages
Dhamodharanover 2 years ago
Hello all,
I am new to GCP. I have a few resources created under one project (project-A), and now I'm trying to create resources under another project (project-B), but I'm connecting to the same backend state.
The issue is that the service account in project-B is unable to refresh the state file to get the resource details in project-A; it's giving me a permission-denied error.
How can I set permissions for the service account in project-B to access the resources in project-A?
When I run terraform apply, it gives this error:
│ Error: Error when reading or editing ComputeNetwork "projects/project-A/global/networks/wazuh-siem-vpc-01": Get "https://compute.googleapis.com/compute/v1/projects/project-A/global/networks/wazuh-siem-vpc-01?alt=json": impersonate: status code 403: {
│ "error": {
│ "code": 403,
│ "message": "Permission 'iam.serviceAccounts.getAccessToken' denied on resource (or it may not exist).",
│ "status": "PERMISSION_DENIED",
│ "details": [
│ {
│ "@type": "type.googleapis.com/google.rpc.ErrorInfo",
│ "reason": "IAM_PERMISSION_DENIED",
│ "domain": "iam.googleapis.com",
│ "metadata": {
│ "permission": "iam.serviceAccounts.getAccessToken"
│ }
│ }
│ ]
│ }
│ }
Can someone help me fix this?
Thanks
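The 403 on iam.serviceAccounts.getAccessToken usually means the caller lacks roles/iam.serviceAccountTokenCreator on the service account being impersonated. A minimal sketch of the grants, assuming hypothetical service-account emails:

```hcl
# Allow project-B's Terraform identity to mint tokens for the SA it impersonates.
# Both email addresses below are hypothetical placeholders.
resource "google_service_account_iam_member" "allow_impersonation" {
  service_account_id = "projects/project-A/serviceAccounts/terraform@project-A.iam.gserviceaccount.com"
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "serviceAccount:terraform@project-B.iam.gserviceaccount.com"
}

# The impersonated SA also needs read access to the network in project-A.
resource "google_project_iam_member" "network_viewer" {
  project = "project-A"
  role    = "roles/compute.networkViewer"
  member  = "serviceAccount:terraform@project-A.iam.gserviceaccount.com"
}
```

After the grant propagates (it can take a minute or two), terraform plan/apply should be able to refresh the project-A resources again.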
Hao Wangover 2 years ago(edited)
To be honest, it is disgusting that HashiCorp changed the license, and apparently it is looking to sell itself for a big price. OpenTF is one way for the community to fight against the venture capital behind the deal. The other may be to find an alternative to Terraform. My 2 cents, one of the few alternatives: ArgoCD + Crossplane. If you or your company are already working with them, it would be a good chance to stand up against tyrannies of technologies. 👍️
Hao Wangover 2 years ago
The alternative to Consul -> etcd, which is battle-tested by Kubernetes
Hao Wangover 2 years ago
No potential alternative to Vault afaik, but expecting one will rise soon 🙂
Hao Wangover 2 years ago
qq: is Terragrunt treated as a competitor under the new Terraform license?
Yaakov Amarover 2 years ago
Hi, are you familiar with a tool/GitHub Action that posts a small terraform plan output as part of a CI/CD pipeline?
Michaelover 2 years ago
Heya! I'm looking at the cloudposse/terraform-aws-eks-node-group module and trying to understand, how does it do the user-data stuff for Bottlerocket to work? Does it still drop in the userdata.tpl bash script or does it have some sort of bypass somewhere and generate some sort of TOML?
Dhamodharanover 2 years ago
Hi Team,
I am facing challenges with Packer, can someone help?
I am trying to build an AMI using the previously built AMI as the source. On top of that AMI, I want to install some additional packages, and some folders need to be created, but when I run the packer command a second time it fails because the folder already exists.
Is there a way to check for the folder and, if it already exists, skip that provisioner step?
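Provisioners rerun from scratch on every build, so the usual fix is to make each step idempotent rather than conditionally skipping the whole provisioner. A rough sketch in Packer HCL, with hypothetical source and folder names:

```hcl
build {
  sources = ["source.amazon-ebs.base"] # hypothetical source name

  provisioner "shell" {
    inline = [
      # -p: succeeds even if the folder already exists (idempotent)
      "sudo mkdir -p /opt/myapp",
      # or an explicit existence check before creating
      "test -d /opt/myapp/data || sudo mkdir /opt/myapp/data",
    ]
  }
}
```

The same idea applies to package installs: most package managers are already safe to re-run (yum install -y is a no-op if the package is present).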
Kuba Martinover 2 years ago
Hey! Happy to announce that the OpenTF repository is now public 😄
https://github.com/opentffoundation/opentf
Vitaliover 2 years ago
Hi Team,
I am currently working on a project where I need peering between different AWS regions within the same account. We used to use cloudposse/vpc-peering/aws, which sadly cannot connect different regions. I tried to upgrade the code to use the cloudposse/vpc-peering-multi-account/aws module and ended up with the error: Error: query returned no results. Please change your search criteria and try again. The VPCs and subnets already exist; they are created with a different module in a different deployment.
I will add code snippets to this thread. I really hope you can help me with this.
PePe Amengualover 2 years ago
We are moving Atlantis to the CNCF!!! please take a minute to give a thumbs up https://github.com/cncf/sandbox/issues/60
Leandro Lamaisonover 2 years ago
Hello. I found the SweetOps Slack while trying to create AWS Iam Identity Center permission sets using https://github.com/cloudposse/terraform-aws-sso/tree/main/modules/permission-sets with Terragrunt. Running a plan just shows:
No changes. Your infrastructure matches the configuration. It seems the inputs block in terragrunt.hcl does not have any effect. How can I use this module with Terragrunt?
rssover 2 years ago(edited)
v1.5.7
1.5.7 (September 7, 2023)
BUG FIXES:
terraform init: Terraform will no longer allow downloading remote modules to invalid paths. (#33745)
terraform_remote_state: prevent future possible incompatibility with states which include unknown check block result kinds. (#33818)
Muhammad Taqiover 2 years ago
Hey folks, I'm trying to create a public S3 bucket, so objects can be read by the public (read-only) with write access via keys. Below is my code. After bucket creation I cannot access the objects via the object URL.
What's wrong here?
module "s3_public_bucket" {
source = "cloudposse/s3-bucket/aws"
version = "4.0.0"
name = "${var.name}-${var.environment}-assets"
s3_object_ownership = "BucketOwnerEnforced"
acl = "public-read"
enabled = true
user_enabled = false
versioning_enabled = false
ignore_public_acls = false
block_public_acls = false
block_public_policy = false
force_destroy = true
sse_algorithm = "AES256"
allow_encrypted_uploads_only = true
allow_ssl_requests_only = true
cors_configuration = [
{
allowed_origins = ["*"]
allowed_methods = ["GET", "HEAD", ]
allowed_headers = ["*"]
expose_headers = []
max_age_seconds = "3000"
}
]
allowed_bucket_actions = [
"s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:ListObjects", "s3:ListMultipartUploadParts", "s3:PutObject",
"s3:PutObjectTagging", "s3:GetObject", "s3:GetObjectVersion", "s3:GetObjectTagging", "s3:AbortMultipartUpload",
"s3:ReplicateObject", "s3:RestoreObject", "s3:BatchDelete", "s3:DeleteObject", "s3:DeleteObjectVersion",
"s3:DeleteMultipleObjects", "s3:*"
]
lifecycle_configuration_rules = []
}
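One likely culprit: s3_object_ownership = "BucketOwnerEnforced" disables ACLs entirely, so acl = "public-read" has no effect, and public reads then have to come from a bucket policy instead. A rough, untested sketch (the bucket-ARN interpolation is illustrative):

```hcl
data "aws_iam_policy_document" "public_read" {
  statement {
    sid       = "PublicReadGetObject"
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::${var.name}-${var.environment}-assets/*"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
  }
}

module "s3_public_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "4.0.0"

  name                    = "${var.name}-${var.environment}-assets"
  s3_object_ownership     = "BucketOwnerEnforced"
  acl                     = null  # ACLs are ignored under BucketOwnerEnforced
  block_public_policy     = false
  restrict_public_buckets = false
  source_policy_documents = [data.aws_iam_policy_document.public_read.json]
  # ... remaining inputs as in the original snippet
}
```

Note also that allow_ssl_requests_only = true means the object URL must be the https:// form.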
Iulian Diminencoover 2 years ago
why "s3:*" at the end?
Elad Leviover 2 years ago
I would appreciate it if you could take a look at the PR.
It's for firewall-manager - shield_advanced.tf
@Dan Miller (Cloud Posse)
Rishavover 2 years ago(edited)
Hey folks, we open-sourced a li'l reusable GitHub Action workflow to run Terraform commands via PR comments: for a CLI-like experience on the web. To demo how we use the workflow:
1st PR comment: For example, let's plan this configuration from a given directory in the 'dev' workspace with a variable file.
-terraform=plan -chdir=stacks/sample_instance -workspace=dev -var-file=env/dev.tfvars
2nd comment: After reviewing the plan output, apply the above planned configuration.
-terraform=apply -chdir=stacks/sample_instance -workspace=dev -var-file=env/dev.tfvars
3rd comment: To clean up afterwards, let's plan a targeted destruction of resources in the same configuration.
-terraform=plan -destroy -target=aws_instance.sample,data.aws_ami.ubuntu -chdir=stacks/sample_instance -workspace=dev -var-file=env/dev.tfvars
4th comment: After reviewing the destructive plan output, apply the above planned configuration.
-terraform=apply -destroy -target=aws_instance.sample,data.aws_ami.ubuntu -chdir=stacks/sample_instance -workspace=dev -var-file=env/dev.tfvars
Internally, we found it ideal for DevOps/platform engineers to promote self-service of infra-as-code without the overhead of self-hosted runners or VMs like Atlantis.
There are some other quality-of-life improvements to trigger the workflow more seamlessly, but we would be stoked to have your usage feedback/consideration or edge cases that have yet to be patched. Cheers!
PePe Amengualover 2 years ago
I need to setup Aws Organization, SSO, etc and I have been looking at AFT, account factory, control tower and I wonder if I should go with AFT or just do everything independently ? 🧵
DaniC (he/him)over 2 years ago
Very interesting move by Google https://cloud.google.com/infrastructure-manager/docs/overview
Thomas Bergmannover 2 years ago
Hi, does anyone have a proper solution for building a standard 3-tier (public/private/data) architecture on AWS with cloudposse modules? "dynamic-subnets" just creates public/private; data is missing. "cloudposse/rds" does not create any subnets. "named-subnets" creates subnets only in a single AZ. A workaround would be to stack "dynamic-subnets" and "named-subnets" together in my own module.
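Another option is to instantiate dynamic-subnets twice against the same VPC, once for the public/private tiers and once (private-only) for the data tier. A rough, untested sketch; the version pin and CIDR carving are illustrative:

```hcl
module "app_subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "2.4.1" # hypothetical pin

  vpc_id          = module.vpc.vpc_id
  igw_id          = [module.vpc.igw_id]
  ipv4_cidr_block = ["10.0.0.0/18"] # public + private tiers
  context         = module.this.context
}

module "data_subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "2.4.1"

  attributes             = ["data"] # distinct names/tags for the data tier
  public_subnets_enabled = false    # private-only subnets for RDS etc.
  vpc_id                 = module.vpc.vpc_id
  ipv4_cidr_block        = ["10.0.64.0/18"]
  context                = module.this.context
}
```

The data-tier subnet IDs can then be fed to cloudposse/rds as subnet inputs.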
rssover 2 years ago(edited)
v1.6.0-beta1
1.6.0-beta1 (August 31, 2023)
UPGRADE NOTES:
On macOS, Terraform now requires macOS 10.15 Catalina or later; support for previous versions has been discontinued.
On Windows, Terraform now requires at least Windows 10 or Windows Server 2016; support for previous versions has been discontinued.
The S3 backend has a number of significant changes to its configuration format in this release, intended to match with recent changes in the hashicorp/aws provider:
Configuration settings related to assuming IAM...
rssover 2 years ago(edited)
v1.6.0-beta2
1.6.0-beta2 (September 14, 2023)
UPGRADE NOTES:
On macOS, Terraform now requires macOS 10.15 Catalina or later; support for previous versions has been discontinued.
On Windows, Terraform now requires at least Windows 10 or Windows Server 2016; support for previous versions has been discontinued.
The S3 backend has a number of significant changes to its configuration format in this release, intended to match with recent changes in the hashicorp/aws provider:
Configuration settings related to assuming...
Matthew Jamesover 2 years ago
We're doing a pretty big refactor of an old legacy prod environment and moving to null-label for everything because of the obvious advantages. However, for prod resources where AWS doesn't allow renaming without destroying them (which isn't practical), does cloudposse have a pattern for managing that with null-label? The trivial thing is for us to just add ignore_changes and import the old resources, but that means we basically have to commit ignore_changes to the resources in any modules we have with context embedded, which doesn't seem ideal. Is it just a tough sh*t type thing, or is there some pattern that you guys have found works best? The previous team was super loose with naming conventions, so they don't give nice consistent patterns.
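One pattern that can avoid ignore_changes: pin the legacy name per resource through the label inputs, so null-label reproduces the existing ID instead of generating a new one. A hedged sketch; the legacy name and label settings are illustrative:

```hcl
module "legacy_db_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  context          = module.this.context
  name             = "OldWeirdProdName" # exact legacy name, hypothetical
  label_order      = ["name"]           # emit only the pinned name as the id
  label_value_case = "none"             # don't normalize the legacy value's case
}
```

The resulting module.legacy_db_label.id then matches the imported resource, while tags and other context still flow through normally.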
DaniC (he/him)over 2 years ago
Hi folks,
I wonder if anyone has a few tips for working with and debugging complex data structures?
Console doesn't always help, as you can't use dynamic blocks or for_each, not to mention some basic features are missing (https://github.com/paololazzari/terraform-repl).
Do you have your own dummy/fake module that is generic enough to let you try things out without having to create real resources in cloud envs?
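For poking at data transforms, a provider-free scratch root module is handy: only variables, locals, and outputs, so terraform plan runs anywhere without credentials. A minimal sketch:

```hcl
# scratch/main.tf - no providers, no resources; safe to plan with no cloud access
variable "certs" {
  type = list(object({
    name = string
    sans = list(string)
  }))
  default = [
    { name = "foo.example.com", sans = ["www.foo.example.com"] },
  ]
}

locals {
  # the transform under test: list -> map keyed by name
  certs_by_name = { for c in var.certs : c.name => c }
}

output "debug" {
  value = local.certs_by_name
}
```

Running terraform plan (or terraform console in that directory) then shows the evaluated structure, and you can iterate on the expression freely.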
Maheshover 2 years ago
All, wondering if the sns topic module does or does not support email subscriptions?
jaysunover 2 years ago
hi, does anyone here have any good resources for advanced Terraform? How do I bring my environment to the next level is my question, I guess.
IKover 2 years ago
hey all.. we have a gap in our module registry.. whilst it’s pretty easy to find a module, it can be a little convoluted on just exactly how to consume the module.. we use tf-docs to generate documentation but thinking it’d be helpful if there was like a dummy repo/branch with an example? in which case, could you use a long-running branch (per module repo) with a set of tfvars that goes off and does a plan/apply then destroy to ensure any code changes don’t break the module? could you then also use this to test for any breaking changes to the module? keen to hear how others are tackling this..
rssover 2 years ago(edited)
v1.6.0-beta3
1.6.0-beta3 (September 20, 2023)
UPGRADE NOTES:
On macOS, Terraform now requires macOS 10.15 Catalina or later; support for previous versions has been discontinued.
On Windows, Terraform now requires at least Windows 10 or Windows Server 2016; support for previous versions has been discontinued.
The S3 backend has a number of significant changes to its configuration format in this release, intended to match with recent changes in the hashicorp/aws provider:
Configuration settings related to assuming...
Hao Wangover 2 years ago
Linux Foundation Launches OpenTofu: A New Open Source Alternative to Terraform, https://www.linuxfoundation.org/press/announcing-opentofu
Doug Berghover 2 years ago(edited)
I'm trying to create an AWS Aurora cluster with MySQL 5.7 using cloudposse terraform-aws-rds-cluster. engine = "aurora-mysql" with family = "aurora-mysql5.7" gives the error DBParameterGroupFamily aurora-mysql5.7 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora-mysql8.0. Is it no longer possible to create a MySQL 5.7 cluster? Also, I'd like to see what AWS API call Terraform is making. I've turned on logging with TF_LOG_PROVIDER=TRACE, but that only shows the response, not the request. Any insight is appreciated!
Hao Wangover 2 years ago
Matthew Jamesover 2 years ago
Does Cloudposse publish like a module guidelines on how best to write the wrapper logic around the count? Like i see stuff like:
//
vpc_id = aws_vpc.default[0].id
vpc_id = join("",aws_vpc.default.*.id)
vpc_id = (other method of getting a count value)
I can't really pick up the pattern of when a join makes more sense vs [0].x etc. I was wondering if there was a guide so I could keep any modules we write internally matching the style guide of the rest.
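For what it's worth, the patterns differ mainly in what they do when count is 0; a sketch of the trade-offs (version notes are from memory, treat as approximate):

```hcl
# All three yield the VPC id when count = 1, but behave differently at count = 0:
output "vpc_id_index" {
  value = aws_vpc.default[0].id # errors when count = 0
}

output "vpc_id_join" {
  value = join("", aws_vpc.default[*].id) # "" when count = 0 (older idiom)
}

output "vpc_id_one" {
  value = one(aws_vpc.default[*].id) # null when count = 0 (Terraform >= 0.15)
}
```

The join("") form predates one() and survives in many older cloudposse modules, which may explain the mix of styles.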
fotagover 2 years ago(edited)
Hi all! Does anyone know if terraform fmt -recursive can be enforced somehow to run locally (not in CI), without depending on someone manually running the command or installing and running it in a pre-commit git hook?
Jeff Tribbleover 2 years ago
Hey folks! I'm looking to help out with https://github.com/cloudposse/terraform-aws-ecs-container-definition. There are some open PRs related to ECS service connect that I'd love to help get merged. Can anyone help set me up as a contributor?
muhahaover 2 years ago(edited)
hey 👋 question: how do I use a different AWS_PROFILE (or AWS_ROLE_ARN) for the aws provider vs. for the s3 backend? TL;DR I want DRY code and the ability to run both locally and from CI. It seems the backend does not support env variables. Thanks
EDIT: of course its doable with terragrunt and conditions, but i am switching terragrunt for terramate
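Since backend blocks can't read env vars, the usual DRY approach is partial backend configuration: keep shared settings in code and pass the per-environment auth bits at init time. A sketch; bucket, key, profile, and role values are placeholders:

```hcl
terraform {
  backend "s3" {
    # auth-related settings intentionally omitted; supplied at init, e.g.:
    #   local: terraform init -backend-config="profile=dev-admin"
    #   CI:    terraform init -backend-config="role_arn=arn:aws:iam::111111111111:role/ci"
    bucket = "example-tfstate" # placeholder
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  # the provider, unlike the backend, honors AWS_PROFILE / AWS_ROLE_ARN env vars
  region = "us-east-1"
}
```

This way the same code runs locally and in CI, with only the init command differing.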
RickAover 2 years ago
I'm having trouble with the Spacelift space-admin component and getting the yaml config setup properly. I'm using the examples from https://docs.cloudposse.com/components/library/aws/spacelift/spaces/, but am hitting an error I can't figure out (in thread).
muhahaover 2 years ago(edited)
Can anyone help how to use for_each in this case ? The problem is that I need to reference both modules so for_each fails to cyclic dependency. ( It does work for single certificate and domain validation, but I am not sure how to handle this with list of maps and for_each ) Thanks
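One workaround that usually breaks the cycle: iterate the same static local in both modules and reference module.acm by key, so neither for_each depends on the other module's whole value. A rough, untested sketch against the snippet below:

```hcl
module "route53_records" {
  for_each = { for c in local.certificates : c.name => c } # static keys, not module.acm

  source  = "terraform-aws-modules/acm/aws"
  version = "~> 4.0"

  providers = {
    aws = aws.organization
  }

  create_certificate          = false
  create_route53_records_only = true
  zone_id                     = local.environments.organization.route53_zone_id

  # attribute-level references; the for_each itself no longer depends on module.acm
  distinct_domain_names                     = module.acm[each.key].distinct_domain_names
  acm_certificate_domain_validation_options = module.acm[each.key].acm_certificate_domain_validation_options
}
```

The acm module's validation_record_fqdns can then reference module.route53_records[each.key] the same way.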
locals {
certificates = [{
name = "foo.bar.com"
sans = ["foo.bar.com"]
}]
}
module "acm" {
for_each = { for c in local.certificates : c.name => c }
source = "terraform-aws-modules/acm/aws"
version = "~> 4.0"
domain_name = each.value.name
subject_alternative_names = each.value.sans
create_route53_records = false
validation_record_fqdns = module.route53_records[each.value.name].validation_route53_record_fqdns < --------------- THIS ?
}
module "route53_records" {
for_each = module.acm
source = "terraform-aws-modules/acm/aws"
version = "~> 4.0"
providers = {
aws = aws.organization
}
create_certificate = false
create_route53_records_only = true
distinct_domain_names = each.value.distinct_domain_names
zone_id = local.environments.organization.route53_zone_id
acm_certificate_domain_validation_options = each.value.acm_certificate_domain_validation_options
}
Johnmaryover 2 years ago
Hey folks, has anyone used Terraform to handle the AWS RDS blue/green approach to upgrade a MySQL RDS from 5.7 to 8.0?
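The AWS provider has native support for this on aws_db_instance: with blue_green_update enabled, version changes roll out via an RDS blue/green deployment. A minimal sketch; identifier and versions are illustrative:

```hcl
resource "aws_db_instance" "mysql" {
  identifier     = "app-db" # hypothetical
  engine         = "mysql"
  engine_version = "8.0.33" # bumping from a 5.7.x version triggers the upgrade
  instance_class = "db.t3.medium"

  blue_green_update {
    enabled = true # perform the change as an RDS blue/green deployment
  }
}
```

Note RDS blue/green has prerequisites of its own (e.g. binary logging/backups enabled on the source), so check those before flipping the version.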
Jim Stallingsover 2 years ago
I have a cloudposse sg module and an alb module paired, but when I change the name of the sg or an sg rule (even with create_before_destroy enabled) the load balancer is not updating with the new sg until the next run. Is there something wrong with this code?
module "alb_security_group" {
source = "cloudposse/security-group/aws"
version = "2.2.0"
attributes = ["lb", "sg"]
allow_all_egress = false
rules = [{
description = "Allow LB Traffic to ECS"
cidr_blocks = ["10.22.0.0/22"]
from_port = 80
protocol = "tcp"
self = "false"
to_port = 80
type = "egress"
}] # #var.lb_object.security_group_rules
vpc_id = var.lb_object.vpc_id
context = module.this.context
create_before_destroy = true
}
module "alb" {
source = "cloudposse/alb/aws"
version = "1.10.0"
vpc_id = var.lb_object.vpc_id
security_group_ids = [ module.alb_security_group.id ]
...
Alex Atkinsonover 2 years ago
RDS Blue-Green Updates & TF State desync.
When applying an attribute change to a primary RDS instance with a read replica, such as the CA, the instance blue-greens and also replaces the replica. Afterward, if you try to apply a change to the replica, it attempts to act against its defined resource, which is now named "-old1", as it's been kept around after the blue-green sequence from AWS. To correct this, I updated the state file to point at the correct endpoint and resource ID...
Has anyone seen this, or have a better approach to fixing it? Terraform refresh wouldn't do it, since it would still hit the resource id of the "-old1" instance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html
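On Terraform >= 1.5, one way to re-sync without hand-editing state is to drop the stale address and re-import the surviving instance with an import block; a sketch with hypothetical address and identifier:

```hcl
# First, remove the stale binding (it points at the "-old1" instance):
#   terraform state rm aws_db_instance.replica
# Then re-import the live instance under the same address and plan/apply:
import {
  to = aws_db_instance.replica
  id = "my-app-replica" # hypothetical DB instance identifier
}
```

This keeps the state surgery declarative and reviewable instead of editing the state file directly.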
Jeff Tribbleover 2 years ago
Hey friends, could use a review on https://github.com/cloudposse/terraform-aws-ecs-container-definition/pull/169 when someone has time!
Nate McCurdyover 2 years ago
Hi all, I could use some guidance on the best way to use the CloudPosse AWS Lambda module for my use case: https://github.com/cloudposse/terraform-aws-lambda-function
Here's the situation:
• We'll use Terraform to create the Lambda infrastructure and related resources in AWS (buckets, IAM roles/policies, etc...)
• An application deployment pipeline will bundle the Lambda function code into a .zip and drop it into S3.
• That .zip doesn't exist yet, but I need to create the infrastructure.
• Application deployments should ideally not need to involve Terraform, but just drop a new object into the well-known S3 path for the .zip file.
Is this possible? Can the cloudposse module handle creating a Lambda where the s3_bucket exists but the s3_key doesn't yet? I just want to prep the infrastructure to be used later.
rssover 2 years ago
v1.6.0-rc1
1.6.0-rc1 (September 27, 2023)
UPGRADE NOTES:
On macOS, Terraform now requires macOS 10.15 Catalina or later; support for previous versions has been discontinued.
On Windows, Terraform now requires at least Windows 10 or Windows Server 2016; support for previous versions has been discontinued.
The S3 backend has a number of significant changes to its configuration format in this release, intended to match with recent changes in the hashicorp/aws provider:
Configuration settings related to assuming...
Akın Tekeoğluover 2 years ago
Hi, how can I use the sqs module? It is asking me to set up the atmos config file. Is there a way to do it without Atmos?
Alex Atkinsonover 2 years ago
Note that I've been seeing issues with RDS instances with read replicas having their CA updated and then having that CA revert to the one expiring in 2024. The issue and solution are described here for anyone experiencing it.
https://github.com/hashicorp/terraform-provider-aws/issues/33546#issuecomment-1739777113
vamshiover 2 years ago(edited)
Hi everyone, we are trying to use https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/0.76.0/main.tf. When I enable s3_origin_enabled = true, we get an issue at line number 210: data "aws_iam_policy_document" "combined". The error asks us to remove the Sid or make it unique ("please use a unique sid"). How can we fix this issue while using the same repo? Can anyone help?
Gabriela Campana (Cloud Posse)over 2 years ago
@Monica Hart
Monica Hartover 2 years ago
Hello! Just wondering, is the practice of using this setup.rpm.sh mentioned in our bastion component we received in order to grab user key information still in use?
The issue I'm having is that I'm not seeing chamber get installed when we deploy new bastion hosts.
#!/bin/bash
curl -1sLf 'https://dl.cloudsmith.io/public/cloudposse/packages/setup.rpm.sh' | sudo -E bash
yum install -y chamber
for user_name in $(chamber list ${component_name}/ssh_pub_keys | cut -d$'\t' -f1 | tail -n +2);
do
groupadd $user_name;
useradd -m -g $user_name $user_name
mkdir /home/$user_name/.ssh
chmod 700 /home/$user_name/.ssh
cd /home/$user_name/.ssh
touch authorized_keys
chmod 600 authorized_keys
chamber read ${component_name}/ssh_pub_keys $user_name -q > authorized_keys
chown $user_name:$user_name -R /home/$user_name
done
echo "-----------------------"
echo "END OF CUSTOM USER DATA"
echo "-----------------------"