74 messages
Grubholdover 4 years ago
Hi folks, have any of you used Terraform to bootstrap IAM roles for a new AWS account? We’re deploying an environment using CloudPosse’s Terraform modules, but we also need to bootstrap the roles for each new account. It would be great if you could point me to some information about this.
AugustasVover 4 years ago
dimensions = {
  TargetGroup  = var.tg_element_number != "" ? aws_lb_target_group.lb_target_group[var.tg_element_number].arn_suffix : aws_lb_target_group.lb_target_group[count.index].arn_suffix
  LoadBalancer = data.aws_lb.lb.arn_suffix
}
got error
Error: Invalid index
on ../aws_cloudwatch_metric_alarm.tf line 94, in resource "aws_cloudwatch_metric_alarm" "alarm":
94: TargetGroup = var.tg_element_number != "" ? aws_lb_target_group.lb_target_group[var.tg_element_number].arn_suffix : aws_lb_target_group.lb_target_group[count.index].arn_suffix
|----------------
| aws_lb_target_group.lb_target_group is tuple with 2 elements
| var.tg_element_number is "8"
The given key does not identify an element in this collection value.
why is that?
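The error message itself shows the cause: aws_lb_target_group.lb_target_group is a tuple with 2 elements, so the only valid indices are 0 and 1, and the index "8" from var.tg_element_number is out of range. A hedged sketch of one way to fall back gracefully, using try() (available in Terraform ≥ 0.12.20; variable and resource names taken from the snippet above):

```hcl
# try() returns the first argument that evaluates without an error, so an
# out-of-range tg_element_number falls back to count.index instead of failing.
TargetGroup = try(
  aws_lb_target_group.lb_target_group[var.tg_element_number].arn_suffix,
  aws_lb_target_group.lb_target_group[count.index].arn_suffix
)
```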
Joaquin Menchacaover 4 years ago
There's a common pattern in cloud provisioning where you check whether the resource already exists and, if not, create it. But this doesn't seem possible to do in Terraform. What I would want is a data source that checks for the existence of the resource: if it exists, use that resource, otherwise create it. However, if the resource doesn't exist, the data source causes the whole run to exit non-zero. Thus you cannot have real idempotence with the behavior of data source.
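A sketch of the usual compromise, since Terraform offers no native "create if missing": make existence an explicit input instead of discovering it (the bucket example and variable name here are hypothetical):

```hcl
# The caller declares whether the resource already exists; the data source
# is only read in that case, so it can't fail on a missing resource.
variable "use_existing_bucket" {
  type    = bool
  default = false
}

data "aws_s3_bucket" "existing" {
  count  = var.use_existing_bucket ? 1 : 0
  bucket = "example-bucket"
}

resource "aws_s3_bucket" "new" {
  count  = var.use_existing_bucket ? 0 : 1
  bucket = "example-bucket"
}

locals {
  bucket_id = var.use_existing_bucket ? data.aws_s3_bucket.existing[0].id : aws_s3_bucket.new[0].id
}
```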
J Normentover 4 years ago
I’m thinking about using a cloud posse module for cloudfront-s3 in a production environment with pretty strict governance. (https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn) There is already an s3 module in the company that is certified and maintained. I’d like to ensure that I can use any arbitrary s3 bucket with this module. I think that I can, but I wanted to be sure before going too far down this path.
aimbotdover 4 years ago
Hey comrades, anyone familiar with setting up EKS with EMR via terraform?
Matt Gowieover 4 years ago
Not sure if this got talked about in #office-hours, but some awesome functionality coming to v1.1: Config-driven refactoring. Docs: https://github.com/hashicorp/terraform/blob/1ca10ddbe228f1a166063f907d5198f39e71bdef/website/docs/language/modules/develop/refactoring.html.md
Enables this type of HCL:
# Previous
# resource "aws_instance" "a" {
#   count = 2
#
#   # (resource-type-specific configuration)
# }

# New
resource "aws_instance" "b" {
  count = 2
  # (resource-type-specific configuration)
}

moved {
  from = aws_instance.a
  to   = aws_instance.b
}
RBover 4 years ago
when is 1.1 going to be released
rssover 4 years ago
v1.1.0-beta1
1.1.0 (Unreleased)
UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed...
Davidover 4 years ago
Hi fellow terraformers. I am currently working on moving to terraform v1.0.10 from 0.14 and have encountered an issue within terraform-aws-s3-log-storage.
Error: Invalid count argument
on .terraform/modules/this.s3.s3_bucket/main.tf line 163, in resource "aws_s3_bucket_policy" "default":
163: count = module.this.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || var.policy != "") ? 1 : 0
The "count" value depends on resource attributes that cannot be determined
until apply...
This is being referenced by a fairly default configuration of
module "s3" {
  source        = "cloudposse/cloudtrail-s3-bucket/aws"
  version       = "0.23.1"
  name          = "cloudtrail-${random_id.this.hex}"
  force_destroy = true
}
Which is using the latest version. From looking at the variables, they should all be known default values.
Is there anything I could be missing? (before I open up a github issue). Thanks in advance!
Zachover 4 years ago
latest aws provider now supports Bottlerocket for EKS
resource/aws_eks_node_group: Support for BOTTLEROCKET_ARM_64 and BOTTLEROCKET_x86_64 ami_type argument values
Jim Parkover 4 years ago
I wonder if anyone has encountered any solutions that make migrating resources between different terraform workspaces less tedious?
Mohammed Yahyaover 4 years ago
Terraform AWS Provider version 3.64.1
• New Data Source: aws_cloudfront_response_headers_policy (#21620)
• New Data Source: aws_iam_user_ssh_key (#21335)
• New Resource: aws_backup_vault_lock_configuration (#21315)
• New Resource: aws_cloudfront_response_headers_policy (#21620)
• New Resource: aws_kms_replica_external_key (#20533)
• New Resource: aws_kms_replica_key (#20533)
• New Resource: aws_prometheus_alert_manager_definition (#21431)
• New Resource: aws_prometheus_rule_group_namespace (#21470)
Mohammed Yahyaover 4 years ago
• resource/aws_eks_node_group: Support for BOTTLEROCKET_ARM_64 and BOTTLEROCKET_x86_64 ami_type argument values (#21616) @Erik Osterman (Cloud Posse)
Alysonover 4 years ago(edited)
Hi, I'm getting a weird error when trying to create a mysql aurora with module 0.47.2
https://github.com/cloudposse/terraform-aws-rds-cluster/tree/0.47.2
module "rds_cluster_aurora_mysql" {
  source          = "cloudposse/rds-cluster/aws"
  version         = "0.47.2"
  engine          = "aurora"
  cluster_family  = "aurora-mysql5.7"
  cluster_size    = 2
  namespace       = "eg"
  stage           = "dev"
  name            = "db"
  admin_user      = "admin1"
  admin_password  = "Test123456789"
  db_name         = "dbname"
  instance_type   = "db.t2.small"
  vpc_id          = "vpc-xxxxxxx"
  security_groups = ["sg-xxxxxxxx"]
  subnets         = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]
  zone_id         = "Zxxxxxxxx"
}
Error:
module.rds_cluster_aurora_mysql.aws_rds_cluster.primary[0]: Creating...
Error: error creating RDS cluster: InvalidParameterCombination: The Parameter Group test-stage-rds-20211105125821182400000001 with DBParameterGroupFamily aurora-mysql5.7 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora5.6
status code: 400, request id: 44eac3b8-f377-4d45-a2f7-7a3e95c22297
on .terraform/modules/rds_cluster_aurora_mysql/main.tf line 50, in resource "aws_rds_cluster" "primary":
50: resource "aws_rds_cluster" "primary" {
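A hedged reading of that error: engine = "aurora" is the MySQL 5.6-compatible Aurora engine, whose parameter-group family is aurora5.6, while cluster_family = "aurora-mysql5.7" belongs to the aurora-mysql engine, so the two settings disagree. A sketch of the consistent pairings:

```hcl
# engine and cluster_family must come from the same engine line.
# Either keep the 5.6-compatible engine:
#   engine         = "aurora"
#   cluster_family = "aurora5.6"
# or move to the 5.7-compatible engine:
engine         = "aurora-mysql"
cluster_family = "aurora-mysql5.7"
```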
Yoni Leitersdorf (Indeni Cloudrail)over 4 years ago
Did anybody look at Hashicorp’s S1/IPO documentation? Quite interesting to read:
• First of all, it's amazing they created such a big business growing rapidly; kudos to the entire team there. They’re the first company I know of that replicated Red Hat’s model successfully (everyone else has failed, including Docker).
• Vast majority of revenue is from support at very high margin - which means there’s room to compete with Hashicorp on this (offer support for a far lower price).
• Interesting to see their cloud-hosted services were $4m last year, growing 100% year-over-year, and costing a lot more to run than they’re bringing in. That tells me TFC is probably mostly free users and that the business there is still small (a couple of million a year).
• TFE probably a bigger business (under the License line), which is interesting vis-a-vis TFC.
Would love to hear other people’s thoughts on this.
loganover 4 years ago
Hello team! I'm relatively new to terraform (and your slack, so apologies if posting in the wrong channel), but I think I found a (somewhat critical) bug in the terraform-aws-dynamic-subnets module, so wanted to share here. It appears that private_network_acl_id and public_network_acl_id don't behave consistently with the spec. For example, the description of public_network_acl_id says "Network ACL ID that will be added to public subnets". However, I don't see either getting added anywhere. Instead, it seems to just drop both ACLs entirely and prevent creation of ACLs in the module. Am I mistaken? btw the changes were introduced in PR 15. Also, manually attaching the subnet IDs afterwards using the private_subnet_ids and public_subnet_ids outputs seems to enable/disable the custom ACLs on each update, but that could be my configuration - still investigating.
Naty Lazarovichover 4 years ago
Hey guys, I’m getting this error when creating a vpc module . A week ago everything worked without any issues and I didn’t change the configuration.
Error:
Error creating Redshift Subnet Group: ClusterSubnetGroupAlreadyExists: The Cluster subnet group 'my-vpc' already exists.
│ status code: 400, request id: ***********
│
│ with module.network.module.vpc.aws_redshift_subnet_group.redshift[0],
│ on .terraform/modules/network.vpc/main.tf line 530, in resource "aws_redshift_subnet_group" "redshift":
│ 530: resource "aws_redshift_subnet_group" "redshift" {
Configuration:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.63.0"
    }
  }
}
provider "aws" {
  region = "eu-central-1"
}
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  name = "my-vpc"
  cidr = "10.0.0.0/16"
  azs              = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
  public_subnets   = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
  private_subnets  = ["10.0.3.0/24", "10.0.4.0/24", "10.0.5.0/24"]
  redshift_subnets = ["10.0.6.0/24", "10.0.7.0/24", "10.0.8.0/24"]
  enable_nat_gateway = true
  single_nat_gateway = true
}
resource:
https://github.com/terraform-aws-modules/terraform-aws-vpc
Brad McCoyover 4 years ago
Join us tomorrow for The Sydney HashiCorp User Group Monthly Meetup where we will be going over Terraform: https://www.meetup.com/sydney-hashicorp-user-group/events/281535723/
Davidover 4 years ago
Is there any way to force a data source to be looked up only during an apply? I have a case where I want to look up a Vault secret via a data source, and I need to ensure it's picking up the latest secret at apply time, while using a planfile made much earlier.
Tim Irvinover 4 years ago
Hi, I hope this isn't a FAQ. I am trying to use tenant in my labels, but many of the CP modules still have a context.tf using cloudposse/label/null v0.24.1, for example: https://github.com/cloudposse/terraform-aws-named-subnets/blob/master/context.tf -- is that intentional? Is there a workaround I can use to avoid this? I have a module that calls a number of CP modules, and as I hit the ones that haven't been updated, things bomb out trying to use the tenant var.
aimbotdover 4 years ago
Hey Friends. I stood up an EKS stack with this stack: https://github.com/cloudposse/terraform-aws-eks-cluster , I modified it to set up an ALB with https://github.com/cloudposse/terraform-aws-alb-ingress for ingress... I am not sure where to go from here to use this ingress within my stack though. Do yall have any docs you can point me at or something to help me get unstuck? I'd appreciate it greatly.
aimbotdover 4 years ago
👋 hi. I'm slightly dense sometimes and I'm having trouble grokking the note about aws-auth here: https://github.com/cloudposse/terraform-aws-eks-cluster ... I'm currently seeing a situation where my cluster's aws-auth configMap lacks the values set in the terraform for map_additional_iam_roles. I suspect it has to do with what's in the notes in the readme. Can I get an ELI5?
Michael Bottomsover 4 years ago
What's the general consensus on tf config scanning tools. tfsec, checkov, etc... Any of them stand out one way or the other?
rssover 4 years ago(edited)
v1.0.11
1.0.11 (November 10, 2021)
ENHANCEMENTS:
backend/oss: Added support for sts_endpoint (#29841)
BUG FIXES:
config: Fixed a bug in which ignore_changes = all would not work in override files (#29849)
Daniel Huescaover 4 years ago
Hello friends!
quick question, is there any terraform module I can use for creating global aws documentdb clusters? 🤔
We're planning our DR and we wanted to automate infrastructure as much as possible
Matt Gowieover 4 years ago
Interesting project for anyone consuming a lot of modules: https://github.com/keilerkonzept/terraform-module-versions
Mohammed Yahyaover 4 years ago
very cool ^^
Marcin Mrotekover 4 years ago
Hi, I’m using https://github.com/cloudposse/terraform-aws-ecs-web-app/tree/0.65.2 and I'm facing a strange problem. I’m doing it the way that is presented in “without_authentication”.
alb_security_group                              = module.alb.security_group_id
alb_target_group_alarms_enabled                 = true
alb_target_group_alarms_3xx_threshold           = 25
alb_target_group_alarms_4xx_threshold           = 25
alb_target_group_alarms_5xx_threshold           = 25
alb_target_group_alarms_response_time_threshold = 0.5
alb_target_group_alarms_period                  = 300
alb_target_group_alarms_evaluation_periods      = 1
alb_arn_suffix               = module.alb.alb_arn_suffix
alb_ingress_healthcheck_path = "/"
# Without authentication, both HTTP and HTTPS endpoints are supported
alb_ingress_unauthenticated_listener_arns       = module.alb.listener_arns
alb_ingress_unauthenticated_listener_arns_count = 2
# All paths are unauthenticated
alb_ingress_unauthenticated_paths             = ["/*"]
alb_ingress_listener_unauthenticated_priority = 100
error I got:
Error: Invalid count argument
│
│ on .terraform/modules/gateway.alb_ingress/main.tf line 50, in resource "aws_lb_listener_rule" "unauthenticated_paths":
│ 50: count = module.this.enabled && length(var.unauthenticated_paths) > 0 && length(var.unauthenticated_hosts) == 0 ? length(var.unauthenticated_listener_arns) : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform
│ cannot predict how many instances will be created. To work around this, use the -target argument to first
│ apply only the resources that the count depends on.
By chance, do you know what I’m doing wrong here?
Ivi Alex Franco Silvaover 4 years ago
Hi guys. Is it possible to use terraform import with the module https://github.com/cloudposse/terraform-aws-ec2-instance?
Davidover 4 years ago
Hi everyone, I was wondering if it was possible to stop the regex from stripping “/” from the name of cloudwatch log groups, we use these to create virtual directory paths https://github.com/cloudposse/terraform-aws-cloudwatch-logs
Thorsten Behrensover 4 years ago
Hello there! I am looking at forking https://github.com/cloudposse/terraform-aws-ec2-instance-group into terraform-linode-instance-group. I'd love to feed that back into your repo when it's done. Is that desirable from your end, and if so, are there guidelines for making that as smooth as possible?
Grubholdover 4 years ago
Hi folks, I’m trying to use terraform-aws-components to bootstrap an account on an already available organisation. As a first step I went through the account module, but I'm pretty stuck on understanding how it works and how the YAML found in the README can help with specifying the organization_config etc. Any hints or example will be highly appreciated!
Deepak Kosarajuover 4 years ago(edited)
Anyone think this is a bug in terraform-tfe-cloud-infrastructure-automation? Renaming a workspace under a project creates a new workspace with the new name and deletes the workspace with the old name; the expected behavior is an in-place rename of the workspace. I tried renaming it to a new name after the workspace was created, and tf plan/apply deleted the workspace and created a new one, which is not the expected behavior per the API the tfe provider uses, per the following references: go-tfe library used by this update call in tfe-provider.
Saleem Aliover 4 years ago(edited)
Hi Everyone, has anybody tried creating an AWS Aurora global DB with 5 secondary clusters (or at least more than 2 secondary clusters) using the terraform module https://registry.terraform.io/modules/cloudposse/rds-cluster/aws/latest or any related module?
Davidover 4 years ago
I just published a terraform provider for Pulumi Cloud, allowing terraform to directly read pulumi stack outputs, like:
terraform {
  required_providers {
    pulumi = {
      version = "0.0.2"
      source  = "hashicorp.com/transcend-io/pulumi"
    }
  }
}

provider "pulumi" {}

data "pulumi_stack_outputs" "stack_outputs" {
  organization = "transcend-io"
  project      = "some-pulumi-project"
  stack        = "dev"
}

output "version" {
  value = data.pulumi_stack_outputs.stack_outputs.version
}

output "stack_outputs" {
  value = data.pulumi_stack_outputs.stack_outputs.stack_outputs
}
This code has helped my company transition our large terraform codebase to a hybrid model using both terraform and pulumi, and the source is available here: https://github.com/transcend-io/terraform-provider-pulumi
I'll be writing a blog post soon about our strategy in the migration as well.
Matt Gowieover 4 years ago
https://www.hashicorp.com/blog/announcing-terraform-aws-cloud-control-provider-tech-p[…]btIWC6JXhWThnCFKLwW9s4XieG3LKs7cVsMFaJ0r23nJrR4wUJzrB-aLwyHo
Automated Terraform AWS Provider — Will be great once AWS starts to move a lot of their services to Cloud Control.
Mzattover 4 years ago
👋 , does anyone have a clean way to generate outputs/variable files?
rssover 4 years ago
v1.1.0-beta2
1.1.0 (Unreleased)
UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed...
Stephen Tanover 4 years ago
Hi - I have an old PR here which I’ve just got around to fixing. https://github.com/cloudposse/terraform-aws-code-deploy/pull/9 - there is an associated thread which explains the ARN issue. The tagging I hope is self explanatory. https://sweetops.slack.com/archives/CB6GHNLG0/p1633728090408800?thread_ts=1633617559.399300&cid=CB6GHNLG0 cc @RB who last cast his/her eye on the code
aimbotdover 4 years ago
Hey comrades. I've been running the EKS stack provided by your TF template. It's been pretty successful until now. My pipeline appears to be failing when it hits the terraform plan path. The error being:
module.eks_cluster.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
╷
│ Error: configmaps "aws-auth" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth[0],
│ on .terraform/modules/eks_cluster/auth.tf line 132, in resource "kubernetes_config_map" "aws_auth":
│ 132: resource "kubernetes_config_map" "aws_auth" {
│
╵
I've confirmed that my auth is valid for at least the state resource; it's pulling the remote state from the right account. It should be using the same credentials to deploy the cluster.
aimbotdover 4 years ago
I'm not sure of the right path forward to resolve this.
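"system:anonymous" usually means the kubernetes provider reached the API server without any credentials at all. One common wiring for EKS (a sketch with hypothetical data source names; not necessarily how the cloudposse module configures it) is the exec plugin, which fetches a fresh token on every run:

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

  # Fetch a short-lived token via the AWS CLI at plan/apply time.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.this.name]
  }
}
```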
Erik Osterman (Cloud Posse)over 4 years ago
Aside from the HashiCorp LMS, what are the best resources for beginners to learn terraform? (e.g. udemy courses, et al)
Mzattover 4 years ago(edited)
So, how are you all handling bootstrapping roles for CICD in your projects? A service account with admin permissions in multiple projects? I'm taking an approach of scoping the role per service per repo basically. I'm using a cf template to create a role, then assuming that role in my pipeline. Just wondering if that's a valid approach or if anyone has anything else they do.
Henry Carterover 4 years ago(edited)
Am I right in thinking that the terraform-aws-transit-gateway module doesn't add any routes for the tgw to the subnet_ids within vpc_attachments? Does the cloudposse tgw module handle this?
geertnabout 4 years ago
With Terraform 1.0.10 I always see TRACE level logs in /tmp for core & providers regardless of the TF_LOG/TF_LOG_CORE/TF_LOG_PROVIDER setting. With tf 1.1b2 I don't see them. Anyone else seeing this behaviour?
Erez Weissabout 4 years ago
Hi everyone!
Where can I ask a question about one of your modules? 🙂
Mzattabout 4 years ago
Hey all 👋 I was wondering how my approach sounds. I have a repo with a few related services. I would like to run a basic CICD workflow with gh actions. I'm using OIDC to authenticate into our AWS account. I'm running into the typical chicken/egg problem with iac, I'd like to control the iam role used in the CICD workflow, but need to create it prior to triggering a workflow.
My idea was to create a module for the iam role, and bootstrap that resource from my local machine (same backend as cicd) by using the target resource ability: tf plan --target=module.iam_role and tf apply --target=module.iam_role. My thought is this will bootstrap that resource, so my cicd can take over from there. Does this sound like a sane approach? I was going to ask during office hours this week 😁
Matt Gowieabout 4 years ago
Does anyone know of a reliable way to determine at runtime within terraform code if it’s the first time Terraform is being applied? Without using an annoying first_apply variable, of course.
lorenabout 4 years ago
maybe terraform state list is empty?
Matt Gowieabout 4 years ago
Has anyone built rotating AWS AMI IDs on a time schedule? As in, every 3 months I want to update my AMI ID to the latest? A colleague and I are working through trying to do this with time_rotating, and we're continually hitting walls due to Terraform's lack of capability to compare date values and store / trigger updating values outside of locals.
Anand Gautamabout 4 years ago(edited)
I am using the firewall manager module to provision (https://github.com/cloudposse/terraform-aws-firewall-manager). To enable firehose, there is a var firehose_enabled which creates a firehose kinesis destination and stores the logs in an S3 bucket. Underneath, it's using an S3 module (https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) to provision. I am getting an error (probably because the bucket name exists) when provisioning the firewall manager module:
Error: Error creating S3 bucket: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'ap-southeast-1'
│ with module.firewall-manager.module.firehose_s3_bucket[0].aws_s3_bucket.default[0],
│ on .terraform/modules/firewall-manager.firehose_s3_bucket/main.tf line 5, in resource "aws_s3_bucket" "default":
│ 5: resource "aws_s3_bucket" "default" {
Can I declare an S3 module and feed the S3 bucket name from the S3 module into the firewall manager module? How would I go about doing that? Thanks in advance for your guidance, much appreciated!
Steve Wade (swade1987)about 4 years ago
does anyone know if it's possible to output the GCP service account key as JSON?
Steve Wade (swade1987)about 4 years ago
i was trying to do the below to no avail ...
output "github-action-json" {
  value     = base64decode(google_service_account_key.ci_tools["github-actions"].private_key)
  sensitive = true
}
Grummfyabout 4 years ago
any idea for a nice way to handle schema creation with terraform on RDS MySQL?
I found tons of terraform to do this for postgresql but nothing nice for mysql
Grummfyabout 4 years ago
for now the only solution that I have seen is using the mysql cli from an ec2/ecs instance that has access to the rds instance, with a hash of an sql file (see https://github.com/hashicorp/terraform/issues/10740#issuecomment-267224310 or https://stackoverflow.com/a/59928898/7015902 )
DaniC (he/him)about 4 years ago
@Grummfy just a clarification: are you interested only in the schema creation but not the DDL objects inside it, is that correct? If so, how are you planning to maintain your DDL objects?
Grummfyabout 4 years ago
for the data structure, it will be deployed by the application, but the infrastructure must provide the schema
Grummfyabout 4 years ago
so yes, only schema
Grummfyabout 4 years ago
ideally, it would be to create a user with specific privileges for each schema
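A hedged sketch of doing that natively in Terraform with the community MySQL provider's resources (mysql_database / mysql_user / mysql_grant; the endpoint, credentials, and names below are hypothetical, and the provider's maintenance status is worth checking):

```hcl
provider "mysql" {
  endpoint = "my-rds-endpoint:3306" # hypothetical RDS endpoint
  username = "admin"
  password = var.admin_password
}

# One schema plus a dedicated user whose privileges are scoped to that schema.
resource "mysql_database" "app" {
  name = "app"
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_password
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = mysql_database.app.name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
```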
Stephen Tanabout 4 years ago
@RB - you recently merged: https://github.com/cloudposse/terraform-aws-code-deploy/pull/9 - thank you for the prompt merge! However, I’ve found an issue with the tagging for which you asked that this line was set: https://github.com/cloudposse/terraform-aws-code-deploy/blob/master/main.tf#L192
The problem is that when I run terraform, we get the value of ec2_tag_set being NULL - i.e. not being set at all:
for_each = length(var.ec2_tag_set) > 0 ? [] : var.ec2_tag_set
The logic can never produce a non-empty result. We have other variables checked using:
for_each = var.deployment_style == null ? [] : [var.deployment_style]
so I'm going to create another PR which sets this instead - will you do me the kindness of approving etc? Thank you!
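The inverted condition is the bug: with a non-empty set the expression returns [], and with an empty set it returns the (empty) set, so neither branch ever yields elements. A sketch of the flipped check, mirroring the deployment_style pattern:

```hcl
# Iterate only when ec2_tag_set actually has elements
# (branches swapped relative to the current code).
for_each = length(var.ec2_tag_set) > 0 ? var.ec2_tag_set : []
```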
JeremyPabout 4 years ago(edited)
Hey there - I'm trying to use the new SLO module here and passing this yaml for it. But my plan is saying force_delete, groups, message, query, thresholds and validate are required. It suggests to me it is trying to use type metric instead of monitor, but I don't know why?
prod/test-slo:
  name: Test SLO
  type: monitor
  description: Test SLO
  monitor_ids: []
  tags: ["managedby:terraform", "env:prod"]
YSabout 4 years ago
Hi everyone, I was wondering if anyone has tried setting up vpc peering using https://registry.terraform.io/modules/cloudposse/vpc-peering-multi-account/aws/latest?
I couldn’t set it up as it’s giving me the below error:
Error: query returned no results. Please change your search criteria and try again
│
│ with module.vpc_peering_cross_account.data.aws_route_table.accepter[0],
│ on .terraform/modules/vpc_peering_cross_account/accepter.tf line 67, in data "aws_route_table" "accepter":
│ 67: data "aws_route_table" "accepter" {
This is what I have in my main.tf:
module "vpc_peering_cross_account" {
  source = "git::https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account.git?ref=tags/0.17.1"
  namespace = "ex"
  stage     = "dev"
  name      = "vpc_peering_cross_account"
  requester_aws_assume_role_arn             = "arn:aws:iam::xxxx:role/BURoleForCrossAccountVpcPeering"
  requester_region                          = "eu-west-2"
  requester_vpc_id                          = "vpc-04dcfe9aaaxxxxxx"
  requester_allow_remote_vpc_dns_resolution = "true"
  requester_subnet_tags                     = { "Name" = "vpc-central-subnet-3" }
  accepter_aws_assume_role_arn              = "arn:aws:iam::yyyy:role/BURoleForCrossAccountVpcPeering"
  accepter_region                           = "eu-west-1"
  accepter_vpc_id                           = "vpc-0a28ca6a26dyyyyy"
  accepter_allow_remote_vpc_dns_resolution  = "true"
  accepter_subnet_tags                      = { "Name" = "vpc-private-data-1" }
  context = module.this.context
}
Any hints or advice will be highly appreciated! Thanks!
DaniC (he/him)about 4 years ago
for folks who are not using TFE or Spacelift what do you use as private module registry?
I'm aware that 99% are using GH as a source for their modules; however, due to the lack of support for dynamic versions - i.e. source = "git::https://git@github.com/org/tf-db.git?ref=$VERSION" is a nightmare to keep updating, especially since you are in the world of poly-repo (each child module with its own git repo). I'm also aware of https://github.com/outsideris/citizen, but it feels like it's not ready for prod usage; thoughts?
DaniC (he/him)about 4 years ago
different thread:
in one of the talks (maybe office hours, I forgot) it was mentioned that it is better to separate the lambda zip creation from the deployment of the function, by having the "Build" phase push it to an S3 bucket while the TF lambda module just consumes it from the S3 bucket.
Here is my q:
the CD part of TF which will deploy the new function, how will it know which S3 bucket/object to use? To be specific, should the S3 objects be prefixed with a version number which can be consumed in the various env deployments?
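One common answer is yes: the build phase uploads to a well-known artifacts bucket under a versioned key, and the CD pipeline passes that version into Terraform as a variable. A minimal sketch (bucket name, function name, and runtime are all hypothetical):

```hcl
variable "app_version" {
  description = "Artifact version produced by the build phase (e.g. a git tag or commit SHA)"
  type        = string
}

resource "aws_lambda_function" "app" {
  function_name = "my-app"
  role          = aws_iam_role.lambda.arn # assumed to exist elsewhere
  handler       = "index.handler"
  runtime       = "python3.9"

  # The build phase uploads the zip to this versioned key; TF never builds it.
  s3_bucket = "my-team-lambda-artifacts"
  s3_key    = "my-app/${var.app_version}/function.zip"
}
```

Because a new version changes `s3_key`, each `terraform apply` with a new `app_version` updates the function to the new artifact.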
Jermyducanabout 4 years ago
I want to build a new AWS ECS cluster with Cloud Posse templates. How can I get my ECS cluster building code, along with the required Terraform modules, onto my local machine?
Should I clone the entire Cloud Posse terraform repository?
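Generally there is no need to clone anything: each Cloud Posse module lives in its own repo, and referencing it by its git source lets `terraform init` download it into `.terraform/modules` for you. A sketch (the ref tag and label values are illustrative):

```hcl
# `terraform init` fetches this module automatically; no manual clone needed.
module "ecs_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-cluster.git?ref=tags/0.4.1"

  namespace = "eg"
  stage     = "dev"
  name      = "app"
}
```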
Mohammed Yahyaabout 4 years ago
Adnanabout 4 years ago(edited)
Hi All,
did anyone of you experience the following terraform auth errors intermittently in the CI/CD pipeline?
Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
Please see https://registry.terraform.io/providers/hashicorp/aws
for more information about providing credentials.
Error: WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400
It feels like the problem is network issues inside kubernetes but I am not sure yet.
Any idea why this could be happening?
Grummfyabout 4 years ago
did you read the link provided?
Łukasz Pelczarabout 4 years ago
Hi all, I wanted to use
https://github.com/cloudposse/terraform-aws-iam-role
to create a policy and add it to the role.
This is my code:
data "aws_iam_policy_document" "test-cloudwatch-put-metric-data" {
statement {
effect = "Allow"
resources = ["*"]
actions = ["cloudwatch:PutMetricData"]
}
}
module "test-instance-role" {
source = "git::https://github.com/cloudposse/terraform-aws-iam-role.git?ref=tags/0.13.0"
enabled = true
namespace = "myproject"
stage = "dev"
name = "test-instance-role"
policy_description = "test policy"
role_description = "test permissions role"
policy_documents = [
data.aws_iam_policy_document.test-cloudwatch-put-metric-data.json
]
}
But after making the plan I get an error:
Error: Unsupported argument
on .terraform/modules/test-instance-role/main.tf line 17, in data "aws_iam_policy_document" "assume_role_aggregated":
17: override_policy_documents = data.aws_iam_policy_document.assume_role.*.json
An argument named "override_policy_documents" is not expected here.
Error: Unsupported argument
on .terraform/modules/test-instance-role/main.tf line 33, in data "aws_iam_policy_document" "default":
33: override_policy_documents = var.policy_documents
An argument named "override_policy_documents" is not expected here.
Where am I making a mistake?
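For reference: the module code looks fine; `override_policy_documents` was only added to the `aws_iam_policy_document` data source in AWS provider v3.28.0, so this error usually means an older provider version is being selected. A sketch of constraining the provider (the exact minimum version should be verified against the provider changelog):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # override_policy_documents requires a provider that supports it (>= 3.28.0)
      version = ">= 3.28.0"
    }
  }
}
```

After updating the constraint, re-run `terraform init -upgrade` so the newer provider is actually downloaded.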
lorenabout 4 years ago
well that's a thing apparently... https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/
Alex Jurkiewiczabout 4 years ago
Has anyone migrated an existing organisation into control tower? How did it go? The process seems extremely scary to me