Shrivatsan Narayanaswamy about 4 years ago
Hi team, I would like to add the network ACL ID to the outputs of https://github.com/cloudposse/terraform-aws-dynamic-subnets, so that I could use it to add ACL rules to block and unblock IPs. So I would like to know the procedure for making contributions to Cloud Posse modules.
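A minimal sketch of what such outputs might look like inside the module, assuming the module's network ACL resources are named private and public (check the module's main.tf for the actual resource names):
output "private_network_acl_id" {
  description = "ID of the network ACL attached to the private subnets"
  value       = join("", aws_network_acl.private[*].id)
}

output "public_network_acl_id" {
  description = "ID of the network ACL attached to the public subnets"
  value       = join("", aws_network_acl.public[*].id)
}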
Simon about 4 years ago
We currently don't really use Terraform dynamically. We just use a template to deploy our instances, and we destroy them via the console at the moment.
I want to upgrade our controller from v0.12 to v1.1, and going through the gradual upgrades doesn't really make sense since we don't really care about their states. Would uninstalling 0.12 and installing 1.1 pose any issues, like any syntax changes that would require our deployment template to change too?
Slackbot about 4 years ago
Cloud Posse, LLC has joined this channel by invitation from SweetOps.
erik about 4 years ago
@erik has joined the channel
Justin Smith about 4 years ago
I'm attempting to incorporate the Cloud Posse terraform-aws-iam-s3-user module into a module that I'm writing. After I add it and attempt to run a sample scenario to try it out, Terraform throws the error: The argument "region" is required, but was not set. However, region is, in fact, set in the AWS provider, and if I comment out the terraform-aws-iam-s3-user module in the code, the error goes away. I'm mystified.
Leon Garcia about 4 years ago (edited)
hello, has anyone seen this issue with the terraform-aws-ec2-client-vpn module? Basically at the end of the apply I get InvalidClientVpnEndpointId.NotFound Endpoint <id> does not exist. I confirm that the endpoint was created. I am using an existing VPC, so I am just passing the information of my current VPC and the client CIDRs. I was not able to track down the issue, but it seems related to this resource:
data "awsutils_ec2_client_vpn_export_client_config" "default" {
Mike Crowe about 4 years ago
Hi folks, quick tfstate-backend questions:
• I'm assuming you run thru the process to initialize a new backend for each environment, right? So run once for root, once for dev, once for prod?
• When you create a new component, do you need to copy the backend.tf file from the tfstate-backend folder into the new component, or is there a more direct process? I think, but I'm not sure, I'm seeing actual local state in a terraform.tfstate.d folder
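A minimal sketch of the per-component backend block this usually implies, assuming the S3/DynamoDB backend created by tfstate-backend (bucket, table, and key below are placeholders; each component gets its own key):
terraform {
  backend "s3" {
    bucket         = "acme-prod-terraform-state"      # placeholder bucket name
    key            = "my-component/terraform.tfstate" # unique key per component
    region         = "us-east-1"
    dynamodb_table = "acme-prod-terraform-state-lock" # placeholder lock table
    encrypt        = true
  }
}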
PePe Amengual about 4 years ago
What is the latest on Terraform and Lambda canary deploys with 10% traffic shifting? Has anyone implemented this?
rss about 4 years ago (edited)
v1.1.5
1.1.5 (February 02, 2022)
ENHANCEMENTS:
backend/s3: Update AWS SDK to allow the use of the ap-southeast-3 region (#30363)
BUG FIXES:
cli: Fix crash when using autocomplete with long commands, such as terraform workspace select (#30193)...
Erik Osterman (Cloud Posse) about 4 years ago
I started a huddle fun 😃
Erik Osterman (Cloud Posse) about 4 years ago
@Mike Crowe re: AFT: https://learn.hashicorp.com/tutorials/terraform/aws-control-tower-aft
aimbotd about 4 years ago
Hey friends. If this is true, should all the appropriate pieces be built in place to support the cluster-autoscaler service? https://github.com/cloudposse/terraform-aws-eks-node-group#input_cluster_autoscaler_enabled. I have mine set to true but I am not actually seeing it in action. I had to go and deploy the chart and set up the role to use the OIDC connection.
Sam LEm about 4 years ago (edited)
Hey friends. Starting our GitOps journey. Would love to set up AWS-SSO but struggling to understand how it ties into terraform. Does one typically set up AWS-SSO by hand and stick to TF for everything else (IAM roles and resources)? I did see that Cloud Posse has a module named terraform-aws-sso but wasn’t sure how mainstream it is
Sam LEm about 4 years ago
On a side note, does one generally operate terraform through a public/private key owned by a root AWS account? Or is it a better practice to create tf-specific credentials by hand after the fact and use those?
Mike Crowe about 4 years ago
Can anybody point me to some more details regarding remote-state? I can't seem to get it to work correctly:
• I can download the state from S3 and see the output I want from the other module (the remote S3 state has the wildcard_certificate_arn output I want)
• NOTE: I have to specify profile in backend.tf.json (due to using Control Tower)
• Error message is:
│ Error: Unsupported attribute
│ on main.tf line 19, in module "saml_cognito":
│ 19: certificate = module.dns_delegated.wildcard_certificate_arn
│ │ module.dns_delegated is a object, known only after apply
│ This object does not have an attribute named "wildcard_certificate_arn".
• My remote-state.tf is simply:
module "dns_delegated" {
  source    = "cloudposse/stack-config/yaml//modules/remote-state"
  version   = "0.22.0"
  component = "dns-delegated"
}
I think this is borked up because I have to use profile in my backend specification, but I'm not positive.
Brad Alexander about 4 years ago
I'm trying to use https://github.com/cloudposse/terraform-aws-datadog-integration and I'd like to set up integrations with multiple aws accounts. I don't see any specific mention of it in the docs, do multiple instances of this module play well together? anyone have an example?
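A hedged sketch of the usual multi-account pattern: instantiate the module once per account, each bound to a provider alias that assumes a role in that account (the role ARN and module inputs below are illustrative, not the module's exact variables):
provider "aws" {
  alias  = "account_a"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole" # placeholder
  }
}

module "datadog_integration_account_a" {
  source = "cloudposse/datadog-integration/aws"
  # version = "x.x.x"

  providers = {
    aws = aws.account_a
  }

  # ...inputs for this account...
}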
msharma24 about 4 years ago
Hi there,
Neovim with terraformls auto completions and tflint checking as you type resource configuration
Here is my editor config.
https://mukeshsharma.dev/2022/02/08/neovim-workflow-for-terraform.html
Bryan Dady about 4 years ago
Hi @Erik Osterman (Cloud Posse)
I just discovered this Inc Mgmt / Opsgenie module and am excited to get it set up for our team.
I haven't yet found any docs or a description of how to think about or use the existing_users (existing_users) vs. users.yaml (users).
Josh B. about 4 years ago (edited)
Can we get an example for ordered cache on module https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/tree/master/examples/complete? I could be wrong, but it seems to require some info that should be optional. I could also very well be doing it wrong 😞 I think mainly the function ARN, since I am not using any sort of lambda or cloud function, so I'm confused a tad.
Grubhold about 4 years ago
Hi folks, any idea why I'm suddenly getting this error with the WAF module? It has been working fine and no changes were made:
│ Error: InvalidParameter: 1 validation error(s) found.
│ - minimum field size of 20, AssociateWebACLInput.ResourceArn.
│
│
│ with module.dk_waf.aws_wafv2_web_acl_association.default[1],
│ on modules/aws-waf/main.tf line 1, in resource "aws_wafv2_web_acl_association" "default":
│ 1: resource "aws_wafv2_web_acl_association" "default" {
loren about 4 years ago
I am getting the sense that the v4.0.0 release of the AWS provider is imminent... Based on comments on issues I'm following, and PR tags... Thought it might be a good time to preview the upgrade guide... https://github.com/hashicorp/terraform-provider-aws/blob/main/website/docs/guides/version-4-upgrade.html.md
Kirill I. about 4 years ago
Hello everybody. How can I fix this:
Error: query returned no results. Please change your search criteria and try again
│
│ with module.vpc_peering_cross_account.data.aws_route_table.accepter[0],
│ on .terraform/modules/vpc_peering_cross_account/accepter.tf line 67, in data "aws_route_table" "accepter":
│ 67: data "aws_route_table" "accepter" {
Zeph about 4 years ago
Hi everyone, getting a strange error when trying to import a redis cluster already made with the module to another state (trying to consolidate our environment) and seeing this:
terragrunt import 'module.redis.aws_elasticache_replication_group.default' example-redis
│ Error: Invalid index
│
│ on .../elasticache_redis_cluster.redis/main.tf line 80, in locals:
│ 80: elasticache_member_clusters = module.this.enabled ? tolist(aws_elasticache_replication_group.default.0.member_clusters) : []
│ ├────────────────
│ │ aws_elasticache_replication_group.default is empty tuple
Any ideas?
Vignesh about 4 years ago
New Terraform AWS provider version 4.0 was released with some breaking changes.
For example, a few aws_s3_bucket attributes were made read-only.
https://github.com/hashicorp/terraform-provider-aws/releases/tag/v4.0.0
Don about 4 years ago
Since 4.0 I'm using https://github.com/cloudposse/terraform-aws-s3-bucket/releases/tag/0.47.0, and even with s3_replication_enabled disabled I get the following error: 168: for_each = local.s3_replication_rules == null ? [] : local.s3_replication_rules (This object does not have an attribute named "s3_replication_rules")
Don about 4 years ago
has anyone else experienced this?
Grummfy about 4 years ago
warning, this could cause some issues for some people: https://github.com/hashicorp/terraform-provider-aws/issues/23110
michael sew about 4 years ago
Certification Question: I'm in Canada (Vancouver). Appears my certification-provider is PSI. Are there any physical test-center locations I can take the terraform associate exam, or is it online-proctor-only?
Rhys Davies about 4 years ago (edited)
hey guys, is there any way to validate a variable based on input from another block? I know it's not explicitly allowed in a validation block, but I was wondering if there are any patterns for saying something in Terraform effectively similar to this sentence: "If this other argument to my terraform module is true, then this string I am evaluating must not be null or empty"?
Rhys Davies about 4 years ago
To give an example of what I'm doing: There are some circumstances I don't want to attach a load balancer to my ECS service in the module I'm writing, but if I choose that I want to I would like to validate that the container name and port are not null
Rhys Davies about 4 years ago (edited)
I can't seem to figure out how to write that fairly simple bit of logic. It's not a huge problem for me; my code works without this validation, and I have some docs written to explain it, but I would like to explore options for constraining the states my module can be configured in.
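One commonly used workaround sketch, since a variable's validation block can only reference that variable: push the cross-variable check into a local that fails the plan (variable names here are hypothetical; on Terraform 1.2+ a lifecycle precondition is the cleaner option):
locals {
  # If the check fails, tobool() cannot convert the message string, so Terraform
  # aborts the plan and surfaces the message in the error output.
  validate_lb_inputs = (
    var.load_balancer_enabled && (var.container_name == null || var.container_name == "")
  ) ? tobool("container_name must be set when load_balancer_enabled is true") : true
}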
Alyson about 4 years ago
Hi,
I'm provisioning an AWS MSK (Kafka) cluster using the "terraform-aws-msk-apache-kafka-cluster" module, version 0.8.3.
I noticed that when I set "client_tls_auth_enabled = true", the cluster is destroyed and created again on every apply, even without me having made any modification to the Terraform code.
Sorry, I'm not that good at English.
Brent Garber about 4 years ago
Is there a way to basically do AWS IPAM, but just in TF? I.e., when I expand the list of services, they automagically grab the next non-overlapping IP range for their subnets?
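A small sketch of the pure-Terraform approach using cidrsubnets() to carve consecutive, non-overlapping ranges out of one supernet (the CIDRs and service names are placeholders):
locals {
  supernet = "10.0.0.0/16"

  # Each "4" adds 4 bits to the /16 prefix, producing consecutive /20s.
  # Appending another entry allocates the next range without moving existing ones.
  service_cidrs = cidrsubnets(local.supernet, 4, 4, 4)

  services = {
    api     = local.service_cidrs[0] # 10.0.0.0/20
    workers = local.service_cidrs[1] # 10.0.16.0/20
    data    = local.service_cidrs[2] # 10.0.32.0/20
  }
}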
rss about 4 years ago (edited)
v1.1.6
1.1.6 (February 16, 2022)
BUG FIXES:
cli: Prevent complex uses of the console-only type function. This function may only be used at the top level of console expressions, to display the type of a given value. Attempting to use this function in complex expressions will now display a diagnostic error instead of crashing. (#30476)...
Eric Berg about 4 years ago
Do I remember correctly that Cloud Posse modules sometimes pull in config from YAML and drop it into the root dir as auto.tfvars.json files? Where do you create those files?
This seems to solve the problem of using variables, for typing and validation of YAML that's pulled into locals.
Zachary Loeber about 4 years ago
https://github.com/kube-champ/terraform-operator <-- A tf operator that is not simply a front for using tf cloud, could be worth poking at
Matt Gowie about 4 years ago
@Jeremy G (Cloud Posse) — Many thanks on these solid migration notes — They were much appreciated. https://github.com/cloudposse/terraform-aws-eks-node-group/blob/780163dacd9c892b64b988077a994f6675d8f56d/MIGRATION.md
Maximiliano Moretti about 4 years ago
👋 Hello, team!
Maximiliano Moretti about 4 years ago
Is somebody using this module? It is broken on my side.
Maximiliano Moretti about 4 years ago
module "website" {
source = "cloudposse/s3-website/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
namespace = "eg"
stage = "prod"
name = "app"
hostname = "docs.prod.cloudposse.org"
deployment_arns = {
"arn:aws:s3:::principal1" = ["/prefix1", "/prefix2"]
"arn:aws:s3:::principal2" = [""]
}
}
Maximiliano Moretti about 4 years ago
This object does not have an attribute named "enable_glacier_transition".
IK about 4 years ago
Anyone have a way of mapping a “friendly” name to an AWS account ID? Thinking of some sort of central repo with a simple YAML file that would be maintained as we add/remove accounts. The idea would be that users of our TF code (of which we have many modules) can specify the target account by name (as opposed to the account ID). Or am I overthinking this and just have them reference the account ID instead?
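A sketch of the central-map idea (the file name and layout are hypothetical):
locals {
  # accounts.yaml might look like:
  #   dev:  "111111111111"
  #   prod: "222222222222"
  account_ids = yamldecode(file("${path.module}/accounts.yaml"))
}

# Callers then reference accounts by friendly name, e.g.:
# role_arn = "arn:aws:iam::${local.account_ids["prod"]}:role/terraform"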
Almondovar almost 4 years ago
hi team, how can I hide the password from being asked in terraform?
│ Error: provider.aws: aws_db_instance: : "password": required field is not set
or, can I put something temporary into the code and somehow enable "change password on first login"?
Thanks!
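One hedged option: generate a throwaway initial password in code (it will be stored in state) instead of prompting for it, then rotate it after first login:
resource "random_password" "db" {
  length  = 20
  special = false
}

resource "aws_db_instance" "this" {
  # ...other arguments...
  password = random_password.db.result
}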
Hasini almost 4 years ago
Hi everyone! I have created the MSK cluster using the aws_msk_cluster resource, but after creating it, "unauthenticated access" is not enabled; only IAM access is enabled. Please tell me how we can enable "unauthenticated access" using the Terraform resource.
Grubhold almost 4 years ago
Hi folks, I recently deployed AWS Backup plans for DocumentDB and DynamoDB using the https://github.com/cloudposse/terraform-aws-backup module. It's working great and deployed everything as needed. I just want to understand how the process of restoring would work. Is that something that needs to be done from the console itself in a disaster scenario, etc.? If I restore from the console, how will the module behave when running terraform apply again? Would appreciate your two cents about this.
Hasini almost 4 years ago
Hello everyone. I'm trying to create the MSK cluster with Terraform. I'm able to create it, but "unauthenticated access" is not enabled. I'm sharing the script; please let me know where I need to set "unauthenticated access" to true. Thanks in advance.
resource "aws_msk_configuration" "config" {
kafka_versions = ["2.6.2"]
name = var.cluster_name
description = "Manages an Amazon Managed Streaming for Kafka configuration"
server_properties = <<PROPERTIES
auto.create.topics.enable=true
default.replication.factor=3
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.partitions=1
num.replica.fetchers=2
replica.lag.time.max.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
zookeeper.session.timeout.ms=18000
PROPERTIES
}
resource "aws_msk_cluster" "example" {
depends_on = [
aws_msk_configuration.config
]
cluster_name = var.cluster_name
kafka_version = var.kafka_version
number_of_broker_nodes = var.number_of_broker_nodes
broker_node_group_info {
instance_type = var.broker_instance_type
ebs_volume_size = var.broker_volume_size
client_subnets = var.subnet_ids
security_groups = var.associated_security_group_ids
}
encryption_info {
encryption_at_rest_kms_key_arn = var.encryption_at_rest_kms_key_arn
encryption_in_transit {
client_broker = var.client_broker
in_cluster = "true"
}
}
configuration_info {
arn = aws_msk_configuration.config.arn
revision = aws_msk_configuration.config.latest_revision
}
enhanced_monitoring = var.enhanced_monitoring
open_monitoring {
prometheus {
jmx_exporter {
enabled_in_broker = false
}
node_exporter {
enabled_in_broker = false
}
}
}
logging_info {
broker_logs {
cloudwatch_logs {
enabled = false
}
firehose {
enabled = false
}
s3 {
enabled = false
}
}
}
tags = {
Environment = var.Environment
}
}
output "zookeeper_connect_string" {
value = aws_msk_cluster.example.zookeeper_connect_string
}
output "bootstrap_brokers_tls" {
description = "TLS connection host:port pairs"
value = aws_msk_cluster.example.bootstrap_brokers_tls
}
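A hedged sketch of where unauthenticated access would be declared; verify the client_authentication block against the aws_msk_cluster docs for your AWS provider version, since the unauthenticated argument needs a reasonably recent provider:
resource "aws_msk_cluster" "example" {
  # ...existing arguments as above...

  client_authentication {
    # Allow clients to connect without SASL/TLS authentication.
    unauthenticated = true
  }
}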
resource "aws_msk_configuration" "config" {
kafka_versions = ["2.6.2"]
name = var.cluster_name
description = "Manages an Amazon Managed Streaming for Kafka configuration"
server_properties = <<PROPERTIES
auto.create.topics.enable=true
default.replication.factor=3
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.partitions=1
num.replica.fetchers=2
replica.lag.time.max.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=<tel:104857600|104857600>
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
zookeeper.session.timeout.ms=18000
PROPERTIES
}
resource "aws_msk_cluster" "example" {
depends_on = [
aws_msk_configuration.config
]
cluster_name = var.cluster_name
kafka_version = var.kafka_version
number_of_broker_nodes = var.number_of_broker_nodes
broker_node_group_info {
instance_type = var.broker_instance_type
ebs_volume_size = var.broker_volume_size
client_subnets = var.subnet_ids
security_groups = var.associated_security_group_ids
}
encryption_info {
encryption_at_rest_kms_key_arn = var.encryption_at_rest_kms_key_arn
encryption_in_transit {
client_broker = var.client_broker
in_cluster = "true"
}
}
configuration_info {
arn = aws_msk_configuration.config.arn
revision = aws_msk_configuration.config.latest_revision
}
enhanced_monitoring = var.enhanced_monitoring
open_monitoring {
prometheus {
jmx_exporter {
enabled_in_broker = false
}
node_exporter {
enabled_in_broker = false
}
}
}
logging_info {
broker_logs {
cloudwatch_logs {
enabled = false
}
firehose {
enabled = false
}
s3 {
enabled = false
}
}
}
tags = {
Environment = var.Environment
}
}
output "zookeeper_connect_string" {
value = aws_msk_cluster.example.zookeeper_connect_string
}
output "bootstrap_brokers_tls" {
description = "TLS connection host:port pairs"
value = aws_msk_cluster.example.bootstrap_brokers_tls
}
Grubhold almost 4 years ago (edited)
Hi folks, when I'm importing a specific module such as
terraform import module.dynamodb_document_table.aws_dynamodb_table.default disaster-pre-document
I'm getting the following error from another module, and the DynamoDB table is failing to import. How do we get around all the for_each when importing a module? Both modules are defaults from CloudPosse.
Error: Invalid for_each argument
│
│ on modules/aws-vpc-endpoints/main.tf line 29, in module "interface_endpoint_label":
│ 29: for_each = local.enabled ? data.aws_vpc_endpoint_service.interface_endpoint_service : {}
│ ├────────────────
│ │ data.aws_vpc_endpoint_service.interface_endpoint_service will be known only after apply
│ │ local.enabled is true
The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the
-target argument to first apply only the resources that the for_each depends on.
Manolo Scardino almost 4 years ago
Hey everyone, how are you?
I'm trying to experiment with something in Terraform that I've never done before: duplicating the number of resource groups based on a list. It would be something like this:
locals {
  vm_host = yamldecode(file("./my-variables.yaml"))["virtual_machines"]
  vm_host_map = flatten([for vm in local.vm_host :
    {
      "environment"  = vm.environment
      "location"     = vm.location
      "name"         = vm.name
      "tech_type"    = vm.tech_type
      "network_name" = vm.networks[*].loadbalancers
      "count"        = vm.count
    }
    #if contains(i.environment, terraform.workspace)
  ])
}
resource "azurerm_resource_group" "rg" {
  count    = local.vm_host_map[0].count
  name     = format("rg-%s-%s-%03s", local.vm_host_map[0].tech_type, local.vm_host_map[0].environment, count.index + 1)
  location = local.vm_host_map[0].location
}
virtual_machines:
- name: docker-internal
tech_type: docker
type: Standard_F32s_v2
count: 10
location: westeurope
environment:
- prod
- dev
The thing is that when I try to create the resources with multiple environments I get the following error message:
Error: "name" may only contain alphanumeric characters, dash, underscores, parentheses and periods
│
│ with azurerm_resource_group.rg[0],
│ on main.tf line 44, in resource "azurerm_resource_group" "rg":
│ 44: name = format("rg-%s-%v-%03s", local.vm_host_map[0].tech_type, local.vm_host_map[0].environment, count.index+1)
Can anyone please try to help me?
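The error is most likely because environment is a list (e.g. ["prod", "dev"]), so format() renders it with brackets and commas, which Azure rejects in a resource group name. A possible fix sketch is to expand the flatten over environments as well, so each (vm, environment) pair becomes its own entry with a scalar environment:
locals {
  vm_host = yamldecode(file("./my-variables.yaml"))["virtual_machines"]

  vm_host_map = flatten([
    for vm in local.vm_host : [
      for env in vm.environment : {
        environment = env
        location    = vm.location
        name        = vm.name
        tech_type   = vm.tech_type
        count       = vm.count
      }
    ]
  ])
}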
Chandler Forrest almost 4 years ago (edited)
I'm currently using terragrunt to generate providers in my root terragrunt.hcl file. I define a default aws provider and an aliased aws provider. I want to call another module that has a required_providers block included. This results in duplicate required provider blocks, which throws an error. I found links to an overriding behavior that allows me to merge terraform blocks, but in this case this also fails because the aliased provider in my terragrunt.hcl isn't referenced by the module I'm referencing.
https://stackoverflow.com/questions/66770564/using-terragrunt-generate-provider-block-causes-conflicts-with-require-providers
What is the appropriate pattern for handling required_provider blocks in the root module when they are also defined in child module resources?
Example terragrunt.hcl:
generate "provider" {
path = "provider_override.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "${local.provider_version}"
}
}
}
provider "aws" {
region = "${local.aws_region}"
assume_role {
role_arn = "role.this"
}
}
provider "aws" {
region = "${local.aws_region}"
alias = "that"
assume_role {
role_arn = "role.that"
}
}
EOF
}
Error message:
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
│ Error: Missing base provider configuration for override
│
│ on provider_override.tf line 26:
│ 26: provider "aws" {
│
│ There is no aws provider configuration with the alias "that". An override
│ file can only override an aliased provider configuration that was already
│ defined in a primary configuration file.
Almondovar almost 4 years ago
hi guys, I want to spin up an EC2 with 100 GB storage. I used the aws_volume and aws_volume_attachment resources, and it created a second volume that I don't want. How can I have just one volume, but 100 GB, please?
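A likely fix, sketched with a placeholder AMI variable: size the root volume on the instance itself instead of attaching a separate EBS volume:
resource "aws_instance" "this" {
  ami           = var.ami_id # hypothetical variable
  instance_type = "t3.medium"

  root_block_device {
    volume_size = 100 # GiB
  }
}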
Rhys Davies almost 4 years ago (edited)
Hi all! Got an auth question around Terraform and AWS. I would like to restrict people on my team from being able to terraform apply|destroy, and most probably plan, from their local machines (though it would be really great if people could still use plan locally; it's not a dealbreaker right now), so that people use our automated CI (env0). BUT I would like these same users to still have the same level of access that they do now in the AWS console, so they can try stuff out and administer other concerns as needs be.
Rhys Davies about 4 years ago
How would I go about doing this? RBAC so that only our automated CI users can mutate the Terraform State that’s in S3? Or maybe do something with DynamoDB?
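One hedged pattern: leave console access alone and instead deny state mutation to every principal except the CI role, via a policy on the state bucket (and similarly on the DynamoDB lock table). The bucket name and role ARN below are placeholders:
data "aws_iam_policy_document" "state_bucket_policy" {
  statement {
    sid       = "DenyStateWritesExceptCI"
    effect    = "Deny"
    actions   = ["s3:PutObject", "s3:DeleteObject"]
    resources = ["arn:aws:s3:::my-tf-state-bucket/*"] # placeholder bucket

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    condition {
      test     = "ArnNotEquals"
      variable = "aws:PrincipalArn"
      values   = ["arn:aws:iam::123456789012:role/env0-ci"] # placeholder CI role
    }
  }
}

resource "aws_s3_bucket_policy" "state" {
  bucket = "my-tf-state-bucket" # placeholder bucket
  policy = data.aws_iam_policy_document.state_bucket_policy.json
}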
Wilson Mar almost 4 years ago
Glad to be part of this. I've been working on a way to learn Terraform quickly and surely using Terragoat:
https://wilsonmar.github.io/terraform/#adoptionlearning-strategy