Mohammed Yahyaalmost 5 years ago
anyone using https://github.com/localstack/localstack/releases/tag/v0.12.7 with Terraform for offline testing?
Steve Wade (swade1987)almost 5 years ago
does anyone know how you can run terraform state show on module.gitlab_repository_webhook.gitlab_project_hook.this[15] ?
RBalmost 5 years ago
terraform state show "module.gitlab_repository_webhook.gitlab_project_hook.this[15]" should work
Steve Wade (swade1987)almost 5 years ago
i was missing the quotes 🤦 thanks @RB
RBalmost 5 years ago
it's not obvious but any time you use square brackets, you'll need quotations
RBalmost 5 years ago
best to default to using quotations all the time
Steve Wade (swade1987)almost 5 years ago
makes sense
Steve Wade (swade1987)almost 5 years ago(edited)
in tf 0.13 can you do a for_each on a module?
Steve Wade (swade1987)almost 5 years ago
i want to loop around each account in a list and execute a module
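For reference, a minimal sketch of a module for_each in 0.13 (the module source and account list here are illustrative, not from this thread):
variable "account_ids" {
  type    = set(string)
  default = ["111111111111", "222222222222"]
}

module "account_baseline" {
  source   = "./modules/account-baseline" # hypothetical local module
  for_each = var.account_ids

  account_id = each.value
}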
Alex Jurkiewiczalmost 5 years ago
if I have an aws_iam_policy_document and a JSON policy statement, what's the best way to append the latter to the former?
Alex Jurkiewiczalmost 5 years ago
I'm thinking about doing something horrible with jsonencode and jsondecode
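One way this is often handled (a sketch, assuming the extra statements are already a complete policy document in JSON, e.g. a hypothetical var.extra_policy_json) is the source_json argument of aws_iam_policy_document, which merges the given JSON with the statements declared inline:
data "aws_iam_policy_document" "combined" {
  source_json = var.extra_policy_json # hypothetical variable holding the JSON policy

  statement {
    sid       = "BasePolicy"
    actions   = ["s3:GetObject"]
    resources = ["*"]
  }
}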
Nagaalmost 5 years ago
Does anyone know how to work with dynamodb backup and restore using terraform ? Or by using aws backup with terraform?
Pavelalmost 5 years ago
hey all
Pavelalmost 5 years ago
I started using the /terraform-aws-ecs-codepipeline module yesterday and I ran into a couple of issues; the first, more immediate one, is that when the Deploy to ECS stage runs it will set the desired count to 3 containers. I am not setting this anywhere in my configuration, I actually have it set to 1 as this is for dev. I am running EC2 to host my containers.
Pavelalmost 5 years ago
basically it's just stuck here till it times out
Pavelalmost 5 years ago
is there some setting in my ecs service that im missing?
Pavelalmost 5 years ago
i think it may have to do with ec2 capacity
Pavelalmost 5 years ago(edited)
the desired count being mysteriously set to 3 needs to be solved
RBalmost 5 years ago(edited)
create a ticket with a Minimal, Reproducible Example and then someone will investigate eventually
Pavelalmost 5 years ago
this is probably upstream from this package tbh
Pavelalmost 5 years ago
i've reviewed all the tf code and i don't see anything that would set a desired count at all. the module doesn't control my service or anything and the Deploy stage part of the code is pretty minimal.
Jeff Dykealmost 5 years ago(edited)
Curious if anyone has done a TG -> TF migration. I'm about to embark on my own, and if you have any info to share it would be appreciated. I started with TF, but it's been a couple of years, so mostly I'm trying to get my head around replacing the abstractions; for one, I'll be using https://github.com/cloudposse/terraform-provider-utils/blob/main/examples/data-sources/utils_stack_config_yaml/data-source.tf as a complementary solution to TG going up two levels by default. Thanks for any input. (Edited due to response in thread, posted the wrong repo)
Steve Wade (swade1987)almost 5 years ago
does anyone know of a module or anything that can lock down the default security group in all regions within an account?
Garethalmost 5 years ago
Hi All,
Can anybody share their wisdom on the best way to rotate aws_iam_access_key's?
Do most people taint the resource / module that created them, or do you have two resources like
aws_iam_access_key.keyone with a count var.rotate = true
aws_iam_access_key.keytwo with a count var.rotate = true
with an output of the above equally switching between the two?
Once applied you would then roll your environment and then set keyone to false?
When it comes to rolling them again in the future you'd state move keytwo to keyone and repeat?
Garethalmost 5 years ago(edited)
The issue I see with tainting it is stopping the removal of the current key before we've rolled our environment to get the new key. Guess you could target the creation or delete it from the state before applying but both options feel fudgy.
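A rough sketch of the two-resource toggle idea (variable and user names are illustrative); keeping both keys present during the rollover avoids the delete-before-roll problem:
resource "aws_iam_access_key" "one" {
  count = var.create_key_one ? 1 : 0
  user  = aws_iam_user.app.name # assumed existing user resource
}

resource "aws_iam_access_key" "two" {
  count = var.create_key_two ? 1 : 0
  user  = aws_iam_user.app.name
}

output "active_access_key_id" {
  value = var.active_key == "one" ? aws_iam_access_key.one[0].id : aws_iam_access_key.two[0].id
}
Rotation would then be: create the new key, flip active_key, roll the environment, and only afterwards set the old create_key_* flag to false.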
Mike Robinsonalmost 5 years ago
Hello,
In the module eks-iam-role, is there a recommended way for passing in an AWS managed policy ? An example use case would be creating an OIDC role for the VPC CNI add-on as described here. Currently all I can think of is something like:
data "aws_iam_policy" "cni_policy" {
arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}
module "vpc_cni_oidc_role" {
source = "cloudposse/eks-iam-role/aws"
version = "x.x.x"
[...a bunch of vars...]
aws_iam_policy_document = data.aws_iam_policy.cni_policy.policy
}Steve Wade (swade1987)almost 5 years ago
does anyone know what IAM permissions / RBAC is required to view workloads in the EKS cluster via the AWS console? I can't for the life of me find it documented anywhere!
Bart Coddensalmost 5 years ago
Hi all, I am wondering how you are managing the S3 state files
Bart Coddensalmost 5 years ago
it's a pain that terraform does not allow variables here
Alex Jurkiewiczalmost 5 years ago
today's dumb terraform code. I had some Lambda functions defined as a map for for_each purposes, like
locals {
  lambda_functions = {
    foo = { memory = 256, iam_policies = tolist([...]) }
    bar = { memory = 128 }
  }
}
and I wanted to loop over any custom iam policies defined to add them to the execution role. This is the simplest for_each loop I could write that worked:
dynamic "inline_policy" {
  for_each = length(lookup(local.lambda_functions[each.key], "iam_policies", [])) > 0 ? each.value.iam_policies : []
  content {
    name   = "ExtraPermissions${md5(jsonencode(inline_policy.value))}"
    policy = jsonencode(inline_policy.value)
  }
}
You'd think for_each = lookup(local.lambda_functions[each.key], "iam_policies", []) would work, but it doesn't, because you can't build a default value with the same type as the iam_policies values from your data type.
Sometimes, I wish Terraform never tried to add strict typing 😩
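An alternative that can sidestep the lookup() default-type problem (a sketch; try() is available from Terraform 0.12.20 onward):
dynamic "inline_policy" {
  for_each = try(each.value.iam_policies, []) # falls back to an empty list when the key is absent
  content {
    name   = "ExtraPermissions${md5(jsonencode(inline_policy.value))}"
    policy = jsonencode(inline_policy.value)
  }
}
try() avoids needing a lookup default of the exact same type as the real values.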
CedricTalmost 5 years ago
Hello everybody, nice to meet you! I'm Cedric and I just joined.
CedricTalmost 5 years ago
My first question is about the terraform-aws-rds-cluster module. I see the input “enabled_cloudwatch_logs_exports” we can use to select which logs to send to CloudWatch, but I don't find any input for the Log Group to which the logs will be sent. Any clue?
CedricTalmost 5 years ago
Thanks.
Pavelalmost 5 years ago
is there a way to integrate "cloudposse/ecs-codepipeline/aws" module with SSM Parameter Store to feed the build image environment vars
Mohammed Yahyaalmost 5 years ago
FYI
terraform test is WIP >> https://github.com/hashicorp/terraform/pull/27873
Mohammed Yahyaalmost 5 years ago
I created a VSCode Terraform IaC Extension Pack to help with developing Terraform templates and modules, please test and give feedback https://marketplace.visualstudio.com/items?itemName=mhmdio.terraform-extension-pack
Maarten van der Hoefalmost 5 years ago
@Andriy Knysh (Cloud Posse) my greetings. I'm checking out the terraform-aws-backup module. Also want to keep a cross-organisational copy. Would it make sense to use the module on both accounts or use a simple aws_backup_vault resource on the receiving end ?
Pavelalmost 5 years ago
I noticed that the github_webhook module has an inline provider; this prevents any parent modules from being used in a for_each, for example "cloudposse/ecs-codepipeline/aws" is affected by this. I was not able to spin up multiple codepipelines in a loop unless I removed all the github webhook stuff from it. A separate issue is that the whole webhook thing doesn't work with my org for some reason: when it tries to post to the github api it's missing the org, so it ends up looking like https://api.github.com/repo//repo-name/hooks where the // should be the org. Maybe there is some documentation about required parameters, but if you test the examples with 0.23 and terraform 0.14, it won't work.
Justin Seiseralmost 5 years ago
Anyone familiar with the https://github.com/cloudposse/terraform-aws-efs module? I can not figure out how it wants me to pass https://github.com/cloudposse/terraform-aws-efs/blob/master/variables.tf#L12
Tomekalmost 5 years ago
Given the map:
map = {
  events      = ["foo"]
  another_key = "bar"
}
How would you go about appending "baz" to the list in the events key so that you end up with:
new_map = {
  events      = ["foo", "baz"]
  another_key = "bar"
}
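A minimal sketch of one way to do this (assuming the original map lives in local.map):
locals {
  new_map = merge(local.map, {
    events = concat(local.map.events, ["baz"])
  })
}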
kumar kalmost 5 years ago
Hello
kumar kalmost 5 years ago
Can someone please provide the upgrade steps from 0.12.29>0.13.6?
Takanalmost 5 years ago
hi guys,
anyone knows how to create "trusted advisor" in terraform?
Garethalmost 5 years ago
Good Evening, I need to create an API Gateway => Lambda => DynamoDB setup for the first time. As far as I can tell each of these items sits outside of a VPC, and while I can see there are VPC endpoints for Lambda and DynamoDB, do I actually need to use them? Is this one of those times where you can do it either way, and one is more secure than the other but has double the operating costs?
My Lambda only needs to talk to the DynamoDB table and nothing else. All requests come from the public Internet to the API.
I'm used to my application always being on a private subnet, does that concept exist in this scenario?
The tutorials I've watched on this from Hashicorp and AWS don't really mention VPCs; they do in an announcement about endpoint support. Which is why I think I'm overthinking this.
Thanks for your time,
vickenalmost 5 years ago
Hi, I'm also having a similar "eks-cluster" module issue like in these threads. Any leads as to what might be going on?
https://sweetops.slack.com/archives/CB6GHNLG0/p1612683973314300
Matt Gowiealmost 5 years ago
Amazon RabbitMQ support just shipped in the AWS Provider — https://github.com/hashicorp/terraform-provider-aws/pull/16108#event-4416150216
Tiago Casinhasalmost 5 years ago
Any resource on how to add a shared LB to multiple Beanstalk environments?
I don't see such options on the example in the cloudposse terraform github or the official provider's
Bart Coddensalmost 5 years ago
Hi all, as I want to migrate existing infrastructure to a module-based configuration, what's the best approach? Importing the existing configuration does not seem to be the best idea
Alex Jurkiewiczalmost 5 years ago
Looking for a reviewer 🙏 https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/136
Yoni Leitersdorf (Indeni Cloudrail)almost 5 years ago
Anyone with experience in using tools (like parliament) in CI/CD to catch overly privileged IAM policies?
Martin Helleralmost 5 years ago
Hi guys, anyone can give me a hand? When initially deploying cloudposse/alb/aws with cloudposse/ecs-alb-service-task/aws I am always getting: The target group with targetGroupArn ... does not have an associated load balancer.
On the second run it works. I guess there is a depends_on missing in cloudposse/alb/aws or am I missing sth? Thx
Steve Wade (swade1987)almost 5 years ago(edited)
can anyone help me with an issue with the upstream RDS module
module "db" {
  source  = "terraform-aws-modules/rds/aws"
  version = "2.20.0"
I am trying to upgrade from 5.6 to 5.7 and getting the following error ...
Error: Error Deleting DB Option Group: InvalidOptionGroupStateFault: The option group 'de-qa-env-01-20210119142009861600000003' cannot be deleted because it is in use.
	status code: 400, request id: e9a3c5b5-61fa-4648-bc95-183fba0fa32b
however the instance has been upgraded fine
Williealmost 5 years ago
I'm having trouble mounting an EFS volume in an ECS Fargate container. The container fails to start with
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: mount.nfs4: Connection timed out : unsuccessful EFS utils command execution; code: 32
Terraform config in thread 🧵
Steve Wade (swade1987)almost 5 years ago
does anyone know how to get RDS version upgrades working (e.g. 5.6 to 5.7) using the upstream RDS module?
Steve Wade (swade1987)almost 5 years ago
i can't seem to figure it out as it tries to delete the old option group which is referenced by the snapshot before performing the upgrade
Steve Wade (swade1987)almost 5 years ago
i have been unable to get TF to apply cleanly without deleting the existing snapshot prior to the upgrade which is just crazy as i have no way of rolling back if the upgrade borks out
melissa Jenneralmost 5 years ago
Module: cloudposse/elasticache-redis/aws. I got error below. Can someone help?
Error: Unsupported argument
on .terraform/modules/redis/main.tf line 92, in resource "aws_elasticache_replication_group" "default":
92: multi_az_enabled = var.multi_az_enabled
An argument named "multi_az_enabled" is not expected here.
module "redis" {
source = "cloudposse/elasticache-redis/aws"
availability_zones = data.terraform_remote_state.vpc.outputs.azs
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
enabled = var.enabled
name = var.name
tags = var.tags
allowed_security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
allowed_cidr_blocks = ["20.10.0.0/16"]
subnets = data.terraform_remote_state.vpc.outputs.elasticache_subnets
cluster_size = var.redis_cluster_size #number_cache_clusters
instance_type = var.redis_instance_type
apply_immediately = true
automatic_failover_enabled = true
multi_az_enabled = true
engine_version = var.redis_engine_version
family = var.redis_family
cluster_mode_enabled = false
replication_group_id = var.replication_group_id
at_rest_encryption_enabled = var.at_rest_encryption_enabled
transit_encryption_enabled = var.transit_encryption_enabled
cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled
cluster_mode_num_node_groups = var.cluster_mode_num_node_groups
snapshot_retention_limit = var.snapshot_retention_limit
snapshot_window = var.snapshot_window
dns_subdomain = var.dns_subdomain
cluster_mode_replicas_per_node_group = var.cluster_mode_replicas_per_node_group
parameter = [
{
name = "notify-keyspace-events"
value = "lK"
}
]
}
RBalmost 5 years ago
have you tried removing multi_az_enabled ?
RBalmost 5 years ago
it does also look like a bug in the module 🤔
RBalmost 5 years ago
pretty weird because the arg multi_az_enabled exists on both the module level and on the aws_elasticache_replication_group resource
melissa Jenneralmost 5 years ago
Removed multi_az_enabled. #multi_az_enabled = true.
But, still got error.
Error: Unsupported argument
on .terraform/modules/redis/main.tf line 92, in resource "aws_elasticache_replication_group" "default":
92: multi_az_enabled = var.multi_az_enabled
An argument named "multi_az_enabled" is not expected here.
[terragrunt] 2021/03/09 13:54:01 Hit multiple errors:
melissa Jenneralmost 5 years ago
And, I do need multi_az_enabled. Regardless, I need to be able to provision Redis. By removing multi_az_enabled, I am still not able to provision Redis.
Hankalmost 5 years ago
Hi all, I am looking at terraform-aws-eks modules and have a question: what's the difference between the cloudposse and AWS ones? What's the purpose of writing our own EKS modules rather than using the open source EKS modules? 🙂
Steve Wade (swade1987)almost 5 years ago
hey @Hank i would personally swerve the AWS one, it's trying to be everything for everyone and in my personal opinion needs a major refactor
Hankalmost 5 years ago
Thanks Steve.
Hankalmost 5 years ago
What I'm considering is how we handle upgrades or new feature integration from AWS. We need to add it into our own EKS modules after a new feature comes out, right?
Steve Wade (swade1987)almost 5 years ago
at present i manage my own modules for EKS that work with bottlerocket
Gene Fontanillaalmost 5 years ago
Can anyone recommend a managed node group module for EKS?
msharma24almost 5 years ago
Hi - how do you run a target apply on a resource when using Terraform Cloud?
Takanalmost 5 years ago
hi all
how we can revert to a previous state?
Bart Coddensalmost 5 years ago
HI all, the documentation is a bit unclear on this module:
Bart Coddensalmost 5 years ago
it says: subscribers: (email is an option but is unsupported, see below).
Bart Coddensalmost 5 years ago
but then no extra info, does this refer to:
# The endpoint to send data to, the contents will vary with the protocol. (see below for more information)
endpoint_auto_confirms = bool
Steve Wade (swade1987)almost 5 years ago
is there an easy way to obtain the difference in hours between two dates?
Steve Wade (swade1987)almost 5 years ago(edited)
i want to provide an expiry date to a cert module in the format yyyy-mm-dd
Steve Wade (swade1987)almost 5 years ago
and then work out the validity in hours
Bart Coddensalmost 5 years ago
hmmm terraform can confuse me with this:
Bart Coddensalmost 5 years ago
data "aws_sns_topic" "cloudposse-hosting" {
name = "cloudposse-hosting-isawesome"
}
alarm_actions = ["${data.aws_sns_topic.cloudposse-hosting.arn}"]Bart Coddensalmost 5 years ago
then it complains with:
Bart Coddensalmost 5 years ago
Template interpolation syntax
Bart Coddensalmost 5 years ago
what's the best way to format this ?
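The usual fix for that warning (a sketch) is to drop the interpolation-only wrapper and reference the attribute directly:
alarm_actions = [data.aws_sns_topic.cloudposse-hosting.arn]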
Ron Basumallikalmost 5 years ago
Hi I'm trying to use cloudposse's elastic beanstalk module and getting this error
Error: Invalid count argument
  on .terraform/modules/elastic_beanstalk.elastic_beanstalk_environment.dns_hostname/main.tf line 2, in resource "aws_route53_record" "default":
   2: count = module.this.enabled ? 1 : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
Anyone seen this before?
David Napieralmost 5 years ago
Can someone point me in the right direction regarding the best use of context?
melissa Jenneralmost 5 years ago
I have two VPCs. One is the blue VPC (vpc_id = vpc-0067ff2ab41cc8a3e), the other is the shared VPC (vpc_id = vpc-076a4c26ec2217f9d). VPC peering connects these two VPCs. I provision MariaDB in the shared VPC. But, I got the errors below.
Error: Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-076a4c26ec2217f9d and the EC2 security group is in vpc-0067ff2ab41cc8a3e
status code: 400, request id: 75954d06-375c-4680-b8fe-df9a67f2574d
Below is the code. Can someone help?
module "master" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.master_identifier
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
storage_type = var.storage_type
storage_encrypted = var.storage_encrypted
name = var.mariadb_name
username = var.mariadb_username
password = var.mariadb_password
port = var.mariadb_port
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
maintenance_window = var.maintenance_window_master
backup_window = var.backup_window_master
multi_az = true
tags = {
Owner = "MariaDB"
Environment = "blue-green"
}
enabled_cloudwatch_logs_exports = ["audit", "general"]
subnet_ids = data.terraform_remote_state.vpc-shared.outputs.database_subnets
create_db_option_group = true
apply_immediately = true
family = var.family
major_engine_version = var.major_engine_version
final_snapshot_identifier = var.final_snapshot_identifier
deletion_protection = false
parameters = [
{
name = "character_set_client"
value = "utf8"
},
{
name = "character_set_server"
value = "utf8"
}
]
options = [
{
option_name = "MARIADB_AUDIT_PLUGIN"
option_settings = [
{
name = "SERVER_AUDIT_EVENTS"
value = "CONNECT"
},
{
name = "SERVER_AUDIT_FILE_ROTATIONS"
value = "7"
},
]
},
]
}
module "replica" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.replica_identifier
replicate_source_db = module.master.this_db_instance_id
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
username = ""
password = ""
port = var.mariadb_port
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
maintenance_window = var.maintenance_window_replica
backup_window = var.backup_window_replica
multi_az = false
backup_retention_period = 0
create_db_subnet_group = false
create_db_option_group = false
create_db_parameter_group = false
major_engine_version = var.major_engine_version
}
Erik Osterman (Cloud Posse)almost 5 years ago
Is anyone using Terraform to manage and provision ECS Fargate + AWS CodeDeploy and doing database migrations? We're using GitHub Actions as our CI platform to build the docker image for a Rails app, then using a continuous delivery platform to deploy the terraform (spacelift). I'm curious how to run the rake migrations?
Coming from an EKS world, we'd just run them as a Job , but with ECS, there's no such straight equivalent (scheduled tasks don't count).
Ideas considered:
• Using the new AWS Lambda container functionality ( but still not sure how we'd trigger it relative to the ECS tasks)
• Using a CodePipeline, but also not sure how we'd trigger it in our current continuous delivery model, since right now, we're calling terraform to deploy the ECS task and update the container definition. I don't believe there's any way to deploy a code pipeline and have it trigger automatically.
• Using Step Functions (haven't really used them before). Just throwing out buzz words. 😉
• Using ECS task on a cron schedule (but we have no way to pick the appropriate schedule)
Mohammed Yahyaalmost 5 years ago
terraform-provider-aws v3.32.0 is out now with new resource ACM Private CA, and more support for AWS managed RabbitMQ https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.32.0
msharma24almost 5 years ago
Any one using Terraform Cloud can share their Git workflow and repo structure?
I RTFM for TF Cloud and GitHub and what Hashi suggests is to either use a persistent branch for each stage - dev stage branch and prod stage branch - or else use folders for each env stage which translate to TF Cloud workspaces and apply everything when merged to the 'main' branch. Those workflows don't appear DRY or easy to change IaC with, to me.
Is there a better workflow anyone is using?
TED Vortexalmost 5 years ago
i rotate multiple configurations through the backend config
Bart Coddensalmost 5 years ago
what do you prefer, remote states or data sources ?
Bart Coddensalmost 5 years ago
I tend to go for data sources when they are available
RBalmost 5 years ago
data sources
RBalmost 5 years ago
technically you can use a data source for a terraform remote state too
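For reference, a minimal sketch of that data source (bucket and key names are illustrative):
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-tfstate-bucket"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# then reference e.g. data.terraform_remote_state.vpc.outputs.vpc_id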
Bart Coddensalmost 5 years ago
in a cloudwatch alarm, thinking how to best implement this:
Bart Coddensalmost 5 years ago
dimensions = {
  InstanceId = "i-cloudposseisawesome"
}
Bart Coddensalmost 5 years ago
a remote state pull is the best option here I guess because the data source is a bit messy
Zachalmost 5 years ago
messy in what way
joshmyersalmost 5 years ago
remote state pull has some less than ideal side effects e.g. state versioning not being backwards compat
RBalmost 5 years ago
what does the data source of the instance id look like
Bart Coddensalmost 5 years ago
ah now they don't seem to discourage this, in the past they did
RBalmost 5 years ago
you're not going to show us, are you ?
RBalmost 5 years ago
what a tease 😛
Bart Coddensalmost 5 years ago
hehe, cannot find the reference, I used it in the past but it was a bit messy. I should retry
patrykkalmost 5 years ago
Hi. I would like to use terraform-aws-cloudwatch-flow-logs for Terraform >=0.12.0 so I pulled branch 0.12/master from the git repo. I get multiple warnings about interpolation syntax. I checked the other branches and all of them use the old interpolation syntax “${var.something}“. Is there any branch with updated interpolation (terraform 0.12.x) for that module? I can do it myself but there's no sense if that is already done and I am just blind 🙂
bkalmost 5 years ago(edited)
Hi folks, I wouldn't normally post so soon after filing a bug but the github bug template suggested joining this Slack 🙂 Please shout at me if this was bad etiquette.
Anyone run into an issue where the s3 bucket creation doesn't respect the region you set in the provider? https://github.com/cloudposse/terraform-aws-tfstate-backend/issues/88
Bart Coddensalmost 5 years ago
any keybase experts in the group ? We are migrating to more teamwork in terraform configurations. If I use module: https://github.com/cloudposse/terraform-aws-iam-user and I configure the login profile, the console password is encrypted as a base64 key with my own encryption key in keybase. In the workflow I decrypt the key and store it in a password vault. If I leave the company, it's best that my co's taint the resource and recreate it with their own key ?
LTalmost 5 years ago
Hi All, got a silly question, I’ve deployed the https://github.com/cloudposse/terraform-aws-jenkins into AWS environment, but I can’t seemed to find the URL to access jenkins server, I tried Route53 DNS name with port 8080 and 80 in the URL, nothing seemed to work. Could anymore point me how to access the jenkins server?
Alex Jurkiewiczalmost 5 years ago
I create some Lambda functions like this:
resource aws_lambda_function main {
  for_each = local.functions
  ...
}
Is it possible to add dependencies so these functions are created in serial, or so they depend on each other?
Steve Wade (swade1987)almost 5 years ago(edited)
does anyone have an example of tf variable validation to make sure a date is in the format YYYY-MM-DD ?
is the below valid?
variable "expiry_date" {
  description = "The date you wish the certificate to expire."
  type        = string
  validation {
    condition     = regex("(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)")
    error_message = "The expiry_date value must be in the format YYYY-MM-DD."
  }
}
}Steve Wade (swade1987)almost 5 years ago
i fixed ☝️ using ...
variable "expiry_date" {
description = "The date you wish the certificate to expire."
type = string
validation {
condition = length(regexall("(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)", var.expiry_date)) > 0
error_message = "The expiry_date value must be in the format YYYY-MM-DD."
}
}Florian SILVAalmost 5 years ago
Hello guys ! I joined recently this Slack since I'm starting to use the CloudPosse module https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.
I'm very satisfied with the module but I think a feature is missing. I've opened an issue for it and would be glad to discuss and help push it if I'm doing things right.
OliverSalmost 5 years ago(edited)
In case anyone is interested, I published a module on terraform registry that provides an alternative method of integrating provisioned state with external tools like helm and kustomize: https://github.com/schollii/terraform-local-gen-files. API is still alpha, although I have made use of it in a couple projects and I really like the workflow it supports. So any feedback welcome!
Erik Osterman (Cloud Posse)almost 5 years ago
TIL: The terraform (cli) does some interesting caching.
Error: Failed to download module
Could not download module "pr-16696-acme-migrate"
(pr-16696-acme-migrate.tf.json:3) source code from
"git::<https://github.com/acme/repo.git?ref=>....": subdir ".terraform" not found
1. Terraform downloads all the module sources from git to .terraform/
2. The first clone is always a deep clone, with all files (including all dot files)
3. The next time terraform encounters a module source and there's a "cache hit" on the local filesystem, it does a copy of all the files, but ignores all dot files
4. If you (like we did) happen to have a .terraform directory with terraform code for a micro service repo, this "dot file" was ignored.
5. Renaming .terraform to terraform resolved the problem.
Ashish Srivastavaalmost 5 years ago
Hi, I'm facing some issues with an output value even after using a depends_on block
Ashish Srivastavaalmost 5 years ago
I'm provisioning privatelink on mongoatlas, and require a connection string, following their github example I created my script but it fails at the output's end.
Jonathan Chapmanalmost 5 years ago(edited)
I'm working on upgrading Terraform from 0.12 to 0.13 and it is telling me that it will make the following change. Also, I'm upgrading the AWS provider to >= 3
  # module.redirect_cert.aws_acm_certificate_validation.cert[0] must be replaced
-/+ resource "aws_acm_certificate_validation" "cert" {
      certificate_arn = "arn:aws:acm:us-east-1:000:certificate/b55ddee7-8d98-4bf2-93eb-0029cb3e8929"
    ~ id              = "2020-10-28 18:31:37 +0000 UTC" -> (known after apply)
    ~ validation_record_fqdns = [ # forces replacement
        + "_2b63a2227feb97338346b0920e49818b.xxx.com",
        + "_423e90cf36285adac5ee4213289e73ab.xxx.com",
      ]
    }
The validation records exist in both AWS and the terraform state, but not in the aws_acm_certificate_validation. I've read the documentation for upgrading the AWS provider to 3 and they mention it should be ok.
I'm uncertain what will happen if I apply this. Can anyone help confirm what will happen if I do apply this change? My biggest concern is that it doesn't do anything to my cert.
Steve Wade (swade1987)almost 5 years ago
i am trying to perform client auth on nginx ingress controller using a CA, server cert and client client created via Terraform
Steve Wade (swade1987)almost 5 years ago
does anyone know how i can get the server cert using tls_locally_signed_cert to include the CA in the chain?
Brij Salmost 5 years ago
Does anyone here use TFE/TFC internally? How do you manage module versions in Github and test new releases?
Brij Salmost 5 years ago
I just upgraded a terraform module to TF13 by running terraform 0.13upgrade . I created a versions.tf file with the following content:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    helm = {
      source = "hashicorp/helm"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
  required_version = ">= 0.13"
}
When I publish this to TFE, I get the following error:
Error: error loading module: Unsuitable value type: Unsuitable value: string required (in versions.tf line 3)
I'm not sure what this error alludes to, I've checked other public terraform modules with the same file and I don't notice anything different 🤔
rssalmost 5 years ago(edited)
v0.15.0-beta2
UPGRADE NOTES:
The output of terraform validate -json has been extended to include a code snippet object for each diagnostic. If present, this object contains an excerpt of the source code which triggered the diagnostic. Existing fields in the JSON output remain the same as before. (#28057)
ENHANCEMENTS:
config: Improved type...
Tomekalmost 5 years ago
What's the best way to have terraform start tracking an s3 bucket that was created in the console (and has data in it already)? The terraform has a definition for the s3 bucket but is currently erroring because of BucketAlreadyOwnedByYou
Ryanalmost 5 years ago(edited)
Anyone know how I can update a parameter for an existing object? I need to modify this:
obj = {
  one = {
    two = {
      foo = bar
    }
  },
  three = "four"
}
Into this:
obj = {
  one = {
    two = {
      foo = bar,
      biz = baz
    }
  },
  three = "four"
}
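A minimal sketch of one way to do this with nested merge() calls (assuming the object lives in local.obj):
locals {
  new_obj = merge(local.obj, {
    one = merge(local.obj.one, {
      two = merge(local.obj.one.two, { biz = "baz" })
    })
  })
}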
OliverSalmost 5 years ago(edited)
Hey sometimes I get asked why I prefer TF over CF (cloudformation). I'm curious what others' reasons are. Mine, after using CF a couple months (so not nearly as much as terraform which been using almost 3 years):
• CF difficult to modularize (nesting doesn't cut it and IIRC nesting is discouraged)
• CF has clunky template language
• Planning is shallow and it is often difficult to know why something will be changed
• Can get stuck in messed up state eg upgrade failed then rollback failed too
• Infra upgrade-as-a-transaction (atomic, all or nothing) just feels weird
• Having to load templates to s3 is annoying
Could probably find more but that's all that comes to mind just now.
Takanalmost 5 years ago
anyone knows how to fix the following error ?
Failed to load state: Terraform 0.14.5 does not support state version 4, please update.
Bart Coddensalmost 5 years ago(edited)
Here terraform confuses me a bit, I source a module like this:
module "global-state" {
source = "../../modules/s3state-backend"
profile = "cloudposse-rocks"
customername = "cloudposseisawesome"
}
module "global-state" {
source = "../../modules/s3state-backend"
profile = "cloudposse-rocks"
customername = "cloudposseisawesome"
}
Bart Coddensalmost 5 years ago
I need to pass the values of the variables in main terraform file like this, is there a more handy way to do this ?
Bart Coddensalmost 5 years ago
because the values are variable, like cloudposse-rocksforever for example
Jonalmost 5 years ago
There was a really well put together document on Cloudposse's GitHub a little while back that talked about Terraform modules. In a gist, it basically said "we will do our best to make a well developed module but in the end you might need to add your own secret sauce to make it work well for your use case." I thought I had bookmarked it but I guess not. If anyone knows what readme I was referring to and has the link handy, I'd really appreciate it if you posted it! Until then, I'll keep looking around for it.
joshmyersalmost 5 years ago
Anybody else hit this? KMS key policy re ordering - https://github.com/hashicorp/terraform-provider-aws/issues/11801
lorenalmost 5 years ago
default_tags are coming in v3.33.0 as an attribute of the aws provider... https://github.com/hashicorp/terraform-provider-aws/pull/17974
melissa Jenneralmost 5 years ago
How to code terraform properly so that it can provision the security groups I manually created?
At AWS console, I manually provisioned the security rules below for ElasticSearch. There are three VPCs. Transit gateway connects them. ElasticSearch is installed in VPC-A.
Type Protocol Port range Source
All traffic All All 40.10.0.0/16 (VPC-A)
All traffic All All 20.10.0.0/16 (VPC-B)
All traffic All All 30.10.0.0/16 (VPC-C)
Outbound rules:
Type Protocol Port range Destination
All traffic All All 0.0.0.0/0
But, the terraform code below is not able to provision the above security groups.
resource "aws_security_group" "shared-elasticsearch-sg" {
name = var.name_sg
vpc_id = data.terraform_remote_state.vpc-A.outputs.vpc_id
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [data.terraform_remote_state.vpc-A.outputs.vpc_cidr_block,
data.terraform_remote_state.vpc-B.outputs.vpc_cidr_block,
data.terraform_remote_state.vpc-C.outputs.vpc_cidr_block]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = var.name_sg
}
}
module "elasticsearch" {
source = "git::<https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1>"
security_groups = [aws_security_group.shared-elasticsearch-sg.id,
data.terraform_remote_state.vpc-A.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc-A.outputs.vpc_id
......
}
The above code provisions the security rules below:
Inbound rules:
Type Protocol Port range Source
All TCP TCP 0 - 65535 sg-0288988f38d2007be / shared-elasticSearch-sg
All TCP TCP 0 - 65535 sg-0893dfcdc1be34c63 / default
Outbound rules:
Type Protocol Port range Destination
All TCP TCP 0 - 65535 0.0.0.0/0
Security rules of sg-0288988f38d2007be / shared-elasticSearch-sg
Type Protocol Port range Source
All traffic All All 40.10.0.0/16 (VPC-A)
All traffic All All 20.10.0.0/16 (VPC-B)
All traffic All All 30.10.0.0/16 (VPC-C)
Outbound rules:
Type Protocol Port range Destination
All traffic All All 0.0.0.0/0
The terraform code provisioned security groups do not work. In VPC-B and VPC-C, it cannot reach elasticsearch at VPC-A. How to code terraform properly so that it can provision the security groups I manually created?
Mohammed Yahyaalmost 5 years ago
@loren https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.33.0 with provider: New default_tags for applying tags across all resources under a provider
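For reference, the provider-level block looks roughly like this (tag values are illustrative):
provider "aws" {
  default_tags {
    tags = {
      Environment = "dev"
      ManagedBy   = "terraform"
    }
  }
}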
Bart Coddensalmost 5 years ago
Hi all, I would like to query a existing security group id and assign it to a ec2 instance
Bart Coddensalmost 5 years ago
for example:
Hakan Kayaalmost 5 years ago
Hi folks, can someone help me with an issue regarding gitlab provider? I’d like to try setting up a vanilla gitlab.com workspace from scratch. Right now the repo is completely empty, only one intial (owner) account exists. I tried using his personal access token to create a first gitlab_group resource, but I’m only getting 403 forbidden errors. Am I missing something or is there another requirement beforehand?
Bart Coddensalmost 5 years ago
data "aws_security_groups" "cloudposse-ips" {
tags = {
Name = "cloudposse-ips"
}
}
vpc_security_group_ids = ["data.aws_security_groups.cloudposse-ips.ids"]Bart Coddensalmost 5 years ago
this does not seem to work
Bart Coddensalmost 5 years ago
though the security group gets queried correctly:
Bart Coddensalmost 5 years ago
data "aws_security_groups" "cloudposse-ips" {
arns = [
"arn:aws:ec2:eu-west-1:564923036937:security-group/sg-0d5e812c1bb1c471a",
]
id = "eu-west-1"
ids = [
"sg-0d5e812c1bb1c471a",
]
tags = {
"Name" = "cloudposse-ips"
}
vpc_ids = [
"vpc-0baf4791f3db9bd8c",
]
}Joalmost 5 years ago(edited)
Should setting the following environment variables in the shell (zsh) ensure that the variables are set in the azurerm provider section?
export ARM_CLIENT_ID=aaaa
export ARM_CLIENT_SECRET=bbbb
export ARM_SUBSCRIPTION_ID=cccc
export ARM_TENANT_ID=dddd
Bart Coddensalmost 5 years ago
when I specify it as such: vpc_security_group_ids = ["${data.aws_security_group.cloudposse-ips.ids}"]
Bart Coddensalmost 5 years ago
it works
Bart Coddensalmost 5 years ago
but then I get a Interpolation warning
bkalmost 5 years ago
Hi friends, I'm creating an ec2 instance with https://github.com/cloudposse/terraform-aws-ec2-instance and the ssh keypair with https://github.com/cloudposse/terraform-aws-key-pair. the ssh connection seems to be timing out (no authorization error).
Is there some non-obvious, not-default setting I need to use to get the networking bits to work?
lorenalmost 5 years ago
cdktf 0.2 release... https://github.com/hashicorp/terraform-cdk/releases/tag/v0.2.0
Or Azarzaralmost 5 years ago
Hi All!
What is the best approach to integrate a security audit step on a Terraform pipeline in Jenkins using a third-party provider?
1. Should the provider supply a Jenkins plugin that adds an extra step having access to a repo with the Terraform plan file output?
2. Should the provider supply a Jenkins shared library that can be imported in any existing pipeline, calling a dedicated function with the Terraform plan output or path?
3. Should the provider supply a docker image that exposes a rest API endpoint receiving the Terraform plan output?
mrwackyalmost 5 years ago
I thought I saw something in Cloudposse TF to limit the length of resource names (ie for when we generate a label longer than AWS allows us to name a resource).. but I can't find it
Mohammed Yahyaalmost 5 years ago
Terraform Cloud now supports the README.md file and outputs.tf file, and their values are shown in the UI, pretty neat
Walialmost 5 years ago
Masha’allah, is there a blog about this @Mohammed Yahya
Yoni Leitersdorf (Indeni Cloudrail)almost 5 years ago
Our Engineering team decided to share how they're doing CI/CD for Terraform with Jenkins, hopefully this could be helpful for any one here: https://indeni.com/blog/terraform-goes-hand-in-hand-with-ci-cd/
Jeff Behlalmost 5 years ago
hey all - first time caller, long time listener 🙂
fyi on something that was vexing me in the terraform-aws-elasticsearch module: being a smart user, I also use terraform-null-label, applying it as so to a security group I'll be using for my elasticsearch domains:
resource "aws_security_group" "es_internal" {
  description = "internal traffic"
  name        = module.label.id
  tags        = module.label.tags
  vpc_id      = data.aws_vpc.main.id
}
Jeff Behlalmost 5 years ago
in the terraform module, I use context = module.label.context. the end result? the terraform module tries to create a security group with the same name as the one I already created, so it errors out.
Jeff Behlalmost 5 years ago
on an AWS error of “security group with that name already created!”
Jeff Behlalmost 5 years ago
interesting side effect of using best practices 🙂
sheldonhalmost 5 years ago
Quick catch-up, any progress by folks on a better Azure Pipelines Terraform workflow? I can use multistage pipelines or potentially use another tool like Terraform Cloud, env0, scalyr, but azure pipelines will be the easiest with no approval required.
Any reason to push for the others right now or is azure pipelines servicing others well right now with multistage yaml pipelines.
EvanGalmost 5 years ago
Back to this thread: https://sweetops.slack.com/archives/CB6GHNLG0/p1611948097136100. I figured out how to write an AWS policy that only requires MFA for human users in a group. It's pretty cool. You have to enter your MFA code when you assume role. This is HUGE security risk for cloud based companies.
Yoni Leitersdorf (Indeni Cloudrail)almost 5 years ago
Quiz:
If you use an aws_autoscaling_group with aws_launch_configuration,
without specifying a VPC (that is, AWS is expected to use the default VPC),
and without setting associate_public_ip_address,
then do the EC2 instances generated have a public IP address or not?
(answer in the thread)
lorenalmost 5 years ago
the combination of tftest and pytest is really feeling so much nicer and more robust/extensible than terratest.... https://github.com/GoogleCloudPlatform/terraform-python-testing-helper
kgibalmost 5 years ago
is there a way to enable access logging with this module? https://github.com/cloudposse/terraform-aws-tfstate-backend/blob/793d3f90c25d9f17f4a299be7b13ae5141795345/main.tf#L106
TED Vortexalmost 5 years ago
anyone here used the cloudflare modules ? can give me some backend best practices ? cheers
Matt Gowiealmost 5 years ago(edited)
Appreciate some 👍️s on this AWS provider aws_cognito_user resource addition issue: https://github.com/hashicorp/terraform-provider-aws/issues/4542
Walialmost 5 years ago
You need this tool like yesterday. Absolutely what you need when dealing with imports on tf
https://github.com/GoogleCloudPlatform/terraformer
Yoni Leitersdorf (Indeni Cloudrail)almost 5 years ago
We're running a survey on the difference between identifying cloud security issues in production, vs in code ("shift left"). It's a short one, meant to get a high level of understanding of people's sentiments:
https://docs.google.com/forms/d/e/1FAIpQLSc7izchAxnCqkQbdwIBETYX51hGmX_GMdqO9ZnEYSx34V_20Q/viewform?usp=sf_link
We're giving away free Visa gift cards to help incentivize people for their time. However, I can also share here, that we will be sharing the raw results (minus PII, etc.), and then the conclusions, for the benefit of everyone here who is thinking about "shift left".
Any comments on the survey are also very welcome.
uselessuseofcatalmost 5 years ago
What is your way of managing Security Groups through Terraform? I would like to create a module where I specify the list of ports and allowed CIDR blocks multiple times. I can do for_each but that can only be done for one thing, for example for ports; I do not know how to apply for_each for both ports and CIDR blocks? Thanks!
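One common pattern for this (a sketch; variable names are illustrative) is to build the port/CIDR combinations with setproduct and feed the result to for_each:
locals {
  rules = { for pair in setproduct(var.ports, var.cidr_blocks) :
    "${pair[0]}-${pair[1]}" => { port = pair[0], cidr = pair[1] }
  }
}

resource "aws_security_group_rule" "ingress" {
  for_each          = local.rules
  type              = "ingress"
  security_group_id = var.security_group_id # assumed existing security group
  protocol          = "tcp"
  from_port         = each.value.port
  to_port           = each.value.port
  cidr_blocks       = [each.value.cidr]
}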
sohel2020almost 5 years ago
which one is the best practice (tf version 0.12 / 0.13) and why
1. name = format("%s-something", var.my_var)
2. name = "${var.my_var}-something"
OliverSalmost 5 years ago(edited)
Some of you might be interested in terraform-aws-multi-stack-backends, this is the first release: https://registry.terraform.io/modules/schollii/multi-stack-backends/aws. There are diagrams in the examples/simple folder. Any feedback welcome!
melissa Jenneralmost 5 years ago
Does anyone know a Kinesis Firehose Terraform Module that sends Data Streams to Redshift?
Yoni Leitersdorf (Indeni Cloudrail)almost 5 years ago(edited)
Hey guys, would appreciate your collective minds for some feedback:
Our IaC security tool (Cloudrail) now has a new capability called "Mandate On New Resources Only". If this is set on a rule, Cloudrail will only flag resources that are set to be created under the TF plan.
This brought up an interesting philosophical question:
If a developer is adding new TF code that uses an existing module, is it really a new resource? Technically, it is. Actually several resources in many cases generated by the module. But in reality, it's just the same module again, with different parameters.
Some of our devs said "well, technically, yes, but it's the same module, so from an enforcement perspective, it's not a new resource, it's just like other uses of the same module".
I'm adding examples in a thread on this message. Appreciate your guys' and gals' thoughts on this matter as we think through it.
Tomekalmost 5 years ago
i have a module that creates an ECS task that is used with a for_each. Is there a way to use the same execution role across each invocation of the module? (Only way i can think of is creating the role outside the module and passing the arn in as a var
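That approach (create the role once outside the module and pass its ARN in) is a reasonable pattern; a rough sketch with illustrative names:
resource "aws_iam_role" "ecs_task_execution" {
  name               = "ecs-task-execution"
  assume_role_policy = data.aws_iam_policy_document.ecs_tasks_assume.json # assumed trust policy document
}

module "task" {
  source   = "./modules/ecs-task" # hypothetical module
  for_each = var.tasks

  name               = each.key
  execution_role_arn = aws_iam_role.ecs_task_execution.arn
}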
joshmyersalmost 5 years ago
I have a list of maps (nested) - can these be collapsed down?
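If a shallow merge is enough, one sketch (assuming the list is local.maps):
locals {
  collapsed = merge(local.maps...) # "..." expands the list into separate merge() arguments
}
Deeply nested maps would need merging level by level (or a for expression that rebuilds the structure).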
Bart Coddensalmost 5 years ago
terraform confuses me a bit again
Bart Coddensalmost 5 years ago
I query the instance id with a datasource like this:
data "aws_instance" "instancetoapplyto" {
  filter {
    name   = "tag:Name"
    values = ["${var.instancename}"]
  }
}
Bart Coddensalmost 5 years ago
and then I use it in a cloudwatch alarm:
Bart Coddensalmost 5 years ago
InstanceId = data.aws_instance.instancetoapplyto.id
Bart Coddensalmost 5 years ago
this works but I get a warning like this:
Bart Coddensalmost 5 years ago
Warning: Interpolation-only expressions are deprecated
on ../../../modules/cloudwatch/alarms.tf line 8, in data "aws_instance" "instancetoapplyto":
8: values = ["${var.instancename}"]
Bart Coddensalmost 5 years ago
I know about the Interpolation-only expression but here it confuses me
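The fix for that warning (a sketch) is to drop the interpolation-only wrapper and pass the variable directly:
values = [var.instancename]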
Matt Gowiealmost 5 years ago
Would appreciate some 👍️ on this GH provider issue: https://github.com/integrations/terraform-provider-github/issues/612
Marcin Brańskialmost 5 years ago(edited)
I got a map providing IAM group name and policies that should be attached to it:
groups = {
  Diagnose: [
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
    ...
  ], ...
I want the policies to be attached to the groups with aws_iam_group_policy_attachment, but because of that format it requires a double iteration to enumerate all policies attached to a group.
locals {
  groups_helper = chunklist(flatten([for key in keys(local.groups) : setproduct([key], local.groups[key])]), 2)
}

resource "aws_iam_group_policy_attachment" "this" {
  for_each = {
    for group in local.groups_helper : "${group[0]}.${group[1]}" => group
  }

  group      = each.value[0]
  policy_arn = each.value[1]
}
I did it with the ☝️ code above, but I think it should be much simpler than my hacky groups_helper.
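For comparison, a sketch of a flatter variant that skips chunklist/setproduct by building the pairs with nested for expressions (group_policy_pairs is a made-up local name, reading the same local.groups map):
locals {
  group_policy_pairs = flatten([
    for group, policies in local.groups : [
      for policy in policies : {
        group  = group
        policy = policy
      }
    ]
  ])
}

resource "aws_iam_group_policy_attachment" "this" {
  for_each = { for pair in local.group_policy_pairs : "${pair.group}.${pair.policy}" => pair }

  group      = each.value.group
  policy_arn = each.value.policy
}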
Steve Wade (swade1987)almost 5 years ago(edited)
i am trying to get my head around how much of a cluster <insert swear word here> the upgrade from 0.13.4 to 0.14.x is
mainly from a CI perspective (we use Atlantis) and a pre-commit / dev perspective
TED Vortexalmost 5 years ago
anyone happen to have a tutorial on https://github.com/cloudposse/terraform-aws-tfstate-backend ? terraform newbie here, could use some hints and best practices for implementing it here: https://github.com/0-vortex/cloudflare-terraform-infra
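A minimal sketch of how that module is typically wired up, going from memory of its README (input names should be double-checked against the version you pin): apply once with local state so it can create the S3 bucket and DynamoDB lock table and write backend.tf, then run terraform init again to migrate the state into the new backend.
module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # version = "x.y.z"   # pin a release you've checked

  namespace = "vortex"   # hypothetical naming inputs
  stage     = "prod"
  name      = "terraform"

  # have the module generate a backend.tf you can then init against
  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
}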
sheldonhalmost 5 years ago
@Erik Osterman (Cloud Posse) do you all still like the yaml_config approach? I'm building out some 2 environment pipelines and started with this. It's elegant, but verbose.
Would you recommend using a tfvars as an argument for an environment instead of the yaml config in certain cases?
I like the yaml config, but just trying to think through what I can do to avoid adding abstractions on abstractions when possible, and make it easier to collaborate.
sheldonhalmost 5 years ago
[Seperate Question] I also have to initialize my backend in a setup step ahead of time instead of being more dynamic with the terraform-tf-state backend since it generates the backend tf. Is there any "pipeline" type setup you'd recommend to take advantage of your tf-state locking module, but not require the manual setup steps, backend.tf and other steps to be completed manually first?
I'm used to backend as terraform cloud which made it basically stupid simple with no setup requirement. Would like to approach something like this a bit more, but with S3.
No rush, just trying to think through a "non-terraform-cloud" oriented remote backend setup with the same ease of setup.
Alex Jurkiewiczalmost 5 years ago(edited)
Is there a way to iterate over a list and get an index as well as the value? At the moment I am doing this in a string template which is sort of ugly:
%{for index in range(length(local.mylist))}
[item${index}]
name=${local.mylist[index].name}
%{endfor}
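If I remember right, template for directives accept the same optional index symbol as for expressions, so something like this (assuming local.mylist is a list of objects with a name attribute) avoids the range()/length() dance:
%{for index, item in local.mylist}
[item${index}]
name=${item.name}
%{endfor}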
Garethalmost 5 years ago
Good morning,
Has anybody got an idea if it is possible to override a value in a map of objects that's set in tfvars? The docs suggest that the command line takes priority over tfvars and that you can override items via the command line, but I'm struggling to get the nesting right.
terraform plan -var=var.site_configs.api.lambdas_definitions.get-data={"lambda_zip_path":"/source/dahlia_v1.0.0_29.zip"}
The structure is:
variable "site_configs" {
  type = map(object({
    lambdas_definitions = map(object({
      lambda_zip_path = string
    }))
  }))
}
The above is a large data structure but I've tried to simplify it for the purpose of this question.
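As far as I know, -var can't address into a nested attribute; it always replaces the variable's entire value, overriding tfvars for that variable. So the override has to supply the whole structure (including every entry you still need), quoted for the shell, along the lines of:
terraform plan -var='site_configs={api={lambdas_definitions={"get-data"={lambda_zip_path="/source/dahlia_v1.0.0_29.zip"}}}}'
An alternative is a separate, defaulted override variable merged over var.site_configs in a local, though merge() is shallow, so a top-level key is still replaced wholesale.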
Mohammed Yahyaalmost 5 years ago
watch out for this nasty bug with RabbitMQ (enable logging) in the AWS provider https://github.com/hashicorp/terraform-provider-aws/issues/18067
John Clawsonalmost 5 years ago
I'm looking at building out a new account structure (currently all dumped into one wild-west style account) for my company using https://github.com/cloudposse/reference-architectures, but I don't think we'll wind up using EKS or kubernetes in any fashion. For now our needs are pretty simple and fargate should suffice. Will I regret using the reference architecture to build out the Organizations account structure even if I don't need all of the components that it's made to create?
Bart Coddensalmost 5 years ago
Good morning (in Belgium/Europe) to all, I am a bit confused by this module:
Bart Coddensalmost 5 years ago
you can assign a logging target and it's described as such:
Bart Coddensalmost 5 years ago
object({
  bucket_name = string
  prefix      = string
})
Bart Coddensalmost 5 years ago(edited)
but how to define this in the terraform config?
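A minimal sketch of supplying that object to the module input, assuming the input is called logging (check the module's variables.tf for the real name; the source, bucket, and prefix values are made up):
module "example" {
  source = "../modules/whatever"   # placeholder module source

  logging = {
    bucket_name = "my-access-logs-bucket"
    prefix      = "alb/"
  }
}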
sheldonhalmost 5 years ago
Hate bugging folks again, but I'm so close. I just need a pointer on the backend remote state with the new yaml config stuff. @Erik Osterman (Cloud Posse) anyone can point me towards a good example?
I'm unclear if I have to use cli options or if module "backend" actually works to define the remote state dynamically for me as part of the yaml config stack setup
https://github.com/cloudposse/terraform-yaml-stack-config/blob/75cd7c6d6e17a9c701d4067dbcd1eedcf6039aa4/examples/complete/main.tf#L12
I found this. I thought backend configs must be hard coded and can't be variables, so could someone point me towards a post or example where the S3 remote backend is used with the yaml config for stacks, pretty please. I want to deploy my stacks to 3 accounts, and each has custom yaml overrides from the default.
Garethalmost 5 years ago
Good evening all,
I am trying to use the below JSON input to supply a value to a map I've created normally within terraform, and in part it works fine on Windows:
terraform_0.13.5 plan -var=build_version_inputs={get-data:\"1.0.1_1\",post-data:\"1.0.1_1\"}
However when I try it on CentOS it fails with the below error. I assume it's to do with the escaping within bash?
Error: Extra characters after expression

  on <value for var.build_version_inputs> line 1:
  (source code not available)

An expression was successfully parsed, but extra characters were found after it.
I've tried a variety of escaping sequences but with no luck. Any suggestions, please?
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={"get-data":"1.0.1_1"},{"post-data":"1.0.1_1"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={\"get-data\":\"1.0.1_1\",\"post-data\":\"1.0.1_1\"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:\"1.0.1_1\",post-data:\"1.0.1_1\"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:"1.0.1_1",post-data:"1.0.1_1"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:1.0.1_1,post-data:1.0.1_1}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={"get-data":"1.0.1_1","post-data":"1.0.1_1"}
Last question: if specifying {"get-data":"1.0.1_1","post-data":"1.0.1_1"} in a JSON file, how do you reference the variable you are trying to supply data to? Like this? I assume not, given the error I've just got 😞
{
  "variable": {
    "build_version_inputs": {
      "get-data": "1.0.1_1",
      "post-data": "1.0.1_1"
    }
  }
}
/usr/local/bin/terraform_0.13.5 plan -var-file my.tf.json
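Two hedged suggestions. First, single-quoting the whole -var value usually stops bash from eating the quotes, and Terraform should then parse the braces as an HCL map:
terraform plan -var='build_version_inputs={"get-data":"1.0.1_1","post-data":"1.0.1_1"}'
Second, for a var file the top-level JSON keys are the variable names themselves (the "variable" wrapper belongs to *.tf.json configuration files, not var files), and naming it *.tfvars.json keeps Terraform parsing it as JSON. The file name below is made up, e.g. build.tfvars.json:
{
  "build_version_inputs": {
    "get-data": "1.0.1_1",
    "post-data": "1.0.1_1"
  }
}
then: terraform plan -var-file=build.tfvars.json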
Saichovskyalmost 5 years ago
Hey peeps,
How do I list resources that I can import into my statefile? In other words, I know that I can import already existing resources using terraform import <address> <ID>, but before importing, I would like to see what's available - a list containing <ID> and probably other resources that were created outside of terraform.
sheldonhalmost 5 years ago
I've followed our past discussions on pulumi. I'm curious if anyone has actually had a great result using it if you are working with Go/python/node devs?
Since it's an abstraction of the apis just like terraform, for application stacks it sorta makes sense to me over learning HCL in depth for some. Was thinking about it providing value for serverless cli oriented alternative that could handle more resource creation that is needed specifically tied to the app.
I don't find it as approachable as HCL, but in the prior roles I was working with folks that knew HCL, but not Go, now it's the opposite. They know Go, but not HCL 🙂
rssalmost 5 years ago(edited)
v0.15.0-rc1
0.15.0-rc1 (Unreleased)
ENHANCEMENTS:
backend/azurerm: Dependency Update and Fixes (#28181)
BUG FIXES:
core: Fix crash when referencing resources with sensitive fields that may be unknown (https://github.com/hashicorp/terraform/issues/28180)
Hakan Kayaalmost 5 years ago
Hi, did anyone ever try to manually remove an EKS cluster from the state and, after changing some stuff in the console, reimport it back into the state? I am running into a race condition when adding subnets to the cluster and was wondering if the destroy + create path could be avoided…
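For reference, a sketch of the remove / re-import flow being described; the resource address and cluster name are hypothetical, and the import ID for aws_eks_cluster is the cluster name:
terraform state rm aws_eks_cluster.this
terraform import aws_eks_cluster.this my-cluster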