117 messages
Eric Berg · over 5 years ago
I'm updating my terraform-opsgenie-incident-management implementation from an earlier release and it looks like the auth mechanism has changed. I removed opsgenie_provider_api_key from being passed to the CP opsgenie modules and added a provider block, but I keep getting this shockingly helpful message and can't find where to make this change:
Error: Missing required argument
The argument "api_key" is required, but was not set.
Eric Berg · over 5 years ago
it's a module that only uses the opsgenie provider, so it's not anything else.
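For anyone hitting the same thing: in the newer releases the API key moves from a module variable to a root-level provider block. A minimal sketch (the variable name here is an assumption, not from the module):

```hcl
# Sketch: configure the OpsGenie provider at the root module level.
# "opsgenie_api_key" is a made-up variable name for illustration.
variable "opsgenie_api_key" {
  type = string
}

provider "opsgenie" {
  api_key = var.opsgenie_api_key
}
```

The key can then be supplied via TF_VAR_opsgenie_api_key rather than committed anywhere.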
sahil kamboj · over 5 years ago (edited)
Hey guys, facing a weird problem with the terraform RDS module:
I made a read replica with terraform and that was successful.
After that, when I do terraform apply, it says it has to change the name and wants to recreate the replica (I also applied this).
But again it shows it wants to change the name.
I checked the name in tfstate and it's what it should be (it wants to change the name {masterdb name} to {replica name}).
sahil kamboj · over 5 years ago
~ name = "frappedb" -> "frappedb-replica" # forces replacement
option_group_name = "frappe-read-db-20200831120637864700000001"
parameter_group_name = "frappe-read-db-20200831120637864800000002"
password = (sensitive value)
performance_insights_enabled = false
natalie · over 5 years ago (edited)
Evaluating Terraform Cloud for our team. Wondering if anyone here is using it currently? And maybe can share some pros, cons, regrets, tips, etc.? How was the migration from open source Terraform to the cloud? Thank you
Shankar Kumar Chaudhary · over 5 years ago
When I do terragrunt apply in the tfstate backend, it goes to create the table and S3 bucket again and throws an error. Why so?
rss · over 5 years ago (edited)
v0.13.2
0.13.2 (September 02, 2020)
NEW FEATURES:
Network-based Mirrors for Provider Installation: As an addition to the existing capability of "mirroring" providers into the local filesystem, a network mirror allows publishing copies of providers on an HTTP server and using that as an alternative source for provider packages, for situations where directly accessing the origin registries is...
stefan · over 5 years ago
Hi. Is there a possibility to use a CMK instead of the KMS default key for encryption at terraform-aws-dynamodb? Thanks.
stefan · over 5 years ago
Hi. Another question: How can I deactivate the ttl_attribute in terraform-aws-dynamodb? If I set it to null or "" I get an error (because it must have a value). If I omit the argument, TTL is enabled with the name "EXPIRES". I have checked the code in the module and see no way to disable TTL. Can anyone explain to me how this works?
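For context, at the raw resource level TTL is off whenever the ttl block is omitted; a minimal sketch (plain resource, not the module's code):

```hcl
# Sketch: a plain aws_dynamodb_table with TTL left disabled.
resource "aws_dynamodb_table" "example" {
  name         = "example"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }

  # No ttl block here: TTL stays disabled. If the module always renders a
  # ttl block with a default attribute name, there may be no way to turn it
  # off without a module change.
}
```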
loren · over 5 years ago (edited)
fyi, submitted an issue with tf 0.13 that cloudposse modules may run into, since it impacts conditional resource values (e.g. join("", resource.name.*.attr)) that are later referenced in other data sources. this includes module outputs that are passed to data sources later in a config... https://github.com/hashicorp/terraform/issues/26100
Peter Huynh · over 5 years ago
hi all, sometimes, I need to do things outside of terraform, for example provisioning the infra vs updating content (eg putting things into a bucket).
This duplicates the variable declarations: one set for shell and another for terraform.
Has anyone run into something similar? Do you have any advice on how to DRY the config?
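One pattern that avoids the duplication (sketch; names are made up): declare the value once in Terraform, and have the shell side read it back via terraform output instead of re-declaring it.

```hcl
# Sketch: Terraform is the single source of truth for the bucket name.
variable "site_bucket" {
  type = string
}

output "content_bucket" {
  value = var.site_bucket
}

# The shell side then reads the output instead of duplicating the variable:
#   BUCKET="$(terraform output -json | jq -r '.content_bucket.value')"
#   aws s3 sync ./public "s3://${BUCKET}"
```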
tajmahpaul · over 5 years ago
Hey guys, I'm using the git::https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.10.0 module. Just wondering if changing backup_window on a deployed RDS instance will create a new RDS instance or update the existing one. Also still new to terraform, so maybe there is an easy way to find out? Thanks guys
Matt Gowie · over 5 years ago
Does anyone know a good tool for pulling values from Terraform state outside of terraform itself?
As in, I have a CD process that is running simple bash commands to build and deploy a static site. I’d like to get my CloudFront CDN Distribution ID and the bucket that the static site’s assets should be shipped to from my Terraform state file in S3. I could pull the state file, parse out the outputs I need, and then go about it that way but I am figuring that there must be a tool written around this.
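One option that needs no extra tooling (sketch; bucket/key names are placeholders): a tiny config in the CD step that reads the existing state with the terraform_remote_state data source, then terraform output gives back the values.

```hcl
# Sketch: read outputs out of an existing S3-backed state file.
data "terraform_remote_state" "site" {
  backend = "s3"

  config = {
    bucket = "my-tfstate-bucket"               # placeholder
    key    = "static-site/terraform.tfstate"   # placeholder
    region = "us-east-1"
  }
}

output "cdn_distribution_id" {
  value = data.terraform_remote_state.site.outputs.cdn_distribution_id
}
```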
loren · over 5 years ago
oh. sweet! the terraform registry has versioned docs for providers! looks like the versioned docs go back about a year or so, here's the earliest versioned docs for the aws and azure providers
* https://registry.terraform.io/providers/hashicorp/aws/2.33.0/docs
* https://registry.terraform.io/providers/hashicorp/azurerm/1.35.0/docs
sheldonh · over 5 years ago
Can you use for iterator with a data source? Just thought about this with looking up list of github users for example. Would like to know if that’s possible, didn’t see anything in docs
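I believe for_each works on data blocks the same as on resources (since 0.12.6); a sketch with the GitHub provider (usernames are placeholders):

```hcl
# Sketch: look up several GitHub users with a single data block.
variable "github_users" {
  type    = set(string)
  default = ["alice", "bob"]   # placeholders
}

data "github_user" "each" {
  for_each = var.github_users
  username = each.value
}

locals {
  # map of username => GitHub user id
  user_ids = { for name, user in data.github_user.each : name => user.id }
}
```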
Mike Schueler · over 5 years ago
when running terraform in AWS, with the s3 backend for the statefile, is there any way to create the bucket when running for the first time? in the docs, it just says
This assumes we have a bucket created called mybucket
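The usual bootstrap dance (sketch): create the bucket with local state first, then add the backend block and re-run terraform init, which offers to migrate the state into it.

```hcl
# Step 1: with no backend block yet (local state), create the bucket itself.
resource "aws_s3_bucket" "tfstate" {
  bucket = "mybucket"   # the name from the docs example

  versioning {
    enabled = true
  }
}

# Step 2 (after the bucket exists): uncomment, then run `terraform init`
# again and let Terraform copy the local state into S3.
# terraform {
#   backend "s3" {
#     bucket = "mybucket"
#     key    = "global/terraform.tfstate"
#     region = "us-east-1"
#   }
# }
```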
Cody Moore · over 5 years ago (edited)
I recently updated to the newest vpc module version https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.17.0 but then got this error for the null resource:
Error: Provider configuration not present
To work with
module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3], after
which you can remove the provider configuration again.
Would anyone know how to solve it?
I noticed that the 0.13 upgrade docs (https://www.terraform.io/upgrade-guides/0-13.html#explicit-provider-source-locations) mention this blurb:
In this specific upgrade situation the problem is actually the missing resource block rather than the missing provider block: Terraform would normally refer to the configuration to see if this resource has an explicit provider argument that would override the default strategy for selecting a provider. If you see the above after upgrading, re-add the resource mentioned in the error message until you've completed the upgrade.
But I wasn't sure how to interpret that if it's something that might have happened with the vpc module upstream of my usage?
Eric Berg · over 5 years ago
Regarding the config module in terraform-opsgenie-incident-management, what's the significance of including "repo:...;owner: David..." in the descriptions?
Makeshift (Connor Bell) · over 5 years ago
Does anybody know a nice way to attach and then detach a security group from an ENI during a single run of a tf module? I'm trying to allow the host running TF temporary access to the box while it deploys, then revoke that later
RB · over 5 years ago
Found out recently that a tf plan will update the state file, notably the version.
Any good tricks for doing a plan without the state updating?
Eric Berg · over 5 years ago
Is there a doc on contributing to Cloudposse TF mods?
Abel Luck · over 5 years ago
anyone know how to do a "find by" type of operation on a list of maps in terraform?
my_objs = [ {"foo": "a"}, {"foo": "b"} ]
# find_by("foo", "a", my_objs) --> {"foo": "a"}
Abel Luck · over 5 years ago
Here's what I've come up with :/
[for o in local.my_objs : o if o.foo == "a"][0]
Chris Fowles · over 5 years ago
yeah, for/if is going to be the way that you'll need to filter a collection by a property
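To make that lookup safe when nothing matches, it can be wrapped in try() (sketch):

```hcl
locals {
  my_objs = [
    { foo = "a" },
    { foo = "b" },
  ]

  # find_by("foo", "a"): first matching object, or null if there is no match
  found = try([for o in local.my_objs : o if o.foo == "a"][0], null)
}
```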
Abel Luck · over 5 years ago (edited)
I'm investigating moving our setup to terragrunt to simplify modules. Looking at the official examples (https://github.com/gruntwork-io/terragrunt-infrastructure-live-example/tree/master/non-prod) I see that they place the region higher in the hierarchy than the stage. Is this common?
gugaiz · over 5 years ago
Hi, I am trying to read the output security_group_id from the elastic-beanstalk-environment module, but I am getting "This value does not have any attributes" when I call it. Any ideas?
Makeshift (Connor Bell) · over 5 years ago
Is it possible to just simply 'set' a block? I have a basic object containing several volume definitions:
container_volumes = {
files_staging = {
name = var.file_staging_bucket
docker_volume_configuration = [{
scope = "shared"
autoprovision = false
driver = "rexray/s3fs"
}]
}
files_store = {
name = var.file_store_bucket
docker_volume_configuration = [{
scope = "shared"
autoprovision = false
driver = "rexray/s3fs"
}]
}
And would rather like to use it like so:
volume = toset([
local.container_volumes["files_staging"],
local.container_volumes["files_store"]
])
Unfortunately, TF whines that it wants it to be a block rather than a simple assignment (even though it's literally the same thing in the underlying JSON...). Please tell me this isn't how you're meant to work with these stupid blocks:
dynamic "volume" {
for_each = ["files_staging", "files_store",]
content {
name = volume.value
dynamic "docker_volume_configuration" {
for_each = local.container_volumes[volume.value].docker_volume_configuration
content {
scope = docker_volume_configuration.value.scope
autoprovision = docker_volume_configuration.value.autoprovision
driver = docker_volume_configuration.value.driver
}
}
}
}
David J. M. Karlsen · over 5 years ago
hm, something broke badly with cloudposse/ecr/aws v0.27.0
David J. M. Karlsen · over 5 years ago
even if I give a name as name = format("%s/%s/%s", var.orgPrefix, var.system, each.value)
David J. M. Karlsen · over 5 years ago
it will strip the /
Chris Warren · over 5 years ago
Hi everyone 👋 I've got a question about the new module features (count, foreach).. I want to access an output from a module with a count (its a true/false flag to create only if the flag is true) and use that output conditionally... but I get an error that it is an empty tuple. Anyone have experience with this?
Abel Luck · over 5 years ago (edited)
Being able to forward submodule outputs with output "foo" { value = module.foo } is very handy. Is there a way to do an export like that, but remove the layer of indirection? So that all of the outputs of foo are available as top-level outputs of the exporting module?
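As far as I know there's no wildcard re-export, so flattening means one output per attribute (sketch; the module and attribute names are placeholders):

```hcl
# Re-exporting the whole module keeps the extra layer of indirection:
output "foo" {
  value = module.foo
}

# Flattening it means spelling out each top-level output by hand:
output "foo_id" {
  value = module.foo.id
}

output "foo_arn" {
  value = module.foo.arn
}
```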
Luke Maslany · over 5 years ago
Hello all. I'm looking for a pointer to some guidance/best practice for cycling IAM access keys as part of a terraformed deployment pipeline. Any recommendations?
Kelvin Tan · over 5 years ago
hi folks! had a question (similar to this old issue) on passing through GCP service account credentials to the atlantis environment. We currently use terraform to host atlantis on an AWS ECS cluster, and would prefer not to have to keep the credentials file in Github or manually baking it wholesale into the task definition. Was wondering if there was an easy way to reference the required credentials.
Right now thinking of either placing it on AWS secrets manager or SSM parameter store, then querying it through the provider module and passing through to the google provider. Open to any other ideas on this 🙂
Jaeson · over 5 years ago
I'm having issues with TF being too pedantic about how I set up my IAC. I'm not really sure what the name for it is, but the way that it expects block syntax, doesn't accept the same as arguments, and doesn't allow for automation in the block syntax, at least as I see it. In the example below, I want to define a number of s3 buckets as part of the IAC for a microservice based application. If I had started with boto3, I'd be done by now, and have all the flexibility I needed. I'm pretty frustrated with this. Anyway, is there any way to do what I have done below in a more DRY / maintainable way? I just discovered another variation of the bucket which also appears to be defined in block syntax -- one of the buckets is encrypted. So that means another 2-3 blocks for just that bucket.
Jaeson · over 5 years ago
locals {
private_buckets = [
"company-${var.environment}-secrets",
"company-${var.environment}-service-auth",
# object level logging = company-prod-cloudtrail-logs
"company-${var.environment}-db-connections",
# object level logging = company-prod-cloudtrail-logs
"company-${var.environment}-service-file-upload",
# object level logging = company-prod-cloudtrail-logs
"company-${var.environment}-service-feedback"
# object level logging = company-prod-cloudtrail-logs
]
default_lifecycle = {
id = "DeleteAfterOneMonth"
expiration_days = 31
abort_incomplete_multipart_upload_days = 7
enabled = false
}
private_buckets_w_lifecycles = {
"company-service-imports" = {
"name" = "company-${var.environment}-service-imports"
"lifecycle_rl" = local.default_lifecycle
}
}
public_object_buckets = [
# "company-${var.environment}-service-transmit"
]
public_buckets_w_lifecycles = {
"company-service-transmit" = {
"name" = "company-${var.environment}-service-transmit"
"lifecycle_rl" = local.default_lifecycle
}
}
}
resource "aws_s3_bucket" "adv2_priv_bucket" {
for_each = toset(local.private_buckets)
bucket = each.value
tags = local.tags
}
resource "aws_s3_bucket" "adv2_priv_bucket_w_lc" {
for_each = local.private_buckets_w_lifecycles
bucket = each.value.name
lifecycle_rule {
id = each.value.lifecycle_rl.id
expiration {
days = each.value.lifecycle_rl.expiration_days
}
enabled = each.value.lifecycle_rl.enabled
abort_incomplete_multipart_upload_days = each.value.lifecycle_rl.abort_incomplete_multipart_upload_days
}
tags = local.tags
}
resource "aws_s3_bucket" "adv2_pubobj_bucket_w_lc" {
for_each = local.public_buckets_w_lifecycles
bucket = each.value.name
# log to cloudtrail bucket (this is for server logging, not object level logging)
# logging {
# target_bucket = aws_s3_bucket.adv2_cloudtrail_log_bucket.id
# }
lifecycle_rule {
id = each.value.lifecycle_rl.id
expiration {
days = each.value.lifecycle_rl.expiration_days
}
enabled = each.value.lifecycle_rl.enabled
abort_incomplete_multipart_upload_days = each.value.lifecycle_rl.abort_incomplete_multipart_upload_days
}
tags = local.tags
}
resource "aws_s3_bucket" "adv2_pubobj_bucket" {
for_each = toset(local.public_object_buckets)
bucket = each.value
tags = local.tags
}
resource "aws_s3_bucket_public_access_block" "adv2_priv_s3" {
for_each = aws_s3_bucket.adv2_priv_bucket
bucket = each.value.id
# AWS console language in comments
# Block public access to buckets and objects granted through new access control lists (ACLs)
block_public_acls = true
# Block public access to buckets and objects granted through any access control lists (ACLs)
ignore_public_acls = true
# Block public access to buckets and objects granted through new public bucket or access point policies
block_public_policy = true
# Block public and cross-account access to buckets and objects through any public bucket or access point policies
restrict_public_buckets = true
}
resource "aws_s3_bucket_public_access_block" "adv2_priv_s3_w_lc" {
for_each = aws_s3_bucket.adv2_priv_bucket_w_lc
bucket = each.value.id
# AWS console language in comments
# Block public access to buckets and objects granted through new access control lists (ACLs)
block_public_acls = true
# Block public access to buckets and objects granted through any access control lists (ACLs)
ignore_public_acls = true
# Block public access to buckets and objects granted through new public bucket or access point policies
block_public_policy = true
# Block public and cross-account access to buckets and objects through any public bucket or access point policies
restrict_public_buckets = true
}rssover 5 years ago(edited)
v0.14.0-alpha20200910
0.14.0 (Unreleased)
ENHANCEMENTS:
cli: A new global command line option -chdir=..., placed before the selected subcommand, instructs Terraform to switch to a different working directory before executing the subcommand. This is similar to switching to a new directory with cd before running Terraform, but it avoids changing the state of the calling shell. (#26087)
Matt Gowie · over 5 years ago
Terraform 0.13.3 will start warning of an upcoming deprecation to the ansible, chef, and puppet provisioners — https://www.reddit.com/r/Terraform/comments/iq2z11/terraform_0133_will_include_a_deprecation_notice/
erik · over 5 years ago
Erik Osterman (Cloud Posse) · over 5 years ago
lol!! alright everyone, let's gear up for terraform 0.14! =P
Igor · over 5 years ago
@Erik Osterman (Cloud Posse) Removing that upper bound on TF version is going to pay dividends
Adam Blackwell · over 5 years ago
Apologies if this is a question that Google can answer: Are there any examples of using a kms secret with terraform-aws-rds-cluster? We currently run an Ansible job to create databases, but want to enable all developers to be able to create their own databases and are not willing to put credentials in a statefile.
Sai Krishna · over 5 years ago
I am using tf-13; terraform init works fine for me, but when I do a terraform plan it throws an error saying it cannot initialize the plugin. Anyone else seen this?
Justin Lai · over 5 years ago (edited)
Hi I’m working with the VPC Module but getting
Error: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
status code: 400, request id: 8332210c-dcbe-4b6d-bde4-c8d37ce655c0
on .terraform/modules/aws_infrastructure.vpc.eks_subnets/nat-gateway.tf line 67, in resource "aws_route" "default":
67: resource "aws_route" "default" {
Wondering if there's any debug help, or what I can do to get around this. I've already done terraform apply and this is a 2nd run.
Chien Huey · over 5 years ago
Is there a way to pull the k8s auth token from the CloudPosse terraform-aws-eks-cluster module? If not, is there any objection to a PR to expose the token via the module's outputs?
Jon · over 5 years ago
Hello, I'm trying to create a custom root module that is comprised of a bunch of public modules. In this case, a bunch of CloudPosse modules but am having some questions regarding the layout..
I'm trying to follow this
https://github.com/cloudposse/terraform-root-modules but then stumbled on this
https://github.com/cloudposse/reference-architectures/tree/0.13.0/templates .
So, is the repo layout supposed to look like what is shown in link #2, using the example Makefile from link #1?
Patrick Sodré · over 5 years ago
Hi Folks, I am trying to reference a module in a private repo. Is there a standard way to tell git+ssh to use the $USER variable for login instead of using "root" when using the geodesic shell?
Yen Kuo · over 5 years ago
Hi guys, is it possible to attach an existing security group to the EC2 instances instead of creating a new one in the terraform-aws-elastic-beanstalk-environment module?
Matt Gowie · over 5 years ago
Anyone have a recommendation for a blog post / video / example repo that shows how to do Multiple Accounts in AWS well using Terraform? IMHO it’s a complicated topic and I’ve blundered through it lightly before, so I’m looking to avoid that going forward.
Martin Canovas · over 5 years ago (edited)
Hi guys, I'm getting the error below when using the terraform-aws-ec2-instance module:
Error: Your query returned no results. Please change your search criteria and try again.
on .terraform/modules/replicaSet40/main.tf line 64, in data "aws_ami" "info":
64: data "aws_ami" "info" {
Martin Canovas · over 5 years ago
and here is my Terraform code:
module "replicaSet40" {
source = "git::https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=tags/0.24.0"
ssh_key_pair = var.ssh_key_pair
instance_type = var.ec2_replicaSet
ami = var.ami_rhel77
ami_owner = var.ami_owner
vpc_id = module.vpc.vpc_id
root_volume_size = 10
assign_eip_address = false
associate_public_ip_address = false
security_groups = [aws_security_group.sg_replicaSet.id]
subnet = module.subnets.private_subnet_ids[0]
name = "om-replicaSet40"
namespace = var.namespace
stage = var.stage
tags = var.instance_tags
}
Martin Canovas · over 5 years ago
variable "ami_owner" {
default = "655848829540"
}
variable "ami_omserver40" {
default = "ami-00916221e415292ed"
}
Martin Canovas · over 5 years ago
I do find my ami when running aws ec2 describe-images --owners self
rss · over 5 years ago (edited)
HCS Azure Marketplace Integration Affected
Sep 16, 09:32 UTC
Identified - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide another update soon.
If you have questions or are experiencing difficulties with this service please reach out to your customer support team.
IMPACT: Creating HashiCorp Consul Service on Azure clusters may fail.
We apologize for this...
rss · over 5 years ago
HCS Azure Marketplace Integration Affected
Sep 16, 14:06 UTC
Update - We are confident we have identified an issue externally with Azure Marketplace Application offerings and are working with Microsoft support to escalate and resolve the issue.
We apologize for this disruption in service and appreciate your patience.
rss · over 5 years ago (edited)
HCS Azure Marketplace Integration Affected
Sep 16, 16:16 UTC
Monitoring - A fix has been implemented by our Azure partners and we are seeing recovery from our tests. We will continue to monitor the environment until we are sure the incident is resolved.
HashiCorp Cloud Team
rss · over 5 years ago
HCS Azure Marketplace Integration Affected
Sep 16, 16:35 UTC
Resolved - We are considering this incident resolved. If you see further issues please contact HashiCorp Support.
We apologize for this disruption in service and appreciate your patience.
HashiCorp Cloud Team
MrAtheist · over 5 years ago
Can someone chime in on the pros and cons of using terraform "workspace"? I'm trying to see how to structure TF for multiple environments and most of the "advanced" gurus prefer to avoid it. This is the one im following and I'm so confused as a beginner newb 🤐
https://www.oreilly.com/library/view/terraform-up-and/9781491977071/ch04.html
rss · over 5 years ago (edited)
v0.13.3
0.13.3 (September 16, 2020)
BUG FIXES:
build: fix crash with terraform binary on openBSD (#26250)
core: prevent create_before_destroy cycles by not connecting module close nodes to resource instance destroy nodes (#26186)
Matt Gowie · over 5 years ago
Hey folks — would love some input into a module dependency issue I’m having using the CP terraform-aws-elasticsearch module.
I have my root project which consumes the above module. That module takes in an enabled flag var and a dns_zone_id var. They are used together in the below expression to determine if the ES module should create hostnames for the ES cluster:
module "domain_hostname" {
source = "git::<https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.7.0>"
enabled = var.enabled && var.dns_zone_id != "" ? 1 : 0
...
}
This is invoked twice for two different hostnames (kibana endpoint and normal ES endpoint).
Now my consumption of the ES module doesn’t do anything special AFAICT. I do pass in dns_zone_id as an reference to another modules output:
dns_zone_id = module.subdomain.zone_id
I previously thought the module-in-module usage pattern was causing the below issue (screenshotted) because that was just too deep of a dependency tree for Terraform to walk (or something along those lines), but I've just now upgraded to Terraform 0.13 for this project and I'm using the new
depends_on = [module.subdomain]. Yet, I'm still getting this same error as I was on 0.12:
Martin Canovas · over 5 years ago
Hey folks, after upgrading my Terraform from 0.12 to 0.13, I'm unable to run terraform init due to terraform failing to find provider packages.
Sweta · over 5 years ago
Hello All, quick question regarding terraform... how can I use terraform to clone an existing EMR cluster
Nitin Prabhu · over 5 years ago (edited)
👋 Hi guys, this is Nitin here and I have just come across this slack channel. If this is not the right channel then please do let me know.
As part of provisioning an EKS cluster on AWS we are exploring terraform-aws-eks-cluster (https://github.com/cloudposse/terraform-aws-eks-cluster). What is the advantage of using the Cloud Posse terraform module over the community published terraform module to provision an EKS cluster on AWS?
Thanks a lot
Jimmie Butler · over 5 years ago
Anyone been able to get https://github.com/cloudposse/terraform-aws-ecs-web-app to work without codepipeline/git enabled?
Aumkar Prajapatiover 5 years ago(edited)
Hey guys, had a quick question: is there any reason adding or removing a rule from a wafv2 acl in the terraform itself forces a destroy/recreate of the entire acl? Currently trying to look for ways to get around this as I need the ACL modified in place rather than destroyed every time a dev goes to modify the wafv2 acl rules.
Brandon Wilsonover 5 years ago
Anyone in here using Firelens to ship logs to multiple destinations? I’m using
cloudposse/ecs-container-definition/aws and trying to come up with a log_configuration that will ship to both CloudWatch and Logstash.
U010W9VSBTLover 5 years ago(edited)
Do you pin the version of TF and/or your providers/plugins?
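For reference, a minimal sketch of pinning both at once (Terraform 0.13 syntax; the version constraints shown are just illustrative):

```hcl
terraform {
  # Pin the Terraform CLI itself to a minor series.
  required_version = "~> 0.13.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"   # allow 3.x upgrades only
    }
  }
}
```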
Yoni Leitersdorf (Indeni Cloudrail)over 5 years ago
Would appreciate your guys’ feedback on the above. Trying to determine best practices for our environment.
Jaesonover 5 years ago
Hi everyone. I'm creating public subnets like this:
resource "aws_subnet" "adv2_public_subnets" {
for_each = var.adv2_public_subnet_map[var.environment]
vpc_id = var.vpc_map[var.environment]
cidr_block = each.value
availability_zone = each.key
tags = merge(local.tags, { "Name" = "adv2-${var.environment}-pub-net-${each.key}" })
}
and I'd like to be able to refer to them similar to this:
resource "aws_lb" "aws_adv2_public_gateway_alb" {
name = "services-${var.environment}-public"
internal = false
load_balancer_type = "application"
subnets = aws_subnet.adv2_public_subnets
idle_timeout = 120
tags = local.tags
}
This also failed to work:
subnets = [aws_subnet.adv2_public_subnets[0].id, aws_subnet.adv2_public_subnets[1].id]
... I've since been unable to figure out how to refer to the subnets created as a list of strings.
I think the issue is that subnets is a collection of objects, not a list of strings, but I'm not sure how to say "give me back a list of strings of attribute x for each object in the collection."
I'm also really not sure how to easily figure out what exactly aws_subnet.adv2_public_subnets is returning without breaking apart the project and creating something brand new just to figure out what that would be. ... is there a way to see this?
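A hedged sketch of one answer to the "list of strings" part (assuming Terraform 0.12+): a for_each resource is a map of objects, so values() plus a splat projects out the ids:

```hcl
# aws_subnet.adv2_public_subnets is a map (AZ => subnet object) because of
# for_each; values() turns it into a list of objects, and [*].id extracts
# each subnet's id, giving a plain list of strings.
locals {
  adv2_public_subnet_ids = values(aws_subnet.adv2_public_subnets)[*].id
}
```

In the aws_lb block that would be `subnets = values(aws_subnet.adv2_public_subnets)[*].id`. As for inspecting what the resource returns: `terraform console` (or a temporary `output` block) is one way to see the full object without restructuring the project.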
organicnzover 5 years ago(edited)
Hi guys, I've got an error when I tried
terraform apply
to spin up a few nodes on Google, just preparing infra for the GitLab CI/CD pipelines on Kubernetes. We had an outage at our ISP for a few days, but I'm not sure that it can be related to this issue.
Code: https://gitlab.com/organicnz/gitops-experiment.git
terraform apply -auto-approve -lock=false
google_container_cluster.default: Creating...
google_container_cluster.default: Still creating... [10s elapsed]
Failed to save state: HTTP error: 308
Error: Failed to persist state to backend.
The error shown above has prevented Terraform from writing the updated state
to the configured backend. To allow for recovery, the state has been written
to the file "errored.tfstate" in the current working directory.
Running "terraform apply" again at this point will create a forked state,
making it harder to recover.
To retry writing this state, use the following command:
terraform state push errored.tfstate
Jurgenover 5 years ago
is there some way I can get tf to load a directory of variable files?
PePe Amengualover 5 years ago
I'm starting to upgrade TF projects to 0.13, and in one particular project I have 3 provider aliases and I'm getting
Error: missing provider provider["registry.terraform.io/hashicorp/aws"].us_east_2
which is weird because it does not complain about any other alias, and the upgrade command works just fine.
Laurynasover 5 years ago(edited)
I'm creating a cloudfront terraform module and want to optionally add geo restriction:
restrictions {
dynamic "geo_restriction" {
for_each = var.geo_restriction
content {
restriction_type = geo_restriction.restriction_type
locations = geo_restriction.locations
}
}
}
variables.tf:
variable "geo_restriction" {
type = object({
restriction_type = string
locations = list(string)
})
description = "(optional) geo restriction for cloudfront"
default = null
}
However this gives me an error when I pass the default null variable:
for_each = var.geo_restriction
|----------------
| var.geo_restriction is null
Cannot use a null value in for_each.
Is there a way to fix this? Or am I doing it wrong?
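A hedged sketch of one common workaround: wrap the object in a list so for_each iterates zero or one times; note also that inside a dynamic block the iterator is referenced through .value, not directly:

```hcl
restrictions {
  dynamic "geo_restriction" {
    # Empty list when the variable is null, a single element otherwise.
    for_each = var.geo_restriction == null ? [] : [var.geo_restriction]
    content {
      restriction_type = geo_restriction.value.restriction_type
      locations        = geo_restriction.value.locations
    }
  }
}
```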
Jimmie Butlerover 5 years ago
Anyone have an example including EFS + Fargate, including permissions?
Richard Quadlingover 5 years ago
Hello. Just looking to use https://github.com/cloudposse/terraform-aws-elasticache-redis. Part of the task is to create users on the redis server that are essentially read only users. Is this possible with this module, or terraform in general? We already have a bastion SSH tunnel in place that only allows tunnelling to specific destinations, so no issue with connecting to the Redis instances.
My guess is that unless there’s a specific resource to monitor, terraform isn’t going to be involved.
But any suggestions would be appreciated.
MrAtheistover 5 years ago(edited)
Does anyone know how to terraform apply just for additional outputs? (and ignore the rest of the changes as it's being tampered with)
edit: there doesn't seem to be a way to pass
ignore_changes into a module? I'm using the vpc module and just want to append some outputs I've missed without messing with the diff.
Tomekover 5 years ago
👋 I'm trying to use https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack and was wondering how the
kms_key_arn is supposed to be used with the required slack_webhook_url string parameter. I was going to create a SecureString parameter in Parameter Store and am not sure if that's the correct way to go about using kms_key_arn.
Patrick Joyceover 5 years ago
is there any way to mv a local terraform state to remote s3? I feel like I'm missing something super trivial
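A sketch of the usual approach (the bucket and key names here are placeholders): declare an s3 backend and re-run terraform init, which should offer to copy the existing local state into it:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"            # placeholder bucket name
    key    = "myproject/terraform.tfstate"   # placeholder key
    region = "us-east-1"
  }
}
```

After adding this block, `terraform init` detects the existing local state and prompts to migrate it to the new backend.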
DJover 5 years ago
Anyone have any more detailed info on this new OSS project from Hashicorp?
DJover 5 years ago
We're on
0.14 already?!
rssover 5 years ago(edited)
v0.14.0-alpha20200923
0.14.0 (Unreleased)
UPGRADE NOTES:
configs: The version argument inside provider configuration blocks has been documented as deprecated since Terraform 0.12. As of 0.14 it will now also generate an explicit deprecation warning. To avoid the warning, use provider requirements declarations instead. (https://github.com/hashicorp/terraform/issues/26135)
RBover 5 years ago
anyone able to do an s3_import using an rds cluster ?
Jonover 5 years ago
can
for_each be used for data source lookups? I want to do something like this:
## retrieve all organizational account ID's
data "aws_organizations_organization" "my_org" {
for_each = toset(var.ACCOUNT_ID)
ACCOUNT_ID = each.key
}
OliverSover 5 years ago(edited)
I like to use cloudposse/terraform-state-backend but it seems overkill to create one bucket per terraform state. The module doesn't seem to allow me to re-use an existing bucket; is there another module that still does the dynamodb setup but uses an already existing bucket?
Almost seems like the s3 stuff should be in a separate module; this way one could create a common bucket to be used for all terraform stacks (in separate "folders" in that bucket, of course) and a dynamodb table + backend.tf file for each cluster. I could refactor that myself of course, but then I would lose the bug fixes & improvements you guys make.
Jurgenover 5 years ago
any ideas? https://discuss.hashicorp.com/t/data-aws-iam-policy-document-and-for-each-showing-changes-on-every-plan-and-nothing-on-apply/14606 Having issues with aws_iam_policy_document always showing a change.
Abel Luckover 5 years ago(edited)
I'm looking for an easy pattern for deploying lambdas with terraform, when the lambda code lives in the terraform module repo. This is for small lambdas that provide maintenance or config services. The problem is always updating the lambda when the code changes: a combination of a
null_resource to build the lambda and an archive_file to package it into a zip works, but we end up having a build_number as a trigger on the null_resources that we have to bump to get it to update the code.
Is there some other pattern to make this easier?
I've thought about packaging the lambda in gitlab/github CI, but terraform cannot fetch a URL to deploy the lambda source 😢
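A hedged sketch of one alternative (the paths and names below are placeholders): let archive_file hash the source directory and feed source_code_hash to the function, so any code change triggers a redeploy with no manual build number to bump:

```hcl
# Zip the lambda source that lives alongside the module.
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/lambda-src"   # hypothetical source directory
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "maintenance" {
  function_name = "maintenance"               # hypothetical name
  role          = var.lambda_role_arn         # assumed to be defined elsewhere
  handler       = "index.handler"
  runtime       = "python3.8"
  filename      = data.archive_file.lambda.output_path
  # Redeploys whenever the zip contents change -- no null_resource trigger.
  source_code_hash = data.archive_file.lambda.output_base64sha256
}
```

This sidesteps the null_resource entirely for pure-source lambdas, though a build step (pip install, compilation) would still need something outside archive_file.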
Erik Osterman (Cloud Posse)over 5 years ago
heads up: if you're using our
terraform-aws-rds-cluster module, we fixed a "bad practice" related to using an inline security group rule, but upgrading is a breaking change. we've documented one way of migrating here: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/83
Yashover 5 years ago
What is the best way to pass the same local/variable to each module? I want the copy to be available to all our modules. It would be great if there is any way to declare a global variable.
OliverSover 5 years ago(edited)
In the latest terraform-aws-backend-state module, the region can no longer be specified as a module parameter; it is inferred from the provider region. If the region were a module parameter, I could loop over a set of regions. How can I do this now?
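One hedged way to handle this (module source shown is illustrative): since modules can't be looped over providers, declare one aliased provider per region and instantiate the module once per alias:

```hcl
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

# count/for_each cannot vary the provider, so each region gets its own
# module block with the aliased provider passed in explicitly.
module "backend_use1" {
  source    = "cloudposse/tfstate-backend/aws"   # hypothetical source
  providers = { aws = aws.use1 }
}

module "backend_usw2" {
  source    = "cloudposse/tfstate-backend/aws"
  providers = { aws = aws.usw2 }
}
```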
Abel Luckover 5 years ago
I'm working on a module that sets up an aws root account with an Org and children accounts. In this module I want to 1) create an audit logs account 2) create a bucket in this logs account. How would I go about doing this? How would terraform execute actions in a newly created account?
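One common pattern for this (a sketch; names, emails, and the account ID are placeholders): Organizations creates the member account with a default OrganizationAccountAccessRole, and a second aliased provider assumes that role so resources can be created inside the new account:

```hcl
# 1) Create the audit account from the org management account.
resource "aws_organizations_account" "audit" {
  name  = "audit-logs"           # hypothetical
  email = "audit@example.com"    # hypothetical
}

# 2) A second provider that assumes the auto-created role in that account.
provider "aws" {
  alias  = "audit"
  region = "us-east-1"
  assume_role {
    # Account ID hard-coded here for illustration; in practice it would come
    # from aws_organizations_account.audit.id, with the usual caveats about
    # provider configuration depending on resource attributes.
    role_arn = "arn:aws:iam::123456789012:role/OrganizationAccountAccessRole"
  }
}

# 3) Resources destined for the child account use the aliased provider.
resource "aws_s3_bucket" "audit_logs" {
  provider = aws.audit
  bucket   = "my-org-audit-logs"   # hypothetical
}
```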
charlespogiover 5 years ago
hi all, first time to use this for beanstalk - https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment
charlespogiover 5 years ago
can you please help point out which permissions i need to add to that user?
OliverSover 5 years ago
No one has any idea on https://sweetops.slack.com/archives/CB6GHNLG0/p1600992655024200?
charlespogiover 5 years ago
do we have something like aws ec2imagebuilder in terraform?
vroadover 5 years ago
I can't disable CloudWatch alarms in https://github.com/cloudposse/terraform-aws-elasticache-redis
I don't want to create them for the dev env, since they incur some cost per month.
vroadover 5 years ago
possibly solved by adding another module variable that disables alarms, or disabling alarms when nothing is specified for alarm_actions and ok_actions. Which is better?
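A minimal sketch of the first option (the alarm name, metric, and threshold are illustrative): a boolean variable drives count, so the alarm is skipped entirely for dev environments:

```hcl
variable "alarms_enabled" {
  type    = bool
  default = true
}

resource "aws_cloudwatch_metric_alarm" "cache_cpu" {
  # count = 0 means the alarm is never created (e.g. for dev).
  count               = var.alarms_enabled ? 1 : 0
  alarm_name          = "redis-cpu-high"   # hypothetical
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ElastiCache"
  period              = 300
  statistic           = "Average"
  threshold           = 75
}
```

An explicit flag tends to read more clearly than inferring intent from empty alarm_actions, since an alarm with no actions can still be useful on a dashboard.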
sahil kambojover 5 years ago
Hey guys facing this issue
every time i do
terraform apply
(applied many times)
it always updates this alarm
~ dimensions = {
~ "AutoScalingGroupName" = "terra-autoscaling-asg-garden" -> "terra-autoscaling-asg-prod"
}
evaluation_periods = 2
id = "cpu-low-alarm"
everything is fine on aws and tf.state.
deleted state locally. but its still there.
zidanover 5 years ago
#terraform
Build once and run everywhere, is a great concept behind docker and containers in general, but how to deploy these containers, here is how I used ECS to deploy my containers, check it out and let me know how do you deploy your containers?
https://www.dailytask.co/task/manage-your-containers-deployment-using-aws-ecs-with-terraform-ahmed-zidan
PePe Amengualover 5 years ago
is there a
WORKSPACE ENV variable in terraform?
Yashover 5 years ago
I am converting this cloudformation to Terraform:
AppUserCredentials:
Type: AWS::SecretsManager::Secret
Properties:
Name: !Sub "${AWS::StackName}/app-user-credentials"
GenerateSecretString:
SecretStringTemplate: '{"username": "app_user"}'
GenerateStringKey: 'password'
PasswordLength: 16
ExcludePunctuation: true
I am unable to find how I can use the concept of GenerateSecretString with Terraform?
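There is no direct GenerateSecretString equivalent in the AWS provider; the usual substitute (a sketch, with var.stack_name as a placeholder) is random_password from the random provider plus jsonencode:

```hcl
resource "random_password" "app_user" {
  length  = 16
  special = false   # roughly equivalent to ExcludePunctuation: true
}

resource "aws_secretsmanager_secret" "app_user_credentials" {
  name = "${var.stack_name}/app-user-credentials"   # placeholder variable
}

resource "aws_secretsmanager_secret_version" "app_user_credentials" {
  secret_id = aws_secretsmanager_secret.app_user_credentials.id
  # Mirrors SecretStringTemplate + GenerateStringKey from the CFN template.
  secret_string = jsonencode({
    username = "app_user"
    password = random_password.app_user.result
  })
}
```

One caveat worth noting: unlike GenerateSecretString, the generated password ends up in the Terraform state, so the state backend needs to be treated as sensitive.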
Igor Bronovskyiover 5 years ago
Hello.
I need to generate an
.env file from a terraform resource. How can I do it?
Solomon Tekleover 5 years ago
Hello, has anyone configured certificate-based site-to-site VPN in Terraform? I get the following error when I try:
Error: Unsupported argument
on aws.tf line 69, in resource "aws_customer_gateway" "customer_gateway_1":
69: certificate-arn = "arn:aws:acm:us-east-2:894867615160:certificate/e3fc78b9-b946-4b41-8494-b33510aea894"
An argument named "certificate-arn" is not expected here.
Solomon Tekleover 5 years ago
the parameter exists in the command line: create-customer-gateway
--bgp-asn <value>
[--public-ip <value>]
[--certificate-arn <value>]
--type <value>
[--tag-specifications <value>]
[--device-name <value>]
[--dry-run | --no-dry-run]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
Solomon Tekleover 5 years ago
trying to make this work in a testbed BTW
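The likely cause of the error above is the hyphenated name: Terraform argument names use underscores, so it would be certificate_arn (which also requires a reasonably recent AWS provider). A hedged sketch, reusing the ARN from the error and a hypothetical device name:

```hcl
resource "aws_customer_gateway" "customer_gateway_1" {
  bgp_asn = 65000
  type    = "ipsec.1"
  # Underscore, not hyphen: "certificate-arn" is rejected, "certificate_arn"
  # is the supported argument for certificate-based VPN.
  certificate_arn = "arn:aws:acm:us-east-2:894867615160:certificate/e3fc78b9-b946-4b41-8494-b33510aea894"
  device_name     = "testbed-cgw"   # hypothetical
}
```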
MrAtheistover 5 years ago
Anyone know of a tool (that's equivalent to the cloudformation console) to list all the resources for a terraform state? (depth = 1 is fine, and
terraform show/graph IS NOT human readable... 🤦 )
lorenover 5 years ago
terraform state list? or do you want actual resource ids?
charlespogiover 5 years ago(edited)
Error: AccessDenied: User: arn:aws:iam::395290764396:user/sabsab is not authorized to access this resource
status code: 403, request id: 918de6f4-347c-420f-b7af-6a19b9a029a3
on .terraform\modules\elastic_beanstalk_environment.dns_hostname\main.tf line 1, in resource "aws_route53_record" "default":
1: resource "aws_route53_record" "default" {
charlespogiover 5 years ago
what iam permission do i need to add?
lorenover 5 years ago
Tom Vaughanover 5 years ago(edited)
Having an issue with https://github.com/cloudposse/terraform-aws-ecs-container-definition and log configuration. What do I need to set so that Auto-configure Cloudwatch Logs is checked in the container definition in ECS?
log_configuration = {
logDriver = "awslogs"
options = {
"awslogs-group" = "/ecs/ctportal"
"awslogs-region" = var.vpc_region
"awslogs-stream-prefix" = "ecs"
}
secretOptions = []
}
When the task definition is created, the log parameters are set as defined above but the box to Auto-configure CloudWatch Logs is not checked in ECS.
rssover 5 years ago(edited)
v0.13.4
0.13.4 (September 30, 2020)
UPGRADE NOTES:
The built-in vendor (third-party) provisioners, which include habitat, puppet, chef, and salt-masterless are now deprecated and will be removed in a future version of Terraform. More information on Discuss.
Deprecated interpolation-only expressions are detected in more contexts in...
mrwackyover 5 years ago
So...
terraform graph.. As useless as puppet's? Yessir.
Alucasover 5 years ago
so did terraform module chaining get wrecked with 0.13 or am I missing something? We have a resource_group module that creates the group and then the cluster module references that data, but it fails in 0.13 now
Alucasover 5 years ago(edited)
module "resource_group" {
source = "../modules/azure_resource_group"
resource_group_name = "test"
resource_group_location = "westus"
}
module "kubernetes" {
source = "../modules/azure_aks"
cluster_name = var.cluster_name
kubernetes_version = var.kubernetes_version
resource_group_name = module.resource_group.name
}
output:
Error: Error: Resource Group "test" was not found
on ../modules/azure_aks/main.tf line 1, in data "azurerm_resource_group" "rg":
1: data "azurerm_resource_group" "rg" {
seems to work in 0.12 without issue
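A hedged workaround for the error above (a sketch; variable names and the node pool settings are assumptions, argument names per azurerm 2.x): the plan fails because the data lookup runs before the resource group exists, so have the azure_aks module consume the passed-in values directly instead of re-reading them with a data source:

```hcl
# modules/azure_aks/variables.tf -- take the group name and location as inputs
variable "cluster_name" { type = string }
variable "resource_group_name" { type = string }
variable "resource_group_location" { type = string }

# modules/azure_aks/main.tf -- reference the variables directly instead of
# data.azurerm_resource_group.rg, so the dependency flows through the module
# output and Terraform orders creation correctly.
resource "azurerm_kubernetes_cluster" "aks" {
  name                = var.cluster_name
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name
  dns_prefix          = var.cluster_name

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```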