204 messages
Anthony Voutasabout 5 years ago
This is the security group error I have, it’s possible this is more of an AWS question than a terraform one. Why can’t I give ingress access to my redis cluster to a security group in another VPC?
Error: Error authorizing security group rule type ingress: InvalidGroup.NotFound: You have specified two resources that belong to different networks.
status code: 400, request id: cfa5dd4d-55c5-44dc-9c86-c18cd1b079c4
on .terraform/modules/elasticache-redis/main.tf line 22, in resource "aws_security_group_rule" "ingress_security_groups":
22: resource "aws_security_group_rule" "ingress_security_groups" {
michaelssinghabout 5 years ago
Running into the following error when using the terraform-aws-ecs-container-definition module
michaelssinghabout 5 years ago
Error: Variables not allowed
on <value for var.environment> line 1:
(source code not available)
Variables may not be used here.
michaelssinghabout 5 years ago
With a configuration that looks like this
{
name = "SPRING_PROFILES_ACTIVE"
value = "${var.spring_active_profile}"
},
michaelssinghabout 5 years ago
Seems a bit odd to me that this would not be allowed?
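For what it's worth, the `<value for var.environment>` address in that error usually means the value is being parsed from a tfvars file or a -var flag, where `${...}` references aren't allowed. Moving the list into a `.tf` file as a local is one way around it (a sketch; the module source and inputs are assumptions):

```hcl
# Hypothetical sketch: values set in *.tfvars (or via -var) cannot
# contain ${...} references, but locals defined in a .tf file can.
locals {
  container_environment = [
    {
      name  = "SPRING_PROFILES_ACTIVE"
      value = var.spring_active_profile
    },
  ]
}

module "container_definition" {
  # source and inputs are assumptions for illustration
  source          = "cloudposse/ecs-container-definition/aws"
  container_name  = "app"
  container_image = "example/app:latest"
  environment     = local.container_environment
}
```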
Steve Wade (swade1987)about 5 years ago
can anyone recommend an Elasticache module upstream?
Steve Wade (swade1987)about 5 years ago
variable "replicas_per_node_group" {
type = number
default = 0
description = "Required when `cluster_mode_enabled` is set to true. Specify the number of replica nodes in each node group. Valid values are 0 to 5. Changing this number will force a new resource."
}
Steve Wade (swade1987)about 5 years ago
is there a way to do this with validation?
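Terraform 0.13 added `validation` blocks that could express the range check. A sketch; note that a validation condition can only reference its own variable, so the dependency on `cluster_mode_enabled` cannot be checked there:

```hcl
variable "replicas_per_node_group" {
  type        = number
  default     = 0
  description = "Number of replica nodes in each node group. Valid values are 0 to 5."

  # The 0-to-5 range is enforceable; the "required when
  # cluster_mode_enabled is true" rule is not, since a condition may
  # only reference this variable.
  validation {
    condition     = var.replicas_per_node_group >= 0 && var.replicas_per_node_group <= 5
    error_message = "The replicas_per_node_group value must be between 0 and 5."
  }
}
```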
Garethabout 5 years ago(edited)
if you have a map/object and the key name needs to contain ":" character for backwards compatibility with my environment e.g.
"terraform:managed" = string
"terraform:root" = string
Can you? TF currently complains
"Object constructor map keys must be attribute names."
I've tried a variety of escape characters but looks like this is a non-starter. Any ideas please?
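One hedged workaround: the restriction is on `object(...)` attribute names, which must be valid identifiers; a `map(string)` type constraint accepts arbitrary quoted keys:

```hcl
# Sketch: object(...) attribute names must be identifiers, so
# "terraform:managed" cannot appear in an object type constraint.
# A map(string) allows arbitrary quoted keys (values are hypothetical):
variable "labels" {
  type = map(string)
  default = {
    "terraform:managed" = "true"
    "terraform:root"    = "module"
  }
}
```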
btaiabout 5 years ago
those of you that use terraform cloud for your modules i.e.
module "consul" {
source = "app.terraform.io/example-corp/k8s-cluster/azurerm"
version = "1.1.0"
}
how do you test changes to your modules before cutting a new version for it?
is the best approach to just point your reference of the module at the git repo source during local testing and change it back once it's ready?
module "consul" {
source = "git@github.com:example-corp/terraform-azurerm-k8s-cluster.git?ref={new_changes}"
# source = "app.terraform.io/example-corp/k8s-cluster/azurerm"
# version = "1.1.0"
}
Alex Jurkiewiczabout 5 years ago
You can use a local directory as the source
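What Alex describes might look like this while iterating (a sketch; the local path is hypothetical):

```hcl
# Point the module at a local checkout while testing changes, then
# switch back to the registry source (and version pin) for release.
module "consul" {
  source = "../terraform-azurerm-k8s-cluster" # local working copy

  # version is only valid for registry sources, so it stays commented
  # out while the local path is in use:
  # source  = "app.terraform.io/example-corp/k8s-cluster/azurerm"
  # version = "1.1.0"
}
```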
tweetyixabout 5 years ago
We use kitchen-terraform to test each version before releasing it.
Babar Baigabout 5 years ago(edited)
Greetings everyone.
I am using terraform-aws-ecs-container-definition and trying to add volumes using the following code
volumes_from = [
  {
    sourceContainer = "applogs"
    readOnly        = false
  }
]
mount_points = [
{
containerPath = "/app/log"
sourceVolume = "applogs"
}
]
But I am getting the following error
Error: ClientException: Invalid 'volumesFrom' setting. Unknown container: 'applogs'.
on main.tf line 151, in resource "aws_ecs_task_definition" "this":
151: resource "aws_ecs_task_definition" "this" {
Can anyone help me figure out what I am doing wrong here? I was unable to find an example.
According to the link
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html
I think the input is correct but I am unable to figure out the missing piece here. Any help is appreciated.
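For what it's worth, volumes_from mounts volumes from another *container* in the same task, which is why ECS reports Unknown container: 'applogs'. A named volume is instead declared on the task definition itself and mounted via mount_points. A hedged sketch (names, images, and the module output are assumptions):

```hcl
module "container_definition" {
  # source/inputs are assumptions for illustration
  source          = "cloudposse/ecs-container-definition/aws"
  container_name  = "app"
  container_image = "example/app:latest"

  # mount_points refers to a volume declared on the task definition;
  # volumes_from is only for sharing another container's volumes.
  mount_points = [
    {
      containerPath = "/app/log"
      sourceVolume  = "applogs"
    }
  ]
}

resource "aws_ecs_task_definition" "this" {
  family = "app"
  # output name varies by module version; json_map_encoded_list is a guess
  container_definitions = "[${module.container_definition.json_map_encoded_list}]"

  # The task-level volume that mount_points references:
  volume {
    name = "applogs"
  }
}
```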
Steve Wade (swade1987)about 5 years ago
is anyone using a postgres provider to create databases and users?
Steve Wade (swade1987)about 5 years ago
is anyone using https://tf-registry.herokuapp.com/providers/winebarrel/mysql/latest ?
rssabout 5 years ago(edited)
v0.14.0
0.14.0 (December 02, 2020)
NEW FEATURES:
Terraform now supports marking input variables as sensitive, and will propagate that sensitivity through expressions that derive from sensitive input variables.
terraform init will now generate a lock file in the configuration directory which you can check in to your version control so that Terraform can make the same version selections in future. (https://github.com/hashicorp/terraform/issues/26524)
RBabout 5 years ago
finally no more worries about tfstate and changing minor versions
RBabout 5 years ago
i'll wait for some minor updates before moving to tf14 . it looks impressive so far
Babar Baigabout 5 years ago(edited)
Can someone point me to relevant material on writing Terraform code which depicts industry standards. I am struggling to structure my code in a way that it does not have duplication and can be used to deploy in multiple accounts with multiple environments. I explored Terragrunt and it is one of my options that I can use to remove duplication. So simply put I am looking for
1. Industry standard large enterprise code structuring method for Terraform
2. Avoid duplication
For now I simply create a new folder for each new use case. For example
test-org-one-ecs-solution-production
- modules
- test-org-vpc
- main.tf
- rest of the files
- test-org-rds
- main.tf
- ...
- test-org-ecs-app
- main.tf (it has resources defined and it also call Terraform AWS modules to create a complete app solution of ECS for test-org)
- ...
test-org-one-ecs-solution-staging
- copy of above
Whereas the TF states are maintained in S3.
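One common way to avoid the "copy of above" duplication (a sketch; paths and variable names are hypothetical): keep shared modules in one place and drive a single root module with per-environment tfvars files instead of copying the tree per environment.

```hcl
# Hypothetical layout: one root module, parameterized per environment.
# envs/production.tfvars and envs/staging.tfvars differ only in values.
module "ecs_app" {
  source = "./modules/test-org-ecs-app"

  environment = var.environment # "production" or "staging"
}

# terraform apply -var-file=envs/production.tfvars
```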
Jonabout 5 years ago
Good morning, I was curious if anyone knew of a better way to consume this module then how I currently am doing and wouldn't mind sharing. I'm using https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/latest
Basically, my main.tf looks like this:
module "ssm_parameter_store" {
source = "cloudposse/ssm-parameter-store/aws"
version = "0.4.1"
parameter_write = var.parameter_write
kms_arn = data.aws_kms_key.ec2_ami_cmk.arn ## encrypts/decrypts secrets marked as "SecretString"
}
And I am passing in a tfvars for each environment (dev, test, prod):
parameter_write = [
{
name = "/dev/app/us-east-1/path/to/secrets/foo"
value = "abc123"
type = "String"
overwrite = "true"
},
{
name = "/dev/app/us-east-1/path/to/secrets/password"
value = "def456"
type = "SecureString"
overwrite = "true"
}
]
Ameliaabout 5 years ago
I'm also wondering about the beanstalk buckets you're using. They look like the prod beanstalk?
Steve Wade (swade1987)about 5 years ago
does anyone have an example of firing a lambda when a RDS database (not aurora) is created or modified?
Steve Wade (swade1987)about 5 years ago
i am trying to configure one to provision the instance with users and databases on creation
Steve Wade (swade1987)about 5 years ago
i am trying to work out what the event_pattern would look like
Joan Portaabout 5 years ago(edited)
Hi guys! any recommendation of a good tool to import AWS resources in Terraform? Something better than this because I have lots of resources:
terraform import example_thing.foo abc123
Steve Wade (swade1987)about 5 years ago
Does anyone have an opinion on the thread I post here - https://twitter.com/swade1987/status/1334554787711492097?s=21
Shannon Dunnabout 5 years ago
Question regarding upgrade modules to 0.13, we usually dont declare the providers in module, and let them use the provider configuration from the caller terraform. but it seems 0.13 requires something like this in all module repos
terraform {
  required_version = ">= 0.13"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 1.2"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 2.0"
    }
  }
}
Shannon Dunnabout 5 years ago
otherwise it will try and use -/aws instead of hashicorp/aws
Shannon Dunnabout 5 years ago
is this instantiating a new aws provider in the module, or just requiring the root tf have that version?
Shannon Dunnabout 5 years ago
should i always have a block like this in ALL modules with relevant providers
Shannon Dunnabout 5 years ago
is that best practice now?
Shannon Dunnabout 5 years ago
docs are a little unclear if this is only if i want a module local provider
Alex Jurkiewiczabout 5 years ago
The above block is not "required", but generally you should include a minimal version where you specify the source of each required provider. The source is a mapping from friendly name ("aws", "pagerduty") to the Hashicorp registry name ("hashicorp/aws" or "pagerduty/pagerduty").
The recommendation from Hashicorp is that top level configurations specify strict version strings ("~>" or "="), while modules specify minimum version only for providers (">=").
And I recommend in this Slack you thread your messages and post more than one sentence per message :)
Shannon Dunnabout 5 years ago
ahhhhh
Shannon Dunnabout 5 years ago
ah ok
lorenabout 5 years ago
The way it appears to require -/aws is an artifact of the plan for the upgrade to tf 0.13 tfstate. After the upgrade apply, you shouldn't see that anymore
lorenabout 5 years ago
You can use terraform state replace-provider to fix it before the apply if you want. Just be aware that modifies your tfstate and may cause problems if you wanted to keep using the earlier version... https://www.terraform.io/docs/commands/state/replace-provider.html
Jurgenabout 5 years ago
hey team @Brandon Wilson I am going through a bunch of your modules that I use but I am on TF 0.14….
https://github.com/cloudposse/terraform-aws-iam-system-user/pull/38
https://github.com/cloudposse/terraform-aws-dynamodb-autoscaler/pull/27
https://github.com/cloudposse/terraform-aws-route53-cluster-hostname/pull/29
Once these ones are in, i’ll do the next round. Thanks
jonjitsuabout 5 years ago
Has anyone ever used an aws_cloudformation_stack because cloudformation did something better like resource updates?
Yoni Leitersdorf (Indeni Cloudrail)about 5 years ago
Anybody here using the new service_ipv4_cidr field in aws_eks_cluster? Or looking to use it?
Yoni Leitersdorf (Indeni Cloudrail)about 5 years ago
Separate question: we use TFLint and it feels a bit light in the rules that it has. Is there a better tool out there? Are there specific things you normally run into that TFLint doesn’t cover?
lorenabout 5 years ago
terraform is publishing a roadmap... hadn't noticed that before... https://github.com/hashicorp/terraform-provider-aws/blob/master/ROADMAP.md
Garethabout 5 years ago
Good afternoon, is there a quick way to take a map and remove any duplicate values, while maintaining it as a map or creating a new one, as I need the key later on.
I've tried reversing the key and values e.g. making the value become the key. In the hope that the duplicate would then just replace what was there but looks like TF no longer allows that (Might never have allowed it but thought it did).
Equally tried converting it to a list and then running it though distinct, which works in terms of removing the duplicate values but obviously loses the key.
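One hedged approach, assuming string values: group the keys by value with the `...` grouping operator, then keep the first key for each distinct value:

```hcl
locals {
  src = {
    a = "foo"
    b = "bar"
    c = "foo"
  }

  # Group keys by value: { foo = ["a", "c"], bar = ["b"] } ...
  keys_by_value = { for k, v in local.src : v => k... }

  # ... then keep the first key for each distinct value.
  deduped = { for v, ks in local.keys_by_value : ks[0] => v }
}
# local.deduped == { a = "foo", b = "bar" }
```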
Garethabout 5 years ago(edited)
On a different note to the above questions. Can anyone please tell me if it's possible yet to do a for_each within a resource and have it dynamically change regions? Based on https://github.com/hashicorp/terraform/issues/19932
it looks like it's still not possible but I was wondering if anybody has seen a workaround with TF 0.14?
Steve Wade (swade1987)about 5 years ago
any lambda cloudwatch experts able to tell me why my lambda does not fire when my DB gets created ...
resource "aws_cloudwatch_event_rule" "harbor_rds_creation_or_modification_event" {
name = "${var.team_prefix}-${var.environment}-harbor-db-event"
description = "Capture any event related to the ${var.team_prefix}-${var.environment} harbor database."
event_pattern = <<PATTERN
{
"source": [
"aws.rds"
],
"resources": [
"${module.harbor_postgres.database_arn}"
],
"detail-type": [
"RDS DB Instance Event",
"RDS DB Cluster Event"
]
}
PATTERN
}
resource "aws_cloudwatch_event_target" "harbor_rds_creation_or_modification_event" {
rule = aws_cloudwatch_event_rule.harbor_rds_creation_or_modification_event.name
arn = module.harbor_lambda.arn
}
resource "aws_lambda_permission" "harbor_rds_creation_or_modification_event" {
statement_id = "Allow-Harbor-Database-Provisioner-Execution-From-Cloud-Watch-Event"
action = "lambda:InvokeFunction"
function_name = module.harbor_lambda.name
principal = "events.amazonaws.com"
source_arn = aws_cloudwatch_event_rule.harbor_rds_creation_or_modification_event.arn
}
Matt Gowieabout 5 years ago
TIL —
override.tf is a special file in Terraform: https://www.terraform.io/docs/configuration/override.html
Garthabout 5 years ago
Hi All. Question about the use of the https://github.com/cloudposse/terraform-aws-cloudformation-stack module. I'm trying to use some values in the parameters key-value map that are from local variables, e.g.
module "ecs_cloudwatch_prometheus" {
source = "git::https://github.com/cloudposse/terraform-aws-cloudformation-stack.git?ref=tags/0.4.1"
enabled = true
namespace = "eg"
stage = var.env_name
name = "cloudwatch-prometheus"
template_url = "https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/ecs-task-definition-templates/deployment-mode/replica-service/cwagent-prometheus/cloudformation-quickstart/cwagent-ecs-prometheus-metric-for-awsvpc.yaml"
parameters = {
ECSClusterName = "${var.env_name}-ecs-cluster"
CreateIAMRoles = false
ECSLaunchType = "fargate"
SecurityGroupID = "${local.security_group_ids}"
SubnetID = "${local.subnet_ids}"
TaskRoleName = var.env_name == "production" ? "ecs_task_execution_role" : "${var.env_name}_ecs_task_execution_role"
ExecutionRoleName = var.env_name == "production" ? "ecs_role" : "${var.env_name}_ecs_role"
}
capabilities = ["CAPABILITY_IAM"]
}
but I get the error
The given value is not suitable for child module variable "parameters" defined
at .terraform/modules/ecs_cloudwatch_prometheus/variables.tf:71,1-22: element
"SecurityGroupID": string required.
Perhaps I'm just misunderstanding how to use that key-value map. Could someone take a look at my syntax and see if there is an obvious problem? Thank you!
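The module's parameters values must each be strings, and local.security_group_ids / local.subnet_ids are presumably lists. One hedged fix is joining them into comma-delimited strings, which CloudFormation list-typed parameters accept:

```hcl
# Sketch of just the two offending entries inside the parameters map;
# local value names are taken from the message above:
parameters = {
  SecurityGroupID = join(",", local.security_group_ids)
  SubnetID        = join(",", local.subnet_ids)
}
```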
btaiabout 5 years ago
anyone run into cycle errors running terraform destroy with eks + k8s/helm provider? If I remove the helm/kube resources by running terraform apply w/ the helm/k8s resources removed I can subsequently run terraform destroy on the eks cluster/worker. I’m running terraform on 0.12.29 still
Erik Osterman (Cloud Posse)about 5 years ago
Matt Gowieabout 5 years ago
Does anyone do terraform tests against their root modules? I’m assuming no, but if anyone is I’d like to hear your experience.
Matt Gowieabout 5 years ago
From Terraform 0.14 webinar — Lightly confirming we’re getting 1.0 after 0.15?
rssabout 5 years ago
v0.14.1
0.14.1 (December 08, 2020)
ENHANCEMENTS:
backend/remote: When using the enhanced remote backend with commands which locally modify state, verify that the local Terraform version and the configured remote workspace Terraform version are compatible. This prevents accidentally upgrading the remote state to an incompatible version. The check is skipped for commands which do not write state, and can also be disabled by the use of a new command-line flag, -ignore-remote-version.
Yoni Leitersdorf (Indeni Cloudrail)about 5 years ago
Any github actions users starting to get a weird error? (started over the past hour or so)
natalieabout 5 years ago
Hello, very general question ( just curious) if anyone here used/heard about Terraboard (https://github.com/camptocamp/terraboard)? any thoughts you might have?
rssabout 5 years ago
v0.14.2
0.14.2 (December 08, 2020)
BUG FIXES:
backend/remote: Disable the remote backend version compatibility check for workspaces set to use the "latest" pseudo-version. (#27199)
providers/terraform: Disable the remote backend version compatibility check for the terraform_remote_state data source. This check is unnecessary, because the...
Chris Wahlabout 5 years ago
Busy day for TF releases
cabrinhaabout 5 years ago
hello all
cabrinhaabout 5 years ago
I’m having an issue running some TF code from this PR: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/1138
cabrinhaabout 5 years ago
Error: Invalid for_each argument
on ../../../modules/terraform-aws-eks/modules/node_groups/launchtemplate.tf line 2, in data "template_file" "workers_userdata":
2: for_each = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
cabrinhaabout 5 years ago
the other guy says he can run the same code just fine … which is crazy — i tried the same version of TF that he’s on
lorenabout 5 years ago
are you both starting from a blank tfstate?
cabrinhaabout 5 years ago
pretty sure he is — let me check mine
cabrinhaabout 5 years ago
my $ terraform state list is empty
lorenabout 5 years ago
this kind of error is more common from a blank tfstate... double check the other one
cabrinhaabout 5 years ago
how can i get rid of my blank tfstate? 😄
cabrinhaabout 5 years ago
or … more importantly, how do i avoid the error?
lorenabout 5 years ago
once you apply, tfstate will not be empty
cabrinhaabout 5 years ago
apply fails too
lorenabout 5 years ago
the error is telling you, use -target to apply dependent resources that make up your for_each expression
cabrinhaabout 5 years ago
dependent …
cabrinhaabout 5 years ago
i’ll try that
lorenabout 5 years ago
basically, you have something in local.node_groups_expanded making it such that your k value is not known during the plan
lorenabout 5 years ago(edited)
if k is not known in the plan phase, then terraform cannot determine the resource label, and it fails with this error
cabrinhaabout 5 years ago
so terraform apply -target=module.eks.modules.node_groups ?
cabrinhaabout 5 years ago
that ran, applied nothing, still same error
lorenabout 5 years ago
i can't give you the answer, i can only describe the condition under which that error occurs 🙂
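The condition loren describes can be reduced to a small sketch (hypothetical variable and resource names): the for_each map's keys, and anything the `if` filter reads, must be known at plan time, while the values may be unknown.

```hcl
# Hypothetical reduction of the failure mode.
variable "node_groups" {
  type = map(object({
    create_launch_template = bool
  }))
  default = {
    workers = { create_launch_template = true }
  }
}

# Fine: keys and the filter come from plan-time-known variable values.
resource "null_resource" "per_group" {
  for_each = { for k, v in var.node_groups : k => v if v.create_launch_template }
}

# The "cannot be determined until apply" error appears when the filter
# (or the keys) depend on another resource's unknown attributes, e.g.
#   if v.create_launch_template && aws_iam_role.example.id != ""
```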
cabrinhaabout 5 years ago
using more specific targets just results in help message being printed
Alex Jurkiewiczabout 5 years ago
what do you mean by "more specific targets"?
cabrinhaabout 5 years ago
terraform apply -target='module.eks.modules.node_groups.aws_launch_template.workers'
cabrinhaabout 5 years ago
trying to apply this: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/1138/files#diff-88d020257d5b3a657e10667cc97a9f3b0fa430b32bcd72d894d262d9c5351f3aR16
cabrinhaabout 5 years ago
i only have 1 module defined in my main.tf — module.eks — that module contains another, called node_groups
cabrinhaabout 5 years ago(edited)
$ terraform apply -target=‘module.eks.module.node_groups’ results in the same error
Alex Jurkiewiczabout 5 years ago
seems weird. The value of local.node_groups_expanded seems to only depend on variables and static references
cabrinhaabout 5 years ago
yeah … so what the heck lol
cabrinhaabout 5 years ago(edited)
is there something wrong with this syntax?
for_each = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }
lorenabout 5 years ago
syntax looks fine to me
cabrinhaabout 5 years ago
node_groups_expanded also has a for k, v thing going on
cabrinhaabout 5 years ago
can anyone else try running this for me? 😄
Alex Jurkiewiczabout 5 years ago
try opening terraform console and see what value local.node_groups_expanded has
lorenabout 5 years ago
one thing you could try is getting rid of the template_file data source... you ought to be able to use the function templatefile() directly
lorenabout 5 years ago(edited)
user_data = base64encode(templatefile("${path.module}/templates/userdata.sh.tpl", {
  kubelet_extra_args = each.value["kubelet_extra_args"]
}))
lorenabout 5 years ago
template_file is deprecated anyway...
cabrinhaabout 5 years ago
now it’s just complaining about the next block that uses that for_each
cabrinhaabout 5 years ago
Error: Invalid for_each argument
on terraform-aws-eks/modules/node_groups/launchtemplate.tf line 8, in resource "aws_launch_template" "workers":
8: for_each = { for k, v in local.node_groups_expanded : k => v if v["create_launch_template"] }
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
lorenabout 5 years ago
progress!
cabrinhaabout 5 years ago(edited)
“Terraform cannot predict how many instances will be created.”
I feel like we could easily count how many will be created using a function or something
Yoni Leitersdorf (Indeni Cloudrail)about 5 years ago
@loren @cabrinha get a room thread 🙂
sheldonhabout 5 years ago
Is anyone using the GitHub pull request comment feature with terraform cli? Digging the preview in the PR.
I do want to figure out if I could use Github deployment feature in actions to make that work smoother on the final merge and approval review.
Issue is, after merge to master I want the final plan to be approved. Right now I have it trigger a run to approve in Terraform cloud but because the call is synchronous it means unless promptly resolved it will error with timeout.
Erik Osterman (Cloud Posse)about 5 years ago
Yesterday we released our catalog for the full suite of managed AWS Config rules (including those for CIS). https://github.com/cloudposse/terraform-aws-config https://github.com/cloudposse/terraform-aws-config/tree/master/catalog
Mr.Devopsabout 5 years ago
Hi I’m using terraform cloud and have published a private module for my org. Is there any tips on how I can go about using best practice to reference the module in a separate repo I use to only apply variables using the tfe_variables resources?
Mr.Devopsabout 5 years ago
E.g if I commit my changes for all tfe_variables for my workspaces how can I have my repo reference my private module?
Perry Luoabout 5 years ago
I use the AWS Redshift Terraform module, https://github.com/terraform-aws-modules/terraform-aws-redshift. I got the error below. Error: InvalidClusterSubnetGroupStateFault: Vpc associated with db subnet group redshift-subnet-group does not exist. Per document, it says: redshift_subnet_group_name: The name of a cluster subnet group to be associated with this cluster. If not specified, new subnet will be created.
I use the module, terraform-aws-modules/vpc/aws to provision VPC with following subnets:
private_subnets = var.private_subnets
public_subnets = var.public_subnets
database_subnets = var.database_subnets
elasticache_subnets = var.elasticache_subnets
redshift_subnets = var.redshift_subnets
Below is the redshift code:
module "redshift" {
source = "terraform-aws-modules/redshift/aws"
version = "2.7.0"
redshift_subnet_group_name = var.redshift_subnet_group_name
subnets = data.terraform_remote_state.vpc.outputs.redshift_subnets
cluster_identifier = var.cluster_identifier
cluster_database_name = var.cluster_database_name
encrypted = false
cluster_master_password = var.cluster_master_password
cluster_master_username = var.cluster_master_username
cluster_node_type = var.cluster_node_type
cluster_number_of_nodes = var.cluster_number_of_nodes
enhanced_vpc_routing = false
publicly_accessible = true
vpc_security_group_ids = [module.sg.this_security_group_id]
final_snapshot_identifier = var.final_snapshot_identifier
skip_final_snapshot = true
}
The error is gone if I comment out the line,
redshift_subnet_group_name = var.redshift_subnet_group_name
But, why?
Perry Luoabout 5 years ago
I got errors below:
terraform validate
Error: Unsupported block type
on .terraform/modules/elasticsearch/main.tf line 105, in resource "aws_elasticsearch_domain" "default":
105: advanced_security_options {
Blocks of type "advanced_security_options" are not expected here.
Error: Unsupported argument
on .terraform/modules/elasticsearch/main.tf line 139, in resource "aws_elasticsearch_domain" "default":
139: warm_enabled = var.warm_enabled
An argument named "warm_enabled" is not expected here.
Error: Unsupported argument
on .terraform/modules/elasticsearch/main.tf line 140, in resource "aws_elasticsearch_domain" "default":
140: warm_count = var.warm_enabled ? var.warm_count : null
An argument named "warm_count" is not expected here.
Error: Unsupported argument
on .terraform/modules/elasticsearch/main.tf line 141, in resource "aws_elasticsearch_domain" "default":
141: warm_type = var.warm_enabled ? var.warm_type : null
An argument named "warm_type" is not expected here.
[terragrunt] 2020/12/10 14:11:49 Hit multiple errors:
Here is the code:
main.tf:
module "elasticsearch" {
source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"
security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnets
zone_awareness_enabled = var.zone_awareness_enabled
elasticsearch_version = var.elasticsearch_version
instance_type = var.instance_type
instance_count = var.instance_count
encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
dedicated_master_enabled = var.dedicated_master_enabled
create_iam_service_linked_role = var.create_iam_service_linked_role
kibana_subdomain_name = var.kibana_subdomain_name
ebs_volume_size = var.ebs_volume_size
#dns_zone_id = var.dns_zone_id
kibana_hostname_enabled = var.kibana_hostname_enabled
domain_hostname_enabled = var.domain_hostname_enabled
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
context = module.this.context
}
context.tf:
Perry Luoabout 5 years ago
module "this" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.22.0"
enabled = var.enabled
namespace = var.namespace
environment = var.environment
stage = var.stage
name = var.name
delimiter = var.delimiter
attributes = var.attributes
tags = var.tags
additional_tag_map = var.additional_tag_map
label_order = var.label_order
regex_replace_chars = var.regex_replace_chars
id_length_limit = var.id_length_limit
context = var.context
}
# Copy contents of cloudposse/terraform-null-label/variables.tf here
variable "context" {
type = object({
enabled = bool
namespace = string
environment = string
stage = string
name = string
delimiter = string
attributes = list(string)
tags = map(string)
additional_tag_map = map(string)
regex_replace_chars = string
label_order = list(string)
id_length_limit = number
})
default = {
enabled = true
namespace = null
environment = null
stage = null
name = null
delimiter = null
attributes = []
tags = {}
additional_tag_map = {}
regex_replace_chars = null
label_order = []
id_length_limit = null
}
description = <<-EOT
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
EOT
}
variable "enabled" {
type = bool
default = true
description = "Set to false to prevent the module from creating any resources"
}
variable "namespace" {
type = string
default = "dev"
description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}
variable "environment" {
type = string
default = "dev-blue"
description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}
variable "stage" {
type = string
default = "dev-blue"
description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}
variable "name" {
type = string
default = "es-nsm-blue"
description = "Solution name, e.g. 'app' or 'jenkins'"
}
variable "delimiter" {
type = string
default = "-"
description = <<-EOT
Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
EOT
}
variable "attributes" {
type = list(string)
default = []
description = "Additional attributes (e.g. `1`)"
}
variable "tags" {
type = map(string)
default = {}
description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}
variable "additional_tag_map" {
type = map(string)
default = {}
description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}
variable "label_order" {
type = list(string)
default = null
description = <<-EOT
The naming order of the id output and Name tag.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 5 elements, but at least one must be present.
EOT
}
variable "regex_replace_chars" {
type = string
default = null
description = <<-EOT
Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
EOT
}
variable "id_length_limit" {
type = number
default = null
description = <<-EOT
Limit `id` to this many characters.
Set to `0` for unlimited length.
Set to `null` for default, which is `0`.
Does not affect `id_full`.
EOT
}
Perry Luoabout 5 years ago
michaelssinghabout 5 years ago
Is it possible to retrieve the ARN of a specific key in a secret using aws_secretsmanager_secret or the aws_secretsmanager_secret_version data source?
michaelssinghabout 5 years ago
AWS docs say that the ARN to a specific key can be constructed this way
"arn:aws:secretsmanager:region:aws_account_id:secret:example-secret:example-key::"
michaelssinghabout 5 years ago
I’m curious if it’s possible to just reference the ARN via a datasource
michaelssinghabout 5 years ago(edited)
Rather than constructing a string myself, eg:
data "aws_secretsmanager_secret" "this" {
name = var.secrets_manager_secret
}
locals {
example_service_token_secret_arn = data.aws_secretsmanager_secret.this.arn
}
valueFrom = "${local.example_service_token_secret_arn}:example_service_token::"
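As far as I know, neither data source exposes per-key ARNs, so constructing the string seems unavoidable — but it can at least be derived from the data source's ARN rather than hard-coded. A sketch, assuming the JSON key is named `example_service_token`:

```hcl
data "aws_secretsmanager_secret" "this" {
  name = var.secrets_manager_secret
}

locals {
  # Secret ARN plus ":json-key::" suffix, following the documented
  # "arn:...:secret:name:json-key:version-stage:version-id" addressing.
  example_service_token_arn = format(
    "%s:example_service_token::",
    data.aws_secretsmanager_secret.this.arn
  )
}
```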
Milindu Kumarageabout 5 years ago
Hi all, I'm using
cloudposse/terraform-aws-elasticache-redis and am stuck with this error. I have no idea how to move forward. I tried with TF 0.13.5 and am getting a "state snapshot was created by Terraform v0.14.0, which is newer than current v0.13.5" error. How can I resolve this issue?
Bre Gielissenabout 5 years ago
Hi everyone, we use the
cloudposse/terraform-aws-route53-cluster-zone and are seeing a race condition between the creation of the NS record by Terraform and the creation of the NS record by AWS. Has anyone else run into that? Is there a reason not to use the AWS-created NS records? You could achieve management over the resource by doing a data import instead of a creation.
melissa Jennerabout 5 years ago
I provisioned Elasticsearch. I got URL outputs of "domain_endpoint", "domain_hostname", "kibana_endpoint" and "kibana_hostname". But, I cannot hit any of these URLS. I got, "This site can’t be reached". Below is the code:
main.tf:
module "elasticsearch" {
source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"
security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
zone_awareness_enabled = var.zone_awareness_enabled
subnet_ids = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
elasticsearch_version = var.elasticsearch_version
instance_type = var.instance_type
instance_count = var.instance_count
encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
dedicated_master_enabled = var.dedicated_master_enabled
create_iam_service_linked_role = var.create_iam_service_linked_role
kibana_subdomain_name = var.kibana_subdomain_name
ebs_volume_size = var.ebs_volume_size
dns_zone_id = var.dns_zone_id
kibana_hostname_enabled = var.kibana_hostname_enabled
domain_hostname_enabled = var.domain_hostname_enabled
allowed_cidr_blocks = ["0.0.0.0/0"]
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
context = module.this.context
}
terraform.tfvars:
enabled = true
region = "us-west-2"
namespace = "dev"
stage = "abcd"
name = "abcd"
instance_type = "m5.xlarge.elasticsearch"
elasticsearch_version = "7.7"
instance_count = 2
zone_awareness_enabled = true
encrypt_at_rest_enabled = false
dedicated_master_enabled = false
elasticsearch_subdomain_name = "abcd"
kibana_subdomain_name = "abcd"
ebs_volume_size = 250
create_iam_service_linked_role = false
dns_zone_id = "Z08006012JKHYUEROIPAD"
kibana_hostname_enabled = true
domain_hostname_enabled = true
Babar Baigabout 5 years ago(edited)
Hi 👋
I am working on a project where I need to deploy a Ruby application along with the infrastructure (created from Terraform) on ECS. I am using a CircleCI pipeline. The pipeline job creates the infra (RDS, Redis, ECR, and my ECS services and cluster) through the Terraform CLI. Now I have a requirement that whenever the environment variables of the application change, I want to deploy new infrastructure so that each application works separately. The problem I'm facing is that the S3 backend configuration can't be dynamic. If only I could provide the S3 key dynamically, the state file for each application could be maintained separately.
Simply put, my use case is that whenever the CircleCI pipeline is triggered, based on the environment variable, either a new deployment along with the infrastructure is made (if the environment variable file is new) or it simply updates the old infra and deployment.
terraform {
backend "s3" {
encrypt = true
key = "./tfstates/staging/${var.something_dynamic}/ecr/terraform.tfstate"
region = "eu-west-1"
bucket = "mys3bucket"
profile = "default"
}
}
michaelssinghabout 5 years ago
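Backend blocks can't interpolate variables, but the key can be left out of the static configuration and supplied at init time via partial configuration — a sketch, with the `APP_NAME` pipeline variable as an assumption:

```hcl
# backend.tf -- note: no `key` in the static configuration
terraform {
  backend "s3" {
    encrypt = true
    region  = "eu-west-1"
    bucket  = "mys3bucket"
    profile = "default"
  }
}
```

and then in the CircleCI job: `terraform init -backend-config="key=tfstates/staging/${APP_NAME}/ecr/terraform.tfstate"`.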
Consider the following object,
variable "data_sources" {
  type = list(object({
    environment = string
    url         = string
  }))
  description = "An object containing data source URLs per environment"
  default = [
    {
      environment = "beta"
      url         = "jdbc:postgresql://1.1.1.1:5432/db"
    }
  ]
}
I am attempting to retrieve the value of the URL and assign it to a local based on a user-supplied variable called
environment
michaelssinghabout 5 years ago
Digging through the various function documentation for Terraform, there don't appear to be many functions that operate on objects.
michaelssinghabout 5 years ago
Any tips are welcomed.
michaelssinghabout 5 years ago(edited)
locals {
data_source_url = var.environment != null ?
}
This is as far as I have gotten. The idea here is: if it doesn't match any of the values in data_sources.environment, fall back to beta, retrieve the value of url, and assign it to data_source_url
michaelssinghabout 5 years ago
If it does match an environment in the object, retrieve that value and assign it to data_sources.environment
michaelssinghabout 5 years ago
Is the object variable type the most optimal here?
michaelssinghabout 5 years ago
Does this require me creating a map of the data_sources.environment in order to do the comparison?
michaelssinghabout 5 years ago
This is what I came up with
data_source_urls = var.data_source_urls
data_source_keys = [for m in local.data_source_urls : lookup(m, "environment")]
data_source_values = [for m in local.data_source_urls : lookup(m, "url")]
data_source_as_map = zipmap(local.data_source_keys, local.data_source_values)
default_data_source_url = local.data_source_as_map["beta"]
data_source_url = lookup(local.data_source_as_map, var.environment, local.default_data_source_url)
Mikhail Naletovabout 5 years ago(edited)
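If reshaping the variable is an option, a plain `map(string)` avoids the zipmap step entirely — a sketch of the same fallback lookup under that assumption:

```hcl
variable "data_source_urls" {
  type        = map(string)
  description = "Data source URL per environment"
  default = {
    beta = "jdbc:postgresql://1.1.1.1:5432/db"
  }
}

locals {
  # Fall back to the beta URL when the environment has no entry.
  data_source_url = lookup(var.data_source_urls, var.environment, var.data_source_urls["beta"])
}
```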
Hi Cloudposse!
Could someone explain to me why we have a pinned github provider version? This pinned version blocks using
count for the module and modules using this module as a dependency (ecs-codepipeline, ecs-web-app, ecs-service-web-task, etc)
https://github.com/cloudposse/terraform-github-repository-webhooks/blob/master/versions.tf#L5
Jayabout 5 years ago
Hi, I am trying to mask sensitive information from
plan and apply output. I tried a couple of ways:
• using the sensitive keyword, but https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.35.0 is version pinned at required_version = ">= 0.12.0, < 0.14.0" and the sensitive keyword is only available from >= 0.14.0
• tfmask doesn't seem to work with resources or values which are lists (I am trying to mask a helm values variable).
Any suggestions on how to go about this?
PePe Amengualabout 5 years ago(edited)
anyone have seen this before?
120:128: syntax error: A identifier can’t go after this “"”. (-2740)
aws-vault exec cloud-native-dev -- terraform validate
Success! The configuration is valid.
Terraform plan/apply work just fine
Garethabout 5 years ago
Hello, can anyone please help me find the correct syntax of a nested "for" to get this data structure into a for_each block?
test = {
  cms = {
    random_key_name1 = "noreply@domain.com"
  },
  site2 = {
    unknown_key_name1 = "noreply@domain2.com"
    random_name3 = "noreply@domain2.com"
  }
}
The data I wish to use in the resource block is the first key, e.g. cms,
then the second key, e.g. random_key_name1,
then the value associated with random_key_name1, e.g. "noreply@domain.com".
I've been able to do something similar before, but I've always known the names of the keys at the second level; this time the keys could be named anything.
I know I need to do something like this but I just can't find the right configuration.
[for mykey in keys(var.test) : {
  for k, v in var.test[mykey] : k => v }
]
Resource block
resource "aws_ssm_parameter" "mailFromAddress" {
  for_each = { CANT GET THE CORRECT FOR LOOP }
  name = format("/%s/%s", cms, random_key_name1)
  type = "String"
  value = each.value aka "noreply@domain.com"
}
Garethabout 5 years ago(edited)
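One way to approach the nested-for question above is to flatten the two-level map into a single map keyed on both levels, so `for_each` gets exactly one entry per parameter — a sketch, with local and resource names assumed:

```hcl
locals {
  # Flatten { site => { key => address } } into
  # { "site/key" => { site, key, address } } so for_each
  # has a unique, plan-time-known key per parameter.
  mail_from_params = merge([
    for site, entries in var.test : {
      for k, v in entries :
      "${site}/${k}" => { site = site, key = k, address = v }
    }
  ]...)
}

resource "aws_ssm_parameter" "mail_from_address" {
  for_each = local.mail_from_params

  name  = format("/%s/%s", each.value.site, each.value.key)
  type  = "String"
  value = each.value.address
}
```

The `merge([...]...)` form expands the list of per-site maps into arguments for `merge`, which works because the combined keys are unique.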
Anybody know of a written AWS Lambda that can instigate an RDS MSSQL restore from an S3 backup? Looking for something to help bootstrap an MSSQL database in RDS with a baseline db.
Or maybe a way of creating a gold AMI, but for RDS MSSQL?
Or a way of running a stored procedure directly from Terraform?
Ideally all controlled by Terraform.
Erik Osterman (Cloud Posse)about 5 years ago
Remember to join us tomorrow (12/16) at 11:25am PST to learn about TACOS - Terraform Automation and Collaboration Software
We have speakers from:
• HashiCorp Terraform Cloud
• Env0
• Scalr
• Spacelift
https://cloudposse.com/office-hours
Steve Wade (swade1987)about 5 years ago
is there a way to create a lambda that just gets fired once when it gets created?
Steve Wade (swade1987)about 5 years ago
i basically want to create a lambda during the bootstrapping of an AWS account that then never needs to fire again
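One sketch of a fire-once invocation: trigger it from a `null_resource` provisioner, so it runs only when the resource is first created (the `aws_lambda_function.bootstrap` name is hypothetical, and the AWS CLI is assumed to be available where Terraform runs):

```hcl
resource "null_resource" "bootstrap_invoke" {
  # Re-fires only if the function itself is replaced.
  triggers = {
    function_arn = aws_lambda_function.bootstrap.arn
  }

  # Invoke once, at create time; discard the response payload.
  provisioner "local-exec" {
    command = "aws lambda invoke --function-name ${aws_lambda_function.bootstrap.function_name} /dev/null"
  }
}
```

The `aws_lambda_invocation` data source is an alternative, but data sources are read on every refresh, so it would re-invoke the function far more than once.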
David Knellabout 5 years ago
I have a strange issue that I hope someone can point me in the right direction. I inadvertently updated to tf 0.14.2 (new laptop, homebrew, and being dumb). Anyway, needless to say all my tf states are in a bad, uhhh, state, which AFAIK cannot be reverted back to 0.13.x. I am using some of the cloudposse tf modules and a lot of them understandably are not ready for 0.14.x. So what does one do in a situation such as this? Naturally, fork every cloudposse terraform repo and hacking up the code until every last tf error is gone. Mission accomplished! but.... my tf plan now says that it wants to replace most of my resources because the name changed (from cloudposse/label/null). It seems that the attributes ordering is different now.
i.e.
~ name = "dt-prod-api-ecs-exec" -> "dt-prod-api-exec-ecs" # forces replacement
So I have 2 questions:
1. Does anyone know a way to revert a state back to 0.13.x?
2. Is this attribute re-ordering situation something that anyone has encountered?
Barak Schosterabout 5 years ago(edited)
Howdy y’all. I know there are some checkov users here.
One of the recent updates added the ability to run terraform plan analysis.
So now it supports both static and dynamic analysis of terraform.
More about it:
https://www.checkov.io/2.Concepts/Evaluate%20Terraform%20Plan.html
https://bridgecrew.io/blog/terraform-plan-security-scanning-checkov/
Alex Jurkiewiczabout 5 years ago
You can't modify
lifecycle { ignore_changes } data with dynamic data. I have a module I'd like to consume in multiple places, but with different ignore_changes configuration for an internal resource -- depending on the consumer.
The only way I can think to do this is duplicating the resource with different lifecycle configuration and having a condition define which copy of the resource is actually created.
But this is really ugly. Anyone have a better idea?
lorenabout 5 years ago
i'm pretty excited about this experiment... complex objects with optional attributes and default values! https://www.terraform.io/docs/configuration/functions/defaults.html
Lukasz Kabout 5 years ago
Hi guys, has anyone managed to schedule a daily cleanup job in Terraform Cloud? I have a workspace on which I would like to execute
terraform destroy as a daily job
Steve Wade (swade1987)about 5 years ago
does anyone have a recommended example to add some xml to an existing xml via a bash script?
Matt Gowieabout 5 years ago
Anyone using the Kubernetes provider with EKS + Terraform Cloud? Any direct path to success for configuring the provider?
rssabout 5 years ago(edited)
v0.14.3
0.14.3 (December 17, 2020)
ENHANCEMENTS:
terraform output: Now supports a new "raw" mode, activated by the -raw option, for printing out the raw string representation of a particular output value. (#27212)
Only primitive-typed values have a string representation, so this formatting mode is not compatible with complex types. The...
Austin Lovelessabout 5 years ago
Hey all! I'm using the terraform-aws-rds-cluster module and am trying to setup a secondary(replica) of my primary cluster, but I'm running into an issue with the secondary cluster. 🧵
Arjun Venkateshabout 5 years ago
Heads up if you are attempting to apply a terraform repo using helm https://github.com/hashicorp/terraform-provider-helm/issues/645
lorenabout 5 years ago(edited)
anyone happen to know any magic for generating a random string that can be used in a for_each expression in the same state, without using
-target? i was trying to be cute with try() but no love...
locals {
id = substr(uuid(),0,8)
}
resource null_resource id {
triggers = {
id = local.id
}
lifecycle {
ignore_changes = [
triggers,
]
}
}
resource null_resource this {
for_each = try(toset([null_resource.id.triggers.id]), toset([local.id]))
}
Joe Nilandabout 5 years ago(edited)
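It seems the constraint is only on the keys: `for_each` keys must be known at plan time, while values may be unknown. So one workaround — which sidesteps rather than solves using the random string as the key itself — is to key the map statically and carry the random value in the body. A sketch using the `random` provider:

```hcl
resource "random_id" "suffix" {
  byte_length = 4
}

resource "null_resource" "this" {
  # The key ("primary") is static and known at plan time;
  # the unknown random value appears only in the body.
  for_each = toset(["primary"])

  triggers = {
    id = random_id.suffix.hex
  }
}
```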
Is there a way to create a subset map from another map based on conditions? The map has elements with mixed data types, but I got around that by using a type of
any.
For example, with the map below, is there a way to remove the check-name key completely by checking if it equals
[]?
I've tried various things with for loops, e.g.
locals {
  cleaned_pattern = {
    for label, value in var.cloudwatch_event_rule_pattern :
    label => value if coalesce(value) != null
  }
}
Output is unchanged.
+ event_pattern = jsonencode(
{
+ check-name = []
+ detail = {
+ status = [
+ "ERROR",
+ "WARN",
]
}
+ detail-type = [
+ "Trusted Advisor Check Item Refresh Notification",
]
+ source = [
+ "aws.trustedadvisor",
]
}
)
Pierre-Yvesabout 5 years ago(edited)
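On the filtering question above: `coalesce(value)` never yields null for a non-null argument, so that filter keeps everything, including `check-name = []`. One hedged sketch is to test for emptiness explicitly, wrapping `length()` in `try()` since the values are mixed types under `any`:

```hcl
locals {
  cleaned_pattern = {
    for label, value in var.cloudwatch_event_rule_pattern :
    label => value
    # Drop nulls and empty collections such as check-name = [];
    # try() falls back to 1 for values length() doesn't accept,
    # so those entries are kept.
    if value != null && try(length(value), 1) > 0
  }
}
```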
Hello,
I have recreated my private cluster, and it tries to start creating namespaces before the cluster is created, despite the depends_on on azurerm_kubernetes_cluster.
Do you have any input on how to do it one by one?
resource "kubernetes_namespace" "terra_test_namespace" {
  ...
  depends_on = [azurerm_kubernetes_cluster.kube_infra, var.vnet_subnet_id]
}
I have found the same error on aws_eks with some tricks to fix it. I have tried them but they don't work for now (terraform plan fails saying the cluster is not available). Can you give me some guidelines on how to solve this?
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/943
Garethabout 5 years ago
Hi, anybody know if it's possible, and probably more importantly advisable, to have the output of a Lambda function as the input of a data source?
Use case:
I need to generate machine keys for IIS, but the only way I've found to do this is via PowerShell. I don't believe I can use a local PowerShell provider, as not all the members of my team run on Windows machines, which would create a dependency on installing PowerShell Core etc. The same could be said for the Jenkins pipelines.
So I was thinking a Lambda could generate the keys and a data source could read them in.
Side note: I know I could generate and inject the machine keys as part of the build process, but for historical reasons we've extracted security items from the build process and re-inject them at build time.
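The AWS provider has a data source for exactly this, `aws_lambda_invocation`, which invokes the function during plan/refresh and exposes its JSON response. A sketch, with the function name and payload as assumptions:

```hcl
data "aws_lambda_invocation" "machine_keys" {
  function_name = "generate-iis-machine-keys" # hypothetical function

  input = jsonencode({
    site = "cms"
  })
}

locals {
  # `result` is the raw JSON response returned by the function.
  machine_keys = jsondecode(data.aws_lambda_invocation.machine_keys.result)
}
```

One caveat: because it is a data source, it re-invokes on every refresh, so the function should be idempotent (e.g. generate keys once and return the stored ones thereafter).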
Christianabout 5 years ago(edited)
Is it possible to output state from resources created by child modules that are not declared as outputs in the child module?
For example, I would like to output the DB username when using the
cloudposse/rds/aws module.
The values are available in the state, as demonstrated by
terraform state show 'module.rds_instance.aws_db_instance.default[0]'
However, an output rule like the following does not work…
output "aws_db_instance" {
  value = module.rds_instance.aws_db_instance
}
It results in an error:
An output value with the name "aws_db_instance" has not been declared in
module.rds_instance.
Jon Bevanabout 5 years ago
Hi, I’m trying to use https://github.com/cloudposse/terraform-aws-dynamodb v0.23.0 with terraform 0.13 but I’m getting this output from terraform plan:
Error: Invalid count argument
on .terraform/modules/dynamodb_table.dynamodb_autoscaler/main.tf line 92, in resource "aws_appautoscaling_target" "read_target_index":
92: count = var.enabled ? length(var.dynamodb_indexes) : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
which I kinda understand, but having to run
plan -target seems a bit hacky…
seems to have been reported here https://github.com/cloudposse/terraform-aws-dynamodb/issues/70 too
Rhys Daviesabout 5 years ago
Hi all - really enjoyed the recent cast on TACOS and I'm really interested in not having to manage my own Terraform or create the governance that I want around our infra on my own. Basically (and I understand that this is a really broad question, whose answer I expect to differ between Terraform Cloud, Env0, Scalr, and Spacelift) I would like to ask how you transition your self-hosted Terraform solution to one of these SaaS providers without downtime and, maybe more importantly, how your previous small-team customers have driven buy-in from their wider org that this stuff is really important (please don't sell me on it, I know it's critical)
Rhys Daviesabout 5 years ago(edited)
Even if I don't get a reply here, fascinating stream, really enjoyed and will be tuning in for the next one
Christosabout 5 years ago(edited)
Hey, morning everyone! 👋
I have a short question.
I define my local modules in the
modules directory. Can modules in that directory have a reference to another module from GitHub, for instance?
Hi, yes that's possible.
Christosabout 5 years ago(edited)
Alright. Cool. You got any examples of where this is implemented? Like some GitHub repo? I am getting this warning saying that “the module cannot be found in the directory”.
tweetyixabout 5 years ago
This should show it.
Steve Wade (swade1987)about 5 years ago
Can anyone help with this issue please ...
terraform {
  backend "s3" {
    ...
  }
  required_version = "= 0.13.4"
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 3.9.0"
    }
    fugue = {
      source = "fugue/fugue"
      version = "0.0.1"
    }
  }
}
provider "fugue" {
  client_id = var.fugue_client_id
  client_secret = var.fugue_client_secret
}
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/aws v3.9.0
- Finding fugue/fugue versions matching "0.0.1"...
- Finding latest version of hashicorp/fugue...
- Installing fugue/fugue v0.0.1...
- Installed fugue/fugue v0.0.1 (self-signed, key ID B14956EDEF9DD1A2)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html
Error: Failed to install provider
Error while installing hashicorp/fugue: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/fugue
Why is it trying to find
hashicorp/fugue ?
Joan Portaabout 5 years ago
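A likely cause of the `hashicorp/fugue` lookup above (worth checking rather than taking as definitive): on Terraform 0.13, any module — or any other part of the configuration — that uses the `fugue` provider without its own `required_providers` entry makes Terraform assume the default `hashicorp/` namespace for it. Each module that uses a non-HashiCorp provider needs the mapping:

```hcl
# Inside every module that declares fugue resources or data sources:
terraform {
  required_providers {
    fugue = {
      source  = "fugue/fugue"
      version = "0.0.1"
    }
  }
}
```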
Hi! I have a bunch of variables, 200 of them, in AWS Parameter Store, all in the same path. Is there any way to create a kind of loop and get all of them in Terraform instead of going one by one?
Laurynasabout 5 years ago
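For reading a whole path at once, the AWS provider offers the `aws_ssm_parameters_by_path` data source — a sketch, with the path name as an assumption:

```hcl
data "aws_ssm_parameters_by_path" "app" {
  path            = "/myapp/config" # hypothetical path
  with_decryption = true
}

locals {
  # Pair up the parallel `names` and `values` lists into one map.
  app_params = zipmap(
    data.aws_ssm_parameters_by_path.app.names,
    data.aws_ssm_parameters_by_path.app.values
  )
}
```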
I use terraform for a scheduled lambda function with cloudwatch. It worked perfectly 5 months ago but now I came back to the code and when I run terraform plan with no changes I get this:
Error: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListTargetsByRuleInput.EventBusName.
What does it even mean? I don't even use a target bus, I use
resource "aws_cloudwatch_event_target" "start_alarms" {
Laurynasabout 5 years ago
Turns out it's an issue with the terraform AWS provider version. After updating to the latest it works. How do you deal with versioning of the aws provider? Do you have it specified like
version = "3.14.1" ?
Alex Munteanabout 5 years ago(edited)
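One common middle ground for the versioning question above is a pessimistic constraint rather than an exact pin — a sketch:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # "~> 3.14.1" allows 3.14.x patch releases but not 3.15,
      # so behavior changes arrive only via a deliberate bump.
      version = "~> 3.14.1"
    }
  }
}
```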
Hi!
I built a module which uses the restapi provider to create kibana spaces/roles and users. The module needs two configurations for the restapi provider, and I am going to use provider aliases to pass the configuration from the root module.
provider "restapi" {
alias = "kibana"
uri = "https://X.X.X.X:5601"
username = "user"
password = "pass"
insecure = true
headers = {
"kbn-xsrf" = "true"
}
write_returns_object = true
}
provider "restapi" {
alias = "elastic"
uri = "https://X.X.X.X:9200"
username = "user"
password = "pass"
insecure = true
write_returns_object = true
}
The module will create the space/roles and user in one instance of kibana, and I need to configure 3 instances of kibana, which means I will need to define 6 provider configurations in the root module.
What would be the best practices for this situation ?
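One option is to wire each instance's pair through the `providers` meta-argument, so the root only maps aliases onto module instances — a sketch for one of the three kibana instances (alias names are assumptions, and the module is assumed to declare matching `kibana` and `elastic` aliases, via empty proxy provider blocks on Terraform 0.13 or `configuration_aliases` on 0.15+):

```hcl
provider "restapi" {
  alias = "kibana_a"
  uri   = "https://kibana-a.example.com:5601" # hypothetical endpoint
  # ... credentials, headers, etc.
}

provider "restapi" {
  alias = "elastic_a"
  uri   = "https://elastic-a.example.com:9200" # hypothetical endpoint
  # ... credentials, etc.
}

module "kibana_a" {
  source = "./modules/kibana-space" # hypothetical module path

  # Map the module's two expected aliases onto this instance's pair.
  providers = {
    restapi.kibana  = restapi.kibana_a
    restapi.elastic = restapi.elastic_a
  }
}
```

The six provider blocks still exist in the root, but each module instance stays identical apart from its `providers` map.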
Prasanth Komminiabout 5 years ago
Hi Team,
Prasanth Komminiabout 5 years ago
Initializing modules...
Error: Unsupported Terraform Core version
on .terraform/modules/ec2-bastion-server.dns.this/versions.tf line 2, in terraform:
2: required_version = ">= 0.12.0, < 0.14.0"
Module module.ec2-bastion-server.module.dns.module.this (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2)
does not support Terraform version 0.14.2. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
Prasanth Komminiabout 5 years ago
I get the above error message while using this module
Joe Curtisabout 5 years ago
Hello all
Joe Curtisabout 5 years ago
Have some questions on a Terraform and best practices
Tim Birkettabout 5 years ago(edited)
Can't answer questions that you don't ask @Joe Curtis 😉
Joe Curtisabout 5 years ago
Hey, wanted to see if people were here first
Joe Curtisabout 5 years ago(edited)
Basically new to Terraform and want to send people to different sites depending on whether they are on the test network behind a VPN; failing the resources being there, check the public site. Can I use failover routing to solve this or is it better to use a lambda?
Emily Melhuishabout 5 years ago
Hiya peeps! Has anyone used the
terraform-aws-elastic-beanstalk-environment module and attached an RDS instance before? I have set the relevant aws:rds:dbinstance namespace values and put them in the additional_options, but it doesn't appear to be creating the database when I look at the environment Configuration in the AWS console - is there something else I need to do to get the module to create the link? (Note this is for an RDS instance attached to the environment itself - not a separate RDS instance; this is only for an internal tool, not production)
Prasanth Komminiabout 5 years ago
Hi Team, having the following issue: https://github.com/cloudposse/terraform-aws-ec2-bastion-server/issues/52
Prasanth Komminiabout 5 years ago
Error: expected length of name to be in the range (1 - 64), got
on .terraform/modules/bastion/main.tf line 9, in resource "aws_iam_role" "default":
9: name = module.this.id
Prasanth Komminiabout 5 years ago
on this module: https://github.com/cloudposse/terraform-aws-ec2-bastion-server
Prasanth Komminiabout 5 years ago
Would you happen to have any idea what could be the issue here?
Prasanth Komminiabout 5 years ago
I tried different values for
id_length_limit
Prasanth Komminiabout 5 years ago
I tried 0, 5 and default null
Prasanth Komminiabout 5 years ago
but always getting the same eror
Prasanth Komminiabout 5 years ago
I seem to have fixed it/
Prasanth Komminiabout 5 years ago
Need to pass
enabled = true 😞
Prasanth Komminiabout 5 years ago
along with a non-default value for
id_length_limit
Prasanth Komminiabout 5 years ago
and a
name
raviabout 5 years ago
Hi All
R
raviabout 5 years ago
I have written Terraform modules for creating an EKS cluster and EKS node groups. Everything is running as expected, but the EC2 instances under the node groups do not have a name.
raviabout 5 years ago
Any help
Steffanabout 5 years ago
Hi guys, I hope to get some advice on this.
So I am trying to create an extra db on an existing aws_db_instance cluster so that my applications on Fargate can connect to it. However I keep getting a connection timed out error during creation.
Wondering if anyone has had this kind of encounter. How did you go about it?
My config looks like this:
# Create a database server
resource "aws_db_instance" "default" {
  engine         = "mysql"
  engine_version = "5.6.17"
  instance_class = "db.t1.micro"
  name           = "initial_db"
  username       = "rootuser"
  password       = "rootpasswd"
  # etc, etc; see aws_db_instance docs for more
}

# Configure the MySQL provider based on the outcome of
# creating the aws_db_instance.
provider "mysql" {
  endpoint = "${aws_db_instance.default.endpoint}"
  username = "${aws_db_instance.default.username}"
  password = "${aws_db_instance.default.password}"
}

# Create a second database, in addition to the "initial_db" created
# by the aws_db_instance resource above.
resource "mysql_database" "app" {
  name = "another_db"
}
Matt Gowieabout 5 years ago
Does anyone know of a way for Terraform Cloud to connect to internal AWS resources without using the business tier hosted Terraform agents? I have a database root module where I manage multiple RDS DB instances and Amazon MQ vhosts. I’d like to make that as an automated Terraform Cloud workspace, but right now I manage accessing those private resources via port forwarding into a bastion host on the applier’s machine, which obviously isn’t possible for the TFC workspace.
Miguel Zablahabout 5 years ago
Hi all! I'm new here, but I wanted to ask: what do you use to test your terraform modules? I have created some, but I'm looking for options on how to set up testing and maybe linting?
Hope all of you have a great christmas 🎄
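For the linting half specifically, tflint is a common choice and is itself configured in HCL. A minimal .tflint.hcl sketch (the rule names are real tflint rules, but the plugin block syntax and version are tflint-version-dependent, so treat them as assumptions):

```hcl
# .tflint.hcl — minimal linting setup for a module repo
plugin "aws" {
  enabled = true
  version = "0.21.1" # illustrative version
  source  = "github.com/terraform-linters/tflint-ruleset-aws"
}

rule "terraform_unused_declarations" {
  enabled = true
}

rule "terraform_naming_convention" {
  enabled = true
}
```

For behavioral testing (actually applying the module against a sandbox account), terratest is the usual companion tool.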
Hao Wangabout 5 years ago
a quick question, can the ALB module support terraform 0.14?
Hao Wangabout 5 years ago(edited)
right now the versions file has >= 0.12:
terraform {
  required_version = ">= 0.12.0"
  required_providers {
    aws      = ">= 2.0"
    template = ">= 2.0"
    null     = ">= 2.0"
    local    = ">= 1.3"
  }
}
Austin Lovelessabout 5 years ago
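A ">= 0.12.0" constraint with no upper bound already permits 0.14, so the file as shown shouldn't block it by itself. If anything needs updating, it's usually moving to the 0.13+ required_providers syntax with explicit sources (a sketch; 0.12.26 is the earliest release that understands the source attribute):

```hcl
terraform {
  required_version = ">= 0.12.26"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.0"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 2.0"
    }
  }
}
```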
I'm working with the "terraform-aws-eks-node-group" module https://github.com/cloudposse/terraform-aws-eks-node-group, and am having issues adding user data. I followed examples in the repo:
before_cluster_joining_userdata = var.before_cluster_joining_userdata
When I run a terraform plan I'm getting: An argument named "before_cluster_joining_userdata" is not expected here.
I'm using terraform version 0.13.2. Has anyone else had this problem?
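An "argument not expected" error from a module usually means the pinned module version predates that input; pinning to a release whose variables.tf declares it is the usual fix. A sketch (whether 0.9.0 is the right tag should be checked against the module's changelog):

```hcl
module "eks_node_group" {
  # Pin to a release that declares before_cluster_joining_userdata
  # in its variables.tf; the tag below may need adjusting.
  source = "git::https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.9.0"

  before_cluster_joining_userdata = var.before_cluster_joining_userdata

  # ... other required inputs (cluster_name, subnet_ids, etc.)
}
```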
raviabout 5 years ago
I created security groups for the eks cluster and eks nodes, and also an ingress rule adding an inbound rule on port 443 to the eks cluster. It works fine, but if I run plan or apply a second time, the previously added ingress rule gets deleted and is not added back. When I run it a third time it creates the ingress rule again, and the next plan/apply deletes it again, and so on. Any idea why it is behaving like that?
resource "aws_security_group" "eks_cluster_sg" {
  name        = "${var.generictag}-${var.env}-scg-ekscls"
  description = "The eks cluster master security group"
  vpc_id      = "${var.vpc}"

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    description     = "Allow the inbound connection to VPC CIDR"
    security_groups = ["${aws_security_group.bastion.id}"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = "${merge(
    var.tags,
    map(
      "Name", "${var.generictag}-${var.env}-scg-ekscls"
    )
  )}"
}

resource "aws_security_group" "eks_node_security_group" {
  name        = "${var.generictag}-${var.env}-scg-eks-node"
  description = "The eks node security group"
  vpc_id      = "${var.vpc}"

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    description     = "Allow the inbound connection from bastion to eks nodes"
    security_groups = ["${aws_security_group.bastion.id}"]
  }

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    description     = "Allow the inbound connection from eks control plane to eks nodes"
    security_groups = ["${aws_security_group.eks_cluster_sg.id}"]
  }

  ingress {
    from_port       = 10250
    to_port         = 10250
    protocol        = "tcp"
    description     = "Allow the inbound from eks control plane to eks nodes for internal connectivity"
    security_groups = ["${aws_security_group.eks_cluster_sg.id}"]
  }

  ingress {
    from_port   = 1025
    to_port     = 65535
    protocol    = "tcp"
    description = "Allow the inbound connection port range of eks nodes to itself"
    self        = "true"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = "${merge(
    var.tags,
    map(
      "Name", "${var.generictag}-${var.env}-scg-eks-node"
    )
  )}"
}

resource "aws_security_group_rule" "eks_cluster-to-eks_worker_node" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  description              = "Allow Inbound rule in eks cluster to eks nodes"
  security_group_id        = "${aws_security_group.eks_cluster_sg.id}"
  source_security_group_id = "${aws_security_group.eks_node_security_group.id}"
  depends_on               = [aws_security_group.eks_node_security_group]

  lifecycle {
    create_before_destroy = "true"
  }
}
raviabout 5 years ago
output of plan:
# module.sg.aws_security_group.eks_cluster_sg will be updated in-place
~ resource "aws_security_group" "eks_cluster_sg" {
id = "sg-09b067394f3f35d99"
~ ingress = [
- {
- cidr_blocks = []
- description = ""
- from_port = 443
- ipv6_cidr_blocks = []
- prefix_list_ids = []
- protocol = "tcp"
- security_groups = [
- "sg-07e3a22f09247060c",
]
- self = false
- to_port = 443
},
# (1 unchanged element hidden)
]
name = "a0266d-prd-scg-ekscls"
tags = {
"Environment" = "prd"
"Name" = "a0266d-prd-scg-ekscls"
"Projectcode" = "a0266d"
"Terraformed" = "true"
}
# (6 unchanged attributes hidden)
}
# module.sg.aws_security_group_rule.eks_cluster-to-eks_worker_node will be updated in-place
~ resource "aws_security_group_rule" "eks_cluster-to-eks_worker_node" {
+ description = "Allow Inbound rule in eks cluster to eks nodes"
id = "sgrule-1645696446"
# (10 unchanged attributes hidden)
}
Plan: 0 to add, 4 to change, 0 to destroy.
Hao Wangabout 5 years ago
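The flapping in the plan above is the classic conflict between inline ingress/egress blocks on aws_security_group and standalone aws_security_group_rule resources managing the same group: the resource with inline rules considers itself authoritative for all rules, so each apply one of the two "corrects" the other. The fix is to pick one style per group, typically moving every rule into aws_security_group_rule. A sketch mirroring the cluster SG from the config above:

```hcl
# Sketch: keep aws_security_group free of inline rules so it never
# fights the standalone rule resources over the same group.
resource "aws_security_group" "eks_cluster_sg" {
  name        = "${var.generictag}-${var.env}-scg-ekscls"
  description = "The eks cluster master security group"
  vpc_id      = var.vpc
}

resource "aws_security_group_rule" "cluster_from_bastion" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks_cluster_sg.id
  source_security_group_id = aws_security_group.bastion.id
}

resource "aws_security_group_rule" "cluster_egress_all" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.eks_cluster_sg.id
}
```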
hey, I ran into an issue and it may be an easy fix. When I use both terraform-aws-ecs-alb-service-task and rds together, they both create a security group with the same name, so I got an error message like: Error creating Security Group: InvalidGroup.Duplicate: The security group 'eg-test-test' already exists for VPC 'vpc-0a4474b6d776a7b74'
Hao Wangabout 5 years ago
I did some research and found both modules call cloudposse/label/null and create the SG with the same ID
Hao Wangabout 5 years ago
how could I use a different name for different module?
Hao Wangabout 5 years ago
let me try passing attributes to rds
PePe Amengualabout 5 years ago
you need to add something to the name, or add an attribute or something to make it different.
Hao Wangabout 5 years ago
got it, thanks 👍️
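With cloudposse's null-label, any of the namespace/stage/name/attributes inputs feeding the generated ID can disambiguate the two modules; a sketch using attributes (module sources and values are illustrative):

```hcl
module "rds" {
  source = "cloudposse/rds/aws" # illustrative source

  namespace  = "eg"
  stage      = "test"
  name       = "test"
  attributes = ["rds"] # yields an ID like "eg-test-test-rds"
  # ... other inputs
}

module "ecs_alb_service_task" {
  source = "cloudposse/ecs-alb-service-task/aws" # illustrative source

  namespace  = "eg"
  stage      = "test"
  name       = "test"
  attributes = ["ecs"] # yields "eg-test-test-ecs", avoiding the clash
  # ... other inputs
}
```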
Hao Wangabout 5 years ago
for vpc module, how can I use custom security group rules?
Hao Wangabout 5 years ago
enable_default_security_group_with_custom_rules is a flag and enabled by default
Matt Gowieabout 5 years ago
This is pretty weak — terraform cloud does not support refresh: https://github.com/hashicorp/terraform/issues/23247
Ryan Rykeabout 5 years ago
some basic updates to the cloudtrail s3 bucket module... also Happy NYE and NYD to everyone in here 🙂
Yoni Leitersdorf (Indeni Cloudrail)about 5 years ago
Anyone has any experience with blue/green deployments with EC2 using launch templates and ASG? Basically, I’m trying to update the launch template and launch the new EC2 before spinning down the old one. Looks like I need to duplicate the templates and ASGs and use a script to fail over, like these guys did: https://github.com/skyscrapers/terraform-bluegreen/blob/master/bluegreen.py
I was just hoping not to need to do that…
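One way to get this without duplicated ASGs and a failover script is to tie the ASG's name to the launch template version and set create_before_destroy, so Terraform builds and health-checks the replacement ASG before tearing down the old one. A sketch of the pattern (not the linked repo's approach; resource names and sizes are illustrative):

```hcl
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "app" {
  # Embedding latest_version in the name forces a new ASG whenever
  # the template changes; create_before_destroy makes the new ASG
  # come up before the old one is destroyed.
  name             = "app-${aws_launch_template.app.latest_version}"
  min_size         = 1
  max_size         = 3
  desired_capacity = 1

  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.app.id
    version = aws_launch_template.app.latest_version
  }

  min_elb_capacity = 1 # wait for healthy instances before the swap completes

  lifecycle {
    create_before_destroy = true
  }
}
```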
Austin Lovelessabout 5 years ago
Anyone have experience adding before_cluster_joining_userdata to an eks_node_group https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.9.0?
I want to add user data to my worker nodes without any downtime. Is this possible to do with this?
Jeff Everettabout 5 years ago(edited)
I think I may have hit a bug on the ACM certificate module. Any time I try and specify subject alternative names (even using the example code from the readme), I'm getting these errors. Environment and other details in thread.
https://github.com/cloudposse/terraform-aws-acm-request-certificate
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 31, in resource "aws_route53_record" "default":
31: name = lookup(local.domain_validation_options_list[count.index], "resource_record_name")
|----------------
| count.index is 1
| local.domain_validation_options_list is empty list of dynamic
The given key does not identify an element in this collection value.
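This matches the known breakage when the AWS provider 3.x changed aws_acm_certificate.domain_validation_options from a list to a set: module code that indexes it with count gets an empty or invalid collection. Until a module release handles v3-style iteration, pinning the provider below 3.0 is a common workaround (a sketch; whether a newer module tag already fixes this should be checked against its releases):

```hcl
terraform {
  required_providers {
    # domain_validation_options became a set in hashicorp/aws v3.0,
    # breaking index-based lookups in older modules; pin below 3.0
    # (or upgrade the module) to avoid the "Invalid index" error.
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70"
    }
  }
}
```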