100 messages
Michael Dizon over 4 years ago
having a weird issue setting up
sso with iam-primary-roles, after authenticating with google workspace, leapp opens the aws console. i'm not sure where the misconfiguration is, but my user isn't getting the arn:aws:iam::XXXXXXXXXXXX:role/xyz-gbl-identity-admin role assignment. i'm also not sure if i'm supposed to use the idp from the root account or from the identity account. any help is appreciated! 😭
Michael Dizon over 4 years ago
thank you @Andrea Cavagna for the quick assist! 🙌
OliverS over 4 years ago
On the topic of version tracking of iac, such that only resources in the plan get the new tag: I found that, amazingly, it should be possible to do with https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging#ignoring-changes-in-all-resources. I'm going to try this:
locals {
  iac_version = ...get git short hash...
}

provider "aws" {
  ...
  default_tags {
    tags = {
      IAC_Version = local.iac_version
    }
  }
  ignore_tags {
    keys = ["IAC_Version"]
  }
}
curious deviant over 4 years ago
Hello !
I am maintaining state in S3 and using DynamoDB for state locking. I had to make a manual change to the state file. I successfully uploaded the updated state file. But running any tf command errors out now due to the md5 digest of the newly uploaded file not matching the entry in the DynamoDB table. Looks like the solution is to update the digest manually in the table entry corresponding to the backend. Just wanted to be sure that there isn't indeed another way to have terraform regenerate/repopulate DynamoDB with the updated md5.
Tom Vaughan over 4 years ago (edited)
I am using the tfstate-backend module and noticed some odd behavior. This is only when using a single S3 bucket to hold multiple state files. For example, the bucket is named tf-state, the state file for the VPC would be in tf-state/vpc, and the RDS state file would be in tf-state/rds. The issue is the S3 bucket tag Name gets updated to whatever is set in the module name parameter.
What ends up happening is when VPC is created the Name tag would be set as vpc, but when RDS is created the tag is updated to rds.
This may be by design, but is there any way to override this and explicitly set the tag value to something other than what is set as name in the module?
AugustasV over 4 years ago
I would like to use the aws_lb data source's arn_suffix, but receive this error (aws_lb | Data Sources | hashicorp/aws | Terraform Registry).
I could see that option in the resource attributes (aws_lb | Resources | hashicorp/aws | Terraform Registry):
Error: Value for unconfigurable attribute
on ../../modules/deployment/data_aws_lb.tf line 3, in data "aws_lb" "lb":
3: arn_suffix = var.arn_suffix
Can't configure a value for "arn_suffix": its value will be decided
automatically based on the result of applying this configuration.
rss over 4 years ago (edited)
v1.0.6
1.0.6 (September 03, 2021)
ENHANCEMENTS:
backend/s3: Improve SSO handling and add new endpoints in the AWS SDK (#29017)
BUG FIXES:
cli: Suppress confirmation prompt when initializing with the -force-copy flag and migrating state between multiple workspaces. (<a href="https://github.com/hashicorp/terraform/issues/29438"...
Steve Wade (swade1987) over 4 years ago
does anyone know a good module for AWS Budgets before I create my own?
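In case it helps while evaluating modules: the underlying resource is small enough that one may not be needed. A minimal sketch (all names, amounts, and the email address are invented):

```hcl
# Hypothetical monthly cost budget with an email alert at 80% of actual spend.
resource "aws_budgets_budget" "monthly_cost" {
  name         = "monthly-cost-budget"
  budget_type  = "COST"
  limit_amount = "1000"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["alerts@example.com"]
  }
}
```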
Mohamed Habib over 4 years ago
Hi guys recently I’ve been thinking of ways to make my terraform code DRY within a project, and avoid having to wire outputs from some modules to other modules. I came up with a pattern similar to “dependency injection” using terraform data blocks. Keen to hear your thoughts on this? And also curious how do folks organise their large terraform codebases? https://github.com/diggerhq/infragenie/
Rhys Davies over 4 years ago
Hey guys, quick q: When using Terraform to manage your AWS account, how do you or your team deploy containers to ECS? Are you using Terraform to do it, or some other process to create/update container definitions?
NeuroWinter over 4 years ago
Good morning all!
I have a few quick questions - I think I am doing something wrong because I have not seen anyone else talk about this, but here goes!
I have been trying to use cloudposse/cloudfront-s3-cdn/aws in GitHub Actions to set up the infrastructure for my static site, and I have faced a few issues. The first was when I was trying to create the cert for the site within main.tf, as per the examples in the README.md, but I was getting an error about the zone_id being "".
I solved that by supplying the cert ARN manually.
Now I face the problem that after running terraform and applying the config via GitHub Actions, on the next run I get "Error creating S3 bucket: BucketAlreadyOwnedByYou" and it looks like it is trying to create everything again, even though it has been deployed and I can see all the pieces in the AWS console.
Here is a gist of my main.tf: https://gist.github.com/NeuroWinter/2e1877909ce06bd4ae2719b7d004f721
David over 4 years ago
Hi folks - I appear to be having an issue with the following module: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
╷
│ Error: Invalid value for module argument
│
│ on main.tf line 40, in module "ecs_alb_service_task":
│ 40: volumes = var.volumes
│
│ The given value is not suitable for child module variable "volumes" defined at .terraform/modules/ecs_alb_service_task/variables.tf:226,1-19: element 0: attributes "efs_volume_configuration" and "host_path" are required.
╵
The above is the error message I get when performing a Terraform plan.
The section of code which it is complaining about looks like this:
dynamic "volume" {
  for_each = var.volumes
  content {
    host_path = lookup(volume.value, "host_path", null)
    name      = volume.value.name

    dynamic "docker_volume_configuration" {
      for_each = lookup(volume.value, "docker_volume_configuration", [])
      content {
        autoprovision = lookup(docker_volume_configuration.value, "autoprovision", null)
        driver        = lookup(docker_volume_configuration.value, "driver", null)
        driver_opts   = lookup(docker_volume_configuration.value, "driver_opts", null)
        labels        = lookup(docker_volume_configuration.value, "labels", null)
        scope         = lookup(docker_volume_configuration.value, "scope", null)
      }
    }

    dynamic "efs_volume_configuration" {
      for_each = lookup(volume.value, "efs_volume_configuration", [])
      content {
        file_system_id          = lookup(efs_volume_configuration.value, "file_system_id", null)
        root_directory          = lookup(efs_volume_configuration.value, "root_directory", null)
        transit_encryption      = lookup(efs_volume_configuration.value, "transit_encryption", null)
        transit_encryption_port = lookup(efs_volume_configuration.value, "transit_encryption_port", null)

        dynamic "authorization_config" {
          for_each = lookup(efs_volume_configuration.value, "authorization_config", [])
          content {
            access_point_id = lookup(authorization_config.value, "access_point_id", null)
            iam             = lookup(authorization_config.value, "iam", null)
          }
        }
      }
    }
  }
}
With vars for var.volumes declared like this:
variable "volumes" {
  type = list(object({
    host_path = string
    name      = string
    docker_volume_configuration = list(object({
      autoprovision = bool
      driver        = string
      driver_opts   = map(string)
      labels        = map(string)
      scope         = string
    }))
    efs_volume_configuration = list(object({
      file_system_id          = string
      root_directory          = string
      transit_encryption      = string
      transit_encryption_port = string
      authorization_config = list(object({
        access_point_id = string
        iam             = string
      }))
    }))
  }))
  description = "Task volume definitions as list of configuration objects"
  default     = []
}
I am passing in the following:
volumes = [
  {
    name = "etc"
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    }
  },
  {
    name      = "log"
    host_path = "/var/log/hello"
  },
  {
    name = "opt"
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    }
  },
]
If I update the module variables file in my .terraform folder to:
variable "volumes" {
  type = list(object({
    #host_path = string
    #name      = string
    #docker_volume_configuration = list(object({
    #  autoprovision = bool
    #  driver        = string
    #  driver_opts   = map(string)
    #  labels        = map(string)
    #  scope         = string
    #}))
    #efs_volume_configuration = list(object({
    #  file_system_id          = string
    #  root_directory          = string
    #  transit_encryption      = string
    #  transit_encryption_port = string
    #  authorization_config = list(object({
    #    access_point_id = string
    #    iam             = string
    #  }))
    #}))
  }))
  description = "Task volume definitions as list of configuration objects"
  default     = []
}
This applies with no problem. Any ideas, or should I submit a bug?
RB over 4 years ago
@David every key in the object has to be set or Terraform will error out. This is a limitation in Terraform itself.
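For illustration (not from the thread), that means filling every declared attribute explicitly, using null or an empty list for the ones a given entry doesn't need:

```hcl
# Hypothetical input where every attribute of the declared object type is
# present, with null / [] standing in for unused ones.
volumes = [
  {
    name      = "etc"
    host_path = null
    docker_volume_configuration = [{
      autoprovision = true
      driver        = null
      driver_opts   = null
      labels        = null
      scope         = "shared"
    }]
    efs_volume_configuration = []
  },
]
```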
O K over 4 years ago
Hi All! How long approximately should it take to deploy AWS MSK? I use this module https://registry.terraform.io/modules/cloudposse/msk-apache-kafka-cluster/aws/latest and the deployment has been running for 20 min already and still nothing. Any feedback please?
module.kafka.aws_msk_cluster.default[0]: Still creating... [26m0s elapsed]
module.kafka.aws_msk_cluster.default[0]: Still creating... [26m10s elapsed]
O K over 4 years ago
After 26 min it has been created…
Wira over 4 years ago
Hello, I am currently using this terraform module https://registry.terraform.io/modules/cloudposse/elastic-beanstalk-environment/aws/latest to create a worker environment. But I can't find how to configure a custom endpoint for the worker daemon to post to the SQS queue.
Yoni Leitersdorf (Indeni Cloudrail) over 4 years ago
Have you seen https://www.theregister.com/2021/09/07/hashicorp_pause/ ? Thoughts on this?
Kyle Johnson over 4 years ago
Is there any existing solution for generating KMS policies that enable the interop with various AWS services?
Some services need actions others don't, such as kms:CreateGrant. CloudTrail audits will flag that action being granted to services which don't need it.
Seems like there ought to be a module for creating these policies which already knows the details of individual action requirements vs recreating policies from AWS docs on every project 🙈
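As a partial illustration of the CreateGrant case (a sketch, not a vetted policy; the Sid is invented): AWS exposes the kms:GrantIsForAWSResource condition key so grant permissions can be limited to AWS services acting on your behalf:

```hcl
# Sketch of a key-policy statement limiting kms:CreateGrant to AWS services
# only, via the kms:GrantIsForAWSResource condition key.
data "aws_iam_policy_document" "kms_grants" {
  statement {
    sid     = "AllowGrantsForAWSResourcesOnly"
    effect  = "Allow"
    actions = ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    resources = ["*"]

    condition {
      test     = "Bool"
      variable = "kms:GrantIsForAWSResource"
      values   = ["true"]
    }
  }
}
```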
Mohammed Yahya over 4 years ago (edited)
• Terraform is not currently reviewing Community Pull Requests: HashiCorp has acknowledged that it is currently understaffed and is unable to review public PRs.
• 😱
Saichovsky over 4 years ago
Hello,
We have an aws_directory_service_directory resource defined in a service, which creates a security group that allows ports 1024-65535 to be accessible from 0.0.0.0/0, and this is getting flagged by Security Hub because the AWS CIS standards do not recommend allowing ingress from 0.0.0.0/0 for TCP port 3389.
My question is on how to restrict some of the rules in the resultant SG that gets created by the aws_directory_service_directory resource. How do you remediate this using terraform?
mfridh over 4 years ago
Anyone here using tfexec / tfinstall? https://github.com/hashicorp/terraform-exec
2021/09/08 13:15:58 error running Init: fork/exec /tmp/tfinstall354531296/terraform: not a directory
I feel like there are a few lies in this code here 😄...
This one for example: https://github.com/hashicorp/terraform-exec/blob/v0.14.0/tfexec/terraform.go#L62-L74
Slackbot over 4 years ago
This message was deleted.
Tomek over 4 years ago
👋 I have the following public subnet resource:
resource "aws_subnet" "public_subnet" {
  for_each = {
    "${var.aws_region}a" = "172.16.1.0"
    "${var.aws_region}b" = "172.16.2.0"
    "${var.aws_region}c" = "172.16.3.0"
  }

  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = "${each.value}/24"
  availability_zone       = each.key
  map_public_ip_on_launch = true
}
I want to reference the subnets in an ALB resource I'm creating. At the moment this looks like:
subnet_ids = [
  aws_subnet.public_subnet["us-east-1a"].id,
  aws_subnet.public_subnet["us-east-1b"].id,
  aws_subnet.public_subnet["us-east-1c"].id
]
Is there a way to wildcard the above? I tried aws_subnet.public_subnet.*.id, which doesn't work because I think the for_each object is a map. What is the proper way to handle this?
rss over 4 years ago
v1.1.0-alpha20210908
1.1.0 (Unreleased)
UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph...
Steve Wade (swade1987) over 4 years ago (edited)
does anyone know of an IAM policy that will let people view the SSM parameter names and that's it? I don't want them to be able to see the values.
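Not an authoritative answer, but a sketch of the usual split: ssm:DescribeParameters returns only names and metadata, while the ssm:GetParameter* actions return values, so granting only the former should do it (policy name is invented):

```hcl
# Sketch: allow listing parameter names/metadata without access to values.
# Note ssm:DescribeParameters cannot be scoped to specific parameters, hence "*".
resource "aws_iam_policy" "ssm_names_only" {
  name = "ssm-parameter-names-only"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "ViewParameterNames"
        Effect   = "Allow"
        Action   = ["ssm:DescribeParameters"]
        Resource = "*"
      }
    ]
  })
}
```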
Pierre-Yves over 4 years ago
I had a lot of tags to deploy, and not all resources support tagging.
To be effective in the process, and after trying many options to trigger a command on *.tf changes, I finally used watch terraform validate (inotifywait doesn't seem to work on WSL + VS Code).
deepak kumar over 4 years ago
Hi People,
I am creating an ECS service using tf 0.11.7.
I have set the network_mode default to "bridge" for the ECS task definition, but the module can be reused with a different network_mode such as "awsvpc".
Since tf 0.11.* doesn't support dynamic blocks, I need to find a way to achieve a dynamic block to set arguments such as network_configuration (based on the network_mode).
Using locals I guess it can be achieved. Is there any other way to do it in tf 0.11.*?
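One pre-0.12 workaround (a sketch, not from the thread; all variable names are placeholders): declare two variants of the service and toggle between them with count, since 0.11 has no dynamic blocks:

```hcl
# Sketch for tf 0.11: two resource variants selected by network_mode via count.
resource "aws_ecs_service" "bridge" {
  count = "${var.network_mode == "bridge" ? 1 : 0}"

  name            = "${var.service_name}"
  cluster         = "${var.cluster_arn}"
  task_definition = "${var.task_definition_arn}"
  desired_count   = 1
}

resource "aws_ecs_service" "awsvpc" {
  count = "${var.network_mode == "awsvpc" ? 1 : 0}"

  name            = "${var.service_name}"
  cluster         = "${var.cluster_arn}"
  task_definition = "${var.task_definition_arn}"
  desired_count   = 1

  # Only this variant carries the awsvpc-specific block.
  network_configuration {
    subnets = ["${var.subnet_ids}"]
  }
}
```

Downstream references then need to pick whichever variant exists, which is the main awkwardness of this pattern.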
Grummfy over 4 years ago
You can use terraspace / terragrunt / other tools to do that, but I would advise updating the version of Terraform a bit...
Steve Wade (swade1987) over 4 years ago
has anyone managed to get terraform working when using federated SSO with AWS and leveraging an assume-role in the terraform configuration?
Conor Maher over 4 years ago
It's an awesome tool. I am using it for interacting with dozens of AWS accounts, whether it's IAM users + MFA or AWS SSO.
Tomek over 4 years ago
ooof, I just corrupted my local state file and lost the state of a bunch of resources in my terraform (the backup was corrupted too 😭). I don't actually care about the resources; is there a way I can force terraform to destroy the resources that map to my terraform code and reapply?
emem over 4 years ago
hey guys, has anyone ever implemented a description of what terraform is applying on the approval stage in CodePipeline? I can see what my terraform is planning in the terraform plan stage and I would like to pass these details to my approval stage, but approval does not support an artifact attribute. Has anyone found a solution for this before?
Nikola Milic over 4 years ago (edited)
How do I access the ARN of the created resource in the sibling modules belonging to the same main.tf file?
I want to create an IAM user, and an ECR resource that needs that user's ARN (check line 22). How do I reference the variables?
Cameron Pope over 4 years ago
Hello - First of all, thank you for having so many wonderful Terraform modules. I have a question about the aws-ecs-web-app module and task definitions. It seems like neither setting for ignore_changes_task_definition does quite what I need, so I sense I am 'doing it wrong', but I am struggling to find the happy path to doing the right thing.
When I update by pushing new code to GitHub and then run terraform apply, the module wants to switch the task definition back to the previous version. Setting ignore_changes_task_definition to true fixes that, but if I want to update the container size or environment variables, then those changes do not get picked up.
It seems like the underlying problem is my way of doing things (managing the Task Definition via Terraform) is coupling Terraform and the CI/CD process too tightly, and that either Terraform or CodeBuild should 'own' the Task Definition, but not both. I don't see a clean way to create the Task Definition during the Build phase and set it during the deploy phase. The standard ECS deployment takes the currently-running task definition and updates the image URI. It looks like one needs to use CodeDeploy to do anything more advanced.
I don't think I'm the first person to want Terraform not to change the revision unless I've made changes to the task definition on the Terraform side. How do others handle this? Or is my use-case outside of what the aws-ecs-web-app module is designed for?
If you made it here, thank you for reading!
Steve Wade (swade1987) over 4 years ago
anyone hooked in the identity provider for EKS yet? any gotchas I should be aware of?
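For context, the usual wiring for an EKS OIDC identity provider looks roughly like this (a sketch; the cluster name is invented, and it assumes the hashicorp/tls provider for the thumbprint):

```hcl
# Sketch: register an EKS cluster's OIDC issuer as an IAM identity provider,
# which is the prerequisite for IAM roles for service accounts (IRSA).
data "aws_eks_cluster" "this" {
  name = "my-cluster" # hypothetical cluster name
}

data "tls_certificate" "oidc" {
  url = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  url             = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}
```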
Rhys Davies over 4 years ago
Hey guys, I'm writing the Terraform for a new AWS ECS service. I want to deploy 6 (but effectively n) similar container definitions in my task definition. What's the recommended way of looping a data structure (a dict, or list of lists) and creating container_definitions?
1. Is it supposed to be done with a JSON file and a data "template_file" block with some sort of comprehension?
2. I've found https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecs_container_definition but it doesn't have any parameters for command, which is the part that needs to differ slightly between the container definitions
3. https://github.com/cloudposse/terraform-aws-ecs-container-definition I've also found this, not sure if anyone here has had any experience with it? I was going to experiment for_each-ing with it to create 6 container_defs I can then merge() in my resource "task_definition" - is this the right sort of approach?
Rhys Davies over 4 years ago
Thanks in advance for any help 🙂
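One common pattern for the question above (a sketch, not from the thread; the worker names and commands are invented) is to skip template_file entirely and build the definitions with a for expression plus jsonencode:

```hcl
# Sketch: derive n similar container definitions from a map of commands,
# where command is the only per-container difference.
locals {
  workers = {
    ingest  = ["worker", "--queue", "ingest"]
    resize  = ["worker", "--queue", "resize"]
    cleanup = ["worker", "--queue", "cleanup"]
  }
}

resource "aws_ecs_task_definition" "workers" {
  family = "workers" # hypothetical family name

  container_definitions = jsonencode([
    for name, cmd in local.workers : {
      name      = name
      image     = var.app_image
      essential = true
      command   = cmd # the part that differs per container
    }
  ])
}
```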
othman issa over 4 years ago
Hello everyone, I have a question: what is the best way to connect a TF module with an API?
othman issa over 4 years ago
I was reading in the TF docs about the HTTP API.
Slackbot over 4 years ago
This message was deleted.
greg n over 4 years ago
good afternoon guys,
I think I’ve found a version issue with cloudposse/terraform-aws-ecs-web-app (version = “~> 0.65.2”).
Is this a legit upper version limit, or is versions.tf perhaps just a bit out of date?
Thanks
tf -version
Terraform v1.0.2
on linux_amd64
Your version of Terraform is out of date! The latest version
is 1.0.6. You can update by downloading from <https://www.terraform.io/downloads.html>
- services_api_assembly.this in .terraform/modules/services_api_assembly.this
╷
│ Error: Unsupported Terraform Core version
│
│ on .terraform/modules/services_api_alb.alb.access_logs.s3_bucket.this/versions.tf line 2, in terraform:
│ 2: required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.access_logs.module.s3_bucket.module.this (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To
│ proceed, either choose another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
╷
│ Error: Unsupported Terraform Core version
│
│ on .terraform/modules/services_api_alb.alb.access_logs.this/versions.tf line 2, in terraform:
│ 2: required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.access_logs.module.this (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To proceed, either choose
│ another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
╷
│ Error: Unsupported Terraform Core version
│
│ on .terraform/modules/services_api_alb.alb.default_target_group_label/versions.tf line 2, in terraform:
│ 2: required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.default_target_group_label (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To proceed, either
│ choose another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
Richard Quadling over 4 years ago
The versions.tf for v0.65.2 https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/versions.tf says
terraform {
  required_version = ">= 0.13.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.34"
    }
  }
}
Which all looks good. What is the source of the services_api_alb module?
Richard Quadling over 4 years ago
https://registry.terraform.io/modules/cloudposse/alb/aws/latest is 0.35.3, so you are quite a way behind.
Nikola Milic over 4 years ago
For some reason, the EC2 instance does not have a public DNS assigned, even though it's part of the public subnet. What could be the cause?
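In case it's useful, the two settings that usually control this (a sketch; resource names and CIDRs are invented): public DNS hostnames require DNS hostnames enabled on the VPC and a public IP on the instance:

```hcl
# Sketch: settings commonly required for EC2 instances to get public DNS names.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true # without this, no public DNS names are assigned
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true # instances need a public IP to get a public DNS name
}
```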
rss over 4 years ago (edited)
v1.0.7
1.0.7 (September 15, 2021)
BUG FIXES:
core: Remove check for computed attributes which is no longer valid with optional structural attributes (#29563)
core: Prevent object types with optional attributes from being instantiated as concrete values, which can lead to failures in type comparison (<a...
Vikram Yerneni over 4 years ago
Fellas,
Is there a way to add a condition when adding S3 bucket/folder level permissions here: https://github.com/cloudposse/terraform-aws-iam-s3-user
For example, I want to grant a statement like this:
{
  "Sid": "AllowStatement3",
  "Action": ["s3:ListBucket"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"],
  "Condition": {"StringLike": {"s3:prefix": ["media/*"]}}
}
PePe Amengual over 4 years ago
spamming channels: https://tech.loveholidays.com/enforcing-best-practice-on-self-serve-infrastructure-with-terraform-atlantis-and-policy-as-code-911f4f8c3e00
Ozzy Aluyi over 4 years ago
Hello guys, I'm trying to create parameters in AWS SSM - any ideas/solutions will be much appreciated.
Ozzy Aluyi over 4 years ago
data "aws_ssm_parameter" "rds_master_password" {
  name            = "/grafana/GF_RDS_MASTER_PASSWORD"
  with_decryption = "true"
}

resource "aws_ssm_parameter" "rds_master_password" {
  name        = "/grafana/GF_RDS_MASTER_PASSWORD"
  description = "The parameter description"
  type        = "SecureString"
  value       = data.aws_ssm_parameter.rds_master_password.value
}

resource "aws_ssm_parameter" "GF_SERVER_ROOT_URL" {
  name  = "/grafana/GF_SERVER_ROOT_URL"
  type  = "String"
  value = "https://${var.dns_name}"
}

resource "aws_ssm_parameter" "GF_LOG_LEVEL" {
  name  = "/grafana/GF_LOG_LEVEL"
  type  = "String"
  value = "INFO"
}

resource "aws_ssm_parameter" "GF_INSTALL_PLUGINS" {
  name  = "/grafana/GF_INSTALL_PLUGINS"
  type  = "String"
  value = "grafana-worldmap-panel,grafana-clock-panel,jdbranham-diagram-panel,natel-plotly-panel"
}

resource "aws_ssm_parameter" "GF_DATABASE_USER" {
  name  = "/grafana/GF_DATABASE_USER"
  type  = "String"
  value = "root"
}

resource "aws_ssm_parameter" "GF_DATABASE_TYPE" {
  name  = "/grafana/GF_DATABASE_TYPE"
  type  = "String"
  value = "mysql"
}

resource "aws_ssm_parameter" "GF_DATABASE_HOST" {
  name  = "/grafana/GF_DATABASE_HOST"
  type  = "String"
  value = "${aws_rds_cluster.grafana.endpoint}:3306"
}
Ozzy Aluyi over 4 years ago
Error: Error describing SSM parameter (/grafana/GF_RDS_MASTER_PASSWORD): ParameterNotFound:
│
│ with module.Grafana_terraform.data.aws_ssm_parameter.rds_master_password,
│ on Grafana_terraform/ssm.tf line 1, in data "aws_ssm_parameter" "rds_master_password":
│ 1: data "aws_ssm_parameter" "rds_master_password" {
│
managedkaos over 4 years ago (edited)
@Ozzy Aluyi you have a conflict with the data and resource for the parameter named rds_master_password.
On line 1, you are trying to read it as data, and on line 5 you are trying to create it as a resource.
If it's already created and you just want to read it, remove the resource "aws_ssm_parameter" "rds_master_password" {… section.
If you are trying to create it, remove the data "aws_ssm_parameter" "rds_master_password" {... section.
Of course, if you are reading it, you will need to find a way to get the value into place. In summary, you can't have a data resource that calls on itself.
If you are trying to create and store a password, consider using the random_password resource and storing the result of that in the parameter.
https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password
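Building on that suggestion, a minimal sketch (parameter name kept from the thread; length and character settings are assumptions):

```hcl
# Sketch: generate the password and store it, instead of reading it back via a
# data source that would depend on the parameter already existing.
resource "random_password" "rds_master" {
  length  = 32
  special = false # assumption: avoid characters RDS may reject
}

resource "aws_ssm_parameter" "rds_master_password" {
  name  = "/grafana/GF_RDS_MASTER_PASSWORD"
  type  = "SecureString"
  value = random_password.rds_master.result
}
```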
Michael Dizon over 4 years ago
hey guys, i am a little confused about what dns_gbl_delegated refers to in eks-iam https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks-iam/tfstate.tf#L51
Michael Dizon over 4 years ago
is delegated-dns supposed to be added to the global env as well as regional?
Michael Dizon over 4 years ago
i modified the remote state for dns_gbl_delegated to point to primary-dns -- not sure if that's going to cause any issues later on
Ozzy Aluyi over 4 years ago
@managedkaos thanks for the solution. The random_password will make more sense.
MrAtheist over 4 years ago (edited)
Would like some assistance with the following error with a Fargate task. It seems like the stuff inside container_definitions isn't being registered at all… I'm getting all sorts of errors saying args are not found when they are clearly within the template. EDIT: terraform state show data.template_file.main has all the right args in the JSON.
Fargate only supports network mode 'awsvpc'.
Fargate requires that 'cpu' be defined at the task level.
resource "aws_ecs_task_definition" "main" {
  family                   = "${var.app_name}-app"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  #network_mode            = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  #cpu                     = var.fargate_cpu
  #memory                  = var.fargate_memory
  container_definitions    = data.template_file.main.rendered
}
data "template_file" "main" {
template = file("./templates/ecs/main_app.json.tpl")
vars = {
app_name = var.app_name
app_image = var.app_image
container_port = var.container_port
app_port = var.app_port
fargate_cpu = var.fargate_cpu
fargate_memory = var.fargate_memory
aws_region = var.aws_region
}
}
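For what it's worth, the two quoted errors line up with the commented-out arguments rather than the template: Fargate tasks must use network_mode = "awsvpc" and define cpu and memory at the task level. A sketch of the same task definition with those restored:

```hcl
resource "aws_ecs_task_definition" "main" {
  family                   = "${var.app_name}-app"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  network_mode             = "awsvpc" # Fargate only supports awsvpc
  requires_compatibilities = ["FARGATE"]
  cpu                      = var.fargate_cpu    # required at the task level for Fargate
  memory                   = var.fargate_memory # required at the task level for Fargate
  container_definitions    = data.template_file.main.rendered
}
```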
# ./templates/ecs/main_app.json.tpl
[
{
"name": "${app_name}",
"image": "${app_image}",
"cpu": ${fargate_cpu},
"memory": ${fargate_memory},
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/${app_name}",
"awslogs-region": "${aws_region}",
"awslogs-stream-prefix": "ecs"
}
},
"portMappings": [
{
"containerPort": ${container_port},
"hostPort": ${app_port}
}
]
}
]Pedro Santanaover 4 years ago
Hello folks, I'm trying to use the AWS MQ Module (https://github.com/cloudposse/terraform-aws-mq-broker) but it looks to have an issue on the Benchmark Infrastructure Security check. However I can't see what kind of issue it is on the github page. Can anyone explain this for me?
Vikram Yerneniover 4 years ago
Fellas, is there a way to create multiple users with the module: https://github.com/cloudposse/terraform-aws-iam-s3-user
I tried to add a variable for creating multiple users, but its not picking up as two users instead its combining into one: https://github.com/cloudposse/terraform-aws-iam-s3-user/blob/master/examples/complete/fixtures.us-west-1.tfvars#L9
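The module creates one user per instantiation, so a list in a single call gets flattened into one user. One way to get separate users (standard Terraform 0.13+ module for_each; the s3_actions/s3_resources input names are my guess at the module's variables, so treat them as illustrative):

```hcl
# One module instance, and therefore one IAM user, per name in the set.
module "s3_user" {
  source   = "cloudposse/iam-s3-user/aws"
  for_each = toset(["first-user", "second-user"])

  name         = each.key
  s3_actions   = ["s3:GetObject", "s3:PutObject"]
  s3_resources = ["arn:aws:s3:::example-bucket/*"]
}
```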
Davidover 4 years ago
Hi, all. I’m trying to use
cloudposse/terraform-aws-cloudfront-s3-cdn in a module with an existing origin bucket managed in a higher level block using cloudposse/terraform-aws-s3-bucket. I’m getting a continual change cycle where the CDN module sets the origin bucket policy, but then the S3 module goes in and wants to re-write the policy. I’m not sure how to address this. Is there a way to get the S3 module to ignore_changes on the bucket policy or pass in the CDN OAI policy bits so that they’re not stomped on by S3 module runs?joshmyersover 4 years ago
👋 Anyone know if possible to
ignore_changes to an attribute in a dynamic block? Doesn't seem so.lorenover 4 years ago
Anyone building self-hosted GitHub Action Runners using terraform? I found this module, which looks pretty reasonable... https://github.com/philips-labs/terraform-aws-github-runner
Frankover 4 years ago
What is considered a "best practice" when dealing with many projects that are mostly similar in setup / configuration?
A lot of our projects share ~90-95% of the same setup approach (e.g. VPC + ALB + ECS + RDS + Redis + SES + ACM + SSM) and only differ slightly (some have no Redis or no RDS, or additional parameters assigned to the ECS instance).
For each project we currently have separate Git repositories, and the current approach when a new infrastructure needs to be built is that all Terraform code for one of the other projects is copied in and modified accordingly (mostly replacing vars, adding in some additional ECS Secrets / Parameters etc). This is fairly quick to do and is also flexible as we can simply add or remove things we do (not) need.
But it doesn't feel like the most optimal approach. It's also somewhat of a PITA if a change has to be made across all projects.
A few ideas that spring to mind to address this:
1. Create a Terraform "app" module where we can toggle components using variables (e.g. redis_enable = false), use this as the only module and add in optional custom extras (e.g. a project that needs a service not covered by the app module)
2. Use Atmos (but this appears to be pretty much the same way by copy/pasting)
I'm eager to learn how others are doing this.
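Idea 1 has been workable since Terraform 0.13, where module blocks accept count; a sketch (module layout and variable names are illustrative):

```hcl
variable "redis_enable" {
  type    = bool
  default = true
}

# Inside the shared "app" module: each optional component is instantiated
# zero or one times depending on its toggle.
module "redis" {
  source = "./modules/redis"
  count  = var.redis_enable ? 1 : 0

  vpc_id = var.vpc_id
}
```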
Ryan Rykeover 4 years ago
has anyone been able to get the terraform-aws-ecs-web-app to work with
for_each it seems to be cranky with the embedded provider configuration in the github-webhooks module. https://github.com/cloudposse/terraform-github-repository-webhooks/blob/master/main.tfRyan Rykeover 4 years ago
even when i disable the sub-module it’s still cranky
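That crankiness is a core Terraform rule rather than something the module can toggle off: a module whose source tree contains its own provider block cannot be used with for_each, count, or depends_on. The sketch below assumes a hypothetical local copy of the webhooks module with the embedded provider block removed, so the provider comes from the root instead:

```hcl
provider "github" {
  owner = var.github_org
}

module "webhooks" {
  # hypothetical local copy of terraform-github-repository-webhooks
  # with its embedded provider "github" block deleted
  source   = "./modules/github-webhooks"
  for_each = toset(var.repositories)

  providers = {
    github = github
  }
}
```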
R Dhaover 4 years ago
any good resources to learn terraform for gcp?
ByronHomeover 4 years ago
Hi everyone ✋️,
I have weird behavior with an s3 terraform resource, specifically with aws_s3_bucket_object.
I have a local property array list, containing .csv values, and I need to create an s3 object for each element of the array list.
This is my terraform code:
locals {
  foo_values = [
    {
      "name"    = "foo_a"
      "content" = <<-EOT
        var_1,var_2,var_3,var_4
        value_1,value_2,value_3,value_4
      EOT
    },
    {
      "name"    = "foo_b"
      "content" = <<-EOT
        var_1,var_2,var_3,var_4
        value_1,value_2,value_3,value_4
      EOT
    }
  ]
}

resource "aws_s3_bucket_object" "ob" {
  bucket       = aws_s3_bucket.b.id
  count        = length(local.foo_values)
  key          = "${local.foo_values[count.index].name}.csv"
  content      = local.foo_values[count.index].content
  content_type = "text/csv"
}
When I apply it locally, all works fine, and then when I run a terraform plan it gives me a "No changes. Infrastructure is up-to-date" message. My coworkers tried to run a terraform plan and they got the same message. But when I launch a terraform plan in a CodeBuild container, with the same terraform version and with no code changes, the terraform plan gives me changes to apply.
Steve Wade (swade1987)over 4 years ago
looking for some advice if possible ... i have a go binary called rds-to-s3-exporter that needs to run as a lambda in each account. I have two options here:
1. Add the binary as a zip file to a core s3 bucket
2. Push a docker image to a core ECR registry
however in both cases I need to make changes to the bucket policy or registry policy when we create a new account.
does anyone have a nice way to do this?
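Not from this thread, but one common way to avoid touching the bucket policy for every new account is to grant the whole AWS Organization at once via the aws:PrincipalOrgID condition key (bucket and names are illustrative):

```hcl
data "aws_organizations_organization" "this" {}

data "aws_iam_policy_document" "lambda_artifacts" {
  statement {
    sid       = "AllowOrgRead"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.artifacts.arn}/*"]

    # Any principal, restricted below to members of this Organization.
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    condition {
      test     = "StringEquals"
      variable = "aws:PrincipalOrgID"
      values   = [data.aws_organizations_organization.this.id]
    }
  }
}
```

New accounts joining the org then get access automatically, with no policy edits.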
rssover 4 years ago
v1.1.0-alpha20210922
1.1.0 (Unreleased)
UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph...
Kevin Neufeld(PayByPhone)over 4 years ago(edited)
Question: Curious to know if someone has a solution to bootstrap RDS Postgres for IAM authentication, specifically creating and granting the IAM user in the database?
for more context: https://aws.amazon.com/premiumsupport/knowledge-center/rds-postgresql-connect-using-iam/
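Per the linked AWS article, the in-database part is just two statements (CREATE USER plus GRANT rds_iam). One rough way to wire that into Terraform — the null_resource/local-exec approach, the resource names, and app_user are all illustrative, and it assumes psql plus network access from wherever Terraform runs:

```hcl
resource "null_resource" "grant_rds_iam" {
  # re-run the bootstrap if the instance is replaced
  triggers = {
    db_instance = aws_db_instance.this.id
  }

  provisioner "local-exec" {
    # the master password is expected in PGPASSWORD in the environment
    command = <<-EOT
      psql "host=${aws_db_instance.this.address} user=${var.master_username} dbname=postgres" \
        -c 'CREATE USER app_user WITH LOGIN;' \
        -c 'GRANT rds_iam TO app_user;'
    EOT
  }
}
```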
mrwackyover 4 years ago
How are folks dealing with the braindeadedness that is TF 0.14+
.terraform.lock.hcl files? We have a pretty large set of Terraform roles/modules, and boy what a pain to manage & upgrade a zillion different .terraform.lock.hcl files.
Valter Silvaover 4 years ago
Hi All, I've started using the following module at one of my customers as a quickstart. We are making some modifications to meet our requirements. We've added the LICENSE file but I can't find the NOTICE file as stated in the README.md file. By not having a NOTICE file I believe we need to add a header to our *.tf files, correct?
https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
Dustin Leeover 4 years ago
Hello, anybody hitting the issue with multiple MX records on https://github.com/cloudposse/terraform-cloudflare-zone, getting stopped due to duplicate object errors?
Dustin Leeover 4 years ago(edited)
i think the object key may need to pull in the priority into the key id to differentiate
Dustin Leeover 4 years ago
i changed the local.records to pull it in ... bit hacky. i got lost down the rabbit hole with the if logic and formatting when record.priority was present, went with try() instead. seems to work; cloudflare must throw it away if it doesn't make sense
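My reconstruction of that local.records tweak (not Dustin's actual diff): include the priority, when present, in the for_each key so two MX records with the same name and type no longer collide.

```hcl
locals {
  records = {
    for r in var.records :
    # compact() drops the empty string when the record has no priority
    join("-", compact([r.name, r.type, try(tostring(r.priority), "")])) => r
  }
}
```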
Dustin Leeover 4 years ago
Jakub Igłaover 4 years ago
Hi Folks, I'm using your s3-website module, but whenever I try to run
terraform plan, the data source data "aws_iam_policy_document" "default" gets refreshed with different output and it messes up my plan, which should produce "no changes". I'm on latest terraform, the module version is 0.17.1. In the thread I'm attaching what it produces.
Almondovarover 4 years ago
Hi all, in our terraform we've got environments, and we differentiate between the different envs by using different variables.
So far so good, but what happens when we don't want the terraform code to be exactly the same in all envs?
For example, in dev i want to do waf filtering by ip's, in staging i need to combine ip's & urls, and this changes the terraform code, and of course it tries to apply this code everywhere and not only in one specific env.
Is there any way to put some programmatic intelligence behind the tf, like
if env = dev then run code A
elseif env = stage run code B
elseif env = prod run code C
thanks.
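There is no file-level if/else in Terraform, but the standard workaround (a sketch; var.environment and the resource bodies are illustrative) is to gate whole resources on the env with count, so each env only materializes its own variant:

```hcl
resource "aws_wafv2_ip_set" "dev_only" {
  count              = var.environment == "dev" ? 1 : 0
  name               = "dev-allowlist"
  scope              = "REGIONAL"
  ip_address_version = "IPV4"
  addresses          = var.allowed_ips
}

resource "aws_wafv2_regex_pattern_set" "staging_only" {
  count = var.environment == "staging" ? 1 : 0
  name  = "staging-url-patterns"
  scope = "REGIONAL"

  regular_expression {
    regex_string = "^/api/.*"
  }
}
```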
Grummfyover 4 years ago
nice, does it support multiple state file? a replacement for terraboard?
Alysonover 4 years ago
Hi,
it looks like the desired_size variable from the eks-node-group module is not working.
Anyone else going through this?
terraform-aws-eks-node-group - Version 0.26.0
Terraform v0.14.11
Slackbotover 4 years ago
This message was deleted.
Steve Wade (swade1987)over 4 years ago
is there a way to fire a cloudwatch event ad-hoc?
Joaquin Menchacaover 4 years ago
SweetOps is no longer using helmfile? Is terraform used instead for k8s/helm? Any issues w/ the current API not supported w/ the k8s provider, e.g. Ingress?
Almondovarover 4 years ago
Hi all, i am trying to use the and_statement to combine different statements (we need to combine ip filtering with url filtering).
The issue is that the documentation doesn't make clear whether the and_statement block should contain the statement argument inside it, or the opposite, the statement block should contain the and_statement argument inside it.
I tried several ways of composing the code, can someone please tell me what i am doing wrong?
resource "aws_wafv2_web_acl" "alb_waf" {
name = "ALB-WAF"
description = "ALB"
scope = "REGIONAL"
default_action {
block {}
}
rule {
name = "allow-specific-ips"
priority = 1
action {
allow {}
}
statement {
and_statement {
ip_set_reference_statement {
arn = aws_wafv2_ip_set.ipset.arn
}
regex_pattern_set_reference_statement {
arn = aws_wafv2_regex_pattern_set.staging_regex.arn
}
} # and_statement
} # statement block
error code
Error: Unsupported block type
on main.tf line 56, in resource "aws_wafv2_web_acl" "alb_waf":
56: regex_pattern_set_reference_statement {
Blocks of type "regex_pattern_set_reference_statement" are not expected here.
Fizzover 4 years ago
Try statement -> and_statement -> statement -> ip_set_reference_statement
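Spelled out in HCL, that nesting from the aws_wafv2_web_acl docs looks like this: each operand of and_statement gets its own inner statement wrapper.

```hcl
statement {
  and_statement {
    statement {
      ip_set_reference_statement {
        arn = aws_wafv2_ip_set.ipset.arn
      }
    }
    statement {
      regex_pattern_set_reference_statement {
        arn = aws_wafv2_regex_pattern_set.staging_regex.arn
      }
    }
  }
}
```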
Ben Keroover 4 years ago
Hi all. I'm not sure if this is the right place but I'm looking for a review for a PR I made to one of the Cloudposse Terraform AWS modules: https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/pull/54
Alex Jurkiewiczover 4 years ago
I also have a PR needing review please 🙏
https://github.com/cloudposse/terraform-aws-rds-cluster/pull/119
https://github.com/cloudposse/terraform-aws-rds-cluster/pull/119
Alysonover 4 years ago(edited)
I am getting timeout when creating an eks cluster using module 0.43.2. 😔
https://github.com/cloudposse/terraform-aws-eks-cluster/
rssover 4 years ago(edited)
v1.0.8
1.0.8 (September 29, 2021)
BUG FIXES:
cli: Check required_version as early as possibly during init so that version incompatibility can be reported before errors about new syntax (#29665)
core: Don't plan to remove orphaned resource instances in refresh-only plans (<a href="https://github.com/hashicorp/terraform/issues/29640"...
Vlad Ionescu (he/him)over 4 years ago(edited)
AWS just launched a new Cloud Control API ( 1 single CRUD API for AWS resources) and Terraform has a new provider for it (links still WIP I guess?): https://aws.amazon.com/blogs/aws/announcing-aws-cloud-control-api/
Vlad Ionescu (he/him)over 4 years ago
Link to the new provider: https://github.com/hashicorp/terraform-provider-awscc
HashiCorp blog yet to be posted :waiting_anxiously_emoji:
lorenover 4 years ago
yeah that thing seems incredibly aspirational. we'll see.
lorenover 4 years ago
docs indicate it depends on cloudformation resource support. i guess it's nice to have that exposed natively (best of both worlds!), but that support hasn't always moved quickly...
lorenover 4 years ago(edited)
i'm curious if the awscc provider accepts the same authentication mechanisms and configuration settings as the aws provider... can i pass a profile? a role_arn? credential_process? how do i override endpoints?
OliverSover 4 years ago(edited)
(Just saw the previous post about AWS Cloud Control) Being based on CloudFormation, I wonder how much of that bleeds through; esp. since CF now supports stop-on-exception and resume-from-last-exception, maybe the TF interface to the AWS Cloud Control API is ok.
lorenover 4 years ago
i'm figuring we'll see more multi-provider modules for a bit... things the aws provider does, things the awscc provider does... not loving that idea 😉
lorenover 4 years ago
i'm really hoping this doesn't manifest as actual CFN stacks behind the scenes lol
lorenover 4 years ago
registry docs went live recently, answering some of my questions on authentication... https://registry.terraform.io/providers/hashicorp/awscc/latest/docs#authentication
lucasluover 4 years ago
hello folks, i'm very new to devops culture. so i was wondering if docker and terraform do the same job, and why use terraform instead of docker, which has a bigger marketshare. sorry if i was rough, but i'm just a beginner trying to figure out what's better to learn these days