52 messages
Mikael Krief almost 3 years ago
Brad almost 3 years ago
Hi all, I'm struggling to attach an EFS volume to an ECS Fargate container, as far as I can make out I've got everything in place which should ensure the attachment... has anyone got any examples which I can cross-reference? 🙂
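Not an answer from the thread, but a minimal sketch of the pieces that usually have to line up (all resource names here are placeholders): the Fargate task definition declares the EFS volume, the EFS mount targets' security group must allow NFS (port 2049) from the task's security group, and the task role needs EFS access if IAM authorization is enabled.

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([{
    name      = "app"
    image     = "nginx:stable"
    essential = true
    mountPoints = [{
      sourceVolume  = "data"
      containerPath = "/data"
    }]
  }])

  volume {
    name = "data"

    efs_volume_configuration {
      file_system_id     = aws_efs_file_system.data.id # assumed to exist elsewhere
      transit_encryption = "ENABLED"

      authorization_config {
        access_point_id = aws_efs_access_point.data.id # assumed access point
        iam             = "ENABLED"
      }
    }
  }
}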
DaniC (he/him) almost 3 years ago
Hi folks, not sure if this is the right channel to ask; if not, pls let me know and I'll move the convo to a new "home".
My question goes around TFC, as I've seen a lot of topics around the cons of using it but little on specifics. Here are a few:
• TFC does recognise *.auto.tfvars, which will be loaded and made available to all workspaces. However, it doesn't know anything about the tfvars themselves. I'm aware of a few workarounds using TF_VAR... variables or the API to "populate" the workspace variables, but that is not an option for me. So my first question: how did folks end up dealing with this limited functionality?
• Getting TFC plan output back to GitHub/GitLab etc. is not a native solution; what path did folks go with?
• TFC sandbox/ephemeral environments are not easy to get out of the box; what path did folks go with?
• Due to the lacks above, I think it's fair to say we could end up with two "paths": one when running inside TFC and one when running outside TFC (e.g. locally), in which case you need to deal with the token. So where do you store the token? Do you have one per team, or one for each individual developer?
Based on all the above, how much better off are env0 or Spacelift?
Tom Hughes almost 3 years ago
Hi Peeps, a question regarding semver.
I have a module that deploys certain resources. The module itself depends on other modules. I want to upgrade an EKS module's major version. What's best practice here in terms of how I should change the version of my module? If a major version of a dependency changes, should I also bump the major version of my module?
PePe Amengual almost 3 years ago
what is the new recommended way:
aws_account_id = join("", data.aws_caller_identity.current[*].account_id)
or
aws_account_id = one(data.aws_caller_identity.current[*].account_id)
rss almost 3 years ago
v1.4.0
1.4.0 (March 08, 2023)
UPGRADE NOTES:
config: The textencodebase64 function when called with encoding "GB18030" will now encode the euro symbol € as the two-byte sequence 0xA2,0xE3, as required by the GB18030 standard, before applying base64 encoding.
config: The textencodebase64 function when called with encoding "GBK" or "CP936" will now encode the euro symbol € as the single byte 0x80 before applying base64 encoding. This matches the behavior of the Windows API when encoding to this...
DaniC (he/him) almost 3 years ago
Hi folks, I have a question about Atmos, as I was going over the concepts and trying to map them to what's been said in various office hours over the past year or so.
As per the Atmos docs you have a components/terraform directory, and the full list points to https://github.com/cloudposse/terraform-aws-components. Now my question: what was the context for keeping all those modules local instead of consuming them from your private/public registry? Other child modules do point to the registry.
Josh Baird almost 3 years ago
I was hoping to get a PR reviewed for the cloudposse/redis module.. Is this the right place? https://github.com/cloudposse/terraform-aws-elasticache-redis/pull/189
setheryops almost 3 years ago (edited)
I need some opinions...
Currently at work we have 10 accounts in AWS. For Terraform we use an IAM user in AccountA that assumes a role that can reach into all other accounts and has admin in all accounts to be able to do terraform things. We also use workspaces, one for each account. All of the state files live in an S3 bucket that is in this AccountA. All of the plans and applies are run by Jenkins (BOO) but we are getting rid of Jenkins and are adopting Atlantis, which runs from an eks cluster in a different account (AccountB).
We currently use Dynamodb for locking. I should be able to just start using a new table in AccountB so I have that covered.
Considering that we have hundreds of thousands of resources in all of these state files and it would be too much to manually import each one into a new statefile in AccountB, what would be the best way to move those resources to the new account statefiles?
Since we have a state file per workspace per service could I just copy that statefile over to the new bucket and have Atlantis look at that new bucket/statefile instead of the old?
So I would move the statefile to the new bucket and put it under the same key as in the old bucket, then update the backend config to point to the new bucket.
My plan is to move over each service one at a time and make sure everything works instead of just one big lift and shift.
Thoughts?
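For what it's worth, a minimal sketch of the copy-then-repoint approach described above; bucket, key, and table names are placeholders.

# Copy the object first:
#   aws s3 cp s3://old-bucket-account-a/service-x/terraform.tfstate \
#             s3://tf-state-account-b/service-x/terraform.tfstate
terraform {
  backend "s3" {
    bucket         = "tf-state-account-b"          # new bucket in AccountB
    key            = "service-x/terraform.tfstate" # same key as in the old bucket
    region         = "us-east-1"
    dynamodb_table = "tf-locks-account-b"          # new lock table in AccountB
  }
}
# Then re-initialize so Terraform reads the new backend without trying to
# migrate anything (the state is already in place):
#   terraform init -reconfigure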
OliverS almost 3 years ago
Hey, what am I missing: I have been reading about the terraform workspace command for the CLI, ie without tf cloud. I really don't see the benefit of CLI-level workspaces over just using separate folders (to hold root module tf files and tfvars). It is just as easy to forget to select the workspace as it is to forget changing to the right folder, no? Tfstate is separate for each env / stack in both cases so no advantage there either.
DaniC (he/him) almost 3 years ago
hi folks,
been looking at semver for TF modules and I found there are different schools of thought out there, whereby:
• some use conventional commits to cut the release, with actions like https://github.com/google-github-actions/release-please-action
• some release based on repo labels (major/minor/patch) attached to a PR; when merged to main, or via issueOps, this can trigger a release (draft or not)
• some TF aws modules are using this
• others use a Version file inside the repo from which a tag is created
any trade-offs on the options above? I can indeed see one with conventional commits, as it is very hard to "speak the same lang" between various teams
Vlad Ciobancai almost 3 years ago (edited)
Hi, I have applied successfully the following module https://github.com/cloudposse/terraform-aws-elasticache-redis/tree/master where I created the Redis cluster, but it doesn't create the aws_elasticache_user_group, which I then need to use for user_group_ids:

resource "random_password" "ksa_dev_password" {
  length  = 16
  special = true
}

resource "aws_elasticache_user" "elasticache_user" {
  user_id       = var.redis_name
  user_name     = var.redis_name
  access_string = "on ~* +@all"
  engine        = "REDIS"
  passwords     = [random_password.ksa_dev_password.result]
  tags          = local.tags
}

resource "aws_elasticache_user_group" "elasticache_group" {
  engine        = "REDIS"
  user_group_id = "${var.tenant}-${var.environment}"
  # Note: the user is already made a member via this user_ids list...
  user_ids      = [aws_elasticache_user.elasticache_user.user_id, "default"]
  tags          = local.tags
}

# ...so this association adds the same user a second time, which is what the
# InvalidParameterCombination error below is reporting.
resource "aws_elasticache_user_group_association" "elasticache_user_group_association" {
  user_group_id = aws_elasticache_user_group.elasticache_group.user_group_id
  user_id       = aws_elasticache_user.elasticache_user.user_id
}

I'm getting the following error:

│ Error: creating ElastiCache User Group Association ("...-develop,...-dev-redis"): InvalidParameterCombination: ...-dev-redis is already a member of ...-develop.
│ status code: 400, request id: 8534d445-aee0-4b40-acf8-db36a14198e6

Could somebody help me? Maybe I'm doing something wrong.
Steve Wade (swade1987) almost 3 years ago
does anyone know / can recommend a terraform module for AWS neptune please?
Steve Wade (swade1987) almost 3 years ago
Does anyone know how to write the EKS KUBECONFIG as a file?
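Two hedged options: from the CLI, aws eks update-kubeconfig --name my-cluster --kubeconfig ./kubeconfig does it; in pure Terraform, something like the following (cluster name is a placeholder) renders one to disk via the local provider.

data "aws_eks_cluster" "this" {
  name = "my-cluster"
}

resource "local_file" "kubeconfig" {
  filename = "${path.module}/kubeconfig"
  content = yamlencode({
    apiVersion        = "v1"
    kind              = "Config"
    "current-context" = "default"
    clusters = [{
      name = data.aws_eks_cluster.this.name
      cluster = {
        server                       = data.aws_eks_cluster.this.endpoint
        "certificate-authority-data" = data.aws_eks_cluster.this.certificate_authority[0].data
      }
    }]
    contexts = [{
      name    = "default"
      context = { cluster = data.aws_eks_cluster.this.name, user = "aws" }
    }]
    users = [{
      name = "aws"
      user = {
        exec = {
          apiVersion = "client.authentication.k8s.io/v1beta1"
          command    = "aws"
          args       = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.this.name]
        }
      }
    }]
  })
}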
Michael Dizon almost 3 years ago
i’m curious if anyone has encountered something like this before:
data "aws_elb_service_account" "default" in https://github.com/cloudposse/terraform-aws-lb-s3-bucket/blob/master/main.tf returns

aws_elb_service_account = [
  + {
      + arn    = "arn:aws-us-gov:iam::048591011584:root"
      + id     = "048591011584"
      + region = null
    },
]

but that account number isn’t associated with any of our accounts. where is it coming from?
Michael Dizon almost 3 years ago
or is the elb service account not something we would own?
Kunalsing Thakur almost 3 years ago
Hello, is there a way to pass a sensitive variable in the connection stanza of a remote provisioner, like passing a sensitive password variable?
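A minimal sketch, assuming a recent Terraform and placeholder values: a variable marked sensitive = true can be referenced in the connection block directly, and Terraform redacts it in plan/apply output.

variable "admin_password" {
  type      = string
  sensitive = true
}

resource "aws_instance" "example" {
  ami           = "ami-12345678" # placeholder
  instance_type = "t3.micro"

  connection {
    type     = "ssh"
    host     = self.public_ip
    user     = "ec2-user"
    password = var.admin_password # sensitive values are accepted here
  }

  provisioner "remote-exec" {
    inline = ["echo connected"]
  }
}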
rss almost 3 years ago (edited)
v1.4.1
1.4.1 (March 15, 2023)
BUG FIXES:
Enables overriding modules that have the depends_on attribute set, while still preventing the depends_on attribute itself from being overridden. (#32796)
terraform providers mirror: when a dependency lock file is present, mirror the resolved providers versions, not the latest available based on...
loren almost 3 years ago
New version 5 of the aws provider is being prepared....
https://github.com/hashicorp/terraform-provider-aws/issues/29842
rss almost 3 years ago (edited)
v1.4.2
1.4.2 (March 16, 2023)
BUG FIXES:
Fix bug in which certain uses of setproduct caused Terraform to crash (#32860)
Fix bug in which some provider plans were not being calculated correctly, leading to an "invalid plan" error (#32860)...
Joey Espinosa almost 3 years ago
Is there a good way to glob all the directory names rather than only file types, using something like fileset?
Example:

for_each = {
  for directory in fileset(path.module, "some/location/*") :
  basename(directory) => directory
}
dir = each.key
foo = yamldecode(file("${each.value}/foo.yaml"))
bar = yamldecode(file("${each.value}/bar.yaml"))

It seems like fileset specifically only returns values that match a file, not a directory. Is there an alternative?
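One workaround sketch, assuming every directory is known to contain a foo.yaml: glob for that file one level down, then strip the filename with dirname() to recover the directories.

locals {
  # fileset only matches files, so match a file each directory contains
  # and take its parent path.
  dirs = toset([
    for f in fileset(path.module, "some/location/*/foo.yaml") : dirname(f)
  ])
}

# then, e.g.:
#   for_each = { for d in local.dirs : basename(d) => d }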
Mikhail almost 3 years ago
Hello!
I wanted to ask if the Cloud Posse team has plans to add the ability to use different action types for aws_lb_listener_rule (not only “forward”, but redirect or fixed-response) in the following module: https://github.com/cloudposse/terraform-aws-alb-ingress?
Slackbot almost 3 years ago
This message was deleted.
Shahul almost 3 years ago
Hi, I tried to add a VM with a disk encryption set; the OS disk gets encrypted with no issues, but the data disk is not getting encrypted at all.
Shahul almost 3 years ago
On Azure
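A guess at the usual cause, with a hedged azurerm sketch (the example.* references are assumed to exist elsewhere): the OS disk inherits encryption from the VM's os_disk block, but each data disk is its own azurerm_managed_disk and needs disk_encryption_set_id set explicitly.

resource "azurerm_managed_disk" "data" {
  name                   = "example-data-disk"
  location               = azurerm_resource_group.example.location
  resource_group_name    = azurerm_resource_group.example.name
  storage_account_type   = "Premium_LRS"
  create_option          = "Empty"
  disk_size_gb           = 128
  disk_encryption_set_id = azurerm_disk_encryption_set.example.id # the piece that's easy to miss
}

resource "azurerm_virtual_machine_data_disk_attachment" "data" {
  managed_disk_id    = azurerm_managed_disk.data.id
  virtual_machine_id = azurerm_linux_virtual_machine.example.id
  lun                = 0
  caching            = "ReadWrite"
}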
vitali-federau almost 3 years ago
Hey everyone, I have created a PR that adds support for the CAPTCHA action for WAF rules: https://github.com/cloudposse/terraform-aws-waf/pull/31
Also bumped the versions of the aws provider and terraform, as they were ±2 years old; none of my tests failed with that.
Wanted to add the content{} block for actions, but it seems it is still not in the aws provider, so I will add it later (I need that in my rule as well).
BTW I can see that CI/CD for my PR failed; could someone show me how I can fix that? https://github.com/cloudposse/terraform-aws-waf/actions/runs/4481450993/jobs/7878168555?pr=31 (seems like it tried to push something into my repo, not sure what I have missed)
johncblandii almost 3 years ago
what's the latest/greatest approach to fixing this:

The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.

is it still a local with the value ref then using count?
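The commonly cited fix, as a sketch (aws_vpc.main is an assumed resource): key the for_each collection on values that are known at plan time, and keep computed attributes on the right-hand side only.

variable "zones" {
  type    = set(string)
  default = ["a", "b", "c"]
}

resource "aws_subnet" "this" {
  for_each          = var.zones       # known at plan time
  vpc_id            = aws_vpc.main.id # computed values are fine as arguments
  availability_zone = "us-east-1${each.key}"
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, index(tolist(var.zones), each.key))
}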
Piotr Pawlowski almost 3 years ago
hello all, when you are developing your own terraform module, what decides if a new change is minor or major? I mean, if the change forces resource recreation then it's obviously a major one, but what about adding new or modifying existing variables or outputs? how do you approach the versioning challenge in such a case? thanks for opinions in advance 🙂
Blake D almost 3 years ago (edited)
Hey everybody, is there a way to declare an outside file path as a component dependency? I'm using a somewhat eccentric terraform provider that lets me point all of my actual configuration to a relative file path that falls outside of my atmos directories. When I run "atmos describe affected" on a PR made to that outside directory, the change doesn't get picked up as impacting my stack, and my workflow doesn't identify that a new plan is needed.
DaniC (he/him) almost 3 years ago
hi Folks, anyone around who has used TFC and tried to structure the codebase into layers: how did you share workspace state output values between them? For example, you have a vpc-prod workspace (the sum of a few tf modules) and a compute-prod workspace (the sum of a few tf modules), and the latter needs the output from the former. In the docs I saw https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/data-sources/outputs but I'm not sure if it is a good idea to go with.
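A minimal sketch of the tfe_outputs data source linked above; organization and workspace names are placeholders, and note its values are marked sensitive by design.

data "tfe_outputs" "vpc" {
  organization = "acme"
  workspace    = "vpc-prod"
}

locals {
  vpc_id = data.tfe_outputs.vpc.values.vpc_id
}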
Viacheslav almost 3 years ago
Hi guys, I need advice about passing variables through the command line.

terraform apply -var "myvar={\"Host\":\"$HOSTNAME\", \"headers\":\"{\"Content-type\":\"text/html\"}\"}"

The value of headers will be written to Azure KeyVault, so it should be passed as is, with brackets: {"Content-type":"text/html"}, but I get this error:

│ Error: Missing attribute separator
│
│ on <value for var.myvar> line 1:
│ (source code not available)
│
│ Expected a newline or comma to mark the beginning of the next attribute.
╵
exit status 1

If I understand correctly, Terraform recognizes curly brackets in the value as a nested map, but it should be recognized as a string. How can I overcome this issue?
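One hedged workaround, assuming a bash-like shell: declare headers as a plain string in the variable's type, then escape the inner JSON's quotes once for HCL. Single shell quotes keep the escapes intact (they also stop $HOSTNAME expanding, so substitute it separately if needed).

variable "myvar" {
  type = object({
    Host    = string
    headers = string # the inner JSON stays a string, not a nested map
  })
}

# terraform apply -var 'myvar={Host="myhost", headers="{\"Content-type\":\"text/html\"}"}'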
MrAtheist almost 3 years ago (edited)
anyone know how to get data.template_file.my_template.rendered without the quotes...? im getting output with quotes " completely nuked

data "template_file" "json" {
  template = templatefile("${path.module}/template.json",
    {
      AWS_REGION = "${data.aws_region.current.name}"
    }
  )
}

data "template_file" "user_data" {
  template = templatefile("${path.module}/user-data.sh",
    {
      JSON_TEMPLATE = data.template_file.json.rendered
    }
  )
}

# and when trying to printf ${JSON_TEMPLATE} in user-data.sh, all quotes are stripped
# instead it works if the original template starts with \"
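A hedged alternative sketch (file names assumed): skip the template_file data source, build the JSON with jsonencode(), and pass it straight into templatefile(); quoting the interpolation inside the script keeps the quotes intact.

locals {
  json_doc = jsonencode({
    region = data.aws_region.current.name
  })
}

# user_data = templatefile("${path.module}/user-data.sh", { JSON_TEMPLATE = local.json_doc })
#
# and inside user-data.sh, quote the interpolation so the shell doesn't eat the quotes:
#   printf '%s' '${JSON_TEMPLATE}'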
Isaac almost 3 years ago (edited)
Is anyone else’s terraform plans failing with an x509: certificate has expired or is not yet valid: error?
Vucomir Ianculov almost 3 years ago
hey,
I'm looking into using the following terraform module:
https://github.com/cloudposse/terraform-aws-glue
but when I want to specify multiple partition keys I'm not sure of the best way.
From the variable I can see that it supports only one:

variable "partition_keys" {
  # type = object({
  #   comment = string
  #   name    = string
  #   type    = string
  # })
  # Using `type = map(string)` since some of the fields are optional and we don't want to force the caller to specify all of them and set to `null` those not used
  type        = map(string)
  description = "Configuration block of columns by which the table is partitioned. Only primitive types are supported as partition keys."
  default     = null
}

Am I missing something?
Daniel Loader almost 3 years ago
Anyone using karpenter in their EKS terraform stack? and if so, how did you integrate it in sanely? 😄
Zach B almost 3 years ago (edited)
In the Atmos “Hierarchical Layout” there seems to be a lot of assumption about the way we organize our OUs and accounts.
I assume this is because it has been a working strategy for Cloud Posse.
However, it seems to be making it much more difficult to adopt into our tooling.
E.g. the hierarchical layout assumes that the accounts living directly under each OU are only separate stages of a single account.
This is because the stage variable from the name_pattern is tied to the stack living directly under an OU (tenant).
You can change the name_pattern but it won’t break the overall assumption that stacks actually cannot be per-account.
The assumption is more strict than that, because we’re limited to the following variables in the name_pattern:
• namespace
• tenant
• stage
• environment
Case: Sandbox accounts. What if we wanted to provision defaults for sandbox accounts for our developers?
These sandbox accounts might live in a Sandbox OU (tenant), but they aren’t necessarily separate stages of one another, at all.
There is no feasible strategy with the name_pattern without breaking the behavior of other stacks.
One option could be to combine our account name and region into the environment variable (possibly without side-effects?) like so: sandbox-account-1-use1.yaml
But then we would be left with several directories where nesting would be better organized like so: sandbox-account-1/use1.yaml
I can only think that we should have an additional variable in the name_pattern, for example name, to truly identify the account.
I hope I’ve missed something and Atmos does have the flexibility for this. Any advice would be much appreciated!
Zach B almost 3 years ago
Another way of thinking of it is that you can only go at most 4 levels deep in your hierarchy: namespace tenant stage environment, for example.
But there are cases where 5 levels might be needed: namespace tenant account-name stage environment
Zach B almost 3 years ago
I may have found my answer, I’ll report back if I do.
Zach B almost 3 years ago
I did not find an answer.
Here, atmos describe stacks reports only 2 stacks, but there are actually 4 unique stacks with this structure, which can be thought of as:
• acme-workloads-data-prod-us-east-1
• acme-workloads-data-test-us-east-1
• acme-workloads-jam-prod-us-east-1
• acme-workloads-jam-test-us-east-1
However, atmos only sees:
• acme-workloads-test-us-east-1
• acme-workloads-prod-us-east-1
Zach B almost 3 years ago
My current solution is to combine the OU and application/account name into the tenant variable. We’ll see if that works: tenant: workloads-data
Matt Gowie almost 3 years ago
Appreciate more 👍️ on these two AWS Provider issues:
1. https://github.com/hashicorp/terraform-provider-aws/pull/28921
2. https://github.com/hashicorp/terraform-provider-aws/issues/27646
Christof Bruyland almost 3 years ago
I'm new to this community, and I started using the cloudposse modules.
I managed to use them to install our EKS cluster and node-group. Is there a cloudposse module available that can create the required IAM roles for EKS so I can use the ALB ingress controller?
Or do you suggest another ingress controller?
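Not a specific Cloud Posse module, but a minimal IRSA sketch of the role the AWS Load Balancer Controller's service account assumes (the OIDC provider resource and the controller's policy attachment are assumed to exist elsewhere):

data "aws_iam_policy_document" "alb_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-load-balancer-controller"]
    }
  }
}

resource "aws_iam_role" "alb_controller" {
  name               = "alb-ingress-controller"
  assume_role_policy = data.aws_iam_policy_document.alb_assume.json
}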
rss almost 3 years ago (edited)
v1.4.3
1.4.3 (March 30, 2023)
BUG FIXES:
Prevent sensitive values in non-root module outputs from marking the entire output as sensitive (#32891)...
Dan Shepard almost 3 years ago
Hi all. I'm encountering a bug with msk-apache-kafka-cluster (this). I've created a company module specifically from it w/ this code:

main.tf

module "msk-apache-kafka-cluster" {
  source  = "cloudposse/msk-apache-kafka-cluster/aws"
  version = "1.1.1"

  name                          = var.name
  vpc_id                        = var.vpc_id
  subnet_ids                    = var.subnet_ids
  kafka_version                 = var.kafka_version
  associated_security_group_ids = var.associated_security_group_ids
  broker_instance_type          = var.broker_instance_type
  broker_per_zone               = var.broker_per_zone
}

variables.tf

variable "name" {
  type = string
}

variable "vpc_id" {
  type = string
}

variable "subnet_ids" {
  type = list(string)
}

variable "kafka_version" {
  type = string
}

variable "associated_security_group_ids" {
  type = list(string)
}

variable "broker_instance_type" {
  type = string
}

variable "broker_per_zone" {
  type = number
}

As I am using terragrunt to invoke the tf-module, my terragrunt.hcl looks thusly:

terraform {
  source = "git@github.com:myplace/terraform-modules.git//snowplow?ref=snowplow/v1.1.0"
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  name                          = "myplace-snowplow-test"
  vpc_id                        = dependency.vpc.outputs.vpc_id
  subnet_ids                    = dependency.vpc.outputs.private_subnets
  kafka_version                 = "2.8.1"
  associated_security_group_ids = [dependency.vpc.outputs.default_sg_id]
  broker_instance_type          = "kafka.t3.small"
  broker_per_zone               = 1
}

It has a dependency on the outputs of the vpc module, which show up like this:

azs = tolist([
  "us-east-1a",
  "us-east-1b",
  "us-east-1c",
])
cgw_ids = []
database_subnets = [
  "subnet-05ace04da69c0a5c3",
  "subnet-03f094702e6413a5c",
  "subnet-0e29e3ea07b3161bd",
]
default_sg_id = "sg-019db31e6084d695b"
intra_subnets = [
  "subnet-02fa8b695b63f36be",
  "subnet-068a94b0fcb72c6bf",
  "subnet-0edb9a2c27f57b067",
]
nat_public_ips = tolist([
  "3.231.112.27",
])
private_subnets = [
  "subnet-047d998dd1bb4e300",
  "subnet-02627f60507ea09fb",
  "subnet-00ffed109a79644da",
]
public_subnets = [
  "subnet-0b82cf0a6e280600a",
  "subnet-0512c45da9cac36f2",
  "subnet-0588f61d9b5307245",
]
this_customer_gateway = {}
vpc_cidr = "10.2.0.0/16"
vpc_id = "vpc-0adb2021bba46a1c5"

When I try to run the snowplow module however, I'm getting the following error:

Error: creating Security Group (glorify-snowplow-test): InvalidVpcID.NotFound: The vpc ID 'vpc-0adb2021bba46a1c5' does not exist
│ status code: 400, request id: 3f7c6a9c-0025-4baa-8345-f44496a95c7f
│
│ with module.msk-apache-kafka-cluster.module.broker_security_group.aws_security_group.default[0],
│ on .terraform/modules/msk-apache-kafka-cluster.broker_security_group/main.tf line 24, in resource "aws_security_group" "default":
│ 24: resource "aws_security_group" "default" {

That vpc exists (per the outputs above), and it's in the console as well. Even when I hardcode that variable in the terragrunt.hcl, it gives the same error.
Is this a bug that I need to open a report for?
Zach B almost 3 years ago
It looks like the latest release of the terraform-aws-vpc-flow-logs-s3-bucket module is using a version of terraform-aws-s3-log-storage that is 8 versions out-of-date:
https://github.com/cloudposse/terraform-aws-vpc-flow-logs-s3-bucket/blob/master/main.tf#L165-L166
https://github.com/cloudposse/terraform-aws-s3-log-storage/releases
Zach B almost 3 years ago
^ At the moment, I’ve experienced deprecation warnings due to this, such as:
Use the aws_s3_bucket_server_side_encryption_configuration resource instead
rss almost 3 years ago
v1.4.4
1.4.4 (March 30, 2023)
Due to an incident while migrating build systems for the 1.4.3 release where
CGO_ENABLED=0 was not set, we are rebuilding that version as 1.4.4 with the
flag set. No other changes have been made between 1.4.3 and 1.4.4.
OliverS almost 3 years ago (edited)
Any tricks for adding new tags to existing ec2 instances that have already been deployed via a terraform aws_autoscaling_group, without relaunching the instances? If I just update the tags in the .tf file, the existing instances will not get those; relaunching the instances (eg via the ASG) seems overkill just for tags, and adding tags directly to the instances via the CLI is error prone (I could easily miss some instances, tag incorrect instances, use incorrect values, etc).
Thinking perhaps I could create an output that shows the aws command to run for each ASG (ie the module that creates each ASG could do that, injecting the ASG name, tags map, etc into the aws command). The root module would just aggregate them, so I would only have to copy and paste in a shell, which mitigates most of the risks.
Just hoping there's a trick I haven't thought of...
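A sketch of that output idea (aws_autoscaling_group.this and var.tags are assumed names): emit a copy-pasteable command that looks up the ASG's current instance IDs and tags them in place.

output "retag_command" {
  description = "Copy/paste to tag this ASG's current instances in place"
  value = join(" ", concat([
    "aws ec2 create-tags --resources",
    "$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names ${aws_autoscaling_group.this.name} --query 'AutoScalingGroups[0].Instances[].InstanceId' --output text)",
    "--tags",
  ], [for k, v in var.tags : "Key=${k},Value=${v}"]))
}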
Brent Gal almost 3 years ago
So I'm trying to feed CIDRs into security_group_rules from the ec2-instance module and it's complaining about:

│ The given value is not suitable for
│ module.instance.module.box.var.security_group_rules
│ declared at
│ .terraform/modules/instance.box/variables.tf:72,1-32:
│ element types must all match for conversion to list.
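That error usually means the rule objects don't all have the same shape; a minimal sketch (values invented) that keeps every element type-compatible by giving each object identical keys, padding unused ones with null:

security_group_rules = [
  {
    type        = "ingress"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
    description = null # keep keys identical across elements
  },
  {
    type        = "egress"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = null
  },
]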
DaniC (he/him) almost 3 years ago (edited)
Hi Folks, is anyone aware of what sort of data is shared back with Mend.io when installing the Renovate GitHub App? I'm trying to sell it to my team, but the GH Org admin requests all sorts of compliance rules around the data being transferred, even when using the free version of Renovate.