53 messages
Soren Jensenabout 3 years ago
I could use a bit of help here..
I'm trying to create a list of buckets my anti-virus module is using. The list should contain all upload buckets plus 2 extra buckets. I'm using the cloudposse module for creating the upload buckets:
# Create the upload_bucket module
module "upload_bucket" {
  for_each = toset(var.upload_buckets)

  source = "cloudposse/s3-bucket/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "3.0.0"

  enabled     = true
  bucket_name = random_pet.upload_bucket_name[each.key].id
}
I'm in the same module trying to create this list of buckets to scan:
# Use the concat() and values() functions to combine the lists of bucket IDs
av_scan_buckets = concat([
    module.temp_bucket.bucket_id,
    module.db_objects_bucket.bucket_id
  ],
  [LIST OF UPLOAD BUCKETS]
)
As an output this works:
value = { for k, v in toset(var.upload_buckets) : k => module.upload_bucket[k].bucket_id }
Gives me:
upload_bucket_ids = {
  "bucket1" = "upload-bucket-1"
  "bucket2" = "upload-bucket-2"
}
But if I use the same as input to the list it obviously doesn't work, as I need to change the map to a list. Anyone who can tell me how to get this working?
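A for expression turns that map of module instances into a plain list — a sketch reusing the module names from the question:

```hcl
locals {
  av_scan_buckets = concat(
    [
      module.temp_bucket.bucket_id,
      module.db_objects_bucket.bucket_id,
    ],
    # iterate the for_each instances of the upload module and
    # keep just the bucket IDs, producing a list(string)
    [for b in module.upload_bucket : b.bucket_id]
  )
}
```

`values({ for k, v in module.upload_bucket : k => v.bucket_id })` gives the same result; the for expression is just shorter.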
OliverSabout 3 years ago
How does terrateam compare to atlantis? The comparison page by terrateam shows it has several important additional capabilities over atlantis, but I'm looking for something a little deeper: https://terrateam.io/docs/compare
Adnanabout 3 years ago
Anybody knows about a page listing all AWS resource naming restrictions?
jonjitsuabout 3 years ago
Is there something like TF_PLUGIN_CACHE_DIR but for modules downloaded from github? I got 80 services using the same module (I copy pasted the source = "") and terraform redownloads it each time.
Ronabout 3 years ago
what do you guys recommend for storing state on-prem?
Tusharabout 3 years ago
Hi Team,
I'm trying to follow the https://github.com/cloudposse/terraform-aws-vpc-peering module to create VPCs and set up the peering between them.
I'm following the example in "/examples/complete" and, while generating the plan, I get the following error:
Error: Invalid count argument
│
│ on ../../main.tf line 62, in resource "aws_route" "requestor":
│ 62: count = module.this.enabled ? length(distinct(sort(data.aws_route_tables.requestor.0.ids))) * length(local.acceptor_cidr_blocks) : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.
I'm getting the same for resource "aws_route" "acceptor". I'm looking for help to understand the following:
1. What should I improve?
2. Is there a different process for using this module?
3. Is there anything I'm missing?
rssabout 3 years ago(edited)
v1.4.0-alpha20221207
1.4.0 (Unreleased)
UPGRADE NOTES:
config: The textencodebase64 function when called with encoding "GB18030" will now encode the euro symbol € as the two-byte sequence 0xA2,0xE3, as required by the GB18030 standard, before applying base64 encoding.
config: The textencodebase64 function when called with encoding "GBK" or "CP936" will now encode the euro symbol € as the single byte 0x80 before applying base64 encoding. This matches the behavior of the Windows API when encoding to this...
Krushnaabout 3 years ago
Hi,
I am trying to use the cloudposse/terraform-aws-transit-gateway module to connect 2 different VPCs in different regions. Are there any examples? The multi-account example posted (https://github.com/cloudposse/terraform-aws-transit-gateway/tree/master/examples/multi-account) is within the same region.
Joe Perezabout 3 years ago
Hello All! I recently have had to work with AWS PrivateLink and found the documentation to be a bit lacking, so I created a blog post about my experience with the technology. I'm also planning a follow-up post with a terraform example. Has anyone had a chance to use AWS PrivateLink? And have you leveraged other technologies to accomplish the same thing?
https://www.taccoform.com/posts/aws_pvt_link_1/
Ellevalabout 3 years ago
Hi Everyone, I'm hitting the following issue when using cloudposse/terraform-aws-alb and specifying access_logs_s3_bucket_id = aws_s3_bucket.alb_s3_logging.id .
Ellevalabout 3 years ago(edited)
Also, is it possible, when enabling logs (https://github.com/cloudposse/terraform-aws-alb#input_access_logs_enabled), to give the S3 bucket which is created a custom name? It seems to inherit from the labels of the ALB. S3 buckets have a global namespace, which is causing a clash across environments that are in different accounts/regions but use the same seed variable.
aimbotdabout 3 years ago
Hey, pretty cool update for ASG's. https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-ec2-specifying-instance-types-selection-ec2-spot-fleet-auto-scaling/
aimbotdabout 3 years ago
Regarding the node groups module: let's say I'm running the instance type m6id.large, which is provisioned with 118G of SSD ephemeral storage. In order to use that in the cluster, what should I be doing? Do I provision it via the block_device_mappings? Is it already available to pods in the cluster?
Jonas Steinbergabout 3 years ago
If anyone has any preferred ways of terraforming secrets I'd love to hear about it. Right now I'm creating the secret stubs (so no sensitive data) in terraform and allowing people to clickops the actual secret data from UI; I'm also creating secrets through APIs where possible, e.g. datadog, circleci, whatever, and then reading them from those products and writing them over to my secret backend. I'm using a major CSP secret store; I am not using Vault and am not going to use Vault. I am aware of various things like SOPS to some extent. I'm just curious if anyone has any ingenious ideas for allowing for full secret management via terraform files; something like using key cryptography locally and then committing encrypted secrets in terraform might be a bit advanced for my developers. But fundamentally I'm open to anything slick. Thank you!
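The stub-plus-clickops pattern described above can be sketched like this (the secret name is a placeholder). Because Terraform never creates an aws_secretsmanager_secret_version, the sensitive value never lands in state:

```hcl
# Terraform owns the container and its metadata only; the value is
# entered out-of-band (console/CLI) and is never tracked in state
resource "aws_secretsmanager_secret" "app_api_key" {
  name        = "app/prod/api-key"
  description = "Value populated manually, not managed by Terraform"
}
```

Consumers can then reference the secret by ARN (`aws_secretsmanager_secret.app_api_key.arn`) without Terraform ever reading its contents.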
Michael Dizonabout 3 years ago
made a quick PR here: https://github.com/cloudposse/terraform-aws-ssm-patch-manager/pull/22
shamwowabout 3 years ago(edited)
hello, had a question about validations for list(string) variables. I was trying this but it doesn't seem to work:
variable "dns_servers" {
  description = "List of DNS Servers, Max 2"
  type        = list(string)
  default     = ["10.2.0.2"]

  validation {
    condition     = length(var.dns_servers) > 2
    error_message = "Error: There can only be two dns servers MAX"
  }
}
but when I run it, it just errors on that rule. Probably something obvious, but I'm not able to find any solution.
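For what it's worth, a validation `condition` must evaluate to true for *valid* values, so "max 2" is expressed with `<=` rather than `>`. A sketch of the corrected rule:

```hcl
variable "dns_servers" {
  description = "List of DNS servers, max 2"
  type        = list(string)
  default     = ["10.2.0.2"]

  validation {
    # the condition describes what IS allowed:
    # two or fewer entries pass, a third entry fails
    condition     = length(var.dns_servers) <= 2
    error_message = "There can only be two DNS servers max."
  }
}
```

With `> 2`, the default single-entry list fails the rule immediately, which matches the behaviour described.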
Jonas Steinbergabout 3 years ago
Is creating api keys, application keys, private key pairs and similar product or cloud resources for which there are terraform resources for a bad practice? It winds secrets up in the state, but why would providers create these resources if it was an antipattern? Note here I am not talking about creating plaintext secrets or something of that nature -- obviously that is nuts. I have some workflows that involve creating key pairs and then reading them into other places.
I don't think it's possible to avoid having secrets in state, is it?
Vicenteabout 3 years ago
Hello, I have this lb listener
resource "aws_lb_listener_rule" "https_443_listener_rule_2" {
  listener_arn = aws_lb_listener.https_443.arn
  priority     = 107

  action {
    type = "forward"
    forward {
      target_group {
        arn    = aws_lb_target_group.flo3_on_ecs_blue_tg.arn
        weight = 100
      }
      target_group {
        arn    = aws_lb_target_group.flo3_on_ecs_green_tg.arn
        weight = 0
      }
      stickiness {
        enabled  = false
        duration = 1
      }
    }
  }
}
I currently use CodeDeploy to make blue/green deployments into ECS. However, after a deployment the weights of each target group change, and Terraform wants to change them back to the scripted configuration, which makes traffic go to a target group with no containers. What is the best way to tackle this issue, so that regardless of which target group has 100 weight, Terraform does not try to update it?
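A common approach (a sketch carrying over the resource names from the question) is to tell Terraform to ignore the attribute CodeDeploy rewrites, using a lifecycle block:

```hcl
resource "aws_lb_listener_rule" "https_443_listener_rule_2" {
  listener_arn = aws_lb_listener.https_443.arn
  priority     = 107

  action {
    type = "forward"
    forward {
      # initial weights only; CodeDeploy shifts these during
      # blue/green deployments
      target_group {
        arn    = aws_lb_target_group.flo3_on_ecs_blue_tg.arn
        weight = 100
      }
      target_group {
        arn    = aws_lb_target_group.flo3_on_ecs_green_tg.arn
        weight = 0
      }
    }
  }

  lifecycle {
    # don't fight CodeDeploy over the forward action it manages
    ignore_changes = [action]
  }
}
```

The trade-off is that deliberate changes to the action then require a targeted replace or temporarily removing the ignore.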
ANILKUMAR Kabout 3 years ago
How to setup AWS MSK Cluster using SASL/IAM authentication using basic MSK Cluster in Terraform
ANILKUMAR Kabout 3 years ago
Could you please help me in configuring this
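For reference, a minimal sketch of enabling SASL/IAM on an MSK cluster with the plain provider resource (the cluster name, Kafka version, instance type, and variable names are placeholder assumptions):

```hcl
resource "aws_msk_cluster" "example" {
  cluster_name           = "example"
  kafka_version          = "3.2.0"
  number_of_broker_nodes = 2 # must be a multiple of the subnet/AZ count

  broker_node_group_info {
    instance_type   = "kafka.m5.large"
    client_subnets  = var.private_subnet_ids
    security_groups = [var.msk_security_group_id]
  }

  client_authentication {
    sasl {
      iam = true # clients authenticate with IAM and connect on port 9098
    }
  }
}
```

With SASL/IAM enabled, clients use the IAM bootstrap brokers (port 9098) rather than the plaintext 9092 listeners.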
ANILKUMAR Kabout 3 years ago(edited)
Actually, we have created the cluster and we are using SASL/IAM authentication. I have also provided the following policies in the instance profile role:
kafka-cluster:Connect
kafka-cluster:Describe*
kafka-cluster:ReadData
kafka-cluster:Alter*
kafka-cluster:Delete*
kafka-cluster:Write*
Ports are opened on the EC2 instance and the cluster, i.e. 9092, 9098, 2181, 2182, both inbound and outbound.
We are trying to connect with the role, running the command aws kafka describe-cluster --cluster-arn
Getting an error like: Connection was closed before we received a valid response from the endpoint.
Jonas Steinbergabout 3 years ago
Does anyone have a good hack, whether it be a script or a tool (but probably not something crazy like "use TFC or Spacelift"), for preventing cross-state blunders in multi-account state management? In other words, a good hack or script or tool for validating that, for example, a plan that has been generated is about to be applied to the correct cloud account ID (or similar)? Thanks.
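One low-tech guardrail (a sketch; the account ID is a placeholder) is the AWS provider's allowed_account_ids argument, which makes plan/apply fail fast when the resolved credentials belong to the wrong account:

```hcl
provider "aws" {
  region = "us-east-1"

  # Terraform refuses to run if the caller identity resolves
  # to any account other than this one
  allowed_account_ids = ["123456789012"]
}
```

There is also a forbidden_account_ids counterpart for denylisting, e.g. blocking prod from a dev root module.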
deepakshiabout 3 years ago
Hello, team!
deepakshiabout 3 years ago
I'm getting this error, can anyone suggest how to resolve it?
deepakshiabout 3 years ago
│ Error: CacheParameterGroupNotFound: CacheParameterGroup ps-prod-redis-cache not found.
│ status code: 404, request id: ccbf450d-4b2d-410e-95a9-2797c6d184d2
╵
Damianabout 3 years ago(edited)
Hi team. I wonder if there is a way to provide ActiveMQ XML config using this module? https://registry.terraform.io/modules/cloudposse/mq-broker/aws/latest . I am new to Terraform so I might be doing something wrong, but basically I'd like to modify some destination policies like this: <destinationPolicy> ... </destinationPolicy>
If I used the barebones aws_mq_broker I would do it like this:
resource "aws_mq_broker" "example" {
  broker_name = "example"

  configuration {
    id       = aws_mq_configuration.example.id
    revision = aws_mq_configuration.example.latest_revision
  }
  ...
}

resource "aws_mq_configuration" "example" {
  description    = "Example Configuration"
  name           = "example"
  engine_type    = "ActiveMQ"
  engine_version = "5.15.0"

  data = <<DATA
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<broker xmlns="http://activemq.apache.org/schema/core">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry queue=">" gcInactiveDestinations="true" inactiveTimeoutBeforeGC="600000" />
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
DATA
}
Can I attach such a configuration when I use the Cloud Posse mq-broker module?
Alcpabout 3 years ago
I am running into an error with the helm module, installing the calico-operator:
│ Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "default" namespace: "" from "": no matches for kind "APIServer" in version "operator.tigera.io/v1"
│ ensure CRDs are installed first, resource mapping not found for name: "default" namespace: "" from "": no matches for kind "Installation" in version "operator.tigera.io/v1"
│ ensure CRDs are installed first]
│
│ with module.calico_addon.helm_release.this[0],
│ on .terraform/modules/calico_addon/main.tf line 58, in resource "helm_release" "this":
│ 58: resource "helm_release" "this" {
╵
Here is the root module:
module "calico_addon" {
  source  = "cloudposse/helm-release/aws"
  version = "0.7.0"

  name                 = "" # avoids hitting length restrictions on IAM Role names
  chart                = var.chart
  description          = var.description
  repository           = var.repository
  chart_version        = var.chart_version
  kubernetes_namespace = join("", kubernetes_namespace.default.*.id)
  wait                 = var.wait
  atomic               = var.atomic
  cleanup_on_fail      = var.cleanup_on_fail
  timeout              = var.timeout
  create_namespace     = false
  verify               = var.verify
  iam_role_enabled     = false

  eks_cluster_oidc_issuer_url = replace(module.eks.outputs.eks_cluster_identity_oidc_issuer, "https://", "")

  values = compact([
    # hardcoded values
    yamlencode(yamldecode(file("${path.module}/resources/values.yaml"))),
    # standard k8s object settings
    yamlencode({
      fullnameOverride = module.this.name,
      awsRegion        = var.region
      autoDiscovery = {
        clusterName = module.eks.outputs.eks_cluster_id
      }
      rbac = {
        serviceAccount = {
          name = var.service_account_name
        }
      }
    }),
    # additional values
    yamlencode(var.chart_values)
  ])

  context = module.introspection.context
}
Simon Weilabout 3 years ago
Hello, team!
Great to be here and thank you for the useful Terraform modules!
Simon Weilabout 3 years ago
I am using the https://github.com/cloudposse/terraform-aws-sso/ module and have 2 issues:
1. Depends-on issue, opened a PR for it: https://github.com/cloudposse/terraform-aws-sso/pull/33
2. Deprecation warnings for AWS provider v4: https://github.com/cloudposse/terraform-aws-sso/issues/34
As the first issue has got no attention, I did not open a PR for the second one...
Any chance to get a review for the first one and a fix for the second one?
I'm willing to open a PR for the second one if it will get attention.
Please don't see my message as criticism, I'm very grateful for your open source work and modules.
OliverSabout 3 years ago(edited)
Odd bug (I think):
I have a stack that has an ec2 instance with an AMI taken from the stock of AWS public AMIs. There is a data source in the stack which checks for latest AMI based on some criteria. I have been updating the stack every few weeks and I can see that when a newer AMI is available from AWS, the terraform plan shows a replacement of the EC2 instance will occur. All good so far.
Today I changed the aws_instance.ami attribute to override it manually with "ami-xxxxx" (an actual custom AMI that I created), as part of some testing. Oddly, terraform plan did NOT show that the ec2 instance would be replaced. I added some outputs to confirm that my manually overridden value is seen by the var used for aws_instance.ami.
Any ideas what might cause this?
I worked around the issue by tainting the server, and in that case the plan showed that the ami was going to be changed. But I'm still puzzled as to why AMI ID change works sometimes (in this case AWS public AMIs) but not always (here for custom AMIs).
Brice Zakraabout 3 years ago
Hello everyone, how do I move my AWS CodePipeline from one environment to another?
Jonas Steinbergabout 3 years ago
Has anyone ever successfully implemented terraform in CI (not talking TFC, Spacelift or similar) where you came up with a way of preventing the canceling of the CI job from potentially borking TF state? Currently up against this issue. Solutions I can think of right off the top are:
1. have the CI delegate to some other tool like a cloud container that runs terraform
a. don't like this really because it's outside the CI
2. have a step in the CI that requires approval for apply
a. don't like this really because "manual CI"
3. do nothing and just run it in CI
4. try to implement a script that somehow persists the container on the CI backend? I don't have control of this so I highly doubt this is possible.
Jonas Steinbergabout 3 years ago
Is anyone running Terraform in their CI workflow? Not Spacelift, TFC or other terraform management tools, but actual CI like CircleCI, Gitlab, Jenkins, Codedeploy, Codefresh, etc? If so: how do you handle for the potential for an apply to be accidentally canceled mid-run or other complications?
mikeabout 3 years ago
I have a list of Okta application names and would like to convert them to a list of Okta application IDs. I have this working:
variable "okta_app_names" {
  type    = list(string)
  default = ["core_wallet", "dli-test-app"]
}

data "okta_app_oauth" "apps" {
  for_each = toset(var.okta_app_names)
  label    = each.key
}

resource "null_resource" "output_ids" {
  for_each = data.okta_app_oauth.apps

  provisioner "local-exec" {
    command = "echo ${each.key} = ${each.value.id}"
  }
}
The output_ids null_resource will print out each ID. However, I need this in a list, not just printed like this. The list is expected by another Okta resource. Anyone know of a way to get this into a list? Thanks.
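A for expression over the data source gives exactly that — a sketch using the names from the snippet above:

```hcl
locals {
  # collect the ID of every looked-up app into a list(string),
  # suitable for passing to another Okta resource
  okta_app_ids = [for app in data.okta_app_oauth.apps : app.id]
}
```

The null_resource/local-exec echo step then becomes unnecessary; the list can be referenced directly as `local.okta_app_ids`.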
Rikabout 3 years ago
Hi,
Trying to make use of cloudposse/platform/datadog//modules/monitors to create monitors in Datadog.
I'd like to add some tags (which are visible in DD) from a variable. I cannot figure out how to get those included.
Basically the same behaviour as the alert_tags variable, but for normal tags..
Tried tags = but this makes no difference in the monitor that ends up in Datadog:
module "datadog_monitors" {
  source  = "cloudposse/platform/datadog//modules/monitors"
  version = "1.0.1"

  datadog_monitors = local.monitor_map
  alert_tags       = local.alert_tags
  tags             = { "BusinessUnit" : "XYZ" }
}
Duraiabout 3 years ago(edited)
Hi,
Trying to use cloudposse/terraform-aws-cloudwatch-events to create a CloudWatch event rule with an SNS target.
I'm facing an issue with the CloudWatch event rule pattern while creating it.
We use Terragrunt to deploy our resources.
inputs = {
  name                              = "rds-maintenance-event"
  cloudwatch_event_rule_description = "Rule to get notified rds scheduled maintenance"
  cloudwatch_event_target_arn       = dependency.sns.outputs.sns_topic_arn

  cloudwatch_event_rule_pattern = {
    "detail" = {
      "eventTypeCategory" = ["scheduledChange"]
      "service"           = ["RDS"]
    }
    "detail-type" = ["AWS Health Event"]
    "source"      = ["aws.health"]
  }
}
Error received:
Error: error creating EventBridge Rule (nonprod-rds-maintenance-event): InvalidEventPatternException: Event pattern is not valid. Reason: Filter is not an object
at [Source: (String)""{\"detail\":{\"eventTypeCategory\":[\"scheduledChange\"],\"service\":[\"RDS\"]},\"detail-type\":[\"AWS Health Event\"],\"source\":[\"aws.health\"]}""; line: 1, column: 2]
Please suggest how to resolve it.
Patrice Lachanceabout 3 years ago(edited)
Hi, I'm trying to upgrade from https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.44.0 to https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.44.1 and get the following error message:
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused
I saw other replies mentioning network-related issues, but that shouldn't be the case, because I'm running the command from the same host, same terminal session, same environment variables...
I can't figure out by looking at the diff why this problem happens, and hope someone will be able to help me!
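A localhost/127.0.0.1 "connection refused" from the Kubernetes provider usually means it fell back to its default configuration instead of the EKS cluster. A sketch of wiring it explicitly (the data source names and the module output reference are assumptions, not the module's documented setup):

```hcl
data "aws_eks_cluster" "this" {
  name = module.eks_cluster.eks_cluster_id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.eks_cluster_id
}

provider "kubernetes" {
  # point the provider at the real cluster endpoint rather than
  # letting it default to localhost
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```

If the provider was previously fed attributes that are now unknown during plan (e.g. after a module upgrade), it silently degrades to these localhost defaults, which matches the symptom.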
Samabout 3 years ago
Hello Everyone!
I'm working on creating an AWS Organization with the following (dev, staging, and prod), but I don't know what would be the best folder structure in Terraform.
1. Is it better to create a separate directory for each env, or to use workspaces?
Folder structure, i.e.:
- infra/
  - dev/
    - main.tf
    - state.tf
    - dev.tfvars
  - prod/
    - main.tf
    - state.tf
    - prod.tfvars
- modules/
  - vpc/
2. Is it best to use modules to share resources between envs?
3. The .tfstate file: should that be in the root folder structure, or in each env's folder? I know it should be stored inside S3 with locks.
Your help would be greatly appreciated.
Dhamodharanabout 3 years ago
Hi All, I am new to Terraform Cloud. I would like to automate my tf CLI commands with TF Cloud to provision resources in AWS. Can someone help with the proper document? I have gone through the official Terraform documentation, but I couldn't follow it as I am new to this…
Please share any other documents if you come across them…
regards,
Dhamodharanabout 3 years ago
I am trying to create an AWS task definition, passing the env variables in a container definition kept in a different file, but I am getting the below error while planning the tf code:
│ Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal object into Go value of type []*ecs.ContainerDefinition
│
│ with aws_ecs_task_definition.offers_taskdefinition,
│ on ecs_main.tf line 13, in resource "aws_ecs_task_definition" "app_taskdefinition":
│ 13: container_definitions = "${file("task_definitions/ecs_app_task_definition.json")}"
My resource snippet is:
resource "aws_ecs_task_definition" "app_taskdefinition" {
  family                   = "offers"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 2048
  memory                   = 8192
  task_role_arn            = "${aws_iam_role.ecstaskexecution_role.arn}"
  execution_role_arn       = "${aws_iam_role.ecstaskexecution_role.arn}"
  container_definitions    = "${file("task_definitions/ecs_app_task_definition.json")}"
}
I defined the image spec in the JSON file; it works when I deploy manually.
Can someone help with this?
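For context, the Go error (`[]*ecs.ContainerDefinition`) says the JSON must decode to an *array* of container objects, not a single object — a full task-definition document from the console won't work here. A sketch of the expected shape, written inline with jsonencode (the container name and image are placeholders):

```hcl
resource "aws_ecs_task_definition" "app_taskdefinition" {
  family                   = "offers"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 2048
  memory                   = 8192

  # container_definitions must be a JSON ARRAY of containers;
  # jsonencode over an HCL list makes that explicit
  container_definitions = jsonencode([
    {
      name         = "offers"
      image        = "nginx:latest" # placeholder image
      essential    = true
      portMappings = [{ containerPort = 80 }]
    }
  ])
}
```

Equivalently, the external JSON file should start with `[` and end with `]`, containing only the containerDefinitions portion of the task definition.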
Ericabout 3 years ago
Not sure if this is the right place to ask but can a new release be cut for cloudtrail-s3-bucket? There was a commit made to fix (what i assume was) a deprecation message but it was never released to the registry: https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/commit/93050ec4f032edc32fed7b77943f3d43e9baeccd
Ericabout 3 years ago
hmm, actually (and I should have checked this), the deprecation message I got (coming from s3-log-storage deep down) isn't fixed by 0.26.0 either
Ericabout 3 years ago
so I'll file an issue in cloudtrail-s3-bucket to increment that dep
Ericabout 3 years ago
Matt Richterabout 3 years ago
Building out some light-brownfield terraform infra. I would love to make use of this module https://github.com/cloudposse/terraform-aws-tfstate-backend,
Matt Richterabout 3 years ago
but this issue is a bit annoying: https://github.com/cloudposse/terraform-aws-tfstate-backend/issues/118
Matt Richterabout 3 years ago
i may take a whack at improving the outstanding PR at some point
Matt Richterabout 3 years ago
unless someone more familiar with the space has a chance to look
Jonas Steinbergabout 3 years ago
Wow -- shocked I haven't heard of this before! It does not come up at all when googling TACOS or things of that nature; it took accidentally seeing it mentioned in a reddit comment (of course):
https://github.com/AzBuilder/terrakube
cc @Erik Osterman (Cloud Posse)
Roman Kirilenkoabout 3 years ago
hi everyone, I'm trying to do an upgrade of MSK with terraform but have an issue with: Configuration is in use by one or more clusters. Dissociate the configuration from the clusters. Is it possible to bypass that step so the configuration is not even touched?
lorenabout 3 years ago
Recommended reading... https://cloud.google.com/docs/terraform/best-practices-for-terraform
aimbotdabout 3 years ago
Hey all, looking for some guidance here. I'd like to create a node group per AZ, I'd like the name of the group to contain the AZ in it. I'm struggling a bit. Any suggestions would be appreciated
module "primary_node_groups" {
  source  = "cloudposse/eks-node-group/aws"
  version = "2.6.1"

  for_each = { for idx, subnet_id in module.subnets.private_subnet_ids : idx => subnet_id }

  subnet_ids = [each.value]
  attributes = each.value # this doesn't get me what I want
}
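One way to get the AZ into the name (a sketch; it assumes the subnet IDs come from the same module as in the snippet) is to look the subnets up with a data source and key the node groups by availability zone:

```hcl
# Look up each private subnet so we can read its availability zone
data "aws_subnet" "private" {
  for_each = toset(module.subnets.private_subnet_ids)
  id       = each.value
}

module "primary_node_groups" {
  source  = "cloudposse/eks-node-group/aws"
  version = "2.6.1"

  # one node group per AZ, keyed by the AZ name itself
  for_each = { for s in data.aws_subnet.private : s.availability_zone => s.id }

  subnet_ids = [each.value]
  attributes = [each.key] # e.g. "us-east-1a" lands in the generated name
}
```

Note that `attributes` expects a list of strings (it feeds the null-label naming), which is also why `attributes = each.value` in the original didn't behave as hoped.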