35 messages
Ryan over 1 year ago
Hi all,
This is Ryan (ex Spacelift, now Resourcely).
We are looking to document and test integrating Resourcely with additional CI/CD systems and other Terraform runners.
Resourcely is a configuration engine focusing on paved roads for infrastructure as code.
It’s kind of like an AWS Service Catalog, but 10x better and with support across cloud providers and platforms. You can define safe & sane blueprints, then present developers with a nice quick and clean interface for selecting those things, instead of the naked cloud console. Multi-cloud support out of the gate makes for a pretty compelling story, and users really like the security implications of default-deny when it comes to resources (developers can select from blessed service blueprints, which ensures they aren’t accidentally choosing a $30/hr instance type or an unapproved region or a software version that hasn’t been approved by the compliance folks, etc.).
Our team currently has documented a number of integrations with “terraform runners” / TACOS solutions found here: https://docs.resourcely.io/integrations/terraform-integration
We need to create some documentation and run a few end-to-end tests covering more real-world use cases. I’d love to figure out a way to make it beneficial to the community and any individuals who want to assist.
Is there anyone interested in collaborating with us?
Feel free to comment or send me a DM, then we can connect next week.
Taylor Turner over 1 year ago
What are some of the newer age IaC tools that you've shown before on the podcast? There was one that was dependency aware and had a visual drag-n-drop UI.
sheldonh over 1 year ago
remind me... if I'm building a reusable module... does it declare an empty provider or not? the docs got me confused. I see no examples of you doing this outside the examples/ directory in cloudposse modules.
Assuming I should gut any providers.tf in the reusable module and only declare required_providers?
So I would leave this out?

provider "azurerm" {
  features {}
}

or any other provider mapping to the username/password etc.?
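For reference, the usual convention (a minimal sketch, not specific to Cloud Posse; version constraints here are illustrative) is that the reusable module only pins its requirements in a required_providers block and contains no provider block at all:

# versions.tf inside the reusable module: requirement only, no provider
# "azurerm" block and no credentials here.
terraform {
  required_version = ">= 1.3"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0"
    }
  }
}

The provider "azurerm" { features {} } block, and any username/password mapping, would then live only in the root module (or the examples/ directory) that calls the module.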
jaysun over 1 year ago
what’s the best way to do the following?
• convert from CDKTF (failed experiment) cloudposse eks module to vanilla terraform cloudposse eks module
• convert from terraform-aws-modules/eks to cloudposse eks module? (some of our old clusters)
we’re trying to standardize and need to perform module migrations for both of the above. I think the first one might be simpler in terms of state file massaging, but not certain
is terraform state mv the right call? or something fancy / clever with the import blocks?
Destroying / recreating the sensitive resources (eks cluster) is not an option
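For reference, a rough sketch of the import-block approach mentioned above (Terraform >= 1.5). The module address and cluster name are purely hypothetical; the real ones come from terraform state list and the new module's source:

# Adopt the existing cluster into the new module's address instead of
# recreating it; repeat for each stateful resource being migrated.
import {
  to = module.eks.aws_eks_cluster.default[0]
  id = "existing-cluster-name"
}

When the old and new module calls share a single state (the terraform-aws-modules/eks case), terraform state mv from the old addresses to the new ones avoids re-importing at all.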
rss over 1 year ago (edited)
v1.10.0-alpha20240606
1.10.0-alpha20240606 (June 6, 2024)
EXPERIMENTS:
Experiments are only enabled in alpha releases of Terraform CLI. The following features are not yet available in stable releases.
ephemeral_values: This language experiment introduces a new special kind of value which Terraform allows to change between the plan phase and the apply phase, and between plan/apply rounds....
jaysun over 1 year ago
have people been using the “new” security group resources?
aws_vpc_security_group_ingress/egress_rule
historically, the group rules have been a PITA with destroy recreates / rule dedupe logic…
wondering if folks think it’s worth it to move to the new resources
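For context, a minimal sketch of the newer one-rule-per-resource style (names and CIDRs are made up). Each rule is its own object in state, so adding or removing one does not churn the whole rule set the way inline ingress/egress blocks can:

variable "vpc_id" { type = string }

resource "aws_security_group" "web" {
  name   = "web"
  vpc_id = var.vpc_id
}

resource "aws_vpc_security_group_ingress_rule" "https" {
  security_group_id = aws_security_group.web.id
  description       = "HTTPS from anywhere"
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 443
  to_port           = 443
  ip_protocol       = "tcp"
}

resource "aws_vpc_security_group_egress_rule" "all" {
  security_group_id = aws_security_group.web.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1" # all traffic; from_port/to_port are omitted for -1
}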
Juan Pablo Lorier over 1 year ago
Hi, I'm trying to understand why the ecs cluster module is trying to recreate the policy attachments every time I add more than one module instance via a for_each.
The plan shows the ARN will change, but it's an AWS managed policy, so it won't change:

update:
  policy_arn: "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" changes to (known after apply), forces replacement

the resource address is:
module.ecs_clusters["xxx"].module.ecs_cluster.aws_iam_role_policy_attachment.default["AmazonSSMManagedInstanceCore"]
Soren Jensen over 1 year ago
I need a bit of help to debug an issue.
I have created an RDS Postgres cluster with resource "aws_rds_cluster" "postgres_cluster" and set manage_master_user_password = true. I'm now trying to get the master password from Secrets Manager to bootstrap the database with the PostgreSQL provider.

data "aws_secretsmanager_secret" "postgres_password_secret" {
  arn        = aws_rds_cluster.postgres_cluster.master_user_secret[0].secret_arn
  depends_on = [aws_rds_cluster.postgres_cluster]
}

data "aws_secretsmanager_secret_version" "postgres_password_version" {
  secret_id  = data.aws_secretsmanager_secret.postgres_password_secret.id
  depends_on = [data.aws_secretsmanager_secret.postgres_password_secret]
}

# Parse the secret JSON string to extract the username and password
locals {
  db_credentials = jsondecode(data.aws_secretsmanager_secret_version.postgres_password_version.secret_string)
}

Unfortunately the first data source, aws_secretsmanager_secret, isn't working on our self-hosted GitHub runners. It works locally on my laptop, as well as on the default GitHub runners. I have spent a significant amount of time trying to narrow down differences in versions and reading the debug output to see why it doesn't work. I see both on the self-hosted runner and locally that Terraform correctly finds the cluster and resolves aws_rds_cluster.postgres_cluster.master_user_secret[0].secret_arn to the correct ARN. Still, Terraform is stuck:

data.aws_secretsmanager_secret.postgres_password_secret: Still reading... [10s elapsed]
data.aws_secretsmanager_secret.postgres_password_secret: Still reading... [20s elapsed]
data.aws_secretsmanager_secret.postgres_password_secret: Still reading... [30s elapsed]

Any ideas?
Jackie Virgo over 1 year ago
Has anyone used the terraform-aws-s3-bucket module for creating bi-directional replication?
Dhruv Tiwari over 1 year ago
Hi,
We are at the POC stage of implementing the RDS cluster through the Cloud Posse module.
One critical requirement is to use RDS-managed Secrets Manager credentials for the DBs.
I see there is a PR for this feature:
https://github.com/cloudposse/terraform-aws-rds-cluster/pull/218
If possible, can anyone share the approximate ETA on this? (It would help in planning our POC accordingly.)
Jason over 1 year ago
does anyone here work on the Terraform provider backend stuff with Go?
I've gotten to building my own provider using the HashiCups stuff. First I followed it word for word for the demo HashiCups provider, and THAT WORKED. Then I went off, downloaded the framework again, and have been following it for my own provider, and it's not working and I can't figure out why. (edited)
It's confidential code so I can't share it publicly
I need help please
Jason over 1 year ago
Basically the stupid Terraform provider is still trying to look up registry.terraform.io/hashicorp/hashicups even though I've shoved this in the main.go part: hashicorp.com/edu/gccloud, and I changed my .terraformrc file to be this:

provider_installation {
  dev_overrides {
    "hashicorp.com/edu/gccloud" = "/home/jason/go/bin"
  }

  # For all other providers, install them directly from their origin provider
  # registries as normal. If you omit this, Terraform will _only_ use
  # the dev_overrides block, and so no other providers will be available.
  direct {}
}

Jason over 1 year ago
This is the error I am getting:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are inconsistent with the current configuration:
│   - provider registry.terraform.io/hashicorp/gccloud: required by this configuration but no version is selected
│   - provider registry.terraform.io/hashicorp/hashicups: required by this configuration but no version is selected
│
│ To make the initial dependency selections that will initialize the dependency lock file, run:
│   terraform init

There is no lock file because when writing providers you don't init.

andrew_pintxo over 1 year ago
Hello, I am creating an ECS module
module "pdf_ecs_task" {
source = "cloudposse/ecs-alb-service-task/aws"
version = "0.66.4"with this attribute
task_policy_arns = [
module.pdf_ecs_iam_policy.policy_arn
]But it throws me this error:
Error: Invalid for_each argument
│
│ on .terraform/modules/pdf_ecs_task/main.tf line 162, in resource "aws_iam_role_policy_attachment" "ecs_task":
│ 162: for_each = local.create_task_role ? toset(var.task_policy_arns) : toset([])
│ ├────────────────
│ │ local.create_task_role is true
│ │ var.task_policy_arns is list of string with 1 element`What can be the problem? Where should I look into?
Thank you
Jason over 1 year ago
Do you have the module code? And the locals code?
Jason over 1 year ago
It's how you're passing something in
Jason over 1 year ago
You have a bool variable and you're trying to pass in a list with 1 element?
PePe Amengual over 1 year ago (edited)
If I have a pepe.auto.tfvars file where I have attributes = ["one", "two"] and an env.tfvars that I use with -var-file where I also have attributes = ["four"], will Terraform merge the values of both attributes?
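For reference, a minimal sketch of how this is usually checked (the variable name is taken from the question; the comment reflects Terraform's documented precedence rules, not this specific setup):

variable "attributes" {
  type    = list(string)
  default = []
}

# pepe.auto.tfvars:              attributes = ["one", "two"]
# env.tfvars (via -var-file):    attributes = ["four"]
#
# Terraform does not merge the two lists: when a variable is set in several
# places, the last definition wins, and explicit -var-file arguments are
# processed after *.auto.tfvars, so this outputs ["four"].
output "attributes" {
  value = var.attributes
}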
rss over 1 year ago (edited)
v1.10.0-alpha20240619
1.10.0-alpha20240619 (June 19, 2024)
EXPERIMENTS:
Experiments are only enabled in alpha releases of Terraform CLI. The following features are not yet available in stable releases.
ephemeral_values: This language experiment introduces a new special kind of value which Terraform allows to change between the plan phase and the apply phase, and between plan/apply rounds....
Mehak over 1 year ago
Can someone help me with a Sentinel policy to enforce multi-AZ on RDS Aurora and Elasticsearch clusters? I will create the policy in Terraform Cloud.
andrew_pintxo over 1 year ago
Does Cloud Posse have a module to create a subdomain and attach it to an existing load balancer?
Junior over 1 year ago
Hi, I'm using the AWS Terraform provider and I need help adding multiple external repositories to aws_codeartifact_repository. The documentation says:
external_connections - An array of external connections associated with the repository. Only one external connection can be set per repository.
Does the external connection take an array, or am I reading the documentation wrong? Thanks in advance.

resource "aws_codeartifact_repository" "upstream" {
  repository = "upstream"
  domain     = aws_codeartifact_domain.test.domain
}

resource "aws_codeartifact_repository" "test" {
  repository = "example"
  domain     = aws_codeartifact_domain.example.domain

  external_connections {
    external_connection_name = "public:npmjs"
  }
}
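For context, a sketch of one common workaround given the limit the docs describe (one external connection per repository): dedicate a repository to each external connection and add them as upstreams of the repository you actually use. Repository names here are made up, and the domain is assumed to be the aws_codeartifact_domain.example from the snippet above:

resource "aws_codeartifact_repository" "npm_store" {
  repository = "npm-store"
  domain     = aws_codeartifact_domain.example.domain

  external_connections {
    external_connection_name = "public:npmjs"
  }
}

resource "aws_codeartifact_repository" "pypi_store" {
  repository = "pypi-store"
  domain     = aws_codeartifact_domain.example.domain

  external_connections {
    external_connection_name = "public:pypi"
  }
}

# One repository can have several upstreams, even though it can only have a
# single external connection of its own.
resource "aws_codeartifact_repository" "shared" {
  repository = "shared"
  domain     = aws_codeartifact_domain.example.domain

  upstream {
    repository_name = aws_codeartifact_repository.npm_store.repository
  }

  upstream {
    repository_name = aws_codeartifact_repository.pypi_store.repository
  }
}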
Oleksandr Lytvyn over 1 year ago (edited)
Please advise: does anyone know someone from the Terraform community or HashiCorp?
We have 3 Terraform providers in the Registry; two of them are deprecated, and 1 is current. I'm trying to understand whether there is a way to add some kind of message on the TF Registry page saying that the old TF provider is deprecated and that people should use the new one (with a link to it), and who I should contact to make this happen.
PS. I understand this is probably somewhat not perfect place to post it, but giving it a shot anyways
PePe Amengual over 1 year ago
Was there a trick to have locals scoped only to the .tf file where they are defined? Am I dreaming, or do I remember there was a way?
andrew_pintxo over 1 year ago
When creating an ECS task with this module
module "pdf_ecs_task" {
source = "cloudposse/ecs-alb-service-task/aws"
version = "0.73.0"Task needs to run in a private subnet and be accessible over load balancer
Terraform module to create an ECS Service for a web app (task), and an ALB target group to route requests.
I am a bit confused, where this ALB target group is created and does it have some output?
As well I cant understand how do I connects my ECS task with Load Balancer, as I found in documentation an attribute:
ecs_load_balancers = [
{
elb_name = null
target_group_arn = "what target group should I provide here?"
container_name = "container name"
container_port = 80
}
]and I need to pass object like above. But have no idea where should I get this target group, cuz module says in a statement it will be created )))
Can someone help with explanation?
Thank you
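For context, one common pattern (an assumption about intent, not necessarily how the Cloud Posse examples wire it up) is to create the ALB target group yourself and pass its ARN into ecs_load_balancers. The names below are hypothetical:

variable "vpc_id" { type = string }

resource "aws_lb_target_group" "pdf" {
  name        = "pdf-service"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip" # awsvpc/Fargate tasks register by IP
}

# Then reference it from the module input shown above:
#   ecs_load_balancers = [
#     {
#       elb_name         = null
#       target_group_arn = aws_lb_target_group.pdf.arn
#       container_name   = "pdf"
#       container_port   = 80
#     }
#   ]

A listener or listener rule on the ALB still has to forward traffic to that target group.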
rss over 1 year ago (edited)
v1.9.0
1.9.0 (June 26, 2024)
If you are upgrading from an earlier minor release, please refer to the Terraform v1.9 Upgrade Guide.
NEW FEATURES:
Input variable validation rules can refer to other objects: Previously input variable validation rules could refer only to the variable being validated. Now they are general expressions, similar to those elsewhere in a module, which can refer to other...
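A minimal sketch of what that new cross-object validation looks like (variable names invented):

variable "instance_count" {
  type = number
}

variable "instance_names" {
  type = list(string)

  validation {
    # In Terraform >= 1.9 a validation condition may reference other
    # variables, not just the variable being validated.
    condition     = length(var.instance_names) == var.instance_count
    error_message = "instance_names must contain exactly instance_count entries."
  }
}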
Kyle Johnson over 1 year ago
is terraform cloud effectively no longer offering free state storage? logged in to it for the first time in many months and everything is plastered with ads for me to “upgrade to the new free plan” which stores… 500 objects
we dont use it to execute plans or anything, its solely used for remote state storage
George Fahmy over 1 year ago (edited)
We forced an LLM to generate 100% syntactically valid Terraform and it did this while we were testing its limits
Erik Osterman (Cloud Posse) over 1 year ago
@U079VSXEQ5D
PePe Amengual over 1 year ago
So, I know there is a difference between the for_each on a resource and the for_each on a dynamic block in the way they can deal with list(object()). I'm pretty sure someone at some point posted a link with a pretty detailed explanation as to why. Could you share that again? If not, reply to this 🧵. Thanks
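In the meantime, a minimal sketch of the difference as generally documented (variable and resource names are made up): resource-level for_each only accepts a map or a set of strings, because each key becomes part of a resource address, while a dynamic block's for_each accepts any collection, including list(object()), since nested blocks are not individually addressable.

variable "rules" {
  type = list(object({
    name  = string
    port  = number
    cidrs = list(string)
  }))
}

variable "vpc_id" { type = string }
variable "security_group_id" { type = string }

# Resource for_each: project the list(object()) into a map with stable keys.
resource "aws_security_group_rule" "ingress" {
  for_each = { for r in var.rules : r.name => r }

  type              = "ingress"
  from_port         = each.value.port
  to_port           = each.value.port
  protocol          = "tcp"
  cidr_blocks       = each.value.cidrs
  security_group_id = var.security_group_id
}

# Dynamic block for_each: the raw list(object()) is fine as-is.
resource "aws_security_group" "this" {
  name   = "example"
  vpc_id = var.vpc_id

  dynamic "ingress" {
    for_each = var.rules
    content {
      description = ingress.value.name
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ingress.value.cidrs
    }
  }
}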