22 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Luke Hobbsalmost 4 years ago
Hey all, how does everyone manage the initial IAM Role creation at your organizations?
Specifically, we’ve got a multi-account structure and are looking to set up some Organization Access Roles that we can use to create other IAM Roles with more restrictive permissions. It seems like this will need to be run from a developer/administrator laptop using IAM User credentials, but I'm curious how others approach initial IAM Role setup to enable developers in new AWS accounts.
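A minimal sketch of one common bootstrap pattern, assuming the default OrganizationAccountAccessRole that AWS Organizations creates in member accounts (account IDs and role names below are illustrative):
# Provider alias for a member account, assumed from the management account.
provider "aws" {
  alias  = "member"
  region = "us-east-1"

  assume_role {
    # Created by default in accounts provisioned through AWS Organizations.
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  }
}

# A more restrictive role created inside the member account.
resource "aws_iam_role" "developer" {
  provider = aws.member
  name     = "developer" # illustrative name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::222222222222:root" } # illustrative identity account
    }]
  })
}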
Soren Jensenalmost 4 years ago
Follow-up question to the above. We also use AWS Org + SSO. When we create a new account we obviously get a root user in the new account. To be CIS compliant we need to enable MFA, ideally hardware MFA, for that account. Has anyone managed to automate that? Ideally with Terraform.
Chin Samalmost 4 years ago
Hi everyone! I just joined your Slack. I am using your module and am new to Terraform. Can someone please give me a hint: can I use a rate/cron expression with the module you provide? https://github.com/cloudposse/terraform-aws-cloudwatch-events/tree/0.5.0 Thank you very much.
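Whether the 0.5.0 module exposes a schedule input isn't verified here, but the underlying Terraform resource accepts both expression forms, so a plain resource alongside the module is one hedged fallback:
# aws_cloudwatch_event_rule supports both rate() and cron() forms.
resource "aws_cloudwatch_event_rule" "every_five_minutes" {
  name                = "every-five-minutes" # illustrative name
  schedule_expression = "rate(5 minutes)"    # or: "cron(0 12 * * ? *)"
}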
Bhavik Patelalmost 4 years ago
Has anyone had any issues trying to EXEC into a Fargate instance? I’m getting the following error and our team is pretty stumped with this one.
An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later.
Of the 6/7 ECS clusters we have, we are able to exec in. The only material difference this cluster has is that we are using an NLB instead of an ALB …
This is a recent issue for us, without any changes to our infrastructure.
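TargetNotConnectedException from ECS Exec is commonly traced to the task role (not the execution role) missing the SSM Session Manager permissions, or to tasks lacking a network path to the ssmmessages endpoints; a minimal sketch of the task-role policy, assuming a task role defined elsewhere:
# ECS Exec rides on SSM Session Manager; the task role needs these actions.
data "aws_iam_policy_document" "ecs_exec" {
  statement {
    effect = "Allow"
    actions = [
      "ssmmessages:CreateControlChannel",
      "ssmmessages:CreateDataChannel",
      "ssmmessages:OpenControlChannel",
      "ssmmessages:OpenDataChannel",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "ecs_exec" {
  name   = "ecs-exec"           # illustrative name
  role   = aws_iam_role.task.id # assumed task role, defined elsewhere
  policy = data.aws_iam_policy_document.ecs_exec.json
}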
Dhiaalmost 4 years ago
👋 Hello, team!
Wilalmost 4 years ago
Howdy y'all... question. I'm using the terraform-aws-ec2-instance module and want to add some lifecycle --> ignore_changes options so boxes don't rebuild.
I went and created these options and was going to open a pull request, but found that you cannot use variables inside the lifecycle stanza. So... how are people getting around this?
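Terraform requires lifecycle meta-arguments to be literal, which is why the module cannot take them as variables; the usual workaround is a fork or thin wrapper with the block hard-coded. A minimal sketch (the ignored attributes are illustrative):
resource "aws_instance" "default" {
  ami           = var.ami
  instance_type = var.instance_type

  lifecycle {
    # Must be a static list; Terraform does not allow variables here.
    ignore_changes = [ami, user_data]
  }
}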
Dev Jadhavalmost 4 years ago
Can anyone help me set up Airflow in EKS using Terraform?
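One hedged approach is the official Apache Airflow Helm chart driven by the helm provider pointed at the EKS cluster; a minimal sketch (namespace and values are illustrative):
resource "helm_release" "airflow" {
  name             = "airflow"
  repository       = "https://airflow.apache.org"
  chart            = "airflow"
  namespace        = "airflow"
  create_namespace = true

  # Run tasks as Kubernetes pods instead of the default Celery workers.
  set {
    name  = "executor"
    value = "KubernetesExecutor"
  }
}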
Sherifalmost 4 years ago
Shreyank Sharmaalmost 4 years ago
Dotnet lambda runtime version error
Hi All,
We are running a serverless Lambda application on the dotnetcore2.1 runtime (deployed through dotnet lambda deploy-serverless).
A couple of days back, someone on our team accidentally tried to delete the CloudFormation stack used to deploy this application.
But as he did not have the required permissions, the stack status changed to DELETE_FAILED.
Once the stack was in DELETE_FAILED, we were not able to update the application through CloudFormation.
Our deployment failed.
So we deleted the stack manually and redeployed using the dotnet lambda deploy-serverless command.
But we got the following error:
Resource handler returned message: "The runtime parameter of dotnetcore2.1 is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (dotnet6) while creating or updating functions."
As AWS was not allowing us to create a Lambda with dotnetcore2.1, we changed the code to dotnetcore3.1 in a separate git branch and deployed again, which worked.
There were still some bugs in the 3.1 code, so we were still debugging.
But yesterday one of our developers deployed the code from a different branch, which still had the stack runtime as 2.1.
Now the runtime has changed from 3.1 back to 2.1.
This confused us: AWS says that after end of support it is not possible to create, update, or roll back to an unsupported runtime (https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html).
Yet while CloudFormation did not allow us to create the application on the dotnetcore2.1 runtime, it did allow us to change the running dotnetcore3.1 application to dotnetcore2.1.
We tested the same thing in Python: we deployed an application running python3.9 and changed the runtime to the unsupported version 2.7.
It did not allow us to create in 2.7, but it did allow changing the application runtime from 3.9 to 2.7.
Now our question: as we are back to running our application on dotnetcore2.1, is it fine to continue with it for some time, since code updates are working?
Or will AWS one day suddenly stop allowing code updates, and is it better to move our application to dotnetcore3.1 now?
muhahaalmost 4 years ago
Hey 👋 Question.. Are you using custom NAT instances for private subnets? We have quite big spending on outbound traffic via NAT Gateway, so I'm wondering if custom NAT instances could do the job too. Of course it's possible, but is there any “cloud-native” way? Routing via Squid deployed in Kubernetes with observability support, or some VyOS auto scaling group? Ideas? Thanks
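There is no managed NAT-instance offering, but the plain EC2 pattern is small; a minimal sketch, assuming a NAT-capable AMI (e.g. fck-nat) and IDs supplied elsewhere:
# Disable the source/dest check and point the private route table's
# default route at the instance's primary network interface.
resource "aws_instance" "nat" {
  ami               = var.nat_ami_id # assumed NAT-capable AMI
  instance_type     = "t4g.nano"
  subnet_id         = var.public_subnet_id
  source_dest_check = false
}

resource "aws_route" "private_default" {
  route_table_id         = var.private_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  network_interface_id   = aws_instance.nat.primary_network_interface_id
}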
Vlad Ionescu (he/him)almost 4 years ago(edited)
“Scaling containers on AWS in 2022” is out and may be of interest to y’all!
PePe Amengualalmost 4 years ago
EventBridge vs SQS vs SNS: can someone please tell me why EventBridge is "better" than SQS from an architectural point of view, and maybe from a development/code point of view? I got asked some questions yesterday about details like messages/events per second, etc.
DaniC (he/him)almost 4 years ago
What do folks use as an observability platform when you are in a situation with multiple arch patterns?
• containers running in a single EC2
• serverless pattern ( high number of invocations)
• rds/ AMQ/ Elasticache/ DDB
We have tried New Relic but that was too expensive. We are now on Datadog and it is okay for now (it fixed a huge problem with devs having access to prod for troubleshooting etc.), especially profiling, but it is getting very expensive: each EC2 + container hours + logs ingested (not too bad) + logs indexed (scream! also because we have garbage and log every single line 😢).
Going in-house with maybe APM/AMG could be an option, but having 2 different UIs and then correlating things ...
I feel like any SaaS solution, be it honeycomb.io / dynatrace / logz.io etc., won't cut it unless we sort out the garbage going in....
David Spedziaalmost 4 years ago
Soren Jensenalmost 4 years ago
Hi All. Anyone know an easy way to make an S3 bucket policy that allows access to the bucket from any account in the AWS Organisation, but from no one outside the Org? As far as I can see it's only possible by listing all the account IDs.
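The aws:PrincipalOrgID condition key avoids enumerating account IDs; a minimal sketch granting org-wide read access (bucket reference and org ID are illustrative):
data "aws_iam_policy_document" "org_read" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.this.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    # Restricts the wildcard principal to members of the Organisation.
    condition {
      test     = "StringEquals"
      variable = "aws:PrincipalOrgID"
      values   = ["o-xxxxxxxxxx"] # your org ID
    }
  }
}

resource "aws_s3_bucket_policy" "this" {
  bucket = aws_s3_bucket.this.id
  policy = data.aws_iam_policy_document.org_read.json
}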
Balazs Vargaalmost 4 years ago
Question.. we are using spot instances and sometimes the termination handler cordons a node incorrectly... with the warning
WRN All retries failed, unable to complete the uncordon after reboot workflow error="timed out waiting for the condition"
If I SSH to the node I can get the metadata... what can be the issue?
Nimesh Aminalmost 4 years ago
Does anyone else here use AWS's managed Prometheus offering? I currently have it set up (along with Grafana) to just run on my nodes, but have been wondering if it's worth moving over from a cost/maintenance ROI perspective.
Jesus Martinezalmost 4 years ago
Hi, any functional Terraform example to create an Amazon Managed Workflows for Apache Airflow (MWAA) environment?
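The AWS provider's aws_mwaa_environment resource covers the basics; a bare-bones sketch, assuming the bucket, execution role, and networking are defined elsewhere:
resource "aws_mwaa_environment" "this" {
  name               = "example" # illustrative name
  dag_s3_path        = "dags/"
  source_bucket_arn  = aws_s3_bucket.airflow.arn
  execution_role_arn = aws_iam_role.mwaa_execution.arn

  network_configuration {
    security_group_ids = [aws_security_group.mwaa.id]
    subnet_ids         = var.private_subnet_ids # MWAA requires two private subnets
  }
}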
Josh B.almost 4 years ago
GM Folks,
I am wondering if anyone has thoughts on attaching multiple services to a single ALB using host-based routing vs. an ALB per "app or service". Also, what do you do more often than not that works best? TYIA
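For the shared-ALB approach, host-based routing is one listener rule per service; a minimal sketch, assuming a listener and target group defined elsewhere:
resource "aws_lb_listener_rule" "app" {
  listener_arn = aws_lb_listener.https.arn # assumed shared listener
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }

  condition {
    host_header {
      values = ["app.example.com"] # illustrative hostname
    }
  }
}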
Balazs Vargaalmost 4 years ago
Do you have issues with RDS? Currently we are facing connection issues to Aurora Serverless v1.
Eamon Keanealmost 4 years ago(edited)
Anyone got any thoughts on this module? It has about a person-year of effort from two AWS engineers. It seems reasonably flexible and well thought through, based on the video below, and is AWS's response to the hell of managing EKS clusters. I'm aware that AWS's Terraform modules have generally had breaking changes, but it seems like they're invested in maintaining this one.
https://www.youtube.com/watch?v=TXa-y-Uwh2w
https://github.com/aws-ia/terraform-aws-eks-blueprints
Nick Kocharhookalmost 4 years ago(edited)
I’m trying to add multiple rules to a cloudposse security group with a rules block that looks like this:
rules = [
{
key = "HTTP"
type = "ingress"
from_port = 5050
to_port = 5050
protocol = "tcp"
cidr_blocks = module.subnets.public_subnet_cidrs
self = null
description = "Allow HTTP from IPs in our public subnets (which includes the ALB)"
},
{
key = "SSH"
type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
self = null
description = "Allow SSH from all IPs"
}
]
This is failing with:
Error: Invalid value for module argument
The given value is not suitable for child module variable “rules” defined
at .terraform/modules/project_module.sg/variables.tf:60,1-17: element types
must all match for conversion to list.
And the problem is the cidr_blocks. If I replace the first one with ["0.0.0.0/0"] it works. I see that the output from the aws-dynamic-subnets module is aws_subnet.public.*.cidr_block. The current value of the cidr_blocks variable in the resource is ["172.16.96.0/19", "172.16.128.0/19"], which sure looks like a list of strings to me. When I open terraform console and ask for the type of public_subnet_cidrs, I just get dynamic. I’ve tried wrapping the output in tolist() and adding an empty string to the cidr_blocks array in the second ingress rule, but neither changes the error.
Anybody have any idea what I’m doing wrong here?