31 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Adrian Rodzik about 3 years ago
Hello everyone,
I have a default Tag Policy applied to my AWS organisation. I've added some new tags to this policy and reattached it, but it seems it's not applied. For example, I want to add those new tags to my EC2 instances, but the new tags are not available for them. The already existing tags are in place. Any idea what I'm doing wrong?
Thanks in advance!
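For reference, a tag policy document defines compliance rules per tag key under a top-level "tags" block, and new keys only take effect once the edited policy is re-attached; note that the policy governs compliance rather than creating tags on resources. A minimal sketch of the shape (the key and values here are placeholders):

    {
      "tags": {
        "costcenter": {
          "tag_key": { "@@assign": "CostCenter" },
          "tag_value": { "@@assign": ["100", "200"] },
          "enforced_for": { "@@assign": ["ec2:instance"] }
        }
      }
    }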
Ashwin Jacob about 3 years ago
Hello Everyone!
I am creating 2 VPCs: Dev and Production. They each have their own CIDR range and are on us-east-1. I am using Tailscale to connect to the private instances. I am trying to figure out how to work on Step 3, where I need to add AWS DNS to my tailnet. I got it working in DEV perfectly. As I work in production, I am realizing that there is a conflict in the search domain (on Tailscale). Both search domains are us-east-1.compute.internal. How do I separate DEV and PROD even though they are in the same region?
brady about 3 years ago
I created a new ed25519 key pair in the AWS console and I've got the .pem file. But I can't figure out how to get the public key from it. My googling tells me that openssl pkey -in private.pem -pubout should do it, but instead I get "Could not read key from private.pem". Anyone know the correct incantation to get the public key?
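One likely cause: AWS delivers ed25519 private keys in OpenSSH format, which openssl pkey does not parse; ssh-keygen can derive the public half from the private file directly. A sketch, assuming private.pem is such a key:

    # Print the public key embedded in the OpenSSH-format private key
    ssh-keygen -y -f private.pem > private.pub

Kirupa Karan about 3 years ago (edited)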
Hello everyone,
I need to write a Lambda which should log in to the instance, execute the command (service nginx status), print the result, and post the result to Gmail.
Kirupa Karan about 3 years ago
Is there any possible way to automate this?
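One common pattern is to avoid SSH entirely: run the command via SSM Run Command from the Lambda, then email the output via SES rather than Gmail directly. A rough sketch, assuming the instance runs the SSM agent with an instance profile that permits it; the instance ID and addresses are placeholders:

    import time
    import boto3

    INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

    ssm = boto3.client("ssm")
    ses = boto3.client("ses")

    def lambda_handler(event, context):
        # Run the status check on the instance via SSM instead of SSH
        cmd = ssm.send_command(
            InstanceIds=[INSTANCE_ID],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": ["service nginx status"]},
        )
        command_id = cmd["Command"]["CommandId"]
        time.sleep(5)  # crude; poll get_command_invocation status in real code
        result = ssm.get_command_invocation(
            CommandId=command_id, InstanceId=INSTANCE_ID
        )
        output = result["StandardOutputContent"]
        # Mail the output via SES (the "from" address must be SES-verified);
        # posting to Gmail directly would instead need smtplib + credentials.
        ses.send_email(
            Source="ops@example.com",
            Destination={"ToAddresses": ["someone@gmail.com"]},
            Message={
                "Subject": {"Data": "nginx status"},
                "Body": {"Text": {"Data": output}},
            },
        )
        return output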
John Stilia about 3 years ago
Hi all,
I am working on a personal project.
Without writing a long message 🙂 I am going to have 2 Lambdas (one for GET and one for POST, each attached to one API GW).
I am throttling via a UsagePlan on the API GW and the use of keys (not for auth, just for the usage plan).
The Lambdas will be hitting an RDS instance.
I am deploying these with SAM.
Would you have any advice for me that I should consider?
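For the throttling piece, SAM can create the API key, usage plan, and their association for you on an AWS::Serverless::Api resource; a sketch, with the limits as placeholder values:

    MyApi:
      Type: AWS::Serverless::Api
      Properties:
        StageName: prod
        Auth:
          ApiKeyRequired: true        # every method requires the key
          UsagePlan:
            CreateUsagePlan: PER_API  # SAM creates key + plan + association
            Throttle:
              RateLimit: 10           # steady-state requests/second
              BurstLimit: 20
            Quota:
              Limit: 10000
              Period: MONTH

Since each concurrent Lambda execution opens its own DB connection, RDS Proxy (or at least conservative pool settings) is also worth considering in front of the RDS instance.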
Vladimir about 3 years ago
Hi, does the AWS Beanstalk provider create an RDS instance implicitly?
John Stilia about 3 years ago
is this channel pretty dead?
John Stilia about 3 years ago
?
Soren Jensen about 3 years ago
Nope, not dead.. There is plenty of life here as well as in other channels 🙂
Erik Osterman (Cloud Posse) about 3 years ago
Yes, as @Soren Jensen said - it's very much alive 🙂
John Stilia about 3 years ago
I could use some help here
I have the following Lambda.
It sends data to an RDS instance, so I need it to be in the same VPC and subnet as the RDS (or do I?).
I also need to get a secret from Secrets Manager, so I have attached a policy.
But when I add the VpcConfig, I can't get the secret anymore. Any thoughts? Because I am very confused 🙂
HelloWorldFunctionGET:
  Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
  Properties:
    FunctionName: HelloWorldFunctionGET
    CodeUri: rest-listener/
    Handler: app.lambda_handler
    Runtime: python3.9
    VpcConfig:
      SecurityGroupIds:
        - sg-XXXXX
      SubnetIds:
        - subnet-XXXX
    Architectures:
      - x86_64
    Events:
      HelloWorld:
        Type: Api
        Properties:
          Path: /hello
          Method: get
          Auth:
            ApiKeyRequired: true
IAMP2L87H:
  Type: "AWS::IAM::Policy"
  Properties:
    PolicyName: "IAMP2L87H"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Action:
            - "secretsmanager:GetSecretValue"
          Resource: "arn:aws:secretsmanager:XXXXXXXXXXX"
    Roles:
      - !Ref "HelloWorldFunctionGETRole"
Darren Cunningham about 3 years ago
1. Can't answer about RDS since there aren't any details about the RDS VPC configuration.
2. Check that your security group & subnet NACL allow HTTPS; if you're using a VPC Endpoint for SSM, then check that the SG allows inbound from your subnet CIDR.
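The usual culprit for this symptom: a Lambda attached to a VPC loses its default internet path, so Secrets Manager calls hang unless the subnet routes through a NAT gateway or the VPC has an interface endpoint for the service. A CloudFormation sketch of such an endpoint (all IDs are placeholders):

    SecretsManagerEndpoint:
      Type: AWS::EC2::VPCEndpoint
      Properties:
        ServiceName: !Sub com.amazonaws.${AWS::Region}.secretsmanager
        VpcEndpointType: Interface
        VpcId: vpc-XXXXX        # same VPC as the Lambda
        SubnetIds:
          - subnet-XXXX         # the Lambda's subnet
        SecurityGroupIds:
          - sg-XXXXX            # must allow inbound 443 from the Lambda's SG
        PrivateDnsEnabled: true # default Secrets Manager hostname resolves here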
John Stilia about 3 years ago
Hi folks,
If I have an account A and an account B,
and account A has a Route 53 zone for example.com,
how can I create R53 entries in account B that will be forwarded to account A?
(I am confused about where I need to put which nameservers.)
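If the goal is for account B to own a subdomain, the standard pattern is delegation: create a hosted zone for the subdomain in account B, then add an NS record for it in account A's example.com zone pointing at the new zone's four name servers. A CLI sketch (names and the caller reference are placeholders):

    # In account B: create the subdomain zone; note the NS records it returns
    aws route53 create-hosted-zone --name sub.example.com --caller-reference my-ref-1
    # In account A: add an NS record for sub.example.com to the example.com
    # zone, with the four name servers returned above as its values.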
Adnan about 3 years ago
I am using aurora serverless v2 with 2 instances (multi az, writer/reader).
Only the writer is effectively used by an application.
The reader is pretty much useless atm except for failover.
Yet, even though it's only doing replication, it is using pretty much the same amount of ACUs as the writer.
I would expect it to use much less, and so save some money when not used.
Anybody using multi-az aurora serverless v2?
Is this behaviour normal?
Is there any way to change it 🙂?
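For what it's worth, this matches the documented behaviour: Serverless v2 readers in promotion tiers 0-1 are scaled to track the writer so failover is instant, while readers in tiers 2-15 scale on their own load. A Terraform sketch of the latter (resource names are placeholders):

    resource "aws_rds_cluster_instance" "reader" {
      cluster_identifier = aws_rds_cluster.this.id
      engine             = aws_rds_cluster.this.engine
      instance_class     = "db.serverless"
      # Tiers 2-15 let the reader scale independently of the writer;
      # tiers 0-1 keep it sized to match the writer for fast failover.
      promotion_tier     = 2
    }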
John Stilia about 3 years ago
Hi Sweetops
I have created some resources from the console, and when I use CF to manage them it of course complains that those resources already exist.
Is there a way to either force the creation or import that state into CF?
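CloudFormation has a resource import flow for exactly this: describe the existing resources in a mapping file, then create a change set of type IMPORT (every imported resource must carry a DeletionPolicy in the template). A CLI sketch (stack and file names are placeholders):

    aws cloudformation create-change-set \
      --stack-name my-stack \
      --change-set-name import-existing \
      --change-set-type IMPORT \
      --resources-to-import file://resources-to-import.json \
      --template-body file://template.yaml
    # Review the change set, then execute it:
    aws cloudformation execute-change-set --stack-name my-stack --change-set-name import-existing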
Adnan about 3 years ago
I want to do something in a lambda upon a DB cluster being available.
I cannot find an event that says "DB cluster available" but there is
one that says "DB cluster created" - RDS-EVENT-0170
Would this event firing also mean that the cluster is available?
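"Created" and "available" are distinct states, so one defensive approach is to treat the event purely as a trigger and poll until the status flips. A Python sketch (the cluster identifier is a placeholder; mind Lambda's 15-minute cap if the wait can run long):

    import time
    import boto3

    rds = boto3.client("rds")

    def wait_until_available(cluster_id="my-cluster", delay=30):
        # RDS-EVENT-0170 only signals creation; poll until the cluster
        # reports "available" before doing any real work against it.
        while True:
            status = rds.describe_db_clusters(
                DBClusterIdentifier=cluster_id
            )["DBClusters"][0]["Status"]
            if status == "available":
                return
            time.sleep(delay)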
Yonatan Koren about 3 years ago (edited)
Hey all,
What is the best way to do a sweeping destroy, or "nuke", a bunch of AWS resources that all consistently have one tag in particular? For example, Environment: Foo.
Think when you have some resources that were all spun up by Terraform some time ago and all have this consistent tag, but the Terraform config is so foobar'd that you cannot run a terraform destroy.
I thought aws-nuke would be the absolute perfect candidate for this, but when trying to write an aws-nuke config that targets this tag across all resources, I ran into this issue, which shows that you have to know every resource type beforehand and write a filter for that resource (that filters for Environment: Foo).
So my best bet is to write a bash script that iterates over aws-nuke resource-types and spits out a YAML list item with that filter, and then shove that massive config into aws-nuke.
Or maybe someone knows of a different tool that can fulfill this use case? 🙂
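A rough sketch of that generator script, assuming aws-nuke's inverted tag filters behave as described in its docs (filters normally exclude matching resources, so invert: true leaves only the tagged ones eligible):

    # Emit a filter stanza per resource type so everything NOT tagged
    # Environment=Foo is excluded, leaving only tagged resources to nuke.
    aws-nuke resource-types | while read -r type; do
      printf '  %s:\n    - property: "tag:Environment"\n      value: "Foo"\n      invert: true\n' "$type"
    done > filters.yaml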
Shreyank Sharma about 3 years ago
Hello All,
We are using AWS Lambdas, and some of them run every 5 minutes and generate a lot of CloudWatch logs.
If I go to that Lambda's log group and click on any log stream generated in the last 2 days, it takes time but it loads all the logs. If I go back more than 2 days, like 3 or 4 days, it loads for some time and then just shows empty; but if I filter for some word like "bill", then it shows the logs which have the word "bill" in them.
So old logs will not show unless I apply a filter. Has anyone faced this issue?
Would it help if I cleared the old logs? Right now it's configured to keep them forever.
Thank you
Darren Cunningham about 3 years ago
sounds like a bug that should be raised to AWS CloudWatch Support
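On the "keep forever" point: retention won't explain the console behaviour, but capping it keeps streams smaller and the bill down. A one-line sketch (the log group name and day count are placeholders):

    # Expire events older than 30 days in this log group
    aws logs put-retention-policy --log-group-name /aws/lambda/my-function --retention-in-days 30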
Vicente about 3 years ago
Hi, I have one ECS container running cron jobs inside. Would CodeDeploy wait for all the processes to finish when making a blue/green deployment, before deleting the container?
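From what I understand, CodeDeploy does not watch for in-flight work; after traffic shifts it simply keeps the old (blue) task set around for a configurable wait before terminating it. A sketch of the relevant deployment group setting (the wait value is a placeholder):

    "blueGreenDeploymentConfiguration": {
      "terminateBlueInstancesOnDeploymentSuccess": {
        "action": "TERMINATE",
        "terminationWaitTimeInMinutes": 60
      }
    }

Jobs that can outlive the wait window get killed with the task set, so long-running crons may fit better as scheduled ECS tasks than as a daemon inside the service container.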
John Stilia about 3 years ago
hi all
I have a Lambda + API GW that does some DB work.
The API GW resolves on api.domain.com/get-data or api.domain.com/put-data.
I would like to have the notion of API versioning, for example api.domain.com/v1/get-data etc.
Any ideas how I can do that?
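One common approach with REST API Gateway: keep each version as a stage (or a separate API) and map it under a base path on the custom domain. A CLI sketch (the API ID is a placeholder):

    # Serve the "prod" stage of this API under api.domain.com/v1/...
    aws apigateway create-base-path-mapping \
      --domain-name api.domain.com \
      --base-path v1 \
      --rest-api-id abc123 \
      --stage prod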
Martin Helfert about 3 years ago
Hey. Would you recommend paid AWS support plans? We currently have a Business support plan for our prod account, which in fact we didn't use for the last two years. Leave it like that "just in case", or switch it on/off whenever we have the need for extended support? How do you handle this? The costs add up to a fairly huge amount..
fotag about 3 years ago
Hello 🙂!
Some questions related to an AWS Aurora (Postgres-compatible) Serverless v2 cluster with Multi-AZ (created via Terraform with count=2 on aws_rds_cluster_instance; no multi-AZ config is available via TF for Serverless v2), in case anyone knows:
• Does this cluster use all the instances created there (writer and readers) via the cluster endpoint, or are the readers standby until they are needed?
• Do we have to specify allocated storage and storage type?
Steven Miller about 3 years ago
Is anyone using multi-architecture EKS + self-hosted GitHub runners for multi-architecture builds? The point would be to allow something like this in GitHub workflows:
runs-on: ubuntu-20.04-arm64-large
Is that overly complicated or even a valid concept? Is there some easier option we haven't considered for self-hosted multi-architecture builds? Or maybe we just go with GitHub-hosted runners. Is it already built in, like Karpenter can deploy arm nodes noting the target architecture of the pods or something like that?
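The scheduling half mostly comes built in: runner pods that request an architecture via the standard node label only land on matching nodes, and Karpenter can provision arm64 capacity to satisfy them. A pod-spec fragment sketch:

    # Fragment of a self-hosted runner pod spec: pinning the pod to arm64
    # nodes forces Karpenter (or the autoscaler) to supply arm64 capacity.
    nodeSelector:
      kubernetes.io/arch: arm64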
Soren Jensen about 3 years ago
Anyone know if you can subscribe to notifications when new AMIs are released by AWS? AWS Inspector is identifying an issue in the latest image, so it would be nice to know when to redeploy.
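For Amazon Linux at least, AWS publishes AMI release notifications to a public SNS topic you can subscribe to; a sketch for the documented Amazon Linux 2 topic (the email endpoint is a placeholder, and the topic lives in us-east-1):

    aws sns subscribe \
      --region us-east-1 \
      --topic-arn arn:aws:sns:us-east-1:137112412989:amazon-linux-2-ami-updates \
      --protocol email \
      --notification-endpoint me@example.com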
John Stilia about 3 years ago (edited)
Hi all,
I have a standard API -> Lambda -> Dynamo app.
Currently I have the API on an aggressive throttle and quota, to minimise usage and avoid denial of wallet (DOW).
Would you have any other advice on how to go about it without using expensive resources like WAF? (It's only an open-source VSCode extension I am building 🙂)
Thanks in advance
Adnan about 3 years ago
If you need documentation and lists related to permissions in one place, this might be useful ...
https://aws.permissions.cloud/
John Stilia about 3 years ago
hi folks
If I have applied a Usage Plan on my REST API GW:
after the quota is met and I get HTTP 429, do I get charged for any subsequent API calls, or does AWS take care of it?
Can't find anything in the docs.
Alex Atkinson about 3 years ago
I haven't looked at the west regions for quite a while. Is us-west-1 over capacity lately? us-west-1c is unavailable.
aws ec2 describe-availability-zones --region us-west-1
{
    "AvailabilityZones": [
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-1",
            "ZoneName": "us-west-1a",
            "ZoneId": "usw1-az3",
            "GroupName": "us-west-1",
            "NetworkBorderGroup": "us-west-1",
            "ZoneType": "availability-zone"
        },
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-1",
            "ZoneName": "us-west-1b",
            "ZoneId": "usw1-az1",
            "GroupName": "us-west-1",
            "NetworkBorderGroup": "us-west-1",
            "ZoneType": "availability-zone"
        }
    ]
}
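Worth noting: us-west-1 exposes only two availability zones per account (which two varies by account), so seeing just 1a and 1b is expected rather than a capacity issue. Zones unavailable to the account can still be listed with the --all-availability-zones flag:

    aws ec2 describe-availability-zones --region us-west-1 --all-availability-zones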