56 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Adnanover 2 years ago
"A pull through cache is a way to cache images you use from an upstream repository. Container images are copied and kept up-to-date without giving you a direct dependency on the external registry. If the upstream registry or container image becomes unavailable, then your cached copy can still be used."
https://aws.amazon.com/blogs/containers/announcing-pull-through-cache-for-registry-k8s-io-in-amazon-elastic-container-registry/
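As a sketch of what enabling this looks like: a single cache rule maps a repository prefix in your private registry to the upstream. The `k8s` prefix below is an assumption; the call shown in the comment is boto3's `create_pull_through_cache_rule`.

```python
def pull_through_cache_rule_params(prefix: str, upstream: str) -> dict:
    """Kwargs for boto3 ecr.create_pull_through_cache_rule().

    After the rule exists, pulling
    <account>.dkr.ecr.<region>.amazonaws.com/<prefix>/<image>
    fetches from the upstream on a miss and caches the image
    in your private registry.
    """
    return {
        "ecrRepositoryPrefix": prefix,   # "k8s" is an assumed prefix
        "upstreamRegistryUrl": upstream,
    }

params = pull_through_cache_rule_params("k8s", "registry.k8s.io")
# boto3.client("ecr").create_pull_through_cache_rule(**params)
print(params)
```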
Vicenteover 2 years ago
Hello! I'm trying to create a 301 redirect with S3 and CloudFront. The main idea is to redirect traffic from support.example.com to our Atlassian service desk portal URL. CloudFront's behaviour is to redirect HTTP to HTTPS, and there's a valid certificate from ACM added to CloudFront for *.example.com; however, I get this error from Chrome when trying to test the redirect: NET::ERR_CERT_COMMON_NAME_INVALID. Any ideas what could be wrong?
Adnanover 2 years ago
For those who are using AWS Identity Center (SSO)
• Are you using primarily permission sets for cross-account access?
• Are you using primarily self managed roles for cross-account access?
Why do you use one over the other?
Slackbotover 2 years ago
This message was deleted.
M Asifover 2 years ago
Hi. I want to import an existing ElastiCache cluster into Terraform, but I'm not sure how the replication group fits in. Should I create a replication group and import the cluster into it, or what?
Jim Parkover 2 years ago
Your best bet is to get the specific spec using describe_replication_group and then write the terraform resource to match. Then import it, yes. Do a terraform plan and if you note any deviations, then fix the resource definition to match.
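A hedged sketch of the "read the live spec, then mirror it" step: the function below maps one entry of boto3's `elasticache.describe_replication_groups()` response to the matching `aws_elasticache_replication_group` arguments. The field selection is illustrative, not exhaustive, and the sample response is made up.

```python
def tf_attrs_from_response(rg: dict) -> dict:
    """Map one entry of describe_replication_groups()["ReplicationGroups"]
    to matching aws_elasticache_replication_group arguments (a subset)."""
    return {
        "replication_group_id": rg["ReplicationGroupId"],
        "num_cache_clusters": len(rg["MemberClusters"]),
        # API reports "enabled"/"enabling"/"disabled"/"disabling"
        "automatic_failover_enabled": rg["AutomaticFailover"] in ("enabled", "enabling"),
        "multi_az_enabled": rg.get("MultiAZ") == "enabled",
    }

# Abridged, invented example of the boto3 response shape:
sample = {
    "ReplicationGroupId": "my-redis",
    "MemberClusters": ["my-redis-001", "my-redis-002"],
    "AutomaticFailover": "enabled",
    "MultiAZ": "enabled",
}
print(tf_attrs_from_response(sample))
# Then: terraform import aws_elasticache_replication_group.this my-redis
```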
Oleh Kopylover 2 years ago
Guys, how can I redeploy tasks after I push another Docker image to ECR?
When i press "Update service" and tick "force deploy", it does not do anything.
So tired of going manually through deleting a service -> waiting until it's finished the deletion (cause otherwise my tasks fail, probably due to the quota limit, so they have to finish deleting first) -> creating another service from scratch...
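For what it's worth, the API equivalent of that console button is `update_service` with `forceNewDeployment=True`; with a mutable tag like `:latest`, the replacement tasks re-pull the image. Cluster and service names below are placeholders.

```python
def force_redeploy_params(cluster: str, service: str) -> dict:
    """Kwargs for boto3 ecs.update_service() that start a fresh deployment
    of the same task definition, re-pulling a mutable image tag."""
    return {
        "cluster": cluster,
        "service": service,
        "forceNewDeployment": True,
    }

params = force_redeploy_params("my-cluster", "my-service")
# boto3.client("ecs").update_service(**params)
print(params)
```

Note that if the image is pushed under a new tag, a new task definition revision pointing at that tag is needed instead; forcing a deployment alone won't change which tag the service runs.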
Oleh Kopylover 2 years ago
AWS support is horrible
support: All restrictions on your account have been lifted.
me: What were the restrictions?
support: https://i.imgur.com/TuC0If7.jpg
me: You said "All restrictions on your account have been lifted.". So what were the restrictions?
support: https://i.imgur.com/imFbdjL.jpg
support: "I understand"
are they on crack?
jonjitsuover 2 years ago
Any thoughts/opinions on orgformation as an alternative to Control Tower/Landing Zones?
Patrick McDonaldover 2 years ago
anyone impacted by the current aws us-east-1 outage?
Wayne Jessenover 2 years ago
Yup. We are all waiting around and can’t do anything about it.
mikeover 2 years ago
us-east-1 with Lambda and API Gateway both being down is gonna be a bad time for a large swath of aws
Patrick McDonaldover 2 years ago
CloudFormation and Lambda are affected. I wonder if this only impacts provisioning of managed services
PePe Amengualover 2 years ago
Has anyone implemented a TCP/UDP proxy for instances in AWS? Pure forwarding of ports to different instances, like port 1 to instance 1, port 2 to instance 2, etc.? I wonder if there is a container with nginx or something else that has this built in to make it easier, instead of cooking my own image. I could use NLBs, but NLBs are layer 4 and do not support SGs, so once it's public I need to use NACLs to close the access, and I would like to avoid that.
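One option along these lines, sketched with placeholder ports and addresses: the stock nginx image ships with the stream module, so plain configuration gives layer-4 port-to-instance forwarding without building a custom image.

```
# /etc/nginx/nginx.conf -- layer-4 forwarding with the stock nginx image
# (ports and instance IPs below are placeholders)
stream {
    server {
        listen 9001;                 # port 1 -> instance 1
        proxy_pass 10.0.1.10:9001;
    }
    server {
        listen 9002 udp;             # UDP forwarding works too
        proxy_pass 10.0.1.11:9002;
    }
}
```

The nginx host then sits in a security group of your choosing, avoiding the NACL workaround.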
Oleh Kopylover 2 years ago
Is there any easy way to launch a Docker container on AWS from ECR without a complex cluster + task + service setup on ECS?
If there is such a complex setup just for playing around with one server, there is no point in ECR – better to set up EC2 manually…
John Bedalovover 2 years ago(edited)
this elasticache module doesn't seem to conveniently handle global clusters https://github.com/cloudposse/terraform-aws-elasticache-redis - is that correct?
Oleh Kopylover 2 years ago
Does anybody have experience with AWS Lightsail?
Is there any way to make the launch script work? If I execute these commands in a shell script by running ./install.sh, I can then find that my packages are installed, whereas with this – when I SSH into the instance – there are no such packages...
Oleh Kopylover 2 years ago
How to select arm/x86 for a lightsail container service?
Oleh Kopylover 2 years ago
Do you know why my Lightsail deployment fails?
It does have CMD in my Dockerfile
No launch command, environment variables, or open ports specified.
Oleh Kopylover 2 years ago
One of the worst development experiences with Lightsail too
Oleh Kopylover 2 years ago
And why does a deployment on Lightsail take so long? Crazy...
Oleh Kopylover 2 years ago
Deployment on Lightsail takes way longer than to Fargate...
Oleh Kopylover 2 years ago
Seeing this in Lightsail service deployment logs. Why? So for Fargate this Docker image is good and for Lightsail it's not, correct?
Oleh Kopylover 2 years ago
Yo. Please help me. I created a new task definition with .5 vCPU instead of .25 vCPU.
How can I update all the current tasks in a service to .5 vCPU?
I pressed "Update service" button, then checked "Force new deployment", then pressed "Update" button, got "Service updated:" message and my tasks are still .25 vCPU. Why?
Oleh Kopylover 2 years ago
I checked the "Deployments" tab and the last deployment has been "In progress" for an eternity now...
It's faster to remove a service and create a new service to update tasks... What the hell...
Oleh Kopylover 2 years ago(edited)
Guys, could you please recommend any Fargate autoscaling tutorials?
Don't recommend AWS docs please.
I set it up and it does not work...
Hao Wangover 2 years ago
I’ve got many clients in the same situation; it is frustrating, let us calm down first 👍️ I think the cause is some basic thing being missed, like the image architecture, as Alex mentioned
Hao Wangover 2 years ago
I was frustrated when I started using Docker in 2014, version 0.9 🙂
Hao Wangover 2 years ago
You are at the psychological turning point of learning tech. Don’t give up, but try different platforms; AWS may not be a good fit for you
Hao Wangover 2 years ago
Other platforms will have a similar issue though, since this doesn’t seem to be an AWS-specific issue
Hao Wangover 2 years ago
So back to the first point, which is that some basic stuff was overlooked… Do you have a writeup or source code?
Oleh Kopylover 2 years ago
Please tell me how to test docker lambda locally.
I found this: https://docs.aws.amazon.com/lambda/latest/dg/images-test.html
Ran the docker image like this:
docker run -p 9000:8080 ...
I got:
18 Jun 2023 01:34:24,542 [INFO] (rapid) exec '/var/runtime/bootstrap' (cwd=/var/task, handler=)
Went to http://127.0.0.1:9000/2023-06-18/functions/function/invocations , got nothing
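Two things worth checking in that flow: the runtime interface emulator's invoke path is fixed at `2015-03-31` (it is an API version, not today's date), and the endpoint answers to POST, so opening it in a browser (a GET) returns nothing. A small sketch, assuming the container is running on port 9000:

```python
import json
import urllib.request

def rie_url(port: int = 9000) -> str:
    # The Lambda runtime interface emulator path is a fixed API version,
    # not the current date.
    return f"http://localhost:{port}/2015-03-31/functions/function/invocations"

def invoke_local(payload: dict, port: int = 9000) -> bytes:
    # Must be a POST with a JSON body; requires
    # `docker run -p 9000:8080 <image>` to be running.
    req = urllib.request.Request(
        rie_url(port),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# invoke_local({"image_url": "https://..."})
```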
Oleh Kopylover 2 years ago
Why the hell does the local Lambda have a different event structure from the deployed function's event?
Oleh Kopylover 2 years ago
Here is my Lambda's handler:
def handler(event, context):
    image_url = event.get("image_url")
    token = event.get("headers", {}).get("Authorization")
    return {
        "headers": {"Content-Type": "application/json"},
        "statusCode": "status_code",
        "body": image_url,
        "token": token,
        "event": event,
    }
Here is my Python request:
import requests
payload = {
    "image_url": 'https://...',
    "return_version": True,
}
headers = {
    "Authorization": "tes",
}
res = requests.post("https://...", json=payload, headers=headers)
print(res.json())
Here is what I get in Python from the deployed Lambda invocation:
None
Here is what I get from the local Lambda invocation:
{'headers': {'Content-Type': 'application/json'}, 'statusCode': 'status_code', 'body': 'https://...', 'token': None, 'event': {'image_url': 'https://...', 'return_version': True}}
Here is what I get when I press "Test" in the AWS console: https://i.imgur.com/Vmuy01T.png
Why the hell is it the same function, but 3 different outputs? How am I supposed to test it locally if it gives me a different result in the deployed state?
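The three outputs are usually explained by three different event shapes: a direct or local invoke passes your JSON as the event itself, while an API Gateway (or function URL) proxy integration wraps it in an envelope where your JSON sits in `event["body"]` as a string, and the response must carry a string body and an integer statusCode. A handler sketch that tolerates both shapes (field names borrowed from the message above; note header casing can also differ between gateways):

```python
import json

def handler(event, context=None):
    # Proxy-style events (API Gateway / function URL) carry the request
    # JSON as a string under "body"; direct invokes pass the dict itself.
    if isinstance(event.get("body"), str):
        payload = json.loads(event["body"])
    else:
        payload = event
    image_url = payload.get("image_url")
    token = (event.get("headers") or {}).get("Authorization")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        # Proxy integrations require the body to be a JSON *string*.
        "body": json.dumps({"image_url": image_url, "token": token}),
    }

# Direct-invoke shape (what the local emulator sends):
print(handler({"image_url": "https://x", "headers": {"Authorization": "tes"}}))
# Proxy shape (what API Gateway sends):
print(handler({"body": '{"image_url": "https://x"}',
               "headers": {"Authorization": "tes"}}))
```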
Oleh Kopylover 2 years ago
Is there any way to get faster Lambda responses? My Lambda processes requests for too long compared to a regular EC2 instance...
Oleh Kopylover 2 years ago(edited)
Guys, could you please recommend any good course video or article on auto-scaling Fargate for beginners?
Everything I read and watch so far gives me more questions than answers...
Oleh Kopylover 2 years ago
Could you please recommend any good tutorials/courses on deploying Docker images to EKS and setting up auto-scaling?
Again, so tired of watching videos and reading articles which give me more questions than answers…
Oleh Kopylover 2 years ago(edited)
Guys, do you know how to edit a cloudwatch alarm for EC2 auto-scaling?
I was unable to Google anything on it (maybe I Googled it wrong...)
I need it to scale up if CPU is over 70% for 30 seconds
and scale down if CPU is under 70% for 30 seconds
Kunalsing Thakurover 2 years ago
To edit a CloudWatch alarm for EC2 auto-scaling, you can follow these steps:
1. Sign in to the AWS Management Console and open the Amazon CloudWatch service.
2. In the navigation pane, click on "Alarms" under the "CloudWatch" section.
3. Locate the alarm that is associated with your EC2 auto-scaling group and select it.
4. Click on the "Actions" dropdown menu and choose "Edit."
5. In the "Create/Edit Alarm" wizard, you can modify the alarm configuration to match your requirements.
- Under the "Conditions" section, select the "Static" option for the "Threshold type."
- For the "Whenever" condition, choose "Greater" and enter "70" in the text box.
- Set the "Period" to 30 seconds.
- Enable the "Consecutive periods" option and set it to "1."
- Choose the appropriate "Statistic" (e.g., "Average" CPU utilization) and adjust the "Datapoints to alarm" if needed.
6. Under the "Actions" section, click on the "Add notification action" button if you want to receive notifications when the alarm state changes.
7. Optionally, you can configure auto-scaling actions when the alarm state is triggered.
- Click on the "Add Scaling Action" button.
- Choose the appropriate scaling policy for scaling up and scaling down.
- Configure the desired scaling adjustments, such as the number of instances to add or remove.
- Save the scaling actions.
8. Review your changes and click on the "Save" button to update the alarm.
The edited CloudWatch alarm will now trigger scaling actions for your EC2 auto-scaling group based on the specified CPU utilization thresholds and duration.
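The console steps above boil down to one API call. A hedged sketch of the kwargs for boto3's `cloudwatch.put_metric_alarm()` (the ASG name and policy ARN are placeholders); note that standard EC2 metrics have a 60-second floor, so a literal 30-second period would need high-resolution custom metrics:

```python
def cpu_alarm_params(asg_name: str, scale_policy_arn: str) -> dict:
    """Kwargs for cloudwatch.put_metric_alarm(): alarm when the ASG's
    average CPU exceeds 70% for one period. Standard EC2 metrics support
    a minimum period of 60s (30s requires high-resolution custom metrics),
    so Period=60 here."""
    return {
        "AlarmName": f"{asg_name}-cpu-high",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
        "Statistic": "Average",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 70.0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [scale_policy_arn],  # scale-out policy ARN (placeholder)
    }

params = cpu_alarm_params("my-asg", "arn:aws:autoscaling:...:policy/scale-out")
# boto3.client("cloudwatch").put_metric_alarm(**params)
print(params["AlarmName"])
```

A mirror-image alarm with `LessThanThreshold` pointing at a scale-in policy covers the scale-down side.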
Kunalsing Thakurover 2 years ago
this is from chatgpt
Oleh Kopylover 2 years ago
How do you add Cloudflare in front of an AWS Load Balancer? Spent an hour trying to figure out how to make it work – no luck.
Without NGINX forwarding of course, since it seems like a huuuge redundant overhead
Oleh Kopylover 2 years ago
@Kunalsing Thakur ChatGPT does not give a proper solution for this
Oleh Kopylover 2 years ago
Hell. I tried refreshing instances in an auto-scaling group.
I thought that the logic for this was the following:
1. Create a new instance
2. Make sure that its port 80 is accessible
3. Drop the old instance, remove it from the auto-scaling group
But the logic is like this:
1. Remove the old instance from the auto-scaling group
2. Create a new instance
3. Drop the old instance
How to make it work like it should (the 1st case)?
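The launch-before-terminate order can reportedly be requested through instance refresh preferences: keeping `MinHealthyPercentage` at 100 and allowing the group to temporarily exceed capacity via `MaxHealthyPercentage` (a newer preference; availability is an assumption). A sketch of the kwargs for boto3's `autoscaling.start_instance_refresh()`:

```python
def instance_refresh_params(asg_name: str) -> dict:
    """Kwargs for autoscaling.start_instance_refresh(): never dip below
    current capacity; launch replacements before terminating old instances.
    The percentage and warmup values are illustrative assumptions."""
    return {
        "AutoScalingGroupName": asg_name,
        "Strategy": "Rolling",
        "Preferences": {
            "MinHealthyPercentage": 100,  # keep full capacity throughout
            "MaxHealthyPercentage": 110,  # allow extras -> launch-first
            "InstanceWarmup": 60,         # give the app time to serve port 80
        },
    }

params = instance_refresh_params("my-asg")
# boto3.client("autoscaling").start_instance_refresh(**params)
```

Pairing this with an ELB health check on port 80 makes "the new instance is reachable" the condition for proceeding.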
Oleh Kopylover 2 years ago
I launch my EC2 instance with a launch script (user data).
It ends with "python3 launch-app.py"
Where are the logs of it in Ubuntu?
Meaning, where are the stderr and stdout?
Nishant Thoratover 2 years ago
AWS S3 Buckets never cease to amaze me with their peculiar nature.
https://www.cloudyali.io/blogs/aws-s3-bucket-creation-date-discrepancy-in-master-and-other-regions
Oleh Kopylover 2 years ago
I have an issue with Fargate. It scales up fast (from 1 to 60 instances in 1 minute), but scales down way too slowly (from 60 to 1 instance in 59 minutes, meaning it scales 1 instance per minute).
Can I have more control over it? I need it to scale up in 1 minute and down in 1 minute too (from whatever number of instances I have to whatever number is needed at the moment, be it 1 or 30 or anything else).
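The scale-in pace is usually governed by the scaling policy, not Fargate itself: target tracking deliberately scales in conservatively, while the cooldowns are tunable. A sketch of the kwargs for Application Auto Scaling's `put_scaling_policy()` for an ECS service (cluster/service names and values are placeholders; for truly aggressive scale-in, step scaling policies give more direct control):

```python
def fargate_scaling_policy_params(cluster: str, service: str) -> dict:
    """Kwargs for application-autoscaling put_scaling_policy(): target
    tracking on service CPU with short cooldowns. Values are illustrative;
    target tracking still removes capacity gradually by design."""
    return {
        "PolicyName": f"{service}-cpu-target",
        "ServiceNamespace": "ecs",
        "ResourceId": f"service/{cluster}/{service}",
        "ScalableDimension": "ecs:service:DesiredCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleOutCooldown": 60,  # seconds between scale-out steps
            "ScaleInCooldown": 60,   # seconds between scale-in steps
        },
    }

params = fargate_scaling_policy_params("my-cluster", "my-service")
# boto3.client("application-autoscaling").put_scaling_policy(**params)
```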
Oleh Kopylover 2 years ago
+ Is it possible to have app restarts on Fargate?
As if you launched it with systemd as a service...
I was getting 5xx errors from Fargate, probably due to OOM kills. An app restart would fix this crap....
Vlad Ionescu (he/him)over 2 years ago
@Oleh Kopyl how are you controlling the scaling? Scaling down should not take that long 🤔
Oleh Kopylover 2 years ago(edited)
Does SageMaker Real-Time Inference scale up and down?
So I don't pay for instances which are not used
Samiover 2 years ago
Hey all.
I'm pondering over a project I'm working on at the moment and was hoping to get some advice or thoughts from other people.
I have to design the architecture for a 3-tier Node.js application which consists of a simple web front-end, an API component, and a database. My initial thoughts are to go serverless and deploy the web and API components on Lambda and try to keep things light and quick. I am concerned, though, about the potential lack of flexibility with the front-end. I understand that you can have a Lambda function return HTML, but I don't know how well it would work for the application's progression in the future.
Alternatively, I can containerise both the web and API components and move them onto ECS which would cost more but allow for greater flexibility and if need be a migration to Kubernetes if required down the track.
Has anybody got any thoughts on this? Have you deployed front-end on Lambda and had it work well or poorly?
Oleh Kopylover 2 years ago
Some say Fargate does restarts. But does it restart the whole image or just the app from CMD?
I was getting some 5xx errors from Fargate due to OOM kills. OOM kills are okay, but 5xx errors are not.
With my EC2 instance (no Docker), systemd always restarts my Python app, and since I have a load balancer, I never get 5xx errors (the worst I can get is a .5s delay on a request).
Meaning that Fargate seemingly restarts the whole image instead of just my Python app (the thing that was in CMD, like ["python", "main.py"]).
If that's really the case, is there any way to force Fargate to just restart my app (the same way systemd does it)?
I was trying to get systemd to work in Docker but was getting this error:
/bin/sh: 4: systemd: not found. Even after I did apt update && apt install --reinstall systemd -y, I was still getting errors like this: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
I tried using service instead of systemctl, but it had errors too: # service crossdot-flask-inference.service on
crossdot-flask-inference.service: unrecognized service
Balazs Vargaover 2 years ago
aws serverless v1. How can I restore from a backup and not from a snapshot?
Daniel Adeover 2 years ago
Is anyone well versed in GitHub Actions and AWS? I'm having an issue deploying my container image to ECR; I keep getting Error: Not authorized to perform sts:AssumeRoleWithWebIdentity
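That error usually means the role's trust policy doesn't match the OIDC token, most often the `sub` condition (repo/branch) or a missing `aud`. A trust-policy sketch as a Python dict (the account ID and org/repo below are placeholders; the branch wildcard is an assumption):

```python
import json

def github_oidc_trust_policy(account_id: str, repo: str) -> dict:
    """Trust policy for an IAM role assumed via GitHub Actions OIDC.
    `repo` is "org/name"; the "*" sub filter (any branch/ref) is an
    illustrative assumption -- tighten it for real use."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/"
                             "token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
                },
                "StringLike": {
                    "token.actions.githubusercontent.com:sub": f"repo:{repo}:*"
                },
            },
        }],
    }

print(json.dumps(github_oidc_trust_policy("123456789012", "my-org/my-repo"),
                 indent=2))
```

The workflow side also needs `permissions: id-token: write`, or the runner never gets a token to exchange.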
Dan Hamiltonover 2 years ago(edited)
Hey. Earlier this year we began using the terraform-aws-sso module to manage our human access to our AWS accounts. It works really well and has been a lifesaver, so upfront, thank you to everyone who added to it.
However, I think I am missing something, as only recently did we have a need to make a new account assignment, and because I have a depends_on for my Okta module to make sure the Okta groups are created before the account assignment is attempted, Terraform is forcing a replacement of all account assignments despite the code only adding one.
Removing the depends_on fixes it in my plan, but I worry it will fail because it isn’t aware of the dependency on my Okta module.
I did some searching and I think that this PR addressed this issue already by adding a variable to handle the dependency issue.
The variable identitystore_group_depends_on description states the value should be “a list of parameters to use for data resources to depend on”.
I don’t understand what parameters it’s referring to? Is it a list of all Okta groups I create?
Balazs Vargaover 2 years ago(edited)
I have a few questions related to organizations:
• I know I need to select a management account, but with a delegated role, can I have a user in a member account manage the organization?
• Can I limit this delegated role to an OU?
• If I delete the management account, will it delete all the other AWS accounts in the organization?