76 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Pierre Humberdroz almost 6 years ago
They are 👍️🏼
vFondevilla almost 6 years ago
Yes, they said that the 1.15 slowness was something exceptional, as they were refining processes for faster support of future Kubernetes versions
tweetyix almost 6 years ago
Hi, I'd like to provide a file in S3 to users in an easy way. This means no additional client software and no access key overhead. Therefore I thought about loading it into SharePoint Online, as that is the corporate platform and includes proper authentication. Unfortunately I didn't find a connector yet. Has anybody had a comparable challenge, solution, or thoughts to share?
Matt almost 6 years ago
RDS question, I'm seeing this when attempting to create a slot in Postgres:
ERROR: must be superuser or replication role to use replication slots
Matt almost 6 years ago
However... I am authenticated as the superuser
msharma24 almost 6 years ago
AWS has launched new Cert badges https://www.youracclaim.com/earner/earned
Omer Sen almost 6 years ago
guys, what do you recommend for holding Kubernetes Secrets on GitHub? Has anyone used Bitnami's SealedSecrets?
Asrar almost 6 years ago (edited)
Hi All, I am trying to figure out how to get CloudFront working with my Elastic Beanstalk app. I have used this Terraform module https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment to set up my Beanstalk app. I went for a NAT instance instead of a NAT gateway for cost reasons, and I have 2 instances running. My domain is managed as part of G Suite; whilst I have permission to add a CNAME entry or something, I can't do much more. I have tried to manually set up CloudFront, and I selected the origin domain to be the load balancer resource in the dropdown. Apart from that I left most of the page as defaults, except whitelisting one cookie.
When I try to hit the URL generated for CloudFront, I keep getting a 502 error: wasn't able to connect to origin.
My SSL certificates for my domain are managed by AWS ACM, but my DNS is managed by G Suite.
From what I understand I will eventually have to update my DNS to point to CloudFront instead of the load balancer as it does now, but before I do that I want to make sure the CloudFront URL is correctly configured
kenny almost 6 years ago
Is it possible to specify a specific data center using CloudFormation? For example, I want to spin up subnets in the same data center as another account's subnets. Since us-east-1a might not be the same physical location for me as for someone else, how can this be done via CloudFormation?
RB almost 6 years ago (edited)
noticed some of our devs were using an ELB for just a health check, so the ELB itself has 0 requests. I understand that target groups have a health check and ECS can leverage the HEALTHCHECK argument in Dockerfiles.
how do people here do health checks for applications without an ELB? do you use a target group? or do ecs / fargate with a healthcheck arg in the container?
if target group, can this be used without an ALB?
also wondering if anyone has used route53's health checks in lieu of the above?
loren almost 6 years ago
Great summary of aws networking... Breathe it in, we're all network engineers in the cloud... https://blog.ipspace.net/2020/05/aws-networking-101.html
RB almost 6 years ago
btai almost 6 years ago
aws global accelerator is the bees knees
Milosb almost 6 years ago
Hi, I am a little bit confused regarding AWS Secrets Manager. If I have, let's say, one AWS secret with 4 key/value pairs, is that considered one secret or 4 secrets?
Milosb almost 6 years ago
My understanding is that it should be one secret, but who knows 😄
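For what it's worth, a single Secrets Manager secret can hold a whole JSON document, which is how the console's key/value pairs are stored under the hood. A minimal Terraform sketch (the secret name and values are placeholders, not from this thread):

```hcl
resource "aws_secretsmanager_secret" "db" {
  name = "example/db" # placeholder secret name
}

# All four key/value pairs live in one secret version as one JSON
# string, so this counts as a single secret.
resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = "app"         # placeholder values
    password = "change-me"
    host     = "db.internal"
    port     = "5432"
  })
}
```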
RB almost 6 years ago
is there a way to do a rolling deploy in ECS without updating the task definition?
PePe Amengual almost 6 years ago
no, if you want to tag the images
PePe Amengual almost 6 years ago
if you are doing “latest” then you could
RB almost 6 years ago
yes, let's assume the docker image stays the same
RB almost 6 years ago
so the image doesn't need to change in the task definition
PePe Amengual almost 6 years ago
then you can
RB almost 6 years ago
how?
PePe Amengual almost 6 years ago (edited)
you set min to 50% and max to like 200%
PePe Amengual almost 6 years ago
you make sure you have enough resources to run more than one task
PePe Amengual almost 6 years ago
but this is not blue green
PePe Amengual almost 6 years ago
so you will have old/new containers running at the same time
PePe Amengual almost 6 years ago
while all get updated
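The min/max settings PePe describes can be expressed in Terraform roughly like this (a sketch; the cluster, service, and task names are placeholders, not from this thread):

```hcl
resource "aws_ecs_service" "app" {
  name            = "app"                      # placeholder service name
  cluster         = aws_ecs_cluster.main.id    # assumed cluster resource
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2

  # Keep at least half the tasks healthy while letting ECS start
  # replacements up to double the desired count, so old and new
  # containers overlap during the rollout.
  deployment_minimum_healthy_percent = 50
  deployment_maximum_percent         = 200
}
```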
RB almost 6 years ago
i was just having an issue finding this in the cli
PePe Amengual almost 6 years ago
one sec
RB almost 6 years ago
I'm dumb for not finding that earlier
RB almost 6 years ago
does that command align with yours @PePe Amengual?
PePe Amengual almost 6 years ago
we do this:
PePe Amengual almost 6 years ago
aws ecs update-service --cluster ${clusterName} --service ${serviceName} --force-new-deployment
RB almost 6 years ago
brilliant, thank you very much!
PePe Amengual almost 6 years ago
lol it's the same command
PePe Amengual almost 6 years ago
the key is in the deployment config
PePe Amengual almost 6 years ago
not the command
RB almost 6 years ago
looks like this can be even easier using a task set, but currently this is not in Terraform yet
PePe Amengual almost 6 years ago
correct
Erik Osterman (Cloud Posse) almost 6 years ago
Erik Osterman (Cloud Posse) almost 6 years ago
mfridh almost 6 years ago
aws-okta is on a hiatus? Someone picking that up? 🙂... or did you all move on to more hipster solutions? I think aws-okta at least has very good usability...
Plus, I did this to get rid of some annoying obstacles: https://github.com/frimik/aws-okta-tmux
PePe Amengual almost 6 years ago
is anyone here using https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets-one-user-one-password.html for RDS Aurora?
raghu almost 6 years ago
Is there an IAM policy example that denies a whole subnet and just allows one IP from that subnet?
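One possible shape for such a policy, sketched in Terraform with placeholder CIDRs (not from the thread). Conditions within a single statement are ANDed, so the Deny only applies to subnet callers that are not the one allowed address:

```hcl
data "aws_iam_policy_document" "deny_subnet_except_one_ip" {
  statement {
    sid       = "DenySubnetExceptOneIp"
    effect    = "Deny"
    actions   = ["*"]
    resources = ["*"]

    # The caller IP is inside the subnet to block...
    condition {
      test     = "IpAddress"
      variable = "aws:SourceIp"
      values   = ["203.0.113.0/24"] # placeholder: subnet to deny
    }
    # ...and is NOT the single allowed address.
    condition {
      test     = "NotIpAddress"
      variable = "aws:SourceIp"
      values   = ["203.0.113.10/32"] # placeholder: the one allowed IP
    }
  }
}
```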
Omer Sen almost 6 years ago
Hello, this is a bit K8s/AWS EKS related so I don't know where to post 😉 Anyway, is there a way to make a Pod get ingress (inbound) traffic only from the ALB Ingress? As the ALB Ingress will change the source IP address, I can still get traffic to this Pod but simply deny access from other Pods living in the same namespace.
RB almost 6 years ago
interesting that one cannot remove the Server header from cloudfront... but at least it no longer will show
https://stackoverflow.com/questions/56710538/cloudfront-and-lambdaedge-remove-response-header
curious deviant almost 6 years ago
Hello,
I am a bit confused around the usage of --queue-url and --endpoint-url when a VPC Interface endpoint is created. One of the three private DNS names is exactly the same as the content of the --queue-url, and if I do a dig it indeed resolves to a private IP. What is the purpose of the other two DNS names with vpce-* in their names, and when should they be used? One of them seems region specific while the other seems AZ specific.
Joey almost 6 years ago
Hey guys, total AWS noob here
Joey almost 6 years ago
I have a subdomain attached to a Route53 privately hosted zone
Joey almost 6 years ago
It's supposed to load a webpage, and I want to be able to give access to certain people. Preferably with a username / password, though just whitelisting individual IP addresses would be doable
Joey almost 6 years ago
What strategy should I use to go about this?
Matt Gowie almost 6 years ago (edited)
@Joey — Sounds like a combo of putting Basic Auth in front of your app + updating the Security Group inbound rules for your Application's SG would be what you want.
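The Security Group half of that suggestion might look like this in Terraform (a sketch; the SG reference and the allowed address are placeholders, not from the thread):

```hcl
# Allow HTTPS only from one whitelisted address; repeat the rule
# (or extend cidr_blocks) for each person you want to let in.
resource "aws_security_group_rule" "allow_listed_ip" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["198.51.100.7/32"]       # placeholder user IP
  security_group_id = aws_security_group.app.id # assumed app SG
}
```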
raghu almost 6 years ago
Hi Guys,
Is there any better auto-remediation tool you are using for AWS security, like remediating ports that are open globally, etc.? We are currently using Prisma Cloud, but it's currently just detecting and sending us notifications; auto-remediation isn't ready yet. I am planning to write my own scripts in the meantime to remediate them, but my scripts call the boto3 API and need to run every 30 mins or however I define it. I was thinking: could I log the actual real-time data and auto-remediate whenever it detects an alert? Any input on how to do this?
Zach almost 6 years ago
Anyone have experience with RDS IAM connection throttling?
Maciek Strömich almost 6 years ago
hey, has anyone experienced AWS Fargate being unable to resolve hostnames? After migration I've noticed that for some number of requests Django is not able to resolve the RDS hostname, which is ultra worrying.
Padarn almost 6 years ago (edited)
Hi all. I'm trying to understand how to replace the use of security groups when using transit gateway
Padarn almost 6 years ago
I'm used to using VPC peering, where peered security groups can be referenced, but I'm not clear on how this is meant to work if using TGW
RB almost 6 years ago
any reason why a route53 record would be pointed at an elastic ip of an ec2 instance instead of creating a load balancer and pointing the r53 record to it?
James Woolfenden over 5 years ago
Question: say you wanted to create a static site in AWS and you're not allowed to use an S3 bucket. Besides EC2, what are your options? I tried crying, which didn't get me anywhere.
Kevin Chan over 5 years ago
Does anyone here have some good links about ECS in Terraform? It seems like terraform-ecs-alb-service-task is deleting the task definitions. I'm coming from the cloudformation/troposphere world, where I updated the task definition and it persisted, and then through automation I updated the service to point at the latest task definition.
Igor over 5 years ago
We're currently just using awscli with circleci to push ecs deployments, but it seems very limited. No notifications out-of-the-box, no support for rollbacks. And then of course it messes with Terraform. I don't want to ignore changes to task definitions, because we sometimes change the environment variables/secrets associated with the task. I wish there was a better way.
Igor over 5 years ago
Has anyone had luck with using CodeDeploy?
RB over 5 years ago
how do you all automatically give ssh creds safely to everyone?
Victor Grenu over 5 years ago
Hi folks,
Lately I've updated to v0.3 a pet project that pushes notifications when you've left AWS instances running on the most expensive AWS services (EC2, RDS, Glue Dev Endpoints, SageMaker Notebook Instances, Redshift Clusters).
I hope you can give it a try and give me some feedback about features, bugs, or architecture.
It can be a sidecar for Instance Scheduler from AWS, and easier than CloudCustodian. Only informative and dead simple.
Let me know if it makes sense, or if you have any ideas to improve it.
Instance Watcher 👀
Introduction
AWS Instance Watcher will send you a recap notification once a day with the list of running instances in all AWS regions for a given AWS account.
Useful for non-prod, lab/training, sandbox, or personal AWS accounts, to get a kind reminder of what you've left running. 💸
Currently, it covers the following AWS services:
• EC2 Instances
• RDS Instances
• SageMaker Notebook Instances
• Glue Development Endpoints
• Redshift Clusters
Notifications can be:
• Slack Message
• Microsoft Teams Message
• Email
https://github.com/z0ph/instance-watcher
Features
• Customizable Cron Schedule
• Whitelisting capabilities
• Month to Date (MTD) Spending
• Slack Notifications (Optional)
• Microsoft Teams Notifications (Optional)
• Emails Notifications (Optional)
• Serverless Architecture
• Automated deployment using (IaC)
RB over 5 years ago
how does one create a VPC with both private and public subnets? our VPC has a public CIDR, but I cannot add a private CIDR to it in order to add private subnets
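For context, "public" and "private" are routing distinctions rather than separate CIDR types: both kinds of subnet are carved out of the same VPC CIDR, and what differs is the route table. A minimal Terraform sketch (all CIDRs are placeholders):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # placeholder VPC CIDR
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# "Public" subnet: its route table sends 0.0.0.0/0 to the IGW.
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# "Private" subnet: same VPC CIDR space, but its route table would
# point 0.0.0.0/0 at a NAT gateway/instance instead of the IGW.
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
}
```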
Eric Berg over 5 years ago
I'm having problems setting up a logging account and assigning buckets for S3 logging to buckets in a central account.
I'm setting up logging for our multiple AWS accounts, which are in AWS Organizations. We use a separate account for each client's infrastructure (all accounts are basically the same...SaaS). I'm trying to set up a logging account and send all logs there, including cloudtrail, Load Balancer, and bucket logging.
Right now, I'm working on the bucket logging, and it appears from this page (https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html) that the source and target buckets must be in the same account, which completely negates any thoughts of sharing log buckets across accounts in the organization.
I have a bucket policy that does grant access to the other accounts in the organization, with this:
{
  "Sid": "AllowPutObject",
  "Effect": "Allow",
  "Principal": "*",
  "Action": [
    "s3:PutObject",
    "s3:GetObjectAcl"
  ],
  "Resource": [
    "arn:aws:s3:::${local.log_bucket_name_east_2}",
    "arn:aws:s3:::${local.log_bucket_name_east_2}/*"
  ],
  "Condition": {
    "StringEquals": {
      "aws:PrincipalOrgID": [
        "o-7cw0yb5uv3"
      ]
    }
  }
},
but, though I can write to that bucket from any of the other accounts in the org, when I try to assign logging to those buckets (which are in the separate logging account), I get this error:
Error: Error putting S3 logging: InvalidTargetBucketForLogging: The owner for the bucket to be logged and the target bucket must be the same.
status code: 400, request id: FEDF21FAA89472BD, host id: v+LjsDLH+EpNXzcfjxVYiNDLH1HOhY5NpHFbPwGTpketFpeEyUVQxp+7DCCNE9ZDy/vdNUd8+Pg=
Eric Berg over 5 years ago
So, can I set up a logging account to receive s3 access logs?
msharma24 over 5 years ago (edited)
Hi everyone, looking for some EMR ETL advice.
Use case:
When a CSV file (approx. 1GB in size) arrives in S3, launch an EMR cluster, which will run Spark script steps, transform the data, and write it into a DynamoDB table.
There will be a maximum of 10 files arriving in a set time window each day.
I have been thinking of using an EventBridge rule -> Step Function -> launch EMR cluster with Spot Fleet task nodes
Looking for advice on whether this is a good architecture.
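The trigger half of that proposal could be sketched in Terraform like this (bucket and state-machine names are placeholders; this assumes the bucket has EventBridge notifications enabled and that the state machine and its invocation role exist elsewhere):

```hcl
# Fire when a .csv object lands in the landing bucket.
resource "aws_cloudwatch_event_rule" "csv_arrived" {
  name = "csv-arrived" # placeholder rule name
  event_pattern = jsonencode({
    source      = ["aws.s3"]
    detail-type = ["Object Created"]
    detail = {
      bucket = { name = ["my-etl-landing-bucket"] } # placeholder bucket
      object = { key = [{ suffix = ".csv" }] }
    }
  })
}

# Hand the event to the Step Functions state machine that runs the EMR job.
resource "aws_cloudwatch_event_target" "start_etl" {
  rule     = aws_cloudwatch_event_rule.csv_arrived.name
  arn      = aws_sfn_state_machine.emr_etl.arn # assumed state machine
  role_arn = aws_iam_role.events_to_sfn.arn    # assumed invocation role
}
```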
vFondevilla over 5 years ago
quick question because I can’t find it in the docs… Do sticky sessions on an NLB use a cookie, or are they based on source IP?
Haroon Rasheed over 5 years ago
I would like to set up AWS EKS using Terraform in such a way that we have 2 VPCs: one VPC where the AWS control plane is deployed, and another VPC where my worker nodes run. Do we have a Terraform module for this in Cloud Posse or any other repo?
Chris Fowles over 5 years ago
I don't have an answer as to how, but I am very curious as to "why?"
Haroon Rasheed over 5 years ago
Because my customer's EKS setup is like that: customer business workloads (pods) run on worker nodes in one VPC and the control plane runs in a different VPC. I need to replicate that for our testing.
PePe Amengual over 5 years ago
Haroon Rasheed over 5 years ago
I would like to create EKS unmanaged worker node 1 in VPC1 and worker node 2 in VPC2, with the control plane maybe running in either VPC1 or VPC2. Is it possible to set up something like this?
Pierre Humberdroz over 5 years ago
how are people managing AWS CLI 2FA? I find it very hard to manage usage of the CLI right now with 2FA enforced