41 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
btaiabout 5 years ago
there's no db slack channel, so I'm asking here since I'm using RDS (and they're deprecating support for my postgres version). Anyone that's done the postgres 9 -> postgres 10/11 migration: any gotchas we should be concerned about when doing it?
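Not a gotcha list, but for reference the in-place path is a snapshot plus a major-version modify. A sketch with hypothetical identifiers (the instance also needs a parameter group for the new engine family, assumed here as postgres11):
```
# Take a manual snapshot first, then request the major version upgrade.
aws rds create-db-snapshot \
  --db-instance-identifier my-postgres-db \
  --db-snapshot-identifier my-postgres-db-pre-upgrade

# Target version and parameter group name are assumptions for illustration.
aws rds modify-db-instance \
  --db-instance-identifier my-postgres-db \
  --engine-version 11.10 \
  --db-parameter-group-name my-postgres11-params \
  --allow-major-version-upgrade \
  --apply-immediately
```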
Mohammed Yahyaabout 5 years ago
In https://github.com/cloudposse/reference-architectures#3-delegate-dns can someone explain this requirement: "An available domain we can use for DNS-based service discovery (e.g. ourcompany.co). This domain must not be in use elsewhere, as the master account will need to be the authoritative name server (SOA)."
Steve Wade (swade1987)about 5 years ago
does anyone have a "best practice" for where to keep the S3 bucket for an account's access logs: in the account itself, or in the security account with replication into it?
sheldonhabout 5 years ago
Anyone use AWS Distributor for packages? I prefer choco and DSC, but I need to create a Datadog package and want to cover Linux + Windows. I'd like to know if various distros can be handled easily, how configuration works, etc. Overall, whether people hit problems using it or it was smooth sailing.
Otherwise I have to do a mix of Ansible + DSC and more, and it's unlikely others will be comfortable with that.
In addition, while I'm a fan of Ansible, I primarily use AWS SSM to manage a mix of Windows and Linux instances. At this time AWS SSM will only run playbooks on Linux.
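For what it's worth, once a Distributor package exists, installs are the same SSM call on Linux and Windows; the per-distro handling lives in the package manifest. A sketch using the stock AWS-ConfigureAWSPackage document and a hypothetical package name:
```
# "DatadogAgent" is a hypothetical Distributor package name; targeting by tag is one option.
aws ssm send-command \
  --document-name AWS-ConfigureAWSPackage \
  --targets Key=tag:Role,Values=app \
  --parameters '{"action":["Install"],"name":["DatadogAgent"]}'
```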
sheldonhabout 5 years ago
bump 🙂 going to tackle this in the next couple of days, so would love to know your experience
Yoni Leitersdorf (Indeni Cloudrail)about 5 years ago(edited)
Having an internal debate and I’m curious what your guys’ thoughts are:
How many people do you think there are (worldwide) who are using Terraform with AWS today? Please include your rationale for your answer.
Some stats:
• The AWS Provider github repo has 5k stars and 2k contributors.
• The AWS Provider has been downloaded 216.2M times.
• This channel has 2,221 members.
Connor Gervinabout 5 years ago(edited)
Hi All - any help greatly appreciated
(wasn't sure which channel was best for this one)
https://sweetops.slack.com/archives/CB6GHNLG0/p1609975378080100
Babar Baigabout 5 years ago
Hi 👋 I am trying to find a tool/vault to manage my passwords (a tool like roboform, lastpass). I am looking for an open source tool like this which I can configure on my Linux EC2 instance and access via a UI. Is there any tool like this?
Babar Baigabout 5 years ago(edited)
Hey 👋 I want to work with a subdomain on CloudFlare but unfortunately it does not support working with subdomains. What are my options? Will Route53/Cloudfront be useful?
Babar Baigabout 5 years ago
Guys, I've been using AWS SSM Parameter Store for storing my credentials, like RDS database credentials, so that I can access them via API or CLI in my pipelines or infrastructure code. I am thinking of putting my AWS credentials in SSM Parameter Store because my Rails application (which is deployed in ECS via Terraform) demands AWS keys for accessing an S3 bucket. Should I put AWS credentials in SSM? I just feel that it is not the right way to deal with this problem.
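For ECS the usual answer is to skip static keys entirely and give the task an IAM task role; the AWS SDK in the Rails container then picks up temporary credentials on its own. A rough CLI sketch with hypothetical names:
```
# Trust policy lets ECS tasks assume the role.
aws iam create-role \
  --role-name rails-app-task-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ecs-tasks.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Scope this down to the specific bucket in practice.
aws iam attach-role-policy \
  --role-name rails-app-task-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Then set "taskRoleArn" in the task definition (task_role_arn on the Terraform
# aws_ecs_task_definition resource) and drop the static keys.
```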
joeyabout 5 years ago
i'm seeing some funky issues with grpc and nlb's when rolling pods in eks. anyone got experience in this area?
joshmyersabout 5 years ago
Anyone using AWS App Mesh? Thoughts?
bazbremnerabout 5 years ago(edited)
I'm having a bit of a wood-for-trees moment with ACM Private CA - it's reasonably straightforward to set up a root and subordinate CA in a single AWS account and RAM share that out to other accounts that need certificates (although it seems necessary to share both the root and the subordinate for the subordinate to appear in the ACM PCA console in the target account).
However, the best practice (https://docs.aws.amazon.com/acm-pca/latest/userguide/ca-best-practices.html) recommends having the root alone in its own account, and subordinate(s) in another account.
My problem is that the process of signing the subordinate CA manually when the root CA is in another account is really not clear. The docs cover the case of both in the same account, or using an external CA. Anyone done this before?
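For the cross-account signing step, the flow I'd expect (placeholder ARNs; run each command in the account that owns that CA) is: export the subordinate's CSR, sign it from the root with a subordinate-CA template, then import the result back into the subordinate:
```
# 1) In the subordinate CA's account: export its CSR.
aws acm-pca get-certificate-authority-csr \
  --certificate-authority-arn "$SUB_CA_ARN" \
  --output text > sub-ca.csr

# 2) In the root CA's account: sign it with a subordinate-CA template.
aws acm-pca issue-certificate \
  --certificate-authority-arn "$ROOT_CA_ARN" \
  --csr fileb://sub-ca.csr \
  --signing-algorithm SHA256WITHRSA \
  --template-arn arn:aws:acm-pca:::template/SubordinateCACertificate_PathLen0/V1 \
  --validity Value=5,Type=YEARS

# 3) Still in the root account: fetch the issued certificate and chain
#    (extract the Certificate/CertificateChain fields into PEM files).
aws acm-pca get-certificate \
  --certificate-authority-arn "$ROOT_CA_ARN" \
  --certificate-arn "$SUB_CERT_ARN"

# 4) Back in the subordinate account: import the cert into the subordinate CA.
aws acm-pca import-certificate-authority-certificate \
  --certificate-authority-arn "$SUB_CA_ARN" \
  --certificate fileb://sub-ca.pem \
  --certificate-chain fileb://root-ca.pem
```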
Matt Gowieabout 5 years ago
Has anyone ever seen the AWS ElasticSearch Service take over an hour to create a domain (i.e. Domain is in “Loading” state and nothing is accessible)? I’ve seen long update / creation times from this service before… but this seems absurd.
RBabout 5 years ago
Does anyone need to periodically restart the aws ssm agent ?
sheldonhabout 5 years ago
[thread] Troubleshooting — EC2 Instance for Windows isn’t working with ECS tasks, ssm session manager, and RDP reporting User Profile service Can’t start
Ofir Rabanianabout 5 years ago
If anyone’s using AWS AppMesh here - do you know if there’s a way for the virtual gateway to NOT overwrite the host? we’re using the host in order to do some analytics in the container.. might also be similar for Istio users.
Darren Cunninghamabout 5 years ago(edited)
am I blind or is there really no way to get container details for a running task in the new ECS portal? 🤮
Vlad Ionescu (he/him)about 5 years ago
FYI for the containers people: the last run of COM-401 "Scaling containers on AWS" is in 15-ish minutes at https://virtual.awsevents.com/media/0_6ekffvm8 🙂 There are some massive Fargate improvements that got no other announcements as far as I know
Santiago Campuzanoabout 5 years ago(edited)
morning everyone ! Quick question, is there any performance overhead/impact when live migrating an EBS volume from gp2 to gp3 ?
Babar Baigabout 5 years ago
Hi. I have a question about CloudFront. My application is deployed on Heroku and I am using the Heroku endpoint as the origin in CloudFront. It works fine, but when I try to open a page by specifying a path in the URL, I get redirected to the Heroku endpoint. For example, http://myherokuendpoint is the application link in Heroku and d829203example.cloudfront.net is my CloudFront address to access my app. When I try to access d829203example.cloudfront.net/admin it changes the address to http://myherokuendpoint/admin. I tried adding origins but it did not work.
If I attach ALB link in CloudFront distribution it works fine. Is there a way I can make it work with Heroku link?
Tim Gourleyabout 5 years ago
Question about egress filtering -
Ideally you don't want processes to have the ability to reach out to the internet, with the exception of specific cases like calling other AWS services, downloading yum updates, contacting services like the Trend Micro SaaS, etc. Some AWS services support VPC endpoints, but last I checked this only worked for some services and generally only within the same region. IP filtering seems solid but it would be a huge pain to set up and maintain. DNS blocking would seem to be easier to maintain but would not prevent connections that don't require DNS.
Anyway, are there best-practice recommendations for setting up egress filtering? Are there other options?
Thanks!
Tim
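No silver bullet that I know of (an egress proxy or AWS Network Firewall with domain rules are the usual answers), but gateway endpoints at least take S3 and DynamoDB out of the filtering problem. A sketch with hypothetical IDs:
```
# Gateway endpoint keeps S3 traffic on the AWS network, off the internet egress path.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --vpc-endpoint-type Gateway \
  --route-table-ids rtb-0123456789abcdef0
```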
Maarten van der Hoefabout 5 years ago(edited)
Error: Error creating aggregator: OrganizationAccessDeniedException: This action can only be performed if you are a registered delegated administrator for AWS Config with permissions to call ListDelegatedAdministrators API.
Anyone had this before? The account I'm executing from is actually delegated as such and can call ListDelegatedAdministrators successfully.
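In case it helps anyone hitting the same thing: the delegated-administrator registration has to be done from the organization's management account, and (I believe) with the config.amazonaws.com service principal for organization aggregators. Sketch with a hypothetical account ID:
```
# Run from the org management account.
aws organizations register-delegated-administrator \
  --account-id 111122223333 \
  --service-principal config.amazonaws.com

# Verify what's registered:
aws organizations list-delegated-administrators \
  --service-principal config.amazonaws.com
```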
Liam Helmerabout 5 years ago
Hey All! I'm working on editing a cloudposse module for our use, but I'm having a weird issue. I think I added all the things I need and got all the variables in correctly, but now I'm getting a super generic error and I'm unclear on how to troubleshoot it. It's unfortunately not telling me at all what's wrong with the construction, and I'd love to know how to troubleshoot from here:
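Hard to say without seeing the error, but a first step that often helps with opaque module errors is Terraform's debug logging (a sketch; the log path is arbitrary):
```
# Writes provider/API-level detail to a file so the failing call is visible.
TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform plan
```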
PePe Amengualabout 5 years ago
ok, I figured it out
Steve Wade (swade1987)about 5 years ago(edited)
is a VPC name unique across regions in the same account?
e.g. can i have a
dev VPC in Ireland and another dev VPC in Singapore within the same account?
Steveabout 5 years ago
hey all
are there modules (or some best-practice docs) for creating CloudWatch alarms (use case: alarms on CPU/disk space for Apache Kafka (MSK))?
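MSK publishes broker metrics under the AWS/Kafka namespace, so plain CloudWatch alarms work. A CLI sketch for one broker's CPU (hypothetical cluster/SNS names; KafkaDataLogsDiskUsed is the usual disk-space metric):
```
aws cloudwatch put-metric-alarm \
  --alarm-name msk-broker1-cpu-high \
  --namespace AWS/Kafka \
  --metric-name CpuUser \
  --dimensions Name="Cluster Name",Value=my-msk-cluster Name="Broker ID",Value=1 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:alerts
```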
Igor Bronovskyiabout 5 years ago
Can I run a command on AWS Fargate when a container goes into deactivation? https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-lifecycle.html
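There's no lifecycle hook I'm aware of in ECS/Fargate itself, but ECS sends SIGTERM to the container at stop time (and SIGKILL after the task's stopTimeout), so one common pattern is trapping it in the entrypoint. A sketch, with my-app as a stand-in for the real process:
```
#!/bin/sh
# Run cleanup work when ECS stops the task (SIGTERM arrives before SIGKILL).
cleanup() {
  echo "container deactivating: flushing state / deregistering..."
  # put the command you want to run on deactivation here
  exit 0
}
trap cleanup TERM

my-app &    # hypothetical main process, backgrounded so the trap can fire
wait $!
```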
sahil kambojabout 5 years ago
Hi,
I have a query about RSYNC
I have an EC2 server with 3 different app folders, 2 GB each (master server).
I want to clone and maintain these folders on multiple EC2 instances, let's say 5. If I set up a cron job running rsync every hour, will it bottleneck my local bandwidth? And will it make my CPU utilisation high?
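For what it's worth, after the initial full copy rsync only ships deltas, so an hourly job over ~6 GB of mostly-unchanged files is usually light on both CPU and bandwidth, and you can cap it explicitly. A sketch of a crontab entry (hypothetical paths/hosts):
```
# Hourly sync, bandwidth-capped at ~20 MB/s; adding -z trades CPU for less network.
0 * * * * rsync -a --delete --bwlimit=20000 /srv/apps/ deploy@replica-1:/srv/apps/ >> /var/log/rsync-apps.log 2>&1
```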
Santiago Campuzanoabout 5 years ago(edited)
Morning everyone ! Any experience migrating EBS volumes from gp2 to gp3 ? I have a large Kafka cluster with huge (1.5 TB) EBS volumes attached to every single broker
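For reference, gp2 -> gp3 is an online ModifyVolume operation; the volume keeps serving I/O while it goes through an "optimizing" phase. A sketch with a hypothetical volume ID (IOPS/throughput values are just examples):
```
aws ec2 modify-volume \
  --volume-id vol-0123456789abcdef0 \
  --volume-type gp3 \
  --iops 6000 \
  --throughput 500

# Watch the modification progress:
aws ec2 describe-volumes-modifications \
  --volume-ids vol-0123456789abcdef0
```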
Asisabout 5 years ago
Hello everyone, I am trying to save the output of my Terraformed EKS cluster into a JSON file.
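If the values are exposed as Terraform outputs, the CLI can dump them as JSON directly (cluster_endpoint below is a hypothetical output name):
```
terraform output -json > eks-cluster.json     # all outputs as JSON
terraform output -json cluster_endpoint       # a single output
```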
btaiabout 5 years ago
asking this here cause we didn't get to it in office hours 😃 any tips for improving global s3 upload speed? (think india, hong kong, etc) what other optimizations could I possibly make after turning on s3 transfer acceleration and using multipart uploads?
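One more knob worth checking is the client side: make sure uploads actually use the accelerate endpoint, and tune concurrency/part size. A sketch of the AWS CLI settings (values are just starting points, not recommendations):
```
# Applies to the default profile.
aws configure set default.s3.use_accelerate_endpoint true
aws configure set default.s3.multipart_threshold 64MB
aws configure set default.s3.multipart_chunksize 64MB
aws configure set default.s3.max_concurrent_requests 20
```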
Babar Baigabout 5 years ago(edited)
Hi everyone! I am deploying a Rails application in ECS. The application only allows access from a specified hostname. I am stuck on health check failures: health checks keep failing while the application works fine when accessed via the load balancer (because I passed the LB hostname as an environment variable). I am assuming that, for the health check, the target group hits the instance IP; since the IP is not an allowed host, the health check fails. I cannot allow-list the instance IP the way I did the load balancer hostname, because we cannot get the instance IP from the launch configuration. Is there any way to tackle this?
Ofir Rabanianabout 5 years ago
What’s the advantage of using S3 SSE? is there an attack vector that it prevents?
Tomekabout 5 years ago
with terraform, is it possible to create an aws_secretsmanager_secret_version resource that will merge its values with the current aws_secretsmanager_secret_version (only if one exists)?
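Not natively, as far as I know: aws_secretsmanager_secret_version replaces the whole SecretString. One workaround outside Terraform is to read, merge, and write back with the CLI and jq (sketch, with a hypothetical secret name and key):
```
existing=$(aws secretsmanager get-secret-value \
  --secret-id my-app-secrets \
  --query SecretString --output text 2>/dev/null || echo '{}')

# Merge the new key(s) into whatever is already there.
merged=$(echo "$existing" | jq -c '. + {"NEW_KEY": "new-value"}')

aws secretsmanager put-secret-value \
  --secret-id my-app-secrets \
  --secret-string "$merged"
```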
MrAtheistabout 5 years ago
Anyone know if cloudwatch captures metrics around "Total # of EC2 instances running over time (per region)"?
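I don't believe there's a built-in metric for that; one approach is to publish your own on a schedule. A sketch (namespace/metric names are arbitrary):
```
# Count running instances in the current region and push it as a custom metric.
count=$(aws ec2 describe-instances \
  --filters Name=instance-state-name,Values=running \
  --query 'length(Reservations[].Instances[])' \
  --output text)

aws cloudwatch put-metric-data \
  --namespace Custom/EC2 \
  --metric-name RunningInstances \
  --value "$count"
```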
Chris Fowlesabout 5 years ago
People running SQL Server on RDS - how do you handle backups?
We need to have a longer retention than automated snapshots allow, but want to retain point in time recovery, so can't only rely on native backup to s3.
Was looking at AWS Backup, but that seems $$$$$$$$ compared to S3 storage.
Also, database size is over 4Tb so it's a lot of bits to be pumping around making mistakes.
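One option for the long-retention part (not the point-in-time part) is copying automated snapshots to manual snapshots on whatever schedule you need, since manual snapshots aren't subject to the retention window. Sketch with hypothetical identifiers:
```
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier rds:mydb-2021-01-01-06-00 \
  --target-db-snapshot-identifier mydb-monthly-2021-01 \
  --copy-tags
```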
imran hussainabout 5 years ago
Hi All,
Glad to have been invited to the channel. I think that the work that you guys do is great and have used some of the modules in the past keep up the great work 🙂
I wonder if anyone can help me with a problem I have. I am trying to serve multiple React apps from the same S3 bucket. We have a single domain, i.e. my.example.com:
I have configured a CDN with an S3 backend, with a policy that allows the CDN to access S3 objects, and the S3
bucket is configured for static website hosting. The default page to serve is set to index.html, as well as the default error page.
The bucket is then used to host 2 React applications under 2 different paths, say /App1 and /App2.
The CDN has been set up with 2 origins and path patterns of /App1/ and /App2/, plus /api/* which points to an API gateway.
The Default Root Object is not set, which should defer serving of the root document to S3 under the origin that has been set with the path, i.e.
s3.bucket.domain/path, i.e. mybucket.s3.eu-west-2.amazonaws.com/App1 or mybucket.s3.eu-west-2.amazonaws.com/App2 or mybucket.s3.eu-west-2.amazonaws.com/api
The behaviours are: path_pattern /App1/ has the origin that points to mybucket.s3.eu-west-2.amazonaws.com/App1, path_pattern => /App2/ : origin => mybucket.s3.eu-west-2.amazonaws.com/App2/*, path_pattern => /api/* : origin => mybucket.s3.eu-west-2.amazonaws.com/api/*, with the default path_pattern pointing to the origin mybucket.s3.eu-west-2.amazonaws.com/App1, in that order in the behaviours.
Whenever I request anything from my.example.com/App2/, the default behaviour is applied; the same goes for mybucket.s3.eu-west-2.amazonaws.com/App1/. In fact, every page request returns the same index.html no matter what page I ask for, whether it's an image.png or blah.js. The files exist under /App1 and /App2 in the S3 bucket.
But for mybucket.s3.eu-west-2.amazonaws.com/api/someapi/ it seems to work fine.
There is no index.html at the root of the S3 bucket and all files are either in /App1/ or /App2/ in the bucket.
Has anyone done this before, or does anyone know of a way I can get this to work? Note everything has to be served under a single domain.
Any help would be welcome
sheldonhabout 5 years ago
🐕️🐩🐶🐾🦴 attention datadog pros 😁
I need to distribute datadog across a fleet of machines. I can do this with SSM. I use chocolatey for windows.
However, chef/puppet have modules for this that help configure the various yaml files on demand. I'm wondering if chef, puppet, or salt have any simple-to-implement "masterless" approach like Ansible's, so I can implement with minimal fuss for a single package like this and leverage their config options.
The problem isn't the install... It's the config.
Do I just have one giant config directory that everything uses including all possible log paths, or will this be throwing errors and causing needless failed checks constantly?
If I have to customize the collector per type of instance then I'm back in scripting a config and flipping from json to yaml. I don't mind but if I can avoid this extra programmatic overhead and maintenance I want to figure that out.
sheldonhabout 5 years ago
Anyone use aws-vault + docker? I can normally mount my Codespaces container with aws creds if I use the plain-text file. I prefer aws-vault, but it becomes problematic with this docker-based work environment.
thinking maybe I could use a file backend, mount this, and then install aws-vault in the container to accomplish this?
I prefer aws-vault, but since I'm remote and no one has access to my machine, it might just be easier to stick with the plain-text cred file 🙈
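The file-backend route should work, as I understand it: keep the encrypted keyring on the host, mount it, and run aws-vault inside the container. A sketch (hypothetical image/profile names; it will prompt for the file passphrase):
```
# On the host: use the encrypted file keyring instead of the OS keychain.
export AWS_VAULT_BACKEND=file
aws-vault add myprofile        # stores creds under ~/.awsvault (passphrase-protected)

# Mount the keyring and AWS config into the dev container and use aws-vault there.
docker run -it \
  -v "$HOME/.awsvault:/root/.awsvault" \
  -v "$HOME/.aws:/root/.aws:ro" \
  -e AWS_VAULT_BACKEND=file \
  my-dev-image \
  aws-vault exec myprofile -- aws sts get-caller-identity
```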
OliverSabout 5 years ago(edited)
Does anyone know how to programmatically get the permissions needed for a given action? E.g. here is a bunch of actions (which the AWS docs also refer to as operations) and the corresponding permissions:
s3:HeadBucket -> s3:ListBucket
s3:HeadObject -> s3:GetObject, s3:ListBucket
s3:GetBucketEncryption -> s3:GetEncryptionConfiguration
s3:GetBucketLifecycleConfiguration -> s3:GetLifecycleConfiguration
s3:GetObjectLockConfiguration -> docs don't say
Surely there is a table or an AWS CLI command to get this mapping? E.g. something like
aws iam get-permissions --action s3:HeadBucket and the output would be s3:ListBucket.