29 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Vinko Vrsalovic over 2 years ago
Hi, what do I need to read about to use autodiscovery (or constant names) in an ECS cluster? I need services to talk to each other: service A has IP 10.0.1.2 and service B has 10.0.1.4, and using those IP addresses to refer to each other only works until a task is restarted.
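The search terms to read up on are ECS service discovery with AWS Cloud Map (or the newer ECS Service Connect), which gives each service a stable DNS name instead of a task IP. A minimal Terraform sketch, assuming awsvpc networking and hypothetical names (internal.local, service-b, the var.* inputs):

resource "aws_service_discovery_private_dns_namespace" "this" {
  name = "internal.local" # hypothetical; services resolve as <name>.internal.local
  vpc  = var.vpc_id
}

resource "aws_service_discovery_service" "service_b" {
  name = "service-b"
  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.this.id
    dns_records {
      type = "A"
      ttl  = 10
    }
  }
}

resource "aws_ecs_service" "service_b" {
  name            = "service-b"
  cluster         = var.cluster_arn
  task_definition = var.task_definition_arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [var.service_sg_id]
  }

  # Registers each task's IP in Cloud Map, so service A can call
  # service-b.internal.local instead of a hard-coded task IP.
  service_registries {
    registry_arn = aws_service_discovery_service.service_b.arn
  }
}

Service A then addresses service-b.internal.local, and the DNS record follows task restarts.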
dlowrenz over 2 years ago
Hey guys, any comments on this one?
https://medium.com/@TechStoryLines/prepare-your-wallet-aws-now-bills-for-your-public-ipv4-addresses-even-when-attached-d2ed6c1fa533
Brenden over 2 years ago
Hi, just wondering if anyone is aware of a Terraform / CloudFormation template that has CIS 1.4 monitoring configured?
Renesh reddy over 2 years ago
Hi all,
We have GitHub repos where one repo depends on another. The first repo is containerized; the second repo is only a file system. For the second repo I have built an image and pushed it to ECR, so now I can use the second repo's file system in the first repo.
My question: whenever anything is committed to the second repo, the first repo's CodePipeline should run, right?
(I created a CloudWatch event and copied its ARN into a GitHub webhook, but it is not working.)
Any solutions?
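A GitHub webhook can't consume a CloudWatch event ARN, which is likely why that wiring never fires. One common pattern, sketched below with hypothetical names (second-repo, the pipeline and role variables), is an EventBridge rule that starts the first repo's pipeline whenever the second repo's image is pushed to ECR:

resource "aws_cloudwatch_event_rule" "ecr_push" {
  name = "second-repo-image-pushed"
  event_pattern = jsonencode({
    source        = ["aws.ecr"]
    "detail-type" = ["ECR Image Action"]
    detail = {
      "action-type"     = ["PUSH"]
      result            = ["SUCCESS"]
      "repository-name" = ["second-repo"] # hypothetical ECR repo name
    }
  })
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule = aws_cloudwatch_event_rule.ecr_push.name
  arn  = var.first_repo_pipeline_arn # ARN of the first repo's CodePipeline
  # This role must allow codepipeline:StartPipelineExecution on the pipeline.
  role_arn = var.events_start_pipeline_role_arn
}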
Iulian Diminenco over 2 years ago
Hi all,
Iulian Diminenco over 2 years ago
Guys, I have a small problem with the SES service in AWS. Previously, I created a configuration set named 'xxx,' and even though it has been deleted, I often receive an error when sending an email. The error message says, "Configuration set xxx doesn't exist," and it includes error code 554. Do any of you have ideas on how to solve such issues?
el over 2 years ago
👋 for providing access to S3 resources, is there a good rule of thumb for when to add a policy to the principal vs. when to add a policy to the bucket? e.g. comparing the following two:
1) attach a policy to role to allow it to access an s3 bucket
2) attach a policy to the bucket to allow the role to access it
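The rule of thumb usually cited: within a single account, either one alone grants access, so an identity policy on the role (option 1) scales better when one role touches many buckets and keeps permissions auditable per principal; a bucket policy (option 2) is required for cross-account principals and for bucket-wide guardrails (e.g. TLS-only conditions). Both shapes as a hedged Terraform sketch, all names hypothetical:

# Option 1: identity-based policy attached to the role
resource "aws_iam_role_policy" "s3_access" {
  role = aws_iam_role.app.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:ListBucket"]
      Resource = [aws_s3_bucket.data.arn, "${aws_s3_bucket.data.arn}/*"]
    }]
  })
}

# Option 2: resource-based policy attached to the bucket
resource "aws_s3_bucket_policy" "allow_role" {
  bucket = aws_s3_bucket.data.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = aws_iam_role.app.arn }
      Action    = ["s3:GetObject", "s3:ListBucket"]
      Resource  = [aws_s3_bucket.data.arn, "${aws_s3_bucket.data.arn}/*"]
    }]
  })
}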
Sairam Madichetty over 2 years ago
Hi all, need some suggestions here.
I have EC2 instances in an ASG with Prometheus exporters configured. Can I send the Prometheus metrics to CloudWatch and then use Grafana for visualization?
Felipe Vaca Rueda over 2 years ago
Hi guys, maybe this is not the right channel, but I am looking for help with an Nginx problem.
I am hosting an instance of Grafana and have configured Nginx to require authentication. It works fine with desktop web browsers; my problem is only with mobile browsers, which ask for login on almost every request, roughly every 3 seconds.
I am getting this error with different mobile browsers.
Below is my Nginx configuration:
server {
    listen 3000;
    server_name localhost;
    auth_basic "Enter password!";
    auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    location / {
        proxy_pass http://grafana;
        proxy_set_header Host $http_host;
    }
    location /backup/ {
        proxy_pass http://flask/;
        proxy_read_timeout 600s;
    }
}
I have tried everything, but the issue remains.
Balazs Varga over 2 years ago
hello all,
I have the following IAM role:
{
  "Statement": [
    {
      "Action": [
        "s3:GetReplicationConfiguration",
        "s3:ListBucket",
        "s3:PutInventoryConfiguration"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::sourcebucket"
      ]
    },
    {
      "Action": [
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectVersionForReplication",
        "s3:GetObjectVersionTagging",
        "s3:PutInventoryConfiguration"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::sourcebucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetObjectTagging",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::destinationbucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::reportbucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::reportbucket/*"
      ]
    },
    {
      "Sid": "AllowS3ReplicationSourceRoleToUseTheKey",
      "Effect": "Allow",
      "Action": [
        "kms:GenerateDataKey",
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "*"
    },
    {
      "Action": [
        "s3:GetBucketVersioning",
        "s3:PutBucketVersioning",
        "s3:ReplicateObject",
        "s3:ReplicateTags",
        "s3:ReplicateDelete",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectTagging",
        "s3:ObjectOwnerOverrideToBucketOwner"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::sourcebucket",
        "arn:aws:s3:::destinationbucket",
        "arn:aws:s3:::sourcebucket/*",
        "arn:aws:s3:::destinationbucket/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
I have Velero backups in region A encrypted with KMS and I would like to replicate them to another region. Thanks.
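For KMS-encrypted objects the role is only half the story: the replication rule itself must opt in to SSE-KMS objects and name a replica key in the destination region, and the role needs kms:Decrypt on the source key plus kms:Encrypt/kms:GenerateDataKey on the destination key (the Resource "*" KMS statement above would cover that). A hedged sketch of such a rule, with hypothetical resource names and the destination bucket assumed to already exist in region B under its own provider alias:

resource "aws_s3_bucket_replication_configuration" "velero" {
  bucket = aws_s3_bucket.sourcebucket.id
  role   = aws_iam_role.replication.arn # the role shown above

  rule {
    id     = "velero-to-region-b"
    status = "Enabled"

    filter {}

    delete_marker_replication {
      status = "Disabled"
    }

    # Without this block, SSE-KMS-encrypted objects are skipped entirely.
    source_selection_criteria {
      sse_kms_encrypted_objects {
        status = "Enabled"
      }
    }

    destination {
      bucket = aws_s3_bucket.destinationbucket.arn

      # Replicas are re-encrypted with a key that lives in region B.
      encryption_configuration {
        replica_kms_key_id = aws_kms_key.region_b.arn
      }
    }
  }
}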
Mike Crowe over 2 years ago
Hey folks, anybody have an example of setting up a VPC with dynamic subnets for a Lambda with access to an RDS instance in the private subnet? I have this so far: https://gist.github.com/drmikecrowe/5c8b3bead3536f77511137417f15db39
No matter what I do, I can't seem to get the routing to allow the Lambdas in the public subnets to reach internet (and AWS) services.
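One likely culprit: Lambda ENIs never receive public IPs, so a function placed in a public subnet cannot reach the internet through the internet gateway. The usual layout is to put the function in private subnets whose default route points at a NAT gateway. A hedged sketch, assuming the dynamic-subnets module's outputs and otherwise hypothetical names:

resource "aws_lambda_function" "app" {
  function_name = "rds-client" # hypothetical
  role          = var.lambda_role_arn
  runtime       = "nodejs18.x"
  handler       = "index.handler"
  filename      = "lambda.zip"

  vpc_config {
    # Private subnets route 0.0.0.0/0 to a NAT gateway, which is what
    # gives the function internet (and public AWS endpoint) access.
    subnet_ids         = module.subnets.private_subnet_ids
    security_group_ids = [var.lambda_sg_id]
  }
}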
Jacob Larvie over 2 years ago
Hi. I am working on a relatively new AWS org with a few accounts. The company only has a few projects/resources running in these accounts ATM. I just joined the company and have the opportunity to build out the AWS environment with Terraform. I was planning to carve out a big chunk of the 10/8 (RFC 1918) block for AWS, but one of the networking guys is really pushing for us to use 100.64/10 (RFC 6598) exclusively. I see that 100.64/10 is allowed and implemented in AWS for non-routable resources like EKS pods, and that Netflix has used it, though still alongside some of 10/8.
Do any of you have any experience running AWS enterprise-level networking with transit gateways and such using only 100.64/10? I do not want to agree to using only 100.64 and discover some caveat later where we have to hack in additional NATing or whatever to communicate with other networks/services.
dude over 2 years ago
@Vlad Ionescu (he/him) We've noticed with some folks that Identity Center is starting to display the challenge code and ask users to verify it matches. Are you aware of anything that AWS might have unofficially updated? It feels like a stealth update was applied for security.
Kirupa Karan over 2 years ago
Hello everyone,
We are encountering a peculiar issue with our Windows 2012 R2 EC2 instance. We have installed the AWS CLI for all users. However, when attempting to use the AWS command in one of the user accounts, it fails to open properly, prompting an application error. As a workaround, we have been using the administrator account to execute AWS commands.
Additionally, we have scheduled jobs responsible for transferring files between the local system and Amazon S3. These jobs sporadically run successfully but often fail. It's worth noting that we are operating behind a proxy.
I would greatly appreciate your suggestions on resolving this issue
Slackbot over 2 years ago
Some older messages are unavailable. Due to the retention policies of an organization in this channel, all their messages and files from before this date have been deleted.
Sairam Madichetty over 2 years ago
Hey folks, need some help with this issue:
I have managed to get the custom metrics from Prometheus into CloudWatch and configured the CloudWatch datasource in Grafana.
I can see the namespace in the panel but I don't see any metrics in it.
However, in the CloudWatch console there are 1000+ metrics available under the same namespace.
Thanks
PePe Amengual over 2 years ago
anyone used, done something like this https://www.youtube.com/watch?v=MKc9r6xOTpk 🧵
Isaac over 2 years ago
Some weekend reading from AWS: https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html
Alex Atkinson over 2 years ago
AWS CloudFront question:
• LIMIT: Response headers policies per AWS account: 20
• Desire: Be able to set response headers policies for all common MIME types, allowing devs to upload whatever they want to S3 without having to set the MIME type per file, so that the Content-Type header is present for all files served via CloudFront.
Considering npm build can spew out who knows what, and who knows how much, uploading per file seems awful, so I'm assuming folks use response headers policies as follows to inject Content-Type headers... unless there's another way (that isn't giving up CloudFront).
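Worth noting before burning policies on this: CloudFront serves whatever Content-Type is stored in the S3 object metadata, and that can be set at upload time rather than at response time (the aws s3 sync CLI also guesses MIME types by extension, which may already be enough). A hedged Terraform sketch with a hypothetical dist/ directory and a deliberately small suffix map:

locals {
  # Hypothetical, deliberately incomplete suffix map; extend as needed.
  mime_types = {
    ".html" = "text/html"
    ".css"  = "text/css"
    ".js"   = "application/javascript"
    ".json" = "application/json"
    ".svg"  = "image/svg+xml"
    ".png"  = "image/png"
  }
}

resource "aws_s3_object" "asset" {
  for_each = fileset("${path.module}/dist", "**")

  bucket = aws_s3_bucket.site.id # hypothetical bucket
  key    = each.value
  source = "${path.module}/dist/${each.value}"
  etag   = filemd5("${path.module}/dist/${each.value}")

  # S3 stores this as object metadata and CloudFront serves it as-is,
  # so no response headers policy is consumed.
  content_type = lookup(local.mime_types, try(regex("\\.[^.]+$", each.value), ""), "application/octet-stream")
}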
Thomas over 2 years ago
Hello, I need some help with some architecture work:
I'm trying to automate extracting data from DynamoDB into an S3 bucket. This is what I have come up with so far (see image). The data will be extracted via a Lambda function that will run every hour when there have been changes to the database. Would this be best practice, or is there another way to approach this?
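An alternative to hourly polling is letting the table's stream drive the export, so the Lambda runs only when items actually change (DynamoDB also offers a native point-in-time export to S3 if periodic full snapshots are acceptable). A minimal sketch with hypothetical table and function names:

resource "aws_dynamodb_table" "data" {
  name             = "orders" # hypothetical
  billing_mode     = "PAY_PER_REQUEST"
  hash_key         = "id"
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  attribute {
    name = "id"
    type = "S"
  }
}

# Invokes the export Lambda with batches of changed items; the function
# then writes them to the S3 bucket.
resource "aws_lambda_event_source_mapping" "ddb_to_s3" {
  event_source_arn  = aws_dynamodb_table.data.stream_arn
  function_name     = aws_lambda_function.export.arn # hypothetical function
  starting_position = "LATEST"
  batch_size        = 100
}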
Craig over 2 years ago
Are there any usage examples for the https://github.com/cloudposse/terraform-aws-eks-node-group module that expand on how to create a node group that only has nodes in a private subnet?
Craig over 2 years ago
Looking at the example I see an output that references private_subnet_cidrs
Craig over 2 years ago
But I don’t see how this output is actually discovered
Craig over 2 years ago
Ah I see it comes from https://github.com/cloudposse/terraform-aws-dynamic-subnets
Craig over 2 years ago
buuut…if I already have a VPC, with existing subnets
Craig over 2 years ago (edited)
that I’d like to attach to and simply use to discover existing private & public subnets… can the module do this?
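The node-group module ultimately just takes subnet IDs, so with an existing VPC the discovery can come from data sources instead of terraform-aws-dynamic-subnets. A hedged sketch, assuming the private subnets carry a distinguishing tag (the tag key here is a hypothetical convention):

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
  tags = {
    "kubernetes.io/role/internal-elb" = "1" # hypothetical tagging scheme
  }
}

module "node_group" {
  source = "cloudposse/eks-node-group/aws"

  cluster_name = var.cluster_name
  subnet_ids   = data.aws_subnets.private.ids
  # ...remaining node-group inputs (instance types, sizes, etc.)...
}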
PePe Amengual over 2 years ago
just taking a training on AFT, Orgs and such and I saw this: 🧵
Aadhesh over 2 years ago
Hi all. Might be a basic ask. I am trying to get an on-demand pricing comparison for compute, storage, and network between AWS, Azure, and Alibaba Cloud for the China region.
It seems the pricing calculator domains for the China region are different and don't surface much detail easily (especially object storage).
If any of you have come across this requirement and have this data ready, could you please share? It would be a great help in addressing this urgent query from a stakeholder.
vamshi over 2 years ago
Hi everyone, we are trying to use the https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/0.76.0/main.tf module. When I enable s3_origin_enabled = true, we get an error at line 210, data "aws_iam_policy_document" "combined", telling us to remove the Sid or make it unique ("please use a unique Sid"). How can we fix this duplicate-Sid issue while still using the same module? Can anyone help?
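For anyone hitting the same error, the underlying mechanic: aws_iam_policy_document rejects duplicate Sids when merging documents via source_policy_documents, so any statements fed into the module's combined bucket policy need Sids that don't collide with the module's own. A standalone illustration of the collision and the fix:

data "aws_iam_policy_document" "a" {
  statement {
    sid       = "ReadObjects"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"]
  }
}

data "aws_iam_policy_document" "b" {
  statement {
    sid       = "ReadObjectsFromCdn" # reusing "ReadObjects" here would fail the merge
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"]
  }
}

data "aws_iam_policy_document" "combined" {
  source_policy_documents = [
    data.aws_iam_policy_document.a.json,
    data.aws_iam_policy_document.b.json,
  ]
}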