46 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Bart Coddensalmost 4 years ago
Trusted Advisor supports an organizational view. Can you create such a report via the CLI?
sohaibahmed98almost 4 years ago
Shreyank Sharmaalmost 4 years ago
We are running a web application written in Java (Tomcat 8) hosted on AWS Elastic Beanstalk.
Some weeks back we started getting 503 errors randomly.
When we checked elasticbeanstalk-error_log, the errors suggest a connection problem with a backend Unix socket:
[Thu Mar 03 13:22:12.906144 2022] [proxy:error] [pid 14882:tid 139757338711808] (13)Permission denied: AH02454: HTTP: attempt to connect to Unix domain socket /var/run/httpd/ (localhost) failed
[Thu Mar 03 13:22:12.906202 2022] [proxy_http:error] [pid 14882:tid 139757338711808] [client 172.31.17.0:61382] AH01114: HTTP: failed to make connection to backend: httpd-UDS, referer: http://our-domain.com/1/callBackLog.jsp
When we checked the /var/run/httpd/ folder, there were no Unix sockets (.sock files).
But in the Apache httpd config the proxy backend is an IP address, not a Unix socket:
<VirtualHost *:80>
  <Proxy *>
    Require all granted
  </Proxy>
  ProxyPass / http://localhost:8080/ retry=0
  ProxyPassReverse / http://localhost:8080/
  ProxyPreserveHost on
  ErrorLog /var/log/httpd/elasticbeanstalk-error_log
</VirtualHost>
Per the config, httpd should connect to the backend address (localhost:8080), so why is it complaining about a Unix socket?
Has anyone faced similar issues?
Antarr Byrdalmost 4 years ago
Anyone familiar with AWS Transfer? I’m trying to upload a file using SFTP but I’m getting a permissions denied error
sftp> put desktop.ini
Uploading desktop.ini to /reports/desktop.ini
remote open("/reports/desktop.ini"): Permission denied
sftp>
Nishant Thoratalmost 4 years ago
When it comes to cloud data leak protection all eyes turn to public S3 buckets or public EC2 servers. But even if your EC2 instance is not exposed, the data may still leak. EBS volumes are plentiful and should be continuously assessed for risk.
Two ways EBS volumes can leak data:
(Unintended) Public EBS volumes snapshots and Unencrypted EBS volumes/snapshots.
https://www.cloudyali.io/blogs/finding-unencrypted-aws-ebs-volumes-at-scale
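A minimal sketch of the kind of continuous assessment described above, assuming response dicts shaped like the EC2 describe_volumes and describe_snapshot_attribute APIs (fetching them with boto3 is omitted here; the function names are mine):

```python
# Sketch: flag the two risky EBS cases called out above, working on dicts
# shaped like the EC2 API responses. In real use you would feed in
# ec2.describe_volumes()["Volumes"] and the CreateVolumePermissions list
# from ec2.describe_snapshot_attribute(..., Attribute="createVolumePermission").

def unencrypted_volume_ids(volumes):
    """Return IDs of volumes whose 'Encrypted' flag is false or missing."""
    return [v["VolumeId"] for v in volumes if not v.get("Encrypted", False)]

def snapshot_is_public(create_volume_permissions):
    """A snapshot is public if its createVolumePermission grants Group 'all'."""
    return any(p.get("Group") == "all" for p in create_volume_permissions)
```

Running this on paginated describe output across all regions is the "at scale" part the linked post covers.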
Shreya Chouhanalmost 4 years ago
Hi, can anyone help me with this problem?
There are 50-60 accounts under an AWS Organization. There is a central account which manages all of them. If a backup fails in any account, I want to be notified by email.
In short, the solution shouldn't be implemented in every account, just one account that manages all of them.
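One common pattern for this is cross-account EventBridge: each member account gets a rule forwarding AWS Backup job events to an event bus in the central account, and a central rule routes failures to an SNS topic with an email subscription. A sketch of the matching logic, with the caveat that the detail-type and state values below are from memory of the AWS Backup event format and should be verified against the AWS Backup events reference:

```python
# Hedged sketch: an EventBridge event pattern that matches unsuccessful
# AWS Backup jobs, plus a tiny matcher to show how the pattern reads
# (lists in a pattern mean "any of these values").
FAILED_BACKUP_PATTERN = {
    "source": ["aws.backup"],
    "detail-type": ["Backup Job State Change"],
    "detail": {"state": ["FAILED", "ABORTED", "EXPIRED"]},
}

def matches(pattern, event):
    """Minimal matcher for this flat pattern shape."""
    if event.get("source") not in pattern["source"]:
        return False
    if event.get("detail-type") not in pattern["detail-type"]:
        return False
    return event.get("detail", {}).get("state") in pattern["detail"]["state"]
```

The member-account side only needs a rule with this pattern and the central bus ARN as target, so nothing beyond one rule is deployed per account.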
Balazs Vargaalmost 4 years ago
Facing the following issue on Aurora Serverless:
Aborted connection 148 to db: 'db' user: 'user' host: 'rds_ip' (Got an error reading communication packets)
Any advice?
Baronne Moutonalmost 4 years ago
hi, can I request a feature for the cloudposse/terraform-aws-backup module?
It doesn't appear to have the ability to enable or set VSS (volume shadow copy service).
In the aws_backup_plan resource, the following addition is required:
advanced_backup_setting {
  backup_options = {
    WindowsVSS = "enabled"
  }
  resource_type = "EC2"
}
michael sewalmost 4 years ago
Any JQ / AWS CLI junkies here? I'm trying to get the SubnetId and Name (from the tag) from vpc subnets matching a string. I'm having a problem getting those things out of arrays and into a simpler format I can parse
aws ec2 describe-subnets --filters "Name=tag:Name,Values=*private1*" \
--query 'Subnets[*].[ SubnetId, Tags[?Key==`Name`].Value ]'
[
  [
    "subnet-00c332dca528235fe",
    [
      "my-vpc-private1-us-east-1a-subnet"
    ]
  ]
]
Matt Gowiealmost 4 years ago
Leapp just released v0.10.0 — https://www.leapp.cloud/releases
Super excited about this release as it enables logging into the AWS console from Leapp, which was my major feature request before being able to fully switch away from aws-vault. If any of ya’ll are using aws-vault then be sure to check out Leapp — It’s a vastly better tool.
Neil Joharialmost 4 years ago
Hey team! Just trying to design something and wanted to know if my understanding is correct from one of you legends 🙇
1. In an AppMesh, does it make sense to always have every Virtual Service associated with a Virtual Router and Virtual Node?
2. Is the way you allow communication between two Virtual Nodes via the “backends”? Why does 2-way traffic work if backends are meant to be egress traffic only?
3. Can I have a route in my Virtual Gateway direct traffic to CloudFront? I wanted to have a Virtual Node whose service discovery was DNS hostname = CloudFront address
Balazs Vargaalmost 4 years ago
If I have an MFA-delete-enabled bucket and I would like to exclude a folder from this "role", how can I do that?
Jesus Martinezalmost 4 years ago
In the policy you explicitly deny that folder
Jesus Martinezalmost 4 years ago
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::MyExampleBucket/folder1",
    "arn:aws:s3:::MyExampleBucket/folder1/*"
  ]
}
Jesus Martinezalmost 4 years ago
More info: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
Neil Joharialmost 4 years ago
Hey team, I’m going absolutely nuts…
Has anyone here had success with App Mesh with envoy proxy? I can’t seem to get envoy to work correctly, despite all logs looking good 😞
More details in 🧵
Frankalmost 4 years ago
I was checking this example, and am wondering if it could even work?
https://github.com/cloudposse/terraform-aws-transit-gateway/blob/master/examples/multi-account/main.tf
Frankalmost 4 years ago
module.transit_gateway has a dependency on module.transit_gateway_vpc_attachments_and_subnet_routes_prod and vice versa
Frankalmost 4 years ago
wouldn’t this cause circular dependencies?
Stevealmost 4 years ago
Anyone know if elasticache redis cluster mode can be changed to cluster mode disabled? Not seeing a clear answer in aws docs.
We're trying to do a data migration to a new cluster.
muhahaalmost 4 years ago(edited)
Hey, anyone running Fedora CoreOS? Well, it's not directly related to FCOS, but how are you handling mapping EBS disks when they differ from instance type to instance type? /dev/xvdb vs /dev/nvme1 for example. Thanks
jonjitsualmost 4 years ago
Anyone have any examples with SSM automation creating an aws_ssm_association with an 'Automation' document (not Command)?
PePe Amengualalmost 4 years ago(edited)
EventBridge question: is it possible to create a policy to allow another event bridge to send events cross account?
PePe Amengualalmost 4 years ago
I can’t approve this one : https://github.com/cloudposse/terraform-aws-datadog-lambda-forwarder/pull/31
BitsnBitesalmost 4 years ago
Is anyone experienced in resetting CloudFormation? We use it for a media CDN and it seems it's in a bad state, but AWS only helps paying support customers.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html I worry trying this as it won't actually resolve the issue because it says the stack will be in an inconsistent state, and I think that is already the case. The error happens in the UPDATE_ROLLBACK_COMPLETE event. I posted about it here https://stackoverflow.com/questions/71380124/how-to-fix-the-following-resources-failed-to-update-cloudfront
An error always occurs when trying to apply the new set of changes, and then rolling back generates the UPDATE_ROLLBACK_* errors.
lorenalmost 4 years ago
When writing a lambda intended to be invoked by a scheduled event, do you prefer to make it configurable via the event data structure, or via environment variables?
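For what it's worth, the two aren't mutually exclusive; a common sketch is environment variables for deployment-time defaults with the scheduled event's payload overriding per invocation (the TABLE/BATCH_SIZE settings below are made-up examples):

```python
import os

# Sketch: env vars carry defaults baked in at deploy time, while the
# EventBridge schedule's input payload can override them per rule, so one
# function can serve several schedules with different settings.
def handler(event, context=None):
    config = {
        "table": os.environ.get("TABLE", "default-table"),
        "batch_size": int(os.environ.get("BATCH_SIZE", "100")),
    }
    config.update(event or {})  # event data wins when present
    return config
```

The env-var route forces a redeploy to change settings; the event route lets each schedule carry its own input, which is why the hybrid tends to age well.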
PePe Amengualalmost 4 years ago
is there a way to force lambda to connect to https endpoints only?
Milosbalmost 4 years ago
Is it possible to use the AWS Load Balancer Controller with a self-managed Kubernetes cluster (not EKS)?
azecalmost 4 years ago
Hi there!
I’ve got an architectural question that I couldn’t find the answer to myself.
I have:
• Aurora PostgreSQL in VPC A with all DB instances in private subnets, in AWS account 1
• AWS Lambda function in VPC B within private subnets, in AWS account 1
• K8S EKS in VPC C, in AWS account 1
• Transit GW with route propagation and attachments for all 3 VPCs (A, B, C), in AWS account 2
The DB needs to trigger Lambda using the aws_lambda() extension for PostgreSQL.
When triggered, Lambda needs to make a REST API call to a Service running in K8S via HTTP. The DNS for that K8S Service is exposed via External DNS as a record in Route53 in a public hosted zone. So basically the LB DNS is resolvable from anywhere (including the public Internet, as well as the Lambda VPC). The LB itself is however internal and the IPs are within the VPC C CIDR range. Based on all existing route tables for the private subnets of the Lambda ENI attachments, I doubt there will be problems routing traffic from Lambda to that K8S Service (exposed via LB).
However, the more challenging part for me is connectivity between the DB (in VPC A) and Lambda (in VPC B).
I was reading the docs on invoking an AWS Lambda function from an Aurora PostgreSQL DB cluster, and they mention that for DB instances in private subnets there are two approaches:
a) Using NAT Gateways
b) Using VPC endpoints
But they don’t elaborate on how NAT Gateways could be used in this scenario. They just outline the steps needed to accomplish this connectivity using option (b), VPC endpoints.
While I don’t have a problem taking approach (b), I would like to understand how I could invoke Lambda without a public API endpoint even without using (b), considering I already have routing among all these VPCs established using the transit gateway.
If anyone has done something similar or has hints based on past experience, I would appreciate chatting about it!
Thanks!
Adnanalmost 4 years ago
Hi there,
I was wondering if any of you use spot instances for EKS worker nodes?
If yes, do you manage it by yourself or do you use some software?
I see, for example, this OS project https://github.com/cloudutil/AutoSpotting
I wonder if a paid service is worth it compared to an open source solution?
Nikolai Momotalmost 4 years ago
I have a Windows image that I use for hosting IIS on EC2.
Recently I've been trying to automate the image build using Packer, which managed to build a Windows AMI with IIS installed and set up on it.
However, launching this AMI seems to make the instance unreachable - the EC2 system log is blank and SSM no longer recognizes the instance.
Has anyone had this issue before?
I had to add a user_data_file to the Packer build to get WinRM to connect during the build; I suspect this is where the issue stems from, but I haven't been able to figure out why.
tomasalmost 4 years ago
Hello,
I want to ask why I see DKIM SES records in Route53. I'm not using SES or any domain validation. Could you please advise?
Bhavik Patelalmost 4 years ago
I’m currently storing my .env variables for my ECS instance via an S3 file. We just added in an RSA key and it’s a huge pain in the butt to include this as an inline value or pad it with new lines. Does anyone have any other recommendations?
I was thinking about storing it into SSM and have the value pulled in programmatically with boto3
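A sketch of the SSM approach, with the client injected so the fetch logic can be exercised without AWS (in real use you would pass boto3.client("ssm"); the parameter name is a made-up example):

```python
# Sketch: keep the multiline RSA key as an SSM SecureString and read it at
# startup, instead of fighting .env escaping. get_parameter with
# WithDecryption=True returns the decrypted value.
def fetch_private_key(ssm_client, name="/myapp/rsa_private_key"):
    resp = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]

class FakeSSM:
    """Stand-in for boto3's SSM client, for local testing only."""
    def get_parameter(self, Name, WithDecryption=False):
        return {"Parameter": {"Name": Name,
                              "Value": "-----BEGIN RSA PRIVATE KEY-----\n..."}}
```

SecureString parameters hold up to 4 KB (8 KB on the advanced tier), which comfortably fits a PEM key. Note that ECS task definitions can also inject SSM parameters directly as container secrets (the secrets/valueFrom fields), which avoids the in-app call entirely.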
DaniC (he/him)almost 4 years ago
Hi folks, is anyone in a situation where they need to run, from a central place (multiple distributed teams), various SQL queries against many RDS instances scattered across a few regions?
One option I'm considering is a bastion/Cloud9 in a VPC peered with all the other VPCs so we can reach the RDS private subnets. Trouble is how I authenticate folks, etc.
azecalmost 4 years ago(edited)
It seems that the RDS module is not very flexible when it comes to iam_roles handling (see https://github.com/cloudposse/terraform-aws-rds-cluster/blob/0.50.2/main.tf#L86) when trying to bring your own IAM role associations after the cluster is created, using the aws_rds_cluster_role_association resource.
The docs on the aws_rds_cluster resource do have this note:
NOTE on RDS Clusters and RDS Cluster Role Associations
Terraform provides both a standalone RDS Cluster Role Association (an association between an RDS Cluster and a single IAM Role) and an RDS Cluster resource with iam_roles attributes. Use one resource or the other to associate IAM Roles and RDS Clusters. Not doing so will cause a conflict of associations and will result in the association being overwritten.
I have a situation where we are not even passing the iam_roles var to cloudposse/rds-cluster/aws:0.46.2, but I did add a new IAM Role with permissions to trigger Lambda from PostgreSQL and then associated that role with the RDS cluster using the aws_rds_cluster_role_association resource. The resulting situation is:
1. On the first apply, the association is fine; I see the IAM role was added to the RDS cluster and the aws_rds_cluster_role_association resource was stacked.
2. On the consecutive apply, the RDS cluster sees a change - from the real infrastructure it picks up the association, but since iam_roles has a bad default (empty list []), TF computes that as a - change and wants to tear down the IAM role association. The aws_rds_cluster_role_association resource doesn't render any changes in the plan (it still exists). Proceeding with apply on this, it fails with:
Error: DBClusterRoleNotFound: Role ARN arn:aws:iam::<REDACTED>:role/<REDACTED> cannot be found for DB Cluster: <REDACTED>. Verify your role ARN and try again. You might need to include the feature-name parameter.
status code: 404, request id: 039e4bb0-6091-4c19-9b7d-e63472ec859e
azecalmost 4 years ago
However, if we do pass iam_roles like this:
iam_roles = [
  aws_iam_role.rds_lambda_invoke_feature_role.arn
]
we get another error:
Error: Cycle: module.rds_cluster.aws_rds_cluster.secondary, module.rds_cluster.aws_rds_cluster.primary, module.rds_cluster.output.cluster_identifier (expand), aws_iam_role.rds_lambda_invoke_feature_role, module.rds_cluster.var.iam_roles (expand)
So something is not quite right with how iam_roles are handled inside this module.
Andyalmost 4 years ago(edited)
Does anyone have a recommendation for how they size their subnets for an EKS cluster? e.g. for a /19 in us-east-1 with 3x AZs I was considering using something like:
# Only really going to have one public NLB here
10.136.0.0/25   - public      - 126 hosts
10.136.0.128/25 - public      - 126
10.136.1.0/25   - public      - 126
10.136.1.128/25 - spare
10.136.2.0/23   - spare
10.136.4.0/22   - private     - 1,022 hosts
10.136.8.0/22   - private     - 1,022
10.136.12.0/22  - private     - 1,022
10.136.16.0/22  - spare
10.136.20.0/24  - db          - 254
10.136.21.0/24  - db          - 254
10.136.22.0/24  - db          - 254
10.136.23.0/24  - spare
10.136.24.0/24  - elasticache - 254
10.136.25.0/24  - elasticache - 254
10.136.26.0/24  - elasticache - 254
10.136.27.0/24  - spare
10.136.28.0/22  - spare
My approach here was just trying to get big private subnets for the EKS nodes, and then fit in the other subnets around this.
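For what it's worth, the plan above checks out; a quick sanity check with Python's ipaddress module confirms the subnets tile the /19 exactly with no overlaps:

```python
import ipaddress

# Sanity-check the allocation plan: every subnet fits in the /19, none
# overlap, and together they cover the whole block with nothing wasted.
supernet = ipaddress.ip_network("10.136.0.0/19")
subnets = [ipaddress.ip_network(s) for s in [
    "10.136.0.0/25", "10.136.0.128/25", "10.136.1.0/25", "10.136.1.128/25",
    "10.136.2.0/23", "10.136.4.0/22", "10.136.8.0/22", "10.136.12.0/22",
    "10.136.16.0/22", "10.136.20.0/24", "10.136.21.0/24", "10.136.22.0/24",
    "10.136.23.0/24", "10.136.24.0/24", "10.136.25.0/24", "10.136.26.0/24",
    "10.136.27.0/24", "10.136.28.0/22",
]]
assert all(s.subnet_of(supernet) for s in subnets)
assert not any(a.overlaps(b)
               for i, a in enumerate(subnets) for b in subnets[i + 1:])
# Classic host counts (-2 for network/broadcast) match the plan's figures;
# note AWS actually reserves 5 IPs per subnet, so real capacity is slightly lower.
usable = {s: s.num_addresses - 2 for s in subnets}
```

One caveat on the EKS sizing itself: with VPC CNI each pod consumes a subnet IP, so the /22 private subnets bound pod density as well as node count.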
Wilalmost 4 years ago
Howdy everybody, I'm using the cloudposse/terraform-aws-ec2-instance module.
It's working great, however I'd like to turn on instance_metadata_tags inside metadata_options.
Anyone have a suggestion on how I could go about this?
Defaults leave me with this in terraform plan:
+ metadata_options {
    + http_endpoint               = "enabled"
    + http_put_response_hop_limit = 2
    + http_tokens                 = "required"
    + instance_metadata_tags      = "disabled"
  }
Wilalmost 4 years ago
Aw screw it, filed a pull request to fix it. https://github.com/cloudposse/terraform-aws-ec2-instance/pull/122
Maarten van der Hoefalmost 4 years ago
Hello everyone. Does anyone know if conditions like kms:ViaService work cross-account for KMS? It doesn't seem to work. (Context: I have a central KMS account with EKS in other accounts. Current policies are open to arn::::root but I would like to tighten it more.)
Thomas Hoefkensalmost 4 years ago
Hello everyone, do you have any idea how Cognito can return the original request URL to our Angular app?
• A user receives an email with a deep link to our web app protected by Cognito e.g. https://webapp.com/reporting?reportingId=123
• The user clicks the link and is redirected to Cognito: the URL then looks like https://auth.webapp.com/login?client_id=xxx&response_type=code&redirect_uri=https://webapp.com/auth#/reporting?reportingId=123
• After entering UserId and Pass on the Cognito-provided login screen, I can see that a first request is made against this URL: https://auth.webapp.com/login?client_id=xxx&response_type=code&redirect_uri=https://webapp.com/auth
As you may notice, the real deep link is already lost in step 3 and then not passed on in the next step to https://webapp.com/auth?code=56565ab47-xxxx
Could you point me to how getting the original redirect URI to work and to take the user back to the deep link?
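The usual fix is to keep redirect_uri fixed (it must exactly match a whitelisted callback URL) and carry the deep link in the OAuth2 state parameter, which the Cognito hosted UI echoes back untouched on the redirect. A sketch, using the domain and client_id placeholders from the question:

```python
import urllib.parse

# Sketch: put the deep link in `state` rather than appending it to
# redirect_uri. The callback handler reads `state` back after the code
# exchange and routes the user to the original path.
def build_login_url(deep_link_path):
    params = {
        "client_id": "xxx",
        "response_type": "code",
        "redirect_uri": "https://webapp.com/auth",
        "state": deep_link_path,  # urlencode escapes ?, =, / safely
    }
    return "https://auth.webapp.com/login?" + urllib.parse.urlencode(params)

def restore_path(callback_url):
    query = urllib.parse.urlparse(callback_url).query
    return urllib.parse.parse_qs(query).get("state", ["/"])[0]
```

In production, state usually also carries a random nonce for CSRF protection (e.g. nonce plus path, signed or stored server-side), not the bare path alone.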
Aumkar Prajapatialmost 4 years ago(edited)
Hey all, dealing with a rather strange issue with our kOps-based Kubernetes cluster's autoscaling group in AWS. Basically our ASG itself is terminating nodes as a form of balancing it tries to do between AZs. The thing is, AWS is terminating these nodes, which means whatever is running on them is shut down and restarted, which is not ideal because this is a prod cluster. Our cluster-autoscaler already runs in the cluster and occasionally scales things up and down in a controlled manner, while AWS is doing its own form of balancing that appears to be more reckless with our cluster.
Here’s an example of the error, any ideas on what could be causing this? This seems isolated to one cluster only:
At 2022-03-28T07:57:29Z instances were launched to balance instances in zones ca-central-1b ca-central-1a with other zones resulting in more than desired number of instances in the group. At 2022-03-28T07:57:49Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 33 to 32. At 2022-03-28T07:57:49Z instance i-xxx was selected for termination.
Balazs Vargaalmost 4 years ago
Hello all, do you know any S3 maintenance tool running in k8s? We have a few folders that we would like to delete after x days. Using the CLI I can write a script, but it would be better if there is a tool, maybe with a Helm chart. Thanks
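If the folders map to key prefixes, S3 lifecycle expiration rules can do this server-side without any tool running in k8s. A sketch building the configuration dict in the shape boto3's put_bucket_lifecycle_configuration expects (the "tmp/" prefix and 30 days are made-up examples):

```python
# Sketch: an S3 lifecycle rule that deletes everything under a prefix after
# N days, so no cleanup job or Helm chart is needed at all.
def expire_prefix_rule(prefix, days):
    return {
        "ID": f"expire-{prefix.strip('/')}-after-{days}d",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Expiration": {"Days": days},
    }

lifecycle_config = {"Rules": [expire_prefix_rule("tmp/", 30)]}
# With boto3 this would be applied via:
# s3.put_bucket_lifecycle_configuration(Bucket="my-bucket",
#                                       LifecycleConfiguration=lifecycle_config)
```

On versioned buckets you would also add NoncurrentVersionExpiration, otherwise old versions linger after the delete markers appear.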
Balazs Vargaalmost 4 years ago
We use Serverless v1. The version is really old (2.7.01). Is there a way to use the latest, 2.10.02?
Johnmaryalmost 4 years ago
Hello everyone. Please, I need help. I have a Terraform script that creates a MongoDB cluster and nodes, but I have an issue: when I want to move a node from one region to another, or remove a node from a region (for example, I have 3 nodes in 3 regions but now want 3 nodes in 2 regions), Terraform wants to delete the whole cluster and create a new one based on the update, so I would lose data. I want Terraform to do that update without destroying the cluster. Any help will be appreciated.
Kareemalmost 4 years ago
Anyone ever tried using DMS or any other AWS offering to make a copy of a prod database with obfuscated and redacted data?