42 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Fabianover 4 years ago(edited)
Hi. I'm looking at downgrading an AWS RDS database one level from db.m4.xlarge to db.m6g.large. Sometimes I get CPU loads of 50% for 1 hour a day, but only a few days a month. Does anyone have an idea if the cheaper database will be able to handle the load?
Alex Jurkiewiczover 4 years ago
I want to tag my resources with a few levels of categorisation to help with cost allocation. The top level of categorisation is always "product" sold to customers. But then there are one or two layers of more specific categorisation for each infra piece.
Any suggestions on generic names for these tags? I would like to have something that makes sense globally since we want these tags everywhere for cost purposes
Sai Krishnaover 4 years ago
Hi, I am trying to set up the AWS AppFlow Salesforce integration. As part of the setup, the AWS docs require me to configure AWS callback URLs on the Salesforce side: “In the Callback URL text field, enter the URLs for your console for the stages and Regions in which you will use the connected app”. Can someone tell me whether that is just the landing-page URL of the AWS console?
Steve Wade (swade1987)over 4 years ago
does anyone know how to change the EnabledCloudwatchLogsExports of an RDS database instance via the CLI?
Steve Wade (swade1987)over 4 years ago
i am trying to do it via terraform but it's not liking it
Steve Wade (swade1987)over 4 years ago
"EnabledCloudwatchLogsExports": [
"audit",
"general"
],
Steve Wade (swade1987)over 4 years ago
i want to remove audit
Brian Ojedaover 4 years ago
aws rds modify-db-instance --db-instance-identifier abcedf1234 --cloudwatch-logs-export-configuration EnableLogType=general
Steve Wade (swade1987)over 4 years ago
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration '{"EnableLogTypes":["general"]}'
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: No modifications were requested
Steve Wade (swade1987)over 4 years ago
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration EnableLogTypes=general
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: No modifications were requested
Brian Ojedaover 4 years ago
aws rds modify-db-instance --db-instance-identifier abcedf1234 --cloudwatch-logs-export-configuration DisableLogTypes=audit
Steve Wade (swade1987)over 4 years ago
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration DisableLogTypes=audit
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: You cannot use the log types 'audit' with engine version mysql 8.0.21. For supported log types, see the documentation.
Mazin Ahmedover 4 years ago
Can anyone help me with this one? I can't seem to find the reason why it's failing
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject"
],
"Effect": "Deny",
"Resource": ["arn:aws:s3:::test-mazin-12", "arn:aws:s3:::test-mazin-12/*"],
"Condition": {
"StringNotLike": {
"s3:prefix": "allow-me.txt"
}
},
"Principal": "*"
}
]
}
I always get
Conditions do not apply to combination of actions and resources in statement
Maciek Strömichover 4 years ago
expect issues 😉 https://downdetector.com/status/aws-amazon-web-services/map/
Beauover 4 years ago
Does anyone have any good documentation/ideas on deploying to AWS Elastic Beanstalk through a Jenkins pipeline project? We currently just build a grails application into a .war file and deploy it with the Elastic Beanstalk plugin through a freestyle project, but we’re looking to expand the functionality of the build a lot and need to move it to pipeline projects. I can’t really find any documentation on this.
Maybe we need a pipeline to run all the pre-deploy steps and build the project, then use what we already have to deploy what we build? Not sure how people handle this specifically
Frankover 4 years ago
I just had a discussion with one of our developers regarding a customer of ours who wants to be able to access an RDS instance to be able to SELECT some data.
That raises some challenges. Our RDS is in a private subnet and can only be reached externally using a tunnel to our Bastion host.
From a security perspective (but also maintenance) I'm not a fan, as it would mean granting AWS access, creating an IAM role allowing them to connect to the Bastion node, creating a user on the RDS instance and granting appropriate permissions, etc. This doesn't feel like the most practical/logical approach.
How do you guys deal with granting access to externals to "internal" systems?
Saichovskyover 4 years ago(edited)
What’s the best way to enable EBS encryption by default across accounts in an AWS organization? Should I deploy a lambda to all member accounts to enable the setting or is there a better way?
Alex Jurkiewiczover 4 years ago(edited)
are the aws docs down for anyone else? https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html
edit: seem to be back
Alex Jurkiewiczover 4 years ago
nice. You can create a Lambda Layer with a long name that has an ARN longer than what the Lambda API will accept when creating/updating a function. So you can create a layer but never attach it to anything 

PePe Amengualover 4 years ago
at this point I think I get paid to deal with AWS apiisms like this all the time so I do not even get mad anymore
Davidover 4 years ago
Say I have an API Gateway endpoint that triggers an async lambda (meaning the http req will return immediately and the lambda will run async). Say each lambda invocation takes 1 second to run, and my reserved concurrency is 100 lambdas for that function, but my reqs/second to the endpoint is 200.
In this case, the queue of lambda functions would continuously grow it seems like, as there wouldn't be enough lambdas to keep up with the queued events.
I have three questions:
1. How do I monitor if this type of queueing is happening? I use datadog, but any pointers on any system are helpful
2. Can I tell Lambda to just drop messages in the queue older than X hours if the messages are not that important to process?
3. Can I tell Lambda to only process X% of the traffic, discarding the rest?
managedkaosover 4 years ago
Greetings, team! Question about using aws ssm start-session for SSH connections.
I’m able to connect to a server using the following SSH configuration:
Host host_name
    ProxyCommand aws ssm start-session --target=i-1234567890 --document-name AWS-StartSSHSession --parameters 'portNumber=%p'
However, this still requires a user and key to be in place on the target server for the user that’s connecting.
Using aws ssm start-session --target=i-1234567890 directly, instead of as a ProxyCommand, also works and drops me into the server as ssm-user. However, there are too many servers to know them by their ID; the name just works so much better with the human brain. :D
Is there a way to get the functionality of starting a session as ssm-user without using a key? Essentially, I’d like to not have to provision a user on the server but instead gate access with IAM roles that have permission to start an SSM session. 🤔
Beauover 4 years ago
Is it possible to move everything in a VPC from one set of IPs to another? Not sure exactly how that’s worded but my boss who doesn’t have a ton of networking knowledge is the only one allowed in AWS. He set up a VPC for me to use for “dev tools” starting with Prometheus/Grafana server. Using these will require a VPC peering connection to our other VPCs, but I’m 90% sure he used the same CIDR block for both VPCs, which will block the Peering connection.
Is there some way to migrate the instance or 2 I have in the dev tools VPC to a new block, or will I have to tear it down and set up a new one?
Juan Luis Baptisteover 4 years ago(edited)
Hi,
I have a question with the AWS EFS module. I'm trying to build an EFS filesystem and access it from a swarm cluster built with this terraform module:
https://gitlab.com/live9/terraform/modules/-/tree/master/swarm_cluster
and I'm using the EFS module like this:
data "aws_vpc" "selected" {
  # default = true
  id = file("../swarm_cluster/output/vpc_id.txt")
}
data "aws_subnet_ids" "subnet" {
  vpc_id = data.aws_vpc.selected.id
  # filter {
  #   name   = "availabilityZone"
  #   values = ["us-east-2a", "us-east-2b", "us-east-2c"] # insert values here
  # }
}
module "efs" {
  source          = "git::https://github.com/cloudposse/terraform-aws-efs.git?ref=master"
  namespace       = var.project
  stage           = var.environment
  name            = var.name
  region          = var.aws_region
  vpc_id          = data.aws_vpc.selected.id
  subnets         = tolist(data.aws_subnet_ids.subnet.ids)
  security_groups = [file("../swarm_cluster/output/swarm-sg_id.txt")]
  tags            = var.extra_tags
  # zone_id = var.aws_route53_dns_zone_id
}
The swarm module writes the VPC id and security group to files so I can access them from the EFS code. But when running terraform apply I get this error:
Error: MountTargetConflict: mount target already exists in this AZ
{
  RespMetadata: {
    StatusCode: 409,
    RequestID: "5fc21e72-d970-4b28-992d-211acf4c0491"
  },
  ErrorCode: "MountTargetConflict",
  Message_: "mount target already exists in this AZ"
}
on .terraform/modules/efs/main.tf line 22, in resource "aws_efs_mount_target" "default":
  22: resource "aws_efs_mount_target" "default" {
One time for each subnet. What am I doing wrong?
Thanks!
William Chenover 4 years ago
hi
Alex Jurkiewiczover 4 years ago
nice. If you use AWS SSM Session Manager for access to EC2 instances, you can choose to write logs of each session to S3 and encrypt the logs with KMS. However, to do so you need to grant kms:Decrypt permissions to the IAM users that use Session Manager and the EC2 instance profiles that are connected to. Not kms:Encrypt, but Decrypt! 🙄
Chris Fowlesover 4 years ago
how does that even get past the initial design?
Robert Horroxover 4 years ago
Is anyone experiencing issues in us-east-2?
Milosbover 4 years ago
Hi all,
Is there any convenient tool for mfa with aws-cli?
PePe Amengualover 4 years ago
I’m stealing this one from @RB https://aws.amazon.com/about-aws/whats-new/2021/06/kms-multi-region-keys/
PePe Amengualover 4 years ago
for those who work with KMS that is a big deal
Davidover 4 years ago
If I have an API Gateway and I set a global rate limit of 25 reqs / second on it, but I receive 100 reqs per second, am I charged for the requests that are rejected (in the $3.50/million requests pricing)?
Vikram Yerneniover 4 years ago
Folks,
Does anyone know what this master_user_options is meant for? I mean, do we use this master user for actually connecting to the Elasticsearch setup and pumping in the logs?
managedkaosover 4 years ago
Question:
Is there a way to serve a static HTML page from S3 through an ALB?
TLDR:
On occasion I use maintenance pages for long deployments or changes. I do this by creating a /* rule in the ALB listener that reads a local html file for the response content:
resource "aws_lb_listener_rule" "maintenance_page" {
  listener_arn = aws_lb_listener.alb.arn
  action {
    type = "fixed-response"
    fixed_response {
      content_type = "text/html"
      message_body = file("${path.module}/maintenance_page.html")
      status_code  = "200"
    }
  }
  condition {
    path_pattern {
      values = ["/*"]
    }
  }
}
Unfortunately, this method only allows for content that is less than or equal to 1024 bytes. So the page is minimally styled. I’d like to add richer content with CSS and images (well, not me but the developers! 😅) but I know that will require more bytes. I’m thinking maybe the CSS could come from a link but even then, depending on how much is added to make the maintenance page look like the app, it will take more than 1024 bytes.
So I’m thinking we could store the page in S3 and then serve it from there. I’d prefer not to do any DNS dancing with the app endpoint and instead just update what the app is serving from the ALB. Any thoughts or ideas?
venkataover 4 years ago
This is really cool. Just heard about it from another devops group: https://www.allthingsdistributed.com/2021/06/introducing-aws-bugbust.html
Devops alertsover 4 years ago(edited)
Hi everyone, I am trying to create AWS Route53 resources using Terraform, but I want to use the dynamic block approach. Can anyone help me do it the right way?
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CREATE ROUTE53 ZONES AND RECORDS
#
# This module creates one or multiple Route53 zones with associated records
# and a delegation set.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ------------------------------------------------------------------------------
# Prepare locals to keep the code cleaner
# ------------------------------------------------------------------------------
locals {
zones = var.name == null ? [] : try(tolist(var.name), [tostring(var.name)], [])
skip_zone_creation = length(local.zones) == 0
run_in_vpc = length(var.vpc_ids) > 0
skip_delegation_set_creation = !var.module_enabled || local.skip_zone_creation || local.run_in_vpc ? true : var.skip_delegation_set_creation
delegation_set_id = var.delegation_set_id != null ? var.delegation_set_id : try(
aws_route53_delegation_set.delegation_set[0].id, null
)
}
# ------------------------------------------------------------------------------
# Create a delegation set to share the same nameservers among multiple zones
# ------------------------------------------------------------------------------
resource "aws_route53_delegation_set" "delegation_set" {
count = local.skip_delegation_set_creation ? 0 : 1
reference_name = var.reference_name
depends_on = [var.module_depends_on]
}
# ------------------------------------------------------------------------------
# Create the zones
# ------------------------------------------------------------------------------
resource "aws_route53_zone" "zone" {
for_each = var.module_enabled ? toset(local.zones) : []
name = each.value
comment = var.comment
force_destroy = var.force_destroy
delegation_set_id = local.delegation_set_id
dynamic "vpc" {
for_each = { for id in var.vpc_ids : id => id }
content {
vpc_id = vpc.value
}
}
tags = merge(
{ Name = each.value },
var.tags
)
depends_on = [var.module_depends_on]
}
# ------------------------------------------------------------------------------
# Prepare the records
# ------------------------------------------------------------------------------
locals {
records_expanded = {
for i, record in var.records : join("-", compact([
lower(record.type),
try(lower(record.set_identifier), ""),
try(lower(record.failover), ""),
try(lower(record.name), ""),
])) => {
type = record.type
name = try(record.name, "")
ttl = try(record.ttl, null)
alias = {
name = try(record.alias.name, null)
zone_id = try(record.alias.zone_id, null)
evaluate_target_health = try(record.alias.evaluate_target_health, null)
}
allow_overwrite = try(record.allow_overwrite, var.allow_overwrite)
health_check_id = try(record.health_check_id, null)
idx = i
set_identifier = try(record.set_identifier, null)
weight = try(record.weight, null)
failover = try(record.failover, null)
}
}
records_by_name = {
for product in setproduct(local.zones, keys(local.records_expanded)) : "${product[1]}-${product[0]}" => {
zone_id = try(aws_route53_zone.zone[product[0]].id, null)
type = local.records_expanded[product[1]].type
name = local.records_expanded[product[1]].name
ttl = local.records_expanded[product[1]].ttl
alias = local.records_expanded[product[1]].alias
allow_overwrite = local.records_expanded[product[1]].allow_overwrite
health_check_id = local.records_expanded[product[1]].health_check_id
idx = local.records_expanded[product[1]].idx
set_identifier = local.records_expanded[product[1]].set_identifier
weight = local.records_expanded[product[1]].weight
failover = local.records_expanded[product[1]].failover
}
}
records_by_zone_id = {
for id, record in local.records_expanded : id => {
zone_id = var.zone_id
type = record.type
name = record.name
ttl = record.ttl
alias = record.alias
allow_overwrite = record.allow_overwrite
health_check_id = record.health_check_id
idx = record.idx
set_identifier = record.set_identifier
weight = record.weight
failover = record.failover
}
}
records = local.skip_zone_creation ? local.records_by_zone_id : local.records_by_name
}
# ------------------------------------------------------------------------------
# Attach the records to our created zone(s)
# ------------------------------------------------------------------------------
resource "aws_route53_record" "record" {
for_each = var.module_enabled ? local.records : {}
zone_id = each.value.zone_id
type = each.value.type
name = each.value.name
allow_overwrite = each.value.allow_overwrite
health_check_id = each.value.health_check_id
set_identifier = each.value.set_identifier
# only set default TTL when not set and not alias record
ttl = each.value.ttl == null && each.value.alias.name == null ? var.default_ttl : each.value.ttl
# split TXT records at 255 chars to support >255 char records
records = can(var.records[each.value.idx].records) ? [for r in var.records[each.value.idx].records :
each.value.type == "TXT" && length(regexall("(\\\"\\\")", r)) == 0 ?
join("\"\"", compact(split("{SPLITHERE}", replace(r, "/(.{255})/", "$1{SPLITHERE}")))) : r
] : null
dynamic "weighted_routing_policy" {
for_each = each.value.weight == null ? [] : [each.value.weight]
content {
weight = weighted_routing_policy.value
}
}
dynamic "failover_routing_policy" {
for_each = each.value.failover == null ? [] : [each.value.failover]
content {
type = failover_routing_policy.value
}
}
dynamic "alias" {
for_each = each.value.alias.name == null ? [] : [each.value.alias]
content {
name = alias.value.name
zone_id = alias.value.zone_id
evaluate_target_health = alias.value.evaluate_target_health
}
}
depends_on = [var.module_depends_on]
}
Benover 4 years ago
Hey there, I’m pretty new to k8s and AWS and I’m wondering if it is a good idea to use AWS services via the AWS Service Broker. The project and its docs seem a little bit outdated, so I’m afraid I’d be flogging a dead horse…
curious deviantover 4 years ago
Hi
I have a requirement to re-encrypt traffic from alb to backend hosts using a 3rd party cert. The traffic from client to alb is already encrypted using ACM cert. Would appreciate pointers to any reference docs and/or suggestions to enable this. Thanks !
Nikola Milicover 4 years ago
I’m using AWS ECS with a load balancer in front of my services. Now I want to restrict access to the Mongo Atlas database (whitelist) to certain IP addresses. How do I find out the external addresses of my backend services that are running on ECS? Should that be the IP address of the load balancer? Mongo Atlas whitelisting only supports IP addresses or CIDR notation; I can’t put DNS names there.
Tomekover 4 years ago(edited)
👋 I’m trying to allow AWSAccountA to send EventBridge events to AWSAccountB. In AWSAccountB I can get this to work correctly with a * principal:
principals {
  type        = "*"
  identifiers = ["*"]
}
however, the following principal doesn’t work:
principals {
  type        = "Service"
  identifiers = ["events.amazonaws.com"]
}
Is this because a cross-account EventBridge message isn’t marked as an "events.amazonaws.com" service principal? (it seems like I’d want to avoid anonymous principals)
Davidover 4 years ago
Is there a standard way to share private Docker images with external customers?
I have a Docker registry on ECR with images that external customers can access because we allowlist their AWS account IDs. If we have a customer who wanted to access the image without AWS (like through Google Cloud), how could we make that happen while not making our registry public?
Zachover 4 years ago
FYI if you use gimme-aws-creds, okta changed something recently that has apparently been slowly rolling out to customers, and it totally breaks gimme-aws-creds. A hotfix release was made yesterday to gimme-aws-creds to account for the change