27 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Matt Gowie · about 3 years ago
Anyone here have success with AWS SSO account delegation? I heard it was buggy when it was first released and I'm wondering if that is still the case.
bradym · about 3 years ago (edited)
I'm using v2 of the aws golang sdk and as someone still relatively new to golang, I'm still trying to wrap my head around real-world uses of context. I've not been able to find any example code that does anything with the context object that is required for everything now. Anyone have sample code or use cases they could share with me?
Michael Taquía · about 3 years ago
Help writing pytest to test a method in CDK:
Hello everyone, I am working with this lab (https://cdkworkshop.com/).
Because of coverage, I need to write a test for the method table_name.
class HitCounter(Construct):
    @property
    def handler(self):
        return self._myhandler

    @property
    def table(self):
        return self._table

    @property
    def table_name(self):
        return self._table.table_name

    def __init__(
        self,
        scope: Construct,
        id: str,
        downstream: _lambda.IFunction,
        read_capacity: int = 5,  # not defined in the original snippet; default assumed
        **kwargs
    ) -> None:
        super().__init__(scope, id, **kwargs)  # missing in the snippet; required to register the construct
        # expose our table to be consumed in workshop01_stack.py
        self._table = ddb.Table(
            self,
            "Hits",
            partition_key={"name": "path", "type": ddb.AttributeType.STRING},
            encryption=ddb.TableEncryption.AWS_MANAGED,  # test_dynamodb_with_encryption()
            read_capacity=read_capacity,
            removal_policy=RemovalPolicy.DESTROY,
        )
I tried something like this but it is failing, any example or help please?
def test_dynamodb_tablename():
    stack = Stack()
    myhit = HitCounter(
        stack,
        "HitCounter",
        downstream=_lambda.Function(
            stack,
            "TestFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="hello.handler",
            code=_lambda.Code.from_asset("lambda_cdk"),
        ),
    )
    assert myhit.table_name == "mytable"
Error (it returns a TOKEN and does not compare):
> assert myhit.table_name == "mytable"
E AssertionError: assert '${Token[TOKEN.825]}' == 'mytable'
E - mytable
E + ${Token[TOKEN.825]}
tests/unit/test_cdk_workshop.py:120: AssertionError
Manvinder · about 3 years ago
I'm using ECS for orchestration of services, with EC2 as the capacity provider. My images are in a different AWS account and I'm not able to fetch the Docker image. Please help me figure out a solution.
#!/bin/bash
sudo yum install -y aws-cli
aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin ACCOUNTID.dkr.ecr.ap-south-1.amazonaws.com
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=qa
ECS_ENABLE_SPOT_INSTANCE_DRAINING=true
EOF
sudo stop ecs && sudo start ecs
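Cross-account pulls generally need two things: the instance/task execution role must be allowed to call ECR, and the repository in the image-owning account needs a repository policy granting the workload account. A sketch of the latter (the account ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPullFromWorkloadAccount",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```

With a policy like this in place, the ECS agent can usually authenticate with the instance role's credentials, so the `docker login` in user data may not be needed at all.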
Balazs Varga · about 3 years ago
hello all, I see this in our RDS:
oscar_oom.cc
Anyone have a fix for this? v1 serverless.
Kirupa Karan · about 3 years ago
Hi all, we're facing nginx 504 errors frequently. After changing proxy_connection_timeout 600; it still doesn't work, please advise.
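For upstream 504s, the relevant nginx settings are usually the proxy timeout directives (note that `proxy_connection_timeout` does not appear to be a standard directive). A sketch of the standard ones, with a hypothetical upstream:

```nginx
location / {
    proxy_pass http://upstream_app;   # hypothetical upstream name

    proxy_connect_timeout 60s;   # time allowed to establish the upstream connection
    proxy_send_timeout    600s;  # time between successive writes to the upstream
    proxy_read_timeout    600s;  # time between successive reads; usually what triggers a 504
}
```

If the 504 persists with generous timeouts, the upstream itself (application, load balancer idle timeout) is often the real bottleneck.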
Vinko Vrsalovic · about 3 years ago
I remember that there were some Windows desktop tools to upload files to S3. I'm looking for a solution to give a non-technical user. My idea would be to grant them a read-write key on a specific bucket (and/or folder) plus one of these tools so they can do it. What options are there?
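Whatever the tool, the key can be scoped with an IAM policy like this (a sketch; the bucket name and `uploads/` prefix are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {"StringLike": {"s3:prefix": ["uploads/*"]}}
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-bucket/uploads/*"
    }
  ]
}
```

This limits the user to listing and reading/writing objects under the one prefix, so a leaked key exposes only that folder.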
loren · about 3 years ago
huh, this is new to me, though i see it was released in nov 2022... cross-account visibility for cloudwatch logs? anyone use this yet?
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html
bradym · about 3 years ago
Anyone using ssm parameter policies? In the cli help for
ExpirationNotification and NoChangeNotification it states that a cloudwatch event will be created. Then at https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-policies.html it states that an EventBridge event will be created. So.. which is it? Or are they both somehow correct? I've never used EventBridge so I'm
OliverS · about 3 years ago
Any advice on moving an S3 bucket from one region to another? I've used S3 Batch Replication before, but I'm wondering if there is something cheaper (and therefore likely slower, but that might be fine).
PePe Amengual · about 3 years ago
what was that SSM ssh client interactive utility that was mentioned here before?
Vinko Vrsalovic · about 3 years ago
Do you use any tool to discover how much things are exactly costing? I'm trying to use Cost Explorer and I can't reliably get the data I want, which is basically: give me all charges you are making, detailed by item.
Steve Wade (swade1987) · about 3 years ago
does anyone know a fast way to delete all the objects in an S3 bucket? I'm wanting to delete the contents of our CloudTrail S3 bucket but it's taking literally forever to delete them via the CLI
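One common fast path is an S3 lifecycle rule that expires everything and lets S3 delete in the background, which also avoids per-request DeleteObject calls (a sketch; the rule ID is arbitrary):

```json
{
  "Rules": [
    {
      "ID": "expire-everything",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 1},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
    }
  ]
}
```

Lifecycle rules are evaluated roughly once a day, so deletion is not immediate; on a versioned bucket a NoncurrentVersionExpiration action is also needed before the bucket is truly empty.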
Steve Wade (swade1987) · about 3 years ago
does anyone happen to know how quickly an s3 lifecycle kicks in after being changed?
el · about 3 years ago
hey all 👋 anyone have good resources on setting up AWS SSO & cross-account permissions? specifically looking for "do it this way" or "here are some things to consider"
Steve Wade (swade1987) · almost 3 years ago
hey all 👋 is it recommended to have a single SNS topic for all CloudWatch-style events (at the account level)?
Steve Wade (swade1987) · almost 3 years ago
i want to start sending events to Slack for GuardDuty, Security Hub, RDS and other services
Ben · almost 3 years ago
Hi there 🙋
I have an issue with max pods limit in our EKS cluster. We created it using tf module “cloudposse/eks-cluster/aws”.
Not sure if this belongs rather to #terraform than to #aws ...
I found this thread https://sweetops.slack.com/archives/CB6GHNLG0/p1658754337537289, but wasn’t able to figure out what I might do.
Our cluster (dev cluster) uses t3a.small instances as worker nodes, each of which has a pod capacity of only 8.
So I read about the CNI add-on, added it (manually using the AWS console), set ENABLE_PREFIX_DELEGATION=true and WARM_PREFIX_TARGET=1.
Increased the node count (manually, again), so a new worker node was added. But also that new node still has this 8 pods limit.
Any hints what else I might check?
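For reference, a sketch of the VPC CNI pod-capacity arithmetic (the t3a.small values of 2 ENIs and 4 IPv4 addresses per ENI are taken from the EC2 ENI table): enabling prefix delegation raises the theoretical limit, but kubelet's `--max-pods` is fixed at node launch, so nodes keep the old limit until they are bootstrapped with a recomputed value (e.g. via a launch template that runs AWS's max-pods calculator).

```python
def max_pods(enis: int, ips_per_eni: int, prefix_delegation: bool = False) -> int:
    """AWS VPC CNI pod-capacity formula.

    Without prefix delegation each ENI slot holds one secondary IP;
    with it, each slot holds a /28 prefix (16 addresses). The +2
    accounts for host-networking pods (aws-node, kube-proxy).
    """
    ips_per_slot = 16 if prefix_delegation else 1
    capacity = enis * (ips_per_eni - 1) * ips_per_slot + 2
    # AWS's calculator recommends capping at 110 pods for instances with < 30 vCPUs
    return min(capacity, 110) if prefix_delegation else capacity


# t3a.small: 2 ENIs, 4 IPv4 addresses per ENI
print(max_pods(2, 4))        # 8  -- matches the observed limit
print(max_pods(2, 4, True))  # 98 -- after prefix delegation AND a max-pods update
```

So setting ENABLE_PREFIX_DELEGATION alone is not enough: the new node still registered with max-pods 8 because its kubelet was never told otherwise.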
Zach B · almost 3 years ago
Hi guys, the latest release of the terraform-aws-alb module (v1.7.0) uses an alb_access_logs_s3_bucket_force_destroy_enabled variable, as seen here: https://github.com/cloudposse/terraform-aws-alb/commit/43aa53c533bef8e269620e8f52a99f1bac9554a0#diff-05b5a57c136b6ff5965[…]84d9daa9a65a288eL209-L221
The sole purpose of this variable appears to be to be passed down to the terraform-aws-lb-s3-bucket module (as seen here: https://github.com/cloudposse/terraform-aws-alb/commit/43aa53c533bef8e269620e8f52a99f1bac9554a0#diff-dc46acf24afd63ef8c5[…]aa09a931f33d9bf2532fbbL50).
The terraform-aws-lb-s3-bucket module previously used this variable in an earlier release, but no longer does (removed in v0.16.3, as seen here: https://github.com/cloudposse/terraform-aws-lb-s3-bucket/commit/876918fac8682a1b9032cecc61496e61587f6857#diff-05b5a57c136b6ff[…]9daa9a65a288eL17-L30).
While it's technically valid since it's pinned to the earlier release of the terraform-aws-lb-s3-bucket module (v0.16.0), it was quite confusing to understand the purpose of this variable vs. the very similar alb_access_logs_s3_bucket_force_destroy (same variable name without the "enabled" suffix).
It appears there was some sort of possible breaking issue associated with this: https://github.com/cloudposse/terraform-aws-s3-log-storage/wiki/Upgrading-to-v0.27.0-(POTENTIAL-DATA-LOSS)
Is there any reason this can't be updated to v0.16.3 in the next release of terraform-aws-alb to avoid this confusion?
Michael Liu · almost 3 years ago
Anyone know if you can use terraform (module or resource) to customize the AWS access portal URL? By default, you can access the AWS access portal by using a URL that follows this format:
d-xxxxxxxxxx.awsapps.com/start. But I want to customize it with TF.
Vinko Vrsalovic · almost 3 years ago
Here I am on my crusade against extra AWS costs: is there a centralized place to disable CloudWatch metrics from being collected, or do you need to go service by service and disable them there?
Erik Weber · almost 3 years ago
Does anyone have experience with ADOT (AWS Distro for OpenTelemetry)? Specifically I am investigating if it is possible to use cross account, or if each account needs to have its own instance
akhan4u · almost 3 years ago
Is there any way to figure out the RDS postgres DB parameters
apply_type at runtime? We are using the official AWS RDS module.
Michael Liu · almost 3 years ago (edited)
When using this EKS module (terraform-aws-modules/eks/aws) you need to specify a kubernetes provider in order to modify the aws-auth configmap. I'm trying to create a kubernetes provider with an alias, but I'm not sure how to pass the alias information to the EKS module. Anyone know how to do this?
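One pattern is to declare the aliased provider and hand it to the module through a `providers` map, which binds the module's default `kubernetes` provider to your alias (a sketch; the data source names and module inputs are placeholders):

```hcl
provider "kubernetes" {
  alias = "eks"

  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster inputs ...

  providers = {
    kubernetes = kubernetes.eks
  }
}
```

Inside the module, every `kubernetes` resource (including the aws-auth configmap) then uses the aliased provider.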
managedkaos · almost 3 years ago (edited)
Do you have an example of or a reference for IAM policies that:
1. allows an IAM user to create/update/delete CloudFormation stacks
2. allows an IAM user to pass a service role to CloudFormation so it can create resources
3. limits the service role to a few types, ie SG, EC2, and RDS only
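A sketch covering points 1 and 2 (the account ID and role name are placeholders); point 3 is usually enforced by what you attach to the service role itself (only EC2/SG/RDS permissions), since once CloudFormation assumes the role the user's own permissions no longer apply:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageStacks",
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStack",
        "cloudformation:UpdateStack",
        "cloudformation:DeleteStack",
        "cloudformation:DescribeStacks"
      ],
      "Resource": "*"
    },
    {
      "Sid": "PassOnlyTheCfnServiceRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::111122223333:role/cfn-service-role",
      "Condition": {
        "StringEquals": {"iam:PassedToService": "cloudformation.amazonaws.com"}
      }
    }
  ]
}
```

Scoping `iam:PassRole` to the one role (and to CloudFormation via `iam:PassedToService`) keeps the user from passing more privileged roles to the service.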
Alex Atkinson · almost 3 years ago
Does AppSync not support Automatic Persisted Queries? I went with Apollo GraphQL on Lambda years ago because of this, but I could swear I heard of an AppSync v2 that supported APQ?
OliverS · almost 3 years ago (edited)
Any recommendations for further protecting an EKS cluster API server that has to be accessible to a cloud CI/CD's deployers (which deploy containers into the cluster), beyond what is offered via RBAC? IP filtering is easy to set up and has worked well, but I don't like the idea of relying on a list of IP addresses that could change anytime without notice (although this has yet to happen with the CI/CD provider we are using). Are there better options? E.g. OIDC, but I'm not finding a way to configure an EKS cluster to support that auth mechanism.