22 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
jaysun over 2 years ago
How are you all handling IAM roles/permissions in AWS SSO? We're moving to it, and the introduction of permission sets is making me ponder the best way to architect the whole IAM lifecycle.
1. What's the best way to assign a team/group to a role in an account?
2. What about the ad-hoc cases where a permission set is too permissive or restrictive?
Sairam Madichetty over 2 years ago
Hi everyone!
Has anyone worked with Grafana Mimir as a Prometheus HA solution?
I'm stuck on the configuration of Grafana Mimir, Prometheus, and S3 integration.
Aritra Banerjee over 2 years ago
Hi everyone, has anyone implemented Palo Alto Firewall in AWS? We have to implement it, and I wanted to know how difficult it is to do using a Gateway Load Balancer. We only have a single VPC where everything is present.
rohit over 2 years ago
Has anyone ever deployed a complete "product" on top of EKS/K8s in a customer account/environment? It's something my company is working on; historically we used a bunch of tf and scripts, but that doesn't seem feasible given the assurances we need.
susie-h over 2 years ago
Question here - how can I reference the IAM role created by this module in the API GW I create using this module? Terraform has an argument for cloud_watch_role_arn in their resource api-gw-account, but I can't see how to do that with the cp module. Thanks in advance!
venkata over 2 years ago (edited)
Is there a hard limit to how many AWS accounts you can have in an organization, or can you just keep asking for more via service quota requests?
Dexter Cariño over 2 years ago
Hi fellas, a question regarding Aurora MySQL auto scaling. I tested it and would like to be enlightened about the endpoints:
What part of the documentation supports using only the cluster endpoint?
With auto scaling, does AWS handle the distribution through just one reader endpoint?
PePe Amengual over 2 years ago
Is it true that when Control Tower is enabled, AWS activates throttling on certain APIs that could affect Terraform runs?
Muhammad Taqi over 2 years ago
Hello guys, I'm using Terraform to create an S3 private bucket with an IAM user & keys to access that bucket. Here is my Terraform code.
module "s3_private_bucket" { # renamed from "s3_bucket" so the references below resolve
  source  = "cloudposse/s3-bucket/aws"
  version = "4.0.0"

  name               = local.bucket_name
  acl                = "private"
  enabled            = true
  user_enabled       = true
  force_destroy      = true
  versioning_enabled = true
  sse_algorithm      = "AES256"

  block_public_acls            = true
  block_public_policy          = true
  ignore_public_acls           = true
  allow_encrypted_uploads_only = true
  allow_ssl_requests_only      = true

  cors_configuration = [
    {
      allowed_origins = ["*"]
      allowed_methods = ["GET", "PUT", "POST", "HEAD", "DELETE"]
      allowed_headers = ["Authorization"]
      expose_headers  = []
      max_age_seconds = 3000
    }
  ]

  allowed_bucket_actions        = ["s3:*"]
  lifecycle_configuration_rules = []
}
resource "aws_secretsmanager_secret" "s3_private_bucket_secret" {
  depends_on              = [module.s3_private_bucket]
  name                    = join("", [local.bucket_name, "-", "secret"])
  recovery_window_in_days = 0
}
resource "aws_secretsmanager_secret_version" "s3_private_bucket_secret_credentials" {
  depends_on = [module.s3_private_bucket]
  secret_id  = aws_secretsmanager_secret.s3_private_bucket_secret.id
  secret_string = jsonencode({
    KEY    = module.s3_private_bucket.access_key_id
    SECRET = module.s3_private_bucket.secret_access_key
    REGION = module.s3_private_bucket.bucket_region
    BUCKET = module.s3_private_bucket.bucket_id
  })
}
After running the above code, I can see a new user has been created in IAM with name x-rc-bucket, with an access key and secret matching what's stored in Secrets Manager, and the following policy attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::x-rc-bucket/*",
        "arn:aws:s3:::x-rc-bucket"
      ]
    }
  ]
}
Then I have a simple Python script which tries to upload a file to the S3 bucket using the keys from the above secret:
import os

import boto3

image = "x.jpg"
s3_filestore_path = "images/x.jpg"
filename, file_extension = os.path.splitext(image)
content_type_dict = {
    ".png": "image/png",
    ".html": "text/html",
    ".css": "text/css",
    ".js": "application/javascript",
    ".jpg": "image/jpeg",  # was "image/png", a typo
    ".gif": "image/gif",
    ".jpeg": "image/jpeg",
}
content_type = content_type_dict[file_extension]
s3 = boto3.client(
    "s3",
    config=boto3.session.Config(signature_version="s3v4"),
    region_name="eu-west-3",
    aws_access_key_id="**",
    aws_secret_access_key="**",
)
s3.put_object(
    Body=image,  # note: this uploads the literal string "x.jpg"; use open(image, "rb") for the file contents
    Bucket="x-rc-bucket",
    Key=s3_filestore_path,
    ContentType=content_type,
)
It throws an error:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied.
What I'm looking for is that every bucket should have its own keys and be accessible only with those specific keys.
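A likely culprit worth checking here (an assumption, not confirmed in the thread): with allow_encrypted_uploads_only = true, the cloudposse module attaches a bucket policy that denies PutObject requests lacking the x-amz-server-side-encryption header, and that deny surfaces as a generic AccessDenied even when the IAM keys are correct. A minimal sketch of building the put_object arguments with that header set; the bucket and key names are just the ones from the question:

```python
import mimetypes


def build_put_object_args(bucket, key, body, sse="AES256"):
    """Build kwargs for s3.put_object, guessing the content type from the
    key and setting the SSE header that a bucket policy enforcing
    encrypted uploads (allow_encrypted_uploads_only) expects."""
    content_type, _ = mimetypes.guess_type(key)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ContentType": content_type or "application/octet-stream",
        # Without this header, an encrypted-uploads-only bucket policy denies PutObject.
        "ServerSideEncryption": sse,
    }


# Usage with a boto3 client (credentials loaded from Secrets Manager, not hardcoded):
# s3.put_object(**build_put_object_args("x-rc-bucket", "images/x.jpg", open("x.jpg", "rb")))
```

If the upload still fails with the header set, comparing the generated bucket policy against the IAM user policy would be the next step.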
PePe Amengual over 2 years ago (edited)
Another rumor: is it true AWS might deprecate Beanstalk in the future? 🧵
Rahul Reddy Anugu over 2 years ago
Hello everyone,
I write technical articles. Here is my article on Foundation Models on AWS Bedrock. Please give it a read; I am open to feedback.
I am planning more articles in the AWS Bedrock series.
https://medium.com/@techcontentspecialist/unlock-the-power-of-ai-foundation-models-with-amazon-bedrock-4937d30bc925
Wojciech Pietrzak over 2 years ago
Hello,
My team was tasked with getting a security audit of our cloud infrastructure setup.
If you had to choose a company to perform the audit, what criteria would you base the choice on?
Would somebody share their experience on this topic?
rohit over 2 years ago
Trying to understand: has anyone deployed helm charts (a bunch of them) within a CloudFormation stack? How does that work? Does it wait for an EKS cluster to be spun up before deploying the helm charts sequentially?
Sairam Madichetty over 2 years ago (edited)
Hi guys,
Is this design possible? For cross accounts:
Infra account - CodeCommit, CodePipeline, S3, KMS.
Dev account - CodeDeploy
Prod account - CodeDeploy
In CodePipeline, can we use CodeCommit of the same account as the source and CodeDeploy of another account as the target?
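For what it's worth, this layout matches a documented CodePipeline cross-account pattern, but three pieces are generally needed: a customer-managed KMS key on the artifact store, an artifact-bucket policy granting the dev/prod accounts read access, and a role in each target account that the deploy action assumes. A rough sketch of such a deploy action's configuration as a Python dict in CloudFormation-style keys; all account IDs, role names, and application names below are placeholders:

```python
def cross_account_deploy_action(target_account, app, group):
    """Sketch of a CodeDeploy action that runs in another account.

    The essential field is RoleArn: a role that lives in the *target*
    account and that the pipeline assumes for this action. The artifact
    bucket policy and KMS key policy must also grant that account access.
    """
    return {
        "Name": f"DeployTo{target_account}",
        "ActionTypeId": {
            "Category": "Deploy",
            "Owner": "AWS",
            "Provider": "CodeDeploy",
            "Version": "1",
        },
        "Configuration": {
            "ApplicationName": app,
            "DeploymentGroupName": group,
        },
        # Placeholder role name in the target (dev/prod) account.
        "RoleArn": f"arn:aws:iam::{target_account}:role/cross-account-codedeploy",
        "InputArtifacts": [{"Name": "BuildOutput"}],
    }
```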
jonjitsu over 2 years ago
Any opinions on resource naming conventions that put the resource type in the name? e.g. https://cloud.google.com/architecture/security-foundations/using-example-terraform#naming_conventions I'm not sure of the logic of doing that, besides perhaps that when looking at logs the name already tells you the type.
PePe Amengual over 2 years ago (edited)
I'm about to create my Org in AWS using TF; the module is ready and all, and I will be using the advanced organization structure, but I was wondering about my state. In which account should I put it? The management account? 🧵
TechHippie over 2 years ago (edited)
Hello - I am an EKS novice, so forgive me if my question is pretty basic. I am writing Terraform code to create an EKS cluster and node group. In addition, I also want to create 3 cluster roles (deployer, administrator, and developer), mapping them to IAM roles. Can anyone help me with how to create the roles and configure the mapping to IAM roles/users?
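One common approach (an assumption; there are others, such as EKS access entries) is to map each IAM role to a Kubernetes group via the aws-auth ConfigMap's mapRoles entries, then bind those groups to ClusterRoles with ClusterRoleBindings. A sketch of the mapRoles data built in Python for illustration; the account ID, role names, and group names are all placeholders:

```python
# Hypothetical account ID; the role names are placeholders for the three
# IAM roles (administrator/deployer/developer) from the question.
ACCOUNT_ID = "111122223333"


def map_role(role_name, groups):
    """Build one aws-auth mapRoles entry mapping an IAM role to k8s groups."""
    return {
        "rolearn": f"arn:aws:iam::{ACCOUNT_ID}:role/{role_name}",
        "username": f"{role_name}:{{{{SessionName}}}}",  # literal {{SessionName}} template
        "groups": groups,
    }


# Groups other than system:masters are then referenced by ClusterRoleBindings
# that point at your deployer/developer ClusterRoles.
map_roles = [
    map_role("eks-administrator", ["system:masters"]),
    map_role("eks-deployer", ["deployers"]),
    map_role("eks-developer", ["developers"]),
]
```

In Terraform this data would typically be fed to the aws-auth ConfigMap (e.g. via a kubernetes_config_map resource or an EKS module's aws-auth input), alongside kubernetes_cluster_role and kubernetes_cluster_role_binding resources for the custom groups.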
ghostface over 2 years ago
Can an internal API Gateway be reached from a VPC in another account?
Jim Park over 2 years ago
Finally, AWS ECR has announced support for remote caching using buildkit.
Here's an example from the announcement:
docker build -t <account-id>.dkr.ecr.<my-region>.amazonaws.com/buildkit-test:image \
  --cache-to mode=max,image-manifest=true,oci-mediatypes=true,type=registry,ref=<account-id>.dkr.ecr.<my-region>.amazonaws.com/buildkit-test:cache \
  --cache-from type=registry,ref=<account-id>.dkr.ecr.<my-region>.amazonaws.com/buildkit-test:cache .
docker push <account-id>.dkr.ecr.<my-region>.amazonaws.com/buildkit-test:image
The feature was introduced in buildkit v0.12. The key syntax is image-manifest=true,oci-mediatypes=true
May your builds be speedy and true!
Mannan Bhuiyan over 2 years ago
Hello guys, can anyone help me find a solution for how to monitor disk usage of an ECS cluster and set a CloudWatch alarm, or create an alert, when the disk reaches 70 GB or is 70% full?
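For an EC2-backed cluster, one option (an assumption about the setup; the exact metric and dimensions depend on how the agent is configured) is to run the CloudWatch Agent on the container instances so it publishes disk_used_percent into the CWAgent namespace, then alarm on that metric. A sketch of the put_metric_alarm parameters, with placeholder names:

```python
def disk_alarm_params(alarm_name, autoscaling_group, threshold_pct=70):
    """Build kwargs for cloudwatch.put_metric_alarm on the CloudWatch
    Agent's disk_used_percent metric; ASG name and path are placeholders."""
    return {
        "AlarmName": alarm_name,
        "Namespace": "CWAgent",
        "MetricName": "disk_used_percent",
        "Dimensions": [
            {"Name": "AutoScalingGroupName", "Value": autoscaling_group},
            {"Name": "path", "Value": "/"},
        ],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold_pct,  # fire at 70% used, per the question
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    }


# boto3.client("cloudwatch").put_metric_alarm(**disk_alarm_params("ecs-disk-70", "my-ecs-asg"))
```

An SNS topic in AlarmActions (omitted here) would turn the alarm into an actual alert.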
Benedikt Dollinger (XALT) over 2 years ago
Join Us for a Platform Engineering Webinar!
Hey everyone!
We're excited to invite you to our upcoming webinar on Platform Engineering, featuring insights from one of our valued customers. This session will guide you through the process of creating AWS accounts swiftly using Jira Service Management and a Developer Self-Service, empowering you to unleash the full potential of your AWS Cloud Infrastructure.
Date: Friday, 17th November 2023
Time: 10:00 AM CET
Location: Live & Online
What you will learn:
Set up AWS Infrastructure in minutes with JSM Cloud & Developer Self-Service
Navigate our seamless account creation process within JSM
Experience the efficiency of approvals for a streamlined workflow
Explore the comprehensive account catalog in Asset Management
Leverage AWS & JSM for enhanced cost efficiency, speed, security, and compliance through our developer self-service
Don't miss this opportunity to supercharge your AWS Cloud Infrastructure deployment!
Save your spot: Platform Engineering Webinar Registration
See you there!
TEAM XALT
TechHippie over 2 years ago
Hello team - is anyone aware of any Terraform code or module that could create an AWS LB or nginx LB based on user input? Any guidance with one or the other would be really helpful.