Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
SA · over 2 years ago (edited)
Hello all - I have a question. I have a bucket in the AWS console, and I configured it in Terraform. The bucket has storage class = GLACIER, and I want to update it to INTELLIGENT_TIERING. Can we just replace it, or do we need to add an additional resource with storage class = INTELLIGENT_TIERING (which is better practice)? And if replaced, will it affect the already stored data? TIA
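Worth noting before any rewrite: S3 lifecycle transitions generally move objects toward colder classes only, so objects already in GLACIER typically stay there until restored and re-copied; swapping the storage class in the Terraform config changes behavior going forward rather than rewriting stored data. To see what a change would actually touch, a minimal boto3 sketch (bucket name is hypothetical):

import boto3
from collections import Counter

s3 = boto3.client("s3")
classes = Counter()

# Tally the storage class of every object (bucket name is a placeholder).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket"):
    for obj in page.get("Contents", []):
        classes[obj.get("StorageClass", "STANDARD")] += 1

print(dict(classes))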
PePe Amengual · over 2 years ago (edited)
Has anyone here worked with Nitro Enclaves? 🧵
TechHippie · over 2 years ago (edited)
Hello All - In EKS, can a ClusterRoleBinding bind a k8s role to an IAM role/user, or does it have to be a k8s group/user?
rohit · over 2 years ago
Does anyone have experience with AWS CDK (preferably in python)? I have a small issue that I have been stuck on for an hour now. I am letting a user pass in a CFN parameter (when the template is synthesized) and one of those parameters is a list of subnet ids.
In eks_stack.py:
subnet_ids_param = CfnParameter(
self,
"SubnetIds",
type="List<AWS::EC2::Subnet::Id>",
description="The list of IDs of the private isolated subnets",
allowed_pattern="^(subnet-[0-9a-f]{8,},)*subnet-[0-9a-f]{8,}$",
constraint_description="must be a valid list of subnet IDs.",
)
subnet_ids = subnet_ids_param.value_as_list
vpc_subnets = ec2.SubnetSelection(subnets=subnet_ids)  # <--- where I am having an error

Error:
TypeError: type of argument vpc_subnets must be one of (Sequence[Union[aws_cdk.aws_ec2.SubnetSelection, Dict[str, Any]]], NoneType); got aws_cdk.aws_ec2.SubnetSelection instead
Darren Cunningham · over 2 years ago
pretty sure you need something like:
vpc_subnets = ec2.SubnetSelection(subnets=[ec2.Subnet.from_subnet_id(self, f"Subnet{i}", subnet_id) for i, subnet_id in enumerate(subnet_ids)])
rohit · over 2 years ago
I did try that, to pass a sequence of ec2.SubnetSelection objects, let me see...
rohit · over 2 years ago
Here's the types from my execution:
------------------------------------------------------------
subnet_ids: ['#{Token[TOKEN.837]}']
type for subnet_ids: <class 'list'>
vpc_subnets: SubnetSelection(subnets=['#{Token[TOKEN.837]}'])
type for vpc_subnets: <class 'aws_cdk.aws_ec2.SubnetSelection'>
------------------------------------------------------------
Here's the types from your modified edit:
------------------------------------------------------------
subnet_ids: ['#{Token[TOKEN.836]}']
type for subnet_ids: <class 'list'>
vpc_subnets: SubnetSelection(subnets=[<jsii._reference_map.InterfaceDynamicProxy object at 0x7fd6db178e10>])
type for vpc_subnets: <class 'aws_cdk.aws_ec2.SubnetSelection'>
------------------------------------------------------------
rohit · over 2 years ago (edited)
same error:
File "/home/cdk/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/typeguard/__init__.py", line 558, in check_union
raise TypeError('type of {} must be one of ({}); got {} instead'.
TypeError: type of argument vpc_subnets must be one of (Sequence[Union[aws_cdk.aws_ec2.SubnetSelection, Dict[str, Any]]], NoneType); got aws_cdk.aws_ec2.SubnetSelection instead
rohit · over 2 years ago
same object types, but with subnets being a different type
This is being passed to the FargateCluster construct
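Judging from the typeguard message, vpc_subnets on the EKS cluster constructs expects a sequence of SubnetSelection objects rather than a single one, so combining Darren's conversion with an enclosing list may resolve it. A sketch under that assumption (construct id, version, and vpc are illustrative; subnet_ids comes from the snippet above):

from aws_cdk import aws_ec2 as ec2, aws_eks as eks

# Resolve the CfnParameter's subnet IDs into ISubnet objects.
subnets = [
    ec2.Subnet.from_subnet_id(self, f"Subnet{i}", subnet_id)
    for i, subnet_id in enumerate(subnet_ids)
]

cluster = eks.FargateCluster(
    self,
    "FargateCluster",                     # illustrative construct id
    version=eks.KubernetesVersion.V1_27,  # assumption: any supported version
    vpc=vpc,                              # assumption: an existing ec2.IVpc
    # Note the enclosing list: Sequence[SubnetSelection], not a bare SubnetSelection.
    vpc_subnets=[ec2.SubnetSelection(subnets=subnets)],
)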
Leo · over 2 years ago
Hello, does anybody know whether there's anything I can run on a machine on-prem to check whether it's accessing S3 via Direct Connect and not the public internet?
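One low-tech check, assuming a public VIF is advertising the S3 prefixes: resolve the bucket's S3 endpoint and traceroute to it; if the first hops belong to the Direct Connect router rather than the ISP, the traffic is taking the DX path. A small sketch (endpoint is hypothetical, and traceroute must be installed):

import socket
import subprocess

# Hypothetical bucket endpoint; substitute your bucket and region.
endpoint = "my-bucket.s3.us-east-1.amazonaws.com"
ip = socket.gethostbyname(endpoint)
print(f"{endpoint} resolves to {ip}")

# Inspect the hops: a DX path shows your DX router early in the trace,
# while an internet path shows your ISP's routers instead.
subprocess.run(["traceroute", "-n", ip])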
Balazs Varga · over 2 years ago
hello all, we would like to start using ECR, but my manager has a question:
is private ECR data transfer out free as long as it stays in the same region, and is it unlimited?
Muhammad AbuBaker · over 2 years ago (edited)
I am facing an issue with my Laravel application deployment on AWS ECS.
The deployment process involves Jenkins, AWS ECR, and ECS.
The new task is created, but there's an "Access Denied" error connecting to the RDS database.
Due to this issue, after some time the service is automatically deleted.
I have provided my deployment files for reference.
JenkinsFile:
pipeline {
    agent any
    environment {
        AWS_ACCOUNT_ID="794664785634"
        AWS_DEFAULT_REGION="us-east-1"
        IMAGE_REPO_NAME="product-mangement"
        IMAGE_TAG="${BUILD_NUMBER}"
        REPOSITORY_URI = "794664785634.dkr.ecr.us-east-1.amazonaws.com/product-mangement"
        ECS_CLUSTER = "product-mangement"
        ECS_SERVICE = "product-mangement"
    }
    stages {
        stage('Checkout Latest Source') {
            steps {
                git branch: 'master',
                    url: 'https://github.com/jhon-123/product-mangement',
                    credentialsId: 'jenkins_pta'
            }
        }
        stage('Logging into AWS ECR') {
            steps {
                script {
                    sh """aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"""
                }
            }
        }
        // Building Docker images
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build "${IMAGE_REPO_NAME}:${IMAGE_TAG}"
                }
            }
        }
        // Uploading Docker images into AWS ECR
        stage('Pushing to ECR') {
            steps {
                script {
                    sh """docker tag ${IMAGE_REPO_NAME}:${IMAGE_TAG} ${REPOSITORY_URI}:$IMAGE_TAG"""
                    sh """docker push ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_REPO_NAME}:${IMAGE_TAG}"""
                }
            }
        }
        stage('Deploy to ECS') {
            steps {
                sh "aws ecs update-service --cluster ${ECS_CLUSTER} --service ${ECS_SERVICE} --force-new-deployment"
            }
        }
    }
}
Dockerfile:
# Use the official PHP image as a base
FROM php:8.1-fpm
ENV COMPOSER_ALLOW_SUPERUSER 1
# Arguments defined in docker-compose.yml
ARG user
ARG uid
USER root
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
# RUN useradd -G www-data,root -u $uid -d /home/$user $user
# RUN mkdir -p /home/$user/.composer && \
# chown -R $user:$user /home/$user
# Set the working directory
WORKDIR /var/www
# Copy the project files into the container
COPY . /var/www
# Copy .env.example to .env
COPY .env.prod .env
# Install Composer dependencies
RUN composer install
# Cache configuration
RUN php artisan config:clear
RUN php artisan config:cache
# Generate Laravel application key
RUN php artisan key:generate
# Copy the start script into the container
COPY script.sh /var/www/script.sh
# Make the script executable
RUN chmod +x /var/www/script.sh
# Expose port 8000
EXPOSE 8000
# show message
RUN echo "hello"
# Run the start script as the CMD
CMD ["/var/www/script.sh"]
script.sh:
#!/bin/sh
# Run Laravel migrations
php artisan migrate
# Seed Database
php artisan db:seed
echo "seeded successfully"
# Start the Laravel application
php artisan serve --host=0.0.0.0 --port=8000
Problem:
The new task is created, but there is an "Access Denied" error connecting to the RDS database.
The .env.prod file contains the correct RDS connection details.
.env.prod:
APP_NAME=Laravel
APP_ENV=prod
APP_KEY=base64:LyxaydSCa8HIgUdaLLQCPehtSK2siVr0o+bT6jcXWmM=
APP_DEBUG=false
APP_URL=http://localhost
LOG_CHANNEL=stack
LOG_DEPRECATIONS_CHANNEL=null
LOG_LEVEL=debug
DB_CONNECTION=mysql
DB_HOST=product-management.c7ebhtqyydqk.us-east-1.rds.amazonaws.com
DB_PORT=3306
DB_DATABASE=product-management
#DB_USERNAME=laravel
#DB_PASSWORD=secret
BROADCAST_DRIVER=log
CACHE_DRIVER=file
FILESYSTEM_DRIVER=local
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
SESSION_LIFETIME=120
MEMCACHED_HOST=127.0.0.1
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_MAILER=smtp
MAIL_HOST=mailhog
MAIL_PORT=1025
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
MAIL_FROM_ADDRESS=null
MAIL_FROM_NAME="${APP_NAME}"
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=
AWS_USE_PATH_STYLE_ENDPOINT=false
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
What could be causing the "Access Denied" error in the deployment process, and how can I resolve it? Any insights or suggestions for troubleshooting would be greatly appreciated.
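One detail visible in the .env.prod above: DB_USERNAME and DB_PASSWORD are commented out, so the app is not sending the intended credentials, which is a common cause of MySQL access-denied errors (a security-group problem usually surfaces as a connection timeout instead). A quick connectivity check from any host in the same subnets, assuming the pymysql package and substituting real credentials:

import pymysql  # assumption: pip install pymysql

# Credentials below are placeholders; use the real RDS username/password.
conn = pymysql.connect(
    host="product-management.c7ebhtqyydqk.us-east-1.rds.amazonaws.com",
    port=3306,
    user="laravel",       # currently commented out in .env.prod
    password="secret",    # currently commented out in .env.prod
    database="product-management",
    connect_timeout=5,
)
print("Connected to MySQL", conn.get_server_info())
conn.close()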
TechHippie · over 2 years ago
Hello Team - I am creating a private EKS cluster using Terraform. I am getting an error stating "Error: waiting for EKS Add-On (demcluster: vpc-cni) create: timeout while waiting for state to become 'ACTIVE' (last state: 'CREATING', timeout: 20m)". Has anyone seen this error before? I get the same error for coredns as well. It is in a VPC with no NAT gateway, so I have VPC endpoints for the services as per the documentation (EC2, S3, EKS, ASG, ECR, etc.).
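When an add-on hangs in CREATING like this, the add-on's health field often names the underlying problem (commonly a missing endpoint or a security group blocking the endpoint ENIs). A small boto3 sketch to surface it, using the cluster name from the error and an assumed region:

import boto3

eks = boto3.client("eks", region_name="us-east-1")  # assumption: region

# Print each add-on's status plus any health issues EKS has recorded.
for addon in ("vpc-cni", "coredns"):
    resp = eks.describe_addon(clusterName="demcluster", addonName=addon)
    print(addon, resp["addon"]["status"], resp["addon"].get("health", {}).get("issues", []))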
Josh B. · over 2 years ago
Is there a way to share a transit gateway with another region, or what would be the best approach? Maybe just use peering instead of adding another gateway?
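Transit gateways are regional, and RAM sharing only crosses accounts within a region, so inter-region TGW peering is the usual approach. A hedged boto3 sketch with hypothetical IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # requester side

# Hypothetical IDs: a TGW in us-east-1 peering with a TGW in eu-west-1.
resp = ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    PeerTransitGatewayId="tgw-0fedcba9876543210",
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)
print(resp["TransitGatewayPeeringAttachment"]["State"])

# The peer side then accepts the attachment in its region, and each TGW
# route table needs static routes pointing at the peering attachment.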
Steven Miller · over 2 years ago
Is anyone having an issue with AWS SSO with google workspaces SAML today?
Balazs Varga · about 2 years ago
Using a private ECR repo, I have a VPC (same region) with a transit gateway attachment attached to another VPC (NAT VPC). Is the traffic free, or will it be charged? If charged, how can I pull images for free?
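As a rule of thumb, pulls hairpinned through a TGW attachment and a NAT gateway incur data-processing charges on both; putting ECR interface endpoints and an S3 gateway endpoint in the workload VPC keeps pulls local (the S3 gateway endpoint is free; interface endpoints have an hourly plus per-GB cost that is typically much lower than NAT). A sketch with hypothetical IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: region
vpc_id = "vpc-0123456789abcdef0"                    # hypothetical VPC

# Interface endpoints for the ECR API and the Docker registry endpoint...
for service in ("com.amazonaws.us-east-1.ecr.api",
                "com.amazonaws.us-east-1.ecr.dkr"):
    ec2.create_vpc_endpoint(
        VpcId=vpc_id,
        ServiceName=service,
        VpcEndpointType="Interface",
        SubnetIds=["subnet-0123456789abcdef0"],     # hypothetical subnet
        PrivateDnsEnabled=True,
    )

# ...plus a gateway endpoint for S3, where ECR stores the image layers.
ec2.create_vpc_endpoint(
    VpcId=vpc_id,
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],        # hypothetical route table
)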
PePe Amengual · about 2 years ago
I was following this guide https://docs.aws.amazon.com/singlesignon/latest/userguide/gs-gwp.html and we got it to work, but somehow, we see no groups being synced, only users. Has anyone had a similar issue?
TechHippie · about 2 years ago
Hello Team - I am deploying a private EKS cluster in a private VPC using Terraform. I added the subnet tags "Key – kubernetes.io/role/internal-elb ; Value – 1" and "Key – kubernetes.io/cluster/my-cluster ; Value – shared" as per the documentation. It deploys successfully in my personal account, but in another (dev) account I get a context deadline exceeded error. When I describe the nginx-ingress-controller service, I see an event stating "Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB". When we manually add the service annotation "service.beta.kubernetes.io/aws-load-balancer-internal=true" and run terraform apply, it succeeds without any error. Is the annotation required even for the nginx controller in EKS? Did anyone try to add the annotation using Terraform? I am unable to find the right syntax to set it on the helm_release resource for the nginx-ingress-controller helm chart.
Ihor Urazov · about 2 years ago
https://aws.amazon.com/blogs/aws/amazon-eks-pod-identity-simplifies-iam-permissions-for-applications-on-amazon-eks-clusters/
one screenshot "leaks" upcoming feature that will replace aws-auth configmap
Ihor Urazov · about 2 years ago
Igor M · about 2 years ago
Who is at re:Invent?
TechHippie · about 2 years ago
Hello Team - I am using the fully-private-cluster Terraform blueprint to create a private EKS cluster with 2 managed node groups. I am trying to restrict the ECR repositories the EKS cluster can pull images from by modifying the AmazonEC2ContainerRegistryReadOnly policy (as a custom policy) to contain specific repositories instead of all. This setup works for the first node group, but for the second node group it fails saying a policy with the same name exists. How can I make it use the existing IAM policy if it exists? I tried to use the aws_iam_policy data source, but then it fails on node group 1 execution itself, as the IAM policy doesn't exist with that name yet. Any guidance on troubleshooting it would be of great help.