10 messages
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Mahesh about 2 years ago
Hi All, I am trying to deploy an EC2 instance using https://registry.terraform.io/modules/cloudposse/ec2-instance/aws/latest, providing an existing VPC ID and subnet ID via tfvars.
Getting the error below:
│ Error: creating EC2 Instance: Unsupported: The requested configuration is currently not supported. Please check the documentation for supported configurations.
│ status code: 400, request id: d2477b03-6c8a-4c94-b149-3f6a3f5e976
Tried changing the instance type and adding AZ and tenancy, but it still fails.
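As a side note, that Unsupported error usually comes back from the EC2 API when the combination of instance type, subnet/AZ, and AMI isn't valid, rather than from the module itself. A minimal boto3 sketch (the subnet ID, region, and instance type below are placeholders, not values from this thread) to check whether the type is even offered in the subnet's AZ:
import boto3

# Placeholders -- swap in the subnet, region, and instance type you pass to the module.
ec2 = boto3.client("ec2", region_name="us-east-1")
subnet = ec2.describe_subnets(SubnetIds=["subnet-0123456789abcdef0"])["Subnets"][0]
az = subnet["AvailabilityZone"]

offerings = ec2.describe_instance_type_offerings(
    LocationType="availability-zone",
    Filters=[
        {"Name": "location", "Values": [az]},
        {"Name": "instance-type", "Values": ["t3.micro"]},
    ],
)

if offerings["InstanceTypeOfferings"]:
    print(f"t3.micro is offered in {az}")
else:
    print(f"t3.micro is NOT offered in {az} -- try a different type or subnet")
If the type is offered, the next things to compare are the AMI architecture (x86_64 vs arm64) against the instance type, and the tenancy setting.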
AdamP about 2 years ago (edited)
Let's say I have a private RDS instance, and a 3rd party needs to be "whitelisted" to access it. Without PrivateLink or any site-to-site VPNs, I think the best solution would be a Network Load Balancer in a public subnet that routes allowed traffic to the private subnets, letting the 3rd party hit the private RDS instance. I think that's the simplest option, assuming the small handful of 3rd parties aren't AWS customers and we can't do cross-account roles or anything like that. Thoughts?
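For what it's worth, a rough boto3 sketch of that pattern (all names, IDs, and the Postgres port are placeholder assumptions, not details from the message). The main gotcha is that the NLB can only target the RDS instance by its private IP, which can change on failover or maintenance, so it needs to be re-resolved and re-registered periodically; allow-listing the 3rd party's source IPs then happens via security groups.
import socket
import boto3

elbv2 = boto3.client("elbv2")

# Resolve the RDS endpoint to its current private IP (run from inside the VPC).
rds_ip = socket.gethostbyname("mydb.xxxxxxxxxxxx.us-west-2.rds.amazonaws.com")

# TCP target group pointing at the database's IP (port assumes Postgres).
tg = elbv2.create_target_group(
    Name="rds-3rd-party-tg", Protocol="TCP", Port=5432,
    VpcId="vpc-0123456789abcdef0", TargetType="ip",
)["TargetGroups"][0]
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": rds_ip, "Port": 5432}],
)

# Internet-facing NLB in the public subnets, forwarding straight to the target group.
lb = elbv2.create_load_balancer(
    Name="rds-3rd-party-nlb", Type="network", Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0123456789abcdef1"],
)["LoadBalancers"][0]
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"], Protocol="TCP", Port=5432,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)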
Mike Jaweed about 2 years ago
Hi everyone, I have what is hopefully an easy question.
I’m using some python modules that are not natively available on AWS lambda. I’m seeing this error message:
`{
"errorMessage": "Unable to import module 'transcribe-job.app': No module named 'pusher'",
"errorType": "Runtime.ImportModuleError",
"requestId": "0218478d-f56c-4b59-89f7-15d43e26a665",
"stackTrace": []
}`
I’m wondering how I can fix this issue. I’m currently using a Lambda layer that has this already prepackaged. I’m doing a number of imports, but the only one it seems to complain about is pusher. Has anyone else experienced this, and what were your solutions? I’m hoping to solve this with Lambda layers because I don’t want to ship the dependencies in the app.zip payload. I understand pusher is probably not installed in the runtime, hence the error, which is why I packaged it in the Lambda layer. When I inspect the Lambda layer, I can see that pusher is there.
import boto3
import botocore
import botocore.session
import json
import os
import pusher
import pymysql.cursors
import webvtt
from aws_secretsmanager_caching import SecretCache, SecretCacheConfig
from botocore.exceptions import ClientError,NoCredentialsError
from dotenv import load_dotenv
from flashtext import KeywordProcessor
from io import StringIO
from urllib.parse import urlparse
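As an aside (not from the thread): with Python layers, the zip has to contain a top-level python/ directory (e.g. python/pusher/...); if packages sit at the zip root they show up when you inspect the layer, but the runtime never sees them. A throwaway diagnostic handler like this hypothetical sketch prints what the runtime actually has on its path:
import importlib.util
import os
import sys

def debug_layer(event=None, context=None):
    # Layers are mounted under /opt; Python packages are importable only from /opt/python.
    print("sys.path:", sys.path)
    print("/opt contents:", os.listdir("/opt"))
    print("pusher importable:", importlib.util.find_spec("pusher") is not None)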
This is also using sls, for more context:
service: lambda-transcribe
provider:
  name: aws
  runtime: python3.10
  region: us-west-2
  stage: ${opt:stage, 'pg'}
functions:
  proleagueLambdaTranscribe:
    environment: ${file(env.${opt:stage, self:provider.stage}.json)}
    handler: transcribe.app.lambda_handler
    name: ${self:provider.stage}-proleague-lambda-transcribe
    layers:
      - arn:aws:lambda:${self:provider.region}:${aws:accountId}:layer:pythonAppDependencies:${opt:layerVersion, '4'}
    events:
      - eventBridge:
          eventBus: default
          pattern:
            source:
              - "aws.transcribe"
            detail-type:
              - "Transcribe Job State Change"
            detail:
              TranscriptionJobStatus:
                - "COMPLETED"
Balazs Varga about 2 years ago
Do I need to reboot writers when I change the ACU? I still see that I'm hitting the max connections limit, even though I increased the max capacity of my Serverless v2 instance.
E-Love about 2 years ago
Anyone else get bitten by the new charge for all public IPv4 IPs as of Feb 1? Turns out running a SaaS with a single tenant architecture on EKS (one namespace per customer) with the AWS LB controller (without using IngressGroups) is a recipe for a ton of public IPs (number of AZs times number of ALBs).
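Back-of-the-envelope sketch of how that adds up, assuming one ALB per customer namespace across 3 AZs and the $0.005 per public IPv4 per hour charge that took effect Feb 1, 2024 (the tenant count is made up):
customers = 50            # hypothetical tenant count, one namespace/ALB each
azs = 3                   # each ALB allocates one public IPv4 per AZ
price_per_ip_hour = 0.005 # USD, public IPv4 charge effective Feb 1, 2024
hours_per_month = 730

public_ips = customers * azs
monthly_cost = public_ips * price_per_ip_hour * hours_per_month
print(f"{public_ips} public IPv4 addresses ~= ${monthly_cost:,.0f}/month")
# 150 addresses ~= $548/month just for IPv4; an IngressGroup (one shared ALB)
# would collapse that to a handful of addresses.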
Matt Gowie about 2 years ago
Hey folks -- We typically enforce S3 encryption on our buckets via the below policies / via the allow_ssl_requests_only + allow_encrypted_uploads_only flags in the cloudposse/s3-bucket/aws module.
Policies:
{
    "Sid": "DenyIncorrectEncryptionHeader",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::allma-ue1-production-rag-system-text/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption": "AES256"
        }
    }
},
{
    "Sid": "DenyUnEncryptedObjectUploads",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::allma-ue1-production-rag-system-text/*",
    "Condition": {
        "Null": {
            "s3:x-amz-server-side-encryption": "true"
        }
    }
},
I am just seeing this update in the AWS "Using server-side encryption with Amazon S3" docs:
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs. For more information, see Default encryption FAQ.
This seems to say to me... that we don't need to enforce those policies any longer as all objects will get applied that same AES256 encryption regardless.
Is that correct? Anyone else gone down that rabbit hole before?
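A quick way to see what S3 now reports as the bucket default is a minimal boto3 sketch (using the bucket name from the policy above):
import boto3

s3 = boto3.client("s3")
resp = s3.get_bucket_encryption(Bucket="allma-ue1-production-rag-system-text")
for rule in resp["ServerSideEncryptionConfiguration"]["Rules"]:
    print(rule["ApplyServerSideEncryptionByDefault"])  # e.g. {'SSEAlgorithm': 'AES256'}
One caveat: default encryption only fills in a missing x-amz-server-side-encryption header, while the DenyIncorrectEncryptionHeader statement above also rejects uploads that explicitly ask for a different algorithm (e.g. aws:kms), so the policies aren't strictly redundant.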
Corky about 2 years ago
Hey y'all,
Right now, we're thinking about the best way to manage RDS SQL Server logins. Does anyone have preferred methods for managing something like that declaratively? We were considering doing this in Terraform since logins are per-instance and could live with the db instance creation, but there would be a lot of moving pieces to be able to connect to a non-publicly-available RDS instance.
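One hedged sketch of a declarative-ish approach (all secret IDs, names, and networking here are hypothetical): keep the desired logins in Secrets Manager and reconcile them from something that already has private access to the instance, e.g. a Lambda in the VPC or a CI job behind an SSM tunnel:
import json
import boto3
import pymssql

secrets = boto3.client("secretsmanager")
# Hypothetical secrets: admin credentials plus a map of login name -> password.
admin = json.loads(secrets.get_secret_value(SecretId="rds/sqlserver/admin")["SecretString"])
desired = json.loads(secrets.get_secret_value(SecretId="rds/sqlserver/app-logins")["SecretString"])

conn = pymssql.connect(server=admin["host"], user=admin["username"],
                       password=admin["password"], database="master")
conn.autocommit(True)
cur = conn.cursor()
for name, password in desired.items():
    cur.execute("SELECT 1 FROM sys.server_principals WHERE name = %s", (name,))
    if cur.fetchone() is None:
        # CREATE LOGIN can't take bound parameters, so the values are interpolated;
        # they come from Secrets Manager, not user input.
        cur.execute(f"CREATE LOGIN [{name}] WITH PASSWORD = '{password}'")
conn.close()
Terraform can still own the secrets and the trigger, which keeps the "what logins should exist" part declarative without Terraform itself needing a network path to the instance.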
Sean Turner about 2 years ago
Faced with an interesting engineering problem.
We work with a lot of GIS data that geologists interact with via qgis, a desktop client for working with GIS layers. Architecturally this is a PostgreSQL RDS instance in AWS. Geologists are geographically distributed and therefore suffer latency issues interacting with the RDS instance in Oregon from Africa or Australia. Access patterns involve frequent reads and less-than-moderate writes. Generally, geologists wouldn't be working on the same project(s), so basic conflict resolution should be sufficient.
pgActive (the RDS PostgreSQL plugin for an active-active / multi-writer replication configuration) and Aurora Global Database for PostgreSQL (write forwarding) both don't replicate DDL statements (e.g. CREATE), which takes those solutions out of the running (qgis uses CREATE frequently).
We're looking at pgEdge, which seems to be a young startup that can replicate DDL statements and looks very compelling, but I wanted to see whether there are any other vendors, or whether anyone else has done some serious thinking about these problems and has insight.
Cheers!
jonjitsu almost 2 years ago
Has anyone created a serverless API Gateway + Lambda system that was not exposed to the internet, but only accessed from an on-premises system over some dedicated private connection? Looking at this https://aws.amazon.com/blogs/compute/integrating-amazon-api-gateway-private-endpoints-with-on-premises-networks/ it seems to be possible. I would have a VPC purely for connecting on-prem to AWS and allowing it to access the API Gateway endpoint.
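For reference, a minimal boto3 sketch of the pattern from that post, assuming on-prem traffic lands in a small edge VPC over Direct Connect or a VPN (all IDs and the region are placeholders): create an interface VPC endpoint for execute-api, then restrict the private API to that endpoint with its resource policy.
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint in the edge VPC that on-prem can route to.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.execute-api",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)["VpcEndpoint"]

# Resource policy for the private REST API so only calls via this endpoint are allowed.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "execute-api:Invoke",
        "Resource": "execute-api:/*",
        "Condition": {"StringEquals": {"aws:SourceVpce": endpoint["VpcEndpointId"]}},
    }],
}
print(json.dumps(resource_policy, indent=2))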