aws
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
Prasanna6 days ago
@Prasanna has joined the channel
JS6 days ago
@JS has joined the channel
Salman Shaik7 days ago
@Salman Shaik has joined the channel
Deep7 days ago
@Deep has joined the channel
bradym10 days ago
Based entirely on the https://www.lastweekinaws.com/ newsletter they send (I've never used them), I'd start with Duckbill https://www.duckbillhq.com/
Charlie Mallery10 days ago
Hello, does anyone have a good billing solutions partner for billing optimization? Like one that provides insured RIs and SPs? Our team has been struggling to keep track of our costs and just been focusing on scale. Can anyone relate or does anyone have a solution?
drskyle13 days ago
Hi SweetOps team, just updated to CloudSlash v2.2 — a major update to my open-source, local-first AWS cost cleaner. It builds a graph of your infrastructure to find waste (orphaned volumes, idle RDS, zombie dependencies), but now the core engine is a modular SDK. Do check it out and see if it helps; try running it locally using the mock flag : )
New in v2.2:
1. Embeddable Go Library: You can now import pkg/engine directly into your own internal platforms or CI tools.
2. Concurrent "Swarm" Scanning: Much faster graph ingestion on large accounts.
3. Live Pricing Integration: Now connects to the AWS Pricing API for accurate implementation costs.
4. Graceful Degradation: Skips un-authed regions/profiles safely without hanging.
It’s written in Go, runs entirely locally (no SaaS), and generates Terraform import blocks to help you clean up safely.
Repo: https://github.com/DrSkyle/CloudSlash
: ) DrSkyle
P
paulm23 days ago(edited)
Hello! I've published an article that demystifies S3 storage class selection. Many, many other people have written about this, but I avoid the usual sins:
• Copying quantities from the price list without attending to the units. What is $0.00099 per GiB worth? When I write it as 0.099¢, you see it's about 0.1¢ or ⅒ of a penny.
• Taking marketing seriously. The price of data retrieval from the Intelligent Tiering storage class is 0, but I can calculate the cost (up to 3.6¢ per GiB) so you can compare storage classes directly.
• Not keeping up with new storage classes, pricing rule changes, and price reductions.
• Complicating the math. Though (or maybe because?) I actually was a credentialed K–12 math teacher, I hate complexity! The intersection of two linear equations is one way to figure and visualize the break-even point, but sums or series make sense to everyone:
STANDARD_IA: 1.25 + 1.25 + 1.25 + 1.25 + 1.25 = 6.25¢ per GiB
INTELLIGENT_TIERING: 2.30 + 1.25 + 1.25 + 0.40 + 0.40 = 5.60¢
GLACIER_IR: 0.40 + 0.40 + 0.40 + 0.40 + 0.40 + ...
I hope you'll find the article useful, and learn at least one new fact about S3 costs! Feedback is welcome. This piece grew out of a much shorter and more limited submission to the (quite useful!) Cloud Efficiency Hub at hub.pointfive.co/hub .
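P.S. If you'd rather run the series yourself, here's the same arithmetic as a quick Python sketch (my illustration using the example prices above, not code from the article):
python
# Cumulative per-GiB cost in cents, using the illustrative monthly series above.
monthly_cents = {
    "STANDARD_IA": [1.25] * 12,
    "INTELLIGENT_TIERING": [2.30, 1.25, 1.25] + [0.40] * 9,
    "GLACIER_IR": [0.40] * 12,
}

cumulative = {
    cls: [round(sum(series[: m + 1]), 2) for m in range(len(series))]
    for cls, series in monthly_cents.items()
}

for cls, totals in cumulative.items():
    print(f"{cls:20s} first 6 months: {totals[:6]}")

# Break-even: first month INTELLIGENT_TIERING is cheaper than STANDARD_IA.
for month, (ia, it) in enumerate(
    zip(cumulative["STANDARD_IA"], cumulative["INTELLIGENT_TIERING"]), start=1
):
    if it < ia:
        print(f"INTELLIGENT_TIERING is cheaper from month {month}")
        break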
D
Dan Herrington28 days ago
Hey all. Wondering if anyone here has used AWS Config? I'm looking to set up an aggregator in my management account to get all the EC2 and running ECS objects into a single inventory view, then use that for my Ansible inventory so I can manage hosts across accounts.
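In case it helps frame the question, this is roughly the query shape I'm picturing against the aggregator (sketch only; "my-aggregator" is a placeholder):
python
# Sketch: pull EC2 instances from an AWS Config aggregator across accounts,
# as raw material for an Ansible inventory. Aggregator name is a placeholder.
import json
import boto3

config = boto3.client("config")
expression = (
    "SELECT resourceId, accountId, awsRegion, configuration.privateIpAddress "
    "WHERE resourceType = 'AWS::EC2::Instance'"
)

token = None
while True:
    kwargs = {
        "Expression": expression,
        "ConfigurationAggregatorName": "my-aggregator",
    }
    if token:
        kwargs["NextToken"] = token
    resp = config.select_aggregate_resource_config(**kwargs)
    for result in resp["Results"]:  # each result is a JSON string
        print(json.loads(result))
    token = resp.get("NextToken")
    if not token:
        break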
drskyleabout 1 month ago
Hi SweetOps, I have been building CloudSlash, an open-source, local-first tool to clean up AWS costs without sending your data to a SaaS.
It builds a graph of your infrastructure to find:
1) Orphaned EBS volumes / snapshots
2) Idle RDS / load balancers
3) "Zombie" assets (dependencies that don't exist)
It’s written in Go, runs locally, and generates Terraform import blocks to help you clean up safely. Just shipped v2.1.1 today to fix some multi-region S3 scanning issues.
Repo: https://github.com/DrSkyle/CloudSlash
: ) DrSkyle
Wojciech Rybakiewiczabout 2 months ago
hey all 👋
wanted to share something small I built recently and get your thoughts.
in my day job I kept running into CloudWatch alarms that technically existed,
but in practice didn’t really protect anything - no actions, disabled actions,
or alarms stuck in ALARM / INSUFFICIENT_DATA for a long time that everyone just kind of accepted.
I ended up writing a tiny, read-only CLI to audit alarms across regions in one place and surface those cases:
https://github.com/wrybakiewicz/cw-alarm-audit
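for a sense of what it flags, the core checks boil down to something like this (rough boto3 sketch of the ideas above, not the tool's actual code):
python
# Rough illustration of the checks described above: alarms with no actions,
# disabled actions, or stuck in non-OK states. Region is just an example.
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
paginator = cw.get_paginator("describe_alarms")

for page in paginator.paginate():
    for alarm in page["MetricAlarms"]:
        findings = []
        if not alarm.get("AlarmActions"):
            findings.append("no actions")
        if not alarm.get("ActionsEnabled", True):
            findings.append("actions disabled")
        if alarm["StateValue"] in ("ALARM", "INSUFFICIENT_DATA"):
            findings.append(f"state {alarm['StateValue']}")
        if findings:
            print(f"{alarm['AlarmName']}: {', '.join(findings)}")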
honestly just curious - is this something you’ve seen as well? or do you handle this problem in a different way?
Juan Pablo Lorierabout 2 months ago
Hi, I use the msk-apache-kafka-cluster module (version ~> 2.5.0) to manage my clusters, and for only one cluster my plan reports changes to the cluster but shows no attribute changes.
If I try to apply, I get this error:
Error: updating MSK Cluster (arn:aws:kafka:us-east-1:xx:cluster/xxxxkafka/32a87522-10c3-40e2-8d44-2472cce4a1fd-14) security: operation error Kafka: UpdateSecurity, https response error StatusCode: 400, RequestID: 63c63899-ef76-4098-96df-169739e3aeea, BadRequestException: The request does not include any updates to the security setting of the cluster. Verify the request, then try again.
If I manually modify the cluster to generate a real change, the plan and apply work fine.
I've tried destroying the cluster and creating it again with Terraform, but it still has this issue.
Tech2 months ago
How do you optimize CloudWatch (Log Delivery) spend when sending VPC flow logs to S3?
Harry Skinner2 months ago
Just dropped a quick write-up on using Karpenter to achieve "scale-to-zero" for dev clusters.
We managed to eliminate 100% of idle compute costs (vs running fixed ASGs). It’s a huge win for dev environments that sit empty at night. Subscribe for more...
Link to the full case study if you're interested: https://www.linkedin.com/pulse/burn-rate-alert-why-your-safety-buffer-servers-just-odpmc
Sean Nguyen2 months ago
Hey all, our org has recently grown quite a bit. We have new engineers who are making large contributions to our Terraform codebase (yay!).
That being said, I’ve been seeing some quite poor IAM policies for application IRSAs come through (copying policies from AWS docs w/ blanket Allows).
There are some cultural/review process issues which can help address this, but I was wondering if anyone had any useful resources for automatically catching quality issues with IAM policies defined via Terraform?
I’m thinking along the lines of Spacelift Rego policies or linters which we can run as PR checks?
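For example, even a crude check like this would catch the blanket Allows (my own sketch, not any particular product's API):
python
# Crude PR-check sketch: flag Allow statements with wildcard actions or
# resources in a rendered IAM policy JSON document. Illustrative only.
import json
import sys

def blanket_allows(policy: dict) -> list[dict]:
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):
        statements = [statements]
    flagged = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        policy = json.load(f)
    for stmt in blanket_allows(policy):
        print("Blanket Allow:", json.dumps(stmt))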
Mike Sherman3 months ago
Hey all, I'm new to this Slack channel. Wanted to seek some advice on this framework: https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html. It seems like someone could provide this document as context to AI, any of the AI engines, and then ask it to validate an AWS "footprint" for issues, including cost. Anyone done this?
Cyril (runs-on.com)3 months ago
I've released an EC2 instance leaderboard, in case you've ever wondered which EC2 instance has the fastest single-thread CPU performance: https://go.runs-on.com/instances/ranking
hasansaiful3 months ago
If you’re a DevOps engineer, software developer, or tech professional who wants to monitor all your system analytics in one place, this is your tool.
Introducing QueryInside — the next-generation log monitoring and analytics tool.
Start Free: https://queryinside.com/
Why You’ll Love It (Because You’ve Probably Felt These Pain Points)
• Logs are scattered everywhere with no unified view; they usually drop in Discord or Slack messages and are hard to find among millions
• Traditional dashboards that feel rigid and outdated
• Slow debugging because you can’t instantly find what you need
• Limited analytics that don’t show meaningful trends
• No flexibility to build the exact report or workflow your team needs
• Tools that demand a credit card before you even evaluate them
🔍️ Why it's worth trying:
• Create custom reports the way you want
• Advanced log and system analytics with real-time insights
• Works with any custom app or stack
• Built-in semantic search — find anything instantly
• Faster debugging with full context
🎉 Black Friday Special
We’re opening early access for everyone.
🔥 Try QueryInside FREE — no credit card required.
If you want smarter monitoring, easier searches, and complete visibility into your system logs… this is your tool.
Start Free: https://queryinside.com/
jaysun4 months ago
hey folks. are most of you using permission sets or vanilla aws iam roles these days? I go through this internal debate like every year, and I generally land on some variation of permission set to identity account with a second assume role to destination accounts being the “right” way, but the user experience is a bit meh. aka, you would need to manually do the extra assume role hop if you’re using the console, and the cli flow won’t “just work” out of the box without some internal tooling / wrappers
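for reference, the wrapper-free version of the two-hop flow ends up needing per-dev config roughly like this (everything below — IDs, role names, URLs — is a placeholder, not our actual setup):
ini
# ~/.aws/config sketch: SSO into an identity account, then role-chain to a
# destination account. All account IDs, role names, and URLs are placeholders.
[sso-session my-sso]
sso_start_url = https://example.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access

[profile identity]
sso_session = my-sso
sso_account_id = 111111111111
sso_role_name = IdentityAccess
region = us-east-1

[profile dest]
source_profile = identity
role_arn = arn:aws:iam::222222222222:role/DestinationRole
region = us-east-1
with that, aws s3 ls --profile dest does the second hop automatically in the CLI; the console still needs the manual switch-role hop, though.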
Prastik4 months ago(edited)
Hey Guys
Just wanted to share something I’ve been using (and building) lately — an app that lets you chat with your CloudWatch logs, spin up on-demand dashboards, ask questions, and troubleshoot issues without juggling 10 different tools 😅 all from one single terminal. I've been using it myself and
it's been super helpful, imagine all your monitoring and observability shrinks down to one chat window that can help you to troubleshoot issues, analyse logs and create on demand dashboards as well. I basically run my monitoring on autopilot now.
Figured I’d share it here in case anyone’s interested. Happy to demo or chat more about it! 🙌 alternatively you can use it yourself at: https://jodmcp.com
Thanks!
Saljook Haider4 months ago
Hi everyone 👋
I’m new to AWS and would really appreciate some guidance.
I recently bought a low-cost domain from Hostinger and I’m trying to request an SSL certificate using AWS Certificate Manager (ACM).
I’ve added the required CNAME records in Hostinger’s DNS panel, and online tools like DNSChecker show that the records have propagated correctly.
However, ACM still keeps showing “Pending validation” or “Validation failed.”
I’m wondering if there might be any formatting issues or misconfigurations specific to Hostinger’s DNS setup (e.g., whether I need to remove the domain suffix or trailing dots).
I can’t use email validation, so I’m relying on DNS validation only.
Any suggestions or examples from those who’ve done ACM validation with Hostinger domains would be really helpful
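For what it's worth, this is how I've been double-checking what resolvers actually return (dnspython sketch; the record name/value below are placeholders for the ones from the ACM console):
python
# Sketch: verify the ACM validation CNAME exactly as published in DNS.
# Name/value are placeholders; copy the real ones from the ACM console.
# Requires: pip install dnspython
import dns.resolver

name = "_abc123.example.com"                # "Name" from ACM
expected = "_def456.acm-validations.aws."   # "Value" from ACM (note trailing dot)

for rdata in dns.resolver.resolve(name, "CNAME"):
    target = rdata.target.to_text()          # includes the trailing dot
    print(f"{name} -> {target} (match: {target == expected})")
A common failure mode with shared-hosting DNS panels is the zone name getting appended twice (e.g. _abc123.example.com.example.com), which a check like this makes obvious.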
Mubarak J4 months ago
Many AWS services are moving to maintenance mode. I was once interested in CodeCatalyst and Proton... glad we didn't end up using them.
https://aws.amazon.com/about-aws/whats-new/2025/10/aws-service-availability/
shannon agarwal5 months ago
Hello SweetOps people, I have a new project to build a Windows Server in AWS, but it looks like there are currently no Windows AMIs created. What is the best way to go about this? I have the requirements.
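(For context, this is roughly how I've been searching for them; the region and name pattern below are just examples:)
python
# Example: list Amazon-published Windows Server AMIs and pick the newest.
# Region and the name pattern are examples only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "platform", "Values": ["windows"]},
        {"Name": "name", "Values": ["Windows_Server-2022-English-Full-Base-*"]},
        {"Name": "state", "Values": ["available"]},
    ],
)
latest = max(resp["Images"], key=lambda image: image["CreationDate"])
print(latest["ImageId"], latest["Name"])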
Igor Rodionov5 months ago(edited)
Hey SweetOps community 👋
I’m looking to learn how other teams are handling ECS deployments — specifically how they’re structuring and automating the process of deploying applications on ECS.
We’re reviewing and refreshing our ECS deployment strategy and patterns, and we’d love to hear from others in the community who are running workloads on ECS.
If you or your team are deploying apps to ECS, I’d love to know:
• How do you typically deploy? (e.g., GitHub Actions, CodePipeline, Terraform, custom tooling, etc.)
• What deployment patterns or architectures are you using? (e.g., blue/green, rolling updates, canary, etc.)
• How do you decouple application and infrastructure-related configs?
• What have you found works best (or doesn’t)?
• Are there any lessons learned or gotchas you’d share?
The goal is to gather insights into common practices and trade-offs teams are making so we can align our approach with the broader DevOps community and current best practices.
If you’re open to sharing your experience or discussing this further, please drop a comment or DM me — we’d really appreciate your input!
Thanks in advance!
paulm5 months ago
I thought I'd share IaC source for creating "shareable" AWS Lambda test events…
Background: aws.amazon.com/about-aws/whats-new/2022/03/aws-lambda-console-test-events
• Introduced in 2022, this way of distributing sample events so that multiple developers can test Lambda functions realistically, not locally, hasn't received much attention. The documentation covers manual creation, not IaC.
Sample: github.com/sqlxpert/stay-stopped-aws-rds-aurora/blob/b9c2457/stay_stopped_aws_rds_aurora.yaml#L883-L970
• I like CloudFormation. The Terraform resources are aws_schemas_registry and aws_schemas_schema .
• The lambda-testevent-schemas registry serves every Lambda function in an AWS account and region, so you've got to be able to create it conditionally, and it has to survive the deletion of your project. Consider a removed block with destroy = false .
• Test events for internal projects will be more specific than mine, and won't need editing. Test execution can be automated but extra keys with instructions for humans might be useful nevertheless.
I hope this will be helpful to someone!
Hamza Nasir5 months ago
Need help with some AWS thing. This is maybe dumb, but I think I am wasting too much time on this and thought I'd get help from the community.
I am getting this error. I am able to access this instance using a connect endpoint via another role; I assigned the same permissions to the second role but still get this error. I tried editing the inbound rule for its security group to allow all traffic from 10.0.0.0/8, which includes all my VPC CIDRs. Not sure what I should do now. Has someone faced this before?
Hamza Nasir5 months ago
Hello guys
Ivan Pinatti5 months ago
👋🏼 Curious how others are handling S3 Access Logs when facing requirements like:
1. CMK (customer managed keys) for encryption
2. WORM (write once read many) compliance policies with deletion blocked
3. MFA delete required enabled
4. Alerts when manual deletion happen (not lifecycle rules)
According to AWS docs, you can't have 1 and 2 for a logging destination bucket. And, it seems 3 is unsupported on Atmos/Terraform.
Mubarak J6 months ago
Does anyone know what will happen to Bitnami images hosted on public ECR? We reference these images, and I was curious if these will move to something like
<https://gallery.ecr.aws/bitnami-legacy/>
Erik Osterman (Cloud Posse)6 months ago
Slackbot6 months ago
This message was deleted.
Pierre Humberdroz8 months ago
I feel like people would have an idea on how to solve this..
I need to let some cloud-managed tools connect into my AWS VPC via an SSH bastion. Currently we have an EC2 instance set up, which kind of works, but I am wondering if we can host the SSH server within our EKS cluster and enable external access like this. Has anyone here done something like that before?
Shivam s8 months ago
I found a few automations using shell scripting on https://github.com, but I want to run them via GitHub Actions, if anyone is aware of how.
Jan Costandius8 months ago
Has anyone been using diagrams as code on a larger scale? Which frameworks have you been using?
PePe Amengual9 months ago(edited)
Who uses shared VPCs? Do they work now? How is your experience?
akhan4u9 months ago
Hi Team, I've a question around the integration of AWS Lake Formation & IAM Identity Center. I'd like to grant external users (AD users/SAML users) access to AWS Lake Formation resources, i.e. S3, Redshift, etc., and classify access using tags (aka more fine-grained permissions). It'd be great if someone could provide me general guidance, or something like a rough flowchart, for this use case.
Mubarak J9 months ago
It looks like Terraform will add enhanced region support as part of the AWS provider v6. I'm curious how this will work in root and child modules.
joey9 months ago
does anyone have strong opinions on EKS network flow monitoring (e.g. cross-AZ) for people that aren't using a CNI that provides things like Hubble? i've found AWS Network Flow Monitoring to be... not good, kubecost to be inaccurate (and not good), most open source solutions to not work, VPC flow logs to be painful, and the class of Datadog, Splunk, etc. to be expensive.
Aarushi9 months ago(edited)
Hey folks— quick pulse check: is anyone else seeing AWS costs creep up again this year?
We’ve been digging into 400+ AWS environments and spotting some recurring patterns — things even seasoned teams miss.
We’re putting together a free tactical webinar:
2025 Cloud Fitness: 5 Pro-tips for Healthier AWS Infrastructure
No fluff — just 5 expert-backed fixes to cut waste and boost performance this year.
• Focused on actionable steps
• Backed by real-world infra data
• 30 minutes + live Q&A
Grab your seat here: https://www.cloudkeeper.com/cloud-fitness-healthier-aws-infrastructure-webinar-2025?utm_source=Slack&utm_medium=slack_webinar&utm_campaign=cfc
jaysun9 months ago
I’ve been thinking about switching to ElastiCache Serverless for Redis (we’re currently using non-clustered with a replication group) and noticing that you can’t create a user/pass with the Terraform resource… how are people adding that additional layer without using replication group nodes?
is it just not needed as a general rule? (rely on security groups + TLS)
Michael10 months ago
AWS news: Lambda billing will now charge for cold starts
https://aws.amazon.com/blogs/compute/aws-lambda-standardizes-billing-for-init-phase/
SA10 months ago(edited)
Question on Detaching and Re-enrolling Accounts in AWS Control Tower via Account Factory:
I’m working on updating the AWS account alias, root email, and account name, and need some clarification regarding the detachment and re-enrollment of accounts in AWS Control Tower.
1. The accounts are provisioned via AWS Control Tower using Account Factory, where each account is specified in the YAML template.
2. Detaching from Control Tower for updates: To update account aliases, root email addresses, and account names, do I need to detach the account from Control Tower by removing the corresponding AWS::ServiceCatalog::CloudFormationProvisionedProduct resource in the YAML template? And once the updates are done, should I re-enroll the account back into Control Tower by adding the account back to the template and redeploying?
I couldn't find much in the AWS docs, or it's a bit unclear to me. Can someone shed some light on whether I'm thinking about the process correctly?
Resource block (to remove for detachment):
yaml
AccountName:
  Type: AWS::ServiceCatalog::CloudFormationProvisionedProduct
  Properties:
    ProductId: !Ref pProvisionedProductId
    PathId: !Ref pPathId
    ProvisioningArtifactId: !Ref pProvisioningArtifactId
    ProvisionedProductName: !Ref pAccountName
    ProvisioningParameters:
      - Key: AccountEmail
        Value: !Ref pAccountEmail
      - Key: AccountName
        Value: !Ref pAccountName
      - Key: ManagedOrganizationalUnit
        Value: !Sub "dev (${pDevOuId})"
      - Key: SSOUserEmail
        Value: aws-mgmt+usw2-controltower@.com
      - Key: SSOUserFirstName
        Value: AWS Control Tower
      - Key: SSOUserLastName
        Value: Admin
Parameter block (to ensure re-enrollment):
yaml
pAccountName:
  Type: String
  Default: account-name
TL;DR:
To update the alias, root email, and account name, do we need to detach the account from Control Tower by removing the AWS::ServiceCatalog::CloudFormationProvisionedProduct resource and its associated parameters in the YAML? Once the updates are complete, should we re-enroll the account by adding the resource and parameter blocks back to the YAML and redeploying?
Any insight is much appreciated. TIA
Gitmoxi10 months ago
Hey Folks!!
I'm currently exploring a startup idea at the intersection of DevOps and Generative AI. As someone with hands-on experience in DevOps, I’d love to hear your perspective on the current challenges practitioners face and where you think GenAI could meaningfully help.
If you're open to it, I’d really appreciate a quick 30-minute chat. Would love to learn from your experience — and happy to share more about what I’m thinking too! Please like or comment on this message and I can reach out to you directly.
Thanks so much for considering, and hope to connect! 🙏 🙏
SA11 months ago(edited)
Hi everyone,
I'm working on a data restoration process involving large batches of archived files stored in S3 Glacier Deep Archive. The recent data we store is tarred into single files for efficiency, but the older data we're trying to restore consists of thousands of individual files per batch, which is making the restoration process challenging.
S3 doesn’t support folder restores and requires files to be restored individually. As you can imagine, trying to restore such a large number of individual files using bash scripts is causing timeouts and issues with tracking the progress.
Has anyone experienced a similar issue with restoring large numbers of individual files from S3 Glacier? We’re considering S3 Batch Operations, but I’d love to hear if anyone has other strategies or best practices for efficiently handling this kind of large-scale restore, especially when dealing with a massive number of files.
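For context, the per-object flow we're scripting today looks roughly like this (bucket, prefix, and tier are placeholders; S3 Batch Operations would replace this loop):
python
# Sketch of the per-object restore loop we're trying to escape.
# Bucket and prefix are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-archive-bucket"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix="batch-2019/"):
    for obj in page.get("Contents", []):
        try:
            s3.restore_object(
                Bucket=bucket,
                Key=obj["Key"],
                RestoreRequest={
                    "Days": 7,
                    "GlacierJobParameters": {"Tier": "Bulk"},  # Deep Archive: Standard or Bulk
                },
            )
        except ClientError as err:
            # e.g. RestoreAlreadyInProgress; log and keep going
            print(obj["Key"], err.response["Error"]["Code"])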
TIA
Erik Osterman (Cloud Posse)11 months ago
set the channel topic:
Discussion related to Amazon Web Services (AWS). Please use #terraform for Terraform questions.
Andy Wortman11 months ago
Looking at the Cloudposse terraform-aws-documentdb-cluster, I’m not finding an option to provision an elastic cluster. Is that in a separate module, or perhaps doesn’t exist (yet)?