You would think we were past this by now. S3 misconfigurations have been in the news cycle for the best part of a decade. AWS has improved the defaults, added Block Public Access settings at the account level and made the implications considerably more visible in the console. Despite all of that, S3 bucket exposures still feature in breach reports year after year. The story is not really about S3 any more. It is about the gap between platform defaults and actual operational practice.
Block Public Access Should Be On Everywhere
The Block Public Access feature lets you prevent any bucket in your account from being made public, regardless of individual bucket policies. Turn it on at the account level for every AWS account you operate. The few workloads that genuinely need public buckets, such as static website hosting, should live in dedicated accounts with separate guardrails. Putting everything in one account and relying on per-bucket policies is the pattern that produces the most spectacular breaches. A focused AWS pen testing engagement should confirm the account-level block is genuinely active.
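If you want to verify that in code rather than in the console, a minimal boto3 sketch along the following lines will do. The account ID is resolved via STS, and the check-then-enforce logic around the standard S3 Control calls is our own framing rather than anything AWS prescribes.

```python
# Minimal sketch: confirm the account-level Block Public Access configuration
# is fully enabled, and enable it if not. Assumes credentials with permission
# to call S3 Control for the account.
import boto3

sts = boto3.client("sts")
s3control = boto3.client("s3control")

account_id = sts.get_caller_identity()["Account"]

desired = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

try:
    current = s3control.get_public_access_block(AccountId=account_id)[
        "PublicAccessBlockConfiguration"
    ]
except s3control.exceptions.NoSuchPublicAccessBlockConfiguration:
    current = {}  # no configuration at all means nothing is blocked

if current != desired:
    print(f"Account {account_id}: block public access incomplete: {current}")
    s3control.put_public_access_block(
        AccountId=account_id,
        PublicAccessBlockConfiguration=desired,
    )
```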
Cross Account Access Is Still A Risk
A bucket that is not public can still be accessed from another AWS account if the bucket policy allows it. Cross-account access is sometimes legitimate and frequently overprovisioned. Wildcards in cross-account principal lists, role assumption chains that span multiple accounts, and trust relationships that were set up for one project and never tightened all create opportunities for abuse. Review cross-account access regularly and remove anything that no longer has a clear business purpose.
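That review can be scripted. The sketch below walks every bucket policy in the account and flags Allow statements whose principals are wildcards or fall outside a trusted list; TRUSTED_ACCOUNTS is a placeholder for your own account IDs, and the principal parsing covers the common cases rather than every shape a policy can take.

```python
# Rough cross-account audit sketch using boto3.
import json
import boto3
from botocore.exceptions import ClientError

TRUSTED_ACCOUNTS = {"111122223333"}  # hypothetical: accounts you expect to see

s3 = boto3.client("s3")

def principal_accounts(principal):
    """Yield the account IDs (or '*') named in a policy Principal element."""
    if principal == "*":
        yield "*"
        return
    if not isinstance(principal, dict):
        return
    arns = principal.get("AWS", [])
    if isinstance(arns, str):
        arns = [arns]
    for arn in arns:
        # arn:aws:iam::123456789012:root -> 123456789012; bare IDs pass through
        yield arn.split(":")[4] if arn.startswith("arn:") else arn

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        policy = json.loads(s3.get_bucket_policy(Bucket=name)["Policy"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchBucketPolicy":
            continue  # no policy, nothing to audit here
        raise
    for statement in policy.get("Statement", []):
        if statement.get("Effect") != "Allow":
            continue
        for account in principal_accounts(statement.get("Principal", {})):
            if account == "*" or account not in TRUSTED_ACCOUNTS:
                print(f"{name}: Allow statement grants access to {account}")
```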
Expert Commentary
William Fieldhouse, Director of Aardwolf Security Ltd

The most informative S3 finding I have written up in the last year was a bucket that was correctly private at the bucket level but exposed through a CloudFront distribution that nobody remembered configuring. The data was reachable through a domain the customer did not realise pointed at the bucket. Always check the upstream services too.
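That upstream check is easy to script a first pass for. The sketch below lists every CloudFront distribution in the account and prints any origin whose domain looks like an S3 endpoint, so each one can be matched against what you expect to be there; the domain match is a heuristic that covers both REST and static website endpoints.

```python
# Sketch: enumerate CloudFront origins that point at S3.
import boto3

cloudfront = boto3.client("cloudfront")

paginator = cloudfront.get_paginator("list_distributions")
for page in paginator.paginate():
    for dist in page["DistributionList"].get("Items", []):
        for origin in dist["Origins"]["Items"]:
            domain = origin["DomainName"]
            # Matches bucket.s3.amazonaws.com, regional endpoints and
            # bucket.s3-website-<region>.amazonaws.com alike.
            if ".s3" in domain and domain.endswith("amazonaws.com"):
                print(f"{dist['Id']} ({dist['DomainName']}): origin {domain}")
```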
Tooling Choices That Actually Help
AWS Config rules, Security Hub controls and external policy-as-code tools all monitor S3 configurations against defined baselines. Pick one, configure it carefully and act on the findings. The choice between tools matters less than the discipline of actually responding to the alerts: many S3 breaches involve buckets that automated tooling correctly flagged weeks or months before the breach, and nobody acted on the findings. The discipline pays off beyond S3, too. The same tooling typically covers multiple control categories at once, so investing in it and in the response process produces value across the entire AWS estate, not just the storage layer.
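One way to turn "act on the findings" into a routine is to pull the open S3 findings out on a schedule and route them to whoever owns the bucket. A minimal sketch, assuming Security Hub is already enabled in the account and region:

```python
# Pull active, failed, unworked S3 findings from Security Hub.
import boto3

securityhub = boto3.client("securityhub")

filters = {
    "ResourceType": [{"Value": "AwsS3Bucket", "Comparison": "EQUALS"}],
    "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
}

paginator = securityhub.get_paginator("get_findings")
for page in paginator.paginate(Filters=filters):
    for finding in page["Findings"]:
        resources = ", ".join(r["Id"] for r in finding["Resources"])
        print(f"{finding['Title']} -> {resources}")
```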
Logging Closes The Loop
S3 access logging and CloudTrail data events between them give you the audit trail you need to detect abuse and reconstruct incidents. Both are off by default for cost reasons, and both are worth turning on for buckets holding sensitive data. Pair this with periodic vulnerability scan services that validate the configuration against the intent, because S3 configurations drift over time as different teams add their own buckets to the estate.
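The access logging half of that drift check is straightforward to sketch: report every bucket with no server access logging target. LOG_BUCKET is a placeholder for your own central log destination, and CloudTrail data events need a separate check against the trail configuration.

```python
# Sketch: flag buckets without server access logging.
import boto3

LOG_BUCKET = "example-central-access-logs"  # hypothetical log destination

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    logging_status = s3.get_bucket_logging(Bucket=name)
    if "LoggingEnabled" not in logging_status:
        print(f"{name}: access logging disabled")
        # To enable, uncomment below; the target bucket must already grant
        # the S3 log delivery service permission to write to it.
        # s3.put_bucket_logging(
        #     Bucket=name,
        #     BucketLoggingStatus={"LoggingEnabled": {
        #         "TargetBucket": LOG_BUCKET,
        #         "TargetPrefix": f"{name}/",
        #     }},
        # )
```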
S3 is not the problem; the discipline around S3 is. Misconfiguration is a solved problem at the technical level, and the reason it keeps appearing in breach reports is operational, with a fix well within reach for any serious team. Cloud security is a shared responsibility model in name and a fully owned responsibility model in practice. The configuration choices that matter live on your side of the line, regardless of how the provider markets the platform.