Amazon S3 Files Bridges Gap Between Object Storage and File Systems
Breaking: AWS Launches S3 Files for Native File System Access
Amazon Web Services (AWS) today announced the launch of Amazon S3 Files, a groundbreaking capability that makes S3 buckets accessible as high-performance file systems from any AWS compute resource. The service eliminates the long-standing tradeoff between object storage and traditional file systems.

“S3 Files is a leap forward,” said Dr. Sarah Chen, AWS Head of Storage Engineering. “It’s the first cloud object store to offer fully-featured, NFS v4.1+ file system access with automatic synchronization and fine-grained control.”
Background
For over a decade, AWS trainers used analogies like books in a library versus editable files to explain why object storage and file systems served different needs. S3 required replacing entire objects for any change, limiting interactive use cases.
“Customers had to choose between S3’s cost and durability versus a file system’s real-time capabilities,” noted storage analyst David Ross of CloudInsights. “That compromise often led to duplicated data and complex pipelines.”
Key Features and How It Works
S3 Files connects S3 buckets to any Amazon EC2 instance, container (ECS/EKS), or Lambda function using NFS v4.1+. Changes made on the file system are automatically reflected back to the S3 bucket, enabling seamless data sharing across clusters without duplication.
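As a rough illustration of the write-back behavior described above (this is a local simulation, not the actual S3 Files protocol — the real service performs the sync transparently over NFS v4.1), the sketch below mirrors edits made under a "mount point" directory back to a bucket-like store:

```python
import pathlib
import tempfile

# Hypothetical stand-ins: a dict plays the S3 bucket, a temp directory
# plays the NFS mount point exposed by S3 Files.
bucket = {}  # object key -> bytes

def sync_to_bucket(mount_root: pathlib.Path) -> None:
    """Mirror every file under the mount point back to the bucket,
    using the file's mount-relative path as its object key."""
    for path in mount_root.rglob("*"):
        if path.is_file():
            key = path.relative_to(mount_root).as_posix()
            bucket[key] = path.read_bytes()

mount = pathlib.Path(tempfile.mkdtemp())
(mount / "logs").mkdir()
(mount / "logs" / "app.log").write_bytes(b"request handled\n")

sync_to_bucket(mount)
print(sorted(bucket))  # file edits surface as object keys
```

In the real service this direction of sync (and the reverse) would be automatic; the point here is only the mapping from file paths to object keys that makes cluster-wide sharing possible without duplicating data.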
The system uses a high-performance local cache. Files needing low-latency access are stored there, while large sequential reads or byte-range requests are served directly from S3 to maximize throughput. “Intelligent pre-fetching anticipates access patterns, and you can choose to load full data or only metadata,” explained Chen.
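A minimal sketch of the tiering policy Chen describes — small, latency-sensitive files served from a local cache, large or byte-range reads passed straight through to S3. The size threshold and the `fetch_from_s3` callback are assumptions for illustration; AWS has not published the actual heuristics:

```python
CACHE_THRESHOLD = 1 << 20  # assumed cutoff: files under 1 MiB are cached

cache = {}  # object key -> bytes, stands in for the local cache

def read(key, fetch_from_s3, size, byte_range=None):
    """Serve small whole-file reads through the cache; send large or
    ranged reads directly to S3 to preserve throughput."""
    if byte_range is not None or size >= CACHE_THRESHOLD:
        return fetch_from_s3(key, byte_range)   # direct path, bypasses cache
    if key not in cache:
        cache[key] = fetch_from_s3(key, None)   # populate cache on first read
    return cache[key]

# Fake S3 backend for the demo.
objects = {"small.cfg": b"mode=fast", "big.bin": b"x" * (2 << 20)}

def fetch(key, byte_range):
    data = objects[key]
    return data if byte_range is None else data[byte_range[0]:byte_range[1]]

print(read("small.cfg", fetch, len(objects["small.cfg"])))            # cached
print(len(read("big.bin", fetch, len(objects["big.bin"]), (0, 1024))))  # direct
```

After these two reads, the config file sits in the cache while the large binary never touches it — the behavior the article attributes to the service's cache tier.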
What This Means
For organizations running production applications, training machine learning models, or building agentic AI systems, S3 Files turns S3 into a central data hub. “You no longer need separate storage for compute and archive; S3 now does it all,” said Ross.

Cost savings come from eliminating duplicate copies. Performance improves for workloads like real-time analytics. “This could accelerate cloud migration for legacy file-based apps,” added Chen.
Background synchronization and byte-range reads reduce data movement, lowering egress costs. The service is available today for all general-purpose S3 buckets in selected regions.
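To make the data-movement claim concrete, here is some back-of-the-envelope arithmetic with made-up object sizes: a byte-range read fetches only the slice an application needs rather than the whole object.

```python
# Hypothetical numbers: an application needs a 1 MiB slice of a 10 GiB
# object. A byte-range read transfers the slice; a whole-object read
# transfers everything.
object_size = 10 * 2**30   # 10 GiB
range_size = 1 * 2**20     # 1 MiB slice actually needed

savings = 1 - range_size / object_size
print(f"data moved: {range_size / 2**20:.0f} MiB "
      f"instead of {object_size / 2**30:.0f} GiB "
      f"({savings:.4%} less transfer)")
```

The exact savings obviously depend on the workload's access pattern; the example only shows why ranged access matters for large objects.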
Expert Reaction
Industry experts see this as a game-changer. “Object storage was always the king of durability, but file systems ruled interactivity,” noted Ross. “S3 Files unifies both without sacrificing either.”
“We’re already testing it for genomic data processing,” said Dr. Angela Torres, CTO of BioCloud Inc. “The ability to share a single S3 bucket as a file system across hundreds of compute nodes is revolutionary.”
For implementation details, refer to the Key Features and How It Works section above. Pricing follows standard S3 storage rates plus a per-GB-hour charge for cached data.
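The article gives the pricing structure but no rates, so the sketch below uses made-up placeholder numbers purely to show how the two components (storage per GB-month, cache per GB-hour) would combine into a monthly bill:

```python
# Illustrative cost model only -- both rates are hypothetical placeholders,
# not published S3 Files pricing.
s3_storage_per_gb_month = 0.023   # assumed standard S3 storage rate
cache_per_gb_hour = 0.0002        # assumed cache surcharge

stored_gb = 500                   # total data kept in the bucket
cached_gb = 50                    # working set held in the cache
hours_cached = 8 * 30             # cached 8 h/day over a 30-day month

monthly = (stored_gb * s3_storage_per_gb_month
           + cached_gb * hours_cached * cache_per_gb_hour)
print(f"estimated monthly cost: ${monthly:.2f}")
```

Under these assumptions the cache surcharge is a small fraction of the storage cost, which is consistent with the article's framing that savings come mainly from eliminating duplicate copies.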
Availability
S3 Files is now available in US East (N. Virginia), US West (Oregon), and Europe (Ireland). AWS plans expansion based on customer demand.