Thursday, January 15, 2026

Amazon S3 Storage Lens provides performance metrics, support for billions of prefixes, and export to S3 Tables


Today, we're announcing three new capabilities for Amazon S3 Storage Lens that give you deeper insights into your storage performance and usage patterns. With the addition of performance metrics, support for analyzing billions of prefixes, and direct export to Amazon S3 Tables, you have the tools you need to optimize application performance, reduce costs, and make data-driven decisions about your Amazon S3 storage strategy.

New performance metric categories

S3 Storage Lens now includes eight new performance metric categories that help you identify and resolve performance bottlenecks across your organization. These are available at the organization, account, bucket, and prefix levels. For example, the service helps you identify small objects in a bucket or prefix that can slow down application performance. This can be mitigated by batching small objects or by using the Amazon S3 Express One Zone storage class for higher-performance small object workloads.

To access the new performance metrics, you need to enable performance metrics in the S3 Storage Lens Advanced tier when creating a new Storage Lens dashboard or editing an existing configuration.
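Outside the console, Storage Lens configurations can also be managed through the S3 Control API (PutStorageLensConfiguration). The sketch below builds a minimal Advanced-tier configuration dictionary with activity and prefix-level metrics enabled; the exact API field for the new performance metrics toggle isn't named in this post, so treat the shape as illustrative rather than a complete configuration.

```python
# Minimal sketch of an Advanced-tier Storage Lens configuration.
# Field names follow the existing S3 Control API shape; the new
# performance-metrics toggle is omitted because its API field name
# isn't shown in this post.
config = {
    "Id": "performance-metrics-dashboard",
    "IsEnabled": True,
    "AccountLevel": {
        "ActivityMetrics": {"IsEnabled": True},
        "AdvancedCostOptimizationMetrics": {"IsEnabled": True},
        "AdvancedDataProtectionMetrics": {"IsEnabled": True},
        "DetailedStatusCodesMetrics": {"IsEnabled": True},
        "BucketLevel": {
            "ActivityMetrics": {"IsEnabled": True},
            "PrefixLevel": {"StorageMetrics": {"IsEnabled": True}},
        },
    },
}

# You would then apply it with the S3 Control client, for example:
# boto3.client("s3control").put_storage_lens_configuration(
#     ConfigId="performance-metrics-dashboard",
#     AccountId="111122223333",  # hypothetical account ID
#     StorageLensConfiguration=config,
# )
```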

| Metric category | Details | Use case | Mitigation |
| --- | --- | --- | --- |
| Read request size | Distribution of read request sizes (GET) by day | Identify datasets with small read request patterns that slow down performance | Small requests: batch small objects or use Amazon S3 Express One Zone for high-performance small object workloads |
| Write request size | Distribution of write request sizes (PUT, POST, COPY, and UploadPart) by day | Identify datasets with small write request patterns that slow down performance | Large requests: parallelize requests, use multipart upload (MPU), or use the AWS CRT |
| Storage size | Distribution of object sizes | Identify datasets with small objects that slow down performance | Small object sizes: consider bundling small objects |
| Concurrent PUT 503 errors | Number of 503s due to concurrent PUT operations on the same object | Identify prefixes with concurrent PUT throttling that slows down performance | For a single writer, adjust retry behavior or use Amazon S3 Express One Zone. For multiple writers, use a consensus mechanism or use Amazon S3 Express One Zone |
| Cross-Region data transfer | Bytes transferred and requests sent across Regions, per Region | Identify potential performance and cost degradation due to cross-Region data access | Co-locate compute with data in the same AWS Region |
| Unique objects accessed | Number or percentage of unique objects accessed per day | Identify datasets where a small subset of objects is frequently accessed; these can be moved to a higher performance storage tier | Consider moving active data to Amazon S3 Express One Zone or other caching solutions |
| FirstByteLatency (existing Amazon CloudWatch metric) | Daily average of the first byte latency metric | The daily average per-request time from the complete request being received to when the response starts to be returned | |
| TotalRequestLatency (existing Amazon CloudWatch metric) | Daily average of Total Request Latency | The daily average elapsed per-request time from the first byte received to the last byte sent | |
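The small-object mitigation in the table (batching before upload) can be sketched in plain Python. This is a minimal illustration, not an official tool: it bundles many small payloads into a single in-memory tar archive so they can be written with one larger PUT instead of thousands of small ones.

```python
import io
import tarfile

def bundle_small_objects(objects: dict[str, bytes]) -> bytes:
    """Pack many small payloads into one tar archive held in memory."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for key, payload in objects.items():
            info = tarfile.TarInfo(name=key)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

# 1,000 tiny payloads become a single object to upload, e.g.:
# s3.put_object(Bucket="my-bucket", Key="batch-0001.tar", Body=bundle)
bundle = bundle_small_objects(
    {f"logs/part-{i}.txt": b"x" * 100 for i in range(1000)}
)
```

Readers that need individual records back can extract them from the archive, or use a range-indexable format if random access matters.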

How it works

On the Amazon S3 console, I choose Create Storage Lens dashboard to create a new dashboard. You can also edit an existing dashboard configuration. I then configure general settings such as providing a Dashboard name, Status, and the optional Tags. Then, I choose Next.



Next, I define the scope of the dashboard by selecting Include all Regions and Include all buckets, or by specifying the Regions and buckets to be included.



I opt in to the Advanced tier in the Storage Lens dashboard configuration, select Performance metrics, then choose Next.



Next, I select Prefix aggregation as an additional metrics aggregation, then leave the rest of the settings as default before I choose Next.



I select the Default metrics report, then General purpose bucket as the bucket type, and then select the Amazon S3 bucket in my AWS account as the Destination bucket. I leave the rest of the settings as default, then choose Next.



I review all the information before I choose Submit to finalize the process.



After it’s enabled, I receive daily performance metrics directly in the Storage Lens console dashboard. You can also choose to export reports in CSV or Parquet format to any bucket in your account, or publish them to Amazon CloudWatch. The performance metrics are aggregated and published daily and are available at multiple levels: organization, account, bucket, and prefix. In this dropdown menu, I choose % concurrent PUT 503 error for the Metric, Last 30 days for the Date range, and 10 for the Top N buckets.



The Concurrent PUT 503 error count metric tracks the number of 503 errors generated by simultaneous PUT operations to the same object. Throttling errors can degrade application performance. For a single writer, adjust retry behavior or use a higher performance storage class such as Amazon S3 Express One Zone to mitigate concurrent PUT 503 errors. For a multiple-writers scenario, use a consensus mechanism to avoid concurrent PUT 503 errors, or use a higher performance storage class such as Amazon S3 Express One Zone.
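Adjusting retry behavior for 503 responses typically means exponential backoff with jitter, which the AWS SDKs already implement through their configurable retry modes. As a plain-Python illustration of the idea, using a hypothetical helper rather than SDK code:

```python
import random
import time

def put_with_backoff(put_fn, max_attempts=5, base_delay=0.1):
    """Retry a PUT-like callable on 503 (SlowDown) with exponential
    backoff and full jitter; returns the last status observed."""
    status = None
    for attempt in range(max_attempts):
        status = put_fn()
        if status != 503:
            return status
        # Full jitter: sleep a random amount up to the exponential cap.
        time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
    return status

# Simulated writer that is throttled twice before succeeding.
responses = iter([503, 503, 200])
result = put_with_backoff(lambda: next(responses), base_delay=0.001)
```

In practice you would configure this in the SDK (for example through its retry settings) rather than hand-rolling the loop; the sketch only shows why backoff spreads out concurrent writers.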

Full analytics for all prefixes in your S3 buckets

S3 Storage Lens now supports analytics for all prefixes in your S3 buckets through a new Expanded prefixes metrics report. This capability removes previous limitations that restricted analysis to prefixes meeting a 1% size threshold and a maximum depth of 10 levels. You can now monitor up to billions of prefixes per bucket for analysis at the most granular prefix level, regardless of size or depth.

The Expanded prefixes metrics report includes all existing S3 Storage Lens metric categories: storage usage, activity metrics (requests and bytes transferred), data protection metrics, and detailed status code metrics.

How to get started

I follow the same steps outlined in the How it works section to create or update the Storage Lens dashboard. In Step 4 on the console, where you select export options, you can select the new Expanded prefixes metrics report. Thereafter, I can export the expanded prefixes metrics report in CSV or Parquet format to any general purpose bucket in my account for efficient querying of my Storage Lens data.
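Once the report lands in the destination bucket, it is ordinary CSV or Parquet. As a sketch, assuming the export carries metric_name and metric_value columns alongside bucket and prefix identifiers (check the documented export schema for the exact column names), filtering for one metric looks like:

```python
import csv
import io

# Hypothetical sample of an exported report; real exports carry
# more columns and the exact names may differ.
sample = """bucket_name,record_type,record_value,metric_name,metric_value
my-bucket,PREFIX,logs/2026/,incomplete_mpu_bytes,1048576
my-bucket,PREFIX,data/raw/,incomplete_mpu_bytes,0
"""

def rows_with_metric(text: str, metric: str):
    """Return rows where the named metric has a nonzero value."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        row for row in reader
        if row["metric_name"] == metric and int(row["metric_value"]) > 0
    ]

flagged = rows_with_metric(sample, "incomplete_mpu_bytes")
```

For large exports, Parquet plus an engine such as Athena is the more practical route; the CSV walk-through just shows the shape of the data.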



Good to know

This enhancement addresses scenarios where organizations need granular visibility across their entire prefix structure. For example, you can identify prefixes with incomplete multipart uploads to reduce costs, monitor compliance across your entire prefix structure for encryption and replication requirements, and detect performance issues at the most granular level.

Export S3 Storage Lens metrics to S3 Tables

S3 Storage Lens metrics can now be automatically exported to S3 Tables, a fully managed feature on AWS with built-in Apache Iceberg support. This integration provides daily automatic delivery of metrics to AWS managed S3 Tables for fast querying without requiring additional processing infrastructure.

How to get started

I start by following the process outlined in Step 5 on the console, where I choose the export destination. This time, I choose Expanded prefixes metrics report. In addition to General purpose bucket, I choose Table bucket.

The new Storage Lens metrics are exported to new tables in an AWS managed bucket, aws-s3.



I select the expanded_prefixes_activity_metrics table to view API usage metrics for expanded prefix reports.



I can preview the table on the Amazon S3 console or use Amazon Athena to query the table.
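A query against the exported table could, for example, rank prefixes by request volume. The column names below are assumptions for illustration; check the table schema in the S3 console before running anything. The helper only builds the SQL string you would hand to Athena:

```python
def top_prefixes_query(table: str, metric: str, n: int = 10) -> str:
    """Build an Athena SQL string ranking prefixes by a metric.
    Column names (prefix, metric_name, metric_value) are assumed."""
    return (
        f"SELECT prefix, SUM(metric_value) AS total "
        f"FROM {table} "
        f"WHERE metric_name = '{metric}' "
        f"GROUP BY prefix ORDER BY total DESC LIMIT {n}"
    )

sql = top_prefixes_query("expanded_prefixes_activity_metrics", "all_requests")
# Hand `sql` to Athena, e.g.:
# boto3.client("athena").start_query_execution(QueryString=sql, ...)
```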



Good to know

S3 Tables integration with S3 Storage Lens simplifies metric analysis using familiar SQL tools and AWS analytics services such as Amazon Athena, Amazon QuickSight, Amazon EMR, and Amazon Redshift, without requiring a data pipeline. The metrics are automatically organized for optimal querying, with custom retention and encryption options to suit your needs.

This integration enables cross-account and cross-Region analysis, custom dashboard creation, and data correlation with other AWS services. For example, you can combine Storage Lens metrics with S3 Metadata to analyze prefix-level activity patterns and identify objects in prefixes with cold data that are eligible for transition to lower-cost storage tiers.

For your agentic AI workflows, you can use natural language to query S3 Storage Lens metrics in S3 Tables with the S3 Tables MCP Server. Agents can ask questions such as ‘which buckets grew the most last month?’ or ‘show me storage costs by storage class’ and get instant insights from your observability data.

Now available

All three enhancements are available in all AWS Regions where S3 Storage Lens is currently offered (except the China Regions and AWS GovCloud (US)).

These features are included in the Amazon S3 Storage Lens Advanced tier at no additional charge beyond standard Advanced tier pricing. For the S3 Tables export, you pay only for S3 Tables storage, maintenance, and queries. There is no additional charge for the export functionality itself.

To learn more about Amazon S3 Storage Lens performance metrics, support for billions of prefixes, and export to S3 Tables, refer to the Amazon S3 User Guide. For pricing details, visit the Amazon S3 pricing page.

Veliswa Boya.
