Wednesday, January 21, 2026

Introducing multimodal retrieval for Amazon Bedrock Knowledge Bases


We're excited to announce the general availability of multimodal retrieval for Amazon Bedrock Knowledge Bases. This new capability provides native support for video and audio content, in addition to text and images. With it, you can build Retrieval Augmented Generation (RAG) applications that search and retrieve information across text, images, audio, and video, all within a fully managed service.

Modern enterprises store valuable knowledge in many formats. Product documentation includes diagrams and screenshots, training materials contain instructional videos, and customer insights are captured in recorded meetings. Until now, building artificial intelligence (AI) applications that could effectively search across these content types required complex custom infrastructure and significant engineering effort.

Previously, Bedrock Knowledge Bases used text-based embedding models for retrieval. While it supported text documents and images, images had to be processed using foundation models (FMs) or Bedrock Data Automation to generate text descriptions, a text-first approach that lost visual context and prevented visual search. Video and audio required custom preprocessing pipelines outside the service. Now, with multimodal embeddings, the retriever natively supports text, images, audio, and video within a single embedding model.

With multimodal retrieval in Bedrock Knowledge Bases, you can now ingest, index, and retrieve knowledge from text, images, video, and audio using a single, unified workflow. Content is encoded using multimodal embeddings that preserve visual and audio context, enabling your applications to find relevant information across media types. You can even search using an image to find visually similar content or locate specific scenes in videos.

In this post, we guide you through building multimodal RAG applications. You'll learn how multimodal knowledge bases work, how to choose the right processing strategy based on your content type, and how to configure and implement multimodal retrieval using both the console and code examples.

Understanding multimodal knowledge bases

Amazon Bedrock Knowledge Bases automates the entire RAG workflow: ingesting content from your data sources, parsing and chunking it into searchable segments, converting chunks to vector embeddings, and storing them in a vector database. During retrieval, user queries are embedded and matched against stored vectors to find semantically similar content, which augments the prompt sent to your foundation model.
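To make this end-to-end flow concrete, here is a minimal Python (boto3) sketch that calls the managed RetrieveAndGenerate API: it embeds the query, retrieves matching chunks, and generates a grounded answer in one call. The knowledge base ID, model ARN, Region, and query text are placeholders you would replace with your own values.

import boto3

# Placeholder identifiers: replace with your knowledge base ID and the model ARN you want to use
KB_ID = "YOUR_KB_ID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0"

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Embed the user query, retrieve matching chunks, and generate a grounded answer
response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "What phone covers are available in metallic finishes?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

print(response["output"]["text"])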

With multimodal retrieval, this workflow now handles images, video, and audio alongside text through two processing approaches. Amazon Nova Multimodal Embeddings encodes content natively into a unified vector space, enabling cross-modal retrieval where you can query with text and retrieve videos, or search using images to find visual content.

Alternatively, Bedrock Data Automation converts multimedia into rich text descriptions and transcripts before embedding, providing high-accuracy retrieval over spoken content. Your choice depends on whether visual context or speech precision matters most for your use case.

We explore each of these approaches in this post.

Amazon Nova Multimodal Embeddings

Amazon Nova Multimodal Embeddings is the first unified embedding model that encodes text, documents, images, video, and audio into a single shared vector space. Content is processed natively without text conversion. The model supports up to 8,192 tokens for text and 30 seconds for video and audio segments, handles over 200 languages, and offers four embedding dimensions (3,072 as the default, plus 1,024, 384, and 256) to balance accuracy and efficiency. Bedrock Knowledge Bases automatically segments video and audio into configurable chunks (5-30 seconds), with each segment embedded independently.

For video content, Nova embeddings capture visual elements (scenes, objects, motion, and actions) as well as audio characteristics like music, sounds, and ambient noise. For videos where spoken dialogue is essential to your use case, you can use Bedrock Data Automation to extract transcripts alongside visual descriptions. For standalone audio files, Nova processes acoustic features such as music, environmental sounds, and audio patterns. The cross-modal capability enables use cases such as describing a visual scene in text to retrieve matching videos, uploading a reference image to find similar products, or locating specific actions in footage, all without pre-existing text descriptions.

Best for: Product catalogs, visual search, manufacturing videos, sports footage, security camera footage, and scenarios where visual content drives the use case.

Amazon Bedrock Data Automation

Bedrock Data Automation takes a different approach by converting multimedia content into rich textual representations before embedding. For images, it generates detailed descriptions including objects, scenes, text within images, and spatial relationships. For video, it produces scene-by-scene summaries, identifies key visual elements, and extracts on-screen text. For audio and video with speech, Bedrock Data Automation provides accurate transcriptions with timestamps and speaker identification, along with segment summaries that capture the key points discussed.

Once converted to text, this content is chunked and embedded using text embedding models such as Amazon Titan Text Embeddings or Amazon Nova Multimodal Embeddings. This text-first approach enables highly accurate question answering over spoken content: when users ask about specific statements made in a meeting or topics discussed in a podcast, the system searches through precise transcripts rather than audio embeddings. This makes it particularly valuable for compliance scenarios where you need exact quotes and verbatim records for audit trails, meeting analysis, customer support call mining, and use cases where you need to retrieve and verify specific spoken information.

Best for: Meetings, webinars, interviews, podcasts, training videos, support calls, and scenarios requiring precise retrieval of specific statements or discussions.

Use case scenario: Visual product search for e-commerce

Multimodal knowledge bases can be used for applications ranging from enhanced customer experiences and employee training to maintenance operations and legal analysis. Traditional e-commerce search relies on text queries, requiring customers to articulate what they're looking for with the right keywords. This breaks down when they've seen a product elsewhere, have a photo of something they like, or want to find items similar to what appears in a video.

Now, customers can search your product catalog using text descriptions, upload a picture of an item they've photographed, or reference a scene from a video to find matching products. The system retrieves visually similar items by comparing the embedded representation of their query, whether text, image, or video, against the multimodal embeddings of your product inventory.

For this scenario, Amazon Nova Multimodal Embeddings is the best choice. Product discovery is fundamentally visual: customers care about colors, styles, shapes, and visual details. By encoding your product images and videos into the Nova unified vector space, the system matches on visual similarity without relying on text descriptions that might miss subtle visual characteristics. While a complete recommendation system would incorporate customer preferences, purchase history, and inventory availability, retrieval from a multimodal knowledge base provides the foundational capability: finding visually relevant products regardless of how customers choose to search.
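As a rough illustration of the image-to-embedding step behind this experience, the following sketch sends a reference product photo to Amazon Nova Multimodal Embeddings through the Bedrock Runtime InvokeModel API. The model ID and the request body fields here are assumptions for illustration only; check the Amazon Nova User Guide for the exact request and response schema before using them.

import base64
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model ID; verify the current identifier in the Amazon Nova User Guide
NOVA_MME_MODEL_ID = "amazon.nova-multimodal-embeddings-v1:0"

with open("reference-phone-cover.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Illustrative payload shape, not the documented schema
request_body = {
    "taskType": "SINGLE_EMBEDDING",
    "singleEmbeddingParams": {
        "embeddingDimension": 3072,
        "image": {"format": "jpeg", "source": {"bytes": image_b64}},
    },
}

response = bedrock_runtime.invoke_model(
    modelId=NOVA_MME_MODEL_ID,
    body=json.dumps(request_body),
)

# The exact location of the embedding vector in the result depends on the documented response schema
result = json.loads(response["body"].read())

The resulting vector can then be matched against the product embeddings stored in your knowledge base's vector store to surface visually similar items.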

Console walkthrough

In the following section, we walk through the high-level steps to set up and test a multimodal knowledge base for our e-commerce product search example. We create a knowledge base containing smartphone product images and videos, then demonstrate how customers can search using text descriptions, uploaded images, or video references. The GitHub repository provides a guided notebook that you can follow to deploy this example in your account.

Prerequisites

Before you get started, make sure that you have the following prerequisites:

Provide the knowledge base details and data source type

Start by opening the Amazon Bedrock console and creating a new knowledge base. Provide a descriptive name for your knowledge base and select your data source type, in this case Amazon S3, where your product images and videos are stored.

Configure the data source

Connect your S3 bucket containing product images and videos. For the parsing strategy, select the Amazon Bedrock default parser. Since we're using Nova Multimodal Embeddings, the images and videos are processed natively and embedded directly into the unified vector space, preserving their visual characteristics without conversion to text.
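For reference, the same data source can be attached programmatically. A minimal sketch with boto3 follows, assuming a placeholder knowledge base ID and bucket ARN; omitting the ingestion configuration should leave the default parser and chunking in place, but verify against the CreateDataSource API reference.

import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Placeholder knowledge base ID and bucket ARN
response = bedrock_agent.create_data_source(
    knowledgeBaseId="YOUR_KB_ID",
    name="product-catalog-s3",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::your-product-media-bucket"},
    },
    # No vectorIngestionConfiguration: keep the Amazon Bedrock default parser and chunking
)

print(response["dataSource"]["dataSourceId"])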

Configure data storage and processing

Select Amazon Nova Multimodal Embeddings as your embedding model. This unified embedding model encodes both your product images and customer queries into the same vector space, enabling cross-modal retrieval where text queries can retrieve images and image queries can find visually similar products. For this example, we use Amazon S3 Vectors as the vector store (you can optionally use other available vector stores), which provides cost-effective, durable storage optimized for large-scale vector datasets while maintaining sub-second query performance. You also need to configure the multimodal storage destination by specifying an S3 location. Knowledge Bases uses this location to store images and other media extracted from your data source. When users query the knowledge base, relevant media is retrieved from this storage.
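A knowledge base with these settings can also be created programmatically. The boto3 sketch below shows the overall shape of the call; every ARN is a placeholder, and the S3 Vectors storage type and supplemental (multimodal) storage field names are assumptions based on the pattern used for other vector stores, so check the current CreateKnowledgeBase API reference before relying on them.

import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# All ARNs are placeholders; the S3 Vectors and supplemental storage field names are assumptions
response = bedrock_agent.create_knowledge_base(
    name="product-catalog-multimodal-kb",
    roleArn="arn:aws:iam::123456789012:role/BedrockKBServiceRole",
    knowledgeBaseConfiguration={
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-multimodal-embeddings-v1:0",
            # Assumed shape for the multimodal (supplemental) media storage destination
            "supplementalDataStorageConfiguration": {
                "storageLocations": [
                    {"type": "S3", "s3Location": {"uri": "s3://your-bucket/multimodal-storage/"}}
                ]
            },
        },
    },
    storageConfiguration={
        "type": "S3_VECTORS",  # assumed enum value for the S3 Vectors store
        "s3VectorsConfiguration": {
            "indexArn": "arn:aws:s3vectors:us-east-1:123456789012:bucket/your-vector-bucket/index/your-index"
        },
    },
)

print(response["knowledgeBase"]["knowledgeBaseId"])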

Review and create

Review your configuration settings, including the knowledge base details, data source configuration, embedding model selection (we're using Amazon Nova Multimodal Embeddings v1 with 3,072 vector dimensions; higher dimensions provide richer representations, and you can use lower dimensions such as 1,024, 384, or 256 to optimize for storage and cost), and vector store setup (Amazon S3 Vectors). Once everything looks correct, create your knowledge base.

Create an ingestion job

Once the knowledge base is created, initiate the sync process to ingest your product catalog. The knowledge base processes each image and video, generates embeddings, and stores them in the managed vector database. Monitor the sync status to confirm the documents are successfully indexed.
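If you prefer to trigger and monitor the sync programmatically, a minimal boto3 sketch looks like the following; the knowledge base and data source IDs are placeholders from the earlier steps.

import time
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Placeholder identifiers from your own knowledge base
KB_ID = "YOUR_KB_ID"
DATA_SOURCE_ID = "YOUR_DATA_SOURCE_ID"

# Start the sync (ingestion) job for the S3 data source
job = bedrock_agent.start_ingestion_job(
    knowledgeBaseId=KB_ID,
    dataSourceId=DATA_SOURCE_ID,
)["ingestionJob"]

# Poll until ingestion completes
while job["status"] not in ("COMPLETE", "FAILED"):
    time.sleep(15)
    job = bedrock_agent.get_ingestion_job(
        knowledgeBaseId=KB_ID,
        dataSourceId=DATA_SOURCE_ID,
        ingestionJobId=job["ingestionJobId"],
    )["ingestionJob"]

print("Ingestion status:", job["status"])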

Test the knowledge base using text as input for your prompt

With your knowledge base ready, test it using a text query in the console. Search with product descriptions like "A metallic phone cover" (or something equivalent that is relevant for your product media) to verify that text-based retrieval works correctly across your catalog.
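The same text query can be issued programmatically with the Retrieve API. This minimal sketch reuses the placeholder knowledge base ID from earlier.

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Placeholder knowledge base ID
KB_ID = "YOUR_KB_ID"

# Embed the text query and return the top matching chunks (text, image, or video segments)
response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId=KB_ID,
    retrievalQuery={"text": "A metallic phone cover"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

for result in response["retrievalResults"]:
    print(result.get("score"), result.get("location"))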

Test the knowledge base using a reference image and retrieve different modalities

Now for the powerful part: visual search. Upload a reference image of a product you want to find. For example, imagine you saw a phone cover on another website and want to find similar items in your catalog. Simply upload the image without any additional text prompt.

The multimodal knowledge base extracts visual features from your uploaded image and retrieves visually similar products from your catalog. As you can see in the results, the system returns phone covers with similar design patterns, colors, or visual characteristics. Notice the metadata associated with each chunk in the Source details panel. The x-amz-bedrock-kb-chunk-start-time-in-millis and x-amz-bedrock-kb-chunk-end-time-in-millis fields indicate the exact temporal location of the segment within the source video. When building applications programmatically, you can use these timestamps to extract and display the exact video segment that matched the query, enabling features like "jump to relevant moment" or clip generation directly from your source videos. This cross-modal capability transforms the shopping experience: customers no longer need to describe what they're looking for with words; they can show you.
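For example, a small helper like the following (a sketch built on the Retrieve response shape and the metadata keys mentioned above) converts those millisecond offsets into clip boundaries you could pass to a video player or a clip-generation step.

# Sketch: derive "jump to relevant moment" clip boundaries from retrieved video chunks
def video_clip_boundaries(retrieval_results):
    clips = []
    for result in retrieval_results:
        metadata = result.get("metadata", {})
        start_ms = metadata.get("x-amz-bedrock-kb-chunk-start-time-in-millis")
        end_ms = metadata.get("x-amz-bedrock-kb-chunk-end-time-in-millis")
        if start_ms is None or end_ms is None:
            continue  # not a video segment
        clips.append({
            "source": result.get("location", {}),
            "start_seconds": float(start_ms) / 1000.0,
            "end_seconds": float(end_ms) / 1000.0,
        })
    return clips

# Usage with the Retrieve response from the previous example:
# clips = video_clip_boundaries(response["retrievalResults"])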

Test the knowledge base using a reference image and retrieve different modalities using Bedrock Data Automation

Now let's look at what the results would look like if you configured Bedrock Data Automation parsing during the data source setup. In the following screenshot, notice the transcript section in the Source details panel.

For each retrieved video chunk, Bedrock Data Automation automatically generates a detailed text description, in this example describing the smartphone's metallic rose gold finish, studio lighting, and visual characteristics. This transcript appears directly in the test window alongside the video, providing rich textual context. You get both visual similarity matching from the multimodal embeddings and detailed product descriptions that can answer specific questions about features, colors, materials, and other attributes visible in the video.

Clean up

To clean up your resources, complete the following steps, starting with deleting the knowledge base:

  1. On the Amazon Bedrock console, choose Knowledge Bases
  2. Select your knowledge base and note both the IAM service role name and the S3 Vectors index ARN
  3. Choose Delete and confirm

To delete the S3 Vectors index and bucket used as the vector store, use the following AWS Command Line Interface (AWS CLI) commands:

aws s3vectors delete-index --vector-bucket-name YOUR_VECTOR_BUCKET_NAME --index-name YOUR_INDEX_NAME --region YOUR_REGION
aws s3vectors delete-vector-bucket --vector-bucket-name YOUR_VECTOR_BUCKET_NAME --region YOUR_REGION

  1. On the IAM console, find the role you noted earlier
  2. Select and delete the role

To delete the sample dataset:

  1. On the Amazon S3 console, find your S3 bucket
  2. Select and delete the files you uploaded for this tutorial

Conclusion

Multimodal retrieval for Amazon Bedrock Knowledge Bases removes the complexity of building RAG applications that span text, images, video, and audio. With native support for video and audio content, you can now build comprehensive knowledge bases that unlock insights from all of your enterprise data, not just text documents.

The choice between Amazon Nova Multimodal Embeddings and Bedrock Data Automation gives you the flexibility to optimize for your specific content. The Nova unified vector space enables cross-modal retrieval for visually driven use cases, while the Bedrock Data Automation text-first approach delivers precise transcription-based retrieval for speech-heavy content. Both approaches integrate seamlessly into the same fully managed workflow, removing the need for custom preprocessing pipelines.

Availability

Region availability depends on the features selected for multimodal support; refer to the documentation for details.

Next steps

Get started with multimodal retrieval today:

  1. Explore the documentation: Review the Amazon Bedrock Knowledge Bases documentation and the Amazon Nova User Guide for additional technical details.
  2. Experiment with code examples: Check out the Amazon Bedrock samples repository for hands-on notebooks demonstrating multimodal retrieval.
  3. Learn more about Nova: Read the Amazon Nova Multimodal Embeddings announcement for deeper technical insights.

About the authors

Dani Mitchell is a Generative AI Specialist Solutions Architect at Amazon Web Services (AWS). He is focused on helping to accelerate enterprises around the world on their generative AI journeys with Amazon Bedrock and Bedrock AgentCore.

Pallavi Nargund is a Principal Solutions Architect at AWS. She is a generative AI lead for US Greenfield and leads the AWS for Legal Tech team. She is passionate about women in technology and is a core member of Women in AI/ML at Amazon. She speaks at internal and external conferences such as AWS re:Invent, AWS Summits, and webinars. Pallavi holds a Bachelor of Engineering from the University of Pune, India. She lives in Edison, New Jersey, with her husband, two girls, and her two pups.

Jean-Pierre Dodel is a Principal Product Manager for Amazon Bedrock, Amazon Kendra, and Amazon Quick Index. He brings 15 years of enterprise search and AI/ML experience to the team, with prior work at Autonomy, HP, and search startups before joining Amazon 8 years ago. JP is currently focusing on innovations for multimodal RAG, agentic retrieval, and structured RAG.
