Today, we’re announcing the general availability of an additional 18 fully managed open weight models in Amazon Bedrock from Google, MiniMax AI, Mistral AI, Moonshot AI, NVIDIA, OpenAI, and Qwen, including the new Mistral Large 3 and Ministral 3 3B, 8B, and 14B models.
With this launch, Amazon Bedrock now provides nearly 100 serverless models, offering a broad and deep range of models from leading AI companies, so customers can choose the precise capabilities that best serve their unique needs. By closely monitoring both customer needs and technological advancements, we regularly expand our curated selection of models to include promising new models alongside established industry favorites.
This ongoing expansion of high-performing and differentiated model offerings helps customers stay at the forefront of AI innovation. You can access these models on Amazon Bedrock through the unified API and evaluate, switch, and adopt new models without rewriting applications or changing infrastructure.
New Mistral AI models
These four Mistral AI models are now available first on Amazon Bedrock, each optimized for different performance and cost requirements:
- Mistral Large 3 – This open weight model is optimized for long-context, multimodal, and instruction reliability. It excels in long document understanding, agentic and tool use workflows, enterprise knowledge work, coding assistance, advanced workloads such as math and coding tasks, multilingual analysis and processing, and multimodal reasoning with vision.
- Ministral 3 3B – The smallest in the Ministral 3 family is edge-optimized for single GPU deployment with strong language and vision capabilities. It shows strong performance in image captioning, text classification, real-time translation, data extraction, short content generation, and lightweight real-time applications on edge or low-resource devices.
- Ministral 3 8B – The best-in-class Ministral 3 model for text and vision is edge-optimized for single GPU deployment with high performance and minimal footprint. This model is ideal for chat interfaces in constrained environments, image and document description and understanding, specialized agentic use cases, and balanced performance for local or embedded systems.
- Ministral 3 14B – The most capable Ministral 3 model delivers state-of-the-art text and vision performance optimized for single GPU deployment. You can use it for advanced local agentic use cases and private AI deployments where advanced capabilities meet practical hardware constraints.
Additional open weight model options
You can use these open weight models for a wide range of use cases across industries:
| Model provider | Model name | Description | Use cases |
| --- | --- | --- | --- |
| Google | Gemma 3 4B | Efficient text and image model that runs locally on laptops. Multilingual support for on-device AI applications. | On-device AI for mobile and edge applications, privacy-sensitive local inference, multilingual chat assistants, image captioning and description, and lightweight content generation. |
| | Gemma 3 12B | Balanced text and image model for workstations. Multi-language understanding with local deployment for privacy-sensitive applications. | Workstation-based AI applications; local deployment for enterprises; multilingual document processing, image analysis, and Q&A; and privacy-compliant AI assistants. |
| | Gemma 3 27B | Powerful text and image model for enterprise applications. Multi-language support with local deployment for privacy and control. | Enterprise local deployment, high-performance multimodal applications, advanced image understanding, multilingual customer service, and data-sensitive AI workflows. |
| Moonshot AI | Kimi K2 Thinking | Deep reasoning model that thinks while using tools. Handles research, coding, and complex workflows requiring hundreds of sequential actions. | Complex coding projects requiring planning, multistep workflows, data analysis and computation, and long-form content creation with research. |
| MiniMax AI | MiniMax M2 | Built for coding agents and automation. Excels at multi-file edits, terminal operations, and executing long tool-calling chains efficiently. | Coding agents and integrated development environment (IDE) integration, multi-file code editing, terminal automation and DevOps, long-chain tool orchestration, and agentic software development. |
| Mistral AI | Magistral Small 1.2 | Excels at math, coding, multilingual tasks, and multimodal reasoning with vision capabilities for efficient local deployment. | Math and coding tasks, multilingual analysis and processing, and multimodal reasoning with vision. |
| | Voxtral Mini 1.0 | Advanced audio understanding model with transcription, multilingual support, Q&A, and summarization. | Voice-controlled applications, fast speech-to-text conversion, and offline voice assistants. |
| | Voxtral Small 1.0 | Features state-of-the-art audio input with best-in-class text performance; excels at speech transcription, translation, and understanding. | Enterprise speech transcription, multilingual customer service, and audio content summarization. |
| NVIDIA | NVIDIA Nemotron Nano 2 9B | High-efficiency LLM with a hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks. | Reasoning, tool calling, math, coding, and instruction following. |
| | NVIDIA Nemotron Nano 2 VL 12B | Advanced multimodal reasoning model for video understanding and document intelligence, powering Retrieval-Augmented Generation (RAG) and multimodal agentic applications. | Multi-image and video understanding, visual Q&A, and summarization. |
| OpenAI | gpt-oss-safeguard-20b | Content safety model that applies your custom policies. Classifies harmful content with explanations for trust and safety workflows. | Content moderation and safety classification, custom policy enforcement, user-generated content filtering, trust and safety workflows, and automated content triage. |
| | gpt-oss-safeguard-120b | Larger content safety model for complex moderation. Applies custom policies with detailed reasoning for enterprise trust and safety teams. | Enterprise content moderation at scale, complex policy interpretation, multilayered safety classification, regulatory compliance checking, and high-stakes content review. |
| Qwen | Qwen3-Next-80B-A3B | Fast inference with hybrid attention for ultra-long documents. Optimized for RAG pipelines, tool use, and agentic workflows with fast responses. | RAG pipelines with long documents, agentic workflows with tool calling, code generation and software development, multi-turn conversations with extended context, and multilingual content generation. |
| | Qwen3-VL-235B-A22B | Understands images and video. Extracts text from documents, converts screenshots to working code, and automates clicking through interfaces. | Extracting text from images and PDFs, converting UI designs or screenshots to working code, automating clicks and navigation in applications, video analysis and understanding, and reading charts and diagrams. |
When implementing publicly available models, give careful consideration to data privacy requirements in your production environments, check for bias in output, and monitor your results for data protection, responsible AI, and model evaluation.
You can access the enterprise-grade security features of Amazon Bedrock and implement safeguards customized to your application requirements and responsible AI policies with Amazon Bedrock Guardrails. You can also evaluate and compare models to identify the optimal models for your use cases by using Amazon Bedrock model evaluation tools.
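As a minimal sketch of how a guardrail attaches to a model call, the following Python snippet applies a pre-existing guardrail to a Converse API request with the AWS SDK for Python (Boto3); the model ID and guardrail identifier are placeholders you would replace with values from your own account.

```python
# Minimal sketch: applying an Amazon Bedrock guardrail to a Converse API call.
# The model ID and guardrail identifier below are placeholders, not real values.
import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="<model-id-from-the-bedrock-console>",     # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Summarize our refund policy."}]}],
    guardrailConfig={
        "guardrailIdentifier": "<your-guardrail-id>",  # placeholder guardrail ID
        "guardrailVersion": "1",
        "trace": "enabled",                            # include guardrail trace details
    },
)

print(response["output"]["message"]["content"][0]["text"])
print(response["stopReason"])  # "guardrail_intervened" when the guardrail blocks content
```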
To get started, you can quickly test these models with a few prompts in the playground of the Amazon Bedrock console or use any AWS SDKs to include access to the Bedrock InvokeModel and Converse APIs. You can also use these models with any agentic framework that supports Amazon Bedrock and deploy the agents using Amazon Bedrock AgentCore and Strands Agents. To learn more, visit Code examples for Amazon Bedrock using AWS SDKs in the Amazon Bedrock User Guide.
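As an illustration of the unified API described earlier, here is a minimal sketch that sends the same prompt to two of the new models through the AWS SDK for Python (Boto3) Converse API by changing only the model ID; the model identifiers are placeholders, so look up the exact IDs in the Amazon Bedrock console or with the ListFoundationModels API before running it.

```python
# Minimal sketch: calling two of the newly added models through the Converse API.
# The model IDs below are placeholders; use the exact identifiers listed in the
# Amazon Bedrock console or returned by ListFoundationModels.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

model_ids = [
    "<mistral-large-3-model-id>",     # placeholder for Mistral Large 3
    "<qwen3-next-80b-a3b-model-id>",  # placeholder for Qwen3-Next-80B-A3B
]

prompt = "Compare the trade-offs between the Ministral 3 3B and 14B models."

for model_id in model_ids:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    text = response["output"]["message"]["content"][0]["text"]
    print(f"--- {model_id} ---\n{text}\n")
```

Because only the modelId changes between calls, you can evaluate and swap models without touching the rest of the application code.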
Now available
Check the full Region list for availability and future updates of new models, or search for your model name in the AWS CloudFormation resources tab of AWS Capabilities by Region. To learn more, check out the Amazon Bedrock product page and the Amazon Bedrock pricing page.
Give these models a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
— Channy
Updated on 4 December – Amazon Bedrock now supports the Responses API on new OpenAI API-compatible service endpoints for the GPT OSS 20B and 120B models. To learn more, visit Generate responses using OpenAI APIs.
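As a hypothetical sketch only, the snippet below shows how such a call might look from the OpenAI Python SDK; the base URL format, credential handling, and model ID are assumptions, so confirm the details in Generate responses using OpenAI APIs before relying on them.

```python
# Hypothetical sketch, not a confirmed configuration: the base_url format and
# model ID are assumptions; check "Generate responses using OpenAI APIs" for
# the documented endpoint and identifiers.
from openai import OpenAI

client = OpenAI(
    base_url="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1",  # assumed endpoint format
    api_key="<your-amazon-bedrock-api-key>",                               # placeholder credential
)

response = client.responses.create(
    model="openai.gpt-oss-120b-1:0",  # assumed Bedrock model ID for GPT OSS 120B
    input="In one paragraph, explain what the Responses API adds over chat completions.",
)
print(response.output_text)
```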

