# Mistral Large
*Developed by Mistral AI*
## Key Capabilities
- 128K context window with strong long-document understanding
- Native function calling and tool use
- Fluent in English, French, Spanish, German, Italian, and more
- Strong reasoning and instruction following
- JSON mode for structured output
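To make the function-calling and JSON-mode capabilities concrete, here is a minimal sketch of a chat-completions request payload, assuming an OpenAI-compatible serving endpoint (e.g. vLLM). The model name `mistral-large` and the `lookup_contract` tool are illustrative assumptions, not part of any official schema.

```python
# Sketch of a request enabling JSON mode and tool use against an
# OpenAI-compatible endpoint. Model name and tool are hypothetical.
import json

def build_request(prompt: str) -> dict:
    """Build a chat-completions payload with one example tool and JSON output."""
    return {
        "model": "mistral-large",  # assumed served-model name
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},  # JSON mode
        "tools": [{
            "type": "function",
            "function": {
                "name": "lookup_contract",  # hypothetical tool
                "description": "Fetch a contract by its ID.",
                "parameters": {
                    "type": "object",
                    "properties": {"contract_id": {"type": "string"}},
                    "required": ["contract_id"],
                },
            },
        }],
    }

payload = build_request("Summarize contract C-42 as JSON.")
print(json.dumps(payload, indent=2))
```

The same payload shape works for multilingual prompts; the endpoint decides whether to answer directly or emit a tool call.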
## VRAM Requirements by Quantization
Choose the right GPU based on your performance and quality needs.
| Quantization | VRAM Required |
|---|---|
| FP16 | 246 GB |
| Q8 | 123 GB |
| Q4 | 70 GB |
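The figures above follow from a back-of-envelope rule: weight memory is roughly parameter count times bytes per weight, plus some runtime overhead for KV cache and activations. A quick sketch (the ~14% overhead factor on the Q4 row is an assumption to match the table, not a published number):

```python
# Rough VRAM estimate for a 123B-parameter model:
# weights ≈ params × (bits per weight / 8), scaled by an assumed overhead.
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 0.0) -> int:
    # 1B parameters at 1 byte/weight is ~1 GB of weights
    weights_gb = params_billion * bits_per_weight / 8
    return round(weights_gb * (1 + overhead))

print(estimate_vram_gb(123, 16))                # FP16 -> 246
print(estimate_vram_gb(123, 8))                 # Q8   -> 123
print(estimate_vram_gb(123, 4, overhead=0.14))  # Q4   -> 70
```

In practice, leave additional headroom for long-context workloads, since KV-cache size grows with context length and batch size.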
## Use Cases
Mistral Large (123B parameters) can be deployed for enterprise AI applications including document processing, code generation, data analysis, and conversational AI.

License: Mistral Research License (non-commercial); a commercial license is available.
## Run Mistral Large with Petronella
PTG deploys Mistral Large for European-facing businesses needing strong multilingual AI. Ideal for law firms, financial services, and international organizations requiring on-premises deployment.
## Recommended Hardware
| Quantization | Recommended GPU |
|---|---|
| Q8 | 2x RTX PRO 6000 Blackwell (192 GB total) |
| Q4 | RTX PRO 6000 Blackwell (96 GB) or DGX Spark (128 GB) |
## Deploy Mistral Large On-Premises
Our team builds GPU-accelerated systems configured and optimized for Mistral Large. Private, secure, and fully under your control.