Mixtral 8x22B Instruct
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It activates 39B of its 141B total parameters per token (see the routing sketch below), offering strong cost efficiency for its size. Its strengths include:
- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish
See benchmarks on the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/).
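The #moe tag and the 39B-active/141B-total split come from sparse expert routing: for each token, a router selects the top 2 of 8 expert MLPs, so only a fraction of the total weights run. Below is a minimal sketch of that mechanism in plain NumPy; the dimensions, ReLU experts, and random initialisation are toy assumptions for illustration, not Mixtral's real internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative only, not Mixtral's real sizes.
D_MODEL, D_FF, N_EXPERTS, TOP_K = 16, 64, 8, 2

# Each expert is a small two-layer MLP; only TOP_K of them run per token,
# which is why a sparse MoE activates a fraction of its total parameters.
experts = [
    (rng.standard_normal((D_MODEL, D_FF)) * 0.02,
     rng.standard_normal((D_FF, D_MODEL)) * 0.02)
    for _ in range(N_EXPERTS)
]
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-2 experts, weighted by softmax scores."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]                          # indices of the 2 best experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # renormalised gate weights
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)              # ReLU MLP expert (simplified)
    return out

token = rng.standard_normal(D_MODEL)
print(moe_layer(token).shape)  # (16,)
```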
#moe
by Mistralai | 66K context | $0.90/M input tokens | $0.90/M output tokens
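At the listed rate of $0.90 per million tokens for both input and output, per-request cost is simple arithmetic. A quick sketch (the token counts are made-up examples):

```python
# Back-of-the-envelope cost estimate at the listed $0.90 per million tokens
# (the same rate applies to input and output on this listing).
PRICE_PER_M = 0.90

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M

# e.g. a 50k-token prompt with a 2k-token reply:
print(f"${request_cost(50_000, 2_000):.4f}")  # $0.0468
```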
Endpoints
Available providers for this model, with details on pricing, context limits, and real-time health metrics.
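For reference, endpoints like these are commonly exposed through an OpenAI-compatible chat completions API. The sketch below assumes such a route; the base URL, model id, and header format are placeholder assumptions to check against the provider's actual documentation, not confirmed values.

```python
import requests

# Assumptions: an OpenAI-compatible /chat/completions route and the model id
# "mistralai/mixtral-8x22b-instruct" -- both are illustrative placeholders;
# check the provider's endpoint listing for the real values.
BASE_URL = "https://api.example.com/v1"  # placeholder base URL
API_KEY = "YOUR_API_KEY"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistralai/mixtral-8x22b-instruct",
        "messages": [
            {"role": "user", "content": "Summarise MoE routing in one sentence."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```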