Mistral 7B Instruct v0.2
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.
An improved version of [Mistral 7B Instruct](/models/mistralai/mistral-7b-instruct-v0.1), with the following changes:
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
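Instruct-tuned Mistral models expect prompts in the `[INST] ... [/INST]` chat format. Below is a minimal sketch of that template in plain Python; `build_prompt` is a hypothetical helper, not part of any official SDK, and assumes the commonly documented v0.2 format (user turns wrapped in `[INST] ... [/INST]`, assistant turns closed with `</s>`).

```python
def build_prompt(messages):
    """Render (role, text) pairs into the Mistral Instruct prompt format.

    Assumes the standard v0.2 chat template: the sequence starts with the
    BOS token <s>, each user turn is wrapped in [INST] ... [/INST], and
    each assistant turn is appended verbatim and terminated with </s>.
    """
    prompt = "<s>"
    for role, text in messages:
        if role == "user":
            prompt += f"[INST] {text} [/INST]"
        else:  # assistant turn
            prompt += f"{text}</s>"
    return prompt

# Single-turn example:
# build_prompt([("user", "Hello")]) -> "<s>[INST] Hello [/INST]"
```

In practice, tokenizer-side helpers (such as a chat-template method in your client library) handle this formatting for you; the sketch only makes the wire format explicit.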
by mistralai | 33K context | $0.20/M input tokens | $0.20/M output tokens
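The listed rates are per million tokens, billed separately for input and output. A small arithmetic sketch (the helper name and defaults are illustrative, using the $0.20/M figures above):

```python
def estimate_cost(input_tokens, output_tokens,
                  input_price_per_m=0.20, output_price_per_m=0.20):
    """Estimate request cost in USD at per-million-token rates."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 30K-token prompt with a 1K-token reply costs about $0.0062
```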
Endpoints
Available providers for this model, with details on pricing, context limits, and real-time health metrics.