A 12B-parameter model with a 128K-token context length, built by Mistral AI in collaboration with NVIDIA.
The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.
It supports function calling and is released under the Apache 2.0 license.
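Since the model supports function calling, a request typically includes a tool schema the model can choose to invoke. The sketch below builds such a request payload in the common OpenAI-compatible format; the model slug and the `get_weather` tool are illustrative assumptions, not values confirmed by this page.

```python
import json

def build_function_call_request(user_message: str) -> dict:
    """Build a chat request that offers the model one callable tool.

    The model slug and tool definition are hypothetical examples.
    """
    return {
        "model": "mistralai/mistral-nemo",  # assumed slug, check your provider
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool
                    "description": "Look up the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_function_call_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

On a model that supports function calling, the response would contain a `tool_calls` entry with the chosen function name and JSON arguments rather than plain text.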
by Mistral AI | 32K context | $0.01/M input tokens | $0.05/M output tokens
Endpoints
Available providers for this model, with details on pricing, context limits, and real-time health metrics.
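As a quick sanity check on the listed rates, a minimal sketch of how per-million-token prices translate into the cost of a single request. The prices come from the listing above; the token counts in the example are illustrative.

```python
# Listed rates: $0.01 per 1M input tokens, $0.05 per 1M output tokens.
INPUT_PRICE_PER_M = 0.01
OUTPUT_PRICE_PER_M = 0.05

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 100k-token prompt with a 2k-token completion:
print(f"${request_cost(100_000, 2_000):.6f}")  # → $0.001100
```

Output pricing dominates only for generation-heavy workloads; at these rates a long prompt with a short completion is billed almost entirely on the input side.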