Qwen3.5 122B-A10B is a natively multimodal vision-language model built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts design, improving inference efficiency. In overall performance it is second only to Qwen3.5-397B-A17B. Its text capabilities significantly outperform those of Qwen3-235B-2507, and its visual capabilities surpass those of Qwen3-VL-235B.
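As a rough illustration of the two components named above, the sketch below pairs a kernelized linear-attention layer (linear rather than quadratic in sequence length) with a sparse top-k mixture-of-experts feed-forward layer. It is a minimal NumPy toy, not Qwen3.5's actual formulation: the feature map, routing rule, shapes, and expert definitions are all assumptions made for illustration.

```python
# Minimal sketch of a hybrid block: linear attention + sparse MoE FFN.
# Everything here (feature map, top-k routing, toy experts) is an
# illustrative assumption; the listing does not specify the real design.
import numpy as np

def linear_attention(q, k, v):
    """Kernelized linear attention: softmax is replaced by a positive
    feature map phi, so phi(Q) @ (phi(K).T @ V) costs O(n * d^2)
    instead of the O(n^2 * d) of standard attention."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, stays positive
    q, k = phi(q), phi(k)
    kv = k.T @ v                                  # (d, d_v) key/value summary
    z = q @ k.sum(axis=0, keepdims=True).T        # (n, 1) normalizer
    return (q @ kv) / (z + 1e-6)

def sparse_moe(x, experts, router_w, top_k=2):
    """Sparse MoE: each token runs only its top_k experts, so the
    activated parameter count per token stays small (the '-A10B'
    suffix refers to activated, not total, parameters)."""
    logits = x @ router_w                         # (n, num_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]
    out = np.zeros_like(x)
    for i, token in enumerate(x):
        sel = top[i]
        gates = np.exp(logits[i, sel])
        gates /= gates.sum()                      # softmax over selected experts
        for g, e in zip(gates, sel):
            out[i] += g * experts[e](token)
    return out

# Toy forward pass through one hybrid block.
rng = np.random.default_rng(0)
n, d, num_experts = 8, 16, 4
x = rng.normal(size=(n, d))
experts = [lambda t, w=rng.normal(size=(d, d)) * 0.1: np.tanh(t @ w)
           for _ in range(num_experts)]
router_w = rng.normal(size=(d, num_experts))

h = x + linear_attention(x, x, x)          # attention sublayer with residual
y = h + sparse_moe(h, experts, router_w)   # MoE sublayer with residual
print(y.shape)                             # (8, 16)
```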
By Qwen | 262K context | $0.26/M input tokens | $2.08/M output tokens
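For concreteness, the cost arithmetic implied by the listed rates is straightforward. The rates below come from the listing; the request sizes in the example are made up.

```python
# Per-request cost at the listed rates:
# $0.26 per million input tokens, $2.08 per million output tokens.
INPUT_PER_M = 0.26
OUTPUT_PER_M = 2.08

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the listed per-million-token rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: a 50K-token prompt with a 2K-token reply.
print(f"${request_cost(50_000, 2_000):.4f}")  # $0.0172
```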
Endpoints
Available providers for this model, with details on pricing, context limits, and real-time health metrics.