Cogito V2 Preview Llama 109B by deepcogito | Mume AI
An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage in an extended “thinking” phase, with alignment guided by Iterated Distillation & Amplification (IDA). It targets coding, STEM, instruction following, and general helpfulness, with stronger multilingual, tool-calling, and reasoning performance than size-equivalent baselines. The model supports long-context use (up to 10M tokens) and standard Transformers workflows. Users can control the reasoning behaviour with the `reasoning.enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
by Deepcogito · 33K context · $0.18/M input tokens · $0.59/M output tokens
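As a sketch of how the `reasoning.enabled` toggle might be used, the snippet below builds an OpenRouter-style chat-completions request body. The model slug and the exact request shape are assumptions for illustration; consult the linked docs for the authoritative format.

```python
import json

# Assumed endpoint and model slug -- verify against the provider's docs.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "deepcogito/cogito-v2-preview-llama-109b"  # assumed slug

def build_request(prompt: str, think: bool) -> dict:
    """Build a chat request body; think=False asks for a direct answer,
    think=True enables the model's extended "thinking" phase."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": think},
    }

payload = build_request("Prove that sqrt(2) is irrational.", think=True)
print(json.dumps(payload, indent=2))
# Send with e.g. requests.post(API_URL, json=payload,
#                              headers={"Authorization": "Bearer <KEY>"})
```

Setting `think=False` keeps latency and output-token cost down for simple queries; enabling it trades extra output tokens for stronger multi-step reasoning.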
Endpoints
Available providers for this model, with details on pricing, context limits, and real-time health metrics.