Cogito V2 Preview Deepseek 671B
Cogito v2 is a multilingual, instruction-tuned Mixture of Experts (MoE) large language model with 671 billion parameters. It supports both standard and reasoning-based generation modes. The model introduces hybrid reasoning via Iterated Distillation and Amplification (IDA), an iterative self-improvement strategy designed to scale alignment with general intelligence. Cogito v2 has been optimized for STEM, programming, instruction following, and tool use. It supports a 128k context length and offers strong performance in both multilingual and code-heavy environments. Users can control the reasoning behavior via the `enabled` boolean in the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
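As a minimal sketch of the reasoning toggle, the request body below shows where the `reasoning` `enabled` boolean sits in an OpenAI-compatible chat-completions payload. The model slug and field layout are assumptions based on the documentation linked above; check the provider's docs before use.

```python
import json

# Hypothetical request payload (not sent anywhere here) for an
# OpenAI-compatible chat completions endpoint. The model slug is an
# assumption; verify it against the provider's model list.
payload = {
    "model": "deepcogito/cogito-v2-preview-deepseek-671b",
    "messages": [
        {"role": "user", "content": "Summarize the key idea behind IDA."}
    ],
    # Hybrid reasoning toggle: True enables the reasoning mode,
    # False falls back to standard (direct) generation.
    "reasoning": {"enabled": True},
}

print(json.dumps(payload, indent=2))
```

Setting `"enabled": False` (or omitting the `reasoning` object, depending on the provider's defaults) selects the standard generation mode instead.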
by Deepcogito · 164K context · $1.25/M input tokens · $1.25/M output tokens
Endpoints
Available providers for this model, with details on pricing, context limits, and real-time health metrics.