Kimi Linear 48B A3B Instruct

Kimi Linear is a hybrid linear attention architecture that outperforms traditional full attention across short-context, long-context, and reinforcement learning (RL) scaling regimes. At its core is Kimi Delta Attention (KDA)—a refined version of Gated DeltaNet that introduces a more efficient gating mechanism to make better use of finite-state RNN memory. Kimi Linear achieves superior performance and hardware efficiency, especially on long-context tasks: it reduces KV cache size by up to 75% and boosts decoding throughput by up to 6x for contexts as long as 1M tokens.
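The "finite-state RNN memory" idea behind the Gated DeltaNet family can be sketched as a fixed-size matrix state updated by a gated delta rule: each step decays the state, erases the value currently associated with the incoming key, and writes in the corrected one. The sketch below is a minimal illustration of that recurrence in NumPy; the function name, shapes, and scalar gates are assumptions for exposition, not Kimi's actual (fine-grained, hardware-optimized) KDA implementation.

```python
import numpy as np

def gated_delta_step(S, q, k, v, alpha, beta):
    """One step of a gated delta-rule recurrence (illustrative sketch).

    S     : (d_k, d_v) fixed-size matrix state (replaces a growing KV cache)
    q, k  : (d_k,) query / key vectors (k assumed L2-normalized)
    v     : (d_v,) value vector
    alpha : forget gate in (0, 1] decaying the old state
    beta  : write strength in (0, 1]
    """
    S = alpha * S                          # gated decay of existing memory
    v_old = S.T @ k                        # value the state predicts for k
    S = S + beta * np.outer(k, v - v_old)  # delta rule: write the correction
    o = S.T @ q                            # read-out for the current query
    return S, o
```

Because the state has constant size regardless of sequence length, decoding cost per token stays flat — which is the mechanism behind the KV-cache and throughput claims above, even though the real KDA gating is per-channel rather than the scalars shown here.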
by Moonshotai | 1M context | $0.30/M input tokens | $0.60/M output tokens
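At the listed rates, per-request cost is straightforward arithmetic; a quick sketch (the example token counts are made up, not from the listing):

```python
# Listed rates for this model (USD per 1M tokens)
INPUT_PER_M = 0.30
OUTPUT_PER_M = 0.60

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the listed per-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 100k-token prompt with a 2k-token completion:
# 100_000 * 0.30/1e6 + 2_000 * 0.60/1e6 = 0.030 + 0.0012 = 0.0312 USD
```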

Endpoints

Available providers for this model, with details on pricing, context limits, and real-time health metrics.

No explicit endpoints reported for this model.