Gemma 4 26B A4B

Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Although it has 25.2B total parameters, only 3.8B are active per token during inference, delivering quality approaching that of a dense ~31B model at a fraction of the compute cost. It supports multimodal input spanning text, images, and video (up to 60 s at 1 fps), and features a 256K-token context window, native function calling, a configurable thinking/reasoning mode, and structured output support. Released under the Apache 2.0 license.
by Google | 262K context | $0.13/M input tokens | $0.40/M output tokens
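As a quick sanity check on the listed per-token rates, a sketch of the per-request cost arithmetic; the token counts below are arbitrary illustration values, not measurements.

```python
# Worked example of the listed pricing: $0.13 per 1M input tokens,
# $0.40 per 1M output tokens. Token counts are made up for illustration.
INPUT_RATE = 0.13 / 1_000_000    # $ per input token
OUTPUT_RATE = 0.40 / 1_000_000   # $ per output token

prompt_tokens, completion_tokens = 12_000, 1_500
cost = prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE
print(f"${cost:.6f}")  # -> $0.002160
```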

Endpoints

Available providers for this model, with details on pricing, context limits, and real-time health metrics.

No endpoints are currently reported for this model.