For tasks that demand low latency, GPT‑4.1 nano is the fastest and cheapest model in the GPT‑4.1 series. Despite its small size, it offers a 1 million token context window and scores 80.1% on MMLU, 50.3% on GPQA, and 9.8% on Aider polyglot coding, beating GPT‑4o mini. It's ideal for tasks like classification or autocompletion.
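As a sketch of the kind of low-latency classification task mentioned above, the snippet below builds a request for GPT‑4.1 nano. It assumes the OpenAI Python SDK and the model id `gpt-4.1-nano`; the prompt text and one-word label scheme are illustrative, not from the listing.

```python
import os

# Hedged sketch: parameters for a single-label sentiment classification call,
# assuming the OpenAI Python SDK and the model id "gpt-4.1-nano".
request = {
    "model": "gpt-4.1-nano",
    "messages": [
        {
            "role": "system",
            "content": (
                "Classify the sentiment of the user's message as "
                "positive, negative, or neutral. Reply with one word."
            ),
        },
        {"role": "user", "content": "The new context window is a huge improvement!"},
    ],
    "max_tokens": 1,  # a one-word label keeps output tokens (and cost) minimal
}

# Only hit the network when a key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```

Capping `max_tokens` at the label length is a common trick for classification workloads: it bounds both latency and output-token spend.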
By OpenAI | 1M context | $0.10/M input tokens | $0.40/M output tokens
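The listed per-token prices translate directly into a per-request cost estimate. A minimal sketch, using only the $0.10/M input and $0.40/M output rates from the line above (the example token counts are hypothetical):

```python
# Pricing from the listing: $0.10 per million input tokens,
# $0.40 per million output tokens.
INPUT_PRICE_PER_M = 0.10
OUTPUT_PRICE_PER_M = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one GPT-4.1 nano request."""
    return (
        input_tokens * INPUT_PRICE_PER_M
        + output_tokens * OUTPUT_PRICE_PER_M
    ) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply:
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.000400
```

Output tokens cost 4x input tokens here, so for verbose completions the output side dominates the bill even though prompts are usually longer.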