Error Handling
Build robust applications with proper error handling. The Mume Gateway uses standard HTTP error codes and the OpenAI SDK error classes.
Standard Errors

Python

```python
import openai

client = openai.OpenAI(
    api_key="your-api-key",
    base_url="https://mume.ai/api/v1",
)

try:
    response = client.chat.completions.create(
        model="openai/gpt-4.1-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except openai.APIConnectionError as e:
    print(f"Connection error: {e}")
except openai.RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except openai.APIStatusError as e:
    print(f"API error: {e.status_code} - {e.message}")
```

JavaScript

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your-api-key",
  baseURL: "https://mume.ai/api/v1",
});

try {
  const response = await client.chat.completions.create({
    model: "openai/gpt-4.1-mini",
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error instanceof OpenAI.APIConnectionError) {
    console.error("Connection error:", error.message);
  } else if (error instanceof OpenAI.RateLimitError) {
    console.error("Rate limit exceeded:", error.message);
  } else if (error instanceof OpenAI.APIError) {
    console.error(`API error: ${error.status} - ${error.message}`);
  } else {
    throw error;
  }
}
```

Error Codes
| Status | Error Type | Description |
|---|---|---|
| 400 | Bad Request | Invalid request parameters or malformed JSON |
| 401 | Unauthorized | Missing or invalid API key |
| 402 | Insufficient Credits | Account has no remaining credits |
| 429 | Rate Limit | Too many requests — back off and retry |
| 500 | Server Error | Internal server error — retry with backoff |
| 503 | Service Unavailable | Provider temporarily unavailable |
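The 429 and 5xx rows above both call for retries with exponential backoff. A minimal, SDK-agnostic sketch — the helper name `with_retries` and its parameters are illustrative, not part of the gateway API:

```python
import random
import time

def with_retries(call, is_retryable, max_retries=5, base=1.0, cap=30.0):
    """Run `call`, retrying with exponential backoff plus full jitter
    whenever `is_retryable(exception)` returns True."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as e:
            # Give up on the last attempt, or on non-retryable errors (e.g. 400/401)
            if attempt == max_retries - 1 or not is_retryable(e):
                raise
        # Full jitter: sleep a random amount in [0, min(cap, base * 2**attempt)]
        time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

With the OpenAI SDK from the examples above, you would retry on `openai.RateLimitError` (429) and on `openai.APIStatusError` with `status_code >= 500`, e.g. `with_retries(lambda: client.chat.completions.create(...), lambda e: isinstance(e, openai.RateLimitError) or (isinstance(e, openai.APIStatusError) and e.status_code >= 500))`.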
Streaming Error Handling
When using streaming, errors that occur mid-stream are delivered as SSE events rather than thrown exceptions. This allows partial responses to be preserved.
Python

```python
try:
    stream = client.chat.completions.create(
        model="openai/gpt-4.1-mini",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,
    )
    for chunk in stream:
        # Check for error events in the stream
        if hasattr(chunk, "error") and chunk.error:
            print(f"Stream error: {chunk.error.get('message')}")
            break
        content = chunk.choices[0].delta.content
        if content:
            print(content, end="", flush=True)
except openai.APIError as e:
    # Connection or pre-stream errors are still raised as exceptions
    print(f"API error: {e}")
```

JavaScript

```javascript
try {
  const stream = await client.chat.completions.create({
    model: "openai/gpt-4.1-mini",
    messages: [{ role: "user", content: "Hello!" }],
    stream: true,
  });
  for await (const chunk of stream) {
    if (chunk.error) {
      console.error("Stream error:", chunk.error.message);
      break;
    }
    const content = chunk.choices[0]?.delta?.content;
    if (content) {
      process.stdout.write(content);
    }
  }
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error(`API error: ${error.status} - ${error.message}`);
  } else {
    throw error;
  }
}
```

Note: Streaming errors include a `reqId` field that can be used for debugging and support requests.
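The `reqId` can be pulled out of the error payload and logged alongside the message so it is on hand for support requests. A small sketch, assuming the error payload is a dict-like object with `message` and `reqId` keys (the helper below is illustrative, not part of the SDK):

```python
import logging

def log_stream_error(error):
    """Log a mid-stream error and return its reqId for support requests.

    Assumes `error` is a dict like {"message": ..., "reqId": ...}.
    """
    req_id = error.get("reqId")
    logging.error("Stream error (reqId=%s): %s", req_id, error.get("message"))
    return req_id
```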
Best Practices
- Always wrap API calls in try/catch blocks
- Implement exponential backoff for 429 and 5xx errors
- Log the `reqId` from streaming errors for debugging
- Monitor error rates to detect provider issues early
- Consider fallback models: if one provider is down, switch to another
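The fallback-model advice can be sketched as a loop over a preference list that fails over only on provider-side errors, since 4xx errors will not be fixed by switching models. The helper name and its parameters are illustrative, not part of the gateway API:

```python
def complete_with_fallback(create, messages, models, is_provider_error):
    """Try each model in order; move to the next one only when
    `is_provider_error(exception)` is true (e.g. a 500 or 503 from the
    gateway). `create` is any callable shaped like
    client.chat.completions.create."""
    last_error = None
    for model in models:
        try:
            return create(model=model, messages=messages)
        except Exception as e:
            if not is_provider_error(e):
                raise  # client-side errors: don't mask them by retrying
            last_error = e
    raise last_error  # every model failed; surface the last provider error
```

With the SDK, you would pass `create=client.chat.completions.create` and `is_provider_error=lambda e: isinstance(e, openai.APIStatusError) and e.status_code >= 500`.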