
Error Response Format

When an error occurs, the gateway returns a JSON response with error details:
{
  "status": "failure",
  "message": "Error description",
  "error": {
    "type": "invalid_request_error",
    "code": "invalid_api_key",
    "message": "Incorrect API key provided"
  }
}
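
A small helper can pull the nested fields out of this structure. A minimal sketch in Python (the field names follow the format shown above; `extract_error` is an illustrative helper, not a published client API):

```python
def extract_error(payload: dict) -> tuple:
    """Return (type, code, message) from a gateway error payload."""
    err = payload.get("error", {})
    return (
        err.get("type", "unknown"),
        err.get("code", "unknown"),
        # Fall back to the top-level message if the nested one is absent.
        err.get("message", payload.get("message", "")),
    )
```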

HTTP Status Codes

The gateway uses standard HTTP status codes:

2xx Success

200 OK: Request succeeded
201 Created: Resource created successfully

4xx Client Errors

400 Bad Request: Invalid request format or parameters
401 Unauthorized: Invalid or missing authentication credentials
403 Forbidden: Valid credentials but insufficient permissions
404 Not Found: Requested resource does not exist
422 Unprocessable Entity: Request format is valid but contains semantic errors
429 Too Many Requests: Rate limit exceeded

5xx Server Errors

500 Internal Server Error: Unexpected server error
502 Bad Gateway: Error from upstream provider
503 Service Unavailable: Service temporarily unavailable
504 Gateway Timeout: Request timeout from upstream provider
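
The split above suggests a simple retry rule: 429 and the 5xx codes indicate transient conditions worth retrying, while the other 4xx codes mean the request itself must change before resending. A sketch:

```python
RETRYABLE = {429, 500, 502, 503, 504}

def is_retryable(status: int) -> bool:
    # 429 (rate limit) and 5xx (server/upstream) failures are usually
    # transient; other 4xx errors require fixing the request first.
    return status in RETRYABLE
```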

Error Types

Authentication Errors

{
  "status": "failure",
  "message": "Provider authentication failed",
  "error": {
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
Common causes:
  • Missing x-portkey-api-key header
  • Invalid API key for the provider
  • Expired API key

Invalid Request Errors

{
  "status": "failure",
  "message": "Invalid request parameters",
  "error": {
    "type": "invalid_request_error",
    "code": "missing_required_parameter",
    "param": "model"
  }
}
Common causes:
  • Missing required parameters
  • Invalid parameter values
  • Malformed JSON
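
One way to avoid the `missing_required_parameter` case is a pre-flight check on the request body. A sketch, assuming `model` and `messages` are the required chat-completion fields:

```python
def missing_params(payload: dict, required=("model", "messages")) -> list:
    """List required parameters absent from the request body."""
    return [p for p in required if p not in payload]
```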

Provider Errors

{
  "status": "failure",
  "message": "Provider request failed",
  "error": {
    "type": "provider_error",
    "code": "model_not_found",
    "provider": "openai"
  }
}
Common causes:
  • Invalid model name
  • Model not available for your account
  • Provider API is down

Rate Limit Errors

{
  "status": "failure",
  "message": "Rate limit exceeded",
  "error": {
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}
Response headers:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1677652320
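
These headers tell you exactly how long to pause before retrying. A minimal sketch that computes the wait time from them, assuming `X-RateLimit-Reset` is a Unix timestamp as in the example above:

```python
def retry_after_seconds(headers: dict, now: float) -> float:
    """Seconds to wait before retrying, based on rate-limit headers."""
    if int(headers.get("X-RateLimit-Remaining", "1")) > 0:
        return 0.0  # quota remaining, no need to wait
    reset = float(headers.get("X-RateLimit-Reset", now))
    return max(0.0, reset - now)
```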

Timeout Errors

{
  "status": "failure",
  "message": "Request timeout",
  "error": {
    "type": "timeout_error",
    "code": "request_timeout"
  }
}
Common causes:
  • Provider taking too long to respond
  • Network issues
  • Large request or response

Error Handling Best Practices

Retry Strategy

Implement exponential backoff for retries:
import time
from openai import OpenAI

def make_request_with_retry(client, max_retries=3):
    """Retry with exponential backoff: wait 1s, 2s, 4s, ... between attempts."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": "Hello!"}]
            )
        except Exception:
            # Out of retries: surface the original error. In production,
            # retry only transient failures (429 and 5xx), not client errors.
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)
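
For reference, the waits the loop above produces double on each failure (1s, 2s, 4s, ...); as a pure function over the same `2 ** attempt` formula:

```python
def backoff_schedule(max_retries: int, base: float = 1.0) -> list:
    # One wait per failed attempt; the final attempt raises instead of waiting.
    return [base * (2 ** attempt) for attempt in range(max_retries - 1)]
```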

Fallback Configuration

Use the gateway’s built-in fallback support:
curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H 'x-portkey-config: {
    "strategy": {"mode": "fallback"},
    "targets": [
      {"provider": "openai", "api_key": "sk-..."},
      {"provider": "anthropic", "api_key": "sk-ant-..."}
    ]
  }' \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
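
The same config can be built programmatically and passed as the `x-portkey-config` header value. A sketch mirroring the curl example above (the header and config schema come from that example; `fallback_config` itself is an illustrative helper):

```python
import json

def fallback_config(targets: list) -> str:
    """Serialize a fallback strategy for the x-portkey-config header."""
    return json.dumps({"strategy": {"mode": "fallback"}, "targets": targets})
```

You could then attach the resulting string as a default header on whatever HTTP client you point at the gateway.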

Check Provider Status

Before making requests, you can check provider availability:
curl http://localhost:8787/v1/models \
  -H "x-portkey-provider: openai" \
  -H "x-portkey-api-key: sk-..."

Debug Mode

Enable debug mode for detailed error information:
x-portkey-debug (boolean): Enable debug mode (returns additional error details)
curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-provider: openai" \
  -H "x-portkey-api-key: sk-..." \
  -H "x-portkey-debug: true" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
