OpenAI SDK Integration

The OpenAI SDK provides official client libraries for Python and Node.js. Since LLM Gateway is fully OpenAI-compatible, you can use these SDKs with minimal configuration changes.

Quick Start

To use LLM Gateway with the OpenAI SDK, you only need to:
  1. Set the base URL to https://api.llmgateway.io/v1
  2. Use your LLM Gateway API key for authentication
  3. Specify models using the format provider/model or use auto for automatic routing
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key"
)

response = client.chat.completions.create(
    model="gpt-5",  # or "auto" for automatic routing
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Installation

Python:
pip install openai

Node.js:
npm install openai

Before and After Comparison

Python (before, calling OpenAI directly)

from openai import OpenAI

client = OpenAI(
    api_key="sk-..."  # OpenAI API key
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
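And after, pointing the same code at LLM Gateway; only the client construction and the model name change, using the configuration from the Quick Start:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmgateway.io/v1",  # LLM Gateway endpoint
    api_key="your-llmgateway-api-key"         # LLM Gateway API key
)

response = client.chat.completions.create(
    model="gpt-5",  # or "openai/gpt-4o" to pin a specific provider
    messages=[{"role": "user", "content": "Hello!"}]
)
```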

Node.js (before, calling OpenAI directly)

import OpenAI from 'openai';

const client = new OpenAI({
    apiKey: 'sk-...'  // OpenAI API key
});

const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
});
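And after, with the client pointed at LLM Gateway via the `baseURL` option:

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
    baseURL: 'https://api.llmgateway.io/v1',  // LLM Gateway endpoint
    apiKey: 'your-llmgateway-api-key'         // LLM Gateway API key
});

const response = await client.chat.completions.create({
    model: 'gpt-5',  // or 'auto' for automatic routing
    messages: [{ role: 'user', content: 'Hello!' }]
});
```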

Streaming

LLM Gateway fully supports streaming responses:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key"
)

stream = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Function Calling (Tools)

LLM Gateway supports OpenAI’s function calling API:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key"
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    tools=tools
)

print(response.choices[0].message.tool_calls)
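Note that each tool call's `function.arguments` field is a JSON-encoded string, not a parsed object, so decode it before invoking your own function. A minimal sketch with a hard-coded example payload (the `raw_arguments` value is illustrative):

```python
import json

# In a real response this string comes from:
#   response.choices[0].message.tool_calls[0].function.arguments
raw_arguments = '{"location": "Boston"}'

# Decode the JSON string into a dict before calling your function
args = json.loads(raw_arguments)
print(args["location"])  # Boston
```

After running your function, append its result as a message with role "tool" and the matching tool_call_id, then call chat.completions.create again so the model can produce its final answer.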

Environment Variables

You can use environment variables instead of hardcoding credentials:
.env
OPENAI_BASE_URL=https://api.llmgateway.io/v1
OPENAI_API_KEY=your-llmgateway-api-key
from openai import OpenAI

# Automatically reads OPENAI_BASE_URL and OPENAI_API_KEY
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Hello!"}]
)

Model Selection

LLM Gateway supports multiple ways to specify models:
# Use LLM Gateway's model names (auto-routing across providers)
model="gpt-5"  # Routes to best available provider

# Use automatic routing
model="auto"  # Automatically selects cheapest model

# Specify a provider
model="openai/gpt-4o"  # Use OpenAI specifically
model="anthropic/claude-3-5-sonnet-20241022"  # Use Anthropic

Advanced Features

JSON Output

Force the model to output valid JSON. Note that OpenAI's json_object mode requires the word "JSON" to appear somewhere in your messages:
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Generate a user profile as JSON"}],
    response_format={"type": "json_object"}
)

Reasoning Models

Control reasoning effort for reasoning models such as gpt-5 or o1:
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Solve this complex problem"}],
    reasoning_effort="high"  # Options: minimal, low, medium, high, xhigh
)

Caveats and Limitations

  • Model Names: Use LLM Gateway’s model naming scheme (e.g., gpt-5 instead of gpt-4o)
  • Authentication: Use your LLM Gateway API key, not provider-specific keys
  • Base URL: Always set base_url (Python) or baseURL (Node.js) to https://api.llmgateway.io/v1
  • Response Metadata: LLM Gateway adds extra metadata in the response (provider used, routing info, costs)
