
Overview

The OpenAI Python SDK provides two client classes for interacting with the API:
  • OpenAI - Synchronous client for blocking operations
  • AsyncOpenAI - Asynchronous client for concurrent operations
Both clients automatically infer credentials from environment variables and provide the same interface for API operations.

Basic Initialization

Synchronous Client

from openai import OpenAI

client = OpenAI()
The client automatically reads the OPENAI_API_KEY environment variable. You can also pass the API key explicitly:
client = OpenAI(api_key="your-api-key-here")

Asynchronous Client

from openai import AsyncOpenAI

client = AsyncOpenAI()
The AsyncOpenAI client provides the same initialization options as OpenAI. All API methods return awaitable coroutines.

Configuration Options

The client accepts several configuration parameters to customize behavior:
api_key (str | Callable[[], str])
  Your OpenAI API key. Can be a string or a callable that returns a string (for dynamic key rotation).
  Environment variable: OPENAI_API_KEY

organization (str | None, default: None)
  Your organization ID for API requests.
  Environment variable: OPENAI_ORG_ID

project (str | None, default: None)
  Your project ID for API requests.
  Environment variable: OPENAI_PROJECT_ID

base_url (str | httpx.URL | None, default: "https://api.openai.com/v1")
  Override the default API base URL.
  Environment variable: OPENAI_BASE_URL

timeout (float | Timeout | None, default: 600 seconds)
  Request timeout in seconds. Can be a float or an httpx.Timeout object for granular control.
  See Timeouts for detailed configuration.

max_retries (int, default: 2)
  Maximum number of retry attempts for failed requests.
  See Retries for retry behavior details.

default_headers (Mapping[str, str] | None, default: None)
  Additional headers to include with every request.

default_query (Mapping[str, object] | None, default: None)
  Additional query parameters to include with every request.

http_client (httpx.Client | httpx.AsyncClient | None, default: None)
  Custom HTTP client instance. Use DefaultHttpxClient or DefaultAsyncHttpxClient to preserve SDK defaults.

webhook_secret (str | None, default: None)
  Secret for webhook signature verification.
  Environment variable: OPENAI_WEBHOOK_SECRET

websocket_base_url (str | httpx.URL | None, default: None)
  Base URL for WebSocket connections. If not specified, the default base URL is used with the "wss://" scheme.

Advanced Configuration

Custom Headers and Query Parameters

client = OpenAI(
    default_headers={
        "X-Custom-Header": "value",
    },
    default_query={
        "custom_param": "value",
    },
)

Organization and Project IDs

client = OpenAI(
    organization="org-123",
    project="proj-456",
)
Organization and Project IDs are sent as OpenAI-Organization and OpenAI-Project headers with each request.

Dynamic API Keys

For scenarios requiring API key rotation, you can provide a callable:
def get_api_key() -> str:
    # Fetch API key from secure storage
    return fetch_from_vault()

client = OpenAI(api_key=get_api_key)
For async clients, the callable can be async:
async def get_api_key() -> str:
    # Fetch API key asynchronously
    return await fetch_from_vault_async()

client = AsyncOpenAI(api_key=get_api_key)

Custom HTTP Client

Customize the underlying HTTP client for advanced use cases like proxies or custom transports:
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    http_client=DefaultHttpxClient(
        proxy="http://proxy.example.com:8080",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
When providing a custom http_client, use DefaultHttpxClient or DefaultAsyncHttpxClient to preserve the SDK’s default timeout, connection limits, and redirect behavior.

Context Manager Usage

Both clients support context managers for automatic resource cleanup:
with OpenAI() as client:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
# Client is automatically closed
Async client:
async with AsyncOpenAI() as client:
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
# Client is automatically closed

Per-Request Configuration

Override client settings for individual requests using with_options():
client = OpenAI()

# Override timeout for a single request
response = client.with_options(timeout=30.0).chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Override max_retries for a single request
response = client.with_options(max_retries=5).chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

Client Lifecycle

Manual Cleanup

If not using a context manager, manually close the client when done:
client = OpenAI()
try:
    # Use client
    pass
finally:
    client.close()
Async client:
client = AsyncOpenAI()
try:
    # Use client
    pass
finally:
    await client.close()
The client automatically closes when garbage collected, but explicit cleanup is recommended for long-running applications.
