This quickstart guide will help you make your first API call using the OpenAI Python SDK. We’ll cover both the Responses API (recommended) and the Chat Completions API.

Prerequisites

Before you begin, make sure you have:
1. Installed the SDK

pip install openai
See the Installation guide for more details.
2. Set up your API key

export OPENAI_API_KEY="sk-proj-..."
Get your API key from the OpenAI dashboard.
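
Before making your first call, you can confirm the key is actually visible to your Python process. This helper is a convenience sketch for this guide, not part of the SDK:

```python
import os

def key_configured() -> bool:
    """Return True if OPENAI_API_KEY is set in the environment."""
    return bool(os.environ.get("OPENAI_API_KEY"))

if key_configured():
    print("OPENAI_API_KEY is set")
else:
    print("OPENAI_API_KEY is not set; export it before continuing")
```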

Using the Responses API

The Responses API is the primary way to interact with OpenAI models. It provides a simplified interface for generating text.

Basic Text Generation

Create your first response:
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

response = client.responses.create(
    model="gpt-5.2",
    instructions="You are a coding assistant that talks like a pirate.",
    input="How do I check if a Python object is an instance of a class?",
)

print(response.output_text)
The api_key parameter is optional. If not provided, the client automatically reads from the OPENAI_API_KEY environment variable.

Simple Input

For straightforward queries, you can pass just the input text:
response = client.responses.create(
    model="gpt-5.2",
    input="Explain quantum computing in simple terms.",
)

print(response.output_text)

Vision Input

Process images alongside text:
response = client.responses.create(
    model="gpt-5.2",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What is in this image?"},
                {
                    "type": "input_image",
                    "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"
                },
            ],
        }
    ],
)

print(response.output_text)

Using the Chat Completions API

The Chat Completions API is the previous standard for generating text and will continue to be supported indefinitely.

Basic Chat Completion

from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "developer", "content": "Talk like a pirate."},
        {
            "role": "user",
            "content": "How do I check if a Python object is an instance of a class?",
        },
    ],
)

print(completion.choices[0].message.content)

Message Roles

The Chat Completions API uses different message roles:
  • developer: System-level instructions for the model’s behavior
  • user: User messages in the conversation
  • assistant: Previous assistant responses (for conversation history)
completion = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "developer", "content": "You are a helpful Python tutor."},
        {"role": "user", "content": "What is a list comprehension?"},
        {"role": "assistant", "content": "A list comprehension is a concise way to create lists..."},
        {"role": "user", "content": "Can you show me an example?"},
    ],
)

print(completion.choices[0].message.content)

Streaming Responses

Stream responses in real-time for a better user experience:
from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="gpt-5.2",
    input="Write a one-sentence bedtime story about a unicorn.",
    stream=True,
)

for event in stream:
    print(event)
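
The raw stream interleaves lifecycle events (response created, completed) with text deltas, so in practice you usually filter for the delta events and concatenate their text. The sketch below uses a stand-in Event class so it runs offline; the response.output_text.delta type name and delta field mirror the streaming event shape, but Event itself is not an SDK type:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Stand-in for SDK stream events: a `type` plus, for text deltas, a `delta`."""
    type: str
    delta: str = ""

def collect_text(events) -> str:
    """Concatenate only the text-delta events, skipping lifecycle events."""
    return "".join(
        e.delta for e in events if e.type == "response.output_text.delta"
    )

# Usage with fake events; with the SDK you would iterate the real stream:
events = [
    Event("response.created"),
    Event("response.output_text.delta", "Once upon "),
    Event("response.output_text.delta", "a time..."),
    Event("response.completed"),
]
print(collect_text(events))  # Once upon a time...
```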

Async Usage

For async applications, use AsyncOpenAI instead of OpenAI:
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

async def main() -> None:
    response = await client.responses.create(
        model="gpt-5.2",
        input="Explain disestablishmentarianism to a smart five year old.",
    )
    print(response.output_text)

asyncio.run(main())
The async client provides identical functionality to the sync client, but all methods use await.
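
Because the methods are awaitable, you can also fan several requests out concurrently with asyncio.gather. The sketch below substitutes a stub coroutine, fake_create, for client.responses.create so it runs without an API key; with the real client the gather pattern is the same:

```python
import asyncio

async def fake_create(prompt: str) -> str:
    """Stub for client.responses.create so the example runs offline."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"answer to: {prompt}"

async def main() -> list[str]:
    prompts = ["What is Python?", "What is asyncio?", "What is an event loop?"]
    # gather() runs all coroutines concurrently and preserves input order
    return await asyncio.gather(*(fake_create(p) for p in prompts))

results = asyncio.run(main())
print(results[0])  # answer to: What is Python?
```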

Error Handling

Handle API errors gracefully:
import openai
from openai import OpenAI

client = OpenAI()

try:
    response = client.responses.create(
        model="gpt-5.2",
        input="Hello, world!",
    )
    print(response.output_text)

except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.

except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")

except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)

Error Types

The SDK provides specific exception types for different errors:
Status Code   Error Type
400           BadRequestError
401           AuthenticationError
403           PermissionDeniedError
404           NotFoundError
422           UnprocessableEntityError
429           RateLimitError
≥500          InternalServerError
N/A           APIConnectionError

Common Configuration

Timeouts

Set custom timeout values:
from openai import OpenAI

# Set default timeout for all requests (default is 10 minutes)
client = OpenAI(timeout=20.0)  # 20 seconds

# Override per-request
client.with_options(timeout=5.0).responses.create(
    model="gpt-5.2",
    input="Quick response please!",
)

Retries

Configure retry behavior:
from openai import OpenAI

# Configure default retries (default is 2)
client = OpenAI(max_retries=0)  # Disable retries

# Override per-request
client.with_options(max_retries=5).responses.create(
    model="gpt-5.2",
    input="Important request - retry if it fails",
)
Certain errors (connection errors, 408, 409, 429, and ≥500) are automatically retried with exponential backoff.
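
As a rough illustration of that backoff pattern, each retry roughly doubles the wait, caps it, and adds jitter so concurrent clients don't retry in lockstep. The constants below are assumptions for the sketch, not the SDK's internal values:

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    """Delay before retry `attempt` (0-indexed): base * 2^attempt,
    capped at `cap`, scaled by random jitter."""
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(0.5, 1.0)

for attempt in range(4):
    print(f"retry {attempt}: sleep up to {min(8.0, 0.5 * 2 ** attempt)}s")
```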

Custom Base URL

Use a custom API endpoint:
from openai import OpenAI

client = OpenAI(
    base_url="http://my.test.server.example.com:8083/v1",
)

Next Steps

Now that you’ve made your first API call, explore more advanced features:

  • Responses API: learn more about the Responses API
  • Chat Completions: explore the Chat Completions API
  • Streaming: master streaming responses
  • Error Handling: handle errors like a pro
