GET /v1/batches/{batch_id}
from openai import OpenAI
client = OpenAI()

batch = client.batches.retrieve("batch_abc123")
print(batch.status)
{
  "id": "batch_abc123",
  "object": "batch",
  "endpoint": "/v1/chat/completions",
  "input_file_id": "file-abc123",
  "completion_window": "24h",
  "status": "completed",
  "output_file_id": "file-xyz789",
  "error_file_id": "file-error456",
  "created_at": 1711073447,
  "in_progress_at": 1711073448,
  "completed_at": 1711075047,
  "request_counts": {
    "total": 100,
    "completed": 95,
    "failed": 5
  }
}
Retrieves a batch.

Path Parameters

batch_id
string
required
The ID of the batch to retrieve.

Response

Returns the Batch object matching the specified ID.
id
string
The batch ID.
object
string
The object type, which is always batch.
endpoint
string
The OpenAI API endpoint used by the batch.
input_file_id
string
The ID of the input file for the batch.
completion_window
string
The time frame within which the batch should be processed.
status
string
The current status of the batch.
request_counts
object
The request counts for different statuses within the batch.
output_file_id
string
The ID of the file containing the outputs of successfully executed requests.
error_file_id
string
The ID of the file containing the outputs of requests with errors.
created_at
integer
The Unix timestamp (in seconds) for when the batch was created.
in_progress_at
integer
The Unix timestamp (in seconds) for when the batch started processing.
completed_at
integer
The Unix timestamp (in seconds) for when the batch was completed.
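Because a batch runs asynchronously, clients typically poll this endpoint until status reaches a final value. A minimal sketch, assuming the standard openai Python SDK; the helper names, the terminal-status set, and the 30-second interval are illustrative choices, not part of the SDK:

```python
import time

# Statuses after which a batch no longer changes.
TERMINAL_STATUSES = {"completed", "failed", "expired", "cancelled"}

def is_terminal(status: str) -> bool:
    """Return True once a batch has reached a final status."""
    return status in TERMINAL_STATUSES

def wait_for_batch(client, batch_id: str, poll_interval: float = 30.0):
    """Poll GET /v1/batches/{batch_id} until the batch finishes.

    `client` is an openai.OpenAI instance; only `batches.retrieve` is used,
    so any object exposing that method works.
    """
    while True:
        batch = client.batches.retrieve(batch_id)
        if is_terminal(batch.status):
            return batch
        time.sleep(poll_interval)
```

With an authenticated client, `wait_for_batch(client, "batch_abc123")` blocks until the batch completes, fails, expires, or is cancelled, then returns the final Batch object.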
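The files referenced by output_file_id and error_file_id are JSONL: one JSON object per request. A minimal parsing sketch; the helper name and the sample line are illustrative, not part of the SDK:

```python
import json

def parse_batch_output(jsonl_text: str) -> list[dict]:
    """Split batch output-file content (JSONL) into one dict per request."""
    return [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]

# Illustrative line in the shape batch output files use:
sample = (
    '{"id": "batch_req_1", "custom_id": "request-1", '
    '"response": {"status_code": 200, "body": {}}, "error": null}'
)
results = parse_batch_output(sample)
```

The file content itself can be fetched with `client.files.content(batch.output_file_id)`; its `.text` attribute can then be passed to the parser above.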
