GET /v1/batches/:id

Retrieve information about a specific batch, including its status and results.

Authentication

Requires provider authentication headers:
x-portkey-provider: openai
Authorization: Bearer YOUR_OPENAI_API_KEY

Request

Headers

x-portkey-provider
string
required
The provider to route the request to (e.g., openai)
Authorization
string
required
Bearer token for the provider API

Path Parameters

id
string
required
The ID of the batch to retrieve

Response

id
string
The batch identifier
object
string
The object type, always "batch"
endpoint
string
The endpoint used for the batch
errors
object
Error information if any requests failed
input_file_id
string
The ID of the input file
completion_window
string
The completion time frame
status
string
The current status of the batch:
  • validating - Checking the input file format
  • in_progress - Processing the requests
  • finalizing - Generating output files
  • completed - All requests processed
  • failed - Batch processing failed
  • cancelled - Batch was cancelled
output_file_id
string
The ID of the file containing the outputs (available when status is completed)
error_file_id
string
The ID of the file containing errors (if any)
created_at
integer
Unix timestamp of when the batch was created
in_progress_at
integer
Unix timestamp of when the batch started processing
completed_at
integer
Unix timestamp of when the batch completed
request_counts
object
Statistics about the batch requests
metadata
object
Custom metadata attached to the batch

Example

curl http://localhost:8787/v1/batches/batch_abc123 \
  -H "x-portkey-provider: openai" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
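The same request can be built in Python with only the standard library. This is a minimal sketch: the gateway address is assumed to be the local default used in the curl example above, and `build_batch_request` is an illustrative helper, not part of the gateway API.

```python
import urllib.request

GATEWAY_URL = "http://localhost:8787"  # assumed local gateway address, as in the curl example

def build_batch_request(batch_id: str, api_key: str) -> urllib.request.Request:
    """Construct the GET /v1/batches/:id request with the provider auth headers."""
    return urllib.request.Request(
        f"{GATEWAY_URL}/v1/batches/{batch_id}",
        headers={
            "x-portkey-provider": "openai",
            "Authorization": f"Bearer {api_key}",
        },
    )

# To send it (requires a running gateway and a valid key):
#   import json, os
#   req = build_batch_request("batch_abc123", os.environ["OPENAI_API_KEY"])
#   with urllib.request.urlopen(req) as resp:
#       batch = json.load(resp)
```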

Response Example

{
  "id": "batch_abc123",
  "object": "batch",
  "endpoint": "/v1/chat/completions",
  "errors": null,
  "input_file_id": "file-abc123",
  "completion_window": "24h",
  "status": "completed",
  "output_file_id": "file-xyz789",
  "error_file_id": "file-err456",
  "created_at": 1713894800,
  "in_progress_at": 1713894900,
  "completed_at": 1713898500,
  "request_counts": {
    "total": 100,
    "completed": 98,
    "failed": 2
  },
  "metadata": {
    "description": "Daily batch job"
  }
}

Output File Format

Once the batch status is completed, download the file referenced by output_file_id. The output is in JSONL format: each line is a JSON object with the result of one request, matched to your input by custom_id:
{"id": "batch_req_abc123", "custom_id": "request-1", "response": {"status_code": 200, "body": {"id": "chatcmpl-123", "object": "chat.completion", "created": 1713894800, "model": "gpt-4o-mini", "choices": [{"index": 0, "message": {"role": "assistant", "content": "2+2 equals 4."}, "finish_reason": "stop"}]}}, "error": null}
{"id": "batch_req_def456", "custom_id": "request-2", "response": {"status_code": 200, "body": {"id": "chatcmpl-456", "object": "chat.completion", "created": 1713894801, "model": "gpt-4o-mini", "choices": [{"index": 0, "message": {"role": "assistant", "content": "The capital of France is Paris."}, "finish_reason": "stop"}]}}, "error": null}
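Because each output line is an independent JSON object, the file can be parsed line by line. A minimal sketch that maps each custom_id to its assistant reply, following the field paths visible in the example lines above:

```python
import json

def parse_batch_output(jsonl_text: str) -> dict[str, str]:
    """Map each custom_id to the assistant message content of its response."""
    results = {}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("error") is not None:
            continue  # failed requests are reported in the error file instead
        body = record["response"]["body"]
        results[record["custom_id"]] = body["choices"][0]["message"]["content"]
    return results
```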

Best Practices

Batch processing typically takes several hours. Implement polling logic with appropriate intervals (1-5 minutes) rather than continuous polling.
Use webhooks (if supported by your provider) to get notified when batches complete instead of polling.
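The polling advice above can be sketched as follows. `fetch_batch` stands in for any function that performs the GET /v1/batches/:id call and returns the parsed JSON; the terminal statuses are taken from the status list above, and the names here are illustrative, not part of the gateway API.

```python
import time
from typing import Callable

TERMINAL_STATUSES = {"completed", "failed", "cancelled"}  # from the status list above

def wait_for_batch(
    fetch_batch: Callable[[], dict],
    interval_seconds: float = 60.0,  # batch jobs often take hours; poll sparingly
    sleep: Callable[[float], None] = time.sleep,
) -> dict:
    """Poll until the batch reaches a terminal status, then return the batch object."""
    while True:
        batch = fetch_batch()
        if batch["status"] in TERMINAL_STATUSES:
            return batch
        sleep(interval_seconds)
```

Injecting `sleep` as a parameter keeps the loop testable; in production the default `time.sleep` is used.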

Related Endpoints

  • Create Batch - Create a new batch
  • Cancel Batch - Cancel a running batch
  • List Batches - View all batches
