OpenAI Codex is OpenAI's advanced coding assistant. Maestro supports it with GPT-5, GPT-4, and O-series reasoning models.

Overview

Codex (codex) provides intelligent code generation with support for the latest OpenAI models including GPT-5.1, GPT-5.2, O3, and O4-mini.
Codex supports session resume, read-only sandbox mode, image input, and provides detailed usage statistics with reasoning tokens.

Installation

1. Install Codex CLI

Install via your preferred method:
npm install -g @openai/codex
2. Configure API Key

Set your OpenAI API key:
export OPENAI_API_KEY=sk-...
Or configure in ~/.codex/config.toml:
[auth]
api_key = "sk-..."
3. Verify Installation

Test the installation:
codex --version

Capabilities

Codex provides comprehensive AI coding features:
  • supportsResume (boolean, default: true): resume sessions with exec resume <id>
  • supportsReadOnlyMode (boolean, default: true): sandbox read-only mode with --sandbox read-only
  • supportsJsonOutput (boolean, default: true): structured JSONL output with the --json flag
  • supportsSessionId (boolean, default: true): thread IDs in thread.started events
  • supportsImageInput (boolean, default: true): attach images with the -i, --image flag
  • supportsImageInputOnResume (boolean, default: true): images embedded in prompt text on resume (no -i flag support)
  • supportsSessionStorage (boolean, default: true): sessions stored in ~/.codex/sessions/YYYY/MM/DD/*.jsonl
  • supportsUsageStats (boolean, default: true): token usage in turn.completed events
  • supportsModelSelection (boolean, default: true): select models with the -m, --model flag
  • supportsThinkingDisplay (boolean, default: true): reasoning tokens for O3/O4-mini models
  • supportsContextMerge (boolean, default: true): receive merged context via prompts
  • supportsContextExport (boolean, default: true): export session context for transfer
Codex does NOT support cost tracking; it reports only token counts, and pricing varies by model.
Source: src/main/agents/capabilities.ts:176

Command-Line Arguments

Maestro uses the exec subcommand for batch operations:
codex exec --json --dangerously-bypass-approvals-and-sandbox --skip-git-repo-check [options] -- "prompt"
  • exec (subcommand): batch execution mode (required for Maestro)
  • --json (flag): enable JSON output for parsing
  • --dangerously-bypass-approvals-and-sandbox (flag): auto-approve all operations (required by Maestro)
  • --skip-git-repo-check (flag): allow execution outside git repositories
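The full invocation above can be sketched as a small argument builder. This is a hypothetical helper for illustration (buildExecArgs is not Maestro's actual API):

```typescript
// Sketch of assembling the exec invocation shown above.
// buildExecArgs is a hypothetical helper, not Maestro's real code.
function buildExecArgs(prompt: string, extra: string[] = []): string[] {
  return [
    "exec",
    "--json",                                     // JSONL output for parsing
    "--dangerously-bypass-approvals-and-sandbox", // auto-approve all operations
    "--skip-git-repo-check",                      // allow non-git directories
    ...extra,                                     // e.g. ["-m", "o3"]
    "--",                                         // end of options
    prompt,
  ];
}

// Example: the argv Maestro would pass to the codex binary.
console.log(["codex", ...buildExecArgs("List every TODO comment")].join(" "));
```

Keeping the prompt after a literal `--` ensures prompts that start with a dash are not parsed as flags.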

Resume Mode

Resume an existing session:
codex exec --json [options] resume {threadId} -- "prompt"
  • resume {threadId} (subcommand): resumes the session with the given thread ID
Codex resume requires a prompt - you cannot resume without sending a message.

Read-Only Mode

For analysis without file modifications:
codex exec --json --sandbox read-only [options] -- "prompt"
  • --sandbox (string, default: "read-only"): sandbox mode restricting file write access

Model Selection

Specify a model:
codex exec --json -m gpt-5.3-codex [options] -- "prompt"
  • -m, --model (string): model ID (e.g., gpt-5.3-codex, o3, o4-mini)

Working Directory

Set working directory:
codex exec --json -C /path/to/project [options] -- "prompt"
  • -C (string): working directory for execution

Image Input

Attach an image:
codex exec --json -i /path/to/screenshot.png [options] -- "prompt"
  • -i, --image (string): path to image file
The -i flag is NOT supported with resume. Images are saved to temp files and their paths are embedded in the prompt text.
Source: src/main/agents/definitions.ts:140
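Since resume rejects the -i flag, the only way to reference an image mid-session is through the prompt itself. A minimal sketch of that workaround (embedImagesInPrompt is a hypothetical helper; Maestro's real logic lives in its agent definitions):

```typescript
// Sketch: embed image file paths in the prompt text when resuming,
// because `codex exec resume` does not accept -i/--image.
// Hypothetical helper, shown only to illustrate the workaround.
function embedImagesInPrompt(prompt: string, imagePaths: string[]): string {
  if (imagePaths.length === 0) return prompt;
  const refs = imagePaths.map((p) => `[image: ${p}]`).join("\n");
  return `${refs}\n\n${prompt}`;
}
```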

Configuration

Configure Codex in Maestro’s agent settings:

Model Override

Specify a default model:
  • model (string, default: ""): model ID (empty = use the config.toml default). Examples: gpt-5.3-codex, o3, gpt-4o

Context Window

Set context window for UI display:
  • contextWindow (number, default: 400000): token limit for the selected model
      • GPT-5.2/5.3: 400,000 tokens
      • GPT-4o: 128,000 tokens
Source: src/main/agents/definitions.ts:162
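The configured window only drives the usage display, and the arithmetic behind such a meter is straightforward. A sketch (contextUsagePercent is a hypothetical helper):

```typescript
// Sketch: percentage of the configured context window consumed,
// as a UI usage meter might compute it. Hypothetical helper.
function contextUsagePercent(usedTokens: number, contextWindow: number): number {
  if (contextWindow <= 0) return 0;
  return Math.min(100, (usedTokens / contextWindow) * 100);
}

// 100,000 tokens used of a 400,000-token window is 25% of the context.
console.log(contextUsagePercent(100_000, 400_000)); // 25
```

Setting the wrong window makes this percentage misleading, which is why the model-specific values above matter.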

Session Storage

Codex stores sessions in date-organized JSONL files:
~/.codex/sessions/YYYY/MM/DD/{thread-id}.jsonl
Each line is a JSON event representing conversation turns, tool uses, and results. Maestro can import and resume existing Codex sessions. Source: src/main/storage/codex-session-storage.ts
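Given that layout, a session's on-disk path follows from its start date and thread ID. A sketch (sessionPath is a hypothetical helper; the real import logic is in src/main/storage/codex-session-storage.ts):

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Sketch: derive the JSONL path for a Codex session from its start date
// and thread ID, following the ~/.codex/sessions/YYYY/MM/DD layout.
function sessionPath(threadId: string, startedAt: Date): string {
  const yyyy = String(startedAt.getUTCFullYear());
  const mm = String(startedAt.getUTCMonth() + 1).padStart(2, "0");
  const dd = String(startedAt.getUTCDate()).padStart(2, "0");
  return path.join(os.homedir(), ".codex", "sessions", yyyy, mm, dd, `${threadId}.jsonl`);
}
```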

Output Format

Codex outputs newline-delimited JSON events:
{
  "type": "thread.started",
  "thread_id": "thread_abc123",
  "timestamp": "2024-03-01T12:00:00Z"
}
Reasoning tokens are only present for O3/O4-mini models.
Source: src/main/parsers/codex-parser.ts
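Consuming this stream amounts to splitting on newlines and switching on the event type. A sketch (the turn.completed/usage shape shown here is an assumption for illustration, not a documented schema):

```typescript
// Sketch: fold a Codex JSONL stream into the thread ID and last usage report.
// Event shapes beyond thread.started are assumed, not documented.
interface ParsedStream {
  threadId?: string;
  usage?: Record<string, number>;
}

function parseCodexStream(jsonl: string): ParsedStream {
  const result: ParsedStream = {};
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue;       // skip blank lines
    const event = JSON.parse(line);
    if (event.type === "thread.started") result.threadId = event.thread_id;
    if (event.type === "turn.completed") result.usage = event.usage;
  }
  return result;
}
```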

Supported Models

Codex supports a wide range of OpenAI models:
  • gpt-5.1 - Latest GPT-5 model
  • gpt-5.1-codex - Optimized for coding
  • gpt-5.1-codex-max - Extended context
  • gpt-5.2 - Improved reasoning
  • gpt-5.3-codex - Latest coding variant
  • o3 - Advanced reasoning model
  • o4-mini - Lightweight reasoning
  • gpt-4o - GPT-4 Omni
  • gpt-4-turbo - Fast GPT-4 variant

Error Patterns

Common errors Maestro detects:
  • API_KEY_INVALID: pattern "invalid API key". Solution: set a valid OPENAI_API_KEY.
  • RATE_LIMIT: pattern "rate limit exceeded". Solution: wait or upgrade your API tier.
  • CONTEXT_LENGTH: pattern "maximum context length". Solution: start a new session or use a model with a larger context.
  • INSUFFICIENT_QUOTA: pattern "insufficient quota". Solution: add credits to your OpenAI account.
Source: src/main/parsers/error-patterns.ts
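Detection like this usually reduces to a table of case-insensitive patterns matched against stderr. A sketch (an approximation for illustration; the real table lives in src/main/parsers/error-patterns.ts):

```typescript
// Sketch: map a stderr line to one of the error codes listed above.
// Approximation of the pattern table, not the actual source.
const ERROR_PATTERNS: Array<[string, RegExp]> = [
  ["API_KEY_INVALID", /invalid api key/i],
  ["RATE_LIMIT", /rate limit exceeded/i],
  ["CONTEXT_LENGTH", /maximum context length/i],
  ["INSUFFICIENT_QUOTA", /insufficient quota/i],
];

function classifyError(line: string): string | undefined {
  for (const [code, pattern] of ERROR_PATTERNS) {
    if (pattern.test(line)) return code;
  }
  return undefined; // not a recognized Codex error
}
```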

Usage with Maestro Features

Auto Run

Full support for playbooks and automated workflows

Group Chat

Multi-agent collaboration with other providers

Context Grooming

Export and merge conversation context

Session Discovery

Import sessions from ~/.codex/sessions/

Best Practices

1. Choose the Right Model

  • Use gpt-5.3-codex for coding tasks
  • Use o3 or o4-mini for complex reasoning
  • Use gpt-4o for general tasks

2. Set Context Window

Configure the correct context window in agent settings to track usage accurately.

3. Monitor Token Usage

O-series models use reasoning tokens; monitor costs carefully.

4. Use Read-Only Mode

Enable sandbox read-only for analysis tasks to prevent accidental modifications.

Troubleshooting

If codex is not found, verify the installation:
which codex
codex --version
If you see authentication errors, set the environment variable:
export OPENAI_API_KEY=sk-...
Or configure the key in ~/.codex/config.toml.
If resume fails, ensure you're providing a prompt:
codex exec resume thread_abc123 -- "Continue the task"
