## Overview

Codex (`codex`) provides intelligent code generation with support for the latest OpenAI models, including GPT-5.1, GPT-5.2, O3, and O4-mini. Codex supports session resume, a read-only sandbox mode, and image input, and it reports detailed usage statistics including reasoning tokens.
## Installation
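The body of this section did not survive extraction. Codex is typically installed globally from npm (an assumption based on the upstream CLI's distribution; check the official Codex docs for your platform):

```shell
# Install the Codex CLI globally via npm
npm install -g @openai/codex
```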
## Capabilities

Codex provides comprehensive AI coding features:

- Resume sessions with `exec resume <id>`
- Sandbox read-only mode with `--sandbox read-only`
- Structured JSONL output with the `--json` flag
- Thread IDs in `thread.started` events
- Attach images with the `-i, --image` flag
- Images embedded in prompt text on resume (no `-i` flag support)
- Sessions stored in `~/.codex/sessions/YYYY/MM/DD/*.jsonl`
- Token usage in `turn.completed` events
- Select models with the `-m, --model` flag
- Reasoning tokens for O3/O4-mini models
- Receive merged context via prompts
- Export session context for transfer

Source: `src/main/agents/capabilities.ts:176`
## Command-Line Arguments

Maestro uses the `exec` subcommand for batch operations:

- `exec`: batch execution mode (required for Maestro)
- `--json`: enable JSON output for parsing
- Auto-approve all operations (required by Maestro)
- Allow execution outside git repositories
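Putting the documented pieces together, a minimal batch invocation might look like the sketch below. The prompt is illustrative, and the auto-approve and git-check flags are omitted because their exact names are not shown on this page:

```shell
# One-shot batch task with newline-delimited JSON event output
codex exec --json "Summarize the TODO comments in this repository"
```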
## Resume Mode

Resume an existing session by passing the thread ID to resume.

Codex resume requires a prompt: you cannot resume without sending a message.
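As a sketch, a resume call could look like this, assuming the thread ID was captured from an earlier `thread.started` event (the ID and prompt here are invented):

```shell
# Resume a prior session; a prompt is mandatory
codex exec resume 0193a4f2-example-thread-id --json "Continue where you left off"
```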
## Read-Only Mode

For analysis without file modifications, pass `--sandbox read-only` to restrict file write access.
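For example, an analysis-only run might look like this (the prompt is illustrative):

```shell
# The read-only sandbox blocks file writes during the run
codex exec --sandbox read-only --json "Review src/ for unused exports"
```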
## Model Selection

Specify a model by its model ID (e.g., `gpt-5.3-codex`, `o3`, `o4-mini`).
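For instance, selecting the lightweight reasoning model with the documented `-m` flag (the prompt is illustrative):

```shell
codex exec -m o4-mini --json "Explain the failing test in tests/test_auth.py"
```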
## Working Directory

Set the working directory for execution.
## Image Input

Attach an image by passing the path to the image file.
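Using the documented `-i, --image` flag, an image-attached run might look like this (file name and prompt are illustrative):

```shell
# Attach a screenshot alongside the text prompt
codex exec --json -i screenshot.png "What does this error dialog mean?"
```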
Source: `src/main/agents/definitions.ts:140`
## Configuration

Configure Codex in Maestro's agent settings.

### Model Override

Specify a default model by model ID (empty = use the `config.toml` default). Examples: `gpt-5.3-codex`, `o3`, `gpt-4o`.

### Context Window

Set the context window for UI display (the token limit for the selected model):

- GPT-5.2/5.3: 400,000 tokens
- GPT-4o: 128,000 tokens
Source: `src/main/agents/definitions.ts:162`
## Session Storage

Codex stores sessions in date-organized JSONL files under `~/.codex/sessions/YYYY/MM/DD/*.jsonl`.

Source: `src/main/storage/codex-session-storage.ts`
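To inspect what is stored, you can list the session files directly (the glob mirrors the date layout described above):

```shell
ls ~/.codex/sessions/*/*/*/*.jsonl
```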
## Output Format

Codex outputs newline-delimited JSON events. Reasoning tokens are only present for O3/O4-mini models.
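Since the exact event schema is not reproduced on this page, here is an illustrative sketch of consuming such a stream. The event types (`thread.started`, `turn.completed`) are documented above, but the field layout is an assumption:

```shell
# Write two sample events (field layout is an assumption, not the real
# Codex schema), then pull out the completed-turn event carrying token usage.
printf '%s\n' \
  '{"type":"thread.started","thread_id":"abc123"}' \
  '{"type":"turn.completed","usage":{"input_tokens":1200,"output_tokens":340}}' \
  > events.jsonl
grep '"type":"turn.completed"' events.jsonl
```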
Source: `src/main/parsers/codex-parser.ts`
## Supported Models

Codex supports a wide range of OpenAI models.

### GPT-5 Series

- `gpt-5.1`: latest GPT-5 model
- `gpt-5.1-codex`: optimized for coding
- `gpt-5.1-codex-max`: extended context
- `gpt-5.2`: improved reasoning
- `gpt-5.3-codex`: latest coding variant

### O-Series (Reasoning)

- `o3`: advanced reasoning model
- `o4-mini`: lightweight reasoning

### GPT-4 Series

- `gpt-4o`: GPT-4 Omni
- `gpt-4-turbo`: fast GPT-4 variant
## Error Patterns

Common errors Maestro detects:

| Pattern | Solution |
| --- | --- |
| `invalid API key` | Set a valid `OPENAI_API_KEY` |
| `rate limit exceeded` | Wait or upgrade your API tier |
| `maximum context length` | Start a new session or use a model with a larger context |
| `insufficient quota` | Add credits to your OpenAI account |

Source: `src/main/parsers/error-patterns.ts`
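Maestro's actual matcher lives in the file referenced above; as a rough sketch, substring matching against the documented patterns could look like this (the sample error message is invented):

```shell
# Match a sample stderr line against the error patterns listed above
msg="openai: rate limit exceeded for gpt-5.3-codex"
for pat in "invalid API key" "rate limit exceeded" "maximum context length" "insufficient quota"; do
  case "$msg" in
    *"$pat"*) echo "matched: $pat" ;;
  esac
done
```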
## Usage with Maestro Features

- **Auto Run**: full support for playbooks and automated workflows
- **Group Chat**: multi-agent collaboration with other providers
- **Context Grooming**: export and merge conversation context
- **Session Discovery**: import sessions from `~/.codex/sessions/`

## Best Practices
### Choose the Right Model

- Use `gpt-5.3-codex` for coding tasks
- Use `o3` or `o4-mini` for complex reasoning
- Use `gpt-4o` for general tasks
## Troubleshooting

### Codex not detected

Verify installation:
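The verification command was lost in extraction; checking that the binary is on your `PATH` is a reasonable stand-in:

```shell
which codex
codex --version
```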
### API key not found

Set the `OPENAI_API_KEY` environment variable, or configure the key in `~/.codex/config.toml`.
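A minimal sketch of setting the key for the current shell session (the placeholder value is illustrative):

```shell
export OPENAI_API_KEY="sk-..."
```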
### Resume fails

Ensure you're providing a prompt:
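For example, always attach a message when resuming (the thread ID and prompt are invented):

```shell
# Resuming without a prompt fails; include a message every time
codex exec resume 0193a4f2-example-thread-id "Pick up from the last change"
```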