
@ccusage/codex

Analyze OpenAI Codex CLI usage logs with the same reporting experience as ccusage. Track token usage, costs, and sessions for GPT-5 and other OpenAI models.
Beta: The Codex CLI support is experimental. Expect breaking changes until the upstream Codex tooling stabilizes.

Quick Start

npx @ccusage/codex@latest --help
Critical for bunx users: Bun 1.2.x’s bunx prioritizes binaries matching the package name suffix. If you have an existing codex command installed (e.g., GitHub Copilot’s codex), always use bunx @ccusage/codex@latest with the version tag.

Installation

Since npx @ccusage/codex@latest (or its bunx equivalent) is quite long to type repeatedly, set up a shell alias:
alias ccusage-codex='bunx @ccusage/codex@latest'
Then simply run:
ccusage-codex daily
ccusage-codex monthly --json

Data Source

The CLI looks for Codex session JSONL files under:
  • Default: ~/.codex/sessions/
  • Custom: Set CODEX_HOME environment variable
Usage data is read from event_msg lines whose payload.type is "token_count".
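As a sketch, you can filter these events directly with standard tools. The sample line below is a minimal illustration of the shape described above, not a complete real session record:

```shell
# Minimal sketch: count token_count events in a Codex-style session file.
# The sample line is illustrative; real session records carry more fields.
tmp=$(mktemp -d)
cat > "$tmp/session.jsonl" <<'EOF'
{"type":"event_msg","payload":{"type":"token_count","input_tokens":1200,"cached_input_tokens":800,"output_tokens":350}}
{"type":"event_msg","payload":{"type":"other"}}
EOF
grep -c '"token_count"' "$tmp/session.jsonl"   # → 1
```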

Common Commands

Daily Usage Report

View token usage and costs grouped by date:
npx @ccusage/codex@latest daily

Date Range Filtering

Filter reports by specific date ranges:
npx @ccusage/codex@latest daily --since 20250911 --until 20250917
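The --since/--until values can also be computed in a wrapper script. A sketch for a rolling seven-day window (GNU date syntax shown; on macOS use `date -v-7d +%Y%m%d`):

```shell
# Build a rolling 7-day window in YYYYMMDD form (GNU date syntax)
since=$(date -d '7 days ago' +%Y%m%d)
until_date=$(date +%Y%m%d)
echo "npx @ccusage/codex@latest daily --since $since --until $until_date"
```

Running the echoed command reports the last week of usage.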

JSON Output

Export structured data for scripting or integration:
npx @ccusage/codex@latest daily --json
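The JSON can then be post-processed with standard tools. The sample document and field names below are hypothetical, for illustration only; inspect your actual --json output for the real schema:

```shell
# Hypothetical sample of what a daily --json report might contain;
# "totalCost" is an assumed field name, not the CLI's documented schema.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[{"date":"2025-09-11","totalCost":0.41},{"date":"2025-09-12","totalCost":1.07}]
EOF
# Sum the assumed cost field across entries
grep -o '"totalCost":[0-9.]*' "$tmp" | awk -F: '{s += $2} END {printf "%.2f\n", s}'   # → 1.48
```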

Monthly Report

View usage aggregated by month:
npx @ccusage/codex@latest monthly
npx @ccusage/codex@latest monthly --json

Session Report

Detailed breakdown by individual sessions:
npx @ccusage/codex@latest sessions

Token Fields

The analyzer tracks these token types:
Field                     Description                              Billing
input_tokens              Prompt tokens sent this turn             Priced at input rate minus cached share
cached_input_tokens       Prompt tokens served from cache          Priced at cached input rate (cheaper)
output_tokens             Completion tokens, including reasoning   Priced at output rate
reasoning_output_tokens   Structured reasoning breakdown           Informational only (included in output)
total_tokens              Cumulative total                         Sum of input + output

Features

Responsive Tables

Beautiful terminal tables shared with the ccusage CLI

Offline Pricing

Offline-first pricing cache with automatic LiteLLM refresh

Per-Model Tracking

Token and cost aggregation by model, including cached tokens

Multiple Reports

Daily, monthly, and session rollups with identical CLI options

Environment Variables

Variable     Description                                      Default
CODEX_HOME   Override root directory for Codex sessions       ~/.codex
LOG_LEVEL    Control log verbosity (0 = silent … 5 = trace)   -
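For example, to point the CLI at a non-default sessions root (the path below is illustrative):

```shell
# CODEX_HOME overrides the default ~/.codex root; session JSONL files
# are then expected under $CODEX_HOME/sessions/
export CODEX_HOME="$HOME/backups/codex"
echo "sessions will be read from $CODEX_HOME/sessions"
```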

Model Support

GPT-5 and Aliases

The CLI supports GPT-5 models and automatically resolves aliases:
  • gpt-5-codex → Maps to LiteLLM’s gpt-5 pricing
  • gpt-5 → 1M token context windows
  • Automatic fallback for legacy sessions without model metadata

Pricing Notes

Example GPT-5 pricing (as of 2025-08-07):
  • Input: $1.25 per 1M tokens
  • Cached Input: $0.125 per 1M tokens (10x cheaper)
  • Output: $10 per 1M tokens
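At these rates, a turn's cost is the non-cached prompt tokens at the input rate, plus cached tokens at the cached rate, plus completion tokens at the output rate. A worked sketch with illustrative token counts:

```shell
# Illustrative turn: 1,200 prompt tokens (800 of them cached), 350 output tokens
awk 'BEGIN {
  uncached = 1200 - 800           # prompt tokens billed at the full input rate
  cost  = uncached * 1.25 / 1e6   # input
  cost += 800 * 0.125 / 1e6       # cached input
  cost += 350 * 10 / 1e6          # output
  printf "%.6f\n", cost           # → 0.004100
}'
```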
Sessions from early September 2025 that lack model metadata are treated as gpt-5 and marked with isFallback: true in JSON output. Pricing for these sessions is approximate.

Legacy JSONL Handling

For legacy JSONL files missing turn_context metadata:
  • Tokens are treated as gpt-5 for visibility
  • Pricing is approximate (flagged in reports)
  • JSON output includes "isFallback": true on affected model entries

npm Package

View on npm registry

OpenAI Codex

Official Codex CLI repository

ccusage Family

Explore all related tools

Main Documentation

Full ccusage documentation
