Synopsis
The `cost` command analyzes local conversation history to calculate token usage and estimated costs for Claude and Codex. It does not require web or CLI access, making it useful for offline cost tracking.
Parameters
Provider ID to analyze. Options:
- Specific provider ID (e.g., `codex`, `claude`)
- `both` - Analyze both Codex and Claude
- `all` - Analyze all providers with local conversation history

Default: read from `~/.codexbar/config.json`.

Output format. Options:
- `text` - Human-readable output with cost summaries
- `json` - Machine-readable JSON output with detailed token breakdowns
Ignore cached conversation scans and re-analyze all local data. Useful when conversation files have been updated.
Pretty-print JSON output with indentation.
Emit machine-readable JSONL logs on stderr.
Suppress non-JSON output; errors become JSON payloads.
Enable verbose logging.
Set log level: `trace`, `verbose`, `debug`, `info`, `warning`, `error`, or `critical`.

Disable ANSI colors in text output.
Show help message.
Show version information.
Response Fields
JSON Output Structure
The `cost` command returns an array of provider cost payloads:

Provider ID (e.g., `codex`, `claude`).

Data source: always `local` for the `cost` command.

ISO 8601 timestamp of when the cost analysis was performed.
Total tokens used in current session.
Estimated cost in USD for current session.
Total tokens used in the last 30 days.
Estimated cost in USD for the last 30 days.
Array of daily token usage and cost breakdowns.
Aggregate token and cost totals.
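The structure described above can be consumed programmatically. The payload below is a hand-written illustration, and the exact field names (`sessionTokens`, `last30DaysCost`, etc.) are assumptions based on the field descriptions, not confirmed CLI output:

```python
import json

# Hypothetical payload mirroring the fields described above;
# the key names are assumptions, not verified CLI output.
payload = json.loads("""
[
  {
    "provider": "claude",
    "source": "local",
    "timestamp": "2024-06-01T12:00:00Z",
    "sessionTokens": 15000,
    "sessionCost": 0.12,
    "last30DaysTokens": 2500000,
    "last30DaysCost": 18.75,
    "daily": [
      {"date": "2024-06-01", "tokens": 15000, "cost": 0.12}
    ],
    "totals": {"tokens": 2500000, "cost": 18.75}
  }
]
""")

# Summarize each provider entry in the array.
for entry in payload:
    print(f"{entry['provider']}: {entry['last30DaysTokens']} tokens "
          f"over 30 days (~${entry['last30DaysCost']:.2f})")
```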
Examples
Default cost analysis (text format)
JSON output with pretty printing
Query specific provider
Force refresh (ignore cache)
Machine-readable output only
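The example headings above might correspond to invocations like the following. The binary name `codexbar` and the flags `--format`, `--pretty`, and `--quiet` are assumptions inferred from the parameter descriptions; only `--refresh` is named explicitly in this document, so check `--help` for the actual spelling:

```
# Default cost analysis (text format)
codexbar cost

# JSON output with pretty printing (flag names assumed)
codexbar cost --format json --pretty

# Query a specific provider
codexbar cost claude

# Force refresh (ignore cache)
codexbar cost --refresh

# Machine-readable output only (suppress non-JSON output)
codexbar cost --format json --quiet
```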
Exit Codes
- 0: Success
- 1: Unexpected failure
- 2: Provider binary not found (not applicable for cost command)
- 3: Parse/format error
- 4: CLI timeout (not applicable for cost command)
Notes
Data Sources
- The `cost` command analyzes local conversation history stored by the menubar app
- Does not require web scraping, CLI tools, or API access
- Useful for offline cost tracking and budgeting
- Claude: Analyzes conversations from `~/Library/Application Support/Claude/`
- Codex: Analyzes conversations from OpenAI API usage logs
Cost Estimation
- Costs are estimates based on published pricing and may not reflect actual charges
- Pricing varies by model:
- Input tokens, output tokens, and cache tokens are priced differently
- Some models have tiered pricing or volume discounts
- Cache tokens (prompt caching) are typically cheaper than regular input tokens
- Actual costs depend on your provider plan and any custom pricing agreements
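The per-token-type arithmetic behind these estimates can be sketched as follows. The rates below are illustrative placeholders, not actual published provider prices:

```python
# Illustrative pricing in USD per 1M tokens (placeholder values only;
# real rates vary by provider, model, and plan).
PRICING = {
    "input": 3.00,        # regular input tokens
    "output": 15.00,      # output tokens
    "cache_read": 0.30,   # cache reads are typically much cheaper
    "cache_write": 3.75,  # cache creation can carry a surcharge
}

def estimate_cost(tokens: dict) -> float:
    """Estimate USD cost from a per-type token count."""
    return sum(
        tokens.get(kind, 0) / 1_000_000 * rate
        for kind, rate in PRICING.items()
    )

cost = estimate_cost(
    {"input": 200_000, "output": 50_000, "cache_read": 1_000_000}
)  # ≈ 1.65 USD with the placeholder rates above
```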
Token Counting
- Token counts are extracted from conversation metadata when available
- For older conversations without metadata, tokens may be estimated using tokenizer libraries
- Cache read/creation tokens are only counted for providers that support prompt caching (e.g., Claude)
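When metadata is missing, a crude fallback is to estimate from text length. The ~4 characters per token ratio below is a common rule of thumb for English text, not the tokenizer the app actually uses:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.
    A real implementation would use the provider's tokenizer instead."""
    return max(1, len(text) // 4)

estimate_tokens("Hello, world!")  # 13 chars -> 3
```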
Caching
- By default, conversation scans are cached for performance
- Use `--refresh` to ignore the cache and re-scan all conversations
- Cache is automatically invalidated when new conversations are added
Supported Providers
- Claude: Full support with prompt caching token breakdown
- Codex (OpenAI): Full support for GPT-4, GPT-4o, GPT-3.5-turbo, etc.
- Other providers may have limited or no local cost tracking
Time Windows
- Session: Current session’s token usage (resets with the provider’s session window)
- Last 30 days: Rolling 30-day window of token usage
- Daily breakdown includes all days with recorded usage in the last 30 days
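The rolling 30-day window amounts to a simple date filter. A minimal sketch, assuming daily usage keyed by ISO 8601 date strings:

```python
from datetime import date, timedelta

def last_30_days(daily: dict, today: date) -> dict:
    """Keep only the days inside the rolling 30-day window ending today."""
    cutoff = today - timedelta(days=30)
    return {
        day: tokens
        for day, tokens in daily.items()
        if date.fromisoformat(day) > cutoff
    }

usage = {"2024-05-01": 1000, "2024-05-20": 2000, "2024-06-01": 500}
window = last_30_days(usage, date(2024, 6, 1))
# "2024-05-01" falls outside the window and is dropped
```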
