Overview
The /stats command displays comprehensive statistics about your current Qwen Code session, including token usage, API calls, execution time, and tool usage.
Usage
Basic Usage
View overall session statistics by running /stats with no arguments.

Alternative Names
The following aliases are available:
- /stats
- /usage
Subcommands
View specific statistics with the model and tools subcommands: /stats model, /stats tools.

Output Example
General Statistics
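The exact fields and layout vary by version; a general summary might look roughly like this (all values and placeholders are illustrative, not actual output):

```
Session ID: <session-id>
Started: 2025-01-15 09:30:12
Duration: 15m 23s
Model: <model-name>
Input tokens: 45,120
Output tokens: 8,342
Total tokens: 53,462
API requests: 24 (96% success, avg 1.8s)
```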
Model Statistics
Tool Statistics
What It Shows
Session Information
- Session ID: Unique identifier for this session
- Duration: How long the session has been running
- Model: Currently active AI model
- Started: When the session began
Token Usage
- Input tokens: Tokens sent to the model (prompts, context, tool results)
- Output tokens: Tokens generated by the model (responses)
- Total tokens: Sum of input and output
- Context: Current context window usage
API Metrics
- Total requests: Number of API calls made
- Success rate: Percentage of successful calls
- Average duration: Mean API response time
Tool Metrics
- Tool calls: Number of times each tool was invoked
- Success rate: Tool execution success percentage
- Performance: Average execution time per tool
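The token totals and metrics above feed directly into cost estimates. A minimal sketch, using hypothetical per-million-token prices (substitute your provider's actual rates):

```python
# Rough session cost estimate from /stats token totals.
# Prices are HYPOTHETICAL placeholders; check your provider's pricing.
INPUT_PRICE_PER_M = 0.50    # USD per 1M input tokens (placeholder)
OUTPUT_PRICE_PER_M = 1.50   # USD per 1M output tokens (placeholder)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return an approximate session cost in USD."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

print(f"${estimate_cost(45_120, 8_342):.4f}")  # → $0.0351
```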
Use Cases
Monitoring Token Usage
Track token consumption as you work so you do not hit context or quota limits unexpectedly.

Cost Tracking
Combine the input and output token totals with your provider's pricing to estimate what the session has cost so far.

Performance Analysis
Use the average API and per-tool durations to identify slow operations.

Debugging
Check the API and tool success rates to spot failed operations.

JSON Output
Statistics can also be produced in JSON format for programmatic processing.

Real-Time Monitoring
In interactive mode, run /stats periodically to watch usage evolve over the session.

Understanding Token Counts
What Counts as Tokens
Input tokens include:
- Your messages and prompts
- System instructions
- File contents from tools
- Previous conversation context
- Tool call definitions
Output tokens include:
- AI responses
- Tool call requests
- Reasoning text
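Exact counts come from the model's tokenizer, but a common rough heuristic (about four characters per token for English text) is enough for ballpark planning. A sketch of that heuristic, not the tokenizer Qwen Code actually uses:

```python
def rough_token_estimate(text: str) -> int:
    """Very rough token estimate: ~4 characters per token of English text.
    Real counts depend on the model's tokenizer and can differ substantially."""
    return max(1, len(text) // 4)

prompt = "Summarize the design decisions in src/parser.ts in three bullets."
print(rough_token_estimate(prompt))
```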
Token Optimization
To reduce token usage, keep prompts focused, run /compress when the context grows large, and start fresh sessions for unrelated tasks.

Session Comparison
Compare statistics across sessions to see how different workflows affect usage.

Exporting Statistics
Stats can be exported to a file for later analysis or record keeping.

Rate Limiting
Monitor API usage to stay under rate limits. If you are approaching them:
- Slow down requests
- Use more efficient prompts
- Consider upgrading API tier
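Slowing down requests can be as simple as enforcing a minimum interval between calls on the client side. A generic sketch of that technique (not a built-in Qwen Code feature):

```python
import time

class RequestPacer:
    """Enforce a minimum delay between successive API calls."""
    def __init__(self, min_interval_s: float):
        self.min_interval_s = min_interval_s
        self._last_call = 0.0

    def wait(self) -> None:
        """Sleep just long enough to honor the minimum interval."""
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval_s:
            time.sleep(self.min_interval_s - elapsed)
        self._last_call = time.monotonic()

pacer = RequestPacer(min_interval_s=1.0)
for _ in range(3):
    pacer.wait()
    # ... issue the API request here ...
```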
Troubleshooting
Missing Statistics
If stats are unavailable, possible causes include:
- The session just started
- Telemetry is disabled
- A configuration issue
Inaccurate Token Counts
Token counts are estimates and may differ slightly from the exact counts your provider bills for.

High Token Usage
If token usage is unexpectedly high:
- Check context size: /stats
- Review recent operations
- Compress context: /compress
- Clear if needed: /clear
Best Practices
Regular Monitoring
Check stats periodically with /stats. This helps you:
- Avoid hitting token limits
- Track costs
- Identify issues early
Baseline Measurements
Record initial stats at the start of a session so you have a baseline to compare later measurements against.
Tool Performance
Monitor which tools are slow, and optimize or avoid them when possible.
Cost Awareness
Track estimated costs as you work, and switch to cheaper models for simple tasks.
Integration Examples
CI/CD Cost Tracking
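The exact export mechanism depends on your setup; assuming session statistics have been saved to a JSON file (the file layout and field names here are hypothetical), a CI step can fail the build when the estimated cost exceeds a budget:

```python
import json

# HYPOTHETICAL stats file layout and field names; adapt to your actual export.
BUDGET_USD = 2.00
PRICE_PER_M_INPUT = 0.50   # placeholder pricing
PRICE_PER_M_OUTPUT = 1.50  # placeholder pricing

def check_budget(stats_path: str) -> bool:
    """Return True if the estimated session cost is within budget."""
    with open(stats_path) as f:
        stats = json.load(f)
    cost = (stats["input_tokens"] * PRICE_PER_M_INPUT
            + stats["output_tokens"] * PRICE_PER_M_OUTPUT) / 1_000_000
    print(f"Estimated session cost: ${cost:.4f} (budget ${BUDGET_USD:.2f})")
    return cost <= BUDGET_USD

# In a CI step, exit non-zero when over budget, e.g.:
#   raise SystemExit(0 if check_budget("session-stats.json") else 1)
```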
Performance Dashboard
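Similarly, per-session stats files (again assuming a hypothetical JSON layout) can be aggregated into a simple dashboard-style summary:

```python
import json
from pathlib import Path

def summarize_sessions(stats_dir: str) -> dict:
    """Aggregate per-session JSON stats files into dashboard totals.
    Assumes hypothetical fields: total_tokens, api_requests, avg_api_ms."""
    sessions = [json.loads(p.read_text()) for p in Path(stats_dir).glob("*.json")]
    if not sessions:
        return {"sessions": 0}
    return {
        "sessions": len(sessions),
        "total_tokens": sum(s["total_tokens"] for s in sessions),
        "total_requests": sum(s["api_requests"] for s in sessions),
        "avg_api_ms": sum(s["avg_api_ms"] for s in sessions) / len(sessions),
    }
```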
See Also
- /compress: Reduce token usage with compression
- Context Management: Understanding tokens and context
- Cost Optimization: Tips for reducing AI costs
- Performance: Optimizing performance and speed
