
Overview

The /stats command displays comprehensive statistics about your current Qwen Code session, including token usage, API calls, execution time, and tool usage.

Usage

Basic Usage

View overall session statistics:
qwen
> /stats

Alternative Names

The command can be invoked under either name:
  • /stats
  • /usage

Subcommands

View specific statistics:
# Model-specific statistics
> /stats model

# Tool-specific statistics
> /stats tools

Output Example

General Statistics

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                    Session Statistics                           
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Session ID:        a1b2c3d4-e5f6-7890-abcd-ef1234567890
Duration:          1h 23m 45s
Model:             qwen-coder-plus
Started:           2024-03-10 14:30:15

Tokens:
  Input:           45,234 tokens
  Output:          12,567 tokens
  Total:           57,801 tokens
  Context:         15,234 / 262,144 (5.8%)

API Calls:
  Total requests:  23
  Successful:      23
  Failed:          0
  Avg duration:    1.2s

Messages:
  User messages:   15
  AI responses:    15
  Tool calls:      47

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Model Statistics

> /stats model
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                    Model Statistics                             
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Current Model:     qwen-coder-plus
Provider:          dashscope
Context Window:    262,144 tokens

Usage:
  Input tokens:    45,234
  Output tokens:   12,567
  Total tokens:    57,801
  Context used:    15,234 (5.8%)

Costs (estimated):
  Input:           $0.45
  Output:          $0.25
  Total:           $0.70

Performance:
  Avg latency:     1.2s
  Tokens/sec:      847
  Cache hits:      12
  Cache misses:    11

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Tool Statistics

> /stats tools
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                    Tool Statistics                              
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Tool Usage Summary:

  read          15 calls    avg 0.2s    ✓ 15  ✗ 0
  write         8 calls     avg 0.3s    ✓ 8   ✗ 0
  edit          12 calls    avg 0.5s    ✓ 11  ✗ 1
  bash          7 calls     avg 1.2s    ✓ 7   ✗ 0
  glob          3 calls     avg 0.1s    ✓ 3   ✗ 0
  grep          2 calls     avg 0.2s    ✓ 2   ✗ 0

Total Tool Calls:  47
Successful:        46 (97.9%)
Failed:            1 (2.1%)
Avg duration:      0.4s

Most Used Tools:
  1. read       (31.9%)
  2. edit       (25.5%)
  3. write      (17.0%)
  4. bash       (14.9%)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
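The derived figures in this report are straightforward to reproduce. A minimal sketch using the counts from the example output above (values are illustrative):

```python
# Sketch: how the success rate and "Most Used Tools" percentages
# in the example output are derived (counts copied from the report).
calls = {"read": 15, "edit": 12, "write": 8, "bash": 7, "glob": 3, "grep": 2}
total = sum(calls.values())  # 47 total tool calls
successful = 46

success_pct = round(100 * successful / total, 1)
share = {name: round(100 * n / total, 1) for name, n in calls.items()}

print(success_pct)    # 97.9
print(share["read"])  # 31.9
```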

What It Shows

Session Information

  • Session ID: Unique identifier for this session
  • Duration: How long the session has been running
  • Model: Currently active AI model
  • Started: When the session began

Token Usage

  • Input tokens: Tokens sent to the model (prompts, context, tool results)
  • Output tokens: Tokens generated by the model (responses)
  • Total tokens: Sum of input and output
  • Context: Current context window usage
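These relationships are simple arithmetic. A quick sketch using the figures from the example output above (values are illustrative):

```python
# Sketch: how the totals shown by /stats relate (example values).
input_tokens = 45_234   # prompts, context, tool results
output_tokens = 12_567  # model responses
total_tokens = input_tokens + output_tokens

context_used = 15_234
context_window = 262_144
context_pct = 100 * context_used / context_window

print(total_tokens)           # 57801
print(round(context_pct, 1))  # 5.8
```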

API Metrics

  • Total requests: Number of API calls made
  • Success rate: Percentage of successful calls
  • Average duration: Mean API response time

Tool Metrics

  • Tool calls: Number of times each tool was invoked
  • Success rate: Tool execution success percentage
  • Performance: Average execution time per tool

Use Cases

Monitoring Token Usage

Track tokens to avoid hitting limits:
qwen
> /stats
Context: 45,000 / 262,144 (17.2%)

# Continue working...

> /stats
Context: 180,000 / 262,144 (68.7%)

# Time to compress
> /compress

Cost Tracking

Estimate costs for your session:
> /stats model
Costs (estimated): $2.45

Performance Analysis

Identify slow operations:
> /stats tools

bash     12 calls    avg 3.5s    # Slow!
read     45 calls    avg 0.1s    # Fast

Debugging

Check for failed operations:
> /stats tools

edit     8 calls    ✓ 6  ✗ 2   # 2 failures
Investigate the failures in your conversation history.

JSON Output

Get statistics in JSON format:
qwen --prompt "Some task" --output-format json
The final result includes stats:
{
  "type": "result",
  "isError": false,
  "durationMs": 45678,
  "apiDurationMs": 12345,
  "numTurns": 5,
  "usage": {
    "inputTokens": 1234,
    "outputTokens": 567,
    "totalTokens": 1801
  },
  "stats": {
    "sessionId": "abc123",
    "model": "qwen-coder-plus",
    "toolCalls": {
      "read": { "count": 5, "avgDuration": 0.2 },
      "write": { "count": 2, "avgDuration": 0.3 }
    }
  }
}
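If you consume this output programmatically, it parses with any standard JSON library. A minimal sketch, assuming the field names shown in the example above (they may differ across versions):

```python
import json

# Parse the result JSON shown above (structure copied from the example;
# field names are taken from it and may vary between Qwen Code versions).
result = json.loads("""
{
  "type": "result",
  "isError": false,
  "durationMs": 45678,
  "usage": {"inputTokens": 1234, "outputTokens": 567, "totalTokens": 1801},
  "stats": {
    "sessionId": "abc123",
    "toolCalls": {"read":  {"count": 5, "avgDuration": 0.2},
                  "write": {"count": 2, "avgDuration": 0.3}}
  }
}
""")

usage = result["usage"]
total_calls = sum(t["count"] for t in result["stats"]["toolCalls"].values())
print(usage["totalTokens"])  # 1801
print(total_calls)           # 7
```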

Real-Time Monitoring

In interactive mode, watch stats in real-time:
# Terminal 1: Run Qwen Code
qwen

# Terminal 2: Watch stats
watch -n 2 'qwen --prompt "/stats" --output-format json | jq .usage'
This displays live token usage updates.

Understanding Token Counts

What Counts as Tokens

Input tokens include:
  • Your messages and prompts
  • System instructions
  • File contents from tools
  • Previous conversation context
  • Tool call definitions
Output tokens include:
  • AI responses
  • Tool call requests
  • Reasoning text

Token Optimization

Reduce token usage:
# Before optimization
> /stats
Total tokens: 180,000

# Compress context
> /compress

# After optimization
> /stats
Total tokens: 25,000
See Context Management for more tips.

Session Comparison

Compare statistics across sessions:
# Session 1
qwen --session-id feature-a
> Build feature A
> /stats
Total tokens: 45,000
Duration: 30m

# Session 2
qwen --session-id feature-b  
> Build feature B
> /stats
Total tokens: 52,000
Duration: 45m
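To compare sessions programmatically, export each one's stats and diff the numbers. A sketch using the figures above (the dict fields here are illustrative, not the CLI's actual schema; in practice you would load the JSON written by --output-format json):

```python
# Compare token usage across two sessions (illustrative data inlined;
# field names are hypothetical, not the CLI's real output schema).
session_a = {"sessionId": "feature-a", "totalTokens": 45_000, "durationMin": 30}
session_b = {"sessionId": "feature-b", "totalTokens": 52_000, "durationMin": 45}

delta = session_b["totalTokens"] - session_a["totalTokens"]
rate_a = session_a["totalTokens"] / session_a["durationMin"]  # tokens/min
rate_b = session_b["totalTokens"] / session_b["durationMin"]

print(delta)          # 7000
print(round(rate_a))  # 1500
print(round(rate_b))  # 1156
```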

Exporting Statistics

Export stats to a file:
qwen --prompt "/stats" --output-format json > stats.json
Process with tools:
# Extract token usage
jq '.usage' stats.json

# Calculate costs
jq '.stats.estimatedCost' stats.json

# Generate report
jq -r '.stats | "Session: \(.sessionId)\nTokens: \(.totalTokens)\nDuration: \(.duration)"' stats.json
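The same report can be generated without jq. A Python sketch, assuming the field names used in the jq filter above (they are not guaranteed to match your version's output):

```python
# Sketch: turn exported stats into a short report without jq.
# Field names follow the jq example above and are assumptions.
stats = {"sessionId": "abc123", "totalTokens": 57801, "duration": "1h 23m 45s"}

report = (f"Session: {stats['sessionId']}\n"
          f"Tokens: {stats['totalTokens']}\n"
          f"Duration: {stats['duration']}")
print(report)
```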

Rate Limiting

Monitor API usage to avoid rate limits:
> /stats model

API Calls:
  Total requests:  450
  Requests/min:    450
  Rate limit:      500/min (90% used)
If approaching limits:
  • Slow down requests
  • Use more efficient prompts
  • Consider upgrading API tier
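A simple headroom check can flag this before you hit the limit. A sketch with illustrative numbers (substitute your provider's actual limit):

```python
# Sketch: rate-limit headroom check. The 500/min limit is illustrative;
# use the actual limit for your API tier.
requests_per_min = 450
rate_limit_per_min = 500

used_pct = 100 * requests_per_min / rate_limit_per_min
approaching = used_pct >= 80  # warn once 80%+ of the limit is consumed

print(round(used_pct))  # 90
print(approaching)      # True
```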

Troubleshooting

Missing Statistics

If stats are unavailable:
Session start time is unavailable, cannot calculate stats.
This happens when:
  • Session just started
  • Telemetry is disabled
  • Configuration issue
Enable telemetry:
{
  "telemetry": {
    "enabled": true
  }
}

Inaccurate Token Counts

Token counts are estimates. For a more precise breakdown, enable prompt logging:
qwen --telemetry-log-prompts true
This logs full prompts so you can analyze usage in detail.

High Token Usage

If token usage is unexpectedly high:
  1. Check context size: /stats
  2. Review recent operations
  3. Compress context: /compress
  4. Clear if needed: /clear

Best Practices

Check stats periodically:
# Every 10 messages
> /stats
This helps you:
  • Avoid hitting token limits
  • Track costs
  • Identify issues early
Record initial stats (shell redirection applies to non-interactive runs, not to commands typed inside the REPL):
qwen --prompt "/stats" --output-format json > baseline.json
# Work on your task
qwen --prompt "/stats" --output-format json > after.json
# Compare
diff baseline.json after.json
Monitor slow tools:
> /stats tools
Optimize or avoid slow tools when possible.
Track estimated costs:
> /stats model
Switch to cheaper models for simple tasks.

Integration Examples

CI/CD Cost Tracking

#!/bin/bash
# Track CI costs

qwen --prompt "Run tests and fix issues" --output-format json > result.json

# Extract cost
COST=$(jq -r '.stats.estimatedCost' result.json)
echo "AI usage cost: \$${COST}" >> ci-costs.log

Performance Dashboard

#!/bin/bash
# Collect stats every hour

while true; do
  DATE=$(date +%Y-%m-%d_%H:%M:%S)
  qwen --prompt "/stats" --output-format json > "stats_${DATE}.json"
  sleep 3600
done
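Once the collector has run for a while, the snapshots can be aggregated. A sketch, assuming each file has the "usage" shape from the JSON output example (helper names are hypothetical):

```python
import json
from pathlib import Path

# Sketch: aggregate the hourly stats_*.json snapshots collected above.
# Assumes each snapshot has the "usage" shape from the JSON output example.
def total_tokens(snapshots):
    """Sum usage.totalTokens over a list of parsed stats documents."""
    return sum(s.get("usage", {}).get("totalTokens", 0) for s in snapshots)

def load_snapshots(directory="."):
    """Load every stats_*.json file written by the collector loop."""
    return [json.loads(p.read_text())
            for p in sorted(Path(directory).glob("stats_*.json"))]

# Example with in-memory data instead of real files:
print(total_tokens([{"usage": {"totalTokens": 1801}},
                    {"usage": {"totalTokens": 2400}}]))  # 4201
```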

See Also

/compress

Reduce token usage with compression

Context Management

Understanding tokens and context

Cost Optimization

Tips for reducing AI costs

Performance

Optimizing performance and speed