
Overview

The /model command allows you to switch between different AI models during your session. This is useful for matching the model to the task at hand, whether you need more capability, faster responses, or lower cost.

Usage

In Interactive Mode

qwen
> /model
This opens an interactive model selection dialog.

Via Command-Line Flag

Specify a model when starting Qwen Code:
qwen --model qwen-coder-plus

What It Does

The /model command:
  1. Displays Available Models: Shows all models you have access to
  2. Shows Current Model: Indicates which model is active
  3. Allows Selection: Lets you choose a new model
  4. Validates Access: Checks authentication and permissions
  5. Switches Seamlessly: Changes the model without losing context

Available Models

The available models depend on your authentication provider:

Qwen Models (DashScope)

qwen-coder-plus
  • Best for complex coding tasks
  • 256K context window
  • Optimized for multi-file edits
  • Highest quality reasoning
qwen --model qwen-coder-plus

OpenAI Models

When using OpenAI authentication:
# GPT-4 Turbo
qwen --model gpt-4-turbo --auth-type openai

# GPT-4
qwen --model gpt-4 --auth-type openai

# GPT-3.5 Turbo
qwen --model gpt-3.5-turbo --auth-type openai

Anthropic Models

When using Anthropic authentication:
# Claude 3 Opus
qwen --model claude-3-opus-20240229 --auth-type anthropic

# Claude 3 Sonnet
qwen --model claude-3-sonnet-20240229 --auth-type anthropic

# Claude 3 Haiku
qwen --model claude-3-haiku-20240307 --auth-type anthropic

Model Selection Dialog

When you run /model in interactive mode:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                    Select Model                                 
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  ● qwen-coder-plus      256K context, best for coding
    qwen-turbo           128K context, fastest
    qwen-max             200K context, balanced
    
Use ↑↓ to navigate, Enter to select, Esc to cancel
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Model Switching

During a Session

Switch models mid-conversation:
qwen
> Help me write a function
AI: [using qwen-turbo for quick response]

> /model
# Select qwen-coder-plus

> Now optimize this for performance  
AI: [using qwen-coder-plus for deep analysis]
Your conversation context is preserved when switching models.

Between Sessions

Set a default model:
// settings.json
{
  "ai": {
    "model": "qwen-coder-plus",
    "provider": "dashscope"
  }
}
Then start Qwen Code:
qwen  # Uses qwen-coder-plus by default

Choosing the Right Model

For Complex Code Generation

Use powerful models for sophisticated tasks:
qwen --model qwen-coder-plus --prompt "Refactor this entire module"
Best for:
  • Multi-file refactoring
  • Architecture decisions
  • Complex algorithms
  • Debugging difficult issues

For Quick Questions

Use fast models for simple queries:
qwen --model qwen-turbo --prompt "What does this function do?"
Best for:
  • Code explanations
  • Simple questions
  • Quick fixes
  • Documentation

For General Development

Use balanced models for everyday work:
qwen --model qwen-max
Best for:
  • Regular development
  • Code reviews
  • Feature implementation
  • Testing

Model Comparison

Model            Context  Speed   Quality  Cost  Best For
qwen-coder-plus  256K     Medium  Highest  $$$   Complex coding
qwen-max         200K     Medium  High     $$    General purpose
qwen-turbo       128K     Fast    Good     $     Quick tasks

Model Configuration

Via Settings File

// .qwen/settings.json
{
  "ai": {
    "model": "qwen-coder-plus",
    "provider": "dashscope",
    "temperature": 0.7,
    "maxTokens": 4096
  }
}

Via Environment Variables

export QWEN_MODEL="qwen-coder-plus"
qwen

Via Command-Line

qwen --model qwen-coder-plus
Precedence order:
  1. Command-line flags (highest)
  2. Environment variables
  3. Project settings
  4. Global settings (lowest)
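The precedence order above can be sketched as a small resolver. This is illustrative only, not Qwen Code's actual implementation: the `--model` flag and `QWEN_MODEL` variable appear elsewhere on this page, while `resolve_model`, `project_model`, and `global_model` are hypothetical names.

```shell
#!/bin/sh
# Sketch of the model-precedence order; names other than QWEN_MODEL are assumed.
resolve_model() {
  flag_model="$1"                                                # 1. command-line flag (highest)
  [ -n "$flag_model" ] && { echo "$flag_model"; return; }
  [ -n "$QWEN_MODEL" ] && { echo "$QWEN_MODEL"; return; }        # 2. environment variable
  [ -n "$project_model" ] && { echo "$project_model"; return; }  # 3. project settings
  echo "$global_model"                                           # 4. global settings (lowest)
}

global_model="qwen-max"
project_model="qwen-coder-plus"
QWEN_MODEL="qwen-turbo"

resolve_model "gpt-4-turbo"   # prints gpt-4-turbo (the flag wins)
resolve_model ""              # prints qwen-turbo (env var beats both settings files)
```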

Context Window Limits

Each model has a context window limit:
qwen
> /stats

Model: qwen-coder-plus
Context: 15,234 / 262,144 tokens (5.8%)
Larger context windows allow:
  • Longer conversations
  • More file content
  • Better cross-file understanding
  • Fewer compressions needed
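The figures in the /stats readout relate directly: a 256K window is 256 × 1024 = 262,144 tokens, so the 5.8% usage shown above can be checked with a one-liner (numbers taken from the example output):

```shell
# Percentage of the qwen-coder-plus context window used in the /stats example.
awk 'BEGIN {
  used = 15234; limit = 262144   # 256K = 256 * 1024 tokens
  printf "%.1f%% of the window used\n", 100 * used / limit
}'
# prints 5.8% of the window used
```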

Model Compatibility

Tool Support

All models support standard Qwen Code tools:
  ✅ File operations (read, write, edit)
  ✅ Shell commands (bash)
  ✅ Code search and navigation
  ✅ Git operations
  ✅ Web search

Feature Support

Some features may work better with certain models:
Feature           qwen-coder-plus  qwen-max  qwen-turbo
Multi-file edits  ⭐⭐⭐              ⭐⭐        ⭐
Code generation   ⭐⭐⭐              ⭐⭐        ⭐⭐
Explanations      ⭐⭐⭐              ⭐⭐⭐       ⭐⭐
Speed             ⭐⭐               ⭐⭐        ⭐⭐⭐
Cost efficiency   ⭐                ⭐⭐        ⭐⭐⭐

Custom Models

For custom or self-hosted models:
qwen --model custom-model \
     --openai-base-url https://your-api.com/v1 \
     --openai-api-key your-key
Configuration:
{
  "ai": {
    "provider": "openai",
    "model": "custom-model",
    "baseUrl": "https://your-api.com/v1",
    "apiKey": "${CUSTOM_API_KEY}"
  }
}
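The "${CUSTOM_API_KEY}" placeholder suggests settings values can reference environment variables rather than hard-coding secrets. A minimal sketch of that kind of expansion (assumed behavior, not necessarily how Qwen Code's settings loader works; the key value is fake):

```shell
#!/bin/sh
# CUSTOM_API_KEY holds a fake value purely for illustration.
export CUSTOM_API_KEY="sk-example-123"

raw='${CUSTOM_API_KEY}'           # the value as written in settings.json
expanded=$(eval "echo \"$raw\"")  # naive expansion; never eval untrusted input
echo "$expanded"                  # prints sk-example-123
```

Keeping the key in the environment means the settings file can be committed without leaking credentials.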

Troubleshooting

Model Not Available

Error: Model 'qwen-coder-plus' is not available
Check:
  1. Authentication is configured: /auth
  2. You have access to the model
  3. Model name is correct
  4. API key has proper permissions

Authentication Required

Error: Authentication type not available
Configure authentication first:
qwen
> /auth
# Configure your API key

> /model
# Now select a model

Model Switch Failed

Error: Failed to switch to model 'qwen-max'
Try:
  1. Check network connection
  2. Verify API credentials
  3. Test with a different model
  4. Restart Qwen Code

Best Practices

Begin with fast models for exploration:
qwen --model qwen-turbo
> Explore the codebase and ask questions

> /model  # Switch to qwen-coder-plus
> Now implement the complex feature

Choose models based on task complexity:
  • Simple questions: qwen-turbo
  • Code generation: qwen-coder-plus
  • General work: qwen-max

Configure appropriate defaults per project:
// For a large codebase
{
  "ai": {
    "model": "qwen-coder-plus"  // Need large context
  }
}

Track usage with different models:
> /stats model

Switch to cheaper models when appropriate.

Model-Specific Tips

Qwen Coder Plus

qwen --model qwen-coder-plus
  • Best for: Multi-file refactoring, complex implementations
  • Use when: Working on architecture or difficult problems
  • Tip: Provide broader context for better results

Qwen Turbo

qwen --model qwen-turbo
  • Best for: Quick answers, simple code generation
  • Use when: Iterating rapidly or asking questions
  • Tip: Keep prompts focused and specific

Qwen Max

qwen --model qwen-max
  • Best for: Everyday development tasks
  • Use when: General coding work
  • Tip: Good default for most scenarios

See Also

  • Authentication: Set up model provider authentication
  • Configuration: Configure default models and settings
  • Statistics: View model usage statistics
  • Model Comparison: Detailed model comparison and benchmarks