Overview
The Model Context Protocol (MCP) is an open standard that allows AI models to interact with external tools and data sources. Jan implements MCP as a host, letting you connect your models to databases, APIs, web services, and custom tools through a unified interface.

MCP acts as a universal adapter between AI models and external tools, eliminating the need for custom integrations for each tool-model combination.
Why Use MCP?
Standardized Integration
One protocol for all tools - no custom connectors needed for each model-tool pair.
Extensible Capabilities
Give models access to real-time data, search engines, databases, and custom APIs.
Modular & Flexible
Swap models or tools without changing your integration code.
Open Ecosystem
Use community-built MCP servers or create your own custom tools.
How MCP Works
MCP uses a client-server architecture:

- Jan (MCP Host): Coordinates between your model and MCP servers
- MCP Servers: Provide tools, resources, and prompts
- AI Model: Decides when and how to use available tools
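Under the hood, host and servers exchange JSON-RPC 2.0 messages over the server's transport. As a simplified sketch (the tool name and arguments here are illustrative), a tool call from Jan to a server looks like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "latest MCP specification" }
  }
}
```

The server executes the tool and returns a result message, which Jan feeds back to the model as context.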
Quick Start
Prerequisites
Enable Experimental Features
- Go to Settings > General > Advanced
- Enable Experimental Features
- Restart Jan if prompted
Install Dependencies
MCP servers require Node.js or Python:
- Download Node.js (v18 or later)
- Download Python (3.10 or later)
Example: Browser MCP Setup
Let’s set up the Browser MCP server to give your models web browsing capabilities.

Add MCP Server
- Go to Settings > MCP Servers
- Click the + button in the upper right
- Enter configuration:
  - Server Name: browsermcp
  - Command: npx
  - Arguments: @browsermcp/mcp
  - Environment Variables: leave empty
- Click Save
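Taken together, the steps above amount to this server entry (values exactly as listed):

```
Server Name: browsermcp
Command: npx
Arguments: @browsermcp/mcp
Environment Variables: (empty)
```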
Install Browser Extension
- Open a Chromium browser (Chrome, Brave, Edge, Vivaldi)
- Visit Browser MCP Extension
- Click Add to Chrome/Browser
- Enable the extension in incognito/private windows
- Click the extension icon and connect to the MCP server
Enable Model Tool Calling
For Cloud Models (e.g., Claude):
- Go to Settings > Model Providers > Anthropic
- After entering your API key, click the + or edit button next to your model
- Enable the Tools capability

For Local Models (Llama.cpp):
- Go to Settings > Model Providers > Llama.cpp
- Click the edit button next to your model
- Enable the Tools capability
Popular MCP Servers
Search & Research
Serper
Google search integration with 2,500 free searches/month
Exa
AI-powered semantic search engine
Browser MCP
Automate web browsing and data extraction
Octagon
Deep research assistant for comprehensive analysis
Productivity
Todoist
Task management and to-do lists
Linear
Issue tracking and project management
Canva
Design and content creation
Data & Development
Jupyter
Execute Python code in Jupyter notebooks
E2B
Secure code execution sandbox
Adding MCP Servers
Configuration Format
All MCP servers follow this configuration pattern:

| Field | Description | Example |
|---|---|---|
| Server Name | Unique identifier for the server | serper, browsermcp |
| Command | Executable to run the server | npx, python, node |
| Arguments | Command arguments | @serper-mcp/server, -m jupyter |
| Environment Variables | Key-value pairs for configuration | API_KEY=your-key-here |
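As a sketch, here is how the table's fields combine for each server type; the package names and paths below are illustrative, so check each server's documentation for the exact values:

```
# npm-based server (run with npx)
Server Name: serper
Command: npx
Arguments: @serper-mcp/server
Environment Variables: SERPER_API_KEY=your-key-here

# Python-based server (run as a module)
Server Name: jupyter
Command: python
Arguments: -m jupyter
Environment Variables: (empty)

# Local script server (point the command at your own file)
Server Name: my-tools
Command: node
Arguments: /path/to/my-server.js
Environment Variables: (empty)
```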
NPM-based Servers
Many MCP servers are distributed via npm and launched with npx.

Python-based Servers
Some servers run as Python modules or scripts.

Local Script Servers
Run your own custom scripts as MCP servers.

Model Compatibility
Verifying Tool Calling Support
Cloud Models
Most cloud models support tool calling:

✅ Fully Supported:
- OpenAI: GPT-4o, GPT-4 Turbo, GPT-4, GPT-3.5 Turbo
- Anthropic: Claude Opus 4, Sonnet 4, Haiku 3.5
- Google: Gemini Pro, Gemini Ultra
- Mistral: Large, Medium models

⚠️ Limited or No Support:
- Older GPT-3 models
- Some specialized models
To enable tool calling:
- Go to Settings > Model Providers > [Provider]
- Click edit or + next to your model
- Enable Tools capability
Local Models
Tool calling support varies widely for local models:

✅ Good Tool Calling:
- Llama 3.3 70B
- Hermes 2 Pro (Mistral/Llama variants)
- Jan v1 (optimized for tool calling)
- Functionary models
- NousResearch Hermes

⚠️ Limited or Unreliable:
- Smaller models (< 7B parameters)
- Base models (non-instruct versions)
- Older model architectures
To enable tool calling:
- Go to Settings > Model Providers > Llama.cpp
- Click the edit button next to your model
- Enable Tools capability
- Test with simple tool calling tasks first
Managing MCP Servers
View Active Servers
Go to Settings > MCP Servers to see:

- Server name and status (connected/disconnected)
- Available tools from each server
- Resource count and types
- Connection logs
Enable/Disable Servers
Toggle servers on/off without deleting configuration:

- Find the server in Settings > MCP Servers
- Use the toggle switch to enable/disable
- Disabled servers won’t load or consume resources
Update Server Configuration
- Click the edit icon next to a server
- Modify settings (command, arguments, environment variables)
- Save changes
- Restart the server for changes to take effect
Remove Servers
- Find the server in Settings > MCP Servers
- Click the delete icon or three dots menu
- Confirm deletion
Security Considerations
Permission Model
Allow All MCP Tool Permission (toggle ON):
- ✅ Convenient - models can use tools automatically
- ⚠️ Less secure - no per-request approval
- Best for: trusted models, local development, single-user setups

Per-Request Approval (toggle OFF):
- ✅ More secure - approve each tool use
- ⚠️ Less convenient - interrupts workflow
- Best for: sensitive data, production systems, shared computers
Best Practices
Use Environment Variables
Store API keys in MCP server environment variables, not in prompts or model configs.
Limit API Access
Use API keys with minimal required permissions. Create read-only keys when possible.
Prompt Injection Risks
MCP servers can be vulnerable to prompt injection:

- Malicious input could trick models into misusing tools
- External data (web pages, documents) may contain adversarial prompts
- Models might perform unintended actions

Mitigations:
- Use models less susceptible to prompt injection (GPT-4, Claude)
- Review tool calls in server logs
- Implement rate limiting on sensitive APIs
- Use read-only access when possible
Building Custom MCP Servers
MCP Server Basics
An MCP server can provide:

- Tools: Functions the model can call (e.g., search, calculate, query a database)
- Resources: Data the model can access (e.g., files, databases, APIs)
- Prompts: Pre-built prompt templates for common tasks
Quick Example (Node.js)
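Production servers should use the official MCP SDK (@modelcontextprotocol/sdk), but the core idea fits in a short sketch. The dependency-free Node.js example below exposes a single hypothetical echo tool and answers MCP-style tools/list and tools/call requests; the message shapes follow the MCP tools methods, though a real server must also implement the full initialization handshake over its transport:

```javascript
// Minimal MCP-style tool server sketch in plain Node.js (no external packages).
// A real server should use @modelcontextprotocol/sdk instead of hand-rolling this.

const TOOLS = [
  {
    name: "echo", // hypothetical example tool
    description: "Echo the provided text back to the model",
    inputSchema: {
      type: "object",
      properties: { text: { type: "string" } },
      required: ["text"],
    },
  },
];

// Dispatch one JSON-RPC request to a result object.
function handleRequest(req) {
  switch (req.method) {
    case "tools/list":
      return { tools: TOOLS };
    case "tools/call":
      if (req.params.name === "echo") {
        return { content: [{ type: "text", text: req.params.arguments.text }] };
      }
      throw new Error(`unknown tool: ${req.params.name}`);
    default:
      throw new Error(`unsupported method: ${req.method}`);
  }
}

// With --serve, speak newline-delimited JSON-RPC over stdio (the transport
// Jan launches via the Command and Arguments fields in its MCP settings).
if (process.argv.includes("--serve")) {
  const readline = require("node:readline");
  const rl = readline.createInterface({ input: process.stdin });
  rl.on("line", (line) => {
    const req = JSON.parse(line);
    let reply;
    try {
      reply = { jsonrpc: "2.0", id: req.id, result: handleRequest(req) };
    } catch (err) {
      reply = { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: err.message } };
    }
    process.stdout.write(JSON.stringify(reply) + "\n");
  });
}

module.exports = { handleRequest };
```

To try it from Jan, you would register it with Command node and Arguments /path/to/server.js --serve.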
Resources
Troubleshooting
MCP Server Won't Connect
Symptoms: Server shows as disconnected in settings

Solutions:
- Verify Node.js or Python is installed correctly
- Check command and arguments for typos
- Review server logs in Jan for error messages
- Ensure required npm packages are accessible: npx @package-name/server --help
- Restart Jan completely
Model Doesn't Use Tools
Symptoms: Model ignores available MCP tools

Solutions:
- Verify Tools capability is enabled for the model
- Ensure the model supports tool calling (try Claude or GPT-4)
- Check that “Allow All MCP Tool Permission” is ON
- Be explicit in your prompt: “Use the search tool to find…”
- Try a different model known for good tool calling
Tools Fail to Execute
Symptoms: Model tries to use a tool but gets errors

Solutions:
- Check environment variables (API keys, config)
- Review server logs for specific error messages
- Verify API keys have sufficient credits/permissions
- Test the tool independently (e.g., curl for API-based tools)
- Ensure network connectivity for cloud-based tools
Vision Models Not Working with MCP
Symptoms: Browser MCP or screenshot tools fail

Solutions:
- Verify your model supports vision/images (not just tool calling)
- Enable Vision capability in model settings
- Use models known for vision: GPT-4o, Claude 4, Gemini Pro
- Check that image data is being passed correctly in server logs
Environment Variables Not Loading
Symptoms: MCP server can’t access API keys or config

Solutions:
- Check format: KEY=value (no quotes, no spaces around =)
- Use multiple lines for multiple variables
- Restart the MCP server after changing variables
- Verify variable names match server’s requirements
- Check server documentation for required variables
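For example, a server expecting an API key and a region (both variable names here are hypothetical) would be configured with one variable per line:

```
API_KEY=your-key-here
REGION=us-east-1
```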
Advanced Usage
Context Management
MCP servers consume context window space:

- Each active tool adds to context overhead
- Large tool responses count toward token limits
- Multiple MCP servers = more context usage
To manage context usage:
- Only enable servers you’re actively using
- Use models with larger context windows (32k+) for multiple tools
- Disable tools after completing relevant tasks
- Monitor context usage in conversation
Chaining Tools
Models can chain multiple MCP tools together. For example, a model might first call a search tool to find sources, then pass the results to a browser tool to extract details from each page.

Resource Access
Some MCP servers provide resources (not just tools):

- File system access
- Database connections
- Document repositories
- API endpoints
Next Steps
MCP Examples
Step-by-step guides for popular MCP servers
Model Parameters
Optimize models for better tool calling
Local Models
Find models with strong tool calling support
API Server
Use MCP-enabled models via API