Welcome to Jan
Jan brings the best of open-source AI to an easy-to-use product. Download and run LLMs with full control and privacy.
Get started in under 5 minutes. Download, install, and chat with AI models locally.
Installation Guide
Platform-specific instructions for Windows, macOS, and Linux.
Model Context Protocol
Enable agentic capabilities with MCP integration for web search, productivity tools, and more.
API Reference
OpenAI-compatible local API server at localhost:1337 for integrations.
Why Jan?
Jan combines the power of open-source AI with privacy-first principles, giving you a ChatGPT-like experience that runs entirely on your machine.
100% Offline
Everything runs locally when you want it to. Your data never leaves your computer.
Local AI Models
Download and run Llama, Gemma, Qwen, GPT-oss, and more from Hugging Face.
Cloud Integration
Connect to GPT (OpenAI), Claude (Anthropic), Mistral, Groq, and other cloud models.
Custom Assistants
Create specialized AI assistants tailored to your specific tasks and workflows.
OpenAI-Compatible API
Local server at localhost:1337 lets other applications integrate with your models.
Free & Open Source
Apache 2.0 licensed. Inspect the code, contribute, or build your own extensions.
Key Features
Local AI Models
Download and run large language models from Hugging Face directly on your machine:
- Llama 3.1, 3.2, 3.3
- Gemma 2
- Qwen 2.5
- GPT-oss
- Jan v1 (4B parameter model optimized for reasoning)
- And hundreds more
Cloud Model Support
Connect to leading cloud AI providers:
- OpenAI: GPT-4, GPT-4 Turbo, GPT-3.5
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus
- Mistral AI: Mistral Large, Mistral Medium
- Groq: Ultra-fast inference
- Cohere: Command models
- Google: Gemini models
Model Context Protocol (MCP)
Extend Jan’s capabilities with agentic features:
- Web search integration (Serper, Exa)
- Productivity tools (Linear, Todoist)
- Data analysis (Jupyter, E2B)
- Design tools (Canva)
- And more community servers
Privacy First
When running local models:
- No internet connection required
- No data collection
- No telemetry
- Complete offline operation
- Full control over your data
System Requirements
Operating System
- macOS 13.6+
- Windows 10+
- Linux (most distributions)
Hardware
- 8GB RAM (for 3B parameter models)
- 10GB free disk space
- CPU with AVX2 support
  - Intel Haswell (2013) or newer
  - AMD Excavator (2015) or newer
Larger models require more RAM and VRAM. A 7B parameter model typically needs 16GB RAM or 8GB VRAM for smooth operation.
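A quick way to sanity-check whether a model will fit is to multiply its parameter count by the bytes per weight, plus headroom for the KV cache and runtime buffers. The helper below is a rough sketch; the 20% overhead factor is an assumption for illustration, not a Jan-specific figure:

```python
def estimated_memory_gb(params_billions: float,
                        bits_per_weight: int = 4,
                        overhead: float = 1.2) -> float:
    """Rough memory estimate: weights * bytes-per-weight * overhead.

    bits_per_weight: 16 for fp16, 8 or 4 for common quantized formats.
    overhead: assumed headroom for KV cache and runtime buffers.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billions * bytes_per_weight * overhead

# A 7B model in fp16 lands near the 16GB figure above;
# 4-bit quantization cuts that to roughly 4-5 GB.
print(f"7B fp16:  {estimated_memory_gb(7, 16):.1f} GB")
print(f"7B 4-bit: {estimated_memory_gb(7, 4):.1f} GB")
```

This is why quantized downloads are the usual choice on 8-16GB machines: the same model can shrink to a quarter of its fp16 footprint with modest quality loss.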
Quick Example
Once installed, using Jan is as simple as downloading a model and starting a chat.
Popular Use Cases
Code Assistant
Use models like Qwen Coder or GPT-4 to help with programming tasks, debugging, and code reviews.
Research & Writing
Run DeepSeek R1 or Claude for in-depth research, content creation, and document analysis.
Privacy-Sensitive Work
Process confidential data locally without cloud exposure; ideal for legal, medical, or financial work.
Development & Testing
Use the local API server (localhost:1337) to build and test AI-powered applications.
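Because the local server follows the OpenAI API convention, any OpenAI-style client can talk to it at `http://localhost:1337/v1`. Below is a minimal stdlib-only sketch; the model id `llama3.2-3b-instruct` is an example placeholder, so substitute an id that is actually installed in your Jan instance (listed via `GET /v1/models`):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1337/v1"  # Jan's default local server address

def build_payload(prompt: str, model: str) -> dict:
    """OpenAI-style chat completion body: a model id plus a message list."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, model: str = "llama3.2-3b-instruct") -> str:
    """POST one chat request to Jan and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With the server running (enabled from Jan's settings), calling `chat("Hello")` returns the model's reply as a string.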
Architecture
Jan is built on proven open-source technologies:
- llama.cpp: Efficient inference engine for LLMs
- Tauri: Lightweight cross-platform desktop framework
- Model Context Protocol: Standard for AI agent integration
- OpenAI-compatible API: Industry-standard API format
Community & Support
Discord Community
Join 30,000+ users in our Discord server. Get help in the #🆘|jan-help channel.
GitHub
Report issues, contribute code, or star the repo.
Changelog
See what’s new in each release.
Documentation
Comprehensive guides and API documentation.
Ready to get started? Head to the Quickstart Guide to download Jan and run your first AI model in minutes.
Next Steps
Quickstart
Install Jan and start your first chat in under 5 minutes.
Installation
Platform-specific installation instructions and GPU setup.