
Frequently Asked Questions

Find answers to common questions about Jan.

General

What is Jan?
Jan is an open-source ChatGPT alternative that brings the best of open-source AI to an easy-to-use product. Download and run large language models (LLMs) with full control and privacy. Everything runs locally on your device when you want it to.

Is Jan free?
Yes, Jan is completely free and open-source under the Apache 2.0 license. There are no subscription fees, hidden costs, or usage limits when running models locally.

How is Jan different from ChatGPT?
Jan runs entirely on your device, giving you complete privacy and control. You can use local AI models (such as Llama, Gemma, and Qwen) or connect to cloud services (OpenAI, Anthropic, Groq). No data leaves your device unless you choose to use cloud integrations.

Do I need an internet connection?
No, Jan works completely offline when using local models. You only need internet access to:
  • Download models initially
  • Use cloud integrations (OpenAI, Anthropic, etc.)
  • Download updates

Where is my data stored?
All data is stored locally on your device:
  • macOS: ~/Library/Application Support/Jan
  • Windows: C:\Users\%USERNAME%\AppData\Roaming\Jan
  • Linux: ~/.config/Jan
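For scripts that need to locate these folders, the defaults above can be resolved in code. A minimal Python sketch using only the paths listed above (a customized install may store data elsewhere):

```python
import os
import platform

def jan_data_dir() -> str:
    """Return Jan's default data directory for the current OS.

    Paths match the defaults listed in this FAQ; a customized
    install may store data elsewhere.
    """
    home = os.path.expanduser("~")
    system = platform.system()
    if system == "Darwin":  # macOS
        return os.path.join(home, "Library", "Application Support", "Jan")
    if system == "Windows":
        return os.path.join(os.environ.get("APPDATA", home), "Jan")
    return os.path.join(home, ".config", "Jan")  # Linux and others

print(jan_data_dir())
```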

Installation & Setup

What are the system requirements?
Minimum requirements:
  • macOS: 13.6+ (Ventura)
  • Windows: 10 or later
  • Linux: Most distributions supported
RAM requirements:
  • 8GB RAM: Run 3B parameter models
  • 16GB RAM: Run 7B parameter models
  • 32GB RAM: Run 13B+ parameter models
See System Requirements for full details.

How do I install Jan?
Download the installer for your platform from the Jan website. Jan is also available on the Microsoft Store and Flathub.

Do I need a GPU?
No, Jan works on CPU-only systems. However, a GPU significantly improves performance:
  • NVIDIA GPUs: Supported on Windows and Linux (CUDA 11.7+)
  • AMD GPUs: Supported on Windows
  • Intel Arc: Supported on Windows
  • Apple Silicon: Uses Metal acceleration on Mac

Can I run Jan with limited RAM?
Yes, choose smaller models that fit your RAM:
  • 8GB RAM: Use 3B models (3-4GB)
  • 16GB RAM: Use 7B models (6-8GB)
  • Keep model size under 80% of available RAM for best performance
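The 80% rule of thumb above is easy to check before downloading a model. A small Python sketch (the threshold comes from the guideline above):

```python
def fits_comfortably(model_size_gb: float, ram_gb: float) -> bool:
    """True if the model stays under 80% of available RAM, per the rule of thumb."""
    return model_size_gb <= 0.8 * ram_gb

# e.g. a ~6 GB 7B model in 16 GB of RAM is fine; a ~14 GB 13B model is not
```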

Models

Which models can I use?
Local models from HuggingFace:
  • Llama (Meta)
  • Gemma (Google)
  • Qwen (Alibaba)
  • Mistral
  • And many more open-source models
Cloud integrations:
  • OpenAI (GPT-4, GPT-3.5)
  • Anthropic (Claude)
  • Groq
  • Mistral API

How do I download a model?
  1. Open Jan
  2. Go to the Hub section
  3. Browse available models
  4. Click Download on your chosen model
  5. Wait for download to complete
  6. Start chatting!

How much disk space do models need?
Model sizes vary:
  • 3B models: 2-4GB
  • 7B models: 4-8GB
  • 13B models: 8-16GB
  • 70B+ models: 40GB+
Check the model card before downloading.

Can I use multiple models?
Yes, download as many models as you have disk space for. Switch between models at any time in the chat interface.

Can I import my own models?
Yes, Jan supports importing GGUF-format models from HuggingFace or other sources. Place them in your models directory and they’ll appear in Jan.
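To see which GGUF files are already in place, a folder can be scanned like this. A Python sketch; the exact subfolder layout Jan expects inside its data directory is an assumption, so check the import documentation for the current path:

```python
import pathlib

def find_gguf_files(models_dir: str) -> list[str]:
    """Return the file names of all .gguf models under models_dir, recursively."""
    root = pathlib.Path(models_dir)
    if not root.exists():
        return []
    return sorted(p.name for p in root.rglob("*.gguf"))
```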

Usage

How do I create a new thread?
  1. Go to the Thread section
  2. Click New Thread
  3. Configure assistant settings:
    • System prompt
    • Model selection
    • Parameters (temperature, max tokens, etc.)
  4. Save and start chatting

What can I do with the local API server?
Jan runs a local API server at localhost:1337 that’s compatible with OpenAI’s API format. This lets you use Jan with:
  • Code editors (Continue, Cursor)
  • Development tools
  • Custom applications
No internet required - everything runs locally.
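Because the server speaks the OpenAI chat-completions format, any HTTP client works. A standard-library Python sketch; the model id is a placeholder (use one you have downloaded in Jan), the `/v1/chat/completions` path is assumed from OpenAI compatibility, and Jan's server must be running:

```python
import json
import urllib.request

# Placeholder model id -- replace with a model you have downloaded in Jan.
payload = {
    "model": "llama3.2-3b-instruct",
    "messages": [{"role": "user", "content": "Explain nucleus sampling in one sentence."}],
    "temperature": 0.7,
    "max_tokens": 128,
}

def ask_jan(payload: dict, base_url: str = "http://localhost:1337/v1") -> str:
    """POST a chat completion to the local Jan server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same payload shape works from Continue, Cursor, or any other OpenAI-compatible client pointed at http://localhost:1337.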

Can I use Jan for coding?
Yes! Jan works great for coding tasks:
  • Use code-specialized models (CodeLlama, Qwen Coder)
  • Integrate with VS Code via the API
  • Get code explanations, debugging help, and more

How do I adjust model parameters?
In the right sidebar during a chat:
  • Temperature: Controls randomness (0 = focused, 1 = creative)
  • Max Tokens: Maximum response length
  • Top P: Nucleus sampling parameter
  • GPU Layers (ngl): How many layers run on GPU
Adjust based on your needs and hardware.

Privacy & Security

Is my data private?
Yes, when using local models:
  • All processing happens on your device
  • No data is sent to external servers
  • Chat history stays on your device
  • Models run completely offline
Cloud integrations (OpenAI, etc.) follow their respective privacy policies.

Can I use Jan for business?
Yes, Jan is suitable for enterprise use:
  • No external dependencies for local models
  • Complete data privacy
  • OpenAI-compatible API for integration
  • Open-source and auditable

Does Jan collect any data?
Jan includes optional telemetry that can be disabled in Settings. No personal data or chat content is collected; only anonymous usage statistics are gathered to help improve the product.

Troubleshooting

Jan won’t start. What should I do?
  1. Check the system requirements
  2. Update to the latest version
  3. Try a clean installation
  4. Check error logs in System Monitor

A model download keeps failing. What can I try?
  • Check your internet connection
  • Verify sufficient disk space
  • Try downloading from a different mirror
  • Resume interrupted downloads from the Hub

My GPU isn’t being detected or used. What now?
See the GPU troubleshooting guide for:
  • Driver installation
  • CUDA toolkit setup
  • GPU acceleration settings
  • Verified configurations

Development

Can I build Jan from source?
Yes! Prerequisites:
  • Node.js ≥ 20.0.0
  • Yarn ≥ 1.22.0
  • Make ≥ 3.81
  • Rust
Then clone the repository and start the dev build:
git clone https://github.com/janhq/jan
cd jan
make dev

How can I contribute?
Contributions are welcome; see the GitHub repository for how to get involved.

Can I create custom extensions?
Yes, Jan has an extension system. Check the documentation for details on creating custom extensions.

Still Have Questions?

Visit our Community page for support channels, or check the Troubleshooting guide for technical issues.
