Copy the example file and edit it with your configuration:

```bash
cp .env.example .env
```
Variables marked Yes under Required must be set for DeepTutor to start. Optional variables enable additional features or override defaults.

LLM settings

The primary language model used for all AI operations — chat, problem solving, research, idea generation, and more.
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| LLM_BINDING | Yes | openai | Provider binding. Options: openai, azure_openai, anthropic, deepseek, openrouter, groq, together, mistral, ollama, lm_studio, vllm, llama_cpp |
| LLM_MODEL | Yes | | Model name, e.g. gpt-4o, deepseek-chat, claude-3-5-sonnet |
| LLM_API_KEY | Yes | | API key for the LLM provider |
| LLM_HOST | Yes | | API endpoint URL, e.g. https://api.openai.com/v1 |
| LLM_API_VERSION | No | | API version string. Required for Azure OpenAI, e.g. 2024-02-15-preview |
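For a standard OpenAI setup, the LLM block of `.env` might look like the following sketch (the key and model are placeholders, not real credentials or recommendations):

```bash
# LLM settings: hypothetical values for a standard OpenAI setup
LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_API_KEY=sk-your-key-here
LLM_HOST=https://api.openai.com/v1
# LLM_API_VERSION is only needed with azure_openai, e.g.:
# LLM_API_VERSION=2024-02-15-preview
```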

Embedding settings

The embedding model powers the RAG (Retrieval-Augmented Generation) pipeline used by the knowledge base and search features.
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| EMBEDDING_BINDING | Yes | openai | Provider binding. Options: openai, azure_openai, jina, cohere, huggingface, google, ollama, lm_studio |
| EMBEDDING_MODEL | Yes | text-embedding-3-small | Embedding model name |
| EMBEDDING_API_KEY | Yes | | API key for the embedding provider |
| EMBEDDING_HOST | Yes | | API endpoint URL, e.g. https://api.openai.com/v1 |
| EMBEDDING_DIMENSION | Yes | 3072 | Vector dimensions. Must match the model's output dimensions |
| EMBEDDING_API_VERSION | No | | API version string. Required for Azure OpenAI |
The demo knowledge bases shipped with DeepTutor were built with text-embedding-3-large at dimensions = 3072. If you use a different model or dimension, create a new knowledge base — you cannot mix embeddings from different models in one index.
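To stay compatible with the bundled demo knowledge bases, an OpenAI embedding block could look like this sketch (placeholder key; model and dimension taken from the note above):

```bash
# Embedding settings matching the demo knowledge bases
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-3-large
EMBEDDING_API_KEY=sk-your-key-here
EMBEDDING_HOST=https://api.openai.com/v1
EMBEDDING_DIMENSION=3072
```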

Port and access settings

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| BACKEND_PORT | No | 8001 | Port for the FastAPI backend |
| FRONTEND_PORT | No | 3782 | Port for the Next.js frontend |
| NEXT_PUBLIC_API_BASE | No | | Frontend API URL. Set this for LAN or remote access, e.g. http://192.168.1.100:8001 |
| NEXT_PUBLIC_API_BASE_EXTERNAL | No | | External API URL for cloud or Docker deployments. Takes priority over NEXT_PUBLIC_API_BASE |
| DISABLE_SSL_VERIFY | No | false | Disable SSL certificate verification. Not recommended for production |

Choosing the right API base variable

The frontend needs to know the backend’s address. The startup scripts resolve it using the following priority order:
  1. NEXT_PUBLIC_API_BASE_EXTERNAL — use for cloud or Docker deployments where the backend is accessed via a public URL
  2. NEXT_PUBLIC_API_BASE — use for LAN or remote access from another device on the same network
  3. Default: http://localhost:{BACKEND_PORT} — works only when the browser and backend are on the same machine
For a simple home or office setup where you access DeepTutor from another device (e.g. 192.168.1.66:3782), set:

```bash
NEXT_PUBLIC_API_BASE=http://192.168.1.66:8001
```
For any cloud deployment (Docker on a remote server, VPS, etc.), you must set NEXT_PUBLIC_API_BASE_EXTERNAL to your server’s public URL. Without it, the browser will try to contact localhost:8001, which will fail.
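For example, a Docker deployment reachable at a public hostname (the URL below is hypothetical) would set:

```bash
# Cloud / Docker deployment: the browser must reach the backend via a public URL
NEXT_PUBLIC_API_BASE_EXTERNAL=https://api.deeptutor.example.com
```

This takes priority over NEXT_PUBLIC_API_BASE, so it is safe to leave both defined when switching between local and cloud deployments.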

Search provider settings

Enable web search for the Deep Research and Smart Solver modules.
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| SEARCH_PROVIDER | No | perplexity | Search provider. Options: perplexity, tavily, serper, jina, exa |
| SEARCH_API_KEY | No | | API key for the selected search provider |
Each provider requires its own API key. The SEARCH_API_KEY value is used for whichever provider SEARCH_PROVIDER is set to.
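For instance, switching from the default Perplexity to Tavily would look like this (placeholder key):

```bash
# SEARCH_API_KEY must hold a key issued by the provider named in SEARCH_PROVIDER
SEARCH_PROVIDER=tavily
SEARCH_API_KEY=your-tavily-key-here
```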

TTS settings

Enable audio narration in the Interactive IdeaGen (Co-Writer) module.
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| TTS_BINDING | No | openai | Provider binding. Options: openai, azure_openai |
| TTS_MODEL | No | tts-1 | TTS model name |
| TTS_API_KEY | No | | API key for the TTS provider. Can be the same as LLM_API_KEY when using OpenAI |
| TTS_URL | No | | API endpoint URL, e.g. https://api.openai.com/v1 |
| TTS_VOICE | No | alloy | Voice for audio output. Options: alloy, echo, fable, onyx, nova, shimmer |
| TTS_BINDING_API_VERSION | No | | API version string. Required for Azure OpenAI |
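A minimal sketch for OpenAI narration, reusing the same key as the LLM block (placeholder values):

```bash
# TTS settings: with the openai binding, the LLM key can be reused
TTS_BINDING=openai
TTS_MODEL=tts-1
TTS_API_KEY=sk-your-key-here
TTS_URL=https://api.openai.com/v1
TTS_VOICE=alloy
```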

HuggingFace / MinerU settings

These variables are optional and relate to the MinerU PDF parser used during knowledge base creation.
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| HF_ENDPOINT | No | | HuggingFace mirror endpoint, e.g. https://your-hf-mirror.example.com |
| HF_HOME | No | | HuggingFace cache directory. Recommended: /app/data/hf with a Docker volume mount to reuse downloaded models |
| HF_HUB_OFFLINE | No | | Set to 1 to force offline mode. Requires models already present in HF_HOME |
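A Docker setup that caches MinerU models across container restarts might use the following sketch (the mirror URL is a placeholder, as in the table above):

```bash
# Cache MinerU model downloads in a mounted volume
HF_HOME=/app/data/hf
# Optional: route downloads through a mirror
# HF_ENDPOINT=https://your-hf-mirror.example.com
# Optional: force offline mode, only once the models are already cached in HF_HOME
# HF_HUB_OFFLINE=1
```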
