NoteWise reads configuration from ~/.notewise/config.env in KEY=VALUE format. The notewise setup wizard creates and populates this file on first run. You can also edit it directly with notewise edit-config. Load order (later sources override earlier ones):
  1. Code defaults
  2. ~/.notewise/config.env
  3. Environment variables
CLI flags passed to notewise process always take final precedence over both the config file and environment variables.
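The load order above can be sketched as a small resolver. `resolve_setting` and its arguments are illustrative names for this sketch, not part of the NoteWise API:

```python
import os

def resolve_setting(key, code_defaults, file_values, cli_flags):
    """Return the effective value of `key` following the load order:
    code defaults < config.env < environment variables < CLI flags."""
    value = code_defaults.get(key)
    if key in file_values:        # ~/.notewise/config.env
        value = file_values[key]
    if key in os.environ:         # environment variables
        value = os.environ[key]
    if key in cli_flags:          # e.g. --model on `notewise process`
        value = cli_flags[key]
    return value
```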

State directory

All persistent state — the config file, SQLite cache, and session logs — lives under ~/.notewise/ by default. Override this path before launching NoteWise:
export NOTEWISE_HOME=/path/to/custom/dir
NOTEWISE_HOME is the only setting that cannot appear in config.env itself; it must be set as an environment variable before the process starts.
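The resolution rule is simple enough to state in a few lines. This is a sketch of the behaviour described above, not NoteWise's actual code; `state_dir` is a hypothetical helper name:

```python
import os
from pathlib import Path

def state_dir() -> Path:
    """Resolve the state directory: NOTEWISE_HOME if set, else ~/.notewise.
    Read from the environment before anything else loads, which is why it
    cannot live in config.env itself."""
    override = os.environ.get("NOTEWISE_HOME")
    return Path(override) if override else Path.home() / ".notewise"
```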

LLM settings

DEFAULT_MODEL
string
default:"gemini/gemini-2.5-flash"
The LiteLLM model string used when --model is not passed to notewise process. The format is provider/model-name for most providers.
DEFAULT_MODEL=gemini/gemini-2.5-flash
See Provider reference for valid model strings per provider.
TEMPERATURE
float
default:"0.7"
LLM sampling temperature. Accepts values from 0.0 (deterministic) to 1.0 (most varied). Lower values produce more consistent, factual output; higher values produce more varied phrasing.
TEMPERATURE=0.7
Values outside the 0.0 to 1.0 range are rejected at startup with a ConfigurationError.
MAX_TOKENS
integer
Maximum tokens the LLM may generate per response. When unset, the model’s own default output limit applies. Must be a positive integer if specified.
MAX_TOKENS=4096
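The startup checks for these two settings can be sketched as follows. `validate_llm_settings` is a hypothetical helper written for this page; only the ConfigurationError name and the validation rules come from the documentation above:

```python
class ConfigurationError(ValueError):
    """Raised at startup when a setting is out of range."""

def validate_llm_settings(raw: dict) -> dict:
    """Check TEMPERATURE is within 0.0-1.0 and MAX_TOKENS, when present,
    is a positive integer. Returns the parsed settings."""
    settings = {}
    temperature = float(raw.get("TEMPERATURE", 0.7))
    if not 0.0 <= temperature <= 1.0:
        raise ConfigurationError(f"TEMPERATURE must be in [0.0, 1.0], got {temperature}")
    settings["TEMPERATURE"] = temperature
    if "MAX_TOKENS" in raw:
        max_tokens = int(raw["MAX_TOKENS"])
        if max_tokens <= 0:
            raise ConfigurationError(f"MAX_TOKENS must be positive, got {max_tokens}")
        settings["MAX_TOKENS"] = max_tokens
    return settings
```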

Provider API keys

Set only the key for your active provider; the others can stay unset. NoteWise syncs whichever keys are present into os.environ so that LiteLLM can pick them up automatically.
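The sync step amounts to copying configured keys into the process environment. This is an illustrative sketch, not NoteWise's implementation; `KNOWN_KEYS` and `sync_provider_keys` are names invented for it, and letting pre-set environment values win matches the load order described earlier:

```python
import os

KNOWN_KEYS = (
    "GEMINI_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GROQ_API_KEY",
    "XAI_API_KEY", "MISTRAL_API_KEY", "COHERE_API_KEY", "DEEPSEEK_API_KEY",
)

def sync_provider_keys(config: dict) -> None:
    """Copy provider keys from config.env into os.environ so LiteLLM can
    discover them. Keys already set in the environment are left alone."""
    for key in KNOWN_KEYS:
        if key in config and key not in os.environ:
            os.environ[key] = config[key]
```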
GEMINI_API_KEY
string
API key for Google Gemini. Obtain a free key from aistudio.google.com. Also used for Vertex AI models (vertex/ prefix).
GEMINI_API_KEY=AIza...
OPENAI_API_KEY
string
API key for OpenAI models (gpt-4o, gpt-4o-mini, o1, o3-mini, etc.).
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY
string
API key for Anthropic Claude models (claude-3-5-sonnet-20241022, claude-3-5-haiku, etc.).
ANTHROPIC_API_KEY=sk-ant-...
GROQ_API_KEY
string
API key for Groq’s fast inference API (groq/llama3-70b-8192, etc.).
GROQ_API_KEY=gsk_...
XAI_API_KEY
string
API key for xAI Grok models (xai/grok-2, etc.).
XAI_API_KEY=xai-...
MISTRAL_API_KEY
string
API key for Mistral AI models (mistral/mistral-large-latest, etc.).
MISTRAL_API_KEY=...
COHERE_API_KEY
string
API key for Cohere models (command-r-plus, etc.).
COHERE_API_KEY=...
DEEPSEEK_API_KEY
string
API key for DeepSeek models (deepseek/deepseek-chat, etc.).
DEEPSEEK_API_KEY=...

Output settings

OUTPUT_DIR
string
default:"./output"
Directory where generated study notes are written. Relative paths are resolved against the current working directory when notewise process is called.
OUTPUT_DIR=./output
Overridden per-run with -o / --output.
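Path resolution for this setting can be sketched as below; `resolve_output_dir` is a hypothetical helper illustrating the precedence and relative-path behaviour described above:

```python
from pathlib import Path

def resolve_output_dir(configured, cli_override=None):
    """Resolve the notes output directory: a -o/--output value wins over
    OUTPUT_DIR, and relative paths resolve against the current working
    directory."""
    chosen = cli_override if cli_override is not None else configured
    return Path(chosen).resolve()
```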

Concurrency settings

MAX_CONCURRENT_VIDEOS
integer
default:"5"
Maximum number of videos processed simultaneously when handling a playlist or batch file. Must be a positive integer.
MAX_CONCURRENT_VIDEOS=5
Lower this value if you encounter rate-limit errors or want to reduce API spend.
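The effect of this setting is a bounded worker pool. A minimal sketch using an asyncio semaphore, with `process_one` standing in for the real per-video pipeline:

```python
import asyncio

async def process_batch(video_ids, max_concurrent=5):
    """Process a batch with at most `max_concurrent` videos in flight,
    the behaviour MAX_CONCURRENT_VIDEOS controls."""
    semaphore = asyncio.Semaphore(max_concurrent)

    async def process_one(video_id):
        async with semaphore:
            await asyncio.sleep(0)  # placeholder for transcript + LLM work
            return video_id

    return await asyncio.gather(*(process_one(v) for v in video_ids))
```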
YOUTUBE_REQUESTS_PER_MINUTE
integer
default:"10"
Rate limit applied to outgoing YouTube transcript and metadata requests. Must be a positive integer.
YOUTUBE_REQUESTS_PER_MINUTE=10
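A requests-per-minute limit like this is typically enforced by spacing calls out over time. The class below is a minimal illustration of that idea, not NoteWise's actual rate limiter:

```python
import time

class RateLimiter:
    """Space calls so that at most `per_minute` requests go out per minute."""

    def __init__(self, per_minute: int):
        self.interval = 60.0 / per_minute
        self.next_allowed = 0.0

    def wait(self) -> None:
        """Block until another request may be sent."""
        now = time.monotonic()
        if now < self.next_allowed:
            time.sleep(self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.interval
```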

Transcript settings

YOUTUBE_COOKIE_FILE
string
Path to a Netscape-format browser cookie file used to authenticate YouTube requests. Required for age-restricted, members-only, or login-required videos.
YOUTUBE_COOKIE_FILE=/home/user/.config/yt-cookies.txt
Overridden per-run with --cookie-file. See VideoUnavailableError for the access-restriction reasons that this resolves.

Code-only defaults

The following settings are defined as code constants in _constants.py and are not currently exposed as config.env keys. They apply automatically on every run.
Setting                        Default        Description
DEFAULT_LANGUAGES              en             Preferred transcript language(s), comma-separated
DEFAULT_CHUNK_SIZE             4000 tokens    Transcript chunk size fed to the LLM
DEFAULT_CHUNK_OVERLAP          200 tokens     Overlap between adjacent transcript chunks
DEFAULT_CHAPTER_MIN_DURATION   3600 seconds   Minimum video length to trigger chapter-aware processing
MAX_CONCURRENT_CHAPTERS        3              Parallel chapter processing limit
These values can be read from the source at src/notewise/_constants.py. Support for configuring them via config.env may be added in a future release.
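The chunk-size and overlap constants describe a sliding window over the transcript. A sketch of that splitting scheme, using token lists for simplicity; `chunk_tokens` is an illustrative function, not the splitter in NoteWise's source:

```python
def chunk_tokens(tokens, chunk_size=4000, overlap=200):
    """Split a token sequence into chunks of `chunk_size`, with `overlap`
    tokens shared between neighbours (mirroring DEFAULT_CHUNK_SIZE and
    DEFAULT_CHUNK_OVERLAP)."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break
    return chunks
```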

Example config.env

config.env
# ~/.notewise/config.env
# Created by: notewise setup

# Model
DEFAULT_MODEL=gemini/gemini-2.5-flash

# Output
OUTPUT_DIR=./output

# Generation
TEMPERATURE=0.7
# MAX_TOKENS=4096

# Concurrency
MAX_CONCURRENT_VIDEOS=5
YOUTUBE_REQUESTS_PER_MINUTE=10

# Private video support
# YOUTUBE_COOKIE_FILE=/home/user/.config/yt-cookies.txt

# Provider API keys (set only the key for your active provider)
GEMINI_API_KEY=AIza...
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# GROQ_API_KEY=gsk_...
# XAI_API_KEY=xai-...
# MISTRAL_API_KEY=...
# COHERE_API_KEY=...
# DEEPSEEK_API_KEY=...
