Settings are stored in `~/.notewise/config.env` in `KEY=VALUE` format. The NoteWise setup wizard creates and populates this file on first run. You can also edit it directly with `notewise edit-config`.
Load order (later sources override earlier ones):

- Code defaults
- `~/.notewise/config.env`
- Environment variables

Command-line flags passed to `notewise process` always take final precedence over both the config file and environment variables.
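The precedence just described can be pictured as a layered merge. The sketch below is illustrative only (the function name and dict-based layering are assumptions, not NoteWise internals):

```python
# Illustrative sketch of the documented load order; later layers override
# earlier ones, and None means "not set at this layer".
def resolve_settings(defaults, config_file, environment, cli_flags):
    merged = dict(defaults)
    for layer in (config_file, environment, cli_flags):
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

settings = resolve_settings(
    defaults={"temperature": 0.7},
    config_file={"temperature": 0.3},
    environment={},
    cli_flags={"temperature": None},  # flag not passed, so the config file wins
)
```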
State directory
All persistent state (the config file, SQLite cache, and session logs) lives under `~/.notewise/` by default. Override this path before launching NoteWise:
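For example (the directory below is illustrative; any writable path works):

```shell
# Relocate all NoteWise state: config file, SQLite cache, and session logs.
export NOTEWISE_HOME="$HOME/archive/notewise"

# Subsequent notewise commands in this shell now use the new location, e.g.:
#   notewise process <url>
```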
NOTEWISE_HOME is the only setting that cannot appear in config.env itself; it must be set as an environment variable before the process starts.
LLM settings
The LiteLLM model string used when `--model` is not passed to `notewise process`. The format is `provider/model-name` for most providers. See the Provider reference for valid model strings per provider.

LLM sampling temperature. Accepts values from 0.0 (deterministic) to 1.0 (most varied). Lower values produce more consistent, factual output; higher values produce more varied phrasing.

Maximum tokens the LLM may generate per response. When unset, the model's own default context limit is used. Must be a positive integer if specified.
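The documented bounds can be enforced with a small guard. This is an illustrative sketch, not NoteWise's actual validation code:

```python
# Illustrative bounds check for the two sampling settings described above.
def validate_sampling(temperature, max_tokens=None):
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    if max_tokens is not None and (not isinstance(max_tokens, int) or max_tokens <= 0):
        raise ValueError("max_tokens must be a positive integer when set")

validate_sampling(0.3, max_tokens=2048)  # a valid combination passes silently
```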
Provider API keys
Set the key that matches your chosen provider. Only the key for your active provider needs to be configured. NoteWise syncs whichever keys are present into `os.environ` so that LiteLLM can pick them up automatically.
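The sync step just described might look roughly like this; the function name and the `_API_KEY` suffix convention are illustrative assumptions, not NoteWise's actual code:

```python
import os

# Illustrative sketch: export any configured API keys so LiteLLM can
# discover them via its usual environment-variable lookup.
def sync_api_keys(config):
    for key, value in config.items():
        if key.endswith("_API_KEY") and value:
            os.environ.setdefault(key, value)

sync_api_keys({"EXAMPLE_API_KEY": "sk-demo-not-real"})  # illustrative key name
```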
- API key for Google Gemini. Obtain a free key from aistudio.google.com. Also used for Vertex AI models (`vertex/` prefix).
- API key for OpenAI models (`gpt-4o`, `gpt-4o-mini`, `o1`, `o3-mini`, etc.).
- API key for Anthropic Claude models (`claude-3-5-sonnet-20241022`, `claude-3-5-haiku`, etc.).
- API key for Groq's fast inference API (`groq/llama3-70b-8192`, etc.).
- API key for xAI Grok models (`xai/grok-2`, etc.).
- API key for Mistral AI models (`mistral/mistral-large-latest`, etc.).
- API key for Cohere models (`command-r-plus`, etc.).
- API key for DeepSeek models (`deepseek/deepseek-chat`, etc.).

Output settings
Directory where generated study notes are written. Relative paths are resolved against the current working directory when `notewise process` is called. Overridden per-run with `-o` / `--output`.

Concurrency settings
Maximum number of videos processed simultaneously when handling a playlist or batch file. Must be a positive integer. Lower this value if you encounter rate-limit errors or want to reduce API spend.
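One common way to enforce such a cap is a semaphore around the per-video step. This is an illustrative sketch, not NoteWise's implementation:

```python
import asyncio

# Illustrative sketch: a semaphore admits at most `max_concurrent` videos
# into the processing step at any one time; gather() preserves input order.
async def process_batch(urls, process_one, max_concurrent=3):
    sem = asyncio.Semaphore(max_concurrent)

    async def worker(url):
        async with sem:
            return await process_one(url)

    return await asyncio.gather(*(worker(u) for u in urls))

async def fake_process(url):  # stand-in for real per-video processing
    await asyncio.sleep(0)
    return f"notes for {url}"

results = asyncio.run(process_batch(["a", "b", "c"], fake_process, max_concurrent=2))
```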
Rate limit applied to outgoing YouTube transcript and metadata requests. Must be a positive integer.
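The setting's exact unit is not stated above; assuming it means requests per second, a minimal limiter sketch (not NoteWise internals) could look like this:

```python
import time

# Illustrative limiter: spaces calls at least 1/rate seconds apart.
class RateLimiter:
    def __init__(self, rate):
        if not isinstance(rate, int) or rate <= 0:
            raise ValueError("rate must be a positive integer")
        self.min_interval = 1.0 / rate
        self.last = float("-inf")

    def wait(self):
        now = time.monotonic()
        delay = self.last + self.min_interval - now
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()

limiter = RateLimiter(10)  # at most ~10 requests per second
start = time.monotonic()
for _ in range(3):
    limiter.wait()  # call before each outgoing request
elapsed = time.monotonic() - start
```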
Transcript settings
Path to a Netscape-format browser cookie file used to authenticate YouTube requests. Required for age-restricted, members-only, or login-required videos. Overridden per-run with `--cookie-file`. See `VideoUnavailableError` for the access-restriction reasons that this resolves.

Code-only defaults
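A per-run override looks like this (the cookie path and URL are illustrative placeholders):

```shell
notewise process --cookie-file ~/cookies/youtube-cookies.txt \
  "https://www.youtube.com/watch?v=VIDEO_ID"
```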
The following settings are defined as code constants in `_constants.py` and are not currently exposed as `config.env` keys. They apply automatically on every run.
| Setting | Default | Description |
|---|---|---|
| `DEFAULT_LANGUAGES` | `en` | Preferred transcript language(s), comma-separated |
| `DEFAULT_CHUNK_SIZE` | 4000 tokens | Transcript chunk size fed to the LLM |
| `DEFAULT_CHUNK_OVERLAP` | 200 tokens | Overlap between adjacent transcript chunks |
| `DEFAULT_CHAPTER_MIN_DURATION` | 3600 seconds | Minimum video length to trigger chapter-aware processing |
| `MAX_CONCURRENT_CHAPTERS` | 3 | Parallel chapter processing limit |
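The chunk size and overlap defaults combine as in this illustrative sketch (not NoteWise's actual splitter):

```python
# Illustrative fixed-window chunking: each chunk is `size` tokens and shares
# `overlap` tokens with its successor, matching the defaults in the table.
def chunk_tokens(tokens, size=4000, overlap=200):
    step = size - overlap
    return [tokens[i:i + size] for i in range(0, max(len(tokens) - overlap, 1), step)]

# Small numbers for demonstration: size 4, overlap 1.
chunks = chunk_tokens(list(range(10)), size=4, overlap=1)
```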
These values can be read from the source at `src/notewise/_constants.py`. Support for configuring them via `config.env` may be added in a future release.

Example config.env
config.env
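The fragment below only illustrates the `KEY=VALUE` shape described above; the key names are hypothetical placeholders, not NoteWise's actual keys. Run the setup wizard or `notewise edit-config` to see the real keys and their values.

```
# Illustrative only: key names are hypothetical placeholders.
EXAMPLE_MODEL=provider/model-name
EXAMPLE_TEMPERATURE=0.3
EXAMPLE_OUTPUT_DIR=./notes
```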