Config file location
NoteWise reads its configuration from `~/.notewise/config.env`, created by `notewise setup`. The file uses a simple KEY=VALUE format, one setting per line. Lines beginning with `#` are treated as comments.
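For example, a minimal illustration of the format (the value shown is just an example taken from this page):

```shell
# This line is a comment; settings are plain KEY=VALUE pairs.
DEFAULT_MODEL=gpt-4o
```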
Overriding the state directory
All persistent state (config, cache, logs) lives under `~/.notewise/` by default. To use a different directory, set the NOTEWISE_HOME environment variable before running any notewise command. NoteWise will then read `$NOTEWISE_HOME/config.env` and write all state files there instead.
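For example (the directory path here is illustrative):

```shell
# Point NoteWise at an alternate state directory for this shell session.
export NOTEWISE_HOME="$HOME/projects/notewise-state"
# Any `notewise` command run afterwards in this shell reads
# $NOTEWISE_HOME/config.env and keeps its cache and logs under $NOTEWISE_HOME.
```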
Managing your config
NoteWise provides three commands to work with the config file.

CLI flags always take precedence over values in the config file. For example, passing `--model gpt-4o` on the command line overrides whatever `DEFAULT_MODEL` is set to in `config.env`.

Load order
Settings are resolved in this order, with later sources overriding earlier ones:

- Code defaults (the values documented below)
- `~/.notewise/config.env` (or `$NOTEWISE_HOME/config.env`)
- Environment variables
All configuration keys
LLM settings
The LiteLLM-format model string used when `--model` is not passed on the command line. Any model supported by LiteLLM can be used here. Examples: `gpt-4o`, `claude-3-5-sonnet-20241022`, `groq/llama3-70b-8192`.

LLM sampling temperature. Accepts values between 0.0 (deterministic) and 1.0 (more creative). Lower values produce more consistent, focused notes; higher values may introduce more varied phrasing.

Maximum number of tokens per LLM response. When unset, the model’s own default limit applies. Increase this value if notes are being truncated for very long video segments.
API keys
Set the key that corresponds to your chosen model’s provider. Only the key for the provider you are using needs to be present.

- API key for Google Gemini (and Vertex AI) models. Required when `DEFAULT_MODEL` starts with `gemini/` or `vertex/`. Get a free key at aistudio.google.com.
- API key for OpenAI models (e.g. `gpt-4o`, `o3-mini`).
- API key for Anthropic Claude models (e.g. `claude-3-5-sonnet-20241022`).
- API key for Groq-hosted models (e.g. `groq/llama3-70b-8192`).
- API key for xAI Grok models (e.g. `xai/grok-2`).
- API key for Mistral models (e.g. `mistral/mistral-large-latest`).
- API key for Cohere models (e.g. `command-r-plus`).
- API key for DeepSeek models (e.g. `deepseek/deepseek-chat`).

Output
Directory where generated Markdown files are written. Relative paths are resolved from the current working directory at the time you run `notewise process`. Can be overridden per-run with `--output` / `-o`.

Concurrency
Maximum number of videos processed in parallel when handling a playlist or batch file. Reduce this value if you hit rate limits from your LLM provider or YouTube.
Rate limit applied to YouTube transcript and metadata requests. Reduce this value if you encounter `429 Too Many Requests` errors from YouTube.

Transcript and language
Path to a Netscape-format cookies `.txt` file. Required to access age-gated or login-required videos. Can be overridden per-run with `--cookie-file`.

The default transcript language (`en`) is a code default and is not configurable via `config.env`. Override it per-run with `--language` / `-l`.

Chunking and chapter generation (code defaults only)
These settings are fixed code defaults and are not read from `config.env`. They apply automatically on every run.
| Setting | Default | Description |
|---|---|---|
| `DEFAULT_CHUNK_SIZE` | 4000 tokens | Maximum tokens per transcript chunk sent to the LLM |
| `DEFAULT_CHUNK_OVERLAP` | 200 tokens | Token overlap between consecutive chunks |
| `DEFAULT_CHAPTER_MIN_DURATION` | 3600 seconds | Minimum duration to trigger per-chapter notes (1 hour) |
| `MAX_CONCURRENT_CHAPTERS` | 3 | Maximum chapters processed in parallel |
Example config file
A minimal `~/.notewise/config.env` for a user using the default Gemini model:
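A sketch of such a file. The key name `GEMINI_API_KEY` follows LiteLLM’s convention and is an assumption here, as is the model string; check the key names your NoteWise version documents:

```shell
# ~/.notewise/config.env
# GEMINI_API_KEY is an assumed key name (LiteLLM's convention).
GEMINI_API_KEY=your-gemini-api-key-here

# Optional: pin the model explicitly (this model string is illustrative).
DEFAULT_MODEL=gemini/gemini-1.5-flash
```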