
Config file location

NoteWise reads its configuration from:
~/.notewise/config.env
This file is created by notewise setup. It uses a simple KEY=VALUE format, one setting per line. Lines beginning with # are treated as comments.
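For illustration, a short config.env fragment in this format might look like (the keys shown are documented in "All configuration keys" below):

```shell
# LLM settings — lines starting with # are ignored
DEFAULT_MODEL=gemini/gemini-2.5-flash
TEMPERATURE=0.7
```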

Overriding the state directory

All persistent state (config, cache, logs) lives under ~/.notewise/ by default. To use a different directory, set the NOTEWISE_HOME environment variable before running any notewise command:
export NOTEWISE_HOME=/path/to/custom/dir
NoteWise will read $NOTEWISE_HOME/config.env and write all state files there instead.
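The directory resolution can be sketched in shell (a simplified illustration of the fallback behavior, not NoteWise's actual implementation):

```shell
# Fall back to ~/.notewise when NOTEWISE_HOME is unset or empty
state_dir="${NOTEWISE_HOME:-$HOME/.notewise}"
config_file="$state_dir/config.env"
echo "$config_file"
```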

Managing your config

NoteWise provides three commands to work with the config file:
# Run the interactive setup wizard to create or rewrite config
notewise setup

# Display the current resolved configuration (secrets masked)
notewise config

# Open the config file in your $EDITOR (or the OS default editor)
notewise edit-config
CLI flags always take precedence over values in the config file. For example, passing --model gpt-4o on the command line overrides whatever DEFAULT_MODEL is set to in config.env.

Load order

Settings are resolved in this order, with later sources overriding earlier ones:
  1. Code defaults (the values documented below)
  2. ~/.notewise/config.env (or $NOTEWISE_HOME/config.env)
  3. Environment variables
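As a rough sketch of this precedence in shell, with a CLI flag sitting above all three sources (the variable names here are illustrative, not NoteWise internals):

```shell
default_model="gemini/gemini-2.5-flash"   # 1. code default
file_model="gpt-4o"                       # 2. DEFAULT_MODEL from config.env (example value)
env_model="${DEFAULT_MODEL:-}"            # 3. environment variable, if exported
cli_model="${1:-}"                        # --model flag value, highest precedence

# Later sources override earlier ones when they are non-empty
model="$default_model"
[ -n "$file_model" ] && model="$file_model"
[ -n "$env_model" ]  && model="$env_model"
[ -n "$cli_model" ]  && model="$cli_model"
echo "$model"
```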

All configuration keys

LLM settings

DEFAULT_MODEL
string
default:"gemini/gemini-2.5-flash"
The LiteLLM-format model string used when --model is not passed on the command line. Any model supported by LiteLLM can be used here. Examples: gpt-4o, claude-3-5-sonnet-20241022, groq/llama3-70b-8192.
TEMPERATURE
number
default:"0.7"
LLM sampling temperature. Accepts values between 0.0 (deterministic) and 1.0 (more creative). Lower values produce more consistent, focused notes; higher values may introduce more varied phrasing.
MAX_TOKENS
number
Maximum number of tokens per LLM response. When unset, the model’s own default limit applies. Increase this value if notes are being truncated for very long video segments.

API keys

Set the key that corresponds to your chosen model's provider; keys for providers you are not using can be left unset.
GEMINI_API_KEY
string
API key for Google Gemini (and Vertex AI) models. Required when DEFAULT_MODEL starts with gemini/ or vertex/. Get a free key at aistudio.google.com.
OPENAI_API_KEY
string
API key for OpenAI models (e.g. gpt-4o, o3-mini).
ANTHROPIC_API_KEY
string
API key for Anthropic Claude models (e.g. claude-3-5-sonnet-20241022).
GROQ_API_KEY
string
API key for Groq-hosted models (e.g. groq/llama3-70b-8192).
XAI_API_KEY
string
API key for xAI Grok models (e.g. xai/grok-2).
MISTRAL_API_KEY
string
API key for Mistral models (e.g. mistral/mistral-large-latest).
COHERE_API_KEY
string
API key for Cohere models (e.g. command-r-plus).
DEEPSEEK_API_KEY
string
API key for DeepSeek models (e.g. deepseek/deepseek-chat).

Output

OUTPUT_DIR
string
default:"./output"
Directory where generated Markdown files are written. Relative paths are resolved from the current working directory at the time you run notewise process. Can be overridden per-run with --output / -o.

Concurrency

MAX_CONCURRENT_VIDEOS
number
default:"5"
Maximum number of videos processed in parallel when handling a playlist or batch file. Reduce this value if you hit rate limits from your LLM provider or YouTube.
YOUTUBE_REQUESTS_PER_MINUTE
number
default:"10"
Rate limit applied to YouTube transcript and metadata requests. Reduce this value if you encounter 429 Too Many Requests errors from YouTube.

Transcript and language

Path to a Netscape-format cookies .txt file. Required to access age-gated or login-required videos. Can be overridden per-run with --cookie-file.
The default transcript language (en) is a code default and is not configurable via config.env. Override it per-run with --language / -l.

Chunking and chapter generation (code defaults only)

These settings are fixed code defaults and are not read from config.env. They apply automatically on every run.
Setting | Default | Description
DEFAULT_CHUNK_SIZE | 4000 tokens | Maximum tokens per transcript chunk sent to the LLM
DEFAULT_CHUNK_OVERLAP | 200 tokens | Token overlap between consecutive chunks
DEFAULT_CHAPTER_MIN_DURATION | 3600 seconds | Minimum video duration that triggers per-chapter notes (1 hour)
MAX_CONCURRENT_CHAPTERS | 3 | Maximum chapters processed in parallel
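With a 200-token overlap, each chunk after the first advances 3800 tokens, so a transcript of N tokens needs roughly ceil((N − 200) / 3800) chunks. A quick shell check (this formula is inferred from the defaults above, not NoteWise's exact chunking algorithm):

```shell
n=10000          # transcript length in tokens (example)
size=4000        # DEFAULT_CHUNK_SIZE
overlap=200      # DEFAULT_CHUNK_OVERLAP
stride=$((size - overlap))
# Ceiling division: (n - overlap + stride - 1) / stride
chunks=$(( (n - overlap + stride - 1) / stride ))
echo "$chunks chunks"
```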

Example config file

A minimal ~/.notewise/config.env using the default Gemini model:
GEMINI_API_KEY=your_gemini_api_key_here
DEFAULT_MODEL=gemini/gemini-2.5-flash
OUTPUT_DIR=./output
TEMPERATURE=0.7
MAX_CONCURRENT_VIDEOS=5
YOUTUBE_REQUESTS_PER_MINUTE=10
To switch to OpenAI GPT-4o instead:
OPENAI_API_KEY=your_openai_api_key_here
DEFAULT_MODEL=gpt-4o
OUTPUT_DIR=./output
TEMPERATURE=0.7
Never commit your config.env file to version control. It contains your API keys. The file lives in your home directory (~/.notewise/) precisely to keep it out of project repositories.
