1. Install NoteWise

Install NoteWise using your preferred method. uv is recommended for the fastest install and global availability:
uv tool install notewise
See the Installation page for standalone binary and Docker options.
2. Run the setup wizard

Run the interactive setup wizard once to store your LLM API key:
notewise setup
The wizard creates ~/.notewise/config.env and walks you through selecting a provider and entering your API key.
The default model is Gemini 2.5 Flash (gemini/gemini-2.5-flash). You can get a free API key at aistudio.google.com. NoteWise also supports OpenAI, Anthropic, Groq, Mistral, Cohere, DeepSeek, and xAI — see Configuration for all provider keys.
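For reference, the resulting config file is a plain env file. A minimal sketch (the variable names here are illustrative, not guaranteed — see Configuration for the exact keys NoteWise expects for your provider):

```shell
# ~/.notewise/config.env (illustrative sketch)
NOTEWISE_MODEL=gemini/gemini-2.5-flash
GEMINI_API_KEY=your-key-here
```

You can edit this file by hand later instead of re-running the wizard.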
3. Generate study notes

Pass a YouTube video URL, a playlist URL, or a batch file of URLs (one per line) to notewise process:
# Single video
notewise process "https://youtube.com/watch?v=VIDEO_ID"

# Full playlist
notewise process "https://youtube.com/playlist?list=PLAYLIST_ID"

# Batch file (one URL per line)
notewise process my-course-urls.txt -o ./course-notes
NoteWise shows a live progress dashboard while it fetches transcripts and generates notes. When it finishes, the Markdown files are in ./output/ (or the directory you specified with -o).
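A batch file is just plain text with one URL per line. For example, a hypothetical my-course-urls.txt:

```text
https://youtube.com/watch?v=VIDEO_ID_1
https://youtube.com/watch?v=VIDEO_ID_2
https://youtube.com/watch?v=VIDEO_ID_3
```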

Output structure

Generated files are organized by video title under your output directory:
output/
├── Video Title/
│   ├── study_notes.md           # Full study notes
│   ├── quiz.md                  # (optional, --quiz)
│   └── transcript.txt           # (optional, --export-transcript txt)
└── Another Video Title/
    └── study_notes.md
For chapter-aware videos (longer than 1 hour with defined chapters), notes are split into one file per chapter:
output/
└── Long Course Title/
    ├── Chapter 01 - Introduction.md
    ├── Chapter 02 - Core Concepts.md
    └── ...

Common options

The process command accepts several flags to override your config on a per-run basis:
Flag                  Short  Description
--model TEXT          -m     LLM model to use (e.g. gpt-4o, claude-3-5-sonnet-20241022)
--output PATH         -o     Output directory
--language TEXT       -l     Preferred transcript language (repeatable)
--temperature FLOAT   -t     LLM temperature, 0.0–1.0
--max-tokens INT      -k     Max tokens per LLM response
--force               -F     Re-process already-processed videos
--no-ui                      Plain stdout output for CI/cron
--quiz                       Also generate a multiple-choice quiz
--export-transcript          Export raw transcript as txt or json
--cookie-file PATH           Netscape cookies file for private videos
Run notewise process --help to see all options with full descriptions.
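The --no-ui and --force flags make unattended runs practical. One possible crontab entry that refreshes notes nightly (all paths here are illustrative):

```text
# Run at 03:00 every night, appending plain-text logs
0 3 * * * notewise process /home/me/course-urls.txt -o /home/me/course-notes --no-ui >> /home/me/notewise.log 2>&1
```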

Using a different model

You can override the model for any single run with --model. NoteWise uses LiteLLM internally, so any model string LiteLLM supports works here:
# Use GPT-4o
notewise process "https://youtube.com/watch?v=VIDEO_ID" --model gpt-4o

# Use Claude
notewise process "https://youtube.com/watch?v=VIDEO_ID" --model claude-3-5-sonnet-20241022

# Use Groq
notewise process "https://youtube.com/watch?v=VIDEO_ID" --model groq/llama3-70b-8192
Make sure the corresponding API key is set in your config file or exported as an environment variable. See Configuration for the full list of provider keys.
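For example, keys can be exported in your shell before a run instead of being stored in config.env. These variable names follow LiteLLM's standard provider conventions (the key values shown are placeholders):

```shell
export OPENAI_API_KEY="sk-..."         # for --model gpt-4o
export ANTHROPIC_API_KEY="sk-ant-..."  # for --model claude-3-5-sonnet-20241022
export GROQ_API_KEY="gsk_..."          # for --model groq/llama3-70b-8192
```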
