## Supported providers
| Provider | API key env var | Example model string | Notes |
|---|---|---|---|
| Google Gemini | GEMINI_API_KEY | gemini/gemini-2.5-flash | Default provider. Free tier available. Also accepts vertex/ prefix for Vertex AI. |
| OpenAI | OPENAI_API_KEY | gpt-4o | Supports gpt-4o, gpt-4o-mini, o1, o3-mini. |
| Anthropic | ANTHROPIC_API_KEY | claude-3-5-sonnet-20241022 | Supports Claude 3.5 Sonnet and Haiku. |
| Groq | GROQ_API_KEY | groq/llama3-70b-8192 | Very fast inference. Requires groq/ prefix. |
| xAI | XAI_API_KEY | xai/grok-2 | Requires xai/ prefix. |
| Mistral | MISTRAL_API_KEY | mistral/mistral-large-latest | Requires mistral/ prefix. |
| Cohere | COHERE_API_KEY | command-r-plus | Model name alone (no prefix) is sufficient. |
| DeepSeek | DEEPSEEK_API_KEY | deepseek/deepseek-chat | Requires deepseek/ prefix. |
## Model string format
LiteLLM model strings follow the pattern `provider/model-name`. NoteWise uses the prefix to determine which API key environment variable to load. If the model string has no `provider/` prefix, NoteWise falls back to prefix-matching on the model name itself (`gpt-*` → OpenAI, `claude-*` → Anthropic, `gemini-*` → Gemini, etc.).
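The lookup rule can be sketched as a small shell function. This is a simplified illustration based on the table above, not NoteWise's actual implementation:

```shell
# Map a LiteLLM model string to the API key env var NoteWise would load.
# Simplified sketch of the documented rule; not NoteWise's real source.
resolve_key_var() {
  case "$1" in
    # Explicit provider prefixes.
    gemini/*|vertex/*) echo "GEMINI_API_KEY" ;;
    openai/*)          echo "OPENAI_API_KEY" ;;
    groq/*)            echo "GROQ_API_KEY" ;;
    xai/*)             echo "XAI_API_KEY" ;;
    mistral/*)         echo "MISTRAL_API_KEY" ;;
    deepseek/*)        echo "DEEPSEEK_API_KEY" ;;
    # No provider prefix: fall back to matching the model name itself.
    gpt-*|o1*|o3*|o4*) echo "OPENAI_API_KEY" ;;
    claude-*)          echo "ANTHROPIC_API_KEY" ;;
    gemini-*)          echo "GEMINI_API_KEY" ;;
    command-*)         echo "COHERE_API_KEY" ;;
    *)                 echo "UNKNOWN" ;;
  esac
}
```

Only the variable for the provider you actually use needs to be set.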
## Provider details
### Google Gemini (default)

Gemini is the default provider. Gemini 2.5 Flash offers a generous free tier that covers most personal use without a billing account. Get a free API key at aistudio.google.com/app/apikey. Vertex AI is also supported and uses the same `GEMINI_API_KEY`.

Recommended model strings:
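For example (strings drawn from the table above; the `vertex/` line pairs the table's Vertex AI prefix with the same model name for illustration):

```
gemini/gemini-2.5-flash
vertex/gemini-2.5-flash
```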
### OpenAI
All current OpenAI chat and reasoning models work. The model string can be used with or without the `openai/` prefix. Reasoning models (the o1, o3, and o4 series) are detected automatically, and temperature is not forwarded to them, as those models do not accept a temperature parameter.

Recommended model strings:
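For example, the models listed in the table above:

```
gpt-4o
gpt-4o-mini
o1
o3-mini
```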
### Anthropic
Claude 3.5 models offer strong performance on long-context tasks like transcript summarization.

Recommended model strings:
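For example (the Sonnet string is from the table above; the Haiku string is the corresponding dated Haiku ID and should be checked against Anthropic's current model list):

```
claude-3-5-sonnet-20241022
claude-3-5-haiku-20241022
```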
### Groq
Groq’s LPU inference delivers very fast response times, which reduces end-to-end processing time for chunked transcripts. The `groq/` prefix is required.

Recommended model strings:
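For example, from the table above:

```
groq/llama3-70b-8192
```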
### xAI
Grok models from xAI. The `xai/` prefix is required.

Recommended model strings:
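For example, from the table above:

```
xai/grok-2
```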
### Mistral
Mistral AI models. The `mistral/` prefix is required.

Recommended model strings:
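For example, from the table above:

```
mistral/mistral-large-latest
```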
### Cohere
Cohere Command models. No prefix is required for `command-r-plus`.

Recommended model strings:
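For example, from the table above:

```
command-r-plus
```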
### DeepSeek
DeepSeek models. The `deepseek/` prefix is required. Recommended model string: `deepseek/deepseek-chat`.

## Configuring your API key
### Option 1: Setup wizard (recommended)
The interactive wizard prompts you to choose a provider and paste your API key, then writes it to `~/.notewise/config.env`:
### Option 2: Edit config.env directly
Add the key for your chosen provider to `~/.notewise/config.env`:
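For example, to use OpenAI (the key value is a placeholder; the `DEFAULT_MODEL` line assumes the default model described below is also set in this file):

```
# ~/.notewise/config.env
OPENAI_API_KEY=sk-your-key-here
DEFAULT_MODEL=gpt-4o
```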
### Option 3: Environment variable
Export the key in your shell before running NoteWise. Environment variables override `config.env`:
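For example, using Groq for one session (the key value is a placeholder):

```shell
# Set the provider key for this shell session only.
export GROQ_API_KEY="gsk-your-key-here"
# Optionally override the default model as well.
export DEFAULT_MODEL="groq/llama3-70b-8192"
```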
## Selecting a model per run
Use `--model` to override `DEFAULT_MODEL` for a single invocation without changing your config:
The `--model` flag takes the same LiteLLM model string format as `DEFAULT_MODEL`.
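For example (the subcommand and input filename here are illustrative; only the `--model` flag is documented above):

```
# Hypothetical invocation; substitute your actual NoteWise command.
notewise summarize lecture.mp3 --model groq/llama3-70b-8192
```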