Overview
`installed` connects to your local Ollama instance, retrieves all downloaded models, and scores each one for compatibility with your current hardware. It is useful for auditing which models you have and for identifying low-value or oversized models to remove.
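Ollama serves a local HTTP API (on port 11434 by default), and the list of downloaded models is available from its `/api/tags` endpoint. A minimal sketch of the retrieval step, assuming the default endpoint; the `summarize_models` helper and its GB rounding are illustrative, not the tool's actual internals:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint


def summarize_models(payload: dict) -> list[dict]:
    """Reduce an /api/tags payload to model name and on-disk size in GB."""
    return [
        {"name": m["name"], "fileSizeGB": round(m["size"] / 1024**3, 2)}
        for m in payload.get("models", [])
    ]


def list_installed_models(base_url: str = OLLAMA_URL) -> list[dict]:
    """Fetch every locally downloaded model from a running Ollama instance."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return summarize_models(json.load(resp))
```

`list_installed_models()` requires a running Ollama server; `summarize_models` can be exercised on its own against a captured payload.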
Example Output
Flags
- Sort flag: Column to sort by. Accepted values: `score`, `size`, `name`. Default: `score`.
- JSON flag: Output the ranked model list as JSON. Each entry includes `name`, `size`, `fileSizeGB`, `quantization`, `useCase`, `score`, and `command`.

Usage Examples
How Scoring Works
Each installed model receives a compatibility score (0–100) based on:

- RAM fit ratio — how well the model's file size fits within 80% of available system RAM
- Hardware tier match — whether the model size is optimal for your CPU/RAM tier
- Deterministic selector match — if the model appears in the main analysis, its score is averaged in
The use case is inferred from patterns in the model name:

| Name pattern | Inferred use case |
|---|---|
| `code`, `coder`, `deepseek-coder` | coding |
| `embed`, `nomic`, `bge` | embeddings |
| `llava`, `vision`, `bakllava` | multimodal |
| `r1`, `qwq`, `reasoning` | reasoning |
| `chat`, `instruct` | chat |
| `wizard`, `creative` | creative |
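The name-pattern mapping above amounts to a first-match substring scan. A sketch under stated assumptions — the scan order and the `general` fallback for unmatched names are guesses, not documented behavior:

```python
# Pattern groups checked in order; first match wins (ordering is an assumption).
USE_CASE_PATTERNS = [
    (("code", "coder", "deepseek-coder"), "coding"),
    (("embed", "nomic", "bge"), "embeddings"),
    (("llava", "vision", "bakllava"), "multimodal"),
    (("r1", "qwq", "reasoning"), "reasoning"),
    (("chat", "instruct"), "chat"),
    (("wizard", "creative"), "creative"),
]


def infer_use_case(model_name: str) -> str:
    """Guess a model's use case from substrings of its name."""
    name = model_name.lower()
    for patterns, use_case in USE_CASE_PATTERNS:
        if any(p in name for p in patterns):
            return use_case
    return "general"  # fallback when no pattern matches (assumption)
```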
Requirements
Ollama must be running and accessible. If Ollama is not detected, `installed` exits with an error and a hint for starting it.
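You can perform the same detection yourself by probing Ollama's default endpoint (port 11434; the 2-second timeout is an arbitrary choice):

```python
import urllib.error
import urllib.request


def ollama_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=2):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, starting the server with `ollama serve` (or launching the Ollama app) should make it reachable.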
Models with a score below 50/100 are listed separately as candidates for removal. Use `ollama rm <model>` to free disk space.
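Combining the 50-point threshold with the removal command, a small helper could emit the cleanup commands for you to review (the helper and its input shape are hypothetical, loosely mirroring the `name` and `score` fields of the JSON output):

```python
def removal_commands(models: list[dict], threshold: int = 50) -> list[str]:
    """List `ollama rm` commands for models scoring below the threshold."""
    return [
        f"ollama rm {m['name']}"
        for m in models
        if m["score"] < threshold
    ]
```

Printing the commands rather than executing them keeps deletion an explicit, reviewable step.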
