
Overview

The installed command connects to your local Ollama instance, retrieves all downloaded models, and scores each one for compatibility with your current hardware. It is useful for auditing which models you have and identifying low-value or oversized models to remove.
llm-checker installed

Example Output

INSTALLED MODELS RANKING
────────────────────────────────────────────────────────────────────────────────
Sorted by: score | Hardware: 24GB RAM

 #   Model                      Size     Score      Use Case    Command
 ─   ─────────────────────────  ───────  ─────────  ──────────  ──────────────────
 🥇  qwen2.5-coder:14b          9.1GB    87/100     coding      ollama run qwen2.5-coder
 🥈  deepseek-r1:14b            9.0GB    83/100     reasoning   ollama run deepseek-r1
 🥉  llama3.2:3b                2.0GB    78/100     general     ollama run llama3.2
 4.  nomic-embed-text           274MB    74/100     embeddings  ollama run nomic-embed-text
 5.  llava:13b                  8.0GB    61/100     multimodal  ollama run llava

Consider removing these low-ranking models to free up space:
  ollama rm llava:13b  # Score: 61/100, Size: 8.0GB

Flags

--sort
string
Column to sort by. Accepted values: score, size, name. Default: score.
--json
flag
Output the ranked model list as JSON. Each entry includes name, size, fileSizeGB, quantization, useCase, score, and command.

Usage Examples

# Rank by compatibility score (default)
llm-checker installed

# Sort by file size
llm-checker installed --sort size

# Alphabetical listing
llm-checker installed --sort name

# JSON output for scripting
llm-checker installed --json
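
The --json output is the easiest form to script against. A minimal Python sketch, run here against a sample payload shaped like the documented fields (name, size, fileSizeGB, quantization, useCase, score, command); in practice you would feed it the output of llm-checker installed --json, and the sample values, the quantization strings, and the 70-point threshold are illustrative assumptions:

```python
import json

# Sample payload using the documented fields; all values are illustrative.
sample = json.loads("""[
  {"name": "qwen2.5-coder:14b", "size": "9.1GB", "fileSizeGB": 9.1,
   "quantization": "Q4_K_M", "useCase": "coding", "score": 87,
   "command": "ollama run qwen2.5-coder"},
  {"name": "llava:13b", "size": "8.0GB", "fileSizeGB": 8.0,
   "quantization": "Q4_0", "useCase": "multimodal", "score": 61,
   "command": "ollama run llava"}
]""")

# Suggest removal commands for models under an arbitrary score threshold.
low = [m for m in sample if m["score"] < 70]
for m in low:
    print(f'ollama rm {m["name"]}  # score {m["score"]}/100, {m["size"]}')
```

Piping real output works the same way: `llm-checker installed --json | python suggest_rm.py`, reading the array from stdin instead of the inline sample.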

How Scoring Works

Each installed model receives a compatibility score (0–100) based on:
  1. RAM fit ratio — how well the model’s file size fits within 80% of available system RAM
  2. Hardware tier match — whether the model size is optimal for your CPU/RAM tier
  3. Deterministic selector match — if the model appears in the main analysis, its score is averaged in
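
The RAM-fit component can be sketched roughly as follows. The 80%-of-RAM budget comes from the description above; the linear taper and rounding are illustrative assumptions, not llm-checker's actual formula:

```python
def ram_fit_score(file_size_gb: float, total_ram_gb: float) -> float:
    """Illustrative RAM-fit score: highest when the model comfortably fits
    within 80% of system RAM, dropping to 0 as it reaches or exceeds that
    budget. The taper shape is an assumption, not the tool's real formula."""
    budget = total_ram_gb * 0.8  # usable-RAM budget from the docs
    if file_size_gb >= budget:
        return 0.0
    # Linear taper: the smaller the model relative to the budget, the higher.
    return round(100 * (1 - file_size_gb / budget), 1)

# A 9.1 GB model on a 24 GB machine fits well inside the 19.2 GB budget.
print(ram_fit_score(9.1, 24))  # ≈ 52.6 under this illustrative taper
```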
Use case is inferred from the model name:
Name pattern                 Inferred use case
───────────────────────────  ─────────────────
code, coder, deepseek-coder  coding
embed, nomic, bge            embeddings
llava, vision, bakllava      multimodal
r1, qwq, reasoning           reasoning
chat, instruct               chat
wizard, creative             creative
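
The inference step amounts to substring matching against the model name. A minimal sketch using the patterns from the table above; the match order and the "general" fallback (seen in the example output for llama3.2) are assumptions:

```python
# Pattern → use case, checked in order; first match wins (order assumed).
USE_CASE_PATTERNS = [
    (("code", "coder", "deepseek-coder"), "coding"),
    (("embed", "nomic", "bge"), "embeddings"),
    (("llava", "vision", "bakllava"), "multimodal"),
    (("r1", "qwq", "reasoning"), "reasoning"),
    (("chat", "instruct"), "chat"),
    (("wizard", "creative"), "creative"),
]

def infer_use_case(model_name: str) -> str:
    name = model_name.lower()
    for patterns, use_case in USE_CASE_PATTERNS:
        if any(p in name for p in patterns):
            return use_case
    return "general"  # assumed fallback for unmatched names

print(infer_use_case("qwen2.5-coder:14b"))  # coding
print(infer_use_case("deepseek-r1:14b"))    # reasoning
print(infer_use_case("llama3.2:3b"))        # general
```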

Requirements

Ollama must be running and accessible. If Ollama is not detected, installed will exit with an error and a hint for starting it.
# Install a model first if none are present
ollama pull llama3.2:3b

# Then run installed
llm-checker installed
Models with a score below 50/100 are listed separately as candidates for removal. Use ollama rm <model> to free disk space.
