LLM Checker reads a global configuration file at ~/.llm-checker.json that overrides built-in defaults for every invocation. No flags are required: set your preferences once and they apply globally.
File location
Create or edit the file at ~/.llm-checker.json.

Example configuration
A typical setup enables coding-optimized analysis with compact output and filters out models that are too large for typical developer machines.

Full schema
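The full set of keys described below can be sketched in a single file. Every key name in this sketch is an assumption inferred from the field descriptions on this page, not confirmed against the tool; check it against your installed version before relying on it:

```json
{
  "analysis": {
    "defaultUseCase": "coding",
    "performanceTest": false
  },
  "display": {
    "maxResults": 15,
    "compactMode": true
  },
  "filters": {
    "minScore": 75,
    "excludeModels": ["very-large-model", "uncensored"]
  }
}
```

The values shown match the example described above: coding-optimized scoring, compact tables, and oversized models filtered out before ranking.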
analysis
Controls how LLM Checker scores and benchmarks models.
The use case applied when no --use-case flag is passed. Affects scoring weights across the Quality, Speed, Fit, and Context dimensions. Accepted values: general, coding, chat, reasoning, creative, fast.

When true, LLM Checker runs an active benchmark against installed Ollama models instead of relying solely on hardware-estimated speed. Equivalent to passing --performance-test on every check call. Enable this only if you want real tok/s measurements rather than theoretical estimates.

display
Controls the appearance of the CLI output tables.
Maximum number of model rows shown in compatibility and recommendation tables. Increase this if you want to see a longer ranked list without passing --limit each time.

When true, reduces table padding and omits secondary detail rows. Useful on narrow terminals or when piping output to a log file.

filters
Pre-filters the model pool before scoring and ranking.
Minimum compatibility score (0–100) a model must achieve to appear in results. Models that fall below this threshold are silently dropped before the ranked list is printed. Setting this to 75 hides marginal models and surfaces only well-matched candidates.

An array of model name substrings to exclude from all results. Any model whose identifier contains one of these strings (case-insensitive) is dropped before scoring. Example: ["very-large-model", "uncensored"]

Minimal configuration
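For instance, to override a single display option and inherit everything else from the defaults, a minimal file might contain just one nested key (the key names here are assumptions inferred from this page's descriptions, not confirmed):

```json
{
  "display": {
    "compactMode": true
  }
}
```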
You do not need to include every key. LLM Checker merges your file with built-in defaults; only the keys you specify are overridden.

Validation
LLM Checker does not currently expose a config validate command. To verify your file, check that it parses as valid JSON.
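One generic way to do that is to run the file through any JSON parser. This snippet is a stand-in, not an LLM Checker subcommand; it uses a sample file rather than your real config, and the compactMode key inside it is an assumed name:

```shell
# Write a sample config, then parse it with Python's stdlib JSON parser.
# Any syntax error (e.g. a trailing comma or missing quote) makes
# json.tool exit non-zero instead of printing the success message.
cat > /tmp/llm-checker-sample.json <<'EOF'
{ "display": { "compactMode": true } }
EOF
python3 -m json.tool /tmp/llm-checker-sample.json > /dev/null && echo "valid JSON"
```

Point python3 -m json.tool at ~/.llm-checker.json directly to check your actual file.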

