ai-check
ai-check performs a multi-objective meta-evaluation of candidate models for a given category. It selects an evaluator model from your local Ollama instance and uses it to reason about which model is best for the specified task.
Example Output
Flags
- Task category for evaluation. Accepted values: `coding`, `reasoning`, `multimodal`, `general`. Default: `general`.
- Number of top candidate models to pass into the meta-evaluation. Default: `12`.
- Target context length in tokens. Used to bias selection towards models with suitable context windows. Default: `8192`.
- Evaluator model identifier. Use `auto` to let LLM Checker pick the best available local model. Default: `auto`.
- Weight (0.0–1.0) applied to the AI evaluation score when blending with the deterministic score. Default: `0.3`.

Usage Examples
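One way to picture what the AI-evaluation weight does is a linear blend of the two scores. This is a sketch under that assumption; `blend_scores` is a hypothetical helper for illustration, not the CLI's actual implementation:

```python
def blend_scores(deterministic: float, ai: float, ai_weight: float = 0.3) -> float:
    """Blend a deterministic score with an AI evaluation score.

    ai_weight plays the role of the 0.0-1.0 weight flag described above;
    0.3 mirrors the documented default. A linear blend is assumed here.
    """
    if not 0.0 <= ai_weight <= 1.0:
        raise ValueError("ai_weight must be between 0.0 and 1.0")
    return (1.0 - ai_weight) * deterministic + ai_weight * ai

# With the default weight, the deterministic score dominates:
print(blend_scores(deterministic=80.0, ai=60.0))  # 74.0
```

Raising the weight toward 1.0 shifts the ranking toward the evaluator model's judgment; lowering it toward 0.0 falls back to the deterministic score alone.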
ai-run
ai-run automatically selects the best available local model for a task and launches it via Ollama. If a prompt is supplied, it is passed directly to the model. With `--calibrated`, routing uses your calibration policy instead of the AI selector.
Example Output
Flags
- Task category hint. Used for model selection and calibrated routing task resolution. Accepted values: `coding`, `reasoning`, `multimodal`, `general`, `chat`, `creative`.
- Prompt to pass directly to the selected model at launch. If omitted, an interactive Ollama session starts.
- Enable calibrated routing. Optionally provide a file path. If omitted, auto-discovers from `~/.llm-checker/calibration-policy.{yaml,yml,json}`.
- Explicit calibration policy file. Takes precedence over `--calibrated`.
- Explicit list of model identifiers to select from, instead of all installed Ollama models.
Usage Examples
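A few invocation shapes, assuming the CLI binary is installed as `llm-checker` (the binary name, prompts, and file paths are illustrative; `--calibrated` and `--policy` are the flags documented above):

```shell
# Let the AI selector pick the best local model and start an interactive session
llm-checker ai-run

# Pass a prompt directly to the selected model
llm-checker ai-run "Explain this stack trace"

# Calibrated routing with auto-discovery from ~/.llm-checker/
llm-checker ai-run --calibrated

# Explicit calibration policy file (takes precedence over --calibrated)
llm-checker ai-run --policy ./calibration-policy.yaml "Refactor this function"
```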
Routing Precedence
| Source | Precedence |
|---|---|
| `--policy <file>` | Highest — explicit enterprise/calibration policy |
| `--calibrated <file>` | Second — explicit calibration file |
| `--calibrated` (no path) | Third — auto-discovery from `~/.llm-checker/` |
| AI selector | Fallback — heuristic multi-objective selection |
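The precedence table above can be read as a resolution function. The sketch below is a hypothetical reimplementation for illustration; `resolve_routing` and its parameter names are not part of the CLI:

```python
from pathlib import Path
from typing import Optional, Tuple

def resolve_routing(
    policy_file: Optional[str] = None,       # --policy <file>
    calibrated_file: Optional[str] = None,   # --calibrated <file>
    calibrated: bool = False,                # bare --calibrated
    config_dir: Path = Path.home() / ".llm-checker",
) -> Tuple[str, Optional[str]]:
    """Return (source, path) following the documented precedence order."""
    if policy_file:
        return ("policy", policy_file)
    if calibrated_file:
        return ("calibrated-file", calibrated_file)
    if calibrated:
        # Auto-discovery mirrors calibration-policy.{yaml,yml,json}
        for ext in ("yaml", "yml", "json"):
            candidate = config_dir / f"calibration-policy.{ext}"
            if candidate.exists():
                return ("calibrated-auto", str(candidate))
    return ("ai-selector", None)  # fallback: heuristic multi-objective selection
```

Note that an explicit `--policy` file wins even when `--calibrated` is also given, and a bare `--calibrated` that finds no policy file falls through to the AI selector rather than failing.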

