macOS
Homebrew (recommended)
Install via Homebrew (upgrade later with `brew upgrade`):
curl install script
Alternatively, use the quick install script, which installs to `/usr/local/bin`:
Install without sudo (installs to ~/.local/bin):
Make sure `~/.local/bin` is in your `$PATH`. Add `export PATH="$HOME/.local/bin:$PATH"` to your shell profile if needed.
Linux
curl install script (recommended)
Install via the quick install script, which installs to `/usr/local/bin` (requires sudo):
Install without sudo (installs to ~/.local/bin):
Make sure `~/.local/bin` is in your `$PATH`. Add `export PATH="$HOME/.local/bin:$PATH"` to your `.bashrc` or `.zshrc`.
Manual installation
- Download the latest release for your architecture from GitHub Releases
- Extract the tarball:
- Move the binary to a directory in your `$PATH`:
- Verify installation:
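The four steps can be sketched end-to-end as below. Everything here is a stand-in: the snippet builds its own tarball in a scratch directory so it runs anywhere, whereas a real install extracts the GitHub Releases asset and moves the binary into a directory such as `/usr/local/bin` (with `sudo` if needed).

```shell
# Simulate the manual install in a scratch directory.
tmp="$(mktemp -d)"
mkdir -p "$tmp/bin"
# Create a stand-in for the downloaded release asset:
printf '#!/bin/sh\necho "llmfit (stub)"\n' > "$tmp/llmfit"
chmod +x "$tmp/llmfit"
tar -czf "$tmp/llmfit.tar.gz" -C "$tmp" llmfit
rm "$tmp/llmfit"

tar -xzf "$tmp/llmfit.tar.gz" -C "$tmp"   # 2. extract the tarball
mv "$tmp/llmfit" "$tmp/bin/"              # 3. move the binary into a $PATH directory
PATH="$tmp/bin:$PATH"                     #    ensure that directory is on PATH
llmfit                                    # 4. verify it runs (prints: llmfit (stub))
```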
Windows
Scoop (recommended)
Install via Scoop:
Manual installation
- Download the latest Windows release from GitHub Releases
- Extract the ZIP archive
- Add the extracted directory to your `PATH` environment variable
- Open a new terminal and verify:
Build from source
Build the latest development version from source using Rust.
Install Rust
If you don’t have Rust installed, get it from rustup.rs:
Ensure you have Rust 1.85+ (edition 2024 support).
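A quick way to confirm your toolchain meets that minimum is sketched below; `meets_min` is a hypothetical helper written for this page, not part of llmfit.

```shell
# Compare a rustc version string (e.g. "1.85.0") against the 1.85 minimum.
meets_min() {
  major="${1%%.*}"
  rest="${1#*.}"
  minor="${rest%%.*}"
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 85 ]; }
}

v="$(rustc --version 2>/dev/null | awk '{print $2}')"
if [ -n "$v" ] && meets_min "$v"; then
  echo "rustc $v is new enough"
fi
```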
Clone and build
Clone the repository and build the release binary:
The compiled binary will be at `target/release/llmfit`.
Install the binary
Move the binary to a directory in your `$PATH`:
On Windows, move `target\release\llmfit.exe` to a directory in your `PATH`.
System requirements
Minimum requirements
- OS: Linux (x86_64, aarch64), macOS (Intel, Apple Silicon), or Windows (x86_64)
- RAM: No specific requirement (llmfit itself uses less than 50 MB)
- Disk: ~10 MB for the binary
Optional: GPU detection tools
For GPU detection to work, install the appropriate tool for your hardware:
| GPU Vendor | Detection Tool | Purpose |
|---|---|---|
| NVIDIA | nvidia-smi | VRAM reporting, multi-GPU support |
| AMD | rocm-smi | ROCm GPU detection |
| Intel Arc | sysfs, lspci | Discrete/integrated GPU detection |
| Apple Silicon | system_profiler | Unified memory reporting (built-in) |
| Ascend | npu-smi | NPU detection |
GPU detection is best-effort. If autodetection fails, use `llmfit --memory=24G` to manually specify your VRAM size.
Optional: Runtime providers
To download and run models from the TUI, install a runtime provider:
- Ollama: Download from ollama.com (macOS, Linux, Windows)
- llama.cpp: Build from github.com/ggml-org/llama.cpp or install via package manager
- MLX: Install via pip on Apple Silicon: `pip install mlx-lm`
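To see at a glance which providers are already installed, a small sketch follows. The command names are assumptions based on each project's usual CLI (`ollama` from Ollama, `llama-server` from llama.cpp, `mlx_lm.server` from mlx-lm); adjust them if your builds differ.

```shell
# Report which runtime provider CLIs are on PATH.
have() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "available: $1"
  else
    echo "not found: $1"
  fi
}

for cmd in ollama llama-server mlx_lm.server; do
  have "$cmd"
done
```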
Verify installation
After installing, verify llmfit is working:
Troubleshooting
Command not found
If you see `command not found: llmfit`, the binary is not in your `$PATH`:
- Check where llmfit is installed:
- If it’s in `~/.local/bin`, add this to your shell profile (`~/.bashrc`, `~/.zshrc`, etc.):
- Reload your shell:
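The steps above can be combined into one POSIX-shell snippet, assuming the binary went to `~/.local/bin` as the no-sudo installer does. The `case` guard makes it idempotent, so it is safe to paste into a shell profile:

```shell
# Report where llmfit is (if anywhere on PATH already).
if command -v llmfit >/dev/null 2>&1; then
  echo "llmfit is at: $(command -v llmfit)"
fi

# Prepend ~/.local/bin to PATH only if it is not already there.
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                      # already present, do nothing
  *) export PATH="$HOME/.local/bin:$PATH" ;;
esac
```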
GPU not detected
If your GPU is not detected:
- Check if the detection tool is installed:
- If the tool is missing, install it via your package manager or GPU vendor’s website
- If detection still fails, manually specify VRAM:
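As a quick way to run the first check for every vendor at once, here is a sketch; the tool names come from the detection table above.

```shell
# Print which GPU/NPU detection tools are on this machine's PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}

for tool in nvidia-smi rocm-smi lspci system_profiler npu-smi; do
  check_tool "$tool"
done
```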
Incorrect VRAM reported
On some systems (VMs, passthrough setups, broken drivers), VRAM autodetection may fail or report incorrect values. Override with `--memory`:
Accepted suffixes: G/GB/GiB (gigabytes), M/MB/MiB (megabytes), T/TB/TiB (terabytes). Suffixes are case-insensitive.
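As an illustration of which forms are equivalent (a sketch only; the real parser lives inside llmfit), a helper that normalizes a `--memory` value:

```shell
# Normalize a --memory value: uppercase it, then collapse GiB/GB to G,
# MiB/MB to M, and TiB/TB to T.
norm_mem() {
  printf '%s\n' "$1" \
    | tr '[:lower:]' '[:upper:]' \
    | sed -E 's/GI?B$/G/; s/MI?B$/M/; s/TI?B$/T/'
}

norm_mem 24gib    # -> 24G
norm_mem 512MB    # -> 512M
```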
Permission denied (install script)
If the install script fails with permission denied:
- Try the `--local` flag to install without sudo:
- Make sure `~/.local/bin` is in your `$PATH`
Update llmfit
Update to the latest version using the same method you used to install:
- Homebrew (macOS/Linux)
- Scoop (Windows)
- curl install script
- From source
Uninstall
Remove llmfit by deleting the binary:
- Homebrew
- Scoop
- Manual
Next steps
Quickstart
Get from installation to first successful run
TUI Mode
Learn keyboard shortcuts and navigation
CLI Mode
Use llmfit in scripts and automation
System Commands
Check detected hardware specs
