nrvna-ai is built from source using CMake. All dependencies are vendored, including llama.cpp, so you don’t need to install anything separately.

Requirements

Before you begin, ensure you have:
  • macOS or Linux operating system
  • CMake 3.16 or higher
  • A C++17 compatible compiler (GCC, Clang, or Apple Clang)
  • Git (for cloning with submodules)
On macOS with Apple Silicon, Metal GPU acceleration is automatically enabled. On Linux, CUDA support is detected and enabled if available.
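You can quickly confirm the toolchain is in place with a short shell check (a sketch; it assumes the tools are invoked as cmake, git, and c++ — your compiler may instead be g++ or clang++):

```shell
# Report whether each required build tool is on PATH.
# Note: "c++" is the conventional compiler alias; yours may be g++ or clang++.
check() {
  command -v "$1" >/dev/null 2>&1 && echo "found: $1" || echo "missing: $1"
}
status=$(check cmake; check git; check c++)
echo "$status"
```

Any line reading `missing:` points at a prerequisite to install before building.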

Build from source

1. Clone the repository

Clone with the --recursive flag to include all submodules, including llama.cpp:
git clone --recursive https://github.com/sanmathigb/nrvna-ai.git
cd nrvna-ai
Don’t forget the --recursive flag. If you already cloned without it, run:
git submodule update --init --recursive
2. Configure and build

Use CMake to configure and build the project:
cmake -S . -B build && cmake --build build -j4
This creates a build/ directory and compiles:
  • nrvnad - the daemon that processes jobs
  • wrk - the client for submitting work
  • flw - the client for retrieving results
  • llama.cpp and all dependencies
The -j4 flag uses 4 parallel jobs. Adjust based on your CPU cores (e.g., -j8 for 8 cores).
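If you'd rather detect the core count automatically, a small shell snippet can do it (nproc is the Linux tool, sysctl -n hw.ncpu the macOS equivalent; the fallback of 4 is arbitrary):

```shell
# Pick a parallel job count that matches the available cores:
# nproc on Linux, sysctl -n hw.ncpu on macOS, falling back to 4.
JOBS=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 4)
echo "using $JOBS parallel jobs"
```

You can then build with `cmake --build build -j"$JOBS"`.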
3. Install system-wide

Install the binaries to your system:
sudo cmake --install build
This installs all three binaries (nrvnad, wrk, flw) to /usr/local/bin by default.
4. Verify installation

Check that the binaries are installed and in your PATH:
nrvnad --help
wrk --help
flw --help
You should see help output for each command.
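A quick way to confirm all three binaries resolve from your PATH at once (prints each binary's path, or a `not found` line):

```shell
# Check that each installed binary resolves from PATH.
out=$(for bin in nrvnad wrk flw; do
  command -v "$bin" || echo "not found: $bin"
done)
echo "$out"
```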

Build configuration

The build system is configured via CMakeLists.txt with these key settings:
  • C++ Standard: C++17 (required)
  • llama.cpp: Built automatically as a submodule
  • Multimodal support: Enabled via mtmd library
  • Compiler warnings: -Wall -Wextra -Wpedantic
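Taken together, those settings would look something like this in a CMakeLists.txt (a sketch for orientation only, not the project's actual file; the submodule path is an assumption):

```cmake
cmake_minimum_required(VERSION 3.16)
project(nrvna-ai LANGUAGES CXX)

# C++17 is required, not optional
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# llama.cpp is vendored as a submodule and built as part of this project
# (directory name is illustrative)
add_subdirectory(llama.cpp)

add_compile_options(-Wall -Wextra -Wpedantic)
```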

Custom install prefix

To install to a different location:
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=/your/path
cmake --build build -j4
cmake --install build
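When you install to a custom prefix, make sure its bin directory is on your PATH so the binaries resolve (a sketch using $HOME/.local as an example prefix):

```shell
# Add the custom prefix's bin directory to PATH for the current session.
export PATH="$HOME/.local/bin:$PATH"
echo "$PATH"
```

To make this permanent, add the export line to your shell profile (e.g. ~/.bashrc or ~/.zshrc).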

Get a model

After installation, download a GGUF model to use with nrvna-ai:
mkdir -p models
# Download any GGUF model from HuggingFace
# Example: https://huggingface.co/models?search=gguf
Place your models in the ./models/ directory, or set the NRVNA_MODELS_DIR environment variable to point to your models:
export NRVNA_MODELS_DIR=/path/to/your/models
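A minimal setup might look like this (the directory path is just an example; point it wherever your models live):

```shell
# Create a models directory and point nrvna-ai at it via NRVNA_MODELS_DIR.
mkdir -p "$HOME/nrvna-models"
export NRVNA_MODELS_DIR="$HOME/nrvna-models"
echo "$NRVNA_MODELS_DIR"
```

As with PATH changes, add the export line to your shell profile to persist it across sessions.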

Next steps

  • Quickstart - Get running in 5 minutes
  • Introduction - Learn about the core concepts
