## Requirements

Before you begin, ensure you have:

- macOS or Linux operating system
- CMake 3.16 or higher
- A C++17 compatible compiler (GCC, Clang, or Apple Clang)
- Git (for cloning with submodules)
On macOS with Apple Silicon, Metal GPU acceleration is automatically enabled. On Linux, CUDA support is detected and enabled if available.
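These prerequisites can be verified from a terminal before building (the version numbers in the comments are the minimums listed above):

```shell
# Check that the required tools are installed and new enough.
cmake --version   # need 3.16 or higher
git --version
c++ --version     # GCC, Clang, or Apple Clang with C++17 support
```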
## Build from source

### Clone the repository
Clone with the `--recursive` flag to include all submodules, including llama.cpp.
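A minimal sketch of the clone step; the repository URL is a placeholder (the actual URL is not given here), and the last command covers clones made without `--recursive`:

```shell
# Clone the repository together with its submodules (including llama.cpp).
git clone --recursive https://github.com/OWNER/nrvna-ai.git  # placeholder URL
cd nrvna-ai

# If you already cloned without --recursive, fetch the submodules afterwards:
git submodule update --init --recursive
```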
### Configure and build

Use CMake to configure and build the project. This creates a `build/` directory and compiles:

- `nrvnad` - the daemon that processes jobs
- `wrk` - the client for submitting work
- `flw` - the client for retrieving results
- llama.cpp and all dependencies
The `-j4` flag uses 4 parallel jobs. Adjust based on your CPU cores (e.g., `-j8` for 8 cores).
### Install system-wide

Install the binaries to your system. This installs all three binaries (`nrvnad`, `wrk`, `flw`) to `/usr/local/bin` by default.
## Build configuration

The build system is configured via `CMakeLists.txt` with these key settings:

- C++ Standard: C++17 (required)
- llama.cpp: Built automatically as a submodule
- Multimodal support: Enabled via mtmd library
- Compiler warnings: `-Wall -Wextra -Wpedantic`
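The settings above would correspond to lines like these in `CMakeLists.txt` (a hedged sketch, not the project's actual file; the multimodal `mtmd` wiring is omitted):

```cmake
cmake_minimum_required(VERSION 3.16)
project(nrvna-ai CXX)

# C++17 is required, not optional.
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# llama.cpp is vendored as a git submodule and built as part of this project.
add_subdirectory(llama.cpp)

# Strict warnings for the project's own targets.
add_compile_options(-Wall -Wextra -Wpedantic)
```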
### Custom install prefix

To install to a different location, configure with a custom `CMAKE_INSTALL_PREFIX`.
## Get a model

After installation, download a GGUF model to use with nrvna-ai. Place models in the `./models/` directory, or set the `NRVNA_MODELS_DIR` environment variable to point to your models.
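A sketch of both options; the model URL and filename are placeholders, and only `./models/` and `NRVNA_MODELS_DIR` come from the text above:

```shell
# Option 1: put the model in the default location.
mkdir -p ./models
curl -L -o ./models/model.gguf https://example.com/path/to/model.gguf  # placeholder URL

# Option 2: keep models elsewhere and point nrvna-ai at them.
export NRVNA_MODELS_DIR="$HOME/models"
```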
## Next steps

- Quickstart: get running in 5 minutes
- Introduction: learn about the core concepts