RCLI can be built from source on macOS with Apple Silicon (M1 or later). The build system uses CMake and automatically fetches all dependencies except for llama.cpp and sherpa-onnx, which are cloned via the setup script.

Prerequisites

Before building RCLI, ensure you have:
  • macOS 13+ on Apple Silicon (M1 or later)
  • CMake 3.15 or later
  • Apple Clang (ships with Xcode or Command Line Tools)
  • Git for cloning repositories
RCLI requires Apple Silicon (arm64 architecture) and does not support Intel Macs. The build system uses Metal GPU acceleration and Apple Accelerate framework.
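The prerequisite checks above can be scripted. A minimal preflight sketch (not an official RCLI script; it only reports which tools are on PATH and prints the host architecture):

```shell
# Hedged preflight sketch for the prerequisites above (not part of RCLI).
for tool in git cmake cc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
uname -sm   # "Darwin arm64" on Apple Silicon
```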

Quick Build

1. Clone the repository

git clone https://github.com/RunanywhereAI/RCLI.git && cd RCLI
2. Run setup script

The setup script clones llama.cpp and sherpa-onnx into the deps/ directory:
bash scripts/setup.sh
This downloads:
  • llama.cpp — LLM + embedding inference with Metal GPU
  • sherpa-onnx — STT/TTS/VAD via ONNX Runtime
3. Download AI models

Download the default model set (~1 GB):
bash scripts/download_models.sh
This fetches:
  • Qwen3 0.6B (LLM)
  • Whisper base.en (STT)
  • Zipformer (streaming STT)
  • Piper Lessac (TTS)
  • Silero VAD
  • Snowflake Arctic Embed S (embeddings)
4. Configure and build

mkdir -p build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build . -j$(sysctl -n hw.ncpu)
This produces:
  • ./rcli — the main CLI executable
  • ./rcli_test — test executable
5. Run RCLI

./rcli

Build Types

Release Build

cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build . -j$(sysctl -n hw.ncpu)
Optimizations:
  • -O3 optimization level
  • NDEBUG defined (disables assertions)
  • Link-time optimization (LTO) when supported
  • Dead code stripping (-Wl,-dead_strip)
  • Native CPU features (-mcpu=native for arm64)
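The LTO and dead-stripping behavior listed above maps onto standard CMake settings. A hedged sketch (the project's actual CMakeLists.txt may word this differently):

```cmake
include(CheckIPOSupported)

# Enable link-time optimization only when the toolchain supports it.
check_ipo_supported(RESULT ipo_ok)
if(ipo_ok)
    set(CMAKE_INTERPROCEDURAL_OPTIMIZATION_RELEASE ON)
endif()

# Strip unreferenced code from the final binaries (Apple ld flag).
add_link_options(-Wl,-dead_strip)
```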

Debug Build

cmake .. -DCMAKE_BUILD_TYPE=Debug
cmake --build . -j$(sysctl -n hw.ncpu)
Debug features:
  • No optimization
  • Debug symbols included
  • Assertions enabled
  • Useful for development and troubleshooting

Dependencies

Apart from llama.cpp and sherpa-onnx, which the setup script clones into deps/, all dependencies are vendored or fetched automatically by CMake. No external package manager is required.
Dependency      | Purpose                                  | Integration
llama.cpp       | LLM + embedding inference with Metal GPU | add_subdirectory(deps/llama.cpp)
sherpa-onnx     | STT/TTS/VAD via ONNX Runtime             | add_subdirectory(deps/sherpa-onnx)
USearch v2.16.5 | HNSW vector index for RAG                | FetchContent (header-only)
FTXUI v5.0.0    | Terminal UI library                      | FetchContent
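The FetchContent rows above correspond to declarations along these lines. This is a sketch: the tags come from the table, the repository URLs are the upstream projects' public GitHub repos, and the exact wiring in the project's CMakeLists.txt may differ:

```cmake
include(FetchContent)

# Versions pinned per the dependency table.
FetchContent_Declare(usearch
    GIT_REPOSITORY https://github.com/unum-cloud/usearch.git
    GIT_TAG        v2.16.5)
FetchContent_Declare(ftxui
    GIT_REPOSITORY https://github.com/ArthurSonzogni/FTXUI.git
    GIT_TAG        v5.0.0)
FetchContent_MakeAvailable(usearch ftxui)
```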
RCLI links against the following macOS frameworks:
  • CoreAudio — Audio input/output
  • AudioToolbox — Audio conversion
  • AudioUnit — Audio processing
  • Foundation — Core Objective-C runtime
  • AVFoundation — Media playback
  • IOKit — Hardware monitoring (CPU, RAM)
  • Metal — GPU acceleration (when GGML_METAL=ON)
  • MetalKit — Metal utilities
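In CMake, linking these frameworks typically looks like the following sketch (target name taken from this page; the real CMakeLists.txt may split the list across targets or gate Metal behind GGML_METAL):

```cmake
# macOS system frameworks used by RCLI (see list above).
target_link_libraries(rcli PRIVATE
    "-framework CoreAudio"
    "-framework AudioToolbox"
    "-framework AudioUnit"
    "-framework Foundation"
    "-framework AVFoundation"
    "-framework IOKit"
    "-framework Metal"
    "-framework MetalKit"
)
```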

Build Configuration

CMake Options

Key CMake variables configured in CMakeLists.txt:
# Metal GPU acceleration (default: ON)
set(GGML_METAL ON CACHE BOOL "Enable Metal GPU acceleration" FORCE)

# Apple Accelerate framework (default: ON)
set(GGML_ACCELERATE ON CACHE BOOL "Enable Apple Accelerate framework" FORCE)

# Build shared sherpa-onnx libraries
set(BUILD_SHARED_LIBS ON CACHE BOOL "" FORCE)

Compiler Flags

Release mode:
set(CMAKE_CXX_FLAGS_RELEASE "-O3 -DNDEBUG")
set(CMAKE_C_FLAGS_RELEASE "-O3 -DNDEBUG")
ARM64 optimizations:
add_compile_options(-mcpu=native)
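Because -mcpu=native bakes the build machine's CPU features into the binary, it is commonly guarded by an architecture check. A hedged sketch (the project may apply the flag unconditionally):

```cmake
# Apply the native-CPU flag only on arm64 hosts; a binary built with it
# may not run on older cores.
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm64")
    add_compile_options(-mcpu=native)
endif()
```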
Warnings:
target_compile_options(rcli PRIVATE
    -Wall -Wextra -Wpedantic
    -Wno-unused-parameter
    -Wno-missing-field-initializers
)

Build Targets

librcli (Static Library)

The core engine compiled as a static library:
# Automatically built as dependency
Includes:
  • Voice pipeline engines (STT, LLM, TTS, VAD, embeddings)
  • RAG system (vector index, BM25, hybrid retriever)
  • Action system (43 macOS actions)
  • Public C API (rcli_api.h)

rcli (CLI Executable)

The main interactive CLI:
./rcli                   # Interactive TUI mode
./rcli listen            # Continuous voice mode
./rcli ask "command"     # One-shot text mode

rcli_test (Test Executable)

Test harness for the pipeline:
./rcli_test ~/Library/RCLI/models
./rcli_test ~/Library/RCLI/models --actions-only    # Fast, no models
./rcli_test ~/Library/RCLI/models --llm-only
./rcli_test ~/Library/RCLI/models --stt-only

Installation

Install to system directories (for Homebrew packaging):
cmake --install . --prefix /usr/local
Installs:
  • Binary: /usr/local/bin/rcli
  • Header: /usr/local/include/rcli/rcli_api.h
  • Libraries: /usr/local/lib/*.dylib
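These locations correspond to install() rules roughly like the following sketch. The header's source path (src/rcli_api.h) is an assumption; the project's actual rules may differ:

```cmake
# Install layout matching the paths above.
install(TARGETS rcli RUNTIME DESTINATION bin)
install(FILES src/rcli_api.h DESTINATION include/rcli)
install(DIRECTORY ${CMAKE_BINARY_DIR}/lib/
        DESTINATION lib
        FILES_MATCHING PATTERN "*.dylib")
```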

Packaging

Generate a portable tarball:
cpack
Produces: rcli-0.1.5-Darwin.tar.gz
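The tarball name implies CPack settings along these lines (inferred from the filename; in practice the name and version strings come from the project's CMakeLists.txt):

```cmake
# CPack settings consistent with rcli-0.1.5-Darwin.tar.gz.
set(CPACK_GENERATOR "TGZ")
set(CPACK_PACKAGE_NAME "rcli")
set(CPACK_PACKAGE_VERSION "0.1.5")
include(CPack)
```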

Troubleshooting

CMake missing or too old

# Install the latest CMake via Homebrew
brew install cmake

Compiler not found

xcode-select --install

deps/llama.cpp or deps/sherpa-onnx not found

Run the setup script first:
bash scripts/setup.sh

Models not found

Download models:
bash scripts/download_models.sh
# Or from the built binary:
./rcli setup

Build fails on an Intel Mac

RCLI requires Apple Silicon (M1 or later). Intel Macs are not supported due to Metal GPU requirements.

Next Steps

Contributing

Learn how to contribute to RCLI

Adding Actions

Extend RCLI with custom macOS actions

Project Structure

Understand the codebase organization
