
Overview

YOLO-Pi runs on the Raspberry Pi 3 and newer models. Because deep learning inference is computationally demanding, both compiling the dependencies and running the model require special configuration.
Compiling dependencies on Raspberry Pi takes several hours and requires a USB swap drive with at least 2GB of space.

Hardware Requirements

  • Raspberry Pi 3 or newer (RPi 3+ recommended)
  • USB camera or Pi Camera Module (connected to camera port)
  • USB drive with at least 2GB for swap space
  • 4GB+ microSD card for OS and application
  • Stable power supply (2.5A recommended)

Performance Expectations

On a Raspberry Pi 3, expect approximately 0.5 FPS (one frame every 2 seconds). This is far slower than a laptop or desktop, but sufficient for applications that tolerate low frame rates, such as presence detection or periodic monitoring.
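As a sanity check, throughput can be computed from per-frame capture timestamps. A minimal sketch in plain Python (no camera required; the timestamps below are illustrative):

```python
def average_fps(frame_times):
    """Average frames per second from a list of capture timestamps (seconds)."""
    if len(frame_times) < 2:
        return 0.0
    elapsed = frame_times[-1] - frame_times[0]
    return (len(frame_times) - 1) / elapsed

# One frame every 2 seconds, as measured on a Raspberry Pi 3
print(average_fps([0.0, 2.0, 4.0, 6.0]))  # 0.5
```

Collect the timestamps with `time.time()` once per captured frame and average over a few dozen frames for a stable reading.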

USB Swap Setup

Compiling OpenCV and TensorFlow requires more RAM than the Raspberry Pi provides. A USB swap drive extends available memory.
1. Prepare USB Drive

Insert a USB drive (at least 2GB) and partition it:
sudo fdisk /dev/sda
Create a new partition, then format it:
sudo mkfs.ext4 /dev/sda1
sudo mkdir /usb
2. Configure Auto-Mount

Edit /etc/fstab to automatically mount the USB drive:
sudo nano /etc/fstab
Add this line:
/dev/sda1	/usb	ext4	defaults 0 0
Mount the drive:
sudo mount -a
3. Configure Swap File

Edit /etc/dphys-swapfile:
sudo nano /etc/dphys-swapfile
Comment out existing CONF_SWAPFILE and CONF_SWAPSIZE settings and add:
CONF_SWAPFILE=/usb/swap
CONF_SWAPSIZE=2048
4. Enable Swap

Recreate and activate the swap file:
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
Verify swap is active:
free -h
Swap setup instructions are adapted from a community guide.
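free -h is the quickest check; if you want to verify swap from a script, /proc/meminfo can be parsed directly. A sketch (the sample text below is illustrative):

```python
def swap_total_kib(meminfo_text):
    """Return the SwapTotal value (KiB) from /proc/meminfo contents."""
    for line in meminfo_text.splitlines():
        if line.startswith("SwapTotal:"):
            return int(line.split()[1])
    return 0

# On the Pi: swap_total_kib(open("/proc/meminfo").read())
sample = "MemTotal:  948304 kB\nSwapTotal: 2097148 kB\n"
print(swap_total_kib(sample))  # 2097148 (about 2 GB)
```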

Camera Setup

YOLO-Pi supports both USB cameras and the Raspberry Pi Camera Module.
USB cameras are automatically detected as /dev/video0. Verify your camera:
ls -l /dev/video0
List detected capture devices (requires the v4l-utils package):
v4l2-ctl --list-devices
The YOLO-Pi script uses OpenCV’s cv2.VideoCapture(0) to access the camera.
The Raspberry Pi Camera Module connects to the CSI camera port on the board. Enable the camera:
sudo raspi-config
# Navigate to Interface Options > Camera > Enable
You may need to use raspivid or configure OpenCV to use the camera module instead of /dev/video0.
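To check for video device nodes from Python rather than the shell, a small sketch (the glob pattern is an assumption; the Camera Module only shows up as a /dev/video* node once its V4L2 driver is loaded):

```python
from pathlib import Path

def list_video_devices(dev_dir="/dev"):
    """Return sorted video device nodes (e.g. /dev/video0) under dev_dir."""
    return sorted(str(p) for p in Path(dev_dir).glob("video*"))

print(list_video_devices())  # e.g. ['/dev/video0'] when a camera is attached
```

If the list is empty, cv2.VideoCapture(0) will fail to open, so this is a cheap check to run before starting inference.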
Docker Installation

The easiest way to run YOLO-Pi on Raspberry Pi is with Docker, which handles all dependency compilation automatically.
1. Install Docker

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
Log out and back in for group changes to take effect.
2. Prepare TensorFlow Wheel

Download the pre-compiled TensorFlow binary for ARM:
cd ~/src
wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/releases/download/v1.1.0/tensorflow-1.1.0-cp34-cp34m-linux_armv7l.whl
mv tensorflow-1.1.0-cp34-cp34m-linux_armv7l.whl tensorflow-1.1.0-cp35-cp35m-linux_armv7l.whl
Renaming the wheel lets pip install it under Python 3.5; the binary itself is compatible even though it was tagged for Python 3.4.
3. Build Docker Image

Copy Dockerfile.rpi to your source directory and build:
docker build -t ashya/yolo-pi -f Dockerfile.rpi .
This build process takes several hours on Raspberry Pi 3. Use screen to maintain your session:
screen
docker build -t ashya/yolo-pi -f Dockerfile.rpi .
# Press Ctrl+A then D to detach
# Later: screen -r to reattach
4. Run Container

Run YOLO-Pi with camera access:
docker run -it --rm --device /dev/video0 ashya/yolo-pi /bin/bash
Or run it in the background (detached mode):
docker run -d --device /dev/video0 ashya/yolo-pi

Manual Installation

If you prefer not to use Docker, you can install dependencies manually. Follow the installation guide, but note these Raspberry Pi-specific requirements:

System Packages

Install required system libraries:
sudo apt-get update
sudo apt-get install -y \
  build-essential cmake git wget unzip yasm pkg-config \
  libswscale-dev libtbb2 libtbb-dev libjpeg-dev libpng-dev \
  libtiff-dev libjasper-dev libavformat-dev libpq-dev \
  libgtk2.0-dev python3 python3-pip python3-setuptools \
  python3-dev libblas-dev liblapack-dev libhdf5-dev \
  python3-h5py python3-scipy python3-pil

Python Packages

pip3 install numpy keras==2.1.2 paho-mqtt h5py==2.7.1

TensorFlow for ARM

Use the pre-compiled wheel:
wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/releases/download/v1.1.0/tensorflow-1.1.0-cp34-cp34m-linux_armv7l.whl
mv tensorflow-1.1.0-cp34-cp34m-linux_armv7l.whl tensorflow-1.1.0-cp35-cp35m-linux_armv7l.whl
pip3 install ./tensorflow-1.1.0-cp35-cp35m-linux_armv7l.whl

OpenCV 3.3.0

Build OpenCV from source (requires 2+ hours):
wget https://github.com/opencv/opencv/archive/3.3.0.zip
unzip 3.3.0.zip
mkdir opencv-3.3.0/cmake_binary
cd opencv-3.3.0/cmake_binary

cmake -DBUILD_TIFF=ON \
  -DBUILD_opencv_java=OFF \
  -DWITH_CUDA=OFF \
  -DBUILD_TESTS=OFF \
  -DBUILD_PERF_TESTS=OFF \
  -DCMAKE_BUILD_TYPE=RELEASE \
  -DCMAKE_INSTALL_PREFIX=$(python3 -c "import sys; print(sys.prefix)") \
  -DPYTHON_EXECUTABLE=$(which python3) \
  -DPYTHON_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
  -DPYTHON_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") ..

make -j4
sudo make install

Running YOLO-Pi

Set the MQTT server environment variable and run:
export MQTT=your-mqtt-server-ip
python3 yolo-pi.py
The script will:
  1. Connect to the MQTT server
  2. Load the YOLO model (tiny-yolo-voc.h5)
  3. Start video capture from /dev/video0
  4. Publish detected objects to the MQTT topic yolo
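Any MQTT client subscribed to the yolo topic can consume the detections. The payload format below is an assumption for illustration (check yolo-pi.py for the actual format); a sketch of a parsing helper a subscriber might use:

```python
def parse_detections(payload):
    """Split a hypothetical comma-separated detection payload into labels."""
    return [label.strip() for label in payload.split(",") if label.strip()]

# A subscriber (e.g. using paho-mqtt) would call this in its on_message callback
print(parse_detections("person, dog, chair"))  # ['person', 'dog', 'chair']
```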

Optimization Tips

Use Tiny YOLO

The Tiny YOLO VOC model is optimized for embedded devices and provides the best performance on Raspberry Pi.

Disable Video Preview

The default configuration has video preview disabled (line 174 in yolo-pi.py is commented out) to save resources.

Overclock (Advanced)

Consider overclocking your Raspberry Pi for better performance, but ensure adequate cooling.

Troubleshooting

Camera not detected

Check device permissions and add your user to the video group:
ls -l /dev/video0
sudo usermod -aG video $USER
Reboot after adding yourself to the video group.

Out of memory during compilation

Ensure your swap file is active:
free -h
sudo dphys-swapfile swapon

MQTT connection fails

Verify the MQTT environment variable:
echo $MQTT
export MQTT=192.168.1.100  # Your MQTT server IP
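If you wrap YOLO-Pi in your own scripts, a defensive pattern for reading the broker address is to fail loudly when the variable is unset (the helper name is hypothetical; yolo-pi.py may handle a missing variable differently):

```python
import os

def mqtt_broker():
    """Return the MQTT broker address, raising if MQTT is unset."""
    broker = os.environ.get("MQTT")
    if not broker:
        raise RuntimeError("Set the MQTT environment variable, "
                           "e.g. export MQTT=192.168.1.100")
    return broker
```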

Next Steps

Docker Setup

Learn more about Docker deployment options

Running Inference

Configure model paths and run detection
