
Quick Start Guide

Get YOLO-Pi running on your hardware in just a few steps. This guide will walk you through the fastest path to real-time object detection.

Prerequisites

Before you begin, ensure you have:

Hardware

  • Raspberry Pi 3 or newer (or x86 computer for testing)
  • USB camera or Raspberry Pi Camera Module
  • 8GB+ microSD card
  • Internet connection

Software

  • Docker installed (recommended), or
  • Python 3.5+ with pip
  • Git for cloning the repository

Installation Methods

Choose the installation method that best fits your needs: Docker (recommended) or a native Python install with pip.

Model Setup

Download and convert a pre-trained YOLO model to Keras format.
1. Download YOLO model

Download the tiny-yolo-voc model (recommended for Raspberry Pi):
cd model_data
wget https://pjreddie.com/media/files/tiny-yolo-voc.weights
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/tiny-yolo-voc.cfg
cd ..
The tiny-yolo-voc model provides a good balance between speed and accuracy for edge devices.
2. Convert to Keras format

Use the YAD2K converter to create a Keras model:
python3 yad2k/yad2k.py \
  model_data/tiny-yolo-voc.cfg \
  model_data/tiny-yolo-voc.weights \
  model_data/tiny-yolo-voc.h5
This creates tiny-yolo-voc.h5 which contains the Keras model.
3. Verify model files

Ensure you have all required files:
ls model_data/
You should see:
  • tiny-yolo-voc.h5 - Keras model
  • tiny-yolo-voc_anchors.txt - Anchor boxes
  • pascal_classes.txt - Object class names

Configuration

Configure YOLO-Pi to use your model and MQTT broker.
1. Set MQTT broker

Set the MQTT environment variable:
export MQTT=mqtt.example.com
Replace with your MQTT broker hostname or IP address.
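Inside a Python script, the same setting can be read from the environment. A sketch of that lookup (the function name and the localhost fallback are our own choices, not yolo-pi.py behavior):

```python
import os

def mqtt_broker(default="localhost"):
    """Read the broker hostname from the MQTT environment variable,
    falling back to a default when the variable is unset."""
    return os.environ.get("MQTT", default)
```

With `export MQTT=mqtt.example.com` in place, `mqtt_broker()` returns `mqtt.example.com`.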
2. Configure model paths

Edit src/yolo-pi.py and update the model configuration:
model_path = 'model_data/tiny-yolo-voc.h5'
anchors_path = 'model_data/tiny-yolo-voc_anchors.txt'
classes_path = 'model_data/pascal_classes.txt'
These paths are correct for the tiny-yolo-voc model. Change them if using a different model.

First Detection

Run YOLO-Pi and start detecting objects!
1. Start the detection script

cd src
python3 yolo-pi.py
The script will:
  1. Load the YOLO model
  2. Connect to your MQTT broker
  3. Start capturing from the camera
  4. Process frames and detect objects
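The four steps above form a capture-detect-publish loop. A dependency-injected sketch of that loop (the function and parameter names are ours, not from yolo-pi.py; the camera, model, and MQTT client are passed in as callables):

```python
def run_detection(capture, detect, publish, max_frames=None):
    """Core loop: grab a frame, run the detector, publish detections.

    capture() -> frame or None when the camera stops,
    detect(frame) -> list of detections,
    publish(detections) -> None.
    Returns the number of frames processed.
    """
    frames = 0
    while max_frames is None or frames < max_frames:
        frame = capture()
        if frame is None:  # camera closed or read failed
            break
        publish(detect(frame))
        frames += 1
    return frames
```

Because the camera, model, and broker are injected, the same loop can be exercised with fakes before any hardware is attached.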
2. Monitor detections

Watch the console output for detected objects:
image: [{"item": "person", "score": "0.87"}, {"item": "cat", "score": "0.92"}]
Detections are also published to the MQTT topic yolo in JSON format.
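A payload in that shape can be built with the standard json module. A sketch (the helper name is ours; the two-decimal score strings match the sample output above):

```python
import json

def format_detections(pairs):
    """Build the JSON detection payload from (class_name, score) pairs,
    e.g. [("person", 0.87)] -> '[{"item": "person", "score": "0.87"}]'."""
    return json.dumps(
        [{"item": name, "score": f"{score:.2f}"} for name, score in pairs]
    )
```

For example, `format_detections([("person", 0.87), ("cat", 0.92)])` produces the same structure shown in the console output.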
3. Subscribe to MQTT (optional)

In another terminal, subscribe to see detections:
mosquitto_sub -h mqtt.example.com -t yolo
You’ll receive JSON messages with detected objects and confidence scores.
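If you prefer Python over mosquitto_sub, a minimal subscriber sketch using the paho-mqtt client (assumes `pip install paho-mqtt`; the broker hostname is the example one from above):

```python
import json

def on_message(client, userdata, msg):
    """Parse and print one JSON detection payload from the yolo topic."""
    detections = json.loads(msg.payload.decode("utf-8"))
    for d in detections:
        print(d["item"], d["score"])
    return detections  # returned to make the callback easy to test

if __name__ == "__main__":
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("mqtt.example.com", 1883)
    client.subscribe("yolo")
    client.loop_forever()
```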

Performance Expectations

Detection speed varies by hardware:
| Hardware       | FPS      | Notes                     |
|----------------|----------|---------------------------|
| Raspberry Pi 3 | ~0.5 FPS | One frame every 2 seconds |
| Raspberry Pi 4 | ~1 FPS   | Improved performance      |
| MacBook Pro    | ~5-10 FPS | M1/M2 or recent Intel    |
For better performance on Raspberry Pi, ensure adequate cooling and consider an ARM-optimized TensorFlow build.

Visualization (Optional)

By default, visualization is disabled. To see the detection output:
  1. Edit src/yolo-pi.py
  2. Uncomment line 174:
cv2.imshow("preview", open_cv_image)
  3. Run the script with display access
Enabling visualization adds overhead and reduces FPS, especially on Raspberry Pi.

Next Steps

Model Conversion

Learn how to convert other YOLO models

MQTT Integration

Integrate detections with your IoT platform

Raspberry Pi Setup

Optimize for Raspberry Pi hardware

Production Deployment

Deploy in production environments

Troubleshooting

Camera not detected

Ensure your camera is connected and recognized:
ls -la /dev/video*
You should see /dev/video0. If using Docker, ensure the device is passed with --device=/dev/video0:/dev/video0.

MQTT connection problems

Check that the MQTT broker is accessible:
ping mqtt.example.com
Verify the MQTT environment variable is set:
echo $MQTT
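ping confirms the host is up but not that the broker port is open. A small TCP probe of the default MQTT port covers that too (the helper name is ours; this checks reachability only, not MQTT authentication):

```python
import socket

def broker_reachable(host, port=1883, timeout=3.0):
    """Attempt a TCP connection to the broker's MQTT port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False for your broker, check firewall rules and that the broker is listening on 1883.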

Out-of-memory or slow performance

On Raspberry Pi, ensure adequate swap space:
free -h
See Raspberry Pi Setup for swap configuration.

Missing model files

Verify all model files exist:
ls -la model_data/
Ensure the paths in yolo-pi.py match your actual file locations.

Get Help

View Full Documentation

Explore comprehensive guides, API reference, and deployment options
