
Installation

This guide covers multiple installation methods for the EVM Vital Signs Monitor, from standard Python setup to Docker deployment and Raspberry Pi optimization.

Choose Your Installation Method

  • Python: standard installation using pip
  • Docker: containerized deployment
  • Raspberry Pi: optimized for embedded systems

Python Installation

Prerequisites

  • Python 3.8-3.11 (3.12+ is not yet fully supported)
  • pip package manager
  • 2GB+ RAM available
  • (Optional) Virtual environment tool
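Before installing, you can confirm the interpreter is in the supported range; a minimal check (the 3.8-3.11 bound follows the troubleshooting notes later in this guide):

```python
import sys

# Supported range per this guide: Python 3.8-3.11 (3.12+ not yet fully supported).
def python_version_ok(version=sys.version_info[:2]):
    return (3, 8) <= tuple(version) <= (3, 11)

print(f"Python {sys.version.split()[0]}:",
      "supported" if python_version_ok() else "unsupported")
```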

Step-by-Step Installation

Step 1: Clone the repository

git clone https://github.com/your-org/evm-vital-signs-monitor.git
cd evm-vital-signs-monitor

Step 2: Create a virtual environment (recommended)

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
Using a virtual environment prevents dependency conflicts with other Python projects.

Step 3: Install dependencies

pip install -r requirements.txt
This installs the following core packages:
requirements.txt
mediapipe==0.10.21
mtcnn==1.0.0
numpy>=1.24.0,<2.0.0
opencv-python==4.10.0.84
psutil==6.0.0
scipy>=1.10.0,<2.0.0
ultralytics==8.3.235
tensorflow==2.16.1
tf-keras==2.16.0
pytest
Installation may take 5-15 minutes depending on your system and internet connection. TensorFlow and Ultralytics are large packages.

Step 4: Verify installation

# Test imports
python -c "import cv2, mediapipe, numpy, scipy; print('✓ All core dependencies installed')"

# Run unit tests (optional)
cd Python
pytest unit_test

Platform-Specific Notes

Additional system dependencies for OpenCV:
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y ffmpeg libgl1 libglib2.0-0

# Fedora/RHEL
sudo dnf install -y ffmpeg mesa-libGL glib2

Docker Installation

Docker provides a consistent environment across different platforms and simplifies dependency management.

Prerequisites

  • Docker Engine 20.10+ installed
  • Docker Compose (optional, for orchestration)
  • 4GB+ RAM allocated to Docker

Using the Dockerfile

The project includes a production-ready Dockerfile:
FROM tensorflow/tensorflow:2.16.1

WORKDIR /app

RUN apt-get update && apt-get install -y \
    ffmpeg \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY Python ./Python
WORKDIR /app/Python

CMD ["pytest", "unit_test"]

Docker Compose Setup (Optional)

For easier management and development:
docker-compose.yml
version: '3.8'

services:
  evm-monitor:
    build: .
    volumes:
      - ./Python:/app/Python
      - ./data:/data
    environment:
      - PYTHONUNBUFFERED=1
    command: python experiments/simple_run_EVM.py
# Start service
docker-compose up

# Run specific experiment
docker-compose run evm-monitor python experiments/simple_run_ROI.py
The Docker image is based on tensorflow/tensorflow:2.16.1, which provides prebuilt, optimized TensorFlow binaries for better performance.

Raspberry Pi Installation

Optimized installation for Raspberry Pi 4 (4GB+ RAM recommended).

Prerequisites

  • Raspberry Pi 4 (4GB or 8GB RAM)
  • Raspberry Pi OS 64-bit (Bullseye or later) or Ubuntu Server
  • 16GB+ microSD card (Class 10 or UHS-I)
  • Internet connection
  • (Optional) Raspberry Pi Camera Module v2 or USB webcam

Step-by-Step Setup

Step 1: Update system packages

sudo apt-get update
sudo apt-get upgrade -y

Step 2: Install system dependencies

sudo apt-get install -y \
    python3-pip \
    python3-venv \
    ffmpeg \
    libgl1 \
    libglib2.0-0 \
    libatlas-base-dev \
    libhdf5-dev \
    libjpeg-dev \
    libpng-dev

Step 3: Increase swap space (recommended)

TensorFlow compilation may require additional swap:
# Edit swap configuration
sudo nano /etc/dphys-swapfile

# Change CONF_SWAPSIZE to 2048
# CONF_SWAPSIZE=2048

# Restart swap service
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

Step 4: Clone the repository and set up the environment

cd ~
git clone https://github.com/your-org/evm-vital-signs-monitor.git
cd evm-vital-signs-monitor

# Create virtual environment
python3 -m venv venv
source venv/bin/activate

Step 5: Install Python dependencies

# Upgrade pip
pip install --upgrade pip

# Install dependencies (this may take 20-30 minutes)
pip install -r requirements.txt
TensorFlow installation on Raspberry Pi can take 20-30 minutes. Be patient and ensure a stable power supply.

Step 6: Configure for Raspberry Pi

The system automatically detects a Raspberry Pi and uses optimized settings from src/config.py:
src/config.py
# Raspberry Pi optimizations
LEVELS_RPI = 3  # Reduced pyramid levels for performance
TARGET_ROI_SIZE = (320, 240)  # Smaller ROI for faster processing
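The detection mechanism itself isn't shown in this guide; a common approach, sketched here under the assumption that src/config.py does something similar, is to read the device-tree model string:

```python
from pathlib import Path

def is_raspberry_pi(model_path="/proc/device-tree/model"):
    """Best-effort check: the device-tree model string names the board on RPi."""
    try:
        model = Path(model_path).read_text(errors="ignore")
    except OSError:
        return False  # file absent on non-ARM / non-Linux hosts
    return "raspberry pi" in model.lower()

print("Raspberry Pi detected:", is_raspberry_pi())
```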
For camera setup:
# Enable camera interface
sudo raspi-config
# Navigate to: Interface Options > Camera > Enable

Step 7: Test installation

# Test face detection
cd Python/experiments
python simple_run_ROI.py

Raspberry Pi Performance Tips

Use MediaPipe

MediaPipe provides the best speed/accuracy balance on RPi:
FaceDetector(model_type='mediapipe')

Optimize Config

Reduce computational load:
  • Lower TARGET_ROI_SIZE to (240, 180)
  • Set LEVELS_RPI = 2 for faster pyramid processing
  • Decrease BUFFER_SIZE to 150 frames

Disable GUI

Run headless for better performance:
# Boot to console
sudo systemctl set-default multi-user.target

Overclock (Optional)

Safely boost performance in /boot/config.txt:
over_voltage=2
arm_freq=1750
Ensure adequate cooling!
Raspberry Pi Limitations:
  • YOLOv8/v12 models may be too slow for real-time use
  • MTCNN is not recommended due to low FPS
  • Ensure adequate cooling during extended operation
  • Use quality power supply (5V 3A minimum)

Verifying Installation

Run these checks to ensure everything is working correctly:
python -c "
import cv2
import mediapipe
import numpy
import scipy
print('✓ OpenCV:', cv2.__version__)
print('✓ MediaPipe:', mediapipe.__version__)
print('✓ NumPy:', numpy.__version__)
print('✓ SciPy:', scipy.__version__)
print('\nAll dependencies installed successfully!')
"

Troubleshooting

TensorFlow fails to install

On Raspberry Pi:
# Use pip3 explicitly
pip3 install tensorflow==2.16.1

# If still failing, try pre-built wheel:
pip3 install https://tf.kmtea.eu/whl/stable/tensorflow-2.16.1-cp311-cp311-linux_aarch64.whl
On other platforms:
  • Ensure Python version is 3.8-3.11 (3.12+ not yet fully supported)
  • Update pip: pip install --upgrade pip
  • Try installing with --no-cache-dir flag

OpenCV import errors

# Install missing system libraries
sudo apt-get install -y libgl1-mesa-glx libglib2.0-0

# Verify installation
python -c "import cv2; print(cv2.__version__)"

MediaPipe installation issues

MediaPipe requires specific dependencies:
sudo apt-get install -y \
    libopencv-dev \
    libopencv-contrib-dev \
    python3-opencv

pip3 install mediapipe==0.10.21

Out-of-memory errors

Reduce memory usage:
src/config.py
# Lower resolution
TARGET_ROI_SIZE = (240, 180)  # Instead of (320, 240)

# Reduce pyramid levels
LEVELS_RPI = 2  # Instead of 3

# Smaller buffer
BUFFER_SIZE = 150  # Instead of 200
On Raspberry Pi:
  • Increase swap space (see RPi installation steps)
  • Close unnecessary applications
  • Use lightweight desktop environment or run headless

Docker issues

# Check Docker daemon is running
sudo systemctl status docker

# Verify sufficient resources
docker system df

# View container logs
docker logs <container-id>

# Rebuild from scratch
docker build --no-cache -t evm-vital-signs .

Camera not detected

USB Webcam:
# List video devices
ls -l /dev/video*

# Test with v4l2
v4l2-ctl --list-devices
Raspberry Pi Camera:
# Enable camera
sudo raspi-config
# Interface Options > Camera > Enable

# Test capture
raspistill -o test.jpg

# For OpenCV, use:
# cap = cv2.VideoCapture(0, cv2.CAP_V4L2)

Updating the System

To update to the latest version:
# Pull latest changes
git pull origin main

# Update dependencies
pip install -r requirements.txt --upgrade

# Run tests to verify
pytest Python/unit_test

Uninstallation

# Deactivate and remove virtual environment
deactivate
rm -rf venv

# Remove project directory
cd ..
rm -rf evm-vital-signs-monitor

Next Steps

Quick Start

Run your first vital signs measurement

Configuration Guide

Customize system parameters for your use case

Choose a Detector

Select the optimal face detection model

Raspberry Pi Guide

Optimize for embedded deployment
