Overview
Multi-stream mode allows you to monitor multiple RTSP cameras simultaneously. Each stream:
- Runs in its own dedicated thread
- Has independent person detection
- Saves to a separate output directory
- Can be viewed together in a grid display
Multi-stream mode is designed for 2-16 cameras. For larger deployments, consider running multiple instances with different camera groups.
Basic Usage
Two Methods for Specifying Streams
- Direct URL List
- URL File
Pass URLs directly as command arguments:
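For example, using the --rtsp-list option described below (the camera URLs and credentials here are placeholders):

```shell
python main.py --rtsp-list \
  rtsp://admin:pass@192.168.1.10:554/stream1 \
  rtsp://admin:pass@192.168.1.11:554/stream1
```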
Command Options
- --rtsp-list: one or more RTSP stream URLs separated by spaces
- --rtsp-file: path to a text file containing RTSP URLs (one per line)
- Save mode: image for snapshots or video for clips
- --display: enable grid display window showing all streams
RTSP URL File Format
Create a text file (e.g. rtsp_streams.txt) with one RTSP URL per line. The file is parsed in main.py:119-124.
File format rules:
- One URL per line
- Blank lines are ignored
- Lines starting with # are comments
- Leading/trailing whitespace is trimmed
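The rules above can be sketched as a small parser. This is a hypothetical illustration of the format, not the actual code from main.py:119-124:

```python
def load_rtsp_urls(path):
    """Parse an RTSP URL file: one URL per line, blank lines ignored,
    '#' lines are comments, leading/trailing whitespace trimmed.
    (A sketch; the real parsing lives in main.py.)"""
    urls = []
    with open(path) as f:
        for raw in f:
            line = raw.strip()                 # trim whitespace
            if not line or line.startswith("#"):
                continue                       # skip blanks and comments
            urls.append(line)
    return urls
```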
Grid Display
When --display is enabled, all streams appear in a single composited window.
Enable Grid Display
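For example (cameras.txt is a placeholder URL file):

```shell
python main.py --rtsp-file cameras.txt --display
```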
Grid Layout
Streams are automatically arranged in a grid:

| Streams | Grid Layout | Example |
|---|---|---|
| 1 | 1×1 | Single full window |
| 2-4 | 2×2 | Four quadrants |
| 5-9 | 3×3 | Nine tiles |
| 10-16 | 4×4 | Sixteen tiles |
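One simple rule consistent with the table above is the smallest square grid that fits all streams. A sketch, not the project's actual layout code:

```python
import math

def grid_dims(n_streams):
    """Smallest square grid that fits n streams:
    1 -> 1x1, 2-4 -> 2x2, 5-9 -> 3x3, 10-16 -> 4x4."""
    side = math.ceil(math.sqrt(n_streams))
    return side, side
```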
- 2 Streams (2x2)
- 4 Streams (2x2)
- 6 Streams (3x3)
- Each stream shows person count and entry counter
- Green bounding boxes around detected persons
- Confidence scores displayed
- Streams update independently
- Press ‘q’ to quit
Without Display (Headless)
Omit --display to run without a GUI (headless servers):
- No display window
- Lower resource usage
- Still processes all streams
- Saves output files normally
- Logs to console
Threading Architecture
Each stream processes independently in its own thread.
Thread Creation
From multi_stream_manager.py:71-80:
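The pattern looks roughly like this (a hypothetical sketch, not the actual file contents; process_stream stands in for the real worker function):

```python
import threading

def start_stream_threads(urls, process_stream):
    """Launch one daemon worker thread per RTSP stream."""
    threads = []
    for stream_id, url in enumerate(urls, start=1):
        t = threading.Thread(
            target=process_stream,
            args=(stream_id, url),
            name=f"stream-{stream_id}",
            daemon=True,   # daemon threads die when the main program exits
        )
        t.start()
        threads.append(t)
    return threads
```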
Daemon Threads
Threads are daemon threads - they terminate when main program exits.
Independent Processing
Each stream has its own connection, detection loop, and reconnection logic.
Shared Detector
All threads share a single PersonDetector instance with thread-safe inference.
Separate Outputs
Each stream saves to its own stream_<id>/ directory.
Thread-Safe Detection
The PersonDetector uses a lock to ensure only one thread runs inference at a time.
From person_detector.py:228-244:
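The locking pattern can be sketched as follows (a hypothetical shape, not the actual contents of person_detector.py):

```python
import threading

class PersonDetector:
    """Sketch of lock-guarded inference."""
    def __init__(self):
        self._inference_lock = threading.Lock()

    def detect(self, frame):
        # Serialize inference: OpenCV's DNN forward pass is not thread-safe.
        with self._inference_lock:
            return self._forward(frame)

    def _forward(self, frame):
        # Placeholder for the actual net.forward() call.
        return []
```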
- OpenCV’s DNN module is not thread-safe
- Calling net.forward() from multiple threads simultaneously causes crashes
- The lock ensures sequential inference
- Other operations (frame reading, saving) remain parallel
Thread Monitoring
The main thread waits for all worker threads. From multi_stream_manager.py:82-94:
Processing stops when:
- User presses ‘q’ (closes display window)
- User presses Ctrl+C
- All streams disconnect and exhaust retries
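The wait loop can be sketched like this (hypothetical, not the actual contents of multi_stream_manager.py:82-94; short join timeouts keep Ctrl+C responsive):

```python
def wait_for_streams(threads):
    """Poll worker threads until all exit or the user interrupts."""
    try:
        while any(t.is_alive() for t in threads):
            for t in threads:
                t.join(timeout=0.5)   # short timeout so Ctrl+C is handled
    except KeyboardInterrupt:
        pass   # daemon workers exit with the main program
```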
Output Organization
Multi-stream mode creates a sub-directory for each stream.
Directory Structure
Stream ID Assignment
Stream IDs are assigned based on input method:
- --rtsp-list
- --rtsp-file
Auto-numbered starting from 1. From multi_stream_manager.py:56-59:
Directory Creation
From stream_processor.py:55-58:
Directories are created automatically when the first person is detected in each stream.
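The lazy creation can be sketched as below (a hypothetical illustration, not the actual contents of stream_processor.py:55-58):

```python
import os

def ensure_stream_dir(output_root, stream_id):
    """Create output_root/stream_<id>/ on first detection."""
    stream_dir = os.path.join(output_root, f"stream_{stream_id}")
    os.makedirs(stream_dir, exist_ok=True)   # idempotent: safe to call again
    return stream_dir
```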
Performance Considerations
CPU/GPU Bottlenecks
Inference bottleneck:
- Thread-safe lock means only one detection at a time
- With 8 streams and 100ms inference time:
- Each stream gets detection every 800ms minimum
- Plus frame_skip delays
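The worst-case arithmetic is simple: with a shared inference lock, one stream may wait for every other stream's inference before its own:

```python
def min_detection_interval_ms(n_streams, inference_ms):
    """Worst-case gap between detections on one stream
    when inference is serialized by the shared lock."""
    return n_streams * inference_ms

# e.g. 8 streams at 100 ms each: up to 800 ms per detection, before frame_skip
```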
Enable GPU acceleration
CUDA-enabled OpenCV reduces inference from ~100 ms to ~10 ms. See the GPU Acceleration Guide.
Memory Usage
Per-stream overhead:
- Frame buffer: ~6 MB (1920×1080 RGB)
- Video writer buffer: ~10-20 MB
- Network buffers: ~5 MB
- Total per stream: ~20-30 MB
- 16 streams: ~400 MB
- Plus model weights: ~250 MB (YOLOv4)
- Total: ~650 MB baseline
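Using the figures above, a rough baseline estimate (per-stream and model sizes are the document's approximate values, not measured constants):

```python
def estimate_memory_mb(n_streams, per_stream_mb=25, model_mb=250):
    """Rough baseline: ~20-30 MB per stream plus model weights (YOLOv4)."""
    return n_streams * per_stream_mb + model_mb

# 16 streams: 16 * 25 + 250 = 650 MB baseline
```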
Network Bandwidth
Bandwidth calculation:

| Resolution | Bitrate (typical) | Streams | Total Bandwidth |
|---|---|---|---|
| 1920×1080 | 4 Mbps | 4 | 16 Mbps |
| 1920×1080 | 4 Mbps | 8 | 32 Mbps |
| 1280×720 | 2 Mbps | 8 | 16 Mbps |
| 1280×720 | 2 Mbps | 16 | 32 Mbps |
Scaling Guidelines
| Streams | CPU (no GPU) | GPU (CUDA) | RAM | Network | Recommendation |
|---|---|---|---|---|---|
| 1-4 | OK | Excellent | Low | Low | Any hardware |
| 5-8 | Slow | Good | Medium | Medium | GPU recommended |
| 9-16 | Very Slow | OK | High | High | GPU required |
| 17+ | Unusable | Slow | Very High | Very High | Multiple instances |
Console Output
Understanding multi-stream console output:
- [Stream N] prefix identifies which stream each message is from
- Thread start messages confirm all streams launched
- Connection messages show parallel connection attempts
- Detection events include stream ID and bounding box coordinates
- Final summary shows per-stream statistics
Real-World Examples
Example 1: Retail Store (4 Cameras)
Setup:
- Front entrance
- Back entrance
- Checkout area
- Stock room
cameras.txt
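A plausible cameras.txt for this setup (the URLs and credentials are placeholders):

```txt
# Front entrance
rtsp://admin:pass@192.168.1.10:554/stream1
# Back entrance
rtsp://admin:pass@192.168.1.11:554/stream1
# Checkout area
rtsp://admin:pass@192.168.1.12:554/stream1
# Stock room
rtsp://admin:pass@192.168.1.13:554/stream1
```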
- Grid display shows all 4 cameras
- Snapshot saved when person enters each area
- Higher thresholds reduce false positives
Example 2: Warehouse (12 Cameras)
Setup:
- Loading docks (4)
- Main aisles (6)
- Offices (2)
- No display (12 streams = too many for useful grid)
- Higher frame_skip (30 = 1 fps) for performance
- Video mode captures full activity
- Run on server with GPU
Example 3: Office Building (8 Cameras)
Setup:
- Lobby
- Elevator banks (3)
- Conference rooms (2)
- Server room
- Parking garage
- Balanced settings
- Video clips for security review
- Grid display for monitoring
- Standard detection frequency
Troubleshooting
One stream failing affects others
Issue: One bad URL causes problems
Solution: Threads are independent. A failing stream won't crash others, but it will keep retrying. Comment out the bad URL in your file:
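For instance, in a hypothetical URL file:

```txt
rtsp://admin:pass@192.168.1.10:554/stream1
# rtsp://admin:pass@192.168.1.11:554/stream1   (offline: disabled)
rtsp://admin:pass@192.168.1.12:554/stream1
```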
Grid display not updating
Issue: Some tiles frozen or black
Possible causes:
- Stream connection issue
- Thread crashed
- Very slow inference
Check the console for [Stream N] messages. Missing messages indicate that stream has issues.
Slow detection with many streams
Issue: Long delays between detections
Cause: The thread-safe inference lock is the bottleneck
Solutions:
- Enable GPU acceleration: see GPU Acceleration
- Increase frame_skip
- Reduce number of streams: split into multiple instances
Memory errors with many streams
Solutions:
- Reduce number of streams
- Disable display: --display adds overhead
- Check available RAM
- Use lower resolution streams (configure at camera)
Can't press 'q' to quit
Issue: The ‘q’ key doesn’t stop processing
Cause: The display window must have focus
Solution:
- Click on the grid window first
- Then press ‘q’
- Or use Ctrl+C in terminal
Best Practices
Use URL Files
Easier to manage, edit, and version control than command-line lists.
Test Streams First
Verify each RTSP URL works with VLC before adding to multi-stream setup.
Start Small, Scale Up
Test with 2-4 streams first, then add more as you tune performance.
Monitor System Resources
Use htop or similar to watch CPU, RAM, and network usage.
Enable GPU for >4 Streams
GPU acceleration is essential for processing many streams efficiently.
Label Your Streams
Use comments in URL file to document which camera is which.
Next Steps
GPU Acceleration
Essential for multi-stream performance
Configuration Tuning
Optimize settings for your camera setup
Single Stream Guide
Understand single-stream processing
Model Setup
Configure detection models