Overview
BeamFinder is a specialized detection pipeline that uses YOLO26s to locate drones in images and outputs bounding box coordinates for THz (terahertz) beam steering applications. Built as part of a research study on line-of-sight beam steering for THz communication, BeamFinder fine-tunes state-of-the-art object detection models on drone imagery and provides precise spatial coordinates for targeting systems.

Quick Start

Detect drones in under 5 minutes with our quickstart guide
Installation
Complete installation and setup instructions for training and inference
Training Guide
Fine-tune YOLO26s on custom drone datasets with optimized hyperparameters
Detection Guide
Run inference and export bounding box coordinates to CSV
How It Works
BeamFinder operates in two phases:

Training Phase

Fine-tunes YOLO26s on the DeepSense Scenario 23 drone dataset (11,387 annotated images) to learn drone-specific features. The pretrained COCO model has no “drone” class, so this fine-tuning step is essential.

Detection Phase

Runs inference on new images, filters detections by a configurable confidence threshold, and exports bounding box coordinates to CSV for beam steering.
Key Features
YOLO26s Architecture
BeamFinder uses the latest YOLO26s model from Ultralytics, providing state-of-the-art detection accuracy with real-time inference speeds. The model is optimized for single-class detection (drones) with custom training augmentations.

Production-Ready Pipeline
- CSV Export: Outputs detection coordinates in a structured format ready for beam steering systems
- Batch Inference: Process multiple images efficiently with configurable batch sizes
- Confidence Filtering: Configurable confidence threshold (default 0.4) to control precision/recall tradeoff
- Annotated Outputs: Generates visualizations with bounding boxes overlaid on images
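The confidence filtering and CSV export described above can be sketched with the standard library alone. The detections below are fabricated stand-ins for real model output, and `export_detections` is a hypothetical helper, not BeamFinder's actual API:

```python
import csv

# Hypothetical raw detections: (image, x_center, y_center, width, height, confidence)
detections = [
    ("frame_0001.jpg", 481.5, 263.0, 42.0, 31.5, 0.91),
    ("frame_0001.jpg", 120.0, 400.0, 18.0, 12.0, 0.22),  # below threshold, dropped
    ("frame_0002.jpg", 733.2, 141.8, 55.1, 40.7, 0.64),
]

CONF_THRESHOLD = 0.4  # default threshold; raise for precision, lower for recall

def export_detections(dets, path="output/detections.csv", threshold=CONF_THRESHOLD):
    """Write detections above the confidence threshold in the documented CSV schema."""
    kept = [d for d in dets if d[5] >= threshold]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "x_center", "y_center",
                         "width", "height", "confidence", "class"])
        for image, xc, yc, w, h, conf in kept:
            writer.writerow([image, xc, yc, w, h, conf, "drone"])
    return len(kept)
```

With the default 0.4 threshold, the second detection above would be filtered out before export.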
Optimized for A100 GPUs
The training pipeline includes A100-specific optimizations: compile=True enables torch.compile support (10-30% faster training on PyTorch 2.x).
Dataset
BeamFinder is trained on DeepSense Scenario 23, a comprehensive drone detection dataset:

- 11,387 total images with YOLO-format bounding box annotations
- 7,970 training images (70%)
- 1,708 validation images (15%)
- 1,709 test images (15%)
- 960×540 resolution (16:9 aspect ratio)
- Single class: drone
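In the Ultralytics ecosystem, a dataset with this layout is typically described by a small YAML config that the trainer consumes. A hypothetical sketch (file name and paths invented for illustration):

```yaml
# drone.yaml — hypothetical dataset config for the DeepSense Scenario 23 splits
path: datasets/scenario23
train: images/train   # 7,970 images
val: images/val       # 1,708 images
test: images/test     # 1,709 images
names:
  0: drone            # single class
```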
Images come from 51 different capture sessions with varying lighting conditions, backgrounds, and drone positions. The dataset is shuffled before splitting to ensure diverse representation across all sets.
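That shuffle-then-split step can be sketched as follows. `split_dataset` is a hypothetical helper, but the arithmetic reproduces the 70/15/15 split sizes listed above:

```python
import random

def split_dataset(filenames, train_frac=0.70, val_frac=0.15, seed=42):
    """Shuffle filenames, then slice into train/val/test by fraction."""
    files = list(filenames)
    random.Random(seed).shuffle(files)  # shuffle before splitting for diversity
    n_train = int(len(files) * train_frac)
    n_val = int(len(files) * val_frac)
    train = files[:n_train]
    val = files[n_train:n_train + n_val]
    test = files[n_train + n_val:]  # remainder
    return train, val, test

train, val, test = split_dataset(f"img_{i:05d}.jpg" for i in range(11_387))
# Sizes: 7970 / 1708 / 1709, matching the splits above.
```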
Output Format
Detections are saved to output/detections.csv with the following structure:
| Column | Type | Description |
|---|---|---|
| image | string | Source image filename |
| x_center | float | Bounding box center X coordinate (pixels) |
| y_center | float | Bounding box center Y coordinate (pixels) |
| width | float | Bounding box width (pixels) |
| height | float | Bounding box height (pixels) |
| confidence | float | Detection confidence score (0-1) |
| class | string | Detected object class (“drone”) |
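Downstream consumers often need corner coordinates rather than the center/size encoding used in the CSV. The conversion is standard geometry; the row values below are made up for illustration:

```python
def to_corners(x_center, y_center, width, height):
    """Convert center/size box encoding to (x1, y1, x2, y2) pixel corners."""
    x1 = x_center - width / 2
    y1 = y_center - height / 2
    return x1, y1, x1 + width, y1 + height

# Example row from detections.csv (values fabricated):
row = {"image": "frame_0001.jpg", "x_center": 480.0, "y_center": 270.0,
       "width": 100.0, "height": 60.0, "confidence": 0.91, "class": "drone"}
corners = to_corners(row["x_center"], row["y_center"], row["width"], row["height"])
# corners == (430.0, 240.0, 530.0, 300.0)
```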
Performance
After 100 epochs of training with optimized hyperparameters:

- Image size: 960×540 (rectangular training with rect=True)
- Batch size: Automatic (90% GPU memory utilization)
- Data augmentation: Rotation (±15°), vertical flip (50%), scale (0.9), translate (0.2)
- Learning rate: Cosine annealing scheduler
- Early stopping: 20 epochs patience
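The configuration above maps onto the Ultralytics train() keyword arguments roughly as follows. Treat this as a sketch rather than the project's exact invocation; the dataset config path is hypothetical:

```python
# Training configuration mirroring the hyperparameters listed above.
# Keyword names follow the Ultralytics train() API.
train_args = dict(
    data="drone.yaml",   # hypothetical dataset config path
    epochs=100,
    imgsz=960,
    rect=True,           # rectangular training, preserving the 960×540 aspect
    batch=0.90,          # automatic batch size targeting ~90% GPU memory
    patience=20,         # early stopping after 20 epochs without improvement
    cos_lr=True,         # cosine annealing learning-rate schedule
    degrees=15.0,        # rotation augmentation (±15°)
    flipud=0.5,          # vertical flip 50% of the time
    scale=0.9,           # scale augmentation
    translate=0.2,       # translation augmentation
    compile=True,        # torch.compile on PyTorch 2.x (per the A100 notes above)
)

# With ultralytics installed, training would then be roughly:
#   from ultralytics import YOLO
#   YOLO("yolo26s.pt").train(**train_args)
```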
Use Cases
- THz Beam Steering: Provide real-time target coordinates for terahertz communication systems
- Counter-UAS Systems: Detect and track unauthorized drones in restricted airspace
- Research: Study line-of-sight communication challenges with mobile aerial targets
- Dataset Generation: Create annotated datasets for downstream ML tasks
Next Steps
Run Your First Detection
Follow the quickstart to detect drones in under 5 minutes
Set Up Your Environment
Install dependencies and configure your training environment