Prerequisites
- Python 3.10 or higher
- CUDA-capable GPU (recommended for faster inference)
- Test images to run detection on
Quick start
Install dependencies
Install the required Python packages:
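A typical install, assuming the project uses the `ultralytics` package (check the project's requirements file for the exact list):

```shell
pip install ultralytics
```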
On Lightning.ai A100 instances, CUDA and PyTorch come pre-installed.
Get a trained model
You need a trained model checkpoint to run detection. You can either:
- Train your own model following the Installation and Training Guide, or
- Use the pre-trained checkpoint at runs/drone_detect/weights/best.pt, if available

Prepare test images
Place your test images in the data/images/test/ directory. Images should be in common formats (JPG, PNG) with drones visible in the frame.

Configuration options
Adjust detection parameters in detect.py:
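The exact variable names depend on detect.py; a plausible set is sketched below, reusing the MODEL and CONF names that appear in the troubleshooting section (the other names are hypothetical):

```python
# Hypothetical configuration block -- only MODEL and CONF are named in this
# guide; SOURCE and IOU are illustrative. Check detect.py for the real names.
MODEL = "runs/drone_detect/weights/best.pt"  # trained checkpoint
SOURCE = "data/images/test/"                 # directory of test images
CONF = 0.25                                  # confidence threshold (0-1)
IOU = 0.45                                   # NMS IoU threshold
```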
Lower confidence thresholds produce more detections but may include false positives. Higher thresholds are more conservative but may miss some drones.
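The trade-off can be illustrated with a small filtering sketch (the detections and scores below are made up):

```python
def filter_detections(detections, conf_threshold):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["confidence"] >= conf_threshold]

detections = [
    {"class": "drone", "confidence": 0.92},
    {"class": "drone", "confidence": 0.41},  # plausible but uncertain
    {"class": "drone", "confidence": 0.12},  # likely a false positive
]

print(len(filter_detections(detections, 0.25)))  # low threshold: more hits
print(len(filter_detections(detections, 0.60)))  # high threshold: fewer hits
```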
Output format
The CSV file contains 7 columns for each detection:

| Column | Type | Description |
|---|---|---|
| image | string | Source image filename |
| x_center | float | Bounding box center X coordinate (pixels) |
| y_center | float | Bounding box center Y coordinate (pixels) |
| width | float | Bounding box width (pixels) |
| height | float | Bounding box height (pixels) |
| confidence | float | Detection confidence score (0-1) |
| class | string | Object class (always "drone") |
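The output can be loaded with the standard csv module. A sketch, using an inline sample in place of the real output file (the filename detections.csv is hypothetical; use whatever path detect.py writes):

```python
import csv
import io

# Inline sample standing in for the detect.py output file.
sample = io.StringIO(
    "image,x_center,y_center,width,height,confidence,class\n"
    "frame_001.jpg,412.5,208.0,36.2,24.8,0.91,drone\n"
)

# For a real file, replace `sample` with open("detections.csv", newline="").
rows = list(csv.DictReader(sample))
for row in rows:
    print(row["image"], float(row["confidence"]))
```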
Next steps
- Train your own model: Fine-tune YOLO26s on the drone dataset
- Dataset setup: Prepare the DeepSense Scenario 23 dataset
- Configuration: Explore all detection parameters
- Troubleshooting: Solve common problems
Troubleshooting
FileNotFoundError: Model not found
Solution: Train a model first using the Training Guide, or verify that the MODEL path points to your checkpoint location.
No detections in CSV
Problem: The CSV is empty or contains only the header row.

Solutions:
- Lower the confidence threshold (e.g. CONF = 0.25)
- Verify that the images contain drones
- Check that the model was trained on similar data
CUDA out of memory
Solution: Reduce the batch size in the model.predict() call.
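One way to keep memory bounded, whatever batch argument model.predict() accepts, is to chunk the image list yourself. A sketch (the predict call is shown only in a comment, since it needs a loaded model):

```python
def chunked(paths, batch_size):
    """Yield successive batches of image paths."""
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]

# Usage sketch, assuming `model` is a loaded detector:
# for batch in chunked(image_paths, 8):
#     results = model.predict(batch)

paths = [f"img_{i}.jpg" for i in range(10)]
print([len(b) for b in chunked(paths, 4)])  # batches of 4, 4, 2
```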
Slow inference
Solutions:
- Enable GPU inference (requires CUDA)
- Use FP16 precision: half=True (already enabled)
- Increase the batch size: batch=32
- Enable cudnn.benchmark (already enabled)
For more troubleshooting help, see the Troubleshooting page.