Get BeamFinder up and running quickly to detect drones in images and export bounding box coordinates to CSV.

Prerequisites

  • Python 3.10 or higher
  • CUDA-capable GPU (recommended for faster inference)
  • Test images to run detection on

Quick start

Step 1: Install dependencies

Install the required Python packages:
pip install "ultralytics>=8.4.0" "matplotlib>=3.7.0"
The version specifiers are quoted so the shell does not treat > as output redirection.
On Lightning.ai A100 instances, CUDA and PyTorch come pre-installed.
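To confirm the environment is ready before moving on, a quick stdlib-only check can verify that the required packages (matching the install command above, plus torch, which ships pre-installed on Lightning.ai instances) are importable:

```python
import importlib.util

def missing_packages(packages=("ultralytics", "matplotlib", "torch")):
    """Return the packages from the given list that cannot be imported."""
    return [pkg for pkg in packages if importlib.util.find_spec(pkg) is None]

if __name__ == "__main__":
    missing = missing_packages()
    if missing:
        print("Missing packages, install with pip:", ", ".join(missing))
    else:
        print("All detection dependencies are installed.")
```

If anything is reported missing, re-run the pip command above before continuing.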
Step 2: Get a trained model

You need a trained model checkpoint to run detection. You can either:
  1. Train your own model following the Installation and Training Guide, or
  2. Use a pre-trained checkpoint if available
The detection script expects the model at runs/drone_detect/weights/best.pt.
Step 3: Prepare test images

Place your test images in the data/images/test/ directory:
mkdir -p data/images/test
# Copy your images here
Images should be in common formats (JPG, PNG) with drones visible in the frame.
Step 4: Run detection

Create a detect.py script:
detect.py
import csv
from pathlib import Path
import torch
from ultralytics import YOLO

# Configuration
SCRIPT_DIR = Path(__file__).resolve().parent
MODEL = str(SCRIPT_DIR / "runs" / "drone_detect" / "weights" / "best.pt")
IMAGE_DIR = SCRIPT_DIR / "data" / "images" / "test"
OUTPUT_DIR = SCRIPT_DIR / "output"
CONF = 0.4  # Confidence threshold
IMGSZ = 960  # Image size

if __name__ == "__main__":
    # GPU optimizations
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True
    torch.backends.cudnn.benchmark = True

    model = YOLO(MODEL)
    csv_path = OUTPUT_DIR / "detections.csv"
    annotated_dir = OUTPUT_DIR / "annotated"
    csv_path.parent.mkdir(parents=True, exist_ok=True)
    annotated_dir.mkdir(parents=True, exist_ok=True)

    total = 0

    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "x_center", "y_center", "width", "height", "confidence", "class"])

        results = model.predict(
            source=str(IMAGE_DIR), conf=CONF, imgsz=IMGSZ,
            save=True, project=str(OUTPUT_DIR), name="annotated",
            exist_ok=True, half=True, batch=16,
        )
        for r in results:
            name = Path(r.path).name
            if r.boxes is not None and len(r.boxes):
                for box in r.boxes:
                    cx, cy, w, h = box.xywh[0].tolist()
                    writer.writerow([name, round(cx, 2), round(cy, 2),
                                     round(w, 2), round(h, 2),
                                     round(box.conf.item(), 4),
                                     r.names[int(box.cls.item())]])
                    total += 1

    print(f"{total} detections saved to {csv_path}")
Run the script:
python detect.py
Step 5: View results

Detection results are saved to two locations:
  • CSV file: output/detections.csv with bounding box coordinates
  • Annotated images: output/annotated/ with visual bounding boxes
Example CSV output:
image,x_center,y_center,width,height,confidence,class
image_BS1_10006_17_56_03.jpg,480.23,270.45,120.5,68.3,0.9234,drone
image_BS1_10009_17_56_04.jpg,512.67,245.89,135.2,75.4,0.8876,drone

Configuration options

Adjust detection parameters in detect.py:
CONF = 0.6  # Only keep detections above 60% confidence
IMGSZ = 960  # Inference image size (pixels)
Lower confidence thresholds produce more detections but may include false positives. Higher thresholds are more conservative but may miss some drones.
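Rather than re-running inference after raising CONF, an existing set of detections can be re-filtered after the fact. A minimal sketch, assuming rows parsed from the CSV with csv.DictReader:

```python
def filter_by_confidence(rows, min_conf=0.6):
    """Keep only detection rows whose confidence meets the threshold."""
    return [r for r in rows if float(r["confidence"]) >= min_conf]

# Usage with rows already read from output/detections.csv:
#   high_conf = filter_by_confidence(rows, min_conf=0.6)
```

This only lets you tighten a threshold; detections below the CONF used at inference time were never written, so loosening requires re-running detection.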

Output format

The CSV file contains 7 columns for each detection:
Column       Type     Description
image        string   Source image filename
x_center     float    Bounding box center X coordinate (pixels)
y_center     float    Bounding box center Y coordinate (pixels)
width        float    Bounding box width (pixels)
height       float    Bounding box height (pixels)
confidence   float    Detection confidence score (0-1)
class        string   Object class (always "drone")
See the Output Format reference for coordinate conversion formulas.
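The CSV stores boxes in center format; many tools expect corner coordinates instead. A sketch of the standard center-to-corner conversion (consult the Output Format reference for the authoritative formulas):

```python
def xywh_to_xyxy(cx, cy, w, h):
    """Convert a center-format box to (x1, y1, x2, y2) corner format, in pixels."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```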

Next steps

  • Train your own model: Fine-tune YOLO26s on the drone dataset
  • Dataset setup: Prepare the DeepSense Scenario 23 dataset
  • Configuration: Explore all detection parameters
  • Troubleshooting: Solve common problems

Troubleshooting

Problem: The script fails because no model checkpoint exists at the expected path.
Solution: Train a model first using the Training Guide, or verify the MODEL path points to your checkpoint location.
Problem: CSV is empty or only contains the header row.
Solutions:
  • Lower the confidence threshold: CONF = 0.25
  • Verify images contain drones
  • Check that the model was trained on similar data
Problem: The GPU runs out of memory during inference.
Solution: Reduce the batch size in the model.predict() call:
results = model.predict(..., batch=8)  # or batch=4
Problem: Inference is slow.
Solutions:
  • Enable GPU inference (requires CUDA)
  • Use FP16 precision: half=True (already enabled)
  • Increase batch size: batch=32
  • Enable cudnn.benchmark (already enabled)
For more troubleshooting help, see the Troubleshooting page.
