Overview

The BoundingBoxDrawer class implements the BoundingBoxDrawerInterface to visualize object detection bounding boxes with class labels on images. It uses Ultralytics’ Annotator utility for consistent styling.

Class Definition

BoundingBoxDrawer

Draws bounding boxes with class labels using the Ultralytics annotation framework.
from trash_classificator.drawing.main import BoundingBoxDrawer

bbox_drawer = BoundingBoxDrawer()

Attributes

thickness
int
Line thickness for bounding box borders. Default is 2 pixels.

Methods

draw

Draws bounding boxes with class labels on an image.
result = bbox_drawer.draw(image, boxes, trash_classes, classes)

Parameters

image
np.ndarray
required
Input image in BGR format (OpenCV format). The image will be annotated with bounding boxes and labels.
boxes
np.ndarray
required
Array of bounding boxes in xyxy format, where each box is [x1, y1, x2, y2]:
  • x1, y1: Top-left corner coordinates
  • x2, y2: Bottom-right corner coordinates
Shape: (N, 4) where N is the number of boxes.
trash_classes
Dict[int, str]
required
Dictionary mapping class IDs to class names. For example:
{
    0: "plastic",
    1: "paper",
    2: "metal"
}
Used to display readable labels on bounding boxes.
classes
List[int]
required
List of class IDs corresponding to each bounding box. Used to retrieve class names from trash_classes and determine box colors.
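Before calling `draw`, it can help to see how `classes` and `trash_classes` interact: each class ID is looked up in the mapping to produce the label text. A minimal sketch of that lookup in plain Python (the helper name `labels_for` is hypothetical; the real class does this inline):

```python
def labels_for(classes, trash_classes):
    """Map each per-box class ID to its display label.

    Raises KeyError if a class ID is missing from the mapping,
    which is the same failure mode draw() would hit internally.
    """
    return [trash_classes[cls] for cls in classes]

trash_classes = {0: "plastic", 1: "paper", 2: "metal"}
print(labels_for([0, 1, 2, 0], trash_classes))  # ['plastic', 'paper', 'metal', 'plastic']
```

Note that `classes` must have one entry per row of `boxes`; a mismatch silently drops boxes (or labels) when the two are zipped together.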

Returns

result
np.ndarray
The annotated image with:
  • Bounding boxes drawn in class-specific colors
  • A class label displayed on each box
  • Borders drawn at the configured thickness (default 2 pixels)

Annotator Usage

Internally, the drawer uses Ultralytics’ Annotator class:
from ultralytics.utils.plotting import Annotator, colors

annotator = Annotator(image, line_width=self.thickness)
for box, cls in zip(boxes, classes):
    annotator.box_label(box, trash_classes[cls], color=colors(cls, True))
return annotator.result()
The colors() function from Ultralytics generates consistent, visually distinct colors for each class ID.
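The exact palette is an Ultralytics implementation detail, but the idea is easy to illustrate: index a fixed palette by class ID, wrapping around when the ID exceeds the palette size. The sketch below mirrors the `colors(i, bgr)` calling convention; the palette values are made up for illustration, not Ultralytics' actual colors:

```python
# Hypothetical fixed palette (RGB tuples); Ultralytics ships its own, larger one.
PALETTE = [(255, 56, 56), (255, 157, 151), (255, 112, 31), (61, 219, 134)]

def class_color(cls_id, bgr=False):
    """Return a deterministic color for a class ID.

    The same ID always maps to the same color, so boxes of one
    class look consistent across frames and images.
    """
    r, g, b = PALETTE[cls_id % len(PALETTE)]
    return (b, g, r) if bgr else (r, g, b)

print(class_color(0))        # (255, 56, 56)
print(class_color(0, True))  # (56, 56, 255) — channel order flipped for OpenCV
print(class_color(5))        # wraps around: same color as class 1
```

`bgr=True` matters here because the input image is in OpenCV's BGR format, so the color tuples must be flipped to render correctly.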

Usage Example

import cv2
import numpy as np
from trash_classificator.drawing.main import BoundingBoxDrawer

# Initialize drawer
bbox_drawer = BoundingBoxDrawer()

# Load image
image = cv2.imread("trash_scene.jpg")

# Bounding boxes in xyxy format
boxes = np.array([
    [100, 100, 200, 200],  # Plastic bottle
    [300, 150, 400, 250],  # Paper cup
    [500, 100, 600, 180]   # Metal can
])

# Class mapping
trash_classes = {
    0: "plastic",
    1: "paper",
    2: "metal"
}

# Class IDs for each box
classes = [0, 1, 2]

# Draw bounding boxes
annotated_image = bbox_drawer.draw(image, boxes, trash_classes, classes)

# Save or display
cv2.imwrite("result.jpg", annotated_image)

Customization

To change the line thickness:
bbox_drawer = BoundingBoxDrawer()
bbox_drawer.thickness = 3  # Thicker borders

Interface

BoundingBoxDrawerInterface

Abstract base class defining the contract for bounding box drawing implementations.
from abc import ABC, abstractmethod
import numpy as np
from typing import Dict, List

class BoundingBoxDrawerInterface(ABC):
    @abstractmethod
    def draw(self, image: np.ndarray, boxes: np.ndarray, trash_classes: Dict[int, str], classes: List[int]) -> np.ndarray:
        """Draw labeled bounding boxes on the image and return the annotated result."""
        raise NotImplementedError
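Because the contract is a single `draw` method, test doubles are cheap to write. The stub below (hypothetical, shown for unit testing code that depends on the interface) satisfies the contract without pulling in OpenCV or Ultralytics; type hints are dropped so the sketch needs no NumPy import:

```python
from abc import ABC, abstractmethod

class BoundingBoxDrawerInterface(ABC):
    @abstractmethod
    def draw(self, image, boxes, trash_classes, classes):
        """Draw labeled bounding boxes on the image."""
        raise NotImplementedError

class NoOpBoundingBoxDrawer(BoundingBoxDrawerInterface):
    """Test double: records each call and returns the image untouched."""

    def __init__(self):
        self.calls = []

    def draw(self, image, boxes, trash_classes, classes):
        self.calls.append((len(boxes), list(classes)))
        return image

stub = NoOpBoundingBoxDrawer()
img = [[0, 0], [0, 0]]  # stand-in for an np.ndarray
out = stub.draw(img, [[1, 2, 3, 4]], {0: "plastic"}, [0])
print(out is img, stub.calls)  # True [(1, [0])]
```

Swapping a stub like this into the pipeline lets you verify call order and arguments without rendering anything.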

Integration with Detection Pipeline

Typically used in the main drawing pipeline after mask visualization:
from trash_classificator.drawing.main import Drawing

drawing = Drawing()
image = drawing.draw(image, trash_track, trash_classes, device)
The Drawing class orchestrates:
  1. Mask drawing (semi-transparent overlays)
  2. Bounding box drawing (labeled boxes)
  3. Track drawing (motion trails)
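The three-stage orchestration above can be sketched as a small pipeline (the `DrawingPipeline` name and callable-stage design are hypothetical; the real `Drawing` class wires up concrete mask, box, and track drawers):

```python
class DrawingPipeline:
    """Apply a sequence of drawing stages to an image, in order."""

    def __init__(self, stages):
        self.stages = stages  # each stage: image -> image

    def draw(self, image):
        for stage in self.stages:
            image = stage(image)  # each stage returns the annotated image
        return image

# Each stage appends its name so the execution order is observable.
log = []
pipeline = DrawingPipeline([
    lambda img: (log.append("masks"), img)[1],
    lambda img: (log.append("boxes"), img)[1],
    lambda img: (log.append("tracks"), img)[1],
])
pipeline.draw(object())
print(log)  # ['masks', 'boxes', 'tracks']
```

Running masks first means the semi-transparent overlays never obscure box borders or labels drawn in later stages.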
