Overview
The TrashClassificator class is the primary interface for processing video frames to detect, classify, and visualize trash objects. It orchestrates the segmentation model and drawing components to provide a complete trash detection pipeline.
This class automatically initializes both segmentation and drawing components, making it ready to use immediately after instantiation.
Class Definition
class TrashClassificator:
    def __init__(self):
        self.segmentation: SegmentationModelInterface = SegmentationModel()
        self.draw_detections: DrawingInterface = Drawing()
Constructor
__init__()
Initializes the TrashClassificator with segmentation and drawing components.
Attributes Initialized:
segmentation
SegmentationModelInterface
Instance of SegmentationModel for trash detection and tracking
draw_detections
DrawingInterface
Instance of Drawing for visualizing detections on frames
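The interface-typed attributes suggest the class is programmed against small abstract contracts. A hedged sketch of what such abstract base classes might look like, with method signatures inferred from the source reference in this document (the actual interface definitions may differ):

```python
from abc import ABC, abstractmethod

import numpy as np


class SegmentationModelInterface(ABC):
    """Sketch of the segmentation contract (inferred, not confirmed)."""

    @abstractmethod
    def inference(self, image: np.ndarray):
        """Return tracked results, class names, and the compute device."""


class DrawingInterface(ABC):
    """Sketch of the drawing contract (inferred, not confirmed)."""

    @abstractmethod
    def draw(self, image: np.ndarray, trash, classes, device) -> np.ndarray:
        """Return the image with detections drawn on it."""
```

Any concrete segmentation or drawing component would subclass these and implement the single abstract method.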
Example:
from trash_classificator.processor import TrashClassificator

# Initialize the classificator
classificator = TrashClassificator()
Methods
frame_processing()
Processes a single video frame to detect trash objects and draw visualizations.
def frame_processing(self, image: np.ndarray):
Parameters
image (np.ndarray)
Input image as a NumPy array in BGR format (the OpenCV convention). The image must be a valid 3-channel color image.
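Before passing a frame in, the BGR format requirement can be checked with NumPy alone. The helper name below is hypothetical, not part of the library:

```python
import numpy as np


def is_valid_bgr_frame(image) -> bool:
    """Return True if `image` looks like a 3-channel 8-bit BGR frame (hypothetical helper)."""
    return (
        isinstance(image, np.ndarray)
        and image.ndim == 3
        and image.shape[2] == 3
        and image.dtype == np.uint8
    )


frame = np.zeros((480, 640, 3), dtype=np.uint8)  # blank 640x480 BGR frame
print(is_valid_bgr_frame(frame))           # True
print(is_valid_bgr_frame(frame[:, :, 0]))  # False: single channel
```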
Returns
A tuple of (image, status):
image (np.ndarray) - the processed image with trash detections drawn if trash was detected, otherwise the original image
status (str) - a detection status message:
"No trash detected" - no trash objects were found in the frame
"Trash detected" - trash objects were successfully detected and drawn
The method returns early with "No trash detected" status if no tracking IDs are assigned to detected objects.
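Callers typically branch on the returned status string. The stand-in below is hypothetical and only mimics the (image, status) contract, so the control flow can be shown without loading the real model:

```python
import numpy as np


def frame_processing_stub(image: np.ndarray):
    """Hypothetical stand-in mimicking the (image, status) return contract."""
    # Pretend no tracking IDs were assigned in this frame.
    return image, 'No trash detected'


frame = np.zeros((480, 640, 3), dtype=np.uint8)
processed, status = frame_processing_stub(frame)

if status == 'No trash detected':
    # On the early-return path the original frame comes back unchanged.
    assert processed is frame
print(status)
```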
Implementation Details
The method follows a two-step pipeline:
1. Trash Segmentation: uses the segmentation model to detect and track trash objects
2. Visualization: draws bounding boxes, masks, and tracking lines on detected trash
Source Code Reference:
def frame_processing(self, image: np.ndarray):
    # step 1: trash segmentation
    trash_image = image.copy()
    trash_track, trash_classes, device = self.segmentation.inference(trash_image)
    for trash in trash_track:
        if trash.boxes.id is None:
            return image, 'No trash detected'
        # step 2: draw detections
        image_draw = image.copy()
        image_draw = self.draw_detections.draw(image_draw, trash, trash_classes, device)
        return image_draw, 'Trash detected'
    # Fallback: the segmentation produced no results at all
    return image, 'No trash detected'
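The two-step control flow can be exercised in isolation by wiring in stub components. Everything below (MiniClassificator and both stubs) is hypothetical scaffolding that assumes only the interfaces shown in the class definition:

```python
import numpy as np


class StubSegmentation:
    """Hypothetical stand-in for SegmentationModel."""

    def inference(self, image):
        # No objects detected: empty track list, no classes, CPU device.
        return [], {}, 'cpu'


class StubDrawing:
    """Hypothetical stand-in for Drawing."""

    def draw(self, image, trash, classes, device):
        return image


class MiniClassificator:
    """Mirror of the two-step pipeline with stub components swapped in."""

    def __init__(self):
        self.segmentation = StubSegmentation()
        self.draw_detections = StubDrawing()

    def frame_processing(self, image: np.ndarray):
        # step 1: trash segmentation
        trash_track, trash_classes, device = self.segmentation.inference(image.copy())
        for trash in trash_track:
            if trash.boxes.id is None:
                return image, 'No trash detected'
            # step 2: draw detections
            image_draw = self.draw_detections.draw(image.copy(), trash, trash_classes, device)
            return image_draw, 'Trash detected'
        return image, 'No trash detected'


frame = np.zeros((64, 64, 3), dtype=np.uint8)
_, status = MiniClassificator().frame_processing(frame)
print(status)  # No trash detected
```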
Usage Example
Complete example showing how to use TrashClassificator with video processing:
import cv2
from trash_classificator.processor import TrashClassificator

# Initialize the classificator
classificator = TrashClassificator()

# Open video file or camera
cap = cv2.VideoCapture('video.mp4')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Process the frame
    processed_frame, status = classificator.frame_processing(frame)

    # Display the result
    cv2.imshow('Trash Detection', processed_frame)
    print(f"Status: {status}")

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Real-time Camera Processing
import cv2
from trash_classificator.processor import TrashClassificator

classificator = TrashClassificator()
cap = cv2.VideoCapture(0)  # Use default camera

while True:
    ret, frame = cap.read()
    if not ret:
        break

    result_frame, status = classificator.frame_processing(frame)

    # Add status text to frame
    cv2.putText(result_frame, status, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

    cv2.imshow('Real-time Trash Detection', result_frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Integration
The TrashClassificator requires:
numpy for array operations
SegmentationModel from trash_classificator.segmentation.main
Drawing from trash_classificator.drawing.main
OpenCV (cv2) for image processing in user applications
Performance Considerations
Image copies are created at each processing step to preserve the original input
The segmentation model uses GPU acceleration when available (managed by DeviceManager)
Tracking persistence is maintained across frames for consistent object IDs
Stream processing mode is enabled for efficient video frame handling
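The copy-before-modify point above can be seen directly with NumPy: drawing on a copy leaves the caller's frame untouched, whereas a plain slice is a view that shares memory with the original:

```python
import numpy as np

frame = np.zeros((4, 4, 3), dtype=np.uint8)

# Modifying a copy does not touch the original frame.
work = frame.copy()
work[0, 0] = 255
print(frame[0, 0].tolist())  # [0, 0, 0]

# A plain slice is a view: writes propagate back to the original.
view = frame[:2]
view[0, 0] = 255
print(frame[0, 0].tolist())  # [255, 255, 255]
```

This is why frame_processing copies the input before segmentation and again before drawing: the caller's frame is never written to.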