
System Requirements

Before installing, ensure your system meets these requirements:

Software Requirements

  • Python: 3.8 or higher
  • Operating System: Windows, macOS, or Linux
  • Package Manager: pip or conda
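If you want to fail fast before installing anything, the Python requirement can be checked programmatically; a minimal sketch:

```python
import sys

def check_python_version(required=(3, 8)):
    """Return True when the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= required

if __name__ == "__main__":
    if check_python_version():
        print(f"OK: Python {sys.version_info.major}.{sys.version_info.minor}")
    else:
        raise SystemExit("Python 3.8 or higher is required")
```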

Hardware Requirements

Minimum Configuration

  • CPU: Multi-core processor (4+ cores)
  • RAM: 8GB
  • Storage: 5GB free space
  • Camera: USB webcam or video source

Recommended Configuration

  • CPU: Modern multi-core processor
  • GPU: NVIDIA GPU with 4GB+ VRAM (RTX 2060 or better)
  • RAM: 16GB or more
  • Storage: 10GB free space
  • Camera: HD webcam (720p+)

Installation Steps

1. Clone the Repository

Clone the project from GitHub:
git clone https://github.com/AprendeIngenia/trash-classification.git
cd trash-classification
2. Create a Virtual Environment (Recommended)

Create and activate a Python virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
3. Install Dependencies

Install all required packages using pip:
pip install -r requirements.txt
The main dependencies include:
  • torch 2.5.0 - PyTorch deep learning framework
  • ultralytics 8.3.22 - YOLO implementation
  • opencv-python 4.10.0.84 - Computer vision library
  • numpy 2.1.2 - Numerical computing
  • pyserial 3.5 - Serial communication for robotics
  • matplotlib 3.9.2 - Visualization tools
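You can confirm these distributions are installed without importing them, using the standard library's importlib.metadata; the names below are the pip distribution names from the list above:

```python
from importlib import metadata

REQUIRED = ["torch", "ultralytics", "opencv-python", "numpy", "pyserial", "matplotlib"]

def missing_packages(names):
    """Return the subset of distributions that are not installed."""
    missing = []
    for name in names:
        try:
            metadata.version(name)
        except metadata.PackageNotFoundError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    gaps = missing_packages(REQUIRED)
    print("All dependencies installed" if not gaps else f"Missing: {gaps}")
```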
4. Download the Pre-trained Model

The repository includes the YOLO11 nano model file (yolo11n.pt) in the root directory. If it is not present, it will be downloaded automatically on the first run.
The model file is approximately 5.4MB. Ensure you have a stable internet connection for the initial download.
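A quick way to confirm the weights are present and not truncated; the min_bytes threshold here is an illustrative lower bound based on the ~5.4 MB figure above:

```python
from pathlib import Path

MODEL_PATH = Path("yolo11n.pt")

def model_present(path: Path, min_bytes=1_000_000):
    """Check the weights file exists and is plausibly complete (~5.4 MB expected)."""
    return path.is_file() and path.stat().st_size >= min_bytes

if __name__ == "__main__":
    if model_present(MODEL_PATH):
        print(f"Model found: {MODEL_PATH} ({MODEL_PATH.stat().st_size / 1e6:.1f} MB)")
    else:
        print("Model missing or truncated; it will be fetched on first run")
```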
5. Verify Installation

Test your installation by importing the main classifier:
from trash_classificator.processor import TrashClassificator

# Initialize the classifier
classificator = TrashClassificator()
print("Installation successful!")
You should see device information in the output (the device name will reflect your hardware):
Model is using device: NVIDIA GeForce RTX 3060
Installation successful!

Hardware Acceleration Setup

The system automatically detects and uses the best available hardware acceleration:
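The detection order can be sketched in a few lines; this is an illustrative helper, not the project's actual selection code:

```python
def pick_device():
    """Sketch of the auto-detection order: CUDA first, then MPS, then CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch installed: CPU-only fallback
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(f"Selected device: {pick_device()}")
```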

CUDA (NVIDIA GPUs)

1. Install NVIDIA Drivers

Download and install the latest NVIDIA GPU drivers from nvidia.com
2. Install the CUDA Toolkit

PyTorch wheels can be installed with bundled CUDA support; to install the CUDA 11.8 build explicitly:
pip install torch==2.5.0+cu118 torchvision==0.20.0+cu118 --index-url https://download.pytorch.org/whl/cu118
3. Verify CUDA

Check CUDA availability:
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")

MPS (Apple Silicon)

For Apple M1/M2 Macs, MPS (Metal Performance Shaders) is automatically detected:
import torch
print(f"MPS available: {torch.backends.mps.is_available()}")
MPS provides significant acceleration on Apple Silicon compared to CPU-only processing.

CPU Only

If no GPU is available, the system automatically falls back to CPU processing. While slower, it’s fully functional for testing and development.

VEX Robotics Setup (Optional)

For robotic arm integration:
1. Install VEX Software

Download and install VEXcode from vex.com
2. Configure the Serial Port

The system uses serial communication on port COM7 (Windows) or /dev/serial1 (Linux). Update the port in your configuration:
from examples.serial_com import CommunicationManager

comm = CommunicationManager(port='COM7', baudrate=115200)
3. Upload the VEX Program

Upload the controller program from arm_controller/src/main.py to your VEX Brain using VEXcode.

Troubleshooting

Import Errors

Ensure you’ve activated your virtual environment and installed all requirements:
source venv/bin/activate  # or venv\Scripts\activate on Windows
pip install -r requirements.txt
GPU Memory Errors

If you encounter GPU memory errors, try:
  • Reducing the image size parameter (imgsz)
  • Processing fewer frames per second
  • Using a smaller batch size
  • Falling back to CPU processing
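As an illustration of the first bullet, a hypothetical helper that steps imgsz down while keeping it a multiple of the model stride (32 for YOLO models):

```python
def scaled_imgsz(current, factor=0.5, floor=320, stride=32):
    """Suggest a smaller inference size, rounded down to a multiple of stride."""
    new = max(floor, int(current * factor))
    return new - (new % stride)

# e.g. retry prediction with imgsz=scaled_imgsz(640) instead of 640
print(scaled_imgsz(640))
```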
Camera Not Detected

Ensure your camera is properly connected:
import cv2
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    print("Cannot open camera")
cap.release()
Try different camera indices (0, 1, 2) if the default doesn’t work.
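The index probing can be automated; a small sketch that returns the first working index, or None when no camera (or OpenCV itself) is available:

```python
def find_camera(max_index=3):
    """Probe camera indices 0..max_index-1; return the first that opens, else None."""
    try:
        import cv2
    except ImportError:
        return None  # OpenCV not installed
    for idx in range(max_index):
        cap = cv2.VideoCapture(idx)
        opened = cap.isOpened()
        cap.release()
        if opened:
            return idx
    return None
```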
Missing Model File

The trained model should be at trash_classificator/segmentation/models/trash_segmentation_model_v2.pt. Ensure this file exists after cloning the repository.

Next Steps

  • Quick Start: run your first trash classification
  • Core Concepts: understand the system architecture
  • Training Guide: train custom models
  • API Reference: explore the API documentation

System Verification Checklist

After installation, verify these components:
  • Python 3.8+ installed
  • All pip dependencies installed successfully
  • PyTorch can detect your GPU (if available)
  • Camera/video source accessible
  • Model file present and loadable
  • Serial port configured (for robotics)
Start with CPU-only testing before configuring GPU acceleration to isolate any hardware-specific issues.
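The checklist above can be scripted; the module and file names below follow this guide (yolo11n.pt in the working directory), so adjust paths for your checkout:

```python
import importlib
import sys
from pathlib import Path

def run_checklist():
    """Walk the verification checklist; each entry maps to a pass/fail result."""
    results = {}
    results["python>=3.8"] = sys.version_info[:2] >= (3, 8)
    # Import names differ from pip names: opencv-python -> cv2, pyserial -> serial
    for mod in ("torch", "ultralytics", "cv2", "numpy", "serial", "matplotlib"):
        try:
            importlib.import_module(mod)
            results[mod] = True
        except ImportError:
            results[mod] = False
    results["model file"] = Path("yolo11n.pt").is_file()
    return results

if __name__ == "__main__":
    for name, ok in run_checklist().items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
```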
