
Architecture Overview

The robotic arm system consists of three main hardware components working together to create an autonomous pick-and-place robot with computer vision capabilities:

  • Raspberry Pi: High-level control, AI inference, and camera processing
  • VEX Brain: Real-time motor control and sensor management
  • Camera System: Object detection using YOLO models

System Architecture Diagram

Main Modules

Communication Module

The communication system enables real-time data exchange between the Raspberry Pi and VEX Brain using serial communication with JSON messaging.
Raspberry Pi Side
Location: arm_system/communication/serial_manager.py:16
Key Features:
  • Serial port configuration (default: /dev/ttyACM1 @ 115200 baud)
  • JSON message encoding/decoding
  • Asynchronous message reading with threading
  • Callback system for handling responses
  • Event-based synchronization for movements
Message Types:
  • check_service — Verify hardware status
  • safety_service — Execute safety protocols
  • scan_service — Perform environment scanning
  • pick_service / place_service — Object manipulation
  • get_angles — Query current joint positions
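The asynchronous-reading, callback, and event-synchronization features listed above can be sketched as follows. This is a minimal illustration of the pattern, not the actual `serial_manager.py` API; the class and method names here are hypothetical.

```python
import threading

class CommandSync:
    """Sketch of event-based synchronization: the sender blocks until the
    background reader thread's callback signals that a response arrived."""

    def __init__(self):
        self._done = threading.Event()
        self.response = None

    def on_response(self, message):
        # Invoked by the reader thread once a complete reply is parsed.
        self.response = message
        self._done.set()

    def wait_for_response(self, timeout=5.0):
        # Block the caller until the callback fires or the timeout expires.
        return self._done.wait(timeout)

# Simulate the reader thread delivering a reply 100 ms later.
sync = CommandSync()
threading.Timer(0.1, sync.on_response,
                args=({"type": "get_angles", "data": [0, 45, 90]},)).start()
ok = sync.wait_for_response()
```

The same pattern generalizes to movement commands: the main loop sends a command, then waits on the event that the response callback sets.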
VEX Brain Side
Location: arm_system/vex_brain/src/main.py:26
Key Features:
  • Serial communication via /dev/serial1
  • JSON message parsing and sending
  • Message buffering with newline delimiters
  • Real-time response to Raspberry Pi commands
Communication Protocol:
{
  "type": "scan_service",
  "data": {
    "speed": 20,
    "angle": 90,
    "distance": 150
  }
}
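Framing and parsing of these messages can be sketched without any hardware: each JSON object is serialized to one newline-terminated line, and the receiver splits its byte buffer on newlines, keeping any trailing partial frame. The function names below are illustrative, not the project's actual API.

```python
import json

def encode_message(msg: dict) -> bytes:
    """Serialize one command as a newline-delimited JSON frame."""
    return (json.dumps(msg) + "\n").encode("utf-8")

def split_frames(buffer: bytes):
    """Split a receive buffer into complete JSON messages plus the
    trailing partial frame still awaiting its newline delimiter."""
    *lines, rest = buffer.split(b"\n")
    messages = [json.loads(line) for line in lines if line.strip()]
    return messages, rest

frame = encode_message(
    {"type": "scan_service", "data": {"speed": 20, "angle": 90, "distance": 150}}
)
# Second frame is still incomplete, so it stays in the buffer.
messages, partial = split_frames(frame + b'{"type":"get_ang')
```

Newline delimiters let the receiver recover message boundaries even when serial reads return arbitrary chunk sizes.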

Perception Module

The perception system combines camera hardware, computer vision, and AI models to detect and classify objects.
The system uses YOLO11s models optimized with NCNN for efficient inference on Raspberry Pi.
Components:
  1. Camera Manager (arm_system/perception/vision/camera/main.py:5)
    • Captures images using OpenCV (cv2.VideoCapture)
    • Default resolution: 1280x720
    • Frame grabbing technique for image stability
    • Automatic image saving with timestamps
  2. Image Processor (arm_system/perception/vision/image_processing.py:8)
    • YOLO-based object detection inference
    • Configurable confidence threshold (default: 0.45)
    • Detects: apple, orange, bottle
    • Returns best detection with bounding box coordinates
    • Draws detection results on images
  3. Detection Model (arm_system/perception/vision/detection/main.py:15)
    • YOLO11s model with NCNN backend
    • Half-precision inference for speed
    • Input image size: 640x640
    • Confidence threshold: 0.55 during inference
Detection Workflow:
  1. Image Capture: Camera captures a frame when the VEX Brain detects an object via the distance sensor
  2. Object Detection: The YOLO model processes the image and returns detection results
  3. Classification: The system filters for target classes (apple, orange, bottle)
  4. Best Detection Selection: The highest-confidence detection is selected for the object
  5. Result Storage: Detection data (class, confidence, position) is sent back to the main robot controller
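Steps 3 and 4 of the workflow (class filtering and best-detection selection) can be sketched as a small pure function. The dictionary shape and function name here are assumptions for illustration, not the actual `image_processing.py` interface.

```python
TARGET_CLASSES = {"apple", "orange", "bottle"}

def select_best_detection(detections, conf_threshold=0.45):
    """Keep only target classes above the confidence threshold and
    return the single highest-confidence detection (or None)."""
    candidates = [
        d for d in detections
        if d["class"] in TARGET_CLASSES and d["confidence"] >= conf_threshold
    ]
    return max(candidates, key=lambda d: d["confidence"], default=None)

detections = [
    {"class": "apple", "confidence": 0.62, "box": (120, 80, 260, 240)},
    {"class": "person", "confidence": 0.91, "box": (0, 0, 640, 480)},  # not a target class
    {"class": "bottle", "confidence": 0.71, "box": (300, 90, 380, 310)},
]
best = select_best_detection(detections)
```

Note that the non-target "person" detection is discarded even though it has the highest raw confidence.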

Control Module

The control system manages all motors and actuators on the VEX Brain.
VEX Control Module (arm_system/vex_brain/src/main.py:169)
Hardware Configuration:

| Motor | Port | Function | Torque Limit |
| --- | --- | --- | --- |
| Base Motor | PORT1 | Rotates the arm base | 100% |
| Shoulder Motor | PORT2 | Controls shoulder joint | 95% |
| Elbow Motor | PORT3 | Controls elbow joint | 95% |
| Gripper Motor | PORT4 | Opens/closes gripper | 100% |
Key Features:
  • Inertial sensor-based base rotation (accurate angle control)
  • Torque-limited movement for safety
  • Current monitoring for grip detection
  • Synchronized multi-joint movements
Movement Control:
# Base rotation uses inertial sensor for precision
self.control.move_motor_to_angle(
    motor=self.control.base_motor,
    target=90,  # degrees
    speed=20    # RPM
)

Mapping Module

The mapping system builds a probabilistic occupancy grid of the environment.
Occupancy Grid (arm_system/mapping/occupancy_grid.py:5)
  • Grid size: 100x100 cells (configurable)
  • Resolution: 0.5 units per cell (configurable)
  • Cell values: -1 (unknown), 0-100 (occupancy probability)
  • Origin: Center of grid (50, 50)
Uses Bayesian inference to update cell occupancy:
  • Prior: Current cell probability
  • Sensor Model: 90% accuracy (configurable)
  • Posterior: Updated probability based on observation
Free cells in the sensor ray path are also updated to reduce false positives.
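The per-cell update follows directly from Bayes' rule with a symmetric sensor model. The sketch below assumes the stated 90% accuracy applies to both hit and miss readings; the function name is illustrative.

```python
def update_cell(prior: float, hit: bool, sensor_accuracy: float = 0.9) -> float:
    """Posterior occupancy probability after one observation.
    `hit` is True when the sensor reports the cell as occupied."""
    p_obs_given_occ = sensor_accuracy if hit else 1.0 - sensor_accuracy
    p_obs_given_free = 1.0 - sensor_accuracy if hit else sensor_accuracy
    evidence = p_obs_given_occ * prior + p_obs_given_free * (1.0 - prior)
    return p_obs_given_occ * prior / evidence

p = update_cell(0.5, hit=True)   # one positive reading from an unknown cell
p = update_cell(p, hit=True)     # a second reading sharpens the estimate
```

Repeated consistent observations push the probability toward 0 or 1, while a contradictory reading pulls it back toward uncertainty, which is what makes free-cell updates along the ray effective against false positives.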
The module converts polar coordinates (angle, distance) from the sensors to Cartesian grid coordinates:
world_x = robot_x + distance * cos(robot_theta + angle)
world_y = robot_y + distance * sin(robot_theta + angle)
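These formulas translate directly into code. The sketch below uses the grid parameters given above (0.5 units per cell, origin at cell (50, 50)); the world-to-cell index mapping is an assumption based on those stated parameters, not the module's confirmed implementation.

```python
import math

def polar_to_grid(robot_x, robot_y, robot_theta_deg, angle_deg, distance,
                  resolution=0.5, origin=(50, 50)):
    """Convert a sensor reading (angle, distance) in the robot frame to
    world coordinates, then to an occupancy-grid cell index."""
    theta = math.radians(robot_theta_deg + angle_deg)
    world_x = robot_x + distance * math.cos(theta)
    world_y = robot_y + distance * math.sin(theta)
    cell_x = origin[0] + int(round(world_x / resolution))
    cell_y = origin[1] + int(round(world_y / resolution))
    return (world_x, world_y), (cell_x, cell_y)

# A reading 10 units straight to the robot's left (90 degrees).
(wx, wy), cell = polar_to_grid(0.0, 0.0, 0.0, 90.0, 10.0)
```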

Data Flow

The system follows this operational flow for autonomous pick-and-place:

1. System Initialization

  1. Hardware Check: VEX Brain verifies all motors and sensors are installed
  2. Serial Connection: Raspberry Pi establishes a serial connection to the VEX Brain
  3. Camera Setup: Camera manager initializes the video capture device
  4. Model Loading: The YOLO detection model is loaded into memory

2. Environment Scanning

3. Pick and Place Operation

  1. Object Selection: User selects an object from the scanned list on the Raspberry Pi
  2. Base Rotation: VEX rotates the base to the object's angle using the inertial sensor
  3. Arm Extension: Shoulder and elbow motors extend toward the object based on distance
  4. Grip Detection: Gripper closes while monitoring motor current (threshold: 0.5A)
  5. Retraction: Arm lifts the object, using the bumper switch as a safety limit
  6. Transport: Base rotates to the placement zone (predefined angle per object class)
  7. Release: Gripper opens to release the object
  8. Return Home: Arm returns to the safe home position

4. Safety Protocol

The safety system continuously monitors for collision and overload conditions:
Bumper Switch: Triggers immediate stop and retraction if pressed during operation
Safety Features:
  • Torque limiting on shoulder and elbow (95%)
  • Current monitoring for grip force
  • Bumper switch collision detection
  • Timeout mechanisms (30s for scan, 20s for pick)
  • Emergency stop and safe retraction sequence
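The timeout mechanism can be sketched with `threading.Event`: the caller waits up to the service's limit and reports a timeout so the safety sequence can run. The function and table names here are illustrative, not the project's actual API.

```python
import threading

# Timeouts from the safety features above: 30 s for scan, 20 s for pick.
SERVICE_TIMEOUTS = {"scan_service": 30.0, "pick_service": 20.0}

def run_with_timeout(service: str, done: threading.Event, limit=None) -> str:
    """Wait for the service's completion event; report a timeout so the
    caller can trigger the emergency-stop / retraction sequence."""
    if limit is None:
        limit = SERVICE_TIMEOUTS.get(service, 10.0)
    return "ok" if done.wait(limit) else "timeout"

done = threading.Event()
done.set()  # simulate the VEX Brain reporting completion in time
status = run_with_timeout("pick_service", done)
```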

Hardware-Software Integration

Sensor Integration

| Sensor | VEX Port | Purpose | Range |
| --- | --- | --- | --- |
| Inertial Sensor | Default | Base angle measurement | 0-360° |
| Base Distance | PORT9 | Object detection during scan | 50-345mm |
| Gripper Distance | PORT7 | Pick operation guidance | 0-40mm |
| Bumper Switch | PORT10 | Collision detection | Boolean |
| Touchled | PORT8 | Visual status indicator | RGB |

Message Protocol

All communication uses JSON over serial with newline delimiters:
{
  "type": "pick_service",
  "data": {
    "joint": "base",
    "angle": 90,
    "speed": 30
  }
}

Object Placement Zones

The system uses predefined placement zones based on object class:
placement_zones = {
    'apple': {'angle': 90, 'distance': 200},
    'orange': {'angle': 180, 'distance': 200},
    'bottle': {'angle': 45, 'distance': 200},
    'default': {'angle': 270, 'distance': 200}
}
Each detected object is automatically assigned to its corresponding zone during the scan phase.
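Zone assignment then reduces to a dictionary lookup with the 'default' entry as fallback. A minimal sketch (the helper function name is an illustration, not the project's actual API):

```python
placement_zones = {
    'apple': {'angle': 90, 'distance': 200},
    'orange': {'angle': 180, 'distance': 200},
    'bottle': {'angle': 45, 'distance': 200},
    'default': {'angle': 270, 'distance': 200},
}

def zone_for(object_class: str) -> dict:
    """Return the placement zone for a detected class, falling back to
    'default' for anything outside the known set."""
    return placement_zones.get(object_class, placement_zones['default'])

zone = zone_for('orange')     # known class: its own zone
unknown = zone_for('banana')  # unknown class: falls back to the default zone
```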

Next Steps

  • Installation Guide: Set up the software environment on your Raspberry Pi
  • Hardware Setup: Learn about the physical components and assembly
  • Communication Details: Deep dive into the serial communication protocol
  • Quick Start Tutorial: Run your first autonomous operation
