
Overview

The OccupancyGrid class implements a probabilistic occupancy grid map for representing the robot’s environment. It uses Bayesian updates to maintain obstacle probabilities based on sensor data. Source: arm_system/mapping/occupancy_grid.py:5

Class Definition

OccupancyGrid

class OccupancyGrid:
    def __init__(self, width: int = 100, height: int = 100, resolution: float = 0.5)
Initializes a probabilistic occupancy grid map.
  • width (int, default: 100): Grid width in cells
  • height (int, default: 100): Grid height in cells
  • resolution (float, default: 0.5): Cell resolution in world units (e.g., meters per cell)

Attributes

  • grid (np.ndarray): Occupancy grid array (width × height)
    • Values 0-100: Probability of cell being occupied (%)
    • Value -1: Unknown (never observed)
  • origin (tuple): Grid origin coordinates (center of grid)
Grid Representation:
  • 0 - Definitely free
  • 50 - Unknown/uncertain
  • 100 - Definitely occupied
  • -1 - Never observed
Source: arm_system/mapping/occupancy_grid.py:6

Methods

update_cell

def update_cell(self, x: int, y: int, occupied: bool, sensor_accuracy: float = 0.9)
Updates a single cell using Bayesian probabilistic sensor model.
  • x (int, required): Grid X coordinate
  • y (int, required): Grid Y coordinate
  • occupied (bool, required): True if sensor detected obstacle, False if free space
  • sensor_accuracy (float, default: 0.9): Sensor accuracy/precision (0.0 to 1.0)
Algorithm:
  1. First observation: Sets to 100 (occupied) or 50 (free)
  2. Subsequent observations: Applies Bayesian update
    • Prior: Current probability
    • Likelihood: Sensor accuracy
    • Posterior: Updated probability
Bayesian Update Formulas:
If occupied:
posterior = (accuracy × prior) / (accuracy × prior + (1 - accuracy) × (1 - prior))
If free:
posterior = ((1 - accuracy) × prior) / ((1 - accuracy) × prior + accuracy × (1 - prior))
Source: arm_system/mapping/occupancy_grid.py:27
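The update formulas above can be checked numerically with a standalone sketch, independent of the class. Note this works in the [0, 1] probability range rather than the class's 0-100 integer scale; `bayes_update` is an illustrative helper, not part of the library API.

```python
def bayes_update(prior: float, accuracy: float, occupied: bool) -> float:
    """Posterior occupancy probability after one sensor reading."""
    if occupied:
        num = accuracy * prior
        den = num + (1 - accuracy) * (1 - prior)
    else:
        num = (1 - accuracy) * prior
        den = num + accuracy * (1 - prior)
    return num / den

p = 0.5                                   # uncertain prior
p = bayes_update(p, 0.9, occupied=True)   # 0.9 after the first hit
p = bayes_update(p, 0.9, occupied=True)   # ~0.988 after a second hit
p = bayes_update(p, 0.9, occupied=False)  # a miss pulls it back to ~0.9
```

Repeated consistent observations drive the probability toward 0 or 1, while a single contradictory reading only partially reverses the estimate.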

Example

grid = OccupancyGrid(width=100, height=100, resolution=0.5)

# Update cell as occupied (90% confidence)
grid.update_cell(50, 50, occupied=True, sensor_accuracy=0.9)

# Update same cell again (increases confidence)
grid.update_cell(50, 50, occupied=True, sensor_accuracy=0.9)

# Update as free (decreases occupancy probability)
grid.update_cell(50, 50, occupied=False, sensor_accuracy=0.9)

world_to_grid

def world_to_grid(self, world_x: float, world_y: float) -> tuple[int, int]
Converts world coordinates to grid coordinates.
  • world_x (float, required): X coordinate in world frame
  • world_y (float, required): Y coordinate in world frame
Returns:
  • grid_coords (tuple[int, int]): (grid_x, grid_y) coordinates in grid frame
Transformation:
grid_x = int(world_x / resolution) + origin[0]
grid_y = int(world_y / resolution) + origin[1]
Source: arm_system/mapping/occupancy_grid.py:52

Example

grid = OccupancyGrid(width=100, height=100, resolution=0.5)

# Convert world coordinates (meters) to grid
grid_x, grid_y = grid.world_to_grid(2.5, 1.0)
print(f"World (2.5, 1.0) -> Grid ({grid_x}, {grid_y})")

update_from_scan

def update_from_scan(self, robot_pose: Tuple[float, float, float], scan_data: List[Dict[str, float]])
Updates map using complete scan data from robot sensors.
  • robot_pose (Tuple[float, float, float], required): Robot pose (x, y, theta) in world coordinates:
    • x: X position
    • y: Y position
    • theta: Heading angle in radians
  • scan_data (List[Dict[str, float]], required): List of sensor readings, each containing:
    • inertial_angle: Sensor angle in radians
    • base_distance: Distance to obstacle in world units
Process:
  1. For each scan point:
    • Converts polar coordinates (angle, distance) to Cartesian (x, y)
    • Transforms to world frame using robot pose
    • Converts to grid coordinates
    • Marks endpoint as occupied
    • Traces ray from robot to endpoint, marking intermediate cells as free
Source: arm_system/mapping/occupancy_grid.py:58
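The per-point conversion in the process above can be sketched as a standalone function. This assumes the scan angle is measured relative to the robot heading; if inertial_angle is already a world-frame angle, the theta offset would be dropped. `scan_point_to_grid` is an illustrative helper, not a library method.

```python
import math

def scan_point_to_grid(robot_pose, angle, distance, resolution, origin):
    """Convert one polar scan reading to grid coordinates (illustrative sketch)."""
    x, y, theta = robot_pose
    # Polar -> Cartesian, transformed into the world frame via the robot pose
    wx = x + distance * math.cos(theta + angle)
    wy = y + distance * math.sin(theta + angle)
    # World -> grid, mirroring the world_to_grid transform
    gx = int(wx / resolution) + origin[0]
    gy = int(wy / resolution) + origin[1]
    return gx, gy

# Robot at the origin facing +X; obstacle 1.5 units straight ahead
scan_point_to_grid((0.0, 0.0, 0.0), 0.0, 1.5, 0.5, (100, 100))  # (103, 100)
```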

Example

import numpy as np

grid = OccupancyGrid(width=200, height=200, resolution=0.05)

# Robot at origin, facing forward
robot_pose = (0.0, 0.0, 0.0)

# Scan data from sensors
scan_data = [
    {'inertial_angle': 0.0, 'base_distance': 1.5},
    {'inertial_angle': 0.785, 'base_distance': 2.0},  # 45 degrees
    {'inertial_angle': 1.571, 'base_distance': 1.2},  # 90 degrees
]

# Update map with scan
grid.update_from_scan(robot_pose, scan_data)

# Visualize
import matplotlib.pyplot as plt
plt.imshow(grid.grid, cmap='gray', origin='lower')
plt.show()

_mark_free_cells

def _mark_free_cells(self, x0: int, y0: int, x1: int, y1: int)
Marks all cells along the ray from (x0, y0) to (x1, y1) as free using Bresenham’s line algorithm.
  • x0 (int, required): Start X coordinate (robot position)
  • y0 (int, required): Start Y coordinate (robot position)
  • x1 (int, required): End X coordinate (obstacle position)
  • y1 (int, required): End Y coordinate (obstacle position)
Algorithm: Bresenham’s line algorithm for efficient ray tracing
Source: arm_system/mapping/occupancy_grid.py:84
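A minimal Bresenham implementation sketches the ray tracing; this is the standard integer-error formulation, not necessarily the library's exact code.

```python
def bresenham(x0: int, y0: int, x1: int, y1: int) -> list:
    """Return the integer cells along the line from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1  # step direction in x
    sy = 1 if y0 < y1 else -1  # step direction in y
    err = dx - dy
    cells = []
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

bresenham(0, 0, 3, 3)  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```

In _mark_free_cells, every cell on this line except the endpoint would be updated as free, since the ray passed through it without hitting an obstacle.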

Complete Example

import numpy as np
from arm_system.mapping.occupancy_grid import OccupancyGrid
import matplotlib.pyplot as plt

# Initialize map
# 20m × 20m area with 10cm resolution
grid = OccupancyGrid(width=200, height=200, resolution=0.1)

# Simulate robot movement and scanning
robot_poses = [
    (0.0, 0.0, 0.0),           # Start
    (1.0, 0.0, 0.0),           # Move forward 1m
    (2.0, 0.0, np.pi/4),       # Move and turn
    (3.0, 1.0, np.pi/2),       # Move and turn more
]

# Simulate scan at each pose
for pose in robot_poses:
    # Generate scan data (simulate 180-degree scan)
    scan_data = []
    for angle in np.linspace(-np.pi/2, np.pi/2, 20):
        distance = 2.0 + np.random.normal(0, 0.1)  # 2m ± noise
        scan_data.append({
            'inertial_angle': angle,
            'base_distance': distance
        })
    
    # Update map
    grid.update_from_scan(pose, scan_data)

# Visualize map
plt.figure(figsize=(10, 10))
plt.imshow(grid.grid, cmap='gray', origin='lower', vmin=-1, vmax=100)
plt.colorbar(label='Occupancy Probability (%)')
plt.title('Occupancy Grid Map')
plt.xlabel('Grid X')
plt.ylabel('Grid Y')
plt.show()

# Get occupancy at specific location
world_x, world_y = 2.0, 1.0
grid_x, grid_y = grid.world_to_grid(world_x, world_y)
occupancy = grid.grid[grid_y, grid_x]
print(f"Occupancy at ({world_x}, {world_y}): {occupancy}%")

Use Cases

Path Planning

# Find free path through grid
def is_cell_free(grid, x, y, threshold=30):
    if grid.grid[y, x] == -1:  # Unknown cells are optimistically treated as free
        return True
    return grid.grid[y, x] < threshold

Object Detection

# Find occupied regions
occupied_cells = np.argwhere(grid.grid > 70)
print(f"Found {len(occupied_cells)} occupied cells")

Map Export

# Export as image
from PIL import Image
# Casting -1 (unknown) directly to uint8 would wrap to 255, so map unknown
# cells to mid-gray and scale 0-100 occupancy to the 0-255 pixel range
img_data = np.where(grid.grid == -1, 127, grid.grid * 255 // 100)
img = Image.fromarray(img_data.astype(np.uint8))
img.save('occupancy_map.png')

Coordinate Systems

World Frame

  • Origin: Robot starting position
  • Units: Meters (or specified world units)
  • Axes: X (forward), Y (left)

Grid Frame

  • Origin: Center of grid (width//2, height//2)
  • Units: Cells
  • Axes: X (columns), Y (rows)

Transformation

grid_coords = (world_coords / resolution) + origin
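The transform and its approximate inverse can be sketched as standalone functions. The defaults here assume a 100×100 grid at 0.5 resolution with the origin at the grid center; `grid_to_world` is an illustrative helper, not a documented library method.

```python
def world_to_grid(wx: float, wy: float, resolution=0.5, origin=(50, 50)):
    """Forward transform, mirroring the formula above.

    Note: int() truncates toward zero, so negative world coordinates
    round toward the origin rather than flooring.
    """
    return int(wx / resolution) + origin[0], int(wy / resolution) + origin[1]

def grid_to_world(gx: int, gy: int, resolution=0.5, origin=(50, 50)):
    """Approximate inverse: returns the cell's corner in world units."""
    return (gx - origin[0]) * resolution, (gy - origin[1]) * resolution

world_to_grid(2.5, 1.0)   # (55, 52)
grid_to_world(55, 52)     # (2.5, 1.0)
```

The round trip is exact only for world coordinates that land on cell boundaries; in general, grid_to_world recovers a position within one cell of the original.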

Performance Considerations

  • Grid Size: Larger grids consume more memory (width × height × 1 byte)
  • Resolution: Finer resolution improves accuracy but increases memory usage
  • Ray Tracing: O(distance) complexity per scan point
  • Bayesian Update: O(1) per cell update
Recommended Settings:
  • Small indoor area (10m × 10m): 200×200 cells, 0.05m resolution
  • Large warehouse: 400×400 cells, 0.1m resolution
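A quick memory estimate for the recommended settings, assuming one byte per cell (e.g. int8 storage); if the grid array uses NumPy's default float64 dtype, multiply by 8.

```python
def grid_memory_kib(width: int, height: int, bytes_per_cell: int = 1) -> float:
    """Raw grid array footprint in KiB (illustrative helper)."""
    return width * height * bytes_per_cell / 1024

print(grid_memory_kib(200, 200))  # 39.0625 KiB  -- small indoor area
print(grid_memory_kib(400, 400))  # 156.25 KiB   -- large warehouse
```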
