Overview
The Mapping Module (mapping/occupancy_grid.py) provides spatial awareness through a probabilistic occupancy grid. It uses Bayesian inference to fuse sensor measurements and maintain a map of the robot’s environment.
OccupancyGrid Class
Initialization (occupancy_grid.py:5-25)
class OccupancyGrid:
    def __init__(self, width: int = 100, height: int = 100, resolution: float = 0.5):
        """
        Initialize occupancy grid
        args:
            width: grid width in cells
            height: grid height in cells
            resolution: meters per cell
        """
        self.width = width
        self.height = height
        self.resolution = resolution
        # Initialize grid
        # 0-100 -> probability of cell being occupied
        # -1 -> cell is unknown
        # Shape is (height, width) because cells are indexed as grid[y, x]
        self.grid = np.full((self.height, self.width), -1, dtype=np.int8)
        # Map origin (center)
        self.origin = (width // 2, height // 2)
Grid Properties
| Parameter | Default | Description |
| --- | --- | --- |
| width | 100 | Grid width in cells |
| height | 100 | Grid height in cells |
| resolution | 0.5 | Meters per cell |
| origin | (50, 50) | Center point in grid coordinates |
Cell Value Encoding
- -1: Unknown (never observed)
- 0-100: Occupancy probability (0% to 100%)
The grid uses np.int8 for memory efficiency. A 100×100 grid only requires 10KB of memory.
Coordinate Systems
World vs Grid Coordinates
def world_to_grid(self, world_x: float, world_y: float) -> tuple[int, int]:
    """Convert coordinates from world to grid"""
    grid_x = int(world_x / self.resolution) + self.origin[0]
    grid_y = int(world_y / self.resolution) + self.origin[1]
    return grid_x, grid_y

The conversion is:

grid_x = int(world_x / resolution) + origin_x
grid_y = int(world_y / resolution) + origin_y

Note that int() truncates toward zero rather than flooring, so negative world coordinates are biased toward the origin by one cell; use math.floor() if symmetric rounding is required.
Example
grid = OccupancyGrid(width=100, height=100, resolution=0.5)

# Robot at 1 meter right, 0.5 meters forward
world_x, world_y = 1.0, 0.5

# Convert to grid
grid_x, grid_y = grid.world_to_grid(world_x, world_y)
# grid_x = int(1.0 / 0.5) + 50 = 52
# grid_y = int(0.5 / 0.5) + 50 = 51
The origin is at the grid center (50, 50), so world (0, 0) maps to grid (50, 50).
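The module does not include the inverse mapping; a minimal sketch of one (grid_to_world is a hypothetical helper, with origin and resolution mirroring the class defaults) might look like:

```python
def grid_to_world(grid_x: int, grid_y: int,
                  origin: tuple[int, int] = (50, 50),
                  resolution: float = 0.5) -> tuple[float, float]:
    """Invert world_to_grid: return the world coordinates of a cell's corner.

    Hypothetical helper, not part of the module; defaults mirror the class.
    """
    world_x = (grid_x - origin[0]) * resolution
    world_y = (grid_y - origin[1]) * resolution
    return world_x, world_y

# Round-trips the example above: grid (52, 51) -> world (1.0, 0.5)
print(grid_to_world(52, 51))
```

Because world_to_grid truncates within a cell, the round trip recovers the cell's corner rather than the exact original point.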
Probabilistic Sensor Model
Bayesian Update (occupancy_grid.py:27-50)
def update_cell(self, x: int, y: int, occupied: bool, sensor_accuracy: float = 0.9):
    """
    Update cell in grid using probabilistic sensor model
    args:
        x, y: grid coordinates of the cell
        occupied: True if the sensor detected an obstacle
        sensor_accuracy: probability that a reading is correct
    """
    if not (0 <= x < self.width and 0 <= y < self.height):
        return
    current = self.grid[y, x]
    if current == -1:
        # First observation: seed with the sensor's confidence rather than
        # certainty, so contradictory evidence can still move the cell later
        self.grid[y, x] = int(sensor_accuracy * 100) if occupied \
            else int((1 - sensor_accuracy) * 100)
    else:
        # Bayesian update
        prior = current / 100.0
        if occupied:
            posterior = (sensor_accuracy * prior) / \
                (sensor_accuracy * prior +
                 (1 - sensor_accuracy) * (1 - prior))
        else:
            posterior = ((1 - sensor_accuracy) * prior) / \
                ((1 - sensor_accuracy) * prior +
                 sensor_accuracy * (1 - prior))
        # Clamp away from 0 and 100: a prior of exactly 0 or 1 is absorbing
        # under Bayes' rule and could never be revised
        self.grid[y, x] = int(np.clip(posterior * 100, 1, 99))
Bayes’ Rule Application
For Occupied Observation:
P(occupied|detected) = P(detected|occupied) × P(occupied) / P(detected)
where:
P(detected|occupied) = sensor_accuracy
P(occupied) = prior
P(detected) = sensor_accuracy × prior + (1 - sensor_accuracy) × (1 - prior)
For Free Observation:
P(occupied|not_detected) = P(not_detected|occupied) × P(occupied) / P(not_detected)
where:
P(not_detected|occupied) = 1 - sensor_accuracy
P(occupied) = prior
P(not_detected) = (1 - sensor_accuracy) × prior + sensor_accuracy × (1 - prior)
Convergence Behavior
Example Convergence
With sensor_accuracy=0.9:
| Observation | Prior | Posterior |
| --- | --- | --- |
| 1st: Occupied | -1 (unknown) | 90 |
| 2nd: Occupied | 90 | 98 |
| 3rd: Free | 98 | 84 |
| 4th: Free | 84 | 36 |
| 5th: Free | 36 | 5 |
Repeated consistent observations drive the probability toward certainty (0% or 100%).
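The convergence can be traced with a standalone sketch of the update rule (bayes_update restates the algebra of update_cell; it assumes the first occupied observation seeds the cell at int(sensor_accuracy * 100) and that posteriors are truncated to integer percentages, as in the class):

```python
def bayes_update(prior: float, occupied: bool, acc: float = 0.9) -> float:
    """One Bayesian occupancy update (same algebra as update_cell)."""
    if occupied:
        return acc * prior / (acc * prior + (1 - acc) * (1 - prior))
    return (1 - acc) * prior / ((1 - acc) * prior + acc * (1 - prior))

# Assumes the first 'occupied' observation seeds the cell at int(0.9 * 100)
cell = 90
for obs in (True, False, False, False):
    cell = int(bayes_update(cell / 100.0, obs) * 100)
    print(cell)  # 98, 84, 36, 5
```

Note how a single contradicting observation moves a confident cell only modestly, while a run of them drives it rapidly toward zero.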
Scan Data Integration
Update from Scan (occupancy_grid.py:58-82)
def update_from_scan(self, robot_pose: Tuple[float, float, float],
                     scan_data: List[Dict[str, float]]):
    """
    Update map using scan data
    args:
        robot_pose: (x, y, theta) in world coordinates
        scan_data: list of sensor readings (distance, angle)
    """
    robot_x, robot_y, robot_theta = robot_pose
    robot_grid_x, robot_grid_y = self.world_to_grid(robot_x, robot_y)
    for scan in scan_data:
        angle = scan['inertial_angle']
        distance = scan['base_distance']
        # Convert coordinates: polar to cartesian
        world_x = robot_x + distance * np.cos(robot_theta + angle)
        world_y = robot_y + distance * np.sin(robot_theta + angle)
        grid_x, grid_y = self.world_to_grid(world_x, world_y)
        # Mark the cells along the ray as free first; the ray includes the
        # endpoint, so the occupied update must come after it
        self._mark_free_cells(robot_grid_x, robot_grid_y, grid_x, grid_y)
        self.update_cell(grid_x, grid_y, occupied=True)
Polar to Cartesian Conversion
world_x = robot_x + distance * cos(robot_theta + angle)
world_y = robot_y + distance * sin(robot_theta + angle)
Coordinate Frame
Y (Forward)
^
|
| θ (robot heading)
| /
| /
| /
|/________> X (Right)
Robot
Scan Data Structure
scan_data = [
    {
        'inertial_angle': 0.785,  # radians (45°)
        'base_distance': 1.5      # meters
    },
    {
        'inertial_angle': 1.571,  # radians (90°)
        'base_distance': 2.0      # meters
    }
]
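Plugging the first reading above into the polar-to-cartesian conversion (robot at the origin with heading θ = 0) gives:

```python
import numpy as np

robot_x, robot_y, robot_theta = 0.0, 0.0, 0.0
angle, distance = 0.785, 1.5  # the 45° reading above

world_x = robot_x + distance * np.cos(robot_theta + angle)
world_y = robot_y + distance * np.sin(robot_theta + angle)
print(round(world_x, 2), round(world_y, 2))  # 1.06 1.06
```

At 45° the x and y offsets are equal, as expected.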
Ray Tracing
Bresenham’s Line Algorithm (occupancy_grid.py:84-111)
def _mark_free_cells(self, x0: int, y0: int, x1: int, y1: int):
    """
    Mark cells in the path as free
    args:
        x0, y0: start coordinates
        x1, y1: end coordinates
    """
    dx = abs(x1 - x0)
    dy = abs(y1 - y0)
    x = x0
    y = y0
    n = 1 + dx + dy
    x_inc = 1 if x1 > x0 else -1
    y_inc = 1 if y1 > y0 else -1
    error = dx - dy
    dx *= 2
    dy *= 2
    for _ in range(n):
        self.update_cell(x, y, occupied=False)
        if error > 0:
            x += x_inc
            error -= dy
        else:
            y += y_inc
            error += dx
Ray Tracing Visualization
Bresenham’s algorithm efficiently finds all grid cells intersected by the ray from robot to obstacle.
Why Ray Tracing?
If a sensor detects an obstacle at distance d, all cells between the robot and the obstacle must be free (otherwise the sensor couldn’t see through them).
# Robot at (50, 50), obstacle at (55, 50)
# Trace the ray free first: marks (50,50) through (55,50) as free
self._mark_free_cells(robot_grid_x, robot_grid_y, grid_x, grid_y)
# Then mark the endpoint (55, 50) as occupied, overriding the free update
self.update_cell(grid_x, grid_y, occupied=True)
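To see exactly which cells the traversal visits, here is a standalone copy of the loop (trace_cells is a hypothetical helper, not part of the module) that collects cells instead of updating them:

```python
def trace_cells(x0: int, y0: int, x1: int, y1: int) -> list[tuple[int, int]]:
    """Cells visited by the grid traversal used in _mark_free_cells."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    x, y = x0, y0
    n = 1 + dx + dy
    x_inc = 1 if x1 > x0 else -1
    y_inc = 1 if y1 > y0 else -1
    error = dx - dy
    dx *= 2
    dy *= 2
    cells = []
    for _ in range(n):
        cells.append((x, y))
        if error > 0:
            x += x_inc
            error -= dy
        else:
            y += y_inc
            error += dx
    return cells

print(trace_cells(50, 50, 55, 50))
# [(50, 50), (51, 50), (52, 50), (53, 50), (54, 50), (55, 50)]
```

Note that the endpoint cell is included in the trace, which is why the occupied update must follow the free trace.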
Complete Usage Example
Initialization
from mapping.occupancy_grid import OccupancyGrid
import numpy as np

# Create 50m × 50m map with 0.5m resolution
grid = OccupancyGrid(width=100, height=100, resolution=0.5)
Single Scan Update
# Robot pose: x=0, y=0, facing east (θ=0)
robot_pose = (0.0, 0.0, 0.0)

# Scan data from robot
scan_data = [
    {'inertial_angle': 0.0, 'base_distance': 2.0},        # Obstacle 2m ahead
    {'inertial_angle': np.pi / 4, 'base_distance': 1.5},  # Obstacle 1.5m at 45°
]

# Update grid
grid.update_from_scan(robot_pose, scan_data)
Querying the Grid
# Check if a world position is occupied
world_x, world_y = 2.0, 0.0
grid_x, grid_y = grid.world_to_grid(world_x, world_y)
occupancy = grid.grid[grid_y, grid_x]

if occupancy == -1:
    print("Unknown")
elif occupancy > 50:
    print(f"Likely occupied ({occupancy}% confidence)")
else:
    print(f"Likely free ({100 - occupancy}% confidence)")
Visualization Example
import matplotlib.pyplot as plt

# Visualize grid
plt.imshow(grid.grid, cmap='gray', origin='lower')
plt.colorbar(label='Occupancy Probability')
plt.title('Occupancy Grid')
plt.xlabel('Grid X')
plt.ylabel('Grid Y')
plt.show()
Integration with Robot System
While not currently used in main.py, the occupancy grid can be integrated:
class Robot:
    def __init__(self):
        self.serial_manager = CommunicationManager(...)
        # Add occupancy grid
        self.map = OccupancyGrid(
            width=200,       # 100m × 100m
            height=200,
            resolution=0.5   # 50cm per cell
        )

    def update_map_from_scan(self):
        """Update map after scanning"""
        # Get robot pose
        robot_pose = (0.0, 0.0, 0.0)  # Would come from localization

        # Convert scan results to map updates
        scan_data = []
        for obj in self.scan_results:
            angle_deg = obj['position']['angle']
            distance_mm = obj['position']['distance']

            # Convert to radians and meters
            angle_rad = np.radians(angle_deg)
            distance_m = distance_mm / 1000.0

            scan_data.append({
                'inertial_angle': angle_rad,
                'base_distance': distance_m
            })

        # Update occupancy grid
        self.map.update_from_scan(robot_pose, scan_data)
Memory Usage
# Grid size calculation
bytes_per_cell = 1  # np.int8
memory_bytes = width * height * bytes_per_cell

# Example: 100 × 100 grid
# memory = 100 * 100 * 1 = 10,000 bytes ≈ 10 KB
Computational Complexity
| Operation | Complexity | Notes |
| --- | --- | --- |
| update_cell | O(1) | Single cell update |
| world_to_grid | O(1) | Simple arithmetic |
| _mark_free_cells | O(d) | d = distance in cells |
| update_from_scan | O(n×d) | n = scan points, d = avg distance |
Optimization Tips
For large grids or frequent updates, consider:
- Using sparse representations (only store observed cells)
- Implementing multi-resolution grids (coarse + fine)
- Limiting ray tracing distance
Probabilistic Properties
Sensor Accuracy Impact
| Accuracy | Convergence Speed | False Positive Rate |
| --- | --- | --- |
| 0.99 | Very fast (2-3 obs) | Very low |
| 0.90 | Fast (4-5 obs) | Low |
| 0.70 | Slow (10+ obs) | Moderate |
| 0.50 | Never converges | High |
With sensor_accuracy=0.5, the sensor is no better than random guessing, and the map will not converge.
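The degenerate case falls out of Bayes' rule directly: at accuracy 0.5 the likelihoods cancel and the posterior equals the prior. A quick check (bayes_update is a standalone restatement of the occupied-branch update in update_cell):

```python
def bayes_update(prior: float, acc: float) -> float:
    """Posterior occupancy after an 'occupied' reading."""
    return acc * prior / (acc * prior + (1 - acc) * (1 - prior))

print(bayes_update(0.3, 0.5))  # prior unchanged: still 0.3
print(bayes_update(0.3, 0.9))  # informative sensor: jumps to ~0.79
```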
Handling Uncertainty
# Check confidence before making decisions
if grid.grid[y, x] > 80:      # >80% occupied
    # High confidence obstacle
    avoid_cell(x, y)
elif grid.grid[y, x] < 20:    # <20% occupied
    # High confidence free
    plan_through_cell(x, y)
else:
    # Uncertain - gather more data
    request_additional_scan(x, y)
Advanced Topics
Multi-Sensor Fusion
The Bayesian update naturally fuses data from multiple sensors:
# Update from ultrasonic sensor
grid.update_cell(x, y, occupied=True, sensor_accuracy=0.85)

# Update same cell from lidar
grid.update_cell(x, y, occupied=True, sensor_accuracy=0.95)

# Result: higher confidence due to agreement
Decay Over Time
Implement temporal decay for dynamic environments:
def decay_grid(self, decay_rate=0.01):
    """Move observed cells back toward 50% over time"""
    for y in range(self.height):
        for x in range(self.width):
            if self.grid[y, x] != -1:
                current = self.grid[y, x]
                # Move toward 50 (unknown)
                self.grid[y, x] = int(current + decay_rate * (50 - current))
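The nested loops above run per cell in Python; for large grids the same decay can be sketched as a single vectorized NumPy operation (decay_grid_vectorized is a hypothetical replacement, not part of the module):

```python
import numpy as np

def decay_grid_vectorized(grid: np.ndarray, decay_rate: float = 0.01) -> np.ndarray:
    """Move observed cells toward 50; leave unknown cells (-1) untouched."""
    decayed = (grid + decay_rate * (50 - grid)).astype(np.int8)
    return np.where(grid == -1, grid, decayed)

g = np.array([[90, -1], [10, 50]], dtype=np.int8)
print(decay_grid_vectorized(g))
```

The np.where mask preserves unknown cells, which the decay formula would otherwise pull toward 50 like any observed cell.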
Next Steps
- Overview: see how mapping fits into the overall architecture
- Control: learn how control uses spatial information