Simulation Environments
LeRobot provides a comprehensive suite of simulation environments for training and evaluating robotic policies. Simulation enables rapid iteration, scalable data collection, and reproducible benchmarking before deploying to real robots.

Why Simulation?
Simulation environments are essential for:
- Rapid prototyping: Test ideas quickly without hardware constraints
- Scalable data collection: Generate large datasets efficiently
- Reproducible evaluation: Benchmark policies with standardized tasks
- Safe exploration: Train policies without risking physical damage
- Parallel training: Run multiple environments simultaneously for faster learning
Available Environments
LeRobot integrates several popular simulation platforms:

GPU-Accelerated Environments
- LeIsaac: IsaacLab-based everyday manipulation tasks with SO101 robots
  - Teleoperation workflows for data collection
  - Single-arm and bi-arm tasks
  - Everyday skills: picking, lifting, cleaning, folding
- NVIDIA IsaacLab Arena: High-fidelity humanoid manipulation
  - GR1, G1, and Galileo humanoid robots
  - RTX rendering for vision-based policies
  - Massively parallel GPU rollouts
Benchmark Environments
- LIBERO: Lifelong learning benchmark with 130 tasks
  - Five task suites focusing on spatial, object, and goal reasoning
  - Long-horizon manipulation tasks
  - Knowledge transfer evaluation
- MetaWorld: Multi-task reinforcement learning benchmark
  - 50 diverse tabletop manipulation tasks
  - Standardized difficulty splits (easy, medium, hard)
  - Generalization evaluation
Using Environments from the Hub
LeRobot’s EnvHub provides one-line environment loading from the HuggingFace Hub:
- Instant environment sharing and reproducibility
- Version control for environments
- Community-contributed tasks
- Zero-setup environment loading
Environment Integration
All LeRobot simulation environments follow a unified interface.

Observation Structure
LeRobot uses a consistent observation format across environments.

Action Space
Actions are continuous control commands:
- Format: torch.Tensor or np.ndarray
- Range: Typically normalized to [-1, 1]
- Shape: Environment-specific (e.g., 7-DoF for LIBERO, 4-DoF for MetaWorld)
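An illustrative observation/action pair in this format is sketched below. The keys follow LeRobot's common `observation.*` naming pattern, but exact keys, shapes, and camera names are environment-specific assumptions here.

```python
import numpy as np

# Illustrative observation dict (keys and shapes are examples, not a spec).
observation = {
    "observation.state": np.zeros(7, dtype=np.float32),                # proprioceptive state
    "observation.images.top": np.zeros((3, 224, 224), dtype=np.float32),  # (C, H, W) camera frame
}

# A 7-DoF action (LIBERO-style), clipped to the normalized [-1, 1] range.
action = np.clip(np.random.default_rng(0).normal(size=7), -1.0, 1.0)
```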
Training with Simulation
LeRobot provides integrated training loops that combine simulation environments with policy learning:
- Online evaluation: Automatically evaluate during training
- Multi-suite support: Train on multiple task suites simultaneously
- Flexible scheduling: Control evaluation frequency and episode counts
- Automatic checkpointing: Save best models based on success rate
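The loop behind these features can be sketched schematically as follows. This is plain illustrative Python, not LeRobot's actual trainer API; `evaluate` here is a stand-in for rolling out the policy in simulation.

```python
import random

def evaluate(policy, n_episodes):
    # Stand-in success-rate computation: fraction of episodes the policy solves.
    return sum(policy() for _ in range(n_episodes)) / n_episodes

def train(policy, num_steps=100, eval_freq=25, eval_episodes=20):
    best_success, checkpoints = -1.0, []
    for step in range(1, num_steps + 1):
        # ...one policy-update step on simulation data would go here...
        if step % eval_freq == 0:                    # flexible eval scheduling
            success = evaluate(policy, eval_episodes)
            if success > best_success:               # checkpoint only on improvement
                best_success = success
                checkpoints.append(step)
    return best_success, checkpoints

random.seed(0)
best, ckpts = train(policy=lambda: random.random() < 0.5)  # dummy coin-flip policy
```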
Evaluation
Benchmark trained policies on simulation environments:
- Multi-task evaluation: Test across multiple tasks/suites
- Parallel rollouts: Run multiple environments simultaneously
- Success metrics: Automatic success rate computation
- Video recording: Save rollout videos for analysis
Environment Configuration
Each environment can be configured through dataclass configs.

Performance Tips
GPU Acceleration
For IsaacLab-based environments (LeIsaac, Arena):
- Use --eval.batch_size to control the number of parallel environments
- Enable RTX rendering for vision-based policies
- Run headless for maximum throughput
CPU Environments
For LIBERO and MetaWorld:
- Set MUJOCO_GL=egl for headless rendering
- Use --eval.batch_size for parallel rollouts
- Consider AsyncVectorEnv for better CPU utilization
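One detail worth noting about the MUJOCO_GL setting: the rendering backend is read when MuJoCo is first imported, so it must be set beforehand, e.g.:

```python
import os

# Select EGL for headless (no-display) rendering. This must happen before
# the first MuJoCo import; setting it afterwards has no effect.
os.environ["MUJOCO_GL"] = "egl"

# Only import MuJoCo-backed environments (LIBERO, MetaWorld) after this point.
```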
Memory Management
- Reduce observation_width and observation_height to save memory
- Limit n_envs based on available RAM/VRAM
- Use lower batch sizes for evaluation on limited hardware
Next Steps
- EnvHub: Load and share environments from the HuggingFace Hub
- LeIsaac: Control SO101 robots in IsaacLab simulation
- IsaacLab Arena: GPU-accelerated humanoid manipulation
- LIBERO: Lifelong learning benchmark
- MetaWorld: Multi-task RL benchmark
Contributing Environments
We welcome new simulation environments! To contribute:
- Implement the environment following the Gym interface
- Create an EnvConfig in lerobot/envs/configs.py
- Add factory logic in lerobot/envs/factory.py
- Upload to EnvHub for easy sharing
- Submit a PR with documentation