Find answers to common questions about LeRobot installation, usage, and troubleshooting.

Getting Started

LeRobot is an open-source library that provides models, datasets, and tools for real-world robotics in PyTorch. It aims to lower the barrier to entry for robot learning by providing:
  • Hardware-agnostic robot control interface
  • Standardized dataset format hosted on Hugging Face Hub
  • State-of-the-art policies for imitation learning, RL, and VLA models
  • Tools for data collection, training, and evaluation
LeRobot requires Python 3.12 or higher. We recommend using conda or uv to manage your Python environment.
# With conda
conda create -y -n lerobot python=3.12
conda activate lerobot
You can install LeRobot via pip or from source:
# From PyPI (stable release)
pip install lerobot

# From source (latest development)
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e .
See the Installation Guide for detailed instructions.
LeRobot supports a wide range of hardware:
  • Robot Arms: SO100, Koch, LeKiwi, HopeJR, OpenARM
  • Mobile Manipulators: OMX, EarthRover
  • Humanoids: Reachy2, Unitree G1
  • Teleoperation Devices: Gamepads, Keyboards, Phones
The library is designed to be extensible: you can implement the Robot interface for custom hardware.

Datasets

To load a dataset, use the LeRobotDataset class:
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load from Hub
dataset = LeRobotDataset("lerobot/aloha_mobile_cabinet")

# Access data
frame = dataset[0]
print(frame.keys())  # See available features
LeRobotDataset uses:
  • Parquet files for state/action data (efficient columnar storage)
  • MP4 videos or images for visual observations
  • metadata.json for dataset information
This format enables efficient streaming, storage, and visualization of large robotic datasets.
Yes! You can convert your existing dataset to the LeRobotDataset format or load it directly.
You can visualize datasets directly on the Hugging Face Hub or programmatically:
# View on Hub
# Visit: https://huggingface.co/datasets/lerobot/aloha_mobile_cabinet

# Visualize in Python
import matplotlib.pyplot as plt

frame = dataset[0]
plt.imshow(frame['observation.images.top'])
plt.title(f"Episode {frame['episode_index']}, Frame {frame['frame_index']}")
plt.show()

Training

Use the lerobot-train command:
lerobot-train \
  --policy=act \
  --dataset.repo_id=lerobot/aloha_mobile_cabinet
Or use the Python API for more control:
from lerobot.policies.act.modeling_act import ACTPolicy
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/aloha_mobile_cabinet")
policy = ACTPolicy(config, dataset.meta)
# ... training loop ...
LeRobot provides several state-of-the-art policies:
Imitation Learning:
  • ACT (Action Chunking Transformer)
  • Diffusion Policy
  • VQ-BeT
Reinforcement Learning:
  • HIL-SERL
  • TDMPC
Vision-Language-Action:
  • Pi0Fast, Pi0.5
  • GR00T N1.5
  • SmolVLA
  • XVLA
Training time varies significantly based on:
  • Policy type: Diffusion policies generally take longer than ACT
  • Dataset size: More episodes = longer training
  • Hardware: GPU type and number of GPUs
  • Hyperparameters: Batch size, number of steps
Example: Training ACT on PushT with a single GPU typically takes 2-4 hours for 100k steps.
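As a back-of-envelope sanity check on the figure above, you can translate wall-clock time into training throughput:

```python
# Rough throughput implied by the estimate above:
# 100k ACT steps on PushT in 2-4 hours on a single GPU.
total_steps = 100_000

for hours in (2, 4):
    steps_per_sec = total_steps / (hours * 3600)
    print(f"{hours}h -> ~{steps_per_sec:.1f} steps/s")
```

If your run is much slower than this, check that data loading (not the GPU) isn't the bottleneck.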
Yes! You can resume an interrupted training run with the --resume flag:
lerobot-train \
  --policy=act \
  --dataset.repo_id=lerobot/aloha_mobile_cabinet \
  --resume=outputs/train/my_checkpoint
LeRobot supports distributed training via PyTorch DDP:
# Multi-GPU training
torchrun --nproc_per_node=4 -m lerobot.scripts.train \
  --policy=act \
  --dataset.repo_id=lerobot/aloha_mobile_cabinet

Inference and Deployment

Use the make_policy factory:
from lerobot.policies.factory import make_policy

# Load from Hub
policy = make_policy(pretrained="lerobot/act_aloha_sim_transfer_cube_human")

# Run inference
action = policy.select_action(observation)
Use the lerobot-eval command:
# Evaluate in simulation
lerobot-eval \
  --policy.path=lerobot/pi0_libero_finetuned \
  --env.type=libero \
  --env.task=libero_object \
  --eval.n_episodes=10
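The headline metric from an evaluation run is the success rate, which is just the mean over per-episode outcomes. A minimal sketch with hypothetical outcomes (not LeRobot's actual reporting code):

```python
# Hypothetical outcomes from a 10-episode evaluation run
# (True = the task succeeded in that episode).
episode_successes = [True, True, False, True, False, True, True, True, False, True]

n = len(episode_successes)
success_rate = sum(episode_successes) / n
print(f"Success rate over {n} episodes: {success_rate:.0%}")
```

With only 10 episodes the estimate is noisy; increase --eval.n_episodes for more reliable comparisons between checkpoints.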
Yes! LeRobot is designed for real-world deployment:
  1. Train your policy on real robot data or simulation
  2. Load the policy and connect to your robot
  3. Run the control loop
from lerobot.robots.myrobot import MyRobot
from lerobot.policies.factory import make_policy

robot = MyRobot()
policy = make_policy(pretrained="my_model")

while True:
    obs = robot.get_observation()
    action = policy.select_action(obs)
    robot.send_action(action)
For low-latency control, consider:
  • Using async inference (see examples/tutorial/async-inf/)
  • Running on GPU for faster inference
  • Optimizing with TorchScript or ONNX
  • Using action chunking (ACT-style policies)
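Action chunking helps latency because the policy is queried once per chunk rather than once per control step. A toy, pure-Python illustration of the bookkeeping (real chunking policies like ACT also blend overlapping chunks; the names below are placeholders):

```python
from collections import deque

CHUNK_SIZE = 10  # actions returned per policy query

def dummy_policy(observation, step):
    """Stand-in for a chunking policy: returns CHUNK_SIZE actions."""
    return [f"action_{step + i}" for i in range(CHUNK_SIZE)]

action_queue = deque()
policy_calls = 0

for step in range(100):            # 100 control steps
    if not action_queue:           # query the policy only when the queue is empty
        action_queue.extend(dummy_policy(None, step))
        policy_calls += 1
    action = action_queue.popleft()
    # robot.send_action(action) would go here

print(policy_calls)  # 10 policy calls instead of 100
```

The expensive policy forward pass runs 10x less often, so the control loop can keep a steady rate even if inference is slow.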

Hardware Integration

Implement the Robot interface:
  1. Create a new robot class inheriting from base Robot
  2. Implement required methods: connect(), get_observation(), send_action()
  3. Define your robot’s features and configuration
See the Robot API for details on implementing custom robots.
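The three steps above can be sketched as follows. Note that `Robot` here is a local stand-in for LeRobot's actual base class (which lives in the lerobot package and whose exact API may differ), and `MyCustomRobot` and its port are hypothetical:

```python
from abc import ABC, abstractmethod

# Local stand-in for LeRobot's Robot base class, with the
# required methods named in the steps above.
class Robot(ABC):
    @abstractmethod
    def connect(self) -> None: ...
    @abstractmethod
    def get_observation(self) -> dict: ...
    @abstractmethod
    def send_action(self, action) -> None: ...

class MyCustomRobot(Robot):
    """Hypothetical robot backed by a serial motor bus."""
    def __init__(self, port: str = "/dev/ttyUSB0"):
        self.port = port
        self.connected = False

    def connect(self) -> None:
        # Open the motor bus and cameras here.
        self.connected = True

    def get_observation(self) -> dict:
        # Return sensor readings keyed by feature name.
        return {"observation.state": [0.0, 0.0, 0.0]}

    def send_action(self, action) -> None:
        # Forward the action to the motors.
        assert self.connected, "call connect() first"

robot = MyCustomRobot()
robot.connect()
print(robot.get_observation())
```

Once the interface is implemented, the same data-collection and inference tools work with your hardware unchanged.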
Not necessarily! You can:
  • Start with simulation environments (PushT, LIBERO)
  • Use low-cost hardware like SO-100 arms
  • Implement support for your existing robots
LeRobot works with a wide range of hardware from hobbyist to professional.
Calibration depends on your specific robot. Generally:
  1. Follow your robot’s calibration procedure
  2. Use LeRobot’s calibration tools if available
  3. Record the calibration parameters in your robot config
For SO-100/101 robots, see the hardware-specific documentation.

Troubleshooting

If you hit CUDA out-of-memory errors during training, try these solutions:
  • Reduce batch size in your config
  • Use gradient accumulation
  • Enable mixed precision training (AMP)
  • Use a smaller model variant
  • Clear CUDA cache: torch.cuda.empty_cache()
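Gradient accumulation keeps the effective batch size while shrinking per-step memory: you average gradients over several micro-batches before each optimizer step. A toy, framework-free sketch of the bookkeeping (in PyTorch you would scale the loss by 1/accum_steps and call optimizer.step() every accum_steps batches):

```python
# Toy gradient-accumulation loop: step the optimizer once every
# `accum_steps` micro-batches, so memory is sized for the micro-batch
# while the effective batch size stays large.
micro_batch_size = 8
accum_steps = 4
effective_batch = micro_batch_size * accum_steps  # 32

def fake_grad(micro_batch_idx):
    """Stand-in for a backward pass on one micro-batch."""
    return 1.0

grad_buffer = 0.0
optimizer_steps = 0

for i in range(16):                  # 16 micro-batches total
    grad_buffer += fake_grad(i) / accum_steps  # scale like loss / accum_steps
    if (i + 1) % accum_steps == 0:   # optimizer.step() boundary
        optimizer_steps += 1
        grad_buffer = 0.0            # optimizer.zero_grad() equivalent

print(effective_batch, optimizer_steps)  # 32 4
```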
See Troubleshooting for more details.
If you see video encoding errors when recording or encoding datasets, ensure you have ffmpeg with libsvtav1 support:
# Check encoders
ffmpeg -encoders | grep svt

# Reinstall if needed (with conda)
conda install ffmpeg=7.1.1 -c conda-forge

Contributing

There are many ways to contribute:
  • Fix bugs or add features
  • Improve documentation
  • Share datasets on the Hub
  • Add support for new robots or policies
  • Help others in Discord
See the Contributing Guide to get started.
Not at all! You don't need to be an expert; contributions at all levels are welcome:
  • Documentation improvements
  • Bug reports
  • Example notebooks
  • Community support
Everyone started as a beginner; we're here to help you learn!

Can't find your question?

Ask in the LeRobot Discord - the community is happy to help!
