System Requirements
PARC has been tested on Ubuntu 22.04 with CUDA-enabled GPUs.

Recommended Hardware:
- NVIDIA GPU with CUDA support (RTX 4090 for real-time kinematic control, RTX 3090+ for training)
- 16GB+ RAM
- Ubuntu 22.04 or compatible Linux distribution
Installation Steps
Install Dependencies
Install all required packages from requirements.txt. This will install:
- PyTorch 2.2.0 - Deep learning framework
- Polyscope - 3D visualization for Motionscope
- Trimesh & embreex - Mesh processing
- WandB - Experiment tracking (v0.17.4+)
- NumPy, SciPy, Matplotlib - Scientific computing
- Gym 0.26.2 - RL environment interface
- ImageIO - Video/image I/O with FFmpeg support
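The install step above, as a minimal sketch — the conda environment name and Python version are assumptions, not taken from the PARC docs:

```shell
# Create and activate an isolated environment (name and Python version are assumptions)
conda create -n parc python=3.10 -y
conda activate parc

# Install all dependencies from the repository root
pip install -r requirements.txt
```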
Fix CUDA Detection (if needed)
If PyTorch cannot detect CUDA after installation, reinstall it with an explicit CUDA version, then verify that torch.cuda.is_available() returns True.
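A hedged sketch of the reinstall. The cu118 wheel index is an assumption — for PyTorch 2.2.0, pick the index (cu118 or cu121) that matches your driver:

```shell
# Reinstall PyTorch from an explicit CUDA wheel index (cu118 shown as an example)
pip install --force-reinstall torch==2.2.0 --index-url https://download.pytorch.org/whl/cu118

# Verify CUDA is visible to PyTorch
python -c "import torch; print(torch.cuda.is_available())"
```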
Configure Data Directory
Set up the user configuration file to specify your data directory. Create or edit user_config.yaml at the repository root.
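A minimal sketch of the file — DATA_DIR is the key named in this guide; the path itself is a placeholder you must replace:

```yaml
# user_config.yaml (repository root)
DATA_DIR: /absolute/path/to/parc_data
```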
The DATA_DIR placeholder is referenced throughout the configuration files for data, checkpoints, and outputs. Make sure it points to an existing absolute path.

Optional: Isaac Gym Setup
Isaac Gym is NVIDIA’s deprecated physics simulator used for PARC’s motion tracking module. While deprecated, it’s still functional for this use case.

Why Isaac Gym?
PARC’s tracking environment was built on an early version of MimicKit using Isaac Gym. For new projects, consider using the recommended open-source alternative: MimicKit.

Installation Process
Download Isaac Gym
Register and download Isaac Gym from NVIDIA:
https://developer.nvidia.com/isaac-gym
Create Isaac Gym Environment
Use the Isaac Gym installation script within a conda environment. Create a helper YAML file named isaac_gym_env.yaml, then create the environment from it.
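A minimal sketch of the helper file — the Python pin reflects Isaac Gym's supported range (3.6–3.8); everything else here is an assumption:

```yaml
# isaac_gym_env.yaml - sketch only; adjust packages to your setup
name: isaac_gym
channels:
  - defaults
dependencies:
  - python=3.8
  - pip
```

Then create and activate the environment with conda env create -f isaac_gym_env.yaml followed by conda activate isaac_gym.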
Install Isaac Gym
Follow the Isaac Gym installation instructions from their documentation. Typically:
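A sketch of the usual editable pip install, assuming the downloaded archive was extracted to an isaacgym/ directory (the path is an assumption):

```shell
# Editable install from the extracted Isaac Gym package
cd isaacgym/python
pip install -e .
```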
Download Pre-trained Models & Datasets
PARC provides pre-trained models and motion datasets on HuggingFace.

PARC Dataset
Download the datasets from the initial iteration and from each PARC training stage.
Available Datasets
- Dec 2024 experiment: 4 PARC iterations
- April 2025 experiment: 5 PARC iterations
- Small model: ~30 MB checkpoint files
- Motion sequences with terrain data
- Trained motion diffusion models
- Tracking controller checkpoints
Loading Motion Data
Motion files are loaded using:
- PARC/anim/motion_lib.py - Motion library utilities
- PARC/anim/kin_char_model.py - Character model definitions
The motion file to load is specified by the motion_filepath parameter in PARC/motionscope/motionscope_config.yaml.
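As a hedged illustration of inspecting a motion file outside of PARC — this assumes motion files are pickled Python dictionaries, which may not match the actual format:

```python
# Hypothetical helper: summarize the contents of a motion file.
# Assumes the file is a pickled dict (the real PARC format may differ).
import pickle

def summarize_motion(path):
    """Print each top-level key with its array shape or value type."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    for key, value in data.items():
        shape = getattr(value, "shape", None)
        print(f"{key}: {shape if shape is not None else type(value).__name__}")
    return data
```

For example, summarize_motion("motion.pkl") would list the keys stored in that file.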
Standalone Data Reading: If you only want to read motion data without installing PARC, use scripts/read_motion_data.py; it requires only NumPy and, optionally, PyTorch.

Troubleshooting
PyTorch CUDA Not Detected
If torch.cuda.is_available() returns False:
- Verify the NVIDIA driver installation with nvidia-smi
- Check CUDA version compatibility with PyTorch 2.2.0
- Reinstall PyTorch with an explicit CUDA version (see Fix CUDA Detection above)
Import Errors
If you encounter import errors when running scripts, confirm that all packages from requirements.txt installed successfully and that you are running the scripts from the repository root.

Polyscope Won’t Launch
Polyscope requires OpenGL. On headless servers:
- Use X11 forwarding or VNC
- Consider running Motionscope locally after downloading motion files
Old Dataset Format
If using datasets from the old SharePoint release (password: “PARC”), note that the file format is only compatible with PARC v0.1. Use the new HuggingFace datasets for the current version.

Next Steps
With PARC installed, you’re ready to:

Quick Start
Run Motionscope to visualize motions and start experimenting
PARC Guide
Learn about the 4-stage PARC training loop and key components