## Overview

The motion generator is a transformer-based diffusion model that learns to generate character motions based on:

- Local heightmaps: Terrain geometry around the character
- Target directions: Where the character should move
- Previous states: For autoregressive generation
- Contact labels: Ground contact information
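The four conditioning signals above can be pictured as a batch of arrays. A minimal sketch follows; all shapes, sizes, and dictionary keys here are hypothetical illustrations, not the model's actual interface:

```python
import numpy as np

batch_size = 4        # hypothetical sizes, chosen for illustration
num_prev_frames = 2   # previous states kept for autoregressive generation
dof = 34              # degrees of freedom per character pose
grid = 16             # local heightmap resolution around the character

# One conditioning batch, with hypothetical shapes:
conditioning = {
    # local terrain heightmap sampled around the character
    "heightmap": np.zeros((batch_size, grid, grid), dtype=np.float32),
    # unit XY direction the character should move in
    "target_dir": np.tile(np.array([1.0, 0.0], dtype=np.float32), (batch_size, 1)),
    # previous poses for autoregressive generation
    "prev_states": np.zeros((batch_size, num_prev_frames, dof), dtype=np.float32),
    # binary ground-contact labels (e.g. left/right foot)
    "contacts": np.zeros((batch_size, num_prev_frames, 2), dtype=np.float32),
}

for name, arr in conditioning.items():
    print(name, arr.shape)
```

The real model consumes tensors rather than NumPy arrays; the sketch only illustrates what each conditioning input encodes.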
## Quick Start
### 1. Prepare your configuration

Create a YAML config file or use the default. If no config is provided, training defaults to `data/configs/parc_1_train_gen.yaml`.

### 2. Run training

The training script will:
- Create or load a motion sampler (cached as `.pkl` for faster loading)
- Initialize the MDM model
- Train using wandb for tracking (if enabled)
- Save checkpoints periodically
- Save the final model
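The steps above boil down to a checkpointed training loop. A minimal sketch of that control flow follows; the `model` and `sampler` methods and the file names are hypothetical stand-ins, not the actual PARC API:

```python
def train(model, sampler, num_iters, checkpoint_every, log_fn=print):
    """Skeleton of the training flow described above: repeated training
    steps, periodic checkpoints, and a final save at the end.
    The model/sampler interfaces here are illustrative stand-ins."""
    for it in range(num_iters):
        batch = sampler.sample()                   # draw a batch of motion data
        loss = model.step(batch)                   # one diffusion training step
        log_fn(f"iter {it}: loss {loss:.4f}")      # wandb logging in the real run
        if (it + 1) % checkpoint_every == 0:
            model.save(f"checkpoint_{it + 1}.pt")  # periodic checkpoint
    model.save("final_model.pt")                   # final model
```

The real implementation adds diffusion-specific details (noise schedules, loss weighting), but the checkpointing cadence follows this shape.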
### 3. Monitor training
If `use_wandb: True` is set in your config, view training progress at wandb.ai under the project "train-mdm".

## Configuration Guide
### Core Training Parameters

### Model Architecture

### Diffusion Settings

### Dropout Rates

### Motion Data Configuration

### Heightmap Configuration

### Target Direction Settings

### Loss Weights

### Data Augmentation

### Output Paths
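Taken together, the groups above map onto a single YAML config file. The sketch below uses only keys mentioned elsewhere in this guide; the values, groupings, and any other keys are illustrative, so consult `data/configs/parc_1_train_gen.yaml` for the authoritative names and defaults:

```yaml
# Illustrative sketch only - see data/configs/parc_1_train_gen.yaml
# for the actual key names and defaults.

# Core training parameters
batch_size: 64
learning_rate: 0.0001
use_wandb: True

# Model architecture
d_model: 256
num_heads: 8

# Loss weights
w_hf: 1.0          # heightmap loss weight

# Output paths
sampler_save_filepath: output/sampler.pkl   # hypothetical path
```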
## Important Files
- `parc/motion_generator/mdm.py` - Main MDM class with heightmap and target conditions
- `parc/motion_generator/mdm_transformer.py` - Transformer module implementation
- `parc/motion_generator/mdm_heightfield_contact_motion_sampler.py` - Weighted dataset sampler
## Continuing Training

To continue training from a checkpoint:

## Output Structure
After training, you'll find the periodic checkpoints and the final saved model under the output paths set in your config.

## Troubleshooting
### Sampler Loading is Slow

The first time you run training, creating the sampler can take several minutes. The sampler is saved to `sampler_save_filepath` and will be loaded much faster on subsequent runs.
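This caching is a standard pickle load-or-create pattern. A sketch, assuming the function name and sampler construction are hypothetical (only the `sampler_save_filepath` config key comes from this guide):

```python
import os
import pickle

def load_or_create_sampler(sampler_save_filepath, create_fn):
    """Return the cached sampler if the .pkl file exists; otherwise build
    it with create_fn (slow, first run only) and cache it for next time."""
    if os.path.exists(sampler_save_filepath):
        with open(sampler_save_filepath, "rb") as f:
            return pickle.load(f)      # fast path on subsequent runs
    sampler = create_fn()              # expensive first-time construction
    with open(sampler_save_filepath, "wb") as f:
        pickle.dump(sampler, f)
    return sampler
```

Deleting the `.pkl` file forces the sampler to be rebuilt, which is useful after changing the dataset.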
### Out of Memory

Reduce these parameters:

- `batch_size` - Try 32 or 16
- `num_heads` - Try 4
- `d_model` - Try 128
### Training Not Improving

Check:

- Loss weights - Ensure `w_hf` and the other loss weights are balanced
- Learning rate - Try 0.00005 or 0.000005
- Data augmentation - May need adjustment for your dataset