
Overview

The MotionSampler class provides an interface for sampling motion sequences from a motion library. It handles motion data extraction, temporal sampling, and batching for training motion generation models.

Class Definition

class MotionSampler

Constructor

MotionSampler(cfg: dict)

  • cfg (dict, required): Configuration dictionary for the motion sampler.

Configuration Keys

  • cfg.device (str, required): Device to use for tensors ('cuda' or 'cpu')
  • cfg.char_file (str, required): Path to the character model XML file
  • cfg.motion_lib_file (str | MotionLib, required): Path to a motion library file, or a MotionLib instance

Attributes

  • _device (str): Device for tensor operations
  • _seq_len (int): Length of motion sequences to sample (set after initialization)
  • _fps (float): Frames per second for motion sampling (set after initialization)
  • _kin_char_model (KinCharModel): Kinematic character model for the motion data
  • _mlib (MotionLib): Motion library containing all motion clips
  • _motion_lib_file (str | MotionLib): Path to, or instance of, the motion library
  • _random_start_times (bool): Whether to randomly sample start times within motions

Methods

check_init

Verifies that required attributes have been initialized.
check_init()
Raises an AssertionError if _seq_len or _fps has not been set.

get_seq_len

Returns the sequence length.
get_seq_len() -> int
Returns: The length of motion sequences being sampled.

_sample_motion_start_times

Samples start times for motion clips.
_sample_motion_start_times(
    motion_ids: torch.Tensor,
    seq_duration: float
) -> torch.Tensor
  • motion_ids (torch.Tensor, required): Motion IDs to sample from, shape [batch_size]
  • seq_duration (float, required): Duration of the sequence in seconds
Returns: Tensor of start times for each motion, shape [batch_size]. Behavior:
  • For motions with LoopMode.WRAP: Samples uniformly from entire motion duration
  • For motions with LoopMode.CLAMP: Samples uniformly ensuring enough frames remain
  • If _random_start_times is False: Returns zeros (always start from beginning)
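The three rules above can be sketched in plain Python (using the random module in place of torch; the LoopMode enum and motion lengths here are illustrative):

```python
import random
from enum import Enum


class LoopMode(Enum):
    WRAP = 0
    CLAMP = 1


def sample_start_times(motion_lengths, loop_modes, seq_duration,
                       random_start_times=True):
    """Sample one start time per motion, mirroring the documented rules."""
    starts = []
    for length, mode in zip(motion_lengths, loop_modes):
        if not random_start_times:
            starts.append(0.0)                          # always start at the beginning
        elif mode is LoopMode.WRAP:
            starts.append(random.uniform(0.0, length))  # anywhere; motion wraps
        else:  # LoopMode.CLAMP
            max_start = max(0.0, length - seq_duration) # full sequence must fit
            starts.append(random.uniform(0.0, max_start))
    return starts


random.seed(0)
starts = sample_start_times([2.0, 1.5], [LoopMode.WRAP, LoopMode.CLAMP], 1.0)
```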

_extract_motion_data

Extracts motion data for specified motion IDs and start times.
_extract_motion_data(
    motion_ids: torch.Tensor,
    motion_start_times: torch.Tensor
) -> tuple[torch.Tensor, torch.Tensor]
  • motion_ids (torch.Tensor, required): Motion IDs, shape [batch_size]
  • motion_start_times (torch.Tensor, required): Start times in seconds, shape [batch_size]
Returns: Tuple of:
  • motion_samples: Motion data tensor of shape [batch_size, seq_len, num_dof]
  • contacts: Contact labels of shape [batch_size, seq_len, num_rb]
Processing:
  1. Samples frames from motion library at specified times
  2. Converts quaternion rotations to exponential map representation
  3. Converts joint rotations to DOF representation
  4. Concatenates root position, root rotation, and joint rotations
  5. Reshapes into batch format
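Step 2's quaternion-to-exponential-map conversion can be sketched as follows (pure Python; the library's actual implementation may differ in convention, e.g. quaternion component ordering):

```python
import math


def quat_to_expmap(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to an exponential-map 3-vector.

    The result is axis * angle, matching the 3-value rotation
    representation described in the motion data format below.
    """
    # Clamp w to guard against numerical drift outside [-1, 1].
    w = max(-1.0, min(1.0, w))
    angle = 2.0 * math.acos(w)
    s = math.sqrt(max(0.0, 1.0 - w * w))
    if s < 1e-8:
        return (0.0, 0.0, 0.0)          # near-identity rotation
    return (angle * x / s, angle * y / s, angle * z / s)


# 90-degree rotation about the Z axis.
half = math.radians(45.0)
expmap = quat_to_expmap(math.cos(half), 0.0, 0.0, math.sin(half))
```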

Usage Example

import torch
from parc.motion_generator.motion_sampler import MotionSampler
from parc.anim.motion_lib import MotionLib

# Configuration
cfg = {
    'device': 'cuda',
    'char_file': 'data/characters/humanoid.xml',
    'motion_lib_file': 'data/motions/motion_lib.yaml'
}

# Initialize sampler
sampler = MotionSampler(cfg)

# Set sequence parameters (typically done by the motion generator)
sampler._seq_len = 64
sampler._fps = 30.0
sampler._random_start_times = True

# Get number of motions
num_motions = sampler._mlib.num_motions()
print(f"Loaded {num_motions} motion clips")

# Sample random motions
batch_size = 32
motion_ids = sampler._mlib.sample_motions(batch_size)

# Sample start times
seq_duration = sampler._seq_len / sampler._fps  # Duration in seconds
start_times = sampler._sample_motion_start_times(motion_ids, seq_duration)

# Extract motion data
motion_samples, contacts = sampler._extract_motion_data(motion_ids, start_times)

print(f"Motion samples shape: {motion_samples.shape}")  # [32, 64, num_dof]
print(f"Contacts shape: {contacts.shape}")              # [32, 64, num_rb]

Integration with Motion Generators

Motion generators like MDM use MotionSampler to load training data:
from parc.motion_generator.mdm import MDM
from parc.motion_generator.motion_sampler import MotionSampler

# Create sampler
sampler_cfg = {
    'device': 'cuda',
    'char_file': 'path/to/character.xml',
    'motion_lib_file': 'path/to/motions.yaml'
}
motion_sampler = MotionSampler(sampler_cfg)

# Create and train MDM model
mdm_cfg = {...}  # MDM configuration
mdm = MDM(mdm_cfg)
mdm.train(motion_sampler, checkpoint_dir='checkpoints/')

Motion Library Integration

The sampler integrates with the MotionLib class:
from parc.anim.motion_lib import MotionLib, LoopMode

# Load motion library
mlib = MotionLib.from_file(
    motion_file='motions.yaml',
    char_model=char_model,
    device='cuda',
    contact_info=True
)

# Use with sampler
cfg = {
    'device': 'cuda',
    'char_file': 'character.xml',
    'motion_lib_file': mlib  # Pass MotionLib instance directly
}
sampler = MotionSampler(cfg)

Motion Data Format

The extracted motion data is structured as:

Motion Samples

Concatenated tensor containing:
  • Root position: XYZ coordinates of character root (3 values)
  • Root rotation: Exponential map representation (3 values)
  • Joint rotations: DOF values for all joints (num_joints × dof_per_joint)
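The per-frame concatenation order can be sketched without the library; the joint DOF count below is a made-up toy value:

```python
def build_frame_features(root_pos, root_rot_expmap, joint_dofs):
    """Concatenate one frame's features in the documented order."""
    assert len(root_pos) == 3 and len(root_rot_expmap) == 3
    return list(root_pos) + list(root_rot_expmap) + list(joint_dofs)


# One frame for a toy character with 4 joint DOFs.
frame = build_frame_features([0.0, 0.0, 0.9],        # root position (3)
                             [0.0, 0.1, 0.0],        # root rotation expmap (3)
                             [0.2, -0.3, 0.5, 0.0])  # joint DOFs
num_dof = len(frame)  # 3 + 3 + 4 = 10 for this toy character
```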

Contacts

Binary contact labels for each rigid body:
  • 1.0: Body is in contact with ground
  • 0.0: Body is not in contact
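Because the labels are 0.0/1.0 floats, downstream statistics reduce to simple averages. A hedged example with a fabricated mini-batch (nested lists standing in for the [batch_size, seq_len, num_rb] tensor):

```python
# contacts: [batch, seq_len, num_rb] as nested lists of 0.0/1.0 labels
contacts = [
    [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]],   # one sequence: 3 frames, 2 bodies
]

num_frames = len(contacts[0])
# Fraction of frames each rigid body spends in contact with the ground.
contact_ratio = [
    sum(frame[b] for frame in contacts[0]) / num_frames
    for b in range(len(contacts[0][0]))
]
```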

Loop Modes

The sampler respects motion loop modes:
  • WRAP: Motion repeats cyclically, can sample any start time
  • CLAMP: Motion stops at end, start time limited to ensure full sequence fits
# Example: different sampling ranges for a 2 s motion and a 1 s sequence
motion_length = 2.0  # seconds
seq_duration = 1.0   # seconds

# WRAP mode: any start time in [0, motion_length); e.g. t=1.5 wraps around
wrap_max_start = motion_length

# CLAMP mode: the full sequence must fit before the motion ends
clamp_max_start = motion_length - seq_duration  # 1.0
