Benchmarking is critical for evaluating edge AI models under realistic hardware constraints. The Edge AI Hardware Optimization framework provides comprehensive tools to measure accuracy, latency, memory usage, throughput, and energy consumption.

Overview

The benchmarking system in src/edge_opt/metrics.py collects performance metrics that matter for edge deployment:
  • Accuracy: Classification correctness on validation data
  • Latency: Inference time with statistical analysis (mean, std dev, p95)
  • Memory: Model size in megabytes
  • Throughput: Samples processed per second
  • Energy Proxy: Estimated energy consumption per inference

PerfMetrics Dataclass

All metrics are returned in a structured dataclass defined in src/edge_opt/metrics.py:11-19:
from dataclasses import dataclass

@dataclass
class PerfMetrics:
    accuracy: float           # Classification accuracy (0.0 to 1.0)
    latency_ms: float         # Mean latency in milliseconds
    latency_std_ms: float     # Standard deviation of latency
    latency_p95_ms: float     # 95th percentile latency
    throughput_sps: float     # Throughput in samples per second
    memory_mb: float          # Model memory footprint in MB
    energy_proxy_j: float     # Energy proxy in joules
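Because PerfMetrics is a standard dataclass, results compose well with ordinary Python tooling. A minimal sketch with illustrative values (in practice, instances are returned by collect_metrics, described below), using dataclasses.asdict to flatten a result into a dict for logging or for building a results table:
from dataclasses import asdict
from edge_opt.metrics import PerfMetrics

# Illustrative values only — collect_metrics() (below) returns real instances
metrics = PerfMetrics(
    accuracy=0.89,
    latency_ms=12.4,
    latency_std_ms=0.8,
    latency_p95_ms=13.7,
    throughput_sps=10300.0,
    memory_mb=2.85,
    energy_proxy_j=0.062,
)

# asdict() flattens the dataclass into a plain dict, e.g. a pandas DataFrame row
print(asdict(metrics))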

Core Functions

The framework provides several benchmarking functions:

1. evaluate_accuracy

Measure classification accuracy on a validation dataset.
def evaluate_accuracy(
    model: nn.Module,
    loader: DataLoader,
    device: torch.device,
    precision: str = "fp32"
) -> float:
    """Evaluate model accuracy on a dataset.
    
    Args:
        model: PyTorch model to evaluate
        loader: DataLoader with validation data
        device: Device to run evaluation on (cpu/cuda)
        precision: Precision mode ('fp32', 'fp16', or 'int8')
        
    Returns:
        Accuracy as a float between 0.0 and 1.0
    """
Defined in src/edge_opt/metrics.py:22-36. Example usage:
import torch
from edge_opt.metrics import evaluate_accuracy
from torch.utils.data import DataLoader

device = torch.device('cpu')
model = model.to(device)

accuracy = evaluate_accuracy(
    model=model,
    loader=val_loader,
    device=device,
    precision='fp32'
)

print(f"Validation accuracy: {accuracy:.2%}")
# Output: Validation accuracy: 89.23%

2. measure_latency

Measure average inference latency with warmup.
def measure_latency(
    model: nn.Module,
    sample_input: torch.Tensor,
    num_runs: int = 100,
    warmup: int = 10
) -> float:
    """Measure model inference latency.
    
    Args:
        model: PyTorch model to benchmark
        sample_input: Sample input tensor
        num_runs: Number of inference runs to average
        warmup: Number of warmup runs before measurement
        
    Returns:
        Average latency in milliseconds
    """
Defined in src/edge_opt/metrics.py:39-48. Example usage:
import torch
from edge_opt.metrics import measure_latency

# Create sample input
sample_input = torch.randn(128, 1, 28, 28)  # batch_size=128
sample_input = sample_input.to(device)

# Measure latency
latency = measure_latency(
    model=model,
    sample_input=sample_input,
    num_runs=100,
    warmup=10
)

print(f"Average latency: {latency:.2f} ms")
# Output: Average latency: 12.45 ms

3. measure_latency_distribution

Measure latency statistics across multiple benchmark repeats.
def measure_latency_distribution(
    model: nn.Module,
    sample_input: torch.Tensor,
    repeats: int = 5,
    num_runs: int = 100,
    warmup: int = 10
) -> tuple[float, float, float]:
    """Measure latency distribution with statistics.
    
    Args:
        model: PyTorch model to benchmark
        sample_input: Sample input tensor
        repeats: Number of times to repeat measurement
        num_runs: Number of runs per repeat
        warmup: Number of warmup runs
        
    Returns:
        Tuple of (mean_ms, std_ms, p95_ms)
    """
Defined in src/edge_opt/metrics.py:53-56. Example usage:
import torch
from edge_opt.metrics import measure_latency_distribution

sample_input = torch.randn(128, 1, 28, 28).to(device)

# Measure with statistics
mean_lat, std_lat, p95_lat = measure_latency_distribution(
    model=model,
    sample_input=sample_input,
    repeats=5,      # Repeat 5 times
    num_runs=100,   # 100 runs per repeat
    warmup=10       # 10 warmup runs
)

print(f"Latency: {mean_lat:.2f} ± {std_lat:.2f} ms")
print(f"95th percentile: {p95_lat:.2f} ms")
# Output:
# Latency: 12.45 ± 0.83 ms
# 95th percentile: 13.67 ms

Implementation Details

The latency measurement function in src/edge_opt/metrics.py:39-48 implements proper benchmarking practices:
def measure_latency(model: nn.Module, sample_input: torch.Tensor, num_runs: int = 100, warmup: int = 10) -> float:
    model.eval()
    with torch.no_grad():
        # Warmup phase: prime caches and stabilize CPU frequency
        for _ in range(warmup):
            _ = model(sample_input)
        
        # Timed measurement
        start = time.perf_counter()
        for _ in range(num_runs):
            _ = model(sample_input)
        elapsed = time.perf_counter() - start
    
    # Return average latency in milliseconds
    return (elapsed / num_runs) * 1000.0
Key aspects:
  • Warmup: Runs inference multiple times before measurement to prime CPU caches and stabilize frequency scaling
  • No gradients: Uses torch.no_grad() to disable gradient computation
  • High-precision timer: Uses time.perf_counter() for accurate timing
  • Averaging: Divides total time by number of runs for stable measurements
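
The excerpt above times on the host clock, which is appropriate for CPU inference. If you benchmark on a CUDA device, kernels launch asynchronously, so the device should be synchronized around the timed region; otherwise only launch overhead is measured. A minimal sketch of such a variant (not part of edge_opt, shown only as an adaptation):
import time
import torch
from torch import nn

def measure_latency_cuda(model: nn.Module, sample_input: torch.Tensor,
                         num_runs: int = 100, warmup: int = 10) -> float:
    """CUDA-aware latency measurement: synchronize before reading the timer."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            _ = model(sample_input)
        torch.cuda.synchronize()              # ensure warmup kernels finished
        start = time.perf_counter()
        for _ in range(num_runs):
            _ = model(sample_input)
        torch.cuda.synchronize()              # wait for all timed kernels
        elapsed = time.perf_counter() - start
    return (elapsed / num_runs) * 1000.0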

Statistical Distribution

The measure_latency_distribution function in src/edge_opt/metrics.py:53-56 repeats measurements multiple times:
def measure_latency_distribution(model: nn.Module, sample_input: torch.Tensor, repeats: int = 5, num_runs: int = 100, warmup: int = 10) -> tuple[float, float, float]:
    # Measure latency multiple times
    latencies = [measure_latency(model, sample_input, num_runs=num_runs, warmup=warmup) for _ in range(repeats)]
    
    # Compute statistics
    latency_tensor = torch.tensor(latencies, dtype=torch.float32)
    return float(latency_tensor.mean()), float(latency_tensor.std(unbiased=False)), float(torch.quantile(latency_tensor, 0.95))
From configs/default.yaml, the default is benchmark_repeats: 5, which balances measurement reliability against benchmarking time. The repeats parameter determines how many independent latency measurements are taken; higher values give more reliable statistics but increase benchmarking time linearly.

4. model_memory_mb

Calculate model memory footprint.
def model_memory_mb(model: nn.Module) -> float:
    """Calculate model memory footprint.
    
    Args:
        model: PyTorch model
        
    Returns:
        Memory footprint in megabytes
    """
Defined in src/edge_opt/metrics.py:58-63. Example usage:
from edge_opt.metrics import model_memory_mb
from edge_opt.model import SmallCNN

# Create model
model = SmallCNN(conv1_channels=16, conv2_channels=32)

# Measure memory
memory = model_memory_mb(model)
print(f"Model size: {memory:.2f} MB")
# Output: Model size: 2.85 MB
Implementation in src/edge_opt/metrics.py:58-63:
def model_memory_mb(model: nn.Module) -> float:
    total_bytes = 0
    for tensor in model.state_dict().values():
        if isinstance(tensor, torch.Tensor):
            total_bytes += tensor.numel() * tensor.element_size()
    return total_bytes / (1024**2)
This calculates exact memory by:
  1. Iterating through all model parameters in state_dict()
  2. Multiplying element count (numel()) by bytes per element (element_size())
  3. Converting bytes to megabytes
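
As a quick sanity check of the arithmetic, element_size() depends only on the parameter dtype. A small sketch with an illustrative convolution weight shape:
import torch

# Bytes per element for common parameter dtypes
fp32_bytes = torch.tensor([], dtype=torch.float32).element_size()  # 4
fp16_bytes = torch.tensor([], dtype=torch.float16).element_size()  # 2
int8_bytes = torch.tensor([], dtype=torch.int8).element_size()     # 1

# Illustrative conv weight of shape (32, 16, 3, 3): 4,608 elements
numel = 32 * 16 * 3 * 3
print(numel * fp32_bytes / (1024 ** 2))  # ≈ 0.0176 MB in fp32
print(numel * fp16_bytes / (1024 ** 2))  # ≈ 0.0088 MB in fp16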

5. collect_metrics

Comprehensive metric collection function that combines all measurements.
def collect_metrics(
    model: nn.Module,
    loader: DataLoader,
    device: torch.device,
    power_watts: float,
    precision: str,
    latency_multiplier: float = 1.0,
    benchmark_repeats: int = 5,
) -> PerfMetrics:
    """Collect all performance metrics for a model.
    
    Args:
        model: PyTorch model to evaluate
        loader: DataLoader for evaluation
        device: Device to run on
        power_watts: Device power consumption for energy proxy
        precision: Precision mode ('fp32', 'fp16', 'int8')
        latency_multiplier: Scale factor for latency (e.g., CPU throttling)
        benchmark_repeats: Number of latency measurement repeats
        
    Returns:
        PerfMetrics dataclass with all metrics
    """
Defined in src/edge_opt/metrics.py:70-99. Example usage:
import yaml
import torch
from edge_opt.metrics import collect_metrics

# Load configuration
with open('configs/default.yaml', 'r') as f:
    config = yaml.safe_load(f)

device = torch.device('cpu')

# Collect all metrics
metrics = collect_metrics(
    model=model,
    loader=val_loader,
    device=device,
    power_watts=config['power_watts'],           # 5.0
    precision='fp32',
    latency_multiplier=1.0,
    benchmark_repeats=config['benchmark_repeats'] # 5
)

# Access metrics
print(f"Accuracy: {metrics.accuracy:.2%}")
print(f"Latency: {metrics.latency_ms:.2f} ± {metrics.latency_std_ms:.2f} ms")
print(f"P95 Latency: {metrics.latency_p95_ms:.2f} ms")
print(f"Throughput: {metrics.throughput_sps:.1f} samples/sec")
print(f"Memory: {metrics.memory_mb:.2f} MB")
print(f"Energy: {metrics.energy_proxy_j:.4f} J")

Configuration Parameters

From configs/default.yaml:
benchmark_repeats (integer, default: 5)
Number of times to repeat latency measurements for statistical analysis. Higher values provide more reliable statistics but increase benchmarking time. Recommended values:
  • Fast iteration: 3-5 repeats
  • Production benchmarks: 10-20 repeats
  • Publication-quality: 50-100 repeats
power_watts (float, default: 5.0)
Device power consumption in watts for energy proxy calculation:
  • Raspberry Pi 3B: 3-5W
  • Raspberry Pi 4: 5-7W
  • NVIDIA Jetson Nano: 5-10W
  • Mobile devices: 2-4W
cpu_frequency_scale (float, default: 0.7)
CPU frequency scaling factor (0.0 to 1.0). Used to simulate throttled performance:
  • 1.0: Full performance
  • 0.7: 70% performance (power saving)
  • 0.5: 50% performance (aggressive throttling)
Applied as: latency_multiplier = 1.0 / cpu_frequency_scale
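For example, with the default cpu_frequency_scale of 0.7, measured latencies are scaled up by roughly 1.43x. A small illustration (where exactly collect_metrics applies the multiplier is an implementation detail of metrics.py):
cpu_frequency_scale = 0.7
latency_multiplier = 1.0 / cpu_frequency_scale       # ≈ 1.43

measured_latency_ms = 12.45                          # illustrative measurement
throttled_latency_ms = measured_latency_ms * latency_multiplier
print(f"{throttled_latency_ms:.2f} ms")              # ≈ 17.79 ms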

Energy Proxy Calculation

The energy proxy provides an estimate of energy consumption per inference:
# From src/edge_opt/metrics.py:89
energy_proxy = (latency / 1000.0) * power_watts
This assumes constant power draw during inference:
  • Convert latency from ms to seconds: latency / 1000.0
  • Multiply by power consumption: × power_watts
  • Result is in joules (J)
Example:
  • Latency: 10ms = 0.01s
  • Power: 5W
  • Energy: 0.01s × 5W = 0.05 J
The energy proxy is a simplified model. Real energy consumption varies with CPU utilization, memory bandwidth, and dynamic power states. For precise measurements, use hardware power monitors.

Throughput Calculation

Throughput measures how many samples can be processed per second:
# From src/edge_opt/metrics.py:87
throughput = sample_input.shape[0] / (latency / 1000.0)
  • Batch size: sample_input.shape[0] (e.g., 128)
  • Latency in seconds: latency / 1000.0
  • Throughput: samples ÷ time = samples/second
Example:
  • Batch size: 128
  • Latency: 10ms = 0.01s
  • Throughput: 128 / 0.01 = 12,800 samples/sec

Best Practices

  • Use warmup runs: Always include warmup iterations before measurement to prime CPU caches and stabilize frequency scaling. The default of 10 warmup runs is usually sufficient.
  • Measure on target hardware: Benchmarks on development machines may not reflect edge device performance. Test on the actual deployment hardware whenever possible.
  • Avoid background processes: Close unnecessary applications during benchmarking. Background tasks, CPU frequency scaling, and thermal throttling can all add noise to measurements.
  • Increase repeats for stability: If latency measurements show a high standard deviation, raise benchmark_repeats to 10-20 for more reliable statistics.

Memory Budget Validation

The framework provides a utility to check memory violations:
def memory_violations(memory_mb: float, budgets_mb: list[float]) -> dict[str, bool]:
    """Check if memory exceeds budget thresholds."""
    return {f"violates_{budget}mb": memory_mb > budget for budget in budgets_mb}
Defined in src/edge_opt/metrics.py:66-67. Example usage:
import yaml
from edge_opt.metrics import model_memory_mb, memory_violations

with open('configs/default.yaml', 'r') as f:
    config = yaml.safe_load(f)

memory = model_memory_mb(optimized_model)
budgets = config['memory_budgets_mb']  # [1.0, 2.0, 4.0]

violations = memory_violations(memory, budgets)

# Filter configurations that fit within active budget
active_budget = config['active_memory_budget_mb']  # 2.0
fits_budget = memory <= active_budget

if fits_budget:
    print(f"✓ Model fits within {active_budget}MB budget")
else:
    print(f"❌ Model exceeds {active_budget}MB budget ({memory:.2f}MB)")

for budget_name, violated in violations.items():
    print(f"  {budget_name}: {'VIOLATED' if violated else 'OK'}")

Complete Benchmarking Pipeline

import yaml
import torch
import pandas as pd
from pathlib import Path
from itertools import product
from torch.utils.data import DataLoader

from edge_opt.model import SmallCNN
from edge_opt.pruning import structured_channel_prune
from edge_opt.quantization import to_fp16, to_int8
from edge_opt.metrics import collect_metrics, memory_violations

# Load configuration
with open('configs/default.yaml', 'r') as f:
    config = yaml.safe_load(f)

device = torch.device('cpu')

# Load trained model
model = SmallCNN()
model.load_state_dict(torch.load('outputs/trained_model.pth'))

# Prepare calibration loader for INT8 (calib_dataset and val_loader are
# assumed to have been built earlier, e.g. from your validation split)
calib_loader = DataLoader(calib_dataset, batch_size=config['batch_size'])

# CPU frequency scaling
latency_multiplier = 1.0 / config['cpu_frequency_scale']

# Benchmark all configurations
results = []
for pruning_level, precision in product(config['pruning_levels'], config['precisions']):
    print(f"\nBenchmarking: pruning={pruning_level}, precision={precision}")
    
    # Apply optimizations
    optimized = structured_channel_prune(model, pruning_level)
    
    if precision == 'fp16':
        optimized = to_fp16(optimized)
    elif precision == 'int8':
        optimized = to_int8(optimized, calib_loader, config['calibration_batches'])
    
    optimized = optimized.to(device)
    
    # Collect metrics
    metrics = collect_metrics(
        model=optimized,
        loader=val_loader,
        device=device,
        power_watts=config['power_watts'],
        precision=precision,
        latency_multiplier=latency_multiplier,
        benchmark_repeats=config['benchmark_repeats']
    )
    
    # Check memory budgets
    violations = memory_violations(metrics.memory_mb, config['memory_budgets_mb'])
    fits_active_budget = metrics.memory_mb <= config['active_memory_budget_mb']
    
    # Store results
    results.append({
        'pruning_level': pruning_level,
        'precision': precision,
        'accuracy': metrics.accuracy,
        'latency_ms': metrics.latency_ms,
        'latency_std_ms': metrics.latency_std_ms,
        'latency_p95_ms': metrics.latency_p95_ms,
        'throughput_sps': metrics.throughput_sps,
        'memory_mb': metrics.memory_mb,
        'energy_j': metrics.energy_proxy_j,
        'fits_budget': fits_active_budget,
        **violations
    })
    
    print(f"  Accuracy: {metrics.accuracy:.2%}")
    print(f"  Latency: {metrics.latency_ms:.2f}ms (±{metrics.latency_std_ms:.2f}ms)")
    print(f"  Memory: {metrics.memory_mb:.2f}MB ({'✓' if fits_active_budget else '❌'})")

# Save results
output_dir = Path(config['output_dir'])
output_dir.mkdir(parents=True, exist_ok=True)

df = pd.DataFrame(results)
df.to_csv(output_dir / 'benchmark_results.csv', index=False)

print(f"\n✓ Benchmarking complete. Results saved to {output_dir / 'benchmark_results.csv'}")
print(f"\nTop 5 configurations by throughput:")
print(df.nlargest(5, 'throughput_sps')[['pruning_level', 'precision', 'throughput_sps', 'accuracy', 'memory_mb']])

Next Steps

After benchmarking your optimized models:
  1. Analyze trade-offs: Plot Pareto frontiers for accuracy vs. latency and memory vs. accuracy (see the sketch after this list)
  2. Select optimal configuration: Choose based on your deployment constraints
  3. Validate on device: Test the selected model on actual edge hardware
  4. Deploy: Export and integrate into your production pipeline
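
As a starting point for step 1, here is a minimal sketch of extracting an accuracy-vs-latency Pareto frontier from the saved results. The pareto_frontier helper is illustrative (not part of edge_opt), and the CSV path assumes the output directory used in the pipeline above is outputs/:
import pandas as pd

def pareto_frontier(df: pd.DataFrame, maximize: str, minimize: str) -> pd.DataFrame:
    """Keep only rows not dominated by another row
    (no other configuration is better on both objectives)."""
    ordered = df.sort_values(minimize)        # e.g. lowest latency first
    best = float("-inf")
    keep = []
    for _, row in ordered.iterrows():
        if row[maximize] > best:              # strictly improves accuracy
            keep.append(row)
            best = row[maximize]
    return pd.DataFrame(keep)

df = pd.read_csv("outputs/benchmark_results.csv")     # path is an assumption
frontier = pareto_frontier(df, maximize="accuracy", minimize="latency_ms")
print(frontier[["pruning_level", "precision", "accuracy", "latency_ms", "memory_mb"]])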
See the Configuration guide for tuning benchmark parameters and the Pruning and Quantization guides for optimization techniques.
