
Overview

The reporting module provides functions to build comprehensive summaries of optimization studies and write results to disk in structured formats (CSV and JSON).

Core Functions

build_summary

Builds a comprehensive summary dictionary from optimization study results.
def build_summary(
    baseline: dict[str, Any],
    memory_budgets_mb: list[float],
    active_memory_budget_mb: float,
    cpu_frequency_scale: float,
    latency_multiplier: float,
    sweep_df: pd.DataFrame,
    deployment: dict[str, float],
) -> dict[str, Any]
Parameters:
  • baseline (dict[str, Any], required): Baseline model metrics before optimization (typically contains accuracy, latency, memory, etc.)
  • memory_budgets_mb (list[float], required): List of all memory budget constraints evaluated during the study
  • active_memory_budget_mb (float, required): The active/selected memory budget constraint used to filter configurations
  • cpu_frequency_scale (float, required): CPU frequency scaling factor applied during evaluation
  • latency_multiplier (float, required): Latency adjustment multiplier used in the study
  • sweep_df (pd.DataFrame, required): DataFrame of all sweep configurations, with an accepted column indicating which configurations meet the constraints
  • deployment (dict[str, float], required): Deployment simulation results from deployment_simulation
Returns (dict[str, Any]): a summary dictionary containing:
  • baseline: Original baseline metrics
  • memory_budgets_mb: List of evaluated budgets
  • active_memory_budget_mb: Active budget constraint
  • cpu_frequency_scale: CPU scaling factor
  • latency_multiplier: Applied latency multiplier
  • study_rows: Total number of configurations evaluated
  • accepted_rows: Number of configurations meeting all constraints
  • rejected_rows: Number of configurations rejected
  • best_accuracy_accepted: Highest accuracy among accepted configurations (None if no accepted configs)
  • lowest_latency_ms_accepted: Lowest latency among accepted configurations (None if no accepted configs)
  • deployment: Deployment simulation metrics
from edge_opt.reporting import build_summary
import pandas as pd

baseline_metrics = {
    "accuracy": 0.985,
    "latency_ms": 45.2,
    "memory_mb": 8.5
}

deployment_results = {
    "cpu_frequency_scale": 0.5,
    "batch_latency_ms": 90.4,
    "stream_avg_latency_ms": 2.8
}

# sweep_df has columns: precision, accuracy, latency_ms, memory_mb, accepted
summary = build_summary(
    baseline=baseline_metrics,
    memory_budgets_mb=[2.0, 4.0, 8.0],
    active_memory_budget_mb=4.0,
    cpu_frequency_scale=0.5,
    latency_multiplier=2.0,
    sweep_df=sweep_results_df,
    deployment=deployment_results
)

print(f"Evaluated {summary['study_rows']} configurations")
print(f"Accepted: {summary['accepted_rows']}, Rejected: {summary['rejected_rows']}")
print(f"Best accuracy: {summary['best_accuracy_accepted']:.4f}")
print(f"Lowest latency: {summary['lowest_latency_ms_accepted']:.2f} ms")
The function filters the sweep DataFrame to identify accepted configurations and computes statistics only on those that meet all constraints. If no configurations are accepted, best_accuracy_accepted and lowest_latency_ms_accepted will be None.
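The filtering and aggregation described above can be sketched in a few lines (a minimal illustration, not the library's internals; the column names match the sweep_df schema used in this example):

```python
import pandas as pd

# Toy sweep results with the 'accepted' flag described above
sweep_df = pd.DataFrame({
    "precision": ["fp32", "fp16", "int8"],
    "accuracy": [0.985, 0.983, 0.961],
    "latency_ms": [45.2, 28.7, 22.5],
    "accepted": [False, True, True],
})

accepted = sweep_df[sweep_df["accepted"]]

# Statistics are computed only over accepted rows; None when nothing passes
best_accuracy = float(accepted["accuracy"].max()) if not accepted.empty else None
lowest_latency = float(accepted["latency_ms"].min()) if not accepted.empty else None
```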

write_outputs

Writes optimization study results to disk in structured formats.
def write_outputs(
    output_dir: Path,
    sweep_df: pd.DataFrame,
    latency_frontier: pd.DataFrame,
    energy_frontier: pd.DataFrame,
    summary: dict[str, Any],
) -> None
Parameters:
  • output_dir (Path, required): Directory path where output files will be written (created if it doesn't exist)
  • sweep_df (pd.DataFrame, required): Complete sweep results DataFrame with all configurations evaluated
  • latency_frontier (pd.DataFrame, required): Pareto frontier DataFrame for the latency-accuracy tradeoff
  • energy_frontier (pd.DataFrame, required): Pareto frontier DataFrame for the energy-accuracy tradeoff
  • summary (dict[str, Any], required): Summary dictionary from build_summary
Files created:
  • sweep_results.csv: Complete sweep data with all configurations
  • pareto_frontier_latency.csv: Pareto-optimal configurations for latency vs. accuracy
  • pareto_frontier_energy.csv: Pareto-optimal configurations for energy vs. accuracy
  • summary.json: JSON file with study summary and key metrics
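A minimal sketch of the write step, using only standard pandas.to_csv and json.dumps calls (the library's actual implementation may differ):

```python
import json
from pathlib import Path
import pandas as pd

output_dir = Path("./demo_results")
output_dir.mkdir(parents=True, exist_ok=True)  # created if it doesn't exist

sweep_df = pd.DataFrame({"precision": ["fp32"], "accuracy": [0.985]})
summary = {"study_rows": 1, "accepted_rows": 1}

# CSVs without the row index; JSON with 2-space indentation
sweep_df.to_csv(output_dir / "sweep_results.csv", index=False, encoding="utf-8")
(output_dir / "summary.json").write_text(json.dumps(summary, indent=2), encoding="utf-8")
```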
from pathlib import Path
from edge_opt.reporting import build_summary, write_outputs
import pandas as pd

# Build comprehensive summary
summary = build_summary(
    baseline=baseline_metrics,
    memory_budgets_mb=[2.0, 4.0, 8.0],
    active_memory_budget_mb=4.0,
    cpu_frequency_scale=0.5,
    latency_multiplier=2.0,
    sweep_df=all_results_df,
    deployment=deployment_sim_results
)

# Write all outputs
write_outputs(
    output_dir=Path("./optimization_results"),
    sweep_df=all_results_df,
    latency_frontier=latency_pareto_df,
    energy_frontier=energy_pareto_df,
    summary=summary
)

print("Results saved to ./optimization_results/")

Output File Structure

sweep_results.csv

Contains all evaluated configurations with columns depending on the sweep parameters. Typical columns include:
  • precision: Numeric precision (e.g., fp32, fp16)
  • accuracy: Model accuracy
  • latency_ms: Inference latency in milliseconds
  • memory_mb: Memory footprint in megabytes
  • energy_proxy_j: Energy consumption estimate in joules
  • accepted: Boolean indicating if configuration meets constraints
  • Additional custom columns from the sweep

pareto_frontier_latency.csv

Contains the Pareto-optimal configurations for the latency-accuracy tradeoff. Each row is a configuration that no other configuration dominates, i.e. no other configuration is simultaneously faster and more accurate.
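The dominance rule can be made concrete with a small helper (a sketch for intuition; edge_opt computes its frontiers internally):

```python
import pandas as pd

def pareto_frontier(df: pd.DataFrame, cost_col: str, score_col: str) -> pd.DataFrame:
    # Sort by increasing cost; keep a row only if its score beats every cheaper row
    ordered = df.sort_values(cost_col)
    best_score = float("-inf")
    keep = []
    for idx, row in ordered.iterrows():
        if row[score_col] > best_score:
            best_score = row[score_col]
            keep.append(idx)
    return df.loc[keep]

configs = pd.DataFrame({
    "latency_ms": [10.0, 15.0, 20.0, 30.0],
    "accuracy":   [0.90, 0.92, 0.95, 0.94],
})
frontier = pareto_frontier(configs, "latency_ms", "accuracy")
# The 30 ms config is excluded: the 20 ms one is both faster and more accurate
```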

pareto_frontier_energy.csv

Contains Pareto-optimal configurations for energy efficiency vs. accuracy tradeoff.

summary.json

JSON file with the complete summary structure from build_summary. Example structure:
{
  "baseline": {
    "accuracy": 0.985,
    "latency_ms": 45.2,
    "memory_mb": 8.5
  },
  "memory_budgets_mb": [2.0, 4.0, 8.0],
  "active_memory_budget_mb": 4.0,
  "cpu_frequency_scale": 0.5,
  "latency_multiplier": 2.0,
  "study_rows": 120,
  "accepted_rows": 45,
  "rejected_rows": 75,
  "best_accuracy_accepted": 0.983,
  "lowest_latency_ms_accepted": 22.5,
  "deployment": {
    "cpu_frequency_scale": 0.5,
    "batch_latency_ms": 90.4,
    "stream_avg_latency_ms": 2.8,
    "batch_throughput_sps": 354.6,
    "stream_throughput_sps": 357.1
  }
}
All CSV files are written with UTF-8 encoding and without row indices. The summary JSON is formatted with 2-space indentation for readability.
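These conventions mean the files round-trip cleanly; a quick sketch of both:

```python
import io
import json
import pandas as pd

df = pd.DataFrame({"precision": ["fp32", "fp16"], "accuracy": [0.985, 0.983]})

# index=False matches how the CSVs are written: no row-index column
buf = io.StringIO()
df.to_csv(buf, index=False)
restored = pd.read_csv(io.StringIO(buf.getvalue()))

# indent=2 matches the summary.json formatting
summary_text = json.dumps({"study_rows": 2}, indent=2)
```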

Best Practices

A typical end-to-end workflow collects a baseline, sweeps configurations, simulates deployment, then summarizes and persists the results:

from pathlib import Path
from edge_opt.metrics import collect_metrics
from edge_opt.deploy import deployment_simulation
from edge_opt.reporting import build_summary, write_outputs
import pandas as pd

# 1. Collect baseline metrics
baseline = collect_metrics(
    model=baseline_model,
    loader=test_loader,
    device=device,
    power_watts=5.0,
    precision="fp32"
)

# 2. Run parameter sweep (example)
sweep_results = []
for precision in ["fp32", "fp16"]:
    metrics = collect_metrics(
        model=model,
        loader=test_loader,
        device=device,
        power_watts=5.0,
        precision=precision
    )
    sweep_results.append({
        "precision": precision,
        "accuracy": metrics.accuracy,
        "latency_ms": metrics.latency_ms,
        "memory_mb": metrics.memory_mb,
        "energy_proxy_j": metrics.energy_proxy_j,
        "accepted": metrics.memory_mb <= 4.0
    })

sweep_df = pd.DataFrame(sweep_results)

# 3. Run deployment simulation
deployment_results = deployment_simulation(
    model=best_model,
    loader=test_loader,
    cpu_frequency_scale=0.5
)

# 4. Build summary and write outputs
summary = build_summary(
    baseline=vars(baseline),
    memory_budgets_mb=[2.0, 4.0, 8.0],
    active_memory_budget_mb=4.0,
    cpu_frequency_scale=0.5,
    latency_multiplier=2.0,
    sweep_df=sweep_df,
    deployment=deployment_results
)

write_outputs(
    output_dir=Path("./results"),
    sweep_df=sweep_df,
    latency_frontier=sweep_df[sweep_df["accepted"]],  # simplified; use a true Pareto filter in practice
    energy_frontier=sweep_df[sweep_df["accepted"]],
    summary=summary
)
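Because best_accuracy_accepted and lowest_latency_ms_accepted are None when no configuration passes, guard before formatting them (a small illustrative snippet):

```python
# Simulate a study in which nothing met the memory budget
summary = {"accepted_rows": 0, "best_accuracy_accepted": None}

best = summary["best_accuracy_accepted"]
msg = f"Best accuracy: {best:.4f}" if best is not None else "No accepted configurations"
```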
