The analyzer module provides advanced pattern detection and performance insights from profiling data.

Classes

MemoryPattern

Represents a detected memory usage pattern.
from gpumemprof.analyzer import MemoryPattern

Attributes

• pattern_type (str): Type of pattern: 'memory_leak', 'fragmentation', 'inefficient_allocation', 'memory_spikes', 'repeated_allocations'
• description (str): Human-readable description of the pattern
• severity (str): Severity level: 'info', 'warning', 'critical'
• affected_functions (List[str]): Function names affected by this pattern
• metrics (Dict[str, Any]): Quantitative metrics about the pattern
• suggestions (List[str]): Actionable suggestions to address the issue
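As a quick illustration of how these fields compose, here is a minimal sketch that orders patterns by severity. The `MemoryPattern` dataclass below is a stand-in mirroring the documented fields, not the real class from `gpumemprof.analyzer`:

```python
from dataclasses import dataclass, field

# Stand-in dataclass mirroring the documented MemoryPattern fields.
@dataclass
class MemoryPattern:
    pattern_type: str
    description: str
    severity: str
    affected_functions: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)
    suggestions: list = field(default_factory=list)

# Map the documented severity levels to sort ranks (most severe first).
SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

def sort_by_severity(patterns):
    """Order patterns so the most severe come first."""
    return sorted(patterns, key=lambda p: SEVERITY_RANK[p.severity])

patterns = [
    MemoryPattern("memory_spikes", "Spiky allocations", "info"),
    MemoryPattern("memory_leak", "Steady memory growth", "critical"),
]
print([p.pattern_type for p in sort_by_severity(patterns)])
# → ['memory_leak', 'memory_spikes']
```

The same ranking works unchanged on real `MemoryPattern` objects, since only the `severity` attribute is read.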

PerformanceInsight

Performance insight derived from profiling data.
from gpumemprof.analyzer import PerformanceInsight

Attributes

• category (str): Insight category: 'execution_time', 'memory_efficiency', 'correlation', 'temporal'
• title (str): Insight title
• description (str): Detailed description
• impact (str): Impact level: 'low', 'medium', 'high'
• confidence (float): Confidence score from 0.0 to 1.0
• data (Dict[str, Any]): Supporting data for the insight
• recommendations (List[str]): Recommended actions
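The `impact` and `confidence` fields are convenient for triage. A minimal sketch, using plain dicts in place of `PerformanceInsight` objects (titles and values here are illustrative):

```python
# Plain dicts standing in for PerformanceInsight objects.
insights = [
    {"title": "Slow forward pass", "impact": "high", "confidence": 0.92},
    {"title": "Minor variance", "impact": "low", "confidence": 0.40},
    {"title": "Memory growth", "impact": "high", "confidence": 0.55},
]

def actionable(insights, min_confidence=0.7):
    """Keep only high-impact insights we are reasonably confident about."""
    return [i for i in insights
            if i["impact"] == "high" and i["confidence"] >= min_confidence]

print([i["title"] for i in actionable(insights)])
# → ['Slow forward pass']
```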

MemoryAnalyzer

Advanced analyzer for memory profiling data.
from gpumemprof import GPUMemoryProfiler
from gpumemprof.analyzer import MemoryAnalyzer

profiler = GPUMemoryProfiler()
# ... run profiling ...

analyzer = MemoryAnalyzer(profiler)

Constructor

• profiler (Optional[GPUMemoryProfiler], default: None): GPUMemoryProfiler instance to analyze

Methods

analyze_memory_patterns()
Detect memory usage patterns in profiling data.
patterns = analyzer.analyze_memory_patterns()

for pattern in patterns:
    print(f"{pattern.severity.upper()}: {pattern.description}")
    print(f"Affected functions: {', '.join(pattern.affected_functions)}")
    print("Suggestions:")
    for suggestion in pattern.suggestions:
        print(f"  - {suggestion}")
Parameters
• results (Optional[List[ProfileResult]], default: None): List of ProfileResults to analyze (uses profiler.results if None)

Returns
List[MemoryPattern]: Detected memory patterns including:
  • Memory leaks
  • Fragmentation issues
  • Inefficient allocations
  • Memory spikes
  • Repeated allocations
generate_performance_insights()
Generate performance insights from profiling data.
insights = analyzer.generate_performance_insights()

for insight in insights:
    print(f"[{insight.impact.upper()}] {insight.title}")
    print(f"Confidence: {insight.confidence:.1%}")
    print(f"{insight.description}")
    print("Recommendations:")
    for rec in insight.recommendations:
        print(f"  - {rec}")
Parameters
• results (Optional[List[ProfileResult]], default: None): List of ProfileResults to analyze

Returns
List[PerformanceInsight]: Performance insights including:
  • Slow function detection
  • Execution time variance analysis
  • Memory-intensive functions
  • Time-memory correlations
  • Memory growth trends
analyze_memory_gaps()
Classify hidden gaps between allocator-reported and device-reported memory over time.
from gpumemprof.telemetry import load_telemetry_events

events = load_telemetry_events("memory_events.json")
gap_findings = analyzer.analyze_memory_gaps(events)

for finding in gap_findings:
    print(f"{finding.severity}: {finding.classification}")
    print(f"Description: {finding.description}")
    for remedy in finding.remediation:
        print(f"  - {remedy}")
Parameters
• events (List[TelemetryEventV2]): Chronologically ordered telemetry samples

Returns
List[GapFinding]: Prioritized list of gap findings (ordered by severity and confidence)
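To see which gap type dominates a run, findings can be tallied by classification. A sketch using plain dicts in place of `GapFinding` objects; the classification labels here ('drift', 'spike') are illustrative, not necessarily the library's exact vocabulary:

```python
from collections import Counter

# Plain dicts standing in for GapFinding objects (labels illustrative).
findings = [
    {"classification": "drift", "severity": "warning"},
    {"classification": "spike", "severity": "critical"},
    {"classification": "drift", "severity": "info"},
]

# Tally findings per classification to see which gap type dominates.
by_type = Counter(f["classification"] for f in findings)
print(by_type.most_common(1))
# → [('drift', 2)]
```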
generate_optimization_report()
Generate a comprehensive optimization report.
report = analyzer.generate_optimization_report()

print(f"Optimization Score: {report['optimization_score']['score']}")
print(f"Grade: {report['optimization_score']['grade']}")
print(f"\nCritical Issues: {len(report['critical_issues'])}")
print(f"High Impact Insights: {len(report['high_impact_insights'])}")

print("\nTop Recommendations:")
for rec in report['recommendations'][:5]:
    print(f"  [{rec['priority']}] {rec['description']}")
Parameters
• results (Optional[List[ProfileResult]], default: None): List of ProfileResults to analyze
• events (Optional[List[TelemetryEventV2]], default: None): Optional telemetry events for gap analysis

Returns
Dict[str, Any]: Comprehensive report including:
  • summary: Overall statistics
  • critical_issues: Critical memory patterns
  • high_impact_insights: High-impact performance insights
  • all_patterns: All detected patterns
  • all_insights: All generated insights
  • recommendations: Prioritized recommendations
  • optimization_score: Score, grade, and description
  • gap_analysis: Hidden-memory gap findings (when events are provided)
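Because the report is a plain dict, it serializes directly for comparison across runs. A sketch with made-up values that follow the documented keys; in practice the dict would come from `analyzer.generate_optimization_report()`:

```python
import json

# Illustrative report dict using the documented keys (values are made up).
report = {
    "summary": {"functions_profiled": 12},
    "critical_issues": [],
    "high_impact_insights": [],
    "recommendations": [
        {"priority": "high", "description": "Free intermediate tensors"},
    ],
    "optimization_score": {"score": 82, "grade": "B", "description": "Good"},
}

# Persist the report so a later run's report can be diffed against it.
with open("optimization_report.json", "w") as f:
    json.dump(report, f, indent=2)
```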

Configuration

Analysis thresholds can be customized:
analyzer.thresholds['memory_leak_ratio'] = 0.15  # 15% growth threshold
analyzer.thresholds['fragmentation_ratio'] = 0.25  # 25% fragmentation
analyzer.thresholds['gap_ratio_threshold'] = 0.1  # 10% gap threshold
Available thresholds:
  • memory_leak_ratio: Memory growth indicating potential leak (default: 0.1)
  • fragmentation_ratio: Fragmentation concern threshold (default: 0.3)
  • inefficient_allocation_ratio: Allocation waste threshold (default: 0.5)
  • slow_function_percentile: Top percentile for slow functions (default: 0.9)
  • high_memory_percentile: Top percentile for memory-heavy (default: 0.9)
  • min_calls_for_analysis: Minimum calls to analyze (default: 3)
  • gap_ratio_threshold: Gap size threshold (default: 0.05)
  • gap_spike_zscore: Z-score for spike detection (default: 2.0)
  • gap_drift_r_squared: R-squared for drift (default: 0.6)
  • gap_fragmentation_ratio: Reserved-allocated ratio (default: 0.3)
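Since `thresholds` is a plain dict, a tuning profile can be prepared once and applied in one step. The defaults below are copied from the list above; the stricter override values are illustrative:

```python
# Default thresholds as documented above, collected into a plain dict.
DEFAULT_THRESHOLDS = {
    "memory_leak_ratio": 0.1,
    "fragmentation_ratio": 0.3,
    "inefficient_allocation_ratio": 0.5,
    "slow_function_percentile": 0.9,
    "high_memory_percentile": 0.9,
    "min_calls_for_analysis": 3,
    "gap_ratio_threshold": 0.05,
    "gap_spike_zscore": 2.0,
    "gap_drift_r_squared": 0.6,
    "gap_fragmentation_ratio": 0.3,
}

# A stricter profile, e.g. for CI runs: flag leaks and hidden gaps earlier.
strict = {**DEFAULT_THRESHOLDS,
          "memory_leak_ratio": 0.05,
          "gap_ratio_threshold": 0.02}

# analyzer.thresholds.update(strict)  # apply to a real MemoryAnalyzer
print(strict["memory_leak_ratio"])
# → 0.05
```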

Example Usage

import torch
from gpumemprof import GPUMemoryProfiler
from gpumemprof.analyzer import MemoryAnalyzer

# Run profiling (model, criterion, and dataloader are assumed to be defined elsewhere)
profiler = GPUMemoryProfiler(device="cuda:0")

@profile_function  # gpumemprof's function-level profiling decorator
def train_model(model, data):
    output = model(data)
    loss = criterion(output)
    loss.backward()
    return loss

for epoch in range(10):
    for batch in dataloader:
        train_model(model, batch)

# Analyze results
analyzer = MemoryAnalyzer(profiler)

# Detect patterns
patterns = analyzer.analyze_memory_patterns()
print(f"Found {len(patterns)} memory patterns")

for pattern in patterns:
    if pattern.severity == 'critical':
        print(f"CRITICAL: {pattern.description}")
        print(f"Affected: {pattern.affected_functions}")
        print("Actions:")
        for suggestion in pattern.suggestions:
            print(f"  - {suggestion}")

# Generate insights
insights = analyzer.generate_performance_insights()

for insight in insights:
    if insight.impact == 'high':
        print(f"\nHigh Impact: {insight.title}")
        print(f"Confidence: {insight.confidence:.0%}")
        print(insight.description)

# Full optimization report
report = analyzer.generate_optimization_report()

print(f"\nOptimization Score: {report['optimization_score']['score']}/100")
print(f"Grade: {report['optimization_score']['grade']}")
print(f"Description: {report['optimization_score']['description']}")

print("\nTop 3 Recommendations:")
for i, rec in enumerate(report['recommendations'][:3], 1):
    print(f"{i}. [{rec['priority']}] {rec['description']}")
    for suggestion in rec['suggestions'][:2]:
        print(f"   - {suggestion}")

# Gap analysis with telemetry events
from gpumemprof.telemetry import load_telemetry_events

events = load_telemetry_events("memory_telemetry.json")
gap_report = analyzer.generate_optimization_report(events=events)

if 'gap_analysis' in gap_report:
    print("\nHidden Memory Gap Analysis:")
    for finding in gap_report['gap_analysis']:
        print(f"  - {finding['classification']}: {finding['description']}")
