MemoryAnalyzer
Advanced TensorFlow memory analysis for detecting leaks, fragmentation, and optimization opportunities.
Constructor
MemoryAnalyzer(sensitivity: float = 0.05)
Analysis sensitivity threshold (0.0-1.0). Lower values detect smaller anomalies.
Methods
detect_memory_leaks
Detect potential memory leaks using statistical analysis.
def detect_memory_leaks(self, tracking_results: TrackingResult) -> List[Dict[str, Any]]
Results from MemoryTracker
List of detected leaks with type, severity, and description
Leak type: 'monotonic_increase', 'memory_spikes', or 'insufficient_cleanup'
Severity level: 'high', 'medium', or 'low'
Human-readable description
Example:
analyzer = MemoryAnalyzer(sensitivity=0.05)
leaks = analyzer.detect_memory_leaks(tracking_results)
for leak in leaks:
    print(f"{leak['severity'].upper()}: {leak['description']}")
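To make the "statistical analysis" behind a 'monotonic_increase' leak concrete, here is a minimal standalone sketch, not the library's actual implementation: fit a least-squares line to sampled memory usage and flag a leak when the relative slope exceeds the sensitivity threshold. The function name and sample data are hypothetical.

```python
from typing import Any, Dict, List

def find_monotonic_increase(samples_mb: List[float],
                            sensitivity: float = 0.05) -> List[Dict[str, Any]]:
    """Hypothetical sketch: flag a leak when memory grows steadily.

    Computes a least-squares slope over the sample index and compares it,
    as a fraction of mean usage, against `sensitivity`.
    """
    n = len(samples_mb)
    if n < 3:
        return []
    mean_x = (n - 1) / 2
    mean_y = sum(samples_mb) / n
    # Least-squares slope: growth in MB per sample
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples_mb))
    den = sum((i - mean_x) ** 2 for i in range(n))
    slope = num / den
    if mean_y > 0 and slope / mean_y > sensitivity:
        return [{
            "type": "monotonic_increase",
            "severity": "high" if slope / mean_y > 2 * sensitivity else "medium",
            "description": f"Memory grows ~{slope:.1f} MB per sample",
        }]
    return []

# Steadily growing usage trips the detector; flat usage does not.
leaks = find_monotonic_increase([100, 120, 141, 160, 182, 201])
```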
analyze_fragmentation
Analyze memory fragmentation patterns.
def analyze_fragmentation(self, profile_result: ProfileResult) -> Dict[str, float]
Results from TFMemoryProfiler
Overall fragmentation score (0.0-1.0)
Fragmentation trend (positive = increasing)
Maximum fragmentation observed
Minimum fragmentation observed
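As a conceptual sketch of what a 0.0-1.0 fragmentation score can mean (the library's internal formula is not shown here, so this is purely illustrative): one minus the largest contiguous free block over total free memory.

```python
def fragmentation_score(free_blocks_mb: list) -> float:
    """Hypothetical formula: 1 - largest_free_block / total_free.

    0.0 means one contiguous free region; values approaching 1.0 mean
    free memory is scattered across many small blocks.
    """
    total = sum(free_blocks_mb)
    if total == 0:
        return 0.0
    return 1.0 - max(free_blocks_mb) / total

# One large block dominating free memory gives a low score (~0.18 here).
score = fragmentation_score([512, 64, 32, 16])
```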
detect_patterns
Detect memory usage patterns.
def detect_patterns(self, tracking_results: TrackingResult) -> List[Dict[str, Any]]
Results from MemoryTracker
Detected patterns including 'periodic_pattern' and 'step_pattern' types
Example:
patterns = analyzer.detect_patterns(tracking_results)
for pattern in patterns:
    print(f"Pattern: {pattern['type']} - {pattern['description']}")
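A 'step_pattern' (a sudden, sustained jump in usage, as when a large buffer is allocated) can be sketched by scanning sample-to-sample deltas for jumps well above the typical change. This is an illustrative stand-in, not tfmemprof's detector; the z-score threshold is an assumption.

```python
from statistics import mean, stdev
from typing import Any, Dict, List

def find_step_patterns(samples_mb: List[float],
                       zscore: float = 2.0) -> List[Dict[str, Any]]:
    """Flag deltas more than `zscore` standard deviations above the mean delta."""
    deltas = [b - a for a, b in zip(samples_mb, samples_mb[1:])]
    if len(deltas) < 2:
        return []
    mu, sigma = mean(deltas), stdev(deltas)
    patterns = []
    for i, d in enumerate(deltas):
        if sigma > 0 and (d - mu) / sigma > zscore:
            patterns.append({
                "type": "step_pattern",
                "description": f"Jump of {d:.0f} MB at sample {i + 1}",
            })
    return patterns

# A flat series with one ~400 MB jump yields a single step pattern.
steps = find_step_patterns([100, 101, 100, 101, 500, 501, 500, 501, 500])
```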
analyze_efficiency
Analyze memory usage efficiency.
def analyze_efficiency(self, profile_result: ProfileResult) -> float
Results from TFMemoryProfiler
Efficiency score from 0-10 (higher is better)
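One common way to express memory efficiency on a 0-10 scale, used here purely as an illustration (the library's actual scoring may differ), is the ratio of average to peak usage: stable usage close to the peak wastes little reserved memory.

```python
def efficiency_score(samples_mb: list) -> float:
    """Hypothetical metric: scale average/peak utilization to 0-10."""
    if not samples_mb or max(samples_mb) == 0:
        return 0.0
    return 10.0 * (sum(samples_mb) / len(samples_mb)) / max(samples_mb)

steady = efficiency_score([900, 920, 910, 915])   # high: usage hugs the peak
spiky = efficiency_score([100, 100, 100, 1000])   # low: peak far above average
```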
correlate_with_performance
Correlate memory usage with performance metrics.
def correlate_with_performance(self, profile_result: ProfileResult) -> Dict[str, Any]
Results from TFMemoryProfiler
memory_duration_correlation
Correlation between memory and duration
Per-function efficiency metrics
Performance optimization recommendations
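The memory_duration_correlation value can be understood as a Pearson correlation between per-function memory use and execution time. A self-contained sketch with made-up numbers (not tfmemprof's implementation): values near +1.0 suggest the biggest allocators also dominate runtime, which is where optimization effort pays off first.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

# Hypothetical per-function samples: peak memory (MB) and duration (ms)
memory_mb = [120, 340, 560, 780, 990]
duration_ms = [11, 28, 52, 70, 95]

r = pearson(memory_mb, duration_ms)  # strongly positive for this data
```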
analyze_memory_gaps
Classify allocator-vs-device hidden memory gaps over time.
def analyze_memory_gaps(self, events: List[TelemetryEventV2]) -> List[GapFinding]
Chronologically ordered telemetry samples
Prioritized list of gap findings sorted by severity and confidence
Example:
events = tracker.get_tracking_results().events
gaps = analyzer.analyze_memory_gaps(events)
for gap in gaps:
    print(f"Severity: {gap.severity}")
    print(f"Classification: {gap.classification}")
    print(f"Description: {gap.description}")
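A "hidden gap" here is device-reported memory that the allocator cannot account for. The following toy classifier shows the idea; the Sample class and its allocator_mb/device_mb fields are hypothetical stand-ins for TelemetryEventV2 attributes, and the threshold mirrors the 'gap_ratio_threshold' setting described under Configuration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Sample:  # hypothetical stand-in for TelemetryEventV2
    allocator_mb: float  # memory the allocator accounts for
    device_mb: float     # total memory the device reports in use

def classify_gaps(samples: List[Sample],
                  gap_ratio_threshold: float = 0.05) -> List[Dict]:
    """Flag samples where the device-vs-allocator gap exceeds the threshold."""
    findings = []
    for i, s in enumerate(samples):
        if s.device_mb <= 0:
            continue
        ratio = (s.device_mb - s.allocator_mb) / s.device_mb
        if ratio > gap_ratio_threshold:
            findings.append({
                "index": i,
                "severity": "high" if ratio > 2 * gap_ratio_threshold else "medium",
                "description": f"{ratio:.0%} of device memory unaccounted for",
            })
    # Prioritize the largest gaps first
    findings.sort(key=lambda f: f["severity"] == "high", reverse=True)
    return findings

# Only the middle sample (30% gap) exceeds the 5% threshold.
gaps = classify_gaps([Sample(950, 1000), Sample(700, 1000), Sample(990, 1000)])
```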
score_optimization
Score optimization opportunities across multiple dimensions.
def score_optimization(
    self,
    profile_result: ProfileResult,
    events: Optional[List[TelemetryEventV2]] = None
) -> Dict[str, Any]
TensorFlow profiling result object
Optional telemetry event series for gap analysis. When provided, result includes gap_analysis section
Overall optimization score (0-10)
Category-specific scores (memory_efficiency, fragmentation, performance)
Top optimization suggestions
Hidden memory gap findings (when events provided)
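Scoring "across multiple dimensions" typically means combining per-category scores into a weighted overall 0-10 value. The category names below come from this section; the weights are invented for illustration and are not tfmemprof's.

```python
from typing import Dict, Optional

def overall_score(categories: Dict[str, float],
                  weights: Optional[Dict[str, float]] = None) -> float:
    """Weighted mean of per-category 0-10 scores (weights are hypothetical)."""
    weights = weights or {
        "memory_efficiency": 0.4,
        "fragmentation": 0.3,
        "performance": 0.3,
    }
    total = sum(weights.get(k, 0.0) for k in categories)
    if total == 0:
        return 0.0
    return sum(s * weights.get(cat, 0.0) for cat, s in categories.items()) / total

score = overall_score({
    "memory_efficiency": 8.0,
    "fragmentation": 6.0,
    "performance": 7.0,
})  # 0.4*8 + 0.3*6 + 0.3*7 = 7.1
```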
Configuration
The analyzer uses configurable thresholds for gap analysis:
analyzer.thresholds = {
    'gap_ratio_threshold': 0.05,      # 5% of device total
    'gap_spike_zscore': 2.0,          # z-score for spike detection
    'gap_drift_r_squared': 0.6,       # R-squared for drift
    'gap_fragmentation_ratio': 0.3,   # fragmentation threshold
}
Example
from tfmemprof.analyzer import MemoryAnalyzer
from tfmemprof.profiler import TFMemoryProfiler
from tfmemprof.tracker import MemoryTracker
# Run profiling and tracking
profiler = TFMemoryProfiler()
tracker = MemoryTracker(sampling_interval=0.5)
tracker.start_tracking()
with profiler.profile_context("training"):
    model.fit(x_train, y_train, epochs=5)
tracking_results = tracker.stop_tracking()
profile_results = profiler.get_results()
# Analyze results
analyzer = MemoryAnalyzer(sensitivity=0.05)
# Detect memory leaks
leaks = analyzer.detect_memory_leaks(tracking_results)
if leaks:
    print("\nMemory Leaks Detected:")
    for leak in leaks:
        print(f"  [{leak['severity']}] {leak['description']}")
# Analyze fragmentation
frag = analyzer.analyze_fragmentation(profile_results)
print(f"\nFragmentation Score: {frag['fragmentation_score']:.3f}")
print(f"Trend: {'increasing' if frag['trend'] > 0 else 'decreasing'}")
# Detect patterns
patterns = analyzer.detect_patterns(tracking_results)
for pattern in patterns:
    print(f"\nPattern: {pattern['type']}")
    print(f"  {pattern['description']}")
# Get efficiency score
efficiency = analyzer.analyze_efficiency(profile_results)
print(f"\nMemory Efficiency: {efficiency:.1f}/10")
# Performance correlation
perf = analyzer.correlate_with_performance(profile_results)
print("\nPerformance Recommendations:")
for rec in perf['recommendations']:
    print(f"  - {rec}")
# Comprehensive optimization scoring
events = tracking_results.events
opt_score = analyzer.score_optimization(profile_results, events=events)
print(f"\nOptimization Score: {opt_score['overall_score']:.1f}/10")
print("\nCategory Scores:")
for cat, score in opt_score['categories'].items():
    print(f"  {cat}: {score:.1f}/10")
print("\nPriority Actions:")
for action in opt_score['priority_actions']:
    print(f"  - {action}")
if 'gap_analysis' in opt_score:
    print("\nHidden Memory Gaps:")
    for gap in opt_score['gap_analysis']:
        print(f"  [{gap['severity']}] {gap['description']}")