Overview
Understanding memory management in .NET is crucial for building high-performance applications. This guide covers the fundamentals of memory allocation, garbage collection, and modern zero-allocation techniques.
Stack vs Heap
The stack stores value types and method frames — allocated and deallocated deterministically in LIFO order. The heap stores reference types managed by the garbage collector.
Stack allocation is essentially free — the stack pointer moves; no GC involvement. Heap allocation requires finding a free memory block, updating GC bookkeeping, and eventual collection.
Key Differences
| Aspect | Stack | Heap |
|---|---|---|
| Allocation | Deterministic, LIFO | GC-managed |
| Performance | Fast (pointer move) | Slower (GC overhead) |
| Types | Value types (int, struct, bool) | Reference types (classes) |
| Lifetime | Scoped to method | Until GC collects |
| Size | ~1MB per thread | Limited by system memory |
Memory Allocation Patterns
- Stack: value types (int, struct, bool), method arguments, local variables
- Heap: reference types (class instances), boxed value types, arrays
- Boxing: value type copied to heap-allocated object — triggers GC pressure
- Stack size: typically 1MB per thread; stack overflow = uncatchable exception
- ref struct (e.g. Span<T>): guaranteed stack-only — cannot escape to heap
- stackalloc: allocates array on stack — fast, no GC, use with Span<T>
Code Examples
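A minimal sketch of two of the patterns above — boxing and stackalloc with Span<T>. Type and method names are illustrative, not from any specific API:

```csharp
using System;

static class AllocationExamples
{
    // Boxing: copying a value type into a heap-allocated object.
    public static int BoxRoundTrip(int value)
    {
        object boxed = value;      // heap allocation + copy (boxing)
        return (int)boxed;         // unboxing copies the value back out
    }

    // stackalloc: a stack-only scratch buffer exposed safely through Span<T>.
    public static int SumFirstN(int n)
    {
        Span<int> buffer = stackalloc int[16];   // no GC involvement
        for (int i = 0; i < n; i++) buffer[i] = i + 1;
        int sum = 0;
        foreach (int x in buffer.Slice(0, n)) sum += x;
        return sum;
    }
}
```

BoxRoundTrip allocates on every call; in a tight loop that is exactly the GC pressure the "Don't" list below warns about, while SumFirstN never touches the heap.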
Best Practices
Do
- Use struct for small, immutable, frequently-copied data without identity
- Use stackalloc + Span<T> for temporary byte buffers in hot paths
- Use generics instead of object to prevent boxing in collections and APIs
Don't
- Use struct for types larger than 16 bytes that are frequently copied
- Ignore boxing in tight loops — it generates GC pressure that causes pauses
- Use stackalloc for large allocations — risk stack overflow
Garbage Collection & Generations
The .NET GC uses generational collection based on the hypothesis that most objects die young. Three generations (0, 1, 2) and the Large Object Heap each have different collection frequencies.
Generation Overview
Generation 0 is the youngest generation, collected most frequently (milliseconds). Most short-lived objects die here.
- Size: ~256KB
- Collection frequency: Many times per second in high-throughput apps
- Objects: Request handlers, DTOs, temporary buffers
Generation Lifecycle
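The lifecycle can be observed with GC.GetGeneration. A small sketch — promotion timing is a GC implementation detail, so the small-object result is indicative rather than guaranteed:

```csharp
using System;

static class GenerationDemo
{
    // Returns the observed generation of a small object after `collections`
    // forced full GCs. Each full GC typically promotes survivors one generation,
    // so the result is typically min(collections, 2), but this is not guaranteed.
    public static int GenerationAfter(int collections)
    {
        var survivor = new byte[1024];         // small object, starts in Gen0
        for (int i = 0; i < collections; i++)
            GC.Collect();                      // full blocking collection
        return GC.GetGeneration(survivor);
    }

    // Objects of 85,000 bytes or more go straight to the Large Object Heap,
    // which the runtime reports as generation 2.
    public static int LargeObjectGeneration() => GC.GetGeneration(new byte[100_000]);
}
```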
Key Considerations
- Gen0: ~256KB; collected many times per second in high-throughput apps
- Gen1: buffer between Gen0 and Gen2 — typically collected about 10× less often than Gen0
- Gen2: full GC — expensive, stop-the-world (background GC reduces pause in .NET 5+)
- LOH (Large Object Heap): objects ≥ 85,000 bytes — no compaction by default
- LOH fragmentation: repeated alloc/free of different-sized large objects fragments it
- GC.Collect(): forces collection — use only in tests or known idle periods, not production hot paths
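Pooling is the usual way to avoid the LOH pitfalls above. A minimal ArrayPool<T> sketch (method name is illustrative):

```csharp
using System;
using System.Buffers;

static class PoolingExample
{
    // Renting from the shared pool avoids allocating a fresh >=85KB array
    // (and therefore a fresh LOH object) on every call.
    public static int ProcessChunk(int size)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(size); // may return a larger array
        try
        {
            // ... fill and process buffer[0..size) here ...
            buffer[0] = 1;
            return buffer.Length;                          // >= size: the pool rounds up
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);         // always return what you rent
        }
    }
}
```

Note that Rent may hand back a larger array than requested, so code must track the logical length itself.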
Best Practices
Do
- Pool large objects (≥85KB) with ArrayPool<T> or MemoryPool<T>
- Use GC.TryStartNoGCRegion() for latency-critical sections that must avoid pauses
- Monitor GC generation counts and promotion rates with dotnet-counters or EventSource
Don't
- Call GC.Collect() in production hot paths (defeats the generational GC hypothesis)
- Allocate large byte arrays in loops without pooling (causes LOH fragmentation)
- Ignore Gen2 collection frequency — it is the primary GC latency signal
IDisposable & Resource Handling
IDisposable provides deterministic, explicit resource cleanup. The using statement guarantees Dispose() is called even when an exception is thrown. Weak references let you reference an object without preventing the GC from collecting it.
The using Statement
The IDisposable pattern is the standard mechanism for deterministic cleanup in .NET. The using statement is syntactic sugar for a try/finally block that calls Dispose().
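Both forms can be sketched side by side (file-based example; paths are illustrative):

```csharp
using System.IO;

static class UsingForms
{
    // Classic using statement: Dispose() runs at the closing brace,
    // even if an exception is thrown inside the block.
    public static long StatementForm(string path)
    {
        using (var stream = File.OpenRead(path))
        {
            return stream.Length;
        } // stream.Dispose() called here
    }

    // using declaration (C# 8+): Dispose() runs at the end of the enclosing scope.
    public static long DeclarationForm(string path)
    {
        using var stream = File.OpenRead(path);
        return stream.Length;
    } // stream.Dispose() called here
}
```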
Key Patterns
- using statement: compiles to try { } finally { obj.Dispose(); }
- using declaration (C# 8+): using var x = ...; — disposes at end of enclosing scope
- IAsyncDisposable + await using: async cleanup (database connections, streams)
- WeakReference<T>: TryGetTarget() returns false if GC has collected the object
- ConditionalWeakTable<TKey,TValue>: GC-aware dictionary that does not root keys
- GC.SuppressFinalize(this): removes object from finalization queue — call in Dispose()
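The patterns above combine into the standard Dispose implementation. A minimal sketch (the class name is illustrative; no finalizer is shown because the type owns no unmanaged resources directly):

```csharp
using System;

public sealed class ResourceHolder : IDisposable
{
    private bool _disposed;

    public void DoWork()
    {
        // Guard every public member: using a disposed object is a bug.
        if (_disposed) throw new ObjectDisposedException(nameof(ResourceHolder));
        // ... use the owned resource here ...
    }

    public void Dispose()
    {
        if (_disposed) return;        // Dispose must be safe to call more than once
        _disposed = true;
        // release owned resources here
        GC.SuppressFinalize(this);    // remove this object from the finalization queue
    }
}
```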
Weak References for Caching
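A minimal weak-reference cache might look like this (type and member names are illustrative):

```csharp
using System;
using System.Collections.Generic;

// Entries can be reclaimed by the GC under memory pressure and are
// lazily recreated on the next lookup.
public sealed class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference<TValue>> _entries = new();
    private readonly Func<TKey, TValue> _factory;

    public WeakCache(Func<TKey, TValue> factory) => _factory = factory;

    public TValue Get(TKey key)
    {
        if (_entries.TryGetValue(key, out var weak) &&
            weak.TryGetTarget(out var cached))
        {
            return cached;                       // still alive: reuse it
        }
        var fresh = _factory(key);               // collected or absent: rebuild
        _entries[key] = new WeakReference<TValue>(fresh);
        return fresh;
    }
}
```

One design caveat: the dictionary itself still grows, so dead WeakReference entries should be pruned periodically in a real implementation.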
Weak references allow caches to hold references to objects without preventing their collection — the GC can collect weakly-referenced objects at any time.
Best Practices
Do
- Always dispose IDisposable objects with using or await using
- Implement IAsyncDisposable for types that own I/O resources
- Use WeakReference<T> for caches that should not prevent GC collection
Don't
- Access disposed objects — always check _disposed flag and throw ObjectDisposedException
- Swallow exceptions thrown during Dispose() — log them
- Forget await using for IAsyncDisposable — the async cleanup path will not run
Span<T> & Memory<T>
Span<T> is a ref struct representing a contiguous sequence of memory — stack, heap, or native. Memory<T> is its heap-compatible counterpart. Both enable slicing without allocation.
Span<T> and Memory<T> enable manipulation of subranges of arrays, strings, and native memory without copying. They are the foundation of high-performance .NET parsing and protocol libraries.
Key Differences
| Feature | Span<T> | Memory<T> |
|---|---|---|
| Storage | Stack only (ref struct) | Heap-compatible |
| Async | Cannot cross await | Can cross await |
| Fields | Cannot be in class fields | Can be stored in fields |
| Performance | Slightly faster | Minimal overhead |
| Use case | Synchronous parsing | Async I/O pipelines |
Characteristics
- Span<T>: ref struct — stack only, no heap escape, no async use
- Memory<T>: heap-compatible — can be stored in fields, used across await
- ReadOnlySpan<char>: parse substrings without String.Substring allocation
- MemoryMarshal: reinterpret Span<byte> as Span<int> or custom struct — zero copy
- ArrayPool<T>: rent/return large arrays and wrap in Span/Memory for parsing
- System.IO.Pipelines: high-performance async I/O built on Memory<byte>
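The MemoryMarshal reinterpretation mentioned above can be sketched as follows (function name is illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

static class ReinterpretExample
{
    // Views a byte span as an int span without copying; the ints are read
    // in the platform's native endianness.
    public static int FirstInt(ReadOnlySpan<byte> bytes)
    {
        ReadOnlySpan<int> ints = MemoryMarshal.Cast<byte, int>(bytes); // zero-copy view
        return ints[0];
    }
}
```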
Zero-Allocation Parsing
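A sketch of allocation-free parsing of a comma-separated integer list with ReadOnlySpan<char> (names are illustrative):

```csharp
using System;

static class SpanParsing
{
    // Sums a comma-separated list of integers without allocating substrings:
    // each field is a slice (a view) into the original string's memory.
    public static int SumCsv(ReadOnlySpan<char> input)
    {
        int sum = 0;
        while (!input.IsEmpty)
        {
            int comma = input.IndexOf(',');
            ReadOnlySpan<char> field = comma >= 0 ? input.Slice(0, comma) : input;
            sum += int.Parse(field);       // int.Parse accepts a span directly
            input = comma >= 0 ? input.Slice(comma + 1) : ReadOnlySpan<char>.Empty;
        }
        return sum;
    }
}
```

The equivalent String.Split / Substring version would allocate one string per field plus the array holding them.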
Best Practices
Do
- Use ReadOnlySpan<char> for string parsing to eliminate Substring allocations
- Use Memory<T> when you need to store a slice across an async boundary
- Benchmark with BenchmarkDotNet to measure the allocation reduction
Don't
- Use Span<T> across await boundaries — as a ref struct it cannot be stored in the heap-allocated async state machine
- Slice beyond the original buffer bounds — it will throw at runtime
- Return a Span<T> that points to a stackalloc buffer — the stack frame will be gone
Performance Comparison
Substring vs Span Slicing Benchmark
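A BenchmarkDotNet sketch of this comparison — it assumes the BenchmarkDotNet NuGet package; the sample string and names are illustrative:

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]   // reports allocated bytes per operation
public class SliceBenchmark
{
    private const string Line = "2024-01-15T10:30:00Z|INFO|Request handled";

    [Benchmark(Baseline = true)]
    public char WithSubstring() => Line.Substring(11, 4)[0];  // allocates a temporary string

    [Benchmark]
    public char WithSpan() => Line.AsSpan(11, 4)[0];          // slices in place, no allocation

    public static void Main() => BenchmarkRunner.Run<SliceBenchmark>();
}
```

With MemoryDiagnoser enabled, the Substring row should show a nonzero Allocated column while the Span row shows none, which is the allocation reduction the section describes.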