WinFsp-MemFs-Extended achieves significantly better performance than the original WinFsp memfs by using vectors of sectors instead of direct heap allocation. This architectural choice provides dramatic improvements for unpreallocated file writes.

The heap allocation problem

The original WinFsp memfs allocates heap memory directly for each file as it grows. This approach has several performance issues:

Heap fragmentation

When you repeatedly allocate and free variable-sized blocks, the heap becomes fragmented. This forces the allocator to:
  • Search longer for suitable free blocks
  • Merge adjacent free blocks
  • Request more memory from the OS when contiguous space isn’t available

Slow reallocation

When a file grows, the heap allocator must:
  1. Find a new larger block
  2. Copy all existing data to the new location
  3. Free the old block
For large files, this copying becomes extremely expensive, especially when files grow incrementally (like during downloads).

Unpredictable performance

Heap allocation time varies based on:
  • Current heap state and fragmentation
  • Size of the allocation request
  • Available contiguous memory
This makes performance unpredictable and often poor for real-world usage patterns.

The vector-based solution

WinFsp-MemFs-Extended uses a different approach:
struct SectorNode {
    SectorVector Sectors;  // std::vector<Sector*>
};
Instead of allocating the entire file as one block, it allocates fixed-size sectors and stores pointers in a vector.

Performance advantages

No data copying on growth

When you resize a file:
  1. Vector resizes (only pointer array grows/shrinks)
  2. New sectors allocated (each is a fixed size)
  3. Existing sector data stays in place (no copying!)
The file data itself never moves, regardless of how large the file grows. Only the vector of pointers needs to be reallocated, which is much smaller and faster.

Fixed-size allocation benefits

All sectors are the same size (FULL_SECTOR_SIZE), which provides:

Faster allocation

The heap can optimize for fixed-size blocks:
  • No need to search for variable-sized free blocks
  • Can maintain free lists of exactly the right size
  • Allocation time is constant regardless of file size

Reduced fragmentation

Fixed-size allocations fragment much less:
  • Freed sectors can be reused for any file
  • No unusable gaps between allocations
  • Better memory utilization over time

Predictable performance

Every sector allocation takes the same amount of time:
  • No performance degradation as files grow
  • Consistent behavior regardless of file size
  • Reliable performance characteristics

Private heap optimization

The SectorManager uses a dedicated Windows heap:
SectorManager::SectorManager() {
    // Growable private heap: default options, no initial or maximum size
    this->heap = HeapCreate(0, 0, 0);
}
Benefits of a private heap:

Reduced contention

The default process heap is shared by all allocations in your application. A private heap eliminates lock contention from:
  • Other threads allocating memory
  • Different subsystems competing for heap locks
  • Background allocations from libraries and frameworks

Optimized for access patterns

The heap can optimize specifically for sector allocation patterns:
  • Frequent fixed-size allocations
  • Predictable allocation/deallocation sequences
  • High allocation churn for temporary files

Easier cleanup

Destroying the entire heap at once is much faster than freeing individual allocations:
SectorManager::~SectorManager() {
    HeapDestroy(this->heap); // Frees all sectors instantly
}

Dynamic allocation benefits

Unlike traditional RAM disks that preallocate all memory, WinFsp-MemFs-Extended allocates dynamically:

Memory efficiency

You only use memory for actual file content:
  • Empty file system uses minimal memory
  • Memory grows as you add files
  • Memory shrinks as you delete files
  • No wasted pre-allocated space

Flexible sizing

You can set a maximum size limit, but the file system:
  • Starts with minimal memory usage
  • Grows on demand up to the limit
  • Adapts to actual usage patterns
  • Doesn’t waste memory on unused capacity

Better for long-running systems

Dynamic allocation makes the RAM disk suitable for:
  • Servers that run continuously
  • Development environments with varying needs
  • Build systems with temporary file requirements
  • Any scenario where memory usage fluctuates

Benchmark comparison

The performance difference is dramatic for unpreallocated file writes:

Original memfs

Writing large files without preallocation:
  • Performance degrades significantly as file size grows
  • Copy overhead dominates write time
  • Effectively unusable for downloads or streaming writes
  • Performance unpredictable based on heap state

WinFsp-MemFs-Extended

Writing the same files:
  • Consistent performance regardless of file size
  • No copy overhead as file grows
  • Suitable for web downloads and streaming
  • Predictable performance characteristics
From the README benchmarks: "The unpreallocated file write times make the original memfs unusable, especially for web downloads." See the benchmark graphs in the source repository’s README.md for detailed performance comparisons.

When to preallocate

Despite the improvements, preallocation still helps in specific scenarios:

Maximum sequential speed

If you:
  • Know the exact file size in advance
  • Need absolute maximum sequential write speed
  • Can preallocate using NtCreateFile with AllocationSize
Then the original memfs may be slightly faster for that specific use case.

Typical usage

For most real-world scenarios:
  • File sizes aren’t known in advance (downloads, streams)
  • Files grow incrementally (logs, databases)
  • Many small files are created and deleted
  • Memory efficiency matters
WinFsp-MemFs-Extended provides superior overall performance.

Memory tracking

The system tracks memory usage in real-time:
UINT64 SectorManager::GetAllocatedSectors() {
    // Atomically adding zero performs a thread-safe read of the 64-bit counter
    return InterlockedExchangeAdd64((volatile LONG64*)&this->allocatedSectors, 0);
}
You can query:
  • Total allocated sectors
  • Current memory usage
  • Available capacity
  • Usage trends over time
This allows:
  • Enforcing memory limits
  • Monitoring memory consumption
  • Predicting when limits will be reached
  • Optimizing file placement strategies
See sectors.cpp:114-116 and memfs.h:44-46 for the tracking implementation.
