memefs is designed for high-performance RAM disk operations, but understanding its performance characteristics helps you use it effectively.

Core improvements over original memfs

memefs was rewritten to address critical performance issues in the original WinFsp memfs:

Vector-based sector storage

Original memfs: Used heap allocation for every file write, causing severe performance degradation for unpreallocated files (see benchmarks).

memefs: Uses std::vector<Sector*> (sectors.h:13) with a private heap for sector management. This provides:
  • Much faster unpreallocated writes: No heap fragmentation from individual allocations
  • Predictable memory layout: Sectors stored contiguously in vectors
  • Efficient resizing: Vector growth handles most reallocation scenarios
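
The storage idea can be sketched as follows. This is a simplified, portable sketch: the 512-byte sector size and the std::vector<Sector*> layout come from the text above, but the Resize logic and plain new/delete (in place of memefs's private heap) are illustrative.

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t kSectorSize = 512;  // MEMFS_SECTOR_SIZE

struct Sector {
    unsigned char bytes[kSectorSize];
};

// Simplified sector store: one vector of sector pointers per file.
// Growing the file appends sectors; the vector itself stays contiguous,
// so per-write heap churn is limited to the newly added sectors.
struct SectorNode {
    std::vector<Sector*> sectors;

    void Resize(std::size_t newSizeBytes) {
        const std::size_t needed =
            (newSizeBytes + kSectorSize - 1) / kSectorSize;  // round up
        while (sectors.size() < needed) sectors.push_back(new Sector{});
        while (sectors.size() > needed) {
            delete sectors.back();
            sectors.pop_back();
        }
    }
};
```

Because the vector only holds pointers, growing a file by one sector is a single small allocation plus a cheap push_back, rather than reallocating the whole file's data.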

Memory management

memefs uses a dedicated private heap for sector allocations:
HANDLE heap; // Private heap in SectorManager
volatile UINT64 allocatedSectors; // Track allocated memory
This provides:
  • Isolated memory allocation: Doesn’t interfere with system heap
  • Better locality: Related data allocated together
  • Accurate tracking: Know exactly how much memory is used
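
The tracking side can be sketched portably. memefs itself pairs a private Win32 heap (HeapCreate) with InterlockedIncrement on the counter shown above; this sketch substitutes std::atomic, and the method names are illustrative.

```cpp
#include <atomic>
#include <cstdint>

// Sketch of allocation tracking: every sector allocation and free updates
// one atomic counter, so "how much memory is in use" is a single read.
struct SectorManager {
    std::atomic<std::uint64_t> allocatedSectors{0};

    void OnAllocate(std::uint64_t count) { allocatedSectors += count; }
    void OnFree(std::uint64_t count)     { allocatedSectors -= count; }

    std::uint64_t UsedBytes(std::uint64_t sectorSize) const {
        return allocatedSectors.load() * sectorSize;
    }
};
```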

Performance trade-offs

When to use memefs

Use memefs when you:
  • Need a constantly-running RAM disk for general file operations
  • Download files or create new files without preallocation
  • Want dynamic memory allocation (only uses memory actually needed)
  • Need reasonable sequential and random I/O performance
  • Value stability and modern C++ safety guarantees

When to use original memfs

Use the original memfs when you:
  • Can preallocate files with NtCreateFile and its AllocationSize parameter
  • Need maximum sequential write speed for preallocated files
  • Have very specific performance requirements for preallocated scenarios
As noted in the benchmarks: “if you need maximum sequential speed and are able to preallocate the file with NtCreateFile and its AllocationSize, then you should use the original memfs.”

Read/write performance

The SectorManager::ReadWrite<IsReading>() template handles all I/O:
template <bool IsReading>
static bool ReadWrite(SectorNode& node, void* buffer, 
                      const size_t size, const size_t offset);

Read operations (io.cpp:4-24)

  1. Check if offset is beyond file size
  2. Calculate end offset (capped at file size)
  3. Call ReadWrite<true>() to copy data from sectors to buffer
  4. Return bytes transferred
Performance: Reads are very fast as they’re just memory copies from the sector vector.

Write operations (io.cpp:26-65)

  1. Handle write modes (append, constrained, normal)
  2. Resize file if needed via SetFileSizeInternal()
  3. Call ReadWrite<false>() to copy data from buffer to sectors
  4. Return bytes transferred
Performance: Writes may trigger reallocation if the file grows, but the vector-based approach minimizes overhead.
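
The shared template can be sketched as a single loop that walks the sector vector and copies min(remaining, sector-local) bytes per step, with the copy direction decided at compile time. The sector layout and types here are illustrative, not memefs's actual ones.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

constexpr std::size_t kSectorSize = 512;

// Sketch of the ReadWrite<IsReading> idea: IsReading=true copies sector
// data into the caller's buffer; IsReading=false copies the buffer into
// the sectors. Copies are split at sector boundaries.
template <bool IsReading>
bool ReadWrite(std::vector<std::vector<unsigned char>>& sectors,
               void* buffer, std::size_t size, std::size_t offset) {
    auto* buf = static_cast<unsigned char*>(buffer);
    std::size_t done = 0;
    while (done < size) {
        const std::size_t idx = (offset + done) / kSectorSize;  // which sector
        const std::size_t in  = (offset + done) % kSectorSize;  // offset within it
        if (idx >= sectors.size()) return false;  // past allocated sectors
        const std::size_t chunk = std::min(size - done, kSectorSize - in);
        if constexpr (IsReading)
            std::memcpy(buf + done, sectors[idx].data() + in, chunk);
        else
            std::memcpy(sectors[idx].data() + in, buf + done, chunk);
        done += chunk;
    }
    return true;
}
```

A write that straddles a sector boundary (say 5 bytes at offset 510) simply becomes two memcpy calls, one into each sector.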

Memory limits

Set maximum memory usage with the -s flag:
memefs -s 4294967296 -m R:  # 4GB limit
memefs tracks:
  • Used size: Actual data stored (GetUsedTotalSize())
  • Max size: User-specified limit (maxFsSize)
  • Available size: Remaining space (CalculateAvailableTotalSize())
When memory is full, operations fail with STATUS_DISK_FULL.
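
The accounting above reduces to simple arithmetic; the member names below mirror the ones quoted in the text, but the bodies are an illustrative sketch.

```cpp
#include <cstdint>

// Sketch of the size accounting: available space is the configured -s cap
// minus what is already stored; a write that won't fit maps to
// STATUS_DISK_FULL in the real file system.
struct FsAccounting {
    std::uint64_t maxFsSize;  // user-specified -s limit, in bytes
    std::uint64_t usedSize;   // bytes currently stored

    std::uint64_t CalculateAvailableTotalSize() const {
        return usedSize >= maxFsSize ? 0 : maxFsSize - usedSize;
    }
    bool WouldOverflow(std::uint64_t extraBytes) const {
        return extraBytes > CalculateAvailableTotalSize();
    }
};
```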

Sector alignment

All allocations are aligned to sector boundaries:
static size_t AlignSize(const size_t size, const bool alignUp = true);
static UINT64 GetSectorAmount(const size_t alignedSize);
  • Sector size: 512 bytes (MEMFS_SECTOR_SIZE)
  • Alignment: Rounds up to nearest sector boundary
  • Overhead: Small files (< 512 bytes) still use 512 bytes
This matches traditional disk behavior and ensures compatibility.
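
The alignment helpers can be sketched directly from the signatures above; these bodies are illustrative, not memefs's exact implementation.

```cpp
#include <cstddef>
#include <cstdint>

constexpr std::size_t kSectorSize = 512;  // MEMFS_SECTOR_SIZE

// Round a byte count up (default) or down to the nearest 512-byte boundary.
std::size_t AlignSize(std::size_t size, bool alignUp = true) {
    if (alignUp)
        return (size + kSectorSize - 1) / kSectorSize * kSectorSize;
    return size / kSectorSize * kSectorSize;
}

// Convert an already-aligned byte count into a sector count.
std::uint64_t GetSectorAmount(std::size_t alignedSize) {
    return alignedSize / kSectorSize;
}
```

So a 1-byte file aligns up to 512 bytes (one sector), and a 513-byte file to 1024 bytes (two sectors), which is the small-file overhead noted above.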

Concurrency

memefs handles concurrent access efficiently:

Sector-level locking

Each SectorNode has its own mutex:
std::shared_mutex SectorsMutex; // Per-file locking
This allows:
  • Multiple readers: Concurrent reads from the same file
  • Single writer: Exclusive write access per file
  • File-level parallelism: Different files accessed simultaneously
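
In C++ terms, this is the standard std::shared_mutex pattern: readers take a shared lock, writers an exclusive one. A minimal sketch (the string member stands in for the sector vector; names other than SectorsMutex are illustrative):

```cpp
#include <mutex>
#include <shared_mutex>
#include <string>

// Per-file reader/writer locking: concurrent reads of the same file can
// overlap, while a write holds the file exclusively.
struct FileNode {
    std::shared_mutex SectorsMutex;
    std::string data;  // stand-in for the file's sector storage

    std::string Read() {
        std::shared_lock lock(SectorsMutex);  // many readers at once
        return data;
    }
    void Write(const std::string& s) {
        std::unique_lock lock(SectorsMutex);  // one writer, no readers
        data = s;
    }
};
```

Because each file carries its own mutex, a write to one file never blocks reads or writes on another.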

File node reference counting

File nodes use atomic reference counting:
volatile long refCount{0}; // Atomic operations via InterlockedIncrement
This ensures:
  • Safe concurrent access: Multiple handles to the same file
  • Proper cleanup: File deleted only when refcount reaches zero
  • No race conditions: Atomic increment/decrement operations
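
The lifecycle can be sketched as follows. memefs uses InterlockedIncrement on the volatile long shown above; std::atomic is the portable equivalent, and the method names here are illustrative.

```cpp
#include <atomic>

// Sketch of reference counting: each open handle bumps the count, each
// close drops it, and the node is destroyed only on the last release.
struct RefCountedNode {
    std::atomic<long> refCount{0};
    bool destroyed = false;

    void AddRef() { ++refCount; }

    // Returns true when this release dropped the last reference.
    bool Release() {
        if (--refCount == 0) {
            destroyed = true;  // real code would free the node here
            return true;
        }
        return false;
    }
};
```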

WinFsp dispatcher threading

WinFsp automatically creates a thread pool (typically 8-16 threads) to dispatch operations. memefs is designed to be fully thread-safe.

Optimization tips

1. Set appropriate memory limits

Don’t set -s too low or you’ll run out of space; don’t set it higher than your system can spare, since memefs allocates on demand and a full RAM disk with an oversized limit can exhaust physical RAM:
memefs -s 8589934592 -m R:  # 8GB is reasonable for most systems

2. Use case-insensitive mode for compatibility

Windows applications expect case-insensitive file systems:
memefs -i -m R:  # Case insensitive

3. Avoid flush-and-purge unless needed

The -f flag forces cache flushes, which can slow down operations:
memefs -f -m R:  # Only use if you need guaranteed cache invalidation

4. Use NTFS file system name

Some applications check the file system name:
memefs -F NTFS -m R:  # Better compatibility

5. Preallocate large files if possible

While memefs handles unpreallocated files well, preallocation is still faster for very large files:
// Using the Windows API (path and size here are illustrative)
HANDLE h = CreateFileW(L"R:\\bigfile.bin", GENERIC_WRITE, 0, NULL,
                       CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
FILE_ALLOCATION_INFO info = {};
info.AllocationSize.QuadPart = size;  // reserve the full size before writing
SetFileInformationByHandle(h, FileAllocationInfo, &info, sizeof(info));

Cache management

WinFsp integrates with the Windows cache manager:
  • FileInfoTimeout: 15 seconds (create.cpp:41)
  • PostCleanupWhenModifiedOnly: Reduces cleanup overhead (create.cpp:49)
  • PostDispositionWhenNecessaryOnly: Reduces disposition overhead (create.cpp:50)
The cache manager reduces round-trips to memefs for frequently-accessed metadata.

Monitoring performance

To understand performance:
  1. Enable debug logging: Use -d -1 -D memefs.log to see all operations
  2. Check operation counts: Look at the debug log for operation frequency
  3. Monitor memory usage: Check Task Manager or Process Explorer
  4. Run benchmarks: Use tools like fsbench or diskspd
See the benchmarks page for detailed performance comparisons.

Common performance issues

Issue: Slow writes

Cause: May be hitting the memory limit, or the system is under memory pressure.
Solution:
  • Check available RAM
  • Increase -s limit if too low
  • Check for memory leaks in your application

Issue: Slow directory enumeration

Cause: Large directories with thousands of files. memefs stores files in a map, so individual lookups are O(log n), but enumerating a huge directory still visits every entry.
Solution:
  • Consider organizing files into subdirectories
  • Use directory markers to enumerate in chunks

Issue: High CPU usage

Cause: Too many small I/O operations.
Solution:
  • Use larger buffer sizes in your application
  • Enable write caching in your application
  • Check if antivirus is scanning the RAM disk

Performance compared to physical disks

As a RAM disk, memefs is:
  • Much faster than traditional HDDs (100-1000x)
  • Faster than most SSDs for random I/O
  • Similar or faster than NVMe SSDs for sequential I/O
  • Limited by RAM speed and CPU memory copy performance
The key advantage is zero seek time and consistent low latency.
