Memory Allocators
Zig’s allocator system is one of its most distinctive features. Rather than having a single global allocator, Zig encourages passing allocators explicitly, giving you complete control over memory management.
The Allocator Interface
All allocators in Zig implement the std.mem.Allocator interface:
lib/std/mem/Allocator.zig:1-21
const std = @import("../std.zig");
const Allocator = @This();
pub const Error = error{OutOfMemory};
/// The type erased pointer to the allocator implementation.
ptr: *anyopaque,
vtable: *const VTable,
pub const VTable = struct {
/// Return a pointer to `len` bytes with specified `alignment`, or return
/// `null` indicating the allocation failed.
alloc: *const fn (*anyopaque, len: usize, alignment: Alignment, ret_addr: usize) ?[*]u8,
/// Attempt to expand or shrink memory in place.
resize: *const fn (*anyopaque, memory: []u8, alignment: Alignment, new_len: usize, ret_addr: usize) bool,
/// Attempt to expand or shrink memory, allowing relocation.
remap: *const fn (*anyopaque, memory: []u8, alignment: Alignment, new_len: usize, ret_addr: usize) ?[*]u8,
/// Free and invalidate a region of memory.
free: *const fn (*anyopaque, memory: []u8, alignment: Alignment, ret_addr: usize) void,
};
Core Allocator Methods
The allocator provides high-level convenience methods:
lib/std/mem/Allocator.zig:163-195
/// Allocates an array of `n` items of type `T` and sets all the
/// items to `undefined`.
pub fn alloc(self: Allocator, comptime T: type, n: usize) Error![]T {
return self.allocAdvancedWithRetAddr(T, null, n, @returnAddress());
}
/// Returns a pointer to undefined memory.
/// Call `destroy` with the result to free the memory.
pub fn create(a: Allocator, comptime T: type) Error!*T {
if (@sizeOf(T) == 0) {
const ptr = comptime std.mem.alignBackward(usize, math.maxInt(usize), @alignOf(T));
return @ptrFromInt(ptr);
}
const ptr: *T = @ptrCast(try a.allocBytesWithAlignment(.of(T), @sizeOf(T), @returnAddress()));
return ptr;
}
/// `ptr` should be the return value of `create`, or otherwise
/// have the same address and alignment property.
pub fn destroy(self: Allocator, ptr: anytype) void {
const info = @typeInfo(@TypeOf(ptr)).pointer;
if (info.size != .one) @compileError("ptr must be a single item pointer");
const T = info.child;
if (@sizeOf(T) == 0) return;
const non_const_ptr = @as([*]u8, @ptrCast(@constCast(ptr)));
self.rawFree(non_const_ptr[0..@sizeOf(T)], .fromByteUnits(info.alignment), @returnAddress());
}
The allocator interface uses a vtable pattern, allowing different allocator implementations to be used interchangeably. This is Zig’s approach to polymorphism without inheritance.
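The convenience methods above pair off naturally: create/destroy for single items, alloc/free for slices. A minimal usage sketch (the Point type is a hypothetical example):

```zig
const std = @import("std");

const Point = struct { x: i32, y: i32 };

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // create allocates a single item; destroy frees it.
    const p = try allocator.create(Point);
    defer allocator.destroy(p);
    p.* = .{ .x = 1, .y = 2 };

    // alloc allocates a slice of n items; free releases it.
    const xs = try allocator.alloc(i32, 4);
    defer allocator.free(xs);
    @memset(xs, 7);
}
```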
Memory Alignment
Allocation alignment is expressed with the std.mem.Alignment enum, which stores the log2 of the byte alignment:
pub const Alignment = enum(math.Log2Int(usize)) {
@"1" = 0,
@"2" = 1,
@"4" = 2,
@"8" = 3,
@"16" = 4,
@"32" = 5,
@"64" = 6,
_,
pub fn toByteUnits(a: Alignment) usize {
return @as(usize, 1) << @intFromEnum(a);
}
pub fn fromByteUnits(n: usize) Alignment {
assert(std.math.isPowerOfTwo(n));
return @enumFromInt(@ctz(n));
}
pub inline fn of(comptime T: type) Alignment {
return comptime fromByteUnits(@alignOf(T));
}
/// Return next address with this alignment.
pub fn forward(a: Alignment, address: usize) usize {
const x = (@as(usize, 1) << @intFromEnum(a)) - 1;
return (address + x) & ~x;
}
/// Return whether address is aligned to this amount.
pub fn check(a: Alignment, address: usize) bool {
return @ctz(address) >= @intFromEnum(a);
}
};
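The bit tricks above can be exercised directly; a small sketch, assuming std.mem.Alignment is public as in recent Zig versions:

```zig
const std = @import("std");

pub fn main() !void {
    const a16 = std.mem.Alignment.fromByteUnits(16);

    // forward rounds an address up to the next 16-byte boundary.
    std.debug.assert(a16.forward(5) == 16);
    std.debug.assert(a16.forward(32) == 32);

    // check tests whether an address already satisfies the alignment
    // by counting trailing zero bits.
    std.debug.assert(a16.check(48));
    std.debug.assert(!a16.check(40));

    // Round-trip between the log2 enum form and byte units.
    std.debug.assert(a16.toByteUnits() == 16);
}
```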
Built-in Allocators
Zig provides several allocator implementations in std.heap:
Page Allocator
Direct syscalls for every allocation - simple but slow:
/// On operating systems that support memory mapping, this allocator makes a
/// syscall directly for every allocation and free.
///
/// Thread-safe.
pub const page_allocator: Allocator = if (@hasDecl(root, "os") and
@hasDecl(root.os, "heap") and
@hasDecl(root.os.heap, "page_allocator"))
root.os.heap.page_allocator
else if (builtin.target.cpu.arch.isWasm()) .{
.ptr = undefined,
.vtable = &WasmAllocator.vtable,
} else if (builtin.target.os.tag == .plan9) .{
.ptr = undefined,
.vtable = &SbrkAllocator(std.os.plan9.sbrk).vtable,
} else .{
.ptr = undefined,
.vtable = &PageAllocator.vtable,
};
Use case: When you need guaranteed OS-level memory protection, or for backing other allocators.
const std = @import("std");
pub fn main() !void {
const allocator = std.heap.page_allocator;
const memory = try allocator.alloc(u8, 1024);
defer allocator.free(memory);
// Each allocation/free is a syscall
}
General Purpose Allocator (Debug Allocator)
Production-quality allocator with extensive debugging capabilities:
pub const DebugAllocatorConfig = @import("heap/debug_allocator.zig").Config;
pub const DebugAllocator = @import("heap/debug_allocator.zig").DebugAllocator;
/// Deprecated; to be removed after 0.14.0 is tagged.
pub const GeneralPurposeAllocatorConfig = DebugAllocatorConfig;
/// Deprecated; to be removed after 0.14.0 is tagged.
pub const GeneralPurposeAllocator = DebugAllocator;
Features:
- Memory leak detection
- Double-free detection
- Use-after-free detection (when safety checks enabled)
- Stack traces for allocations
const std = @import("std");
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{
.stack_trace_frames = 10,
.resize_stack_traces = true,
}){};
defer {
const check = gpa.deinit();
if (check == .leak) {
@panic("Memory leak detected!");
}
}
const allocator = gpa.allocator();
const data = try allocator.alloc(i32, 100);
defer allocator.free(data);
}
Use GeneralPurposeAllocator during development to catch memory bugs early. Its safety checks do add overhead, which is usually acceptable in debug builds; for release builds, switch to a faster allocator.
Arena Allocator
Allocate many objects, free them all at once:
lib/std/heap/arena_allocator.zig:1-63
/// This allocator takes an existing allocator, wraps it, and provides an interface where
/// you can allocate and then free it all together. Calls to free an individual item only
/// free the item if it was the most recent allocation, otherwise calls to free do
/// nothing.
pub const ArenaAllocator = struct {
child_allocator: Allocator,
state: State,
pub const State = struct {
buffer_list: std.SinglyLinkedList = .{},
end_index: usize = 0,
pub fn promote(self: State, child_allocator: Allocator) ArenaAllocator {
return .{
.child_allocator = child_allocator,
.state = self,
};
}
};
pub fn allocator(self: *ArenaAllocator) Allocator {
return .{
.ptr = self,
.vtable = &.{
.alloc = alloc,
.resize = resize,
.remap = remap,
.free = free,
},
};
}
pub fn init(child_allocator: Allocator) ArenaAllocator {
return (State{}).promote(child_allocator);
}
pub fn deinit(self: ArenaAllocator) void {
var it = self.state.buffer_list.first;
while (it) |node| {
const next_it = node.next;
const buf_node: *BufNode = @fieldParentPtr("node", node);
const alloc_buf = @as([*]u8, @ptrCast(buf_node))[0..buf_node.data];
self.child_allocator.rawFree(alloc_buf, BufNode_alignment, @returnAddress());
it = next_it;
}
}
};
Use case: When you have many small allocations with the same lifetime.
const std = @import("std");
pub fn processRequest(backing_allocator: std.mem.Allocator) !void {
var arena = std.heap.ArenaAllocator.init(backing_allocator);
defer arena.deinit(); // Free everything at once
const allocator = arena.allocator();
// Make many allocations
const name = try allocator.dupe(u8, "John Doe");
const items = try allocator.alloc(i32, 100);
var map = std.StringHashMap([]const u8).init(allocator);
try map.put(name, "employee");
_ = items;
// No need to free individual allocations;
// arena.deinit() frees everything, including the map's storage
}
Arena allocators are perfect for request/response processing, compilation passes, or any scenario where you can free all memory at once.
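For repeated passes, an arena can also be reset instead of destroyed and recreated. A sketch, assuming the ArenaAllocator.reset method and its ResetMode values available in recent Zig:

```zig
const std = @import("std");

// Reusing one arena across iterations avoids re-requesting memory from the
// backing allocator: reset(.retain_capacity) drops all allocations but
// keeps the underlying buffers for the next round.
pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();

    for (0..3) |i| {
        const allocator = arena.allocator();
        const scratch = try allocator.alloc(u8, 256 * (i + 1));
        @memset(scratch, 0);
        // Free everything from this iteration in one step.
        _ = arena.reset(.retain_capacity);
    }
}
```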
Fixed Buffer Allocator
Allocate from a fixed-size buffer - no syscalls:
pub const FixedBufferAllocator = @import("heap/FixedBufferAllocator.zig");
const std = @import("std");
pub fn main() !void {
var buffer: [8192]u8 = undefined;
var fba = std.heap.FixedBufferAllocator.init(&buffer);
const allocator = fba.allocator();
// Allocates from the buffer
const data = try allocator.alloc(i32, 100);
allocator.free(data);
// Reset and reuse
fba.reset();
}
Use case: Embedded systems, hot paths, or when you want stack-like allocation.
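Because the buffer is fixed, exhaustion surfaces as error.OutOfMemory rather than a syscall or OS failure. A small sketch of that behavior:

```zig
const std = @import("std");

pub fn main() !void {
    var buffer: [64]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);
    const allocator = fba.allocator();

    // First allocation succeeds: 48 of 64 bytes used.
    const block = try allocator.alloc(u8, 48);
    _ = block;

    // Only 16 bytes remain, so this fails with error.OutOfMemory
    // instead of touching the OS.
    try std.testing.expectError(error.OutOfMemory, allocator.alloc(u8, 32));

    // reset() reclaims the whole buffer for reuse.
    fba.reset();
    const again = try allocator.alloc(u8, 64);
    _ = again;
}
```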
C Allocator
Wrapper around libc’s malloc/free:
/// Supports the full Allocator interface, including alignment, and exploiting
/// `malloc_usable_size` if available. For an allocator that directly calls
/// `malloc`/`free`, see `raw_c_allocator`.
pub const c_allocator: Allocator = .{
.ptr = undefined,
.vtable = &CAllocator.vtable,
};
/// Asserts allocations are within `@alignOf(std.c.max_align_t)` and directly
/// calls `malloc`/`free`. Does not attempt to utilize `malloc_usable_size`.
pub const raw_c_allocator: Allocator = .{
.ptr = undefined,
.vtable = &raw_c_allocator_vtable,
};
Use case: Interfacing with C libraries or when you need malloc compatibility. Note that c_allocator requires linking against libc.
Thread-Safe Allocator
Wrapper that adds thread-safety to any allocator:
pub const ThreadSafeAllocator = @import("heap/ThreadSafeAllocator.zig");
const std = @import("std");
pub fn main() !void {
var buffer: [4096]u8 = undefined;
var backing_allocator = std.heap.FixedBufferAllocator.init(&buffer);
var tsa = std.heap.ThreadSafeAllocator{ .child_allocator = backing_allocator.allocator() };
const allocator = tsa.allocator();
_ = allocator; // Safe to use from multiple threads
}
Memory Pool
Fast allocation for single-type objects:
/// A memory pool that can allocate objects of a single type very quickly.
/// Use this when you need to allocate a lot of objects of the same type,
/// because it outperforms general purpose allocators.
pub fn MemoryPool(comptime Item: type) type {
return memory_pool.Extra(Item, .{ .alignment = null });
}
pub const memory_pool = @import("heap/memory_pool.zig");
const std = @import("std");
const Node = struct {
value: i32,
next: ?*Node,
};
pub fn main() !void {
var pool = std.heap.MemoryPool(Node).init(std.heap.page_allocator);
defer pool.deinit();
// Very fast allocation of Node objects
const node1 = try pool.create();
const node2 = try pool.create();
// Free individual nodes
pool.destroy(node1);
pool.destroy(node2);
}
Allocator Patterns
Passing Allocators
Always pass allocators explicitly:
pub fn processData(allocator: std.mem.Allocator, input: []const u8) ![]u8 {
const result = try allocator.alloc(u8, input.len * 2);
// process...
return result;
}
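Filled in with a concrete transformation (byte doubling, a hypothetical choice for illustration), the convention looks like this in full, including the caller's responsibility to free the returned slice:

```zig
const std = @import("std");

// The callee allocates with the caller-supplied allocator and returns
// ownership of the result; the caller frees it.
fn processData(allocator: std.mem.Allocator, input: []const u8) ![]u8 {
    const result = try allocator.alloc(u8, input.len * 2);
    for (input, 0..) |byte, i| {
        result[i * 2] = byte;
        result[i * 2 + 1] = byte;
    }
    return result;
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const doubled = try processData(allocator, "ab");
    defer allocator.free(doubled);
    std.debug.assert(std.mem.eql(u8, doubled, "aabb"));
}
```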
Allocator Wrappers
You can create custom allocator wrappers:
const std = @import("std");
const LoggingAllocator = struct {
parent_allocator: std.mem.Allocator,
pub fn allocator(self: *LoggingAllocator) std.mem.Allocator {
return .{
.ptr = self,
.vtable = &.{
.alloc = alloc,
.resize = resize,
.remap = remap,
.free = free,
},
};
}
fn alloc(
ctx: *anyopaque,
len: usize,
alignment: std.mem.Alignment,
ret_addr: usize,
) ?[*]u8 {
const self: *LoggingAllocator = @ptrCast(@alignCast(ctx));
std.debug.print("Allocating {} bytes\n", .{len});
return self.parent_allocator.rawAlloc(len, alignment, ret_addr);
}
// Implement other vtable methods...
};
Validation Allocator
Zig provides a built-in validation wrapper:
/// Detects and asserts if the std.mem.Allocator interface is violated by the caller
/// or the allocator.
pub fn ValidationAllocator(comptime T: type) type {
return struct {
underlying_allocator: T,
const Self = @This();
pub fn init(underlying_allocator: T) @This() {
return .{
.underlying_allocator = underlying_allocator,
};
}
pub fn allocator(self: *Self) Allocator {
return .{
.ptr = self,
.vtable = &.{
.alloc = alloc,
.resize = resize,
.remap = remap,
.free = free,
},
};
}
// Validates alignment, size > 0, etc.
};
}
pub fn validationWrap(allocator: anytype) ValidationAllocator(@TypeOf(allocator)) {
return ValidationAllocator(@TypeOf(allocator)).init(allocator);
}
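Using validationWrap on an existing allocator is a one-liner; a sketch wrapping a FixedBufferAllocator:

```zig
const std = @import("std");

pub fn main() !void {
    var buffer: [1024]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);

    // Every call through the wrapper is checked for interface violations
    // (zero-length allocations, bad alignment, and similar).
    var validated = std.mem.validationWrap(fba.allocator());
    const allocator = validated.allocator();

    const data = try allocator.alloc(u32, 8);
    defer allocator.free(data);
}
```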
Common Memory Utilities
The std.mem module provides essential memory utilities:
/// Copy all of source into dest at position 0.
pub fn copyForwards(comptime T: type, dest: []T, source: []const T) void {
for (dest[0..source.len], source) |*d, s| d.* = s;
}
/// Zero initializes the type.
pub fn zeroes(comptime T: type) T {
switch (@typeInfo(T)) {
.int, .float => return @as(T, 0),
.bool => return false,
.optional => return null,
.@"struct" => |struct_info| {
if (@sizeOf(T) == 0) return undefined;
if (struct_info.layout == .@"extern") {
var item: T = undefined;
@memset(asBytes(&item), 0);
return item;
} else {
var structure: T = undefined;
inline for (struct_info.fields) |field| {
if (!field.is_comptime) {
@field(structure, field.name) = zeroes(field.type);
}
}
return structure;
}
},
// ...
}
}
Use @memset, @memcpy, and builtin functions when possible - they’re optimized and often compile to single instructions.
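A quick sketch of both utilities; the Point struct is a hypothetical example, and note that copyForwards is specifically safe when dest and source overlap with dest starting earlier:

```zig
const std = @import("std");

const Point = struct { x: i32, y: i32 };

pub fn main() !void {
    // zeroes recursively zero-initializes a value, field by field.
    const p = std.mem.zeroes(Point);
    std.debug.assert(p.x == 0 and p.y == 0);

    // copyForwards shifts overlapping data toward lower indices,
    // which @memcpy would reject because the slices alias.
    var buf = [_]u8{ 'a', 'b', 'c', 'd', 'e' };
    std.mem.copyForwards(u8, buf[0..4], buf[1..5]);
    std.debug.assert(std.mem.eql(u8, &buf, "bcdee"));
}
```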
Best Practices
1. Choose the Right Allocator
- Testing: Use std.testing.allocator (a DebugAllocator with leak detection)
- Request handling: Use ArenaAllocator
- Hot paths: Use FixedBufferAllocator with a stack buffer
- General purpose: Use GeneralPurposeAllocator (DebugAllocator)
- Long-running production code: Consider c_allocator or a custom allocator
2. Always Handle OutOfMemory
const data = try allocator.alloc(u8, size);
defer allocator.free(data);
3. Use defer for Cleanup
var arena = std.heap.ArenaAllocator.init(allocator);
defer arena.deinit();
4. Consider Unmanaged Types
For data structures you’ll use repeatedly with the same allocator:
// Managed - stores the allocator
var managed = std.ArrayList(i32).init(allocator);
defer managed.deinit();
// Unmanaged - doesn't store the allocator, slightly smaller
var unmanaged = std.ArrayListUnmanaged(i32){};
defer unmanaged.deinit(allocator);
try unmanaged.append(allocator, 42);
Next Steps
Testing Framework
Learn about testing with allocators
Key Modules
Explore data structures that use allocators