malloc/free/realloc). Unlike the stack, it has no fixed layout, making exploitation more complex but also more powerful. The dominant allocator on Linux is ptmalloc2 (glibc), and the techniques in this section target it specifically.
Heap fundamentals
Chunks
Every allocation is wrapped in a chunk — a header plus user data. The header stores the chunk size and status flags. The low three bits of `mchunk_size` are flags:
- `P` (bit 0): previous chunk in use
- `M` (bit 1): chunk was mmap’d
- `A` (bit 2): chunk belongs to a non-main arena
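A minimal sketch of decoding these bits from a raw size field (the mask names mirror glibc's, but the code is standalone):

```c
#include <stdint.h>

/* The low three bits of mchunk_size are flags; mask them off to get
 * the real chunk size. Mask names mirror glibc's. */
#define PREV_INUSE     0x1  /* P: previous chunk in use       */
#define IS_MMAPPED     0x2  /* M: chunk was mmap'd            */
#define NON_MAIN_ARENA 0x4  /* A: chunk from a non-main arena */
#define SIZE_BITS ((uint64_t)(PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA))

static uint64_t chunksize(uint64_t mchunk_size) {
    return mchunk_size & ~SIZE_BITS;
}

static int prev_inuse(uint64_t mchunk_size) {
    return (int)(mchunk_size & PREV_INUSE);
}
```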
Bins
When chunks are freed they enter bins — linked lists from which the allocator recycles memory.

| Bin | Size range | Structure |
|---|---|---|
| tcache (per-thread, glibc ≥ 2.26) | 24–1032 bytes | Singly-linked, 7 entries max per size |
| Fast bins | 16–176 bytes | Singly-linked LIFO |
| Unsorted bin | Any | Doubly-linked, staging area |
| Small bins | 16–1008 bytes | Doubly-linked FIFO |
| Large bins | ≥1024 bytes | Doubly-linked with size ordering |
Arenas
In multi-threaded programs, each thread may have its own arena (a separate heap with its own mutex), reducing contention. The main arena expands with `brk`; secondary arenas use `mmap` to create subheaps.
Vulnerability primitives
Use-After-Free (UAF)
A UAF occurs when code continues using a pointer after the referenced chunk has been `free`’d. If an attacker can cause a new allocation to reuse the freed chunk and control its contents, they can manipulate the object’s fields — including virtual-function pointers.
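The pattern can be reproduced in a few lines. Whether the freed chunk is actually reused depends on the allocator, so this sketch just reports it; glibc's tcache typically hands back the most recently freed chunk of the same size:

```c
#include <stdlib.h>
#include <string.h>

/* Victim object with a function pointer, as in a vtable scenario. */
struct obj { void (*handler)(void); char name[24]; };

static void legit(void) { /* the intended handler */ }

/* Returns 1 if a same-size allocation reused the freed chunk —
 * typical for glibc's tcache, which is LIFO per size class. */
static int uaf_reused(void) {
    struct obj *victim = malloc(sizeof *victim);
    if (!victim) return 0;
    victim->handler = legit;
    free(victim);                             /* victim now dangles */

    char *buf = malloc(sizeof(struct obj));   /* same size class */
    int reused = ((void *)buf == (void *)victim);
    memset(buf, 'A', sizeof(struct obj));     /* attacker-controlled write:
                                                 on reuse this clobbers
                                                 victim->handler */
    free(buf);
    return reused;
}
```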
Double-Free
Freeing the same chunk twice corrupts the bin’s free list. In older glibc versions without tcache, this directly corrupts the forward/backward pointers. In modern glibc with tcache, a double-free corrupts the `next` pointer in the tcache bin. Since glibc 2.29, each entry also carries a `tcache_entry->key` field used to detect double-frees within the same bin.
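A toy model of the key check (simplified: real glibc walks the bin to rule out false positives before aborting, and `tcache_key` here is a stand-in for the per-thread marker):

```c
#include <stddef.h>

/* Simplified model of the glibc >= 2.29 tcache double-free check.
 * Returning -1 models the abort path. */
struct tcache_entry { struct tcache_entry *next; void *key; };

static struct tcache_entry *bin_head;
static void *tcache_key(void) { return (void *)&bin_head; }

static int tcache_put(struct tcache_entry *e) {
    if (e->key == tcache_key())   /* entry already carries the key:   */
        return -1;                /* treated as a double free (abort) */
    e->key = tcache_key();
    e->next = bin_head;
    bin_head = e;
    return 0;
}

static struct tcache_entry *tcache_get(void) {
    struct tcache_entry *e = bin_head;
    if (e) { bin_head = e->next; e->key = NULL; }  /* clear key on reuse */
    return e;
}
```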
Heap Overflow
Writing past the end of a heap chunk corrupts the next chunk’s header, allowing an attacker to forge the size field or `P`/`M`/`A` flags — which leads to overlapping allocations or controlled writes.

Attack techniques
Tcache poisoning
Overwrite the `next` pointer of a freed tcache chunk with a target address. The next two `malloc()` calls of the same size will return:
- The poisoned chunk itself
- The forged target address — enabling a write-what-where primitive
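The two-pop behaviour can be modelled with a toy list (not glibc code; safe-linking is ignored here):

```c
#include <stddef.h>

/* Toy model of a tcache bin's singly-linked list: after the head's
 * next pointer is poisoned, the second pop yields the forged address. */
struct entry { struct entry *next; };

static struct entry *tcache_pop(struct entry **head) {
    struct entry *e = *head;
    if (e) *head = e->next;
    return e;
}
```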
Since glibc 2.32, tcache `next` pointers are XOR-obfuscated with the address of the slot that stores them, shifted right by 12 bits (safe-linking). A heap address leak is therefore required to compute the correct obfuscated value.
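The obfuscation is plain XOR, so encoding and decoding are the same operation. A sketch, where `slot_addr` means the address of the `next` field being written:

```c
#include <stdint.h>

/* Safe-linking (glibc >= 2.32): the value stored in a tcache/fastbin
 * next slot is the target XORed with the slot's own address >> 12. */
static uint64_t protect_ptr(uint64_t slot_addr, uint64_t target) {
    return (slot_addr >> 12) ^ target;
}

/* XOR is its own inverse, so revealing uses the same operation. */
static uint64_t reveal_ptr(uint64_t slot_addr, uint64_t stored) {
    return (slot_addr >> 12) ^ stored;
}
```

To poison a tcache bin on these versions, the attacker writes `protect_ptr(victim_slot, target)` over the freed chunk's `next` field, which is why the heap leak is needed.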
Fast bin attack
Corrupt the `fd` pointer of a freed fast bin chunk to point to a fake chunk at a target address. When that fake chunk is allocated, any subsequent write lands at the target. The fake chunk’s size field must fall within the fast bin’s size class, and the address must be aligned.
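A sketch of the constraint, using the 64-bit ptmalloc bin-index formula (`fake_chunk_survives` is an illustrative helper, not a glibc function):

```c
#include <stdint.h>

/* Sketch of the size check glibc performs when popping a fast bin:
 * the forged chunk's size (flags masked) must map to the same bin
 * index, or malloc aborts with "malloc(): memory corruption (fast)". */
static unsigned fastbin_index(uint64_t size) {
    return (unsigned)((size >> 4) - 2);       /* 64-bit ptmalloc */
}

static int fake_chunk_survives(uint64_t addr, uint64_t raw_size,
                               unsigned idx) {
    uint64_t size = raw_size & ~0xfULL;       /* ignore P/M/A flags */
    return (addr & 0xf) == 0 && fastbin_index(size) == idx;
}
```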
Unsorted bin attack
Overwrite the `bk` pointer of an unsorted bin chunk. When the allocator processes the chunk, it writes a `main_arena` address (`main_arena+X`) to `bk->fd` — a write of a large libc pointer to an attacker-chosen address. Typically used to overwrite `_IO_list_all` or a malloc hook.
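The leveraged write can be modelled in isolation (field names follow ptmalloc's `malloc_chunk`; `unsorted_remove` is a simplified stand-in for the allocator's unlinking step):

```c
#include <stddef.h>
#include <stdint.h>

struct chunk { uint64_t prev_size, size; struct chunk *fd, *bk; };

/* Model the unsorted-bin removal step: bck = victim->bk; bck->fd = bin.
 * With bk forged to target - offsetof(fd), this plants the bin's
 * address (a main_arena pointer in real glibc) over *target. */
static void unsorted_remove(struct chunk *victim, struct chunk *av_bin) {
    struct chunk *bck = victim->bk;
    bck->fd = av_bin;   /* the attacker-leveraged write */
}
```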
House of Spirit
Free a crafted (fake) chunk to insert it into a bin. Requirements:
- The fake chunk’s size matches the target bin class
- The next fake chunk (at `fake + size`) has the `P` bit clear and a plausible size

The fake chunk’s address is then returned by the next `malloc()` of the matching size.
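A layout-only sketch of the fake chunk: it builds the requirements in a local buffer, and the real attack would then call `free()` on the returned pointer and reallocate it.

```c
#include <stdalign.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical backing buffer for the fake chunk; 16-byte alignment
 * matches what the 64-bit allocator expects. */
static alignas(16) uint64_t fake[16];

static void *craft_fake_chunk(uint64_t size) {
    memset(fake, 0, sizeof fake);
    fake[1] = size;                /* fake size field, P/M/A clear    */
    fake[1 + size / 8] = 0x21;     /* next fake chunk: plausible size */
    return &fake[2];               /* pointer that would be free()'d  */
}
```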
House of Force (pre-glibc 2.29)
Overwrite the top chunk’s size field with `0xffffffffffffffff`. An allocation of size `target - top_addr - overhead` then moves the top chunk to the target, so the following allocation returns memory inside the target region.
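The request-size arithmetic, as a sketch; `overhead` is the chunk-header slack, whose exact value should be verified against the target glibc:

```c
#include <stdint.h>

/* House of Force: with the top chunk size set to -1, request exactly
 * enough that the new top lands at `target`. */
static uint64_t evil_request(uint64_t top_addr, uint64_t target,
                             uint64_t overhead) {
    return target - top_addr - overhead;   /* unsigned wrap also covers
                                              targets below the heap */
}
```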
Tools for heap exploitation
| Tool | Use |
|---|---|
| pwndbg | GDB plugin with `heap`, `bins`, `tcachebins` commands |
| GEF | Alternative GDB plugin with `heap chunks` / `heap bins` |
| heaptrace | Trace malloc/free calls with call sites |
| pwntools | Automate interaction and payload construction |
musl / Alpine notes
The musl `mallocng` allocator used in Alpine Linux differs significantly from ptmalloc:
- Allocations live in mmap’d groups organised by size class (stride)
- Out-of-band metadata with per-group cookies complicates metadata corruption
- A cycling offset may shift the user-data start by `0x10` bytes on slot reuse — verify with muslheap’s `mchunkinfo` before tuning offsets
- Target higher-level objects (e.g., Lua `Table->array` pointers) rather than allocator metadata