The Thread Cache (tcache) is a per-thread caching mechanism introduced in glibc 2.26 to improve malloc performance. Each thread maintains 64 singly-linked lists of freed chunks, called tcache bins, organized by size. Understanding how requested sizes map to tcache indices is crucial for heap exploitation.
The tcache is consulted before any other bin, making it the fastest allocation path and a prime target for exploitation.
Here’s an interactive calculator for tcache indices:
calc_tcache_idx.c
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stddef.h>   /* for offsetof */
#include <malloc.h>

struct malloc_chunk {
  size_t mchunk_prev_size;  /* Size of previous chunk (if free).  */
  size_t mchunk_size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;  /* double links -- used only if free. */
  struct malloc_chunk* bk;

  /* Only used for large blocks: pointer to next larger size.  */
  struct malloc_chunk* fd_nextsize; /* double links -- used only if free. */
  struct malloc_chunk* bk_nextsize;
};

/* The corresponding word size. */
#define SIZE_SZ (sizeof (size_t))

#define MALLOC_ALIGNMENT (2 * SIZE_SZ < __alignof__ (long double) \
                          ? __alignof__ (long double) : 2 * SIZE_SZ)

/* The corresponding bit mask value. */
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)

/* The smallest possible chunk */
#define MIN_CHUNK_SIZE (offsetof(struct malloc_chunk, fd_nextsize))

/* The smallest size we can malloc is an aligned minimal chunk */
#define MINSIZE \
  (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))

#define request2size(req)                                 \
  (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?     \
   MINSIZE :                                              \
   ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

/* When "x" is from chunksize(). */
#define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)
/* When "x" is a user-provided size. */
#define usize2tidx(x) csize2tidx (request2size (x))

int main()
{
    unsigned long long req;
    unsigned long long tidx;
    fprintf(stderr, "This file doesn't demonstrate an attack, but calculates the tcache idx for a given chunk size.\n");
    fprintf(stderr, "The basic formula is as follows:\n");
    fprintf(stderr, "\tIDX = (CHUNKSIZE - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT\n");
    fprintf(stderr, "\tOn a 64 bit system the current values are:\n");
    fprintf(stderr, "\t\tMINSIZE: 0x%lx\n", MINSIZE);
    fprintf(stderr, "\t\tMALLOC_ALIGNMENT: 0x%lx\n", MALLOC_ALIGNMENT);
    fprintf(stderr, "\tSo we get the following equation:\n");
    fprintf(stderr, "\tIDX = (CHUNKSIZE - 0x%lx) / 0x%lx\n\n", MINSIZE-MALLOC_ALIGNMENT+1, MALLOC_ALIGNMENT);
    fprintf(stderr, "BUT be AWARE that CHUNKSIZE is not the x in malloc(x)\n");
    fprintf(stderr, "It is calculated as follows:\n");
    fprintf(stderr, "\tIF x + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE(0x%lx): CHUNKSIZE = MINSIZE (0x%lx)\n", MINSIZE, MINSIZE);
    fprintf(stderr, "\tELSE: CHUNKSIZE = (x + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK\n");
    fprintf(stderr, "\t=> CHUNKSIZE = (x + 0x%lx + 0x%lx) & ~0x%lx\n\n\n", SIZE_SZ, MALLOC_ALIGN_MASK, MALLOC_ALIGN_MASK);

    while(1) {
        fprintf(stderr, "[CTRL-C to exit] Please enter a size x (malloc(x)) in hex (e.g. 0x10): ");
        scanf("%llx", &req);
        tidx = usize2tidx(req);
        if (tidx > 63) {
            fprintf(stderr, "\nWARNING: NOT IN TCACHE RANGE!\n");
        }
        fprintf(stderr, "\nTCache Idx: %llu\n", tidx);
    }
    return 0;
}
```
By controlling the forward pointer of a freed chunk in tcache, you can make malloc return an arbitrary address:
```c
// Allocate and free a chunk to get it in tcache
char* a = malloc(0x88); // Goes to tcache bin 7
free(a);
// Overflow or UAF to overwrite a's forward pointer
*(long*)a = 0x41414141; // Point to attacker-controlled address
// Subsequent allocations from this bin walk the corrupted list
char* b = malloc(0x88); // Gets original chunk
char* c = malloc(0x88); // Gets 0x41414141!
```
Modern glibc (2.32+) includes Safe-Linking, which stores each forward pointer XORed with its own address shifted right by 12 bits; poisoning then requires a heap-address leak to forge a valid mangled pointer. On older versions, tcache poisoning remains straightforward.
Freeing the same chunk twice links it into its tcache bin twice, so two subsequent allocations return the same pointer:

```c
char* a = malloc(0x88);
free(a);
free(a); // Double free - creates a loop in tcache
char* b = malloc(0x88); // Gets a
char* c = malloc(0x88); // Gets a again!
// Now b and c point to the same memory
```
In glibc 2.29+, a double-free check was added: when a chunk enters tcache, its second word is set to a per-thread key (a pointer to the tcache_perthread_struct), and free() aborts if that key matches and the chunk is found in its bin. Because the key lives in attacker-reachable chunk data, overwriting it (e.g. via a use-after-free write) bypasses the check.
If you can corrupt a chunk’s size field, you can control which tcache bin it goes to:
```c
char* a = malloc(0x88); // Chunk size 0x90, should go to bin 7
// Corrupt size field via overflow
*(size_t*)((char*)a - 8) = 0x40; // Change to size 0x40
free(a); // Now goes to tcache bin 2 instead of 7!
char* b = malloc(0x30); // Gets the chunk from bin 2
// Unexpected memory reuse
```
Fill tcache before exploiting: Many exploits require filling a tcache bin (7 chunks per bin by default) so that subsequent frees of that size go to other bins like fastbins or the unsorted bin.
```c
for (int i = 0; i < 7; i++) {
    free(chunks[i]); // Fill tcache bin
}
free(chunks[7]); // This one goes to fastbin/unsorted bin
```
Avoid tcache with large sizes: Allocate more than 0x408 bytes (the largest tcache-eligible request, chunk size 0x410) to bypass tcache entirely:
```c
char* a = malloc(0x500); // Too large for tcache
free(a); // Goes directly to unsorted bin
```
Use the calculator during CTFs: This program is invaluable during competitions for quickly determining which bin your allocations will use.