
Overview

The Thread Cache (tcache) is a per-thread caching mechanism introduced in glibc 2.26 to improve malloc performance. Each thread maintains 64 singly-linked lists of freed chunks, called tcache bins, organized by size. Understanding how requested sizes map to tcache indices is crucial for heap exploitation.
tcache is checked before any other bins, making it the fastest allocation path and a prime target for exploitation.

What is tcache?

1. Per-thread cache: Each thread has its own tcache, eliminating lock contention for small allocations.
2. 64 bins: Bins are indexed 0-63, covering chunk sizes from 0x20 to 0x410 on 64-bit.
3. 7 chunks per bin: By default, each bin holds up to 7 freed chunks (configurable via the tcache_count tunable).
4. Singly-linked list: Chunks in tcache are linked via a forward pointer only, unlike the doubly-linked small, large, and unsorted bins.
5. LIFO allocation: The last chunk freed is the first to be reallocated (Last-In, First-Out).

tcache has minimal security checks compared to other bins, making it a popular exploitation target.
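When experimenting, tcache behavior can be adjusted at run time through glibc's malloc tunables (tunable names per the glibc manual; `./target` is a placeholder for your binary):

```shell
# Disable tcache entirely so frees land in fastbins/unsorted bin instead
GLIBC_TUNABLES=glibc.malloc.tcache_count=0 ./target

# Raise the per-bin limit from the default of 7 cached chunks
GLIBC_TUNABLES=glibc.malloc.tcache_count=32 ./target
```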

Index Calculation Formula

The tcache index is calculated differently depending on whether you have a chunk size or a requested size:

From Chunk Size

#define SIZE_SZ (sizeof(size_t))                    // 8 on 64-bit
#define MALLOC_ALIGNMENT (2 * SIZE_SZ)              // 16 on 64-bit
#define MINSIZE (offsetof(struct malloc_chunk, fd_nextsize))  // 32 on 64-bit

// When "x" is from chunksize()
#define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)
On 64-bit systems:
IDX = (CHUNKSIZE - 0x11) / 0x10

From User Request Size

// Convert user request to chunk size
#define request2size(req) \
  (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \
   MINSIZE : \
   ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

// Then calculate index
#define usize2tidx(x) csize2tidx(request2size(x))
On 64-bit systems:
IF (request + 0x8 + 0xf) < 0x20:
    CHUNKSIZE = 0x20
ELSE:
    CHUNKSIZE = (request + 0x17) & ~0xf

IDX = (CHUNKSIZE - 0x11) / 0x10
The key insight: chunk sizes are aligned to 0x10 bytes, so each tcache index represents a 0x10-byte size class.

Demonstration Program

Here’s an interactive calculator for tcache indices:
calc_tcache_idx.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <malloc.h>

struct malloc_chunk {
  size_t      mchunk_prev_size;  /* Size of previous chunk (if free).  */
  size_t      mchunk_size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;

  /* Only used for large blocks: pointer to next larger size.  */
  struct malloc_chunk* fd_nextsize; /* double links -- used only if free. */
  struct malloc_chunk* bk_nextsize;
};

/* The corresponding word size.  */
#define SIZE_SZ (sizeof (size_t))

#define MALLOC_ALIGNMENT (2 * SIZE_SZ < __alignof__ (long double) \
			  ? __alignof__ (long double) : 2 * SIZE_SZ)

/* The corresponding bit mask value.  */
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)

/* The smallest possible chunk */
#define MIN_CHUNK_SIZE        (offsetof(struct malloc_chunk, fd_nextsize))

/* The smallest size we can malloc is an aligned minimal chunk */
#define MINSIZE  \
  (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))

#define request2size(req)                                         \
  (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
   MINSIZE :                                                      \
   ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

/* When "x" is from chunksize().  */
# define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)

/* When "x" is a user-provided size.  */
# define usize2tidx(x) csize2tidx (request2size (x))

int main()
{
    unsigned long long req;
    unsigned long long tidx;
    fprintf(stderr, "This file doesn't demonstrate an attack, but calculates the tcache idx for a given chunk size.\n");
    fprintf(stderr, "The basic formula is as follows:\n");
    fprintf(stderr, "\tIDX = (CHUNKSIZE - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT\n");
    fprintf(stderr, "\tOn a 64 bit system the current values are:\n");
    fprintf(stderr, "\t\tMINSIZE: 0x%lx\n", MINSIZE);
    fprintf(stderr, "\t\tMALLOC_ALIGNMENT: 0x%lx\n", MALLOC_ALIGNMENT);
    fprintf(stderr, "\tSo we get the following equation:\n");
    fprintf(stderr, "\tIDX = (CHUNKSIZE - 0x%lx) / 0x%lx\n\n", MINSIZE-MALLOC_ALIGNMENT+1, MALLOC_ALIGNMENT);
    fprintf(stderr, "BUT be AWARE that CHUNKSIZE is not the x in malloc(x)\n");
    fprintf(stderr, "It is calculated as follows:\n");
    fprintf(stderr, "\tIF x + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE(0x%lx) CHUNKSIZE = MINSIZE (0x%lx)\n", MINSIZE, MINSIZE);
    fprintf(stderr, "\tELSE: CHUNKSIZE = (x + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK) \n");
    fprintf(stderr, "\t=> CHUNKSIZE = (x + 0x%lx + 0x%lx) & ~0x%lx\n\n\n", SIZE_SZ, MALLOC_ALIGN_MASK, MALLOC_ALIGN_MASK);
    while(1) {
        fprintf(stderr, "[CTRL-C to exit] Please enter a size x (malloc(x)) in hex (e.g. 0x10): ");
        scanf("%llx", &req);
        tidx = usize2tidx(req);
        if (tidx > 63) {
            fprintf(stderr, "\nWARNING: NOT IN TCACHE RANGE!\n");
        }
        fprintf(stderr, "\nTCache Idx: %llu\n", tidx);
    }
    return 0;
}

Running the Calculator

gcc calc_tcache_idx.c -o calc_tcache_idx
./calc_tcache_idx

Size to Index Mapping

Here’s a reference table for common allocations on 64-bit systems:
malloc(x)        Chunk Size   tcache Index   Notes
0x1 - 0x18       0x20         0              Minimum size
0x19 - 0x28      0x30         1
0x29 - 0x38      0x40         2
0x39 - 0x48      0x50         3
0x49 - 0x58      0x60         4
0x59 - 0x68      0x70         5
0x69 - 0x78      0x80         6
0x79 - 0x88      0x90         7
...
0x3f9 - 0x408    0x410        63             Max tcache
0x409+           0x420+       N/A            Too large for tcache
Pattern: Each index covers a 0x10-byte range of chunk sizes. To find the index for a chunk size, use: (chunksize - 0x20) / 0x10

Step-by-Step Calculation Example

Let’s calculate the tcache index for malloc(0x88):
1. Convert request to chunk size:

    req = 0x88
    chunksize = (req + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK
    chunksize = (0x88 + 0x8 + 0xf) & ~0xf
    chunksize = 0x9f & 0xfffffffffffffff0
    chunksize = 0x90

2. Calculate tcache index:

    idx = (chunksize - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT
    idx = (0x90 - 0x20 + 0x10 - 1) / 0x10
    idx = 0x7f / 0x10
    idx = 7

3. Verify range:

    if (idx > 63) {
        // Too large for tcache; falls back to other bins
    }
    // idx = 7 is valid
Quick calculation: For chunk size 0x90, use: (0x90 - 0x20) / 0x10 = 0x70 / 0x10 = 7

Exploitation Implications

tcache Poisoning

By controlling the forward pointer of a freed chunk in tcache, you can make malloc return an arbitrary address:
// Allocate and free a chunk to get it in tcache
char* a = malloc(0x88);  // Goes to tcache bin 7
free(a);

// Overflow or UAF to overwrite a's forward pointer
*(long*)a = 0x41414141;  // Point to attacker-controlled address

// Next allocation from this bin returns the controlled address
char* b = malloc(0x88);  // Gets original chunk
char* c = malloc(0x88);  // Gets 0x41414141!
Modern glibc (2.32+) includes safe linking to mitigate this, but tcache poisoning remains effective on older versions.

tcache Dup (Double Free)

char* a = malloc(0x88);
free(a);
free(a);  // Double free - creates a loop in tcache

char* b = malloc(0x88);  // Gets a
char* c = malloc(0x88);  // Gets a again!
// Now b and c point to the same memory
In glibc 2.29+, a double-free check was added: each chunk placed in tcache is stamped with a key field, and free() scans the matching bin for the chunk whenever the key matches. It can still be bypassed by clobbering the key (e.g. via UAF) or by routing the second free through a different bin, as in House of Botcake.

Size Confusion

If you can corrupt a chunk’s size field, you can control which tcache bin it goes to:
char* a = malloc(0x88);  // Chunk size 0x90, should go to bin 7

// Corrupt size field via overflow
*(size_t*)((char*)a - 8) = 0x40;  // Change to size 0x40

free(a);  // Now goes to tcache bin 2 instead of 7!

char* b = malloc(0x30);  // Gets the chunk from bin 2
// Unexpected memory reuse

Why tcache Index Matters

Understanding tcache indexing is critical for:

  • Heap Grooming: Precisely control which bins are filled to influence allocation behavior.
  • Size Confusion: Exploit size-field corruption to misplace chunks in bins.
  • tcache Poisoning: Target specific bins for arbitrary-allocation exploits.
  • Bypass Mitigations: Know which sizes avoid security checks.

Practical Tips

Fill tcache before exploiting: Many exploits require filling tcache bins (7 chunks) so that subsequent frees go to other bins like fastbins or unsorted bins.
for (int i = 0; i < 7; i++) {
    free(chunks[i]);  // Fill tcache bin
}
free(chunks[7]);  // This one goes to fastbin/unsorted bin
Avoid tcache with large sizes: Request more than 0x408 bytes (chunk size above 0x410) to bypass tcache entirely:
char* a = malloc(0x500);  // Too large for tcache
free(a);  // Goes directly to unsorted bin
Use the calculator during CTFs: This program is invaluable during competitions for quickly determining which bin your allocations will use.

tcache vs Other Bins

Feature          tcache     fastbins   smallbins
Speed            Fastest    Fast       Medium
Checks           Minimal    Medium     More
Links            Singly     Singly     Doubly
Max chunk size   0x410      0x80       0x3f0
Per-thread       Yes        No         No
Coalescing       No         No         Yes
tcache’s minimal security checks and per-thread nature make it the easiest target for exploitation on modern glibc.

Related Topics

  • tcache Poisoning: Full exploitation technique using tcache
  • House of Botcake: Modern tcache double-free bypass
  • Heap Basics: Overview of all bin types
  • First Fit: Allocation strategy fundamentals

Summary

Key Takeaways:
  • tcache uses 64 bins (indices 0-63) for chunks up to 0x410 bytes
  • Index formula: (chunksize - 0x20) / 0x10 on 64-bit
  • User request size must be converted to chunk size first
  • tcache has minimal security checks, making it exploitable
  • Understanding index calculation is essential for heap grooming
