Linux provides several memory allocation APIs suited to different sizes, contexts, and lifetime requirements. Most allocations should use kzalloc(size, GFP_KERNEL). Understanding when to reach for other allocators — and which GFP flags to pass — is essential to writing correct, efficient kernel code.

Slab allocator

kmalloc / kzalloc for small objects up to page size. Physically contiguous, cache-aligned.

vmalloc

Large, virtually contiguous mappings that do not need to be physically contiguous.

Page allocator

Direct page-level allocation with alloc_pages / __get_free_pages for power-of-two page counts.

Managed allocation

devm_kmalloc and friends free memory automatically when a device is unbound.

GFP flags

All allocation APIs accept a gfp_t bitmask that controls reclaim behavior, zone selection, and other attributes. The GFP acronym stands for get free pages, the underlying page-allocation function.

Common flag combinations

#define GFP_KERNEL  (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
The default for kernel-internal allocations. Allows direct reclaim, filesystem operations, and physical I/O. The calling context must be allowed to sleep. Use for: data structures, caches, driver allocations from process context.
#define GFP_ATOMIC  (__GFP_HIGH | __GFP_KSWAPD_RECLAIM)
Non-sleeping allocation that may dip into emergency memory reserves. Use in interrupt handlers, softirqs, and other atomic contexts where sleeping is prohibited. Failures are more likely under memory pressure — always check the return value.
GFP_ATOMIC is not supported in NMI context or in contexts that disable preemption under PREEMPT_RT. For those, use a pre-allocated pool.
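A minimal sketch of an atomic-context allocation. The handler, the struct my_event type, and queue_event() are hypothetical; the point is the flag choice and the failure path:

```c
/* Inside an IRQ handler sleeping is forbidden, so GFP_KERNEL is not an
 * option. Use GFP_ATOMIC and degrade gracefully if it fails. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	struct my_event *ev = kzalloc(sizeof(*ev), GFP_ATOMIC);

	if (!ev)
		return IRQ_HANDLED;	/* under pressure: drop the event */

	ev->timestamp = ktime_get();
	queue_event(ev);		/* hypothetical consumer */
	return IRQ_HANDLED;
}
```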
#define GFP_NOWAIT  (__GFP_KSWAPD_RECLAIM | __GFP_NOWARN)
Optimistic allocation without direct reclaim. Wakes kswapd if the zone is below the low watermark but does not block. Suitable when a fallback path exists for failure. Shares the same calling-context restrictions as GFP_ATOMIC.
#define GFP_DMA   __GFP_DMA
#define GFP_DMA32 __GFP_DMA32
Restrict allocation to the ZONE_DMA or ZONE_DMA32 memory zone for hardware with limited addressing. Prefer the dma_alloc_* APIs over these flags for DMA buffers.

Reclaim behavior modifiers

| Combination | Reclaim behavior |
| --- | --- |
| GFP_KERNEL & ~__GFP_RECLAIM | No reclaim attempt at all; lightest weight |
| GFP_NOWAIT | Wakes kswapd; no direct reclaim |
| GFP_ATOMIC | No direct reclaim; uses emergency reserves |
| GFP_KERNEL | Background and direct reclaim; standard behavior |
| GFP_KERNEL \| __GFP_NORETRY | Backs off after one round of reclaim; no OOM killer |
| GFP_KERNEL \| __GFP_RETRY_MAYFAIL | Retries hard; fails rather than invoking the OOM killer |
| GFP_KERNEL \| __GFP_NOFAIL | Loops until success; dangerous for large orders |
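As an example of these modifiers, __GFP_NORETRY suits allocations that are nice to have but not required (buffer size and message are illustrative):

```c
/* Optional speed-up buffer: under memory pressure, give up quickly
 * rather than triggering heavy reclaim or the OOM killer. */
void *cache = kmalloc(64 * 1024, GFP_KERNEL | __GFP_NORETRY);
if (!cache)
	pr_debug("running without the optional cache\n");
```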

Slab allocator

The slab allocator (kmalloc family) handles small to medium allocations and is most efficient below page size; requests beyond a couple of pages are passed through to the page allocator internally. Objects are physically contiguous and aligned to at least ARCH_KMALLOC_MINALIGN bytes. For power-of-two sizes the alignment is at least the size itself.

kmalloc

void *kmalloc(size_t size, gfp_t flags);
Allocate size bytes of physically contiguous, kernel-accessible memory. Returns a pointer to the allocation or NULL on failure.
size
size_t
required
Number of bytes to allocate. Most efficient below page size; the maximum is architecture and configuration dependent (typically 4 MB).
flags
gfp_t
required
GFP flags controlling reclaim behavior and zone selection. Use GFP_KERNEL from process context, GFP_ATOMIC from interrupt context.
/* Typical usage */
struct my_data *p = kmalloc(sizeof(*p), GFP_KERNEL);
if (!p)
    return -ENOMEM;

kzalloc

void *kzalloc(size_t size, gfp_t flags);
Identical to kmalloc but zeroes the allocated memory before returning. Prefer kzalloc over kmalloc to avoid uninitialized-memory bugs.
struct my_data *p = kzalloc(sizeof(*p), GFP_KERNEL);

krealloc

void *krealloc(const void *p, size_t new_size, gfp_t flags);
Resize a kmalloc-allocated block. If new_size is zero, the block is freed and ZERO_SIZE_PTR is returned. The contents up to min(old_size, new_size) are preserved.
p
const void *
required
Pointer previously returned by kmalloc, kzalloc, or krealloc. May be NULL, in which case this behaves like kmalloc.
new_size
size_t
required
Desired new size in bytes.
flags
gfp_t
required
GFP flags for any new allocation needed.
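A sketch of the usual grow-an-array pattern. Assigning to a temporary first avoids leaking the old block on failure ('items', 'cap', and 'new_cap' are hypothetical):

```c
/* On failure krealloc leaves the original block intact, so keep the old
 * pointer until the resize is known to have succeeded. */
struct item *tmp = krealloc(items, new_cap * sizeof(*items), GFP_KERNEL);
if (!tmp)
	return -ENOMEM;		/* 'items' is still valid and still owned */
items = tmp;
cap = new_cap;
```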

kfree

void kfree(const void *objp);
Free memory previously allocated with kmalloc, kzalloc, krealloc, or kmem_cache_alloc. Passing NULL is safe and is a no-op.
Never call kfree on memory allocated with vmalloc — use vfree. Never double-free. After kfree, set the pointer to NULL to catch use-after-free bugs.

Array helpers

void *kmalloc_array(size_t n, size_t size, gfp_t flags);
void *kcalloc(size_t n, size_t size, gfp_t flags);       /* zeroed */
Allocate an array of n elements each of size bytes. These helpers check for multiplication overflow, making them safer than kmalloc(n * size, flags).
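A minimal usage sketch ('n' and struct entry are illustrative):

```c
/* n elements, multiplication checked for overflow; kcalloc also zeroes */
struct entry *tab = kcalloc(n, sizeof(*tab), GFP_KERNEL);
if (!tab)
	return -ENOMEM;
/* ... use tab[0..n-1] ... */
kfree(tab);
```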

vmalloc — virtually contiguous allocations

For large allocations that do not need to be physically contiguous, vmalloc maps a range of virtual address space backed by individual pages from the page allocator.
void *vmalloc(unsigned long size);
void *vzalloc(unsigned long size);   /* zeroed */
void  vfree(const void *addr);
size
unsigned long
required
Number of bytes to allocate. No hard upper limit beyond available virtual address space and physical memory.
vmalloc always implies GFP_KERNEL semantics — it may sleep. Do not call it from atomic context.
char *buf = vmalloc(1024 * 1024);  /* 1 MiB */
if (!buf)
    return -ENOMEM;
/* ... use buf ... */
vfree(buf);

kvmalloc — best-effort contiguous

void *kvmalloc(size_t size, gfp_t flags);
void  kvfree(const void *addr);
kvmalloc first tries kmalloc; if that fails it falls back to vmalloc. The returned memory may or may not be physically contiguous. Use kvfree to release memory from either path.
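A sketch for a table whose size is only known at runtime ('nr_entries' is hypothetical; array_size() from the overflow helpers guards the multiplication):

```c
/* Small tables get fast, contiguous kmalloc memory; huge ones fall back
 * to vmalloc transparently. Either way, release with kvfree. */
u32 *table = kvmalloc(array_size(nr_entries, sizeof(*table)), GFP_KERNEL);
if (!table)
	return -ENOMEM;
/* ... */
kvfree(table);
```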

Page allocator

For allocations measured in pages (always power-of-two counts), use the page allocator directly.
unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
void          free_pages(unsigned long addr, unsigned int order);

struct page  *alloc_pages(gfp_t gfp_mask, unsigned int order);
void          __free_pages(struct page *page, unsigned int order);
order
unsigned int
required
Allocate 2^order contiguous pages. Order 0 = one page (typically 4 KiB). Maximum practical order is MAX_ORDER (usually 10 or 11).
/* Allocate 4 contiguous pages (order 2 = 2^2 = 4 pages) */
unsigned long addr = __get_free_pages(GFP_KERNEL, 2);
if (!addr)
    return -ENOMEM;
/* ... */
free_pages(addr, 2);
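The struct page variant is useful when the page descriptor itself is needed (for example, to build a scatterlist); page_address() converts it to a kernel virtual address:

```c
/* Single page via the struct page interface */
struct page *pg = alloc_pages(GFP_KERNEL, 0);
if (!pg)
	return -ENOMEM;
void *va = page_address(pg);	/* kernel mapping of the page */
/* ... */
__free_pages(pg, 0);
```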

Slab cache — kmem_cache

For high-volume allocation of identically sized objects, create a dedicated slab cache. This reduces fragmentation and enables per-object constructors.
struct kmem_cache *kmem_cache_create(
    const char *name,
    unsigned int object_size,
    struct kmem_cache_args *args,
    slab_flags_t flags);

void  kmem_cache_destroy(struct kmem_cache *s);
void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags);
void  kmem_cache_free(struct kmem_cache *cachep, void *objp);
static struct kmem_cache *my_cache;

static int __init my_init(void)
{
    my_cache = KMEM_CACHE(my_struct, SLAB_HWCACHE_ALIGN);
    if (!my_cache)
        return -ENOMEM;
    return 0;
}

static void __exit my_exit(void)
{
    kmem_cache_destroy(my_cache);
}
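Once the cache exists, objects are allocated and returned with the kmem_cache_alloc / kmem_cache_free pair (struct my_struct matches the cache created in the init example above):

```c
/* Per-object allocation from the dedicated cache */
struct my_struct *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
if (!obj)
	return -ENOMEM;
/* ... use obj ... */
kmem_cache_free(my_cache, obj);
```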

Managed allocations — devm_kmalloc

Device-managed allocations are freed automatically when the device is detached. This eliminates the need to track and free allocations in error paths and .remove callbacks.
void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp);
void *devm_kzalloc(struct device *dev, size_t size, gfp_t gfp);
void *devm_krealloc(struct device *dev, void *ptr, size_t size, gfp_t gfp);
void  devm_kfree(struct device *dev, const void *p);
dev
struct device *
required
The device whose lifetime governs this allocation. The memory is freed when dev is unbound.
static int my_probe(struct platform_device *pdev)
{
    struct my_priv *priv;

    priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
    if (!priv)
        return -ENOMEM;

    /* No need to free priv on error or in .remove */
    platform_set_drvdata(pdev, priv);
    return 0;
}

Choosing the right allocator

1

Default choice

Use kzalloc(size, GFP_KERNEL) for any small object (under ~4 KiB) from process context. The zeroing eliminates a class of initialization bugs.
2

Atomic context

If the allocation occurs in an interrupt handler, softirq, or with a spinlock held, use GFP_ATOMIC instead of GFP_KERNEL. Always provide a fallback for failure.
3

Large allocations

For allocations larger than a page that do not require physical contiguity, use vmalloc. If the size is only known at runtime and may be either small or large, use kvmalloc.
4

Many identical objects

For a hot allocation path creating many identical structures, create a dedicated slab cache with kmem_cache_create / KMEM_CACHE. This improves locality and reduces fragmentation.
5

Driver probe/remove

In device drivers, use devm_kzalloc so that memory is released automatically when the device is detached, simplifying error handling.
6

DMA buffers

For DMA-capable memory, use the dma_alloc_coherent / dma_alloc_noncoherent APIs rather than GFP_DMA. These handle IOMMU mapping and cache coherency correctly.
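A sketch of a coherent DMA buffer in a probe path ('dev' is the driver's struct device; BUF_SIZE is illustrative):

```c
/* dma_alloc_coherent returns a CPU-visible buffer plus the bus address
 * the device should be programmed with; no explicit cache maintenance
 * is needed for coherent memory. */
dma_addr_t dma_handle;
void *cpu_buf = dma_alloc_coherent(dev, BUF_SIZE, &dma_handle, GFP_KERNEL);
if (!cpu_buf)
	return -ENOMEM;
/* program 'dma_handle' into the device; CPU accesses go via 'cpu_buf' */
dma_free_coherent(dev, BUF_SIZE, cpu_buf, dma_handle);
```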
Use struct_size(), array_size(), and array3_size() to calculate allocation sizes safely. These helpers detect and reject overflow that would otherwise produce undersized buffers.
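The most common use of struct_size() is sizing a structure with a flexible array member (struct report and 'n' are illustrative):

```c
struct report {
	size_t count;
	u64 samples[];		/* flexible array member */
};

/* struct_size(r, samples, n) == sizeof(*r) + n * sizeof(r->samples[0]),
 * saturating on overflow so the allocation fails instead of undersizing */
struct report *r = kzalloc(struct_size(r, samples, n), GFP_KERNEL);
if (!r)
	return -ENOMEM;
r->count = n;
```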
