The kernel provides several locking primitives, each with different performance characteristics, sleeping behavior, and valid calling contexts. Choosing the wrong primitive leads to deadlocks, priority inversion, or subtle data corruption.

spinlock_t

Busy-wait lock. Never sleeps. Usable from any context, including interrupt handlers, provided the appropriate variant is used.

struct mutex

Sleeping lock. Only usable from process context. Preferred for most driver and subsystem code.

rwlock_t / rw_semaphore

Allow concurrent readers, exclusive writers. Use when reads vastly outnumber writes.

atomic_t

Lock-free integer operations via hardware atomic instructions. No lock overhead.

Spinlocks

Spinlocks are the most basic locking primitive. A thread acquiring a contested spinlock busy-waits (spins) rather than sleeping. This makes them usable from any context — including interrupt handlers — but also means they must be held for very short durations.

Initialization

/* Static */
static DEFINE_SPINLOCK(my_lock);

/* Dynamic */
spinlock_t my_lock;
spin_lock_init(&my_lock);

Core operations

void spin_lock(spinlock_t *lock);
void spin_unlock(spinlock_t *lock);
Acquire or release the spinlock. Use the plain variants only when the lock is never acquired from interrupt or softirq context.
spin_lock(&my_lock);
/* critical section */
spin_unlock(&my_lock);

IRQ-safe variants

void spin_lock_irqsave(spinlock_t *lock, unsigned long flags);
void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags);
  • lock (spinlock_t *, required): The spinlock to acquire or release.
  • flags (unsigned long, required): Storage for the saved interrupt state. Must be declared by the caller as unsigned long flags;.
Disables local interrupts before acquiring the lock, saving the interrupt state in flags. Restores interrupt state on release. Use this variant whenever the lock may also be acquired from an interrupt handler.
unsigned long flags;

spin_lock_irqsave(&my_lock, flags);
/* critical section — interrupts disabled locally */
spin_unlock_irqrestore(&my_lock, flags);
If you hold a spinlock and an interrupt fires on the same CPU, and the interrupt handler tries to acquire the same spinlock, you get a deadlock. Always use spin_lock_irqsave when the lock can be acquired from interrupt context.

BH-safe variants

void spin_lock_bh(spinlock_t *lock);
void spin_unlock_bh(spinlock_t *lock);
Disables software interrupts (bottom halves) before acquiring the lock. Use when the lock is shared between process context and softirq/tasklet handlers but not hardware interrupt handlers.
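A minimal sketch of the BH-safe pattern, with hypothetical names (stats_lock, rx_packets): process context must disable bottom halves while holding the lock, while the softirq side can use the plain variant because softirqs do not preempt one another on the same CPU.

```c
static DEFINE_SPINLOCK(stats_lock);
static unsigned long rx_packets;

/* Process context: disable bottom halves while holding the lock */
void stats_add(unsigned long n)
{
	spin_lock_bh(&stats_lock);
	rx_packets += n;
	spin_unlock_bh(&stats_lock);
}

/* Softirq/tasklet context: plain spin_lock suffices here, since
 * another softirq cannot preempt us on this CPU */
void stats_inc_softirq(void)
{
	spin_lock(&stats_lock);
	rx_packets++;
	spin_unlock(&stats_lock);
}
```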

Mutexes

Mutexes are sleeping locks. A thread that cannot acquire a mutex is put to sleep and woken when the lock becomes available. This makes them suitable for protecting longer critical sections, but they cannot be used from interrupt context.
#include <linux/mutex.h>

Initialization

/* Static */
static DEFINE_MUTEX(my_mutex);

/* Dynamic */
struct mutex my_mutex;
mutex_init(&my_mutex);

Core operations

void mutex_lock(struct mutex *lock);
void mutex_unlock(struct mutex *lock);
int  mutex_trylock(struct mutex *lock);
bool mutex_is_locked(struct mutex *lock);
  • lock (struct mutex *, required): The mutex to operate on.
mutex_lock sleeps until the mutex is acquired. mutex_trylock returns 1 on success and 0 if the mutex is already held — it never sleeps. mutex_unlock releases the lock; only the owner may call it.
mutex_lock(&my_mutex);
/* critical section */
mutex_unlock(&my_mutex);
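Since mutex_trylock never sleeps, it fits paths that would rather skip optional work than wait. A sketch of that pattern:

```c
/* Attempt the lock without sleeping; skip the work if it is busy */
if (mutex_trylock(&my_mutex)) {
	/* acquired: do the optional housekeeping */
	mutex_unlock(&my_mutex);
} else {
	/* lock busy: defer or skip; we never blocked */
}
```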

Interruptible and killable variants

int mutex_lock_interruptible(struct mutex *lock);
int mutex_lock_killable(struct mutex *lock);
Both variants sleep while waiting but can be interrupted. mutex_lock_interruptible returns -EINTR if a signal is delivered. mutex_lock_killable returns -EINTR only for fatal signals. Always check the return value.
if (mutex_lock_interruptible(&my_mutex))
    return -EINTR;
/* critical section */
mutex_unlock(&my_mutex);

Mutex semantics

The kernel enforces the following rules (verified by CONFIG_DEBUG_MUTEXES):
  • Only one task may hold the mutex at a time.
  • Only the owner may call mutex_unlock.
  • Recursive locking is not permitted.
  • A task must not exit while holding a mutex.
  • Mutexes may not be used in interrupt or softirq context.
struct mutex uses optimistic spinning (MCS lock) before falling back to sleeping. In practice this makes it competitive with spinlocks for short critical sections while still allowing the holder to be preempted.

Reader-writer spinlocks

rwlock_t allows multiple concurrent readers or one exclusive writer. Readers do not block each other.
/* Static */
static DEFINE_RWLOCK(my_rwlock);

/* Dynamic */
rwlock_t my_rwlock;
rwlock_init(&my_rwlock);
/* Reader */
unsigned long flags;
read_lock_irqsave(&my_rwlock, flags);
/* read shared data */
read_unlock_irqrestore(&my_rwlock, flags);

/* Writer */
write_lock_irqsave(&my_rwlock, flags);
/* modify shared data */
write_unlock_irqrestore(&my_rwlock, flags);
Reader-writer spinlocks require more atomic memory operations than plain spinlocks. Unless read-side critical sections are long, a plain spinlock is often faster. The kernel is actively removing rwlock_t from many subsystems in favor of RCU. Do not add new uses without prior review.
You cannot upgrade a read lock to a write lock. If you ever need to write — even rarely — acquire the write lock from the start.
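To see why the upgrade deadlocks: write_lock spins until every reader has released, and the would-be upgrader is itself still a reader. A sketch of the broken pattern and its fix (needs_update is a hypothetical condition):

```c
/* WRONG: write_lock waits for all readers to drain, including us */
read_lock(&my_rwlock);
if (needs_update)
	write_lock(&my_rwlock);		/* deadlocks here */

/* RIGHT: take the write lock from the start when an update is possible */
write_lock(&my_rwlock);
if (needs_update) {
	/* modify shared data */
}
write_unlock(&my_rwlock);
```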

Reader-writer semaphores

struct rw_semaphore is the sleeping equivalent of rwlock_t. It allows concurrent readers and exclusive writers, and the holder may sleep.
#include <linux/rwsem.h>

/* Static */
static DECLARE_RWSEM(my_rwsem);

/* Dynamic */
struct rw_semaphore my_rwsem;
init_rwsem(&my_rwsem);
void down_read(struct rw_semaphore *sem);
void up_read(struct rw_semaphore *sem);
void down_write(struct rw_semaphore *sem);
void up_write(struct rw_semaphore *sem);
int  down_read_trylock(struct rw_semaphore *sem);   /* 1 = acquired */
int  down_write_trylock(struct rw_semaphore *sem);  /* 1 = acquired */
/* Read-side */
down_read(&my_rwsem);
/* read shared data */
up_read(&my_rwsem);

/* Write-side */
down_write(&my_rwsem);
/* modify shared data */
up_write(&my_rwsem);
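The trylock variants return 1 on success and 0 on contention without sleeping, which is useful when a read path should fail fast rather than block behind a writer. A sketch:

```c
/* Fail fast instead of sleeping behind an active writer */
if (down_read_trylock(&my_rwsem)) {
	/* read shared data */
	up_read(&my_rwsem);
} else {
	/* writer active: retry later or return -EAGAIN */
}
```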

Atomic operations

For single-variable shared state, atomic operations avoid the overhead of a lock entirely.
#include <linux/atomic.h>

typedef struct { int counter; } atomic_t;

Core operations

void atomic_set(atomic_t *v, int i);       /* v->counter = i */
int  atomic_read(const atomic_t *v);       /* return v->counter */
void atomic_inc(atomic_t *v);              /* v->counter++ */
void atomic_dec(atomic_t *v);              /* v->counter-- */
int  atomic_dec_and_test(atomic_t *v);     /* decrement; return true if 0 */
int  atomic_inc_and_test(atomic_t *v);     /* increment; return true if 0 */
int  atomic_add_return(int i, atomic_t *v); /* v += i; return new value */
int  atomic_sub_return(int i, atomic_t *v); /* v -= i; return new value */
int  atomic_cmpxchg(atomic_t *v, int old, int new); /* CAS; return old value */
static atomic_t refcount = ATOMIC_INIT(0);

atomic_inc(&refcount);
/* ... */
if (atomic_dec_and_test(&refcount))
    /* last reference — clean up */;
For reference counting, prefer struct kref (which wraps atomic_t) or refcount_t (which adds overflow protection) over raw atomic_t.
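A sketch of the refcount_t pattern (struct and function names are illustrative): refcount_t saturates instead of wrapping on overflow, turning a refcount bug into a warning rather than a use-after-free.

```c
#include <linux/refcount.h>
#include <linux/slab.h>

struct my_obj {
	refcount_t refs;
	/* ... payload ... */
};

void my_obj_get(struct my_obj *obj)
{
	refcount_inc(&obj->refs);
}

void my_obj_put(struct my_obj *obj)
{
	/* free only when the last reference is dropped */
	if (refcount_dec_and_test(&obj->refs))
		kfree(obj);
}
```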

64-bit atomics

typedef struct { s64 counter; } atomic64_t;

void atomic64_set(atomic64_t *v, s64 i);
s64  atomic64_read(const atomic64_t *v);
void atomic64_inc(atomic64_t *v);
void atomic64_dec(atomic64_t *v);
s64  atomic64_add_return(s64 i, atomic64_t *v);
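A sketch of a typical use: a byte counter that would wrap a 32-bit atomic_t (names are illustrative). On 32-bit architectures atomic64_t operations may be implemented with a lock internally, but the API is the same.

```c
/* 64-bit counter that must not wrap, even on 32-bit systems */
static atomic64_t bytes_total = ATOMIC64_INIT(0);

void account_bytes(size_t n)
{
	atomic64_add(n, &bytes_total);
}

s64 read_bytes_total(void)
{
	return atomic64_read(&bytes_total);
}
```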

Lockdep — lock ordering validation

Lockdep is the kernel’s runtime lock dependency validator. It detects potential deadlocks by tracking the order in which locks are acquired and flagging circular dependency chains.
/* Annotate a lock with a custom class key to teach lockdep
 * that two logically different locks of the same type are
 * in distinct dependency classes. */
static struct lock_class_key my_lock_key;
lockdep_set_class(&my_lock, &my_lock_key);
Enable lockdep with CONFIG_PROVE_LOCKING=y and CONFIG_DEBUG_LOCKDEP=y. Violations are printed to the kernel log with a full dependency chain.

Lock ordering rule: always acquire locks in a consistent global order. If lock A is ever acquired while B is held, then B must never be acquired while A is held anywhere else in the kernel.
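When a fixed hierarchy forces you to take two locks of the same class (so lockdep would otherwise report a false self-deadlock), the inner acquisition can be annotated with a distinct subclass. A sketch with hypothetical node/parent structures:

```c
struct node {
	struct mutex lock;
	struct node *parent;
};

void move_node(struct node *parent, struct node *child)
{
	/* Global order: parent before child, everywhere in the code */
	mutex_lock(&parent->lock);
	/* Same lock class as parent->lock; tell lockdep this nesting
	 * is intentional rather than a recursive acquisition */
	mutex_lock_nested(&child->lock, SINGLE_DEPTH_NESTING);
	/* ... re-link child under parent ... */
	mutex_unlock(&child->lock);
	mutex_unlock(&parent->lock);
}
```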

Choosing the right lock

When in doubt, use struct mutex. It is the safest and most debuggable sleeping lock. Prefer mutex_lock_interruptible or mutex_lock_killable in paths that may block for a long time, so that the waiting task can still be interrupted or killed.
Primitive      Can sleep  IRQ-safe             Use case
spinlock_t     No         Yes (with _irqsave)  Short critical sections, any context
struct mutex   Yes        No                   Process context, longer critical sections
rwlock_t       No         Yes (with _irqsave)  Read-heavy, short sections, any context
rw_semaphore   Yes        No                   Read-heavy, process context
atomic_t       No         Yes                  Single integer, lock-free
