The Linux kernel is the core of any Linux operating system. It manages hardware and system resources, and provides the fundamental services—process scheduling, memory management, filesystem access, and networking—that every program running on the system depends on. First released by Linus Torvalds in 1991, the kernel is today developed by thousands of contributors worldwide and is licensed under the GNU General Public License version 2 (GPL-2.0).

Build the kernel

Install dependencies, configure, compile, and install a custom kernel on your system.

Contribute patches

Learn the process for writing, formatting, and submitting patches to the upstream kernel.

Kernel documentation

Browse the full kernel documentation rendered from the Documentation/ tree.

Mailing lists

Search and subscribe to kernel subsystem mailing lists at lore.kernel.org.

What the kernel does

The Linux kernel sits between user-space programs and physical hardware. It is responsible for:
  • Process management — creating, scheduling, and terminating processes and threads.
  • Memory management — virtual memory, demand paging, slab allocation, and memory protection.
  • Device drivers — a rich driver model that supports thousands of hardware devices via loadable modules.
  • Filesystems — a Virtual File System (VFS) layer that presents a uniform interface across ext4, XFS, Btrfs, tmpfs, NFS, and dozens of other filesystem implementations.
  • Networking — a complete TCP/IP networking stack with socket interfaces, netfilter, traffic control, and support for Wi-Fi, Ethernet, and virtual networking.
  • Security — mandatory access control through the Linux Security Module (LSM) framework, including SELinux, AppArmor, and seccomp.
  • Interprocess communication — signals, pipes, POSIX message queues, shared memory, and futexes.

Architecture overview

Monolithic kernel with loadable modules

Linux is a monolithic kernel: the core kernel code—scheduler, memory manager, VFS, and networking stack—runs in a single privileged address space. At the same time, drivers and optional subsystems are built as loadable kernel modules (.ko files) that can be inserted into or removed from a running kernel without rebooting. This gives Linux the performance of a monolithic design while retaining much of the flexibility associated with microkernels.

Virtual File System (VFS)

The VFS is an abstraction layer that intercepts every filesystem-related system call (open, read, write, stat, …) and dispatches it to the correct filesystem implementation. All filesystems register a set of operations (file_operations, inode_operations, super_operations) with the VFS, so user-space code interacts with a consistent POSIX interface regardless of the underlying storage format.

Process scheduler

The kernel’s Completely Fair Scheduler (CFS) uses a red-black tree of runnable tasks weighted by their virtual runtime to achieve fairness across workloads. The scheduler supports multiple scheduling classes—SCHED_NORMAL, SCHED_BATCH, SCHED_IDLE, SCHED_FIFO, and SCHED_RR—allowing real-time and interactive tasks to coexist efficiently. The scheduler documentation lives in Documentation/scheduler/.

Memory management

The memory management subsystem handles virtual-to-physical address translation through a multi-level page table hierarchy, physical page allocation via the buddy allocator, and high-frequency small-object allocation via the SLAB/SLUB allocator. Features such as transparent huge pages, memory compaction, and kernel same-page merging (KSM) optimize performance and memory utilization. The mm subsystem is documented in Documentation/mm/.

Networking stack

The networking stack is organized in layers: socket API → protocol layer (TCP, UDP, ICMP) → IP layer → neighbour/routing subsystem → network device driver. Netfilter hooks at various points in the stack allow packet filtering, NAT, and connection tracking. The tc (traffic control) subsystem provides sophisticated queuing disciplines for bandwidth shaping. Networking documentation lives in Documentation/networking/.

Linux Security Modules

The LSM framework inserts hook calls at security-sensitive kernel operations. Implementations such as SELinux, AppArmor, and Smack attach policies to those hooks without modifying the core kernel. Stacking multiple LSMs has been supported since Linux 5.1.

Supported architectures

The kernel supports a wide range of CPU architectures. Each architecture lives under the arch/ directory in the source tree:

x86 / x86-64

The primary architecture for desktop, laptop, and server workloads. Covers both 32-bit (i386) and 64-bit (x86_64) variants.

arm / arm64

Dominant in embedded and mobile workloads, and increasingly common in servers. arm64 (AArch64) is the 64-bit ARMv8+ ISA.

powerpc

IBM POWER servers, embedded PowerPC, and the Cell processor.

s390

IBM Z mainframes running Linux natively, in LPARs, or under z/VM.

riscv

The open RISC-V ISA, supported for both 32-bit (rv32) and 64-bit (rv64) variants.

mips

MIPS32 and MIPS64, used in networking equipment, set-top boxes, and embedded systems.

Additional architectures in the tree include alpha, arc, csky, hexagon, loongarch, m68k, microblaze, nios2, openrisc, parisc, sh, sparc, um (User Mode Linux), and xtensa.

Who develops the kernel

The kernel is one of the largest collaborative software projects in existence. A typical development cycle brings in contributions from hundreds of companies and thousands of individual developers. Key groups include:
  • Subsystem maintainers — developers who own a specific area of the kernel (networking, filesystems, drivers, etc.) and are responsible for reviewing and merging patches into their trees before they flow upstream to Linus Torvalds.
  • Hardware vendors — companies writing or maintaining drivers for their own products.
  • Distribution maintainers — teams at Red Hat, SUSE, Canonical, Debian, and others who backport fixes and configurations to long-lived stable kernels.
  • Academic and research contributors — groups studying kernel internals, writing experimental subsystems, or improving performance on specific workloads.
  • Individual developers — contributors fixing bugs, improving documentation, or adding features independently.

The full list of subsystem maintainers and their mailing lists is tracked in the MAINTAINERS file at the root of the source tree.

Development model

The kernel follows a roughly 9–10 week release cycle:
  1. Merge window (2 weeks) — Linus opens the tree to new feature pull requests from subsystem maintainers. This is the only time significant new code enters mainline.
  2. Stabilization / RC phase (6–8 weeks) — Weekly release candidates (-rc1 through -rc7 or -rc8) are tagged. Only bug fixes are accepted.
  3. Final release — The kernel is tagged as a stable release (e.g., v6.9). From here, the linux-stable team maintains a 6.9.y branch with selected bug and security fixes.

Long-term support (LTS) kernels are designated by the kernel community and receive fixes for 2–6 years. The current LTS kernels are listed at kernel.org.

Further reading

Development process

A detailed guide to how the kernel development community works, from early design through patch acceptance.

Core API documentation

Reference for fundamental kernel APIs: memory allocation, locking primitives, linked lists, and more.

Driver API guide

Everything needed to write a device driver: bus models, DMA, power management, and firmware loading.

Kernel hacking guide

An introduction to kernel internals aimed at new kernel hackers.
