Default Optimizations
NSD includes performance optimizations by default:

- libevent: Automatically uses the most efficient event mechanism for your platform (e.g., epoll on Linux, kqueue on FreeBSD)
- recvmmsg: Can be enabled at compile time with --enable-recvmmsg to read multiple messages from a socket in one system call
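For example, when building NSD from source, recvmmsg support can be switched on at configure time (a typical autotools invocation; any other configure flags you need are unchanged):

```
./configure --enable-recvmmsg
make && make install
```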
Server Scaling
The simplest way to improve performance is to match the number of server processes to your CPU cores.

Basic Multi-Server Configuration

- server-count: Set this to the number of CPU cores available
- reuseport: Distributes incoming packets evenly across server processes (requires OS support)
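A minimal sketch of the relevant nsd.conf settings for a four-core machine (the core count is illustrative):

```
server:
    server-count: 4   # one server process per CPU core
    reuseport: yes    # kernel distributes incoming packets across processes
```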
TCP Performance
Increase TCP capacity for high-traffic environments:

- tcp-count: Maximum number of concurrent TCP connections per server process (the default is far lower than busy servers need)
- tcp-timeout: Seconds before idle connections are closed
- tcp-reject-overflow: Prevents the kernel connection queue from growing indefinitely
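A sketch of these options in nsd.conf (the values shown are illustrative, not recommendations):

```
server:
    tcp-count: 1000           # allow up to 1000 concurrent TCP connections
    tcp-timeout: 30           # close idle connections after 30 seconds
    tcp-reject-overflow: yes  # refuse new connections rather than let the
                              # kernel connection queue grow indefinitely
```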
Processor Affinity
For maximum throughput, bind server processes to specific CPU cores to improve cache hit rates and reduce context switching.

How CPU Affinity Works
When a process switches between CPU cores, performance degrades because:

- L1/L2 caches must be rebuilt on the new core
- Pipeline state is lost
- Frequently accessed data must be reloaded
Basic CPU Affinity
- cpu-affinity: Lists all cores designated for NSD
- server-N-cpu-affinity: Binds each server process to a specific core
- xfrd-cpu-affinity: Binds the zone transfer daemon to a dedicated core
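For instance, pinning four server processes and the transfer daemon on a machine with cores 0 through 4 (the core numbers are illustrative):

```
server:
    server-count: 4
    cpu-affinity: 0 1 2 3 4
    server-1-cpu-affinity: 0
    server-2-cpu-affinity: 1
    server-3-cpu-affinity: 2
    server-4-cpu-affinity: 3
    xfrd-cpu-affinity: 4     # zone transfer daemon gets its own core
```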
Platform Support
CPU affinity is currently supported on:

- Linux
- FreeBSD
Socket Partitioning
Assign specific network interfaces to specific server processes to avoid the thundering herd problem and reduce socket scanning overhead. Sockets restricted with the servers parameter are:
- Only opened by the specified server process
- Closed by all other servers on startup
- Isolated from unnecessary wake-ups
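In nsd.conf this partitioning is expressed with the servers option on ip-address lines (the addresses and server numbers below are illustrative):

```
server:
    server-count: 2
    ip-address: 192.0.2.1 servers="1"   # only server process 1 opens this socket
    ip-address: 192.0.2.2 servers="2"   # only server process 2 opens this socket
```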
Network Interface Binding
Bind directly to network devices to ensure responses exit through the same interface queries arrive on.

Linux: bindtodevice

Binding a socket to its network device provides:
- Guaranteed symmetric routing
- Slight performance improvement
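A minimal sketch on Linux, adding the bindtodevice option to an ip-address line (the address is illustrative; the socket is bound to the device that the address is configured on):

```
server:
    ip-address: 192.0.2.1 bindtodevice=yes
```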
FreeBSD: setfib
FreeBSD does not create routing tables on demand. Configure multiple routing tables in the system before using
setfib. Consult the FreeBSD Handbook for setup instructions.

Optimal Configuration

Field tests show best performance when combining all tuning options to bind each network interface to a dedicated CPU core.

Complete High-Performance Example
- Binds each IP address to a specific server process
- Pins each server to a dedicated CPU core
- Assigns the zone transfer daemon its own core
- Binds directly to network devices
- Optimizes TCP handling
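A sketch of such a combined configuration for a machine with two server cores and two interface addresses (all addresses, core numbers, and values are illustrative; setfib would replace bindtodevice on FreeBSD):

```
server:
    server-count: 2
    reuseport: yes
    # one address per server process, bound to its network device
    ip-address: 192.0.2.1 servers="1" bindtodevice=yes
    ip-address: 192.0.2.2 servers="2" bindtodevice=yes
    # pin each server process and the transfer daemon to its own core
    cpu-affinity: 0 1 2
    server-1-cpu-affinity: 0
    server-2-cpu-affinity: 1
    xfrd-cpu-affinity: 2
    # TCP handling
    tcp-count: 1000
    tcp-timeout: 30
    tcp-reject-overflow: yes
```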
Testing and Validation
After implementing tuning changes:

- Benchmark: Use DNS query tools to measure queries per second (QPS)
- Monitor CPU: Check core utilization and context switch rates
- Test routing: Verify responses use the correct source addresses
- Load test: Gradually increase traffic to identify bottlenecks
- Compare: Measure performance before and after tuning
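For the benchmark step, one common tool is dnsperf; a typical invocation replays a query file against the server and reports QPS (the server address, query file, and duration are illustrative):

```
# replay queries from a file for 30 seconds and report queries per second
dnsperf -s 192.0.2.1 -d queries.txt -l 30
```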