Overview

Dedalus provides specialized solvers for each problem type and a variety of timestepping schemes for time integration. Solvers manage matrix construction, linear algebra, and iteration. All solver classes are in dedalus/core/solvers.py and timestepping schemes in dedalus/core/timesteppers.py.

Solver Types

InitialValueSolver (IVP)

Solver for time-dependent problems.
import numpy as np
import dedalus.public as d3

# Create IVP problem
problem = d3.IVP([u, b, p, tau_p, tau_b1, tau_b2, tau_u1, tau_u2],
                  namespace=locals())
# ... add equations ...

# Build solver with timestepper
solver = problem.build_solver(d3.RK222)

# Set stop conditions
solver.stop_sim_time = 50
solver.stop_wall_time = 3600  # 1 hour
solver.stop_iteration = 1000
Attributes:
  • state: List of problem variables (fields)
  • sim_time: Current simulation time
  • iteration: Current iteration number
  • evaluator: Evaluator for analysis outputs
Key Methods:
# Advance solution by one timestep
solver.step(dt)

# Evaluate stopping conditions
if solver.proceed:
    solver.step(dt)

EigenvalueSolver (EVP)

Solver for eigenvalue problems.
# Create EVP problem
eigenvalue = dist.Field(name='λ')
problem = d3.EVP([u, b, p, tau_u1, tau_u2, tau_b1, tau_b2, tau_p],
                  eigenvalue, namespace=locals())
# ... add equations ...

# Build solver
solver = problem.build_solver()

# Select subproblem (usually group with desired modes)
subproblem = solver.subproblems[0]

# Solve for all eigenvalues (dense)
solver.solve_dense(subproblem)

# Or solve for specific eigenvalues (sparse)
N = 10  # Number of eigenvalues
target = 0 + 1j  # Target eigenvalue
solver.solve_sparse(subproblem, N=N, target=target)
Attributes:
  • eigenvalues: Computed eigenvalues
  • eigenvectors: Corresponding eigenvectors
  • eigenvalue_subproblem: Subproblem for which EVP was solved
Key Methods:
# Dense solve (all eigenvalues)
solver.solve_dense(subproblem, rebuild_matrices=False)

# Sparse solve (targeted eigenvalues)  
solver.solve_sparse(subproblem, N=10, target=0, rebuild_matrices=False)

# Set state to specific eigenmode
solver.set_state(index=0, subsystem=0)

print(f"Growth rate: {solver.eigenvalues[0].real}")
print(f"Frequency: {solver.eigenvalues[0].imag}")
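After a dense solve, the eigenvalues usually need post-processing to locate the fastest-growing mode, and dense solves of generalized problems can report non-finite eigenvalues for singular (e.g. tau) modes. A minimal NumPy-only sketch of that post-processing; the `evals` array below is illustrative, standing in for `solver.eigenvalues`:

```python
import numpy as np

def fastest_growing(evals):
    """Return eigenvalues sorted by descending real part (growth rate),
    with non-finite entries filtered out first."""
    evals = np.asarray(evals)
    finite = evals[np.isfinite(evals)]
    order = np.argsort(-finite.real)
    return finite[order]

# Example: three physical modes plus one spurious infinite eigenvalue
evals = np.array([0.5 - 2j, np.inf + 0j, -1.0 + 1j, 2.0 + 0.5j])
sorted_evals = fastest_growing(evals)
print(sorted_evals[0])  # fastest-growing mode: (2+0.5j)
```

The leading entry's real part is the maximum growth rate; its imaginary part is the corresponding oscillation frequency.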

LinearBoundaryValueSolver (LBVP)

Solver for linear boundary value problems.
# Create LBVP problem
problem = d3.LBVP([u, p, tau_p, tau_u1, tau_u2], namespace=locals())
# ... add equations ...

# Build solver
solver = problem.build_solver()

# Solve
solver.solve()

# Solution is now in solver.state
print(f"Max velocity: {np.max(np.abs(u['g']))}")
Key Methods:
# Solve all subproblems
solver.solve()

# Solve specific subproblems
solver.solve(subproblems=[sp0, sp1])

# Rebuild matrices if coefficients changed
solver.solve(rebuild_matrices=True)

NonlinearBoundaryValueSolver (NLBVP)

Solver for nonlinear boundary value problems using Newton iteration.
# Create NLBVP problem
problem = d3.NLBVP([u, T, p, tau_p, tau_T1, tau_T2, tau_u1, tau_u2],
                    namespace=locals())
# ... add equations ...

# Build solver
solver = problem.build_solver()

# Newton iteration
pert_norm = np.inf
while pert_norm > 1e-10:
    solver.newton_iteration()
    pert_norm = sum(pert.allreduce_data_norm('c', 2) 
                   for pert in solver.perturbations)
    print(f"Perturbation norm: {pert_norm:.3e}")
Key Methods:
# Perform one Newton iteration
solver.newton_iteration()

# Check convergence
pert_norm = sum(pert.allreduce_data_norm('c', 2) 
               for pert in solver.perturbations)
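The convergence check above mirrors classic Newton iteration, where the update (perturbation) norm shrinks rapidly once the iterate is near the solution. A standalone scalar analogue of the same loop and stopping criterion, solving x² = 2 without Dedalus:

```python
def newton_scalar(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton iteration on a scalar equation f(x) = 0, stopping when
    the Newton update (the "perturbation") drops below tol -- the
    same criterion as the NLBVP loop above."""
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / fprime(x)  # Newton update
        x += dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

root = newton_scalar(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0)
print(root)  # ~1.41421356
```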

Solver Parameters

All solvers accept common parameters:
solver = problem.build_solver(
    timestepper,                    # For IVP only
    ncc_cutoff=1e-6,               # NCC expansion cutoff
    max_ncc_terms=None,            # Max terms in NCC expansion
    entry_cutoff=1e-12,            # Matrix entry cutoff
    matrix_coupling=None,          # Override matrix coupling
    matsolver='MATRIX_FACTORIZER', # Matrix solver
    bc_top=True,                   # BC placement
    tau_left=False,                # Tau column placement
    interleave_components=True,    # Component ordering
    store_expanded_matrices=False  # Store preconditioned matrices
)
Key Parameters:
  • ncc_cutoff: Amplitude cutoff for non-constant coefficient expansions
  • matsolver: Choose from 'MATRIX_FACTORIZER', 'MATRIX_SOLVER', or custom
  • matrix_coupling: Override automatic coupling detection
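For illustration, entry_cutoff amounts to dropping near-zero entries from the assembled sparse matrices to improve sparsity. A sketch of that idea with scipy.sparse; the helper and matrix here are illustrative, not Dedalus internals:

```python
import numpy as np
import scipy.sparse as sp

def apply_entry_cutoff(matrix, cutoff=1e-12):
    """Zero out entries with magnitude below cutoff and drop them
    from the stored sparsity structure."""
    matrix = matrix.tocsr().copy()
    matrix.data[np.abs(matrix.data) < cutoff] = 0
    matrix.eliminate_zeros()
    return matrix

A = sp.csr_matrix(np.array([[1.0, 1e-15],
                            [0.0, 2.0]]))
A_cut = apply_entry_cutoff(A, cutoff=1e-12)
print(A.nnz, A_cut.nnz)  # 3 2 -- the 1e-15 entry is dropped
```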

Timestepping Schemes

All timestepping schemes are in dedalus/core/timesteppers.py.

Multistep IMEX Schemes

Implicit-Explicit multistep methods for IVPs of the form M·∂X/∂t + L·X = F.

SBDF1

1st-order semi-implicit backward differentiation formula.
solver = problem.build_solver(d3.SBDF1)
Properties:
  • Order: 1
  • Steps: 1
  • Implicit: 1st-order BDF
  • Explicit: 1st-order extrapolation

SBDF2

2nd-order semi-implicit backward differentiation formula.
solver = problem.build_solver(d3.SBDF2)
Properties:
  • Order: 2
  • Steps: 2 (uses SBDF1 for first step)
  • Implicit: 2nd-order BDF
  • Explicit: 2nd-order extrapolation
  • Recommended for most IVP simulations
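As a concrete check of the scheme, SBDF2 applied to the scalar model problem dx/dt + x = 0 (M = 1, L = 1, F = 0) reads (3·xⁿ⁺¹ − 4·xⁿ + xⁿ⁻¹)/(2Δt) + xⁿ⁺¹ = 0, with backward Euler (SBDF1) supplying the first step. The Dedalus-independent sketch below integrates it and confirms 2nd-order convergence:

```python
import numpy as np

def sbdf2_decay(dt, t_end):
    """Integrate dx/dt + x = 0 with SBDF2:
    (3*x_new - 4*x + x_old) / (2*dt) + x_new = 0,
    using backward Euler (SBDF1) for the startup step."""
    n_steps = int(round(t_end / dt))
    x_old = 1.0              # x(0)
    x = x_old / (1 + dt)     # SBDF1 startup step
    for _ in range(n_steps - 1):
        x_new = (4*x - x_old) / (3 + 2*dt)
        x_old, x = x, x_new
    return x

# Halving dt should reduce the error ~4x (2nd order)
err1 = abs(sbdf2_decay(0.01, 1.0) - np.exp(-1.0))
err2 = abs(sbdf2_decay(0.005, 1.0) - np.exp(-1.0))
print(err1 / err2)  # ~4
```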

SBDF3

3rd-order semi-implicit backward differentiation formula.
solver = problem.build_solver(d3.SBDF3)
Properties:
  • Order: 3
  • Steps: 3 (uses SBDF2, then SBDF1 for startup)
  • Implicit: 3rd-order BDF
  • Explicit: 3rd-order extrapolation

SBDF4

4th-order semi-implicit backward differentiation formula.
solver = problem.build_solver(d3.SBDF4)
Properties:
  • Order: 4
  • Steps: 4 (uses SBDF3, SBDF2, SBDF1 for startup)
  • Implicit: 4th-order BDF
  • Explicit: 4th-order extrapolation

Runge-Kutta IMEX Schemes

Implicit-Explicit Runge-Kutta methods with multiple stages per timestep.

RK111

1st-order 1-stage scheme.
solver = problem.build_solver(d3.RK111)
Properties:
  • Order: 1
  • Stages: 1
  • Implicit: backward Euler
  • Explicit: forward Euler

RK222

2nd-order 2-stage scheme.
solver = problem.build_solver(d3.RK222)
Properties:
  • Order: 2
  • Stages: 2
  • Good for moderate stiffness
  • Recommended for startup/testing

RK443

3rd-order 4-stage scheme.
solver = problem.build_solver(d3.RK443)
Properties:
  • Order: 3
  • Stages: 4
  • Higher accuracy per timestep
  • More expensive per timestep

Other Schemes

CNAB1, CNAB2

Crank-Nicolson Adams-Bashforth schemes.
solver = problem.build_solver(d3.CNAB2)
CNAB2 Properties:
  • Order: 2
  • Implicit: 2nd-order Crank-Nicolson
  • Explicit: 2nd-order Adams-Bashforth
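On the same scalar decay problem (L = 1, F = 0), CNAB2 with no explicit term reduces to plain Crank-Nicolson (the trapezoidal rule). A Dedalus-independent sketch confirming its 2nd-order convergence:

```python
import numpy as np

def cnab2_decay(dt, t_end):
    """Integrate dx/dt + x = 0; with no explicit term CNAB2 reduces
    to Crank-Nicolson: (x_new - x)/dt + (x_new + x)/2 = 0."""
    x = 1.0
    for _ in range(int(round(t_end / dt))):
        x *= (1 - dt/2) / (1 + dt/2)
    return x

# Halving dt should reduce the error ~4x (2nd order)
err1 = abs(cnab2_decay(0.01, 1.0) - np.exp(-1.0))
err2 = abs(cnab2_decay(0.005, 1.0) - np.exp(-1.0))
print(err1 / err2)  # ~4
```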

Time Integration

Basic Integration Loop

import numpy as np
import dedalus.public as d3

# Build solver
solver = problem.build_solver(d3.SBDF2)

# Set stop conditions
solver.stop_sim_time = 10.0
solver.stop_wall_time = 3600  # seconds
solver.stop_iteration = np.inf

# Fixed timestep
dt = 0.01

# Integration loop
while solver.proceed:
    solver.step(dt)
    if solver.iteration % 100 == 0:
        print(f"Iteration: {solver.iteration}, Time: {solver.sim_time:.2f}")

Adaptive Timestepping with CFL

The CFL condition restricts the timestep so that fields advect no more than a fraction of a grid cell per step, keeping explicitly treated terms stable.
import dedalus.public as d3

# Create CFL object
CFL = d3.CFL(
    solver,
    initial_dt=0.01,     # Initial timestep
    cadence=10,           # Check every N iterations
    safety=0.5,           # Safety factor
    threshold=0.05,       # Relative change threshold
    max_change=1.5,       # Max timestep increase factor
    min_change=0.5,       # Min timestep change factor (limits decrease)
    max_dt=0.1,           # Maximum allowed timestep
    min_dt=1e-6           # Minimum allowed timestep
)

# Add velocity field for advective CFL
CFL.add_velocity(u)

# Integration loop with adaptive timestepping
while solver.proceed:
    dt = CFL.compute_timestep()
    solver.step(dt)
CFL Computation: For advective problems, the CFL condition and resulting timestep limit are:
CFL = max(|u|) * dt / dx
dt_max = CFL_safety * dx / max(|u|)

Custom Timestep Control

# Compute a custom timestep (CFL_safety, dx, max_dt are user-defined)
CFL_safety = 0.5
max_dt = 0.1
dx = Lx / Nx  # uniform grid spacing

def compute_timestep():
    max_u = np.max(np.abs(u['g']))
    if max_u > 0:
        dt_cfl = CFL_safety * dx / max_u
    else:
        dt_cfl = max_dt
    return min(dt_cfl, max_dt)

# Integration loop
while solver.proceed:
    dt = compute_timestep()
    solver.step(dt)

Analysis and Output

Evaluate and save data during integration.

File Handlers

# Add file handler for snapshots
snapshots = solver.evaluator.add_file_handler(
    'snapshots',           # Directory name
    sim_dt=0.25,           # Output every 0.25 sim time
    max_writes=50          # Max files to write
)

# Add tasks to save
snapshots.add_task(b, name='buoyancy')
snapshots.add_task(u, name='velocity')
snapshots.add_task(-d3.div(d3.skew(u)), name='vorticity')

# Time series
timeseries = solver.evaluator.add_file_handler(
    'timeseries',
    sim_dt=0.01
)
timeseries.add_task(d3.integ(0.5*u@u), name='KE')
timeseries.add_task(d3.integ(0.5*b*b), name='PE')

Iteration-based Output

# Output every N iterations
snapshots = solver.evaluator.add_file_handler(
    'snapshots',
    iter=100,              # Every 100 iterations
    max_writes=50
)

# Wall-time based
snapshots = solver.evaluator.add_file_handler(
    'snapshots', 
    wall_dt=300,           # Every 5 minutes
    max_writes=50
)

Matrix Construction

Solvers automatically construct sparse matrices.

Subproblems

Problems are divided into subproblems by mode coupling.
# Access subproblems
print(f"Number of subproblems: {len(solver.subproblems)}")

for sp in solver.subproblems:
    print(f"Group: {sp.group}")
    print(f"Size: {sp.size}")
    print(f"Coefficient shape: {sp.coeff_shape(u.domain)}")

Matrix Inspection

# Build matrices explicitly
solver.build_matrices()

# Access matrices for a subproblem
sp = solver.subproblems[0]

# For IVP
M_matrix = sp.M_min  # Mass matrix
L_matrix = sp.L_min  # LHS matrix

# For EVP  
M_matrix = sp.M_min  # Eigenvalue matrix
L_matrix = sp.L_min  # System matrix

# For LBVP/NLBVP
L_matrix = sp.L_min  # System matrix

print(f"Matrix shape: {L_matrix.shape}")
print(f"Matrix density: {L_matrix.nnz / np.prod(L_matrix.shape):.2%}")

Rebuilding Matrices

Rebuild when coefficients change.
# Change parameter
Ra = 1e7
kappa = (Ra * Pr)**(-1/2)

# Rebuild matrices
if hasattr(solver, 'solve'):  # LBVP/NLBVP
    solver.solve(rebuild_matrices=True)
else:  # EVP
    solver.solve_dense(subproblem, rebuild_matrices=True)

Complete Example: IVP with Adaptive Timestepping

import numpy as np
import dedalus.public as d3
import logging
logger = logging.getLogger(__name__)

# Parameters
Lx, Lz = 4, 1
Nx, Nz = 256, 64
Rayleigh = 2e6
Prandtl = 1
dealias = 3/2
stop_sim_time = 50
max_timestep = 0.125

# Setup
coords = d3.CartesianCoordinates('x', 'z')
dist = d3.Distributor(coords, dtype=np.float64)

xbasis = d3.RealFourier(coords['x'], size=Nx, bounds=(0, Lx), dealias=dealias)
zbasis = d3.ChebyshevT(coords['z'], size=Nz, bounds=(0, Lz), dealias=dealias)

# Fields
p = dist.Field(name='p', bases=(xbasis, zbasis))
b = dist.Field(name='b', bases=(xbasis, zbasis))
u = dist.VectorField(coords, name='u', bases=(xbasis, zbasis))

tau_p = dist.Field(name='tau_p')
tau_b1 = dist.Field(name='tau_b1', bases=xbasis)
tau_b2 = dist.Field(name='tau_b2', bases=xbasis)
tau_u1 = dist.VectorField(coords, name='tau_u1', bases=xbasis)
tau_u2 = dist.VectorField(coords, name='tau_u2', bases=xbasis)

# Parameters
kappa = (Rayleigh * Prandtl)**(-1/2)
nu = (Rayleigh / Prandtl)**(-1/2)
x, z = dist.local_grids(xbasis, zbasis)
ex, ez = coords.unit_vector_fields(dist)

lift_basis = zbasis.derivative_basis(1)
lift = lambda A: d3.Lift(A, lift_basis, -1)

grad_u = d3.grad(u) + ez*lift(tau_u1)
grad_b = d3.grad(b) + ez*lift(tau_b1)

# Problem
problem = d3.IVP([p, b, u, tau_p, tau_b1, tau_b2, tau_u1, tau_u2], 
                  namespace=locals())

problem.add_equation("trace(grad_u) + tau_p = 0")
problem.add_equation("dt(b) - kappa*div(grad_b) + lift(tau_b2) = -u@grad(b)")
problem.add_equation("dt(u) - nu*div(grad_u) + grad(p) - b*ez + lift(tau_u2) = -u@grad(u)")

problem.add_equation("b(z=0) = Lz")
problem.add_equation("u(z=0) = 0")
problem.add_equation("b(z=Lz) = 0")
problem.add_equation("u(z=Lz) = 0")
problem.add_equation("integ(p) = 0")

# Solver
solver = problem.build_solver(d3.SBDF2)
solver.stop_sim_time = stop_sim_time

# Initial conditions
b.fill_random('g', seed=42, distribution='normal', scale=1e-3)
b['g'] *= z * (Lz - z)
b['g'] += Lz - z

# Analysis
snapshots = solver.evaluator.add_file_handler('snapshots', sim_dt=0.25, max_writes=50)
snapshots.add_task(b, name='buoyancy')
snapshots.add_task(-d3.div(d3.skew(u)), name='vorticity')

# CFL
CFL = d3.CFL(solver, initial_dt=max_timestep, cadence=10, safety=0.5,
             threshold=0.05, max_change=1.5, min_change=0.5, max_dt=max_timestep)
CFL.add_velocity(u)

# Main loop
logger.info('Starting main loop')
while solver.proceed:
    dt = CFL.compute_timestep()
    solver.step(dt)
    if solver.iteration % 100 == 0:
        logger.info(f"Iteration={solver.iteration}, Time={solver.sim_time:.3f}, dt={dt:.3e}")

logger.info('Run complete')

Performance Considerations

Timestepper Selection

General Recommendations:
  • SBDF2: Best overall for most problems (2nd-order, efficient)
  • RK222: Good for startup, testing, moderate stiffness
  • RK443: Higher accuracy but more expensive
  • SBDF3/SBDF4: When higher temporal order is required; note that explicit-term stability regions shrink with order

Matrix Solver Selection

# Factorization (faster for repeated solves)
solver = problem.build_solver(matsolver='MATRIX_FACTORIZER')

# Direct solve (less memory)
solver = problem.build_solver(matsolver='MATRIX_SOLVER')

Memory Management

# Store expanded matrices (faster but more memory)
solver = problem.build_solver(store_expanded_matrices=True)

# Recompute on-the-fly (slower but less memory)
solver = problem.build_solver(store_expanded_matrices=False)

Best Practices

Timestep Stability: Always use CFL-based adaptive timestepping for advection-dominated problems. Fixed timesteps can lead to instability.
NCC Cutoff: The ncc_cutoff parameter controls the accuracy of non-constant coefficient expansions. Decrease (e.g., 1e-8) for higher accuracy, increase (e.g., 1e-4) for faster matrix construction.
Analysis Output: Use max_writes to limit file sizes. Use sim_dt (not iter) for consistent temporal sampling.
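To make the ncc_cutoff trade-off concrete, here is an illustrative (non-Dedalus) sketch of amplitude-based truncation on a geometrically decaying coefficient spectrum; a looser cutoff retains fewer terms:

```python
import numpy as np

def ncc_terms(coeffs, cutoff):
    """Number of expansion terms retained: everything up to the last
    coefficient whose magnitude exceeds the cutoff."""
    above = np.nonzero(np.abs(coeffs) > cutoff)[0]
    return int(above[-1]) + 1 if above.size else 0

# Geometrically decaying spectrum: |c_n| = 10^-n
coeffs = 10.0 ** -np.arange(12)
print(ncc_terms(coeffs, cutoff=3e-6))  # 6 terms retained (n = 0..5)
print(ncc_terms(coeffs, cutoff=3e-4))  # 4 terms -> cheaper matrix assembly
```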
