What is LoRA?

LoRA (Low-Rank Adaptation) is a technique for fine-tuning large neural networks without retraining all of their weights. Instead of updating every parameter in the model, LoRA inserts small trainable adapter matrices into the network’s layers. The original model weights stay frozen; only the adapter weights change. In practice this means you can teach a Stable Diffusion model to generate a specific character, art style, or concept by training on a few hundred images and a modest GPU, then distribute the result as a file that is a few megabytes instead of several gigabytes.

How it works

A LoRA adapter decomposes a weight update ΔW into the product of two low-rank matrices:
ΔW = A × B
where A has shape [in_features, rank] and B has shape [rank, out_features]. Because the rank is much smaller than either layer dimension, the adapter trains only rank × (in_features + out_features) parameters instead of in_features × out_features. The rank (also called the dimension, controlled by --network_dim) determines how expressive the adapter is. A rank of 16 is a common starting point; higher ranks capture more detail at the cost of a larger file and a higher overfitting risk.
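
As a concrete sketch of the decomposition and its parameter savings (the layer sizes below are illustrative, not taken from any particular Stable Diffusion layer):

```python
import numpy as np

# Illustrative layer sizes; real layers vary by model architecture.
in_features, out_features, rank = 768, 768, 16

rng = np.random.default_rng(0)
A = rng.standard_normal((in_features, rank))  # [in_features, rank]
B = np.zeros((rank, out_features))            # [rank, out_features], zero-init so delta_W starts at zero

delta_W = A @ B                               # [in_features, out_features]

full_params = in_features * out_features      # what full fine-tuning would update
lora_params = A.size + B.size                 # what LoRA actually trains
print(delta_W.shape, full_params, lora_params)  # (768, 768) 589824 24576
```

Here the adapter trains roughly 4% of the parameters of the full weight matrix, which is where the small file sizes come from.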

Benefits of LoRA over full fine-tuning

| Property | LoRA | Full fine-tuning |
| --- | --- | --- |
| VRAM required | Low (8–16 GB for SD 1.x) | Very high (40+ GB) |
| Training time | Minutes to hours | Hours to days |
| Output file size | 2–200 MB | 2–7 GB |
| Overfitting risk | Lower | Higher |
| Stackable | Yes (multiple LoRAs can be combined) | No |
| Base model untouched | Yes | No |

LoRA types in sd-scripts

The repository uses two named LoRA variants:
  • LoRA-LierLa — applies LoRA to Linear layers and Conv2d layers with a 1×1 kernel. This is the default when you use --network_module=networks.lora.
  • LoRA-C3Lier — extends LoRA-LierLa to also cover Conv2d layers with a 3×3 kernel. Enable it by passing conv_dim in --network_args.
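
As an illustration, the relevant flags for LoRA-C3Lier might look like the following fragment (the numeric values are illustrative starting points, not recommendations):

```shell
# Fragment only — append to your usual training invocation.
# Passing conv_dim (and optionally conv_alpha) in --network_args
# extends LoRA to 3x3 Conv2d layers (LoRA-C3Lier).
--network_module=networks.lora \
--network_dim=16 \
--network_alpha=8 \
--network_args "conv_dim=16" "conv_alpha=8"
```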

Model-specific training scripts

Each model architecture requires a dedicated script. Choose the page that matches your base model:

SD 1.x / 2.x

Train LoRA with train_network.py for Stable Diffusion v1 and v2 models.

SDXL

Train LoRA with sdxl_train_network.py for Stable Diffusion XL.

FLUX.1

Train LoRA with flux_train_network.py for FLUX.1 dev and schnell.

SD3 / SD3.5

Train LoRA with sd3_train_network.py for Stable Diffusion 3 and 3.5.

For power users who want to go deeper, see Advanced LoRA Options, which covers block-wise dimensions, LoRA+, DyLoRA, optimizers, noise techniques, and more.

Common workflow

Step 1: Prepare your dataset

Collect training images and write captions. Organize them into a folder and create a TOML dataset config file that points to that folder, sets the resolution, and configures repetition count and bucketing options.
[general]
shuffle_caption = true
caption_extension = ".txt"

[[datasets]]
resolution = 512
batch_size = 4

  [[datasets.subsets]]
  image_dir = "/path/to/images"
  num_repeats = 10

Step 2: Configure your training command

Choose the script for your model architecture and set the required arguments — at minimum the base model path, dataset config, output directory, network module, rank, and alpha.

Step 3: Run training

Launch training through accelerate launch. Monitor the loss in the console or with TensorBoard.
accelerate launch --num_cpu_threads_per_process 1 train_network.py \
  --pretrained_model_name_or_path="path/to/model.safetensors" \
  --dataset_config="my_dataset.toml" \
  --output_dir="./output" \
  --output_name="my_lora" \
  --network_module=networks.lora \
  --network_dim=16 \
  --network_alpha=8 \
  --learning_rate=1e-4 \
  --max_train_epochs=10

Step 4: Use the trained LoRA

After training completes, load the .safetensors file from your output directory into a compatible inference tool such as ComfyUI or AUTOMATIC1111 stable-diffusion-webui.

Key parameters at a glance

| Parameter | What it controls | Typical range |
| --- | --- | --- |
| --network_dim | LoRA rank: expressiveness vs. file size | 4–128 |
| --network_alpha | Scaling factor for the LoRA output | 1 up to the same value as dim |
| --learning_rate | How fast the adapter learns | 1e-4 to 1e-3 |
| --max_train_epochs | Total training epochs | 5–20 |
| --mixed_precision | Numeric format for training | fp16 or bf16 |

Start with --network_dim=16 and --network_alpha=8. Increase the rank only if results lack detail after a full training run.
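
The two defaults interact: the adapter's contribution is scaled by network_alpha / network_dim, so raising the rank while leaving alpha fixed weakens each update. A minimal sketch of the ratio (the helper name here is mine, not part of sd-scripts):

```python
# The effective LoRA scale is alpha divided by dim; this helper just
# computes that ratio for comparison (no framework code involved).
def lora_scale(network_dim: int, network_alpha: float) -> float:
    return network_alpha / network_dim

print(lora_scale(16, 8))   # 0.5    — the recommended starting point
print(lora_scale(16, 16))  # 1.0    — alpha equal to dim, no down-scaling
print(lora_scale(128, 8))  # 0.0625 — higher rank with unchanged alpha
```

This is why guides often suggest adjusting alpha together with dim (for example, keeping alpha at half of dim) rather than tuning either one in isolation.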