Overview
train_network.py is the core LoRA training script for Stable Diffusion v1.x and v2.x models. It supports LoRA-LierLa (the default), LoRA-C3Lier (with 3×3 convolution layers), and a range of optimizers and learning rate schedulers.
For SDXL, FLUX.1, or SD3 models, use the dedicated scripts described in their respective pages. This page covers SD 1.x and 2.x only.
Prerequisites
- The `sd-scripts` repository cloned and the Python environment set up.
- A prepared dataset and a TOML dataset config file.
- A Stable Diffusion v1.x or v2.x `.safetensors` or `.ckpt` base model.
Training command
Below is a complete example that you can adapt to your setup. Write the command on a single line, or use `\` for line continuation on Linux/macOS and `^` on Windows (Command Prompt).
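As a sketch, a typical invocation might look like the following. The paths, output name, and hyperparameter values are placeholders to adapt; `accelerate launch` assumes you configured Accelerate during environment setup. Each flag is described in the sections below.

```shell
accelerate launch --num_cpu_threads_per_process 1 train_network.py \
  --pretrained_model_name_or_path="/path/to/model.safetensors" \
  --dataset_config="/path/to/dataset.toml" \
  --output_dir="/path/to/output" \
  --output_name="my_lora" \
  --network_module=networks.lora \
  --network_dim=16 \
  --network_alpha=8 \
  --save_model_as=safetensors \
  --save_every_n_epochs=1 \
  --learning_rate=1e-4 \
  --lr_scheduler=cosine \
  --max_train_epochs=10 \
  --optimizer_type=AdamW8bit \
  --mixed_precision=fp16 \
  --gradient_checkpointing \
  --sdpa
```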
Key arguments
Required arguments
- `--pretrained_model_name_or_path`: Path to the base Stable Diffusion model. Accepts a local `.ckpt` or `.safetensors` file, a Diffusers-format directory, or a Hugging Face Hub model ID such as "stabilityai/stable-diffusion-2-1-base".
- `--dataset_config`: Path to the `.toml` file that describes the training dataset. See the dataset configuration guide for the full format.
- `--output_dir`: Directory where trained LoRA models, sample images, and logs are saved.
- `--output_name`: Base filename for the saved LoRA (without extension). The final file will be `<output_name>.safetensors`.
- `--network_module`: Network type to train. Use `networks.lora` for standard LoRA, or `networks.dylora` for DyLoRA.
- `--network_dim`: LoRA rank (dimension). Higher values increase expressiveness and file size. Common values: 4, 8, 16, 32, 64, 128.
Network parameters
- `--network_alpha`: Alpha value that scales the LoRA output. Setting it equal to `network_dim` matches older behavior; half of `network_dim` is a common default.
- `--network_dropout`: Dropout rate (0.0–1.0) applied inside LoRA modules. Can help reduce overfitting. Omit to disable.
- `--network_args`: Additional key=value arguments passed to the network module. For LoRA-C3Lier (3×3 convolution layers), add `"conv_dim=4" "conv_alpha=1"`.
Saving
- `--save_model_as`: File format for the saved model. Options: `safetensors` (recommended), `ckpt`, `pt`.
- `--save_every_n_epochs`: Save a checkpoint every N epochs. If omitted, only the final model is saved.
- `--save_every_n_steps`: Save a checkpoint every N steps. Can be used alongside `--save_every_n_epochs`.
Learning rate
- `--learning_rate`: Global learning rate used when `--unet_lr` and `--text_encoder_lr` are not specified.
- `--unet_lr`: Learning rate for LoRA modules inside the U-Net. Falls back to `--learning_rate` when omitted.
- `--text_encoder_lr`: Learning rate for LoRA modules inside the text encoder. Recommended to be lower than the U-Net rate (e.g., `1e-5`).
- `--lr_scheduler`: Learning rate schedule. Options: `constant`, `cosine`, `linear`, `constant_with_warmup`, `cosine_with_restarts`, `polynomial`.
- `--lr_warmup_steps`: Number of steps over which the learning rate ramps up from zero to the target value at the start of training.
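For example, a common pattern combines a lower text encoder rate with a cosine schedule and a short warmup. The values here are illustrative placeholders, not tuned recommendations:

```shell
--learning_rate=1e-4 \
--unet_lr=1e-4 \
--text_encoder_lr=1e-5 \
--lr_scheduler=cosine \
--lr_warmup_steps=200
```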
Training duration
- `--max_train_epochs`: Number of epochs to train. When set, `--max_train_steps` is ignored.
- `--max_train_steps`: Total number of training steps. Used only when `--max_train_epochs` is not set.
Optimizer
- `--optimizer_type`: Optimizer to use. Options include `AdamW8bit` (requires bitsandbytes), `AdamW`, `Lion` (requires lion-pytorch), `Adafactor`, `DAdaptation`, `Prodigy`, and schedule-free variants (requires schedulefree).
Memory and speed
- `--mixed_precision`: Mixed precision training mode. Use `fp16` or `bf16` to reduce VRAM usage and speed up training. Requires GPU support.
- `--gradient_checkpointing`: Enables gradient checkpointing to reduce VRAM at the cost of slightly slower training.
- `--sdpa`: Uses PyTorch's Scaled Dot-Product Attention. Reduces memory and can improve speed.
- `--gradient_accumulation_steps`: Accumulate gradients over N steps before updating weights. Effectively multiplies the batch size without increasing VRAM.
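The effective batch size under gradient accumulation is the per-device batch size times the number of accumulation steps. A quick sketch of the arithmetic (the values are illustrative):

```shell
# Illustrative: per-device batch size 2, accumulating over 4 steps.
# Weights update once every 4 steps, so each update sees 2 * 4 = 8 samples.
batch_size=2
accumulation_steps=4
echo $((batch_size * accumulation_steps))
```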
SD 2.x specific flags
When training on Stable Diffusion v2.x models, add the following flags:
- SD 2.x (512px, epsilon): `--v2`
- SD 2.x (768px, v-prediction): `--v2 --v_parameterization`
Use `--v2` alone for v2.x models that use epsilon (noise) parameterization, such as stable-diffusion-2-base. Add `--v_parameterization` as well for 768px models trained with v-prediction.
Conv2d LoRA (LoRA-C3Lier)
By default, LoRA targets only Linear layers and 1×1 Conv2d layers. To also apply LoRA to 3×3 Conv2d layers (LoRA-C3Lier), pass `conv_dim` in `--network_args`:
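For example, the following fragment enables 3×3 convolution LoRA; the dimension and alpha values shown are illustrative starting points, not tuned recommendations:

```shell
--network_module=networks.lora \
--network_args "conv_dim=4" "conv_alpha=1"
```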
Monitoring training
If you specify `--logging_dir`, you can visualize the training loss and learning rate with TensorBoard:
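For example, assuming you passed `--logging_dir=./logs` during training (the path is a placeholder):

```shell
tensorboard --logdir=./logs
```

Then open http://localhost:6006 (TensorBoard's default port) in your browser.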
Using the trained LoRA
When training completes, a `.safetensors` file is saved to your `--output_dir`. You can load it in:
- AUTOMATIC1111 stable-diffusion-webui: place the file in `models/Lora/` and reference it with `<lora:my_lora:1>` in your prompt.
- ComfyUI: use a `LoraLoader` node pointing to the file.
