.safetensors file you can load in any SD-compatible UI or inference pipeline.
## Install sd-scripts
Follow the Installation guide to set up your Python environment, install PyTorch, and configure `accelerate`. Come back here once `accelerate config` completes successfully.

## Prepare your dataset
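As a quick sketch of that setup (assuming the kohya-ss/sd-scripts repository and a Linux shell; the exact PyTorch install command depends on your CUDA version, so defer to the Installation guide for the authoritative steps):

```shell
git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts
python -m venv venv
source venv/bin/activate
# Install a PyTorch build matching your CUDA version (see pytorch.org),
# then the repository requirements:
pip install torch torchvision
pip install -r requirements.txt
# Interactive prompts for device, mixed precision, etc.:
accelerate config
```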
Create a folder for your training images and place 10–50 high-quality images inside it. Each image needs a matching caption file with the same name and a `.txt` extension. The folder name `10_my_subject` tells sd-scripts to repeat each image 10 times per epoch and use `my_subject` as the class token. Each `.txt` file should contain a short caption describing the image.

Next, create a dataset configuration file named `my_dataset.toml`. For more dataset options, see the Dataset Preparation documentation.

## Train the LoRA
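Tying the pieces above together: a caption file might contain a single short line such as `a photo of my_subject in a garden` (illustrative), and a minimal `my_dataset.toml` in sd-scripts' dataset-config format could look like the sketch below. The `image_dir` path is an assumption about where you placed the folder; adjust it to your layout.

```toml
[general]
enable_bucket = true          # group images by aspect ratio
caption_extension = ".txt"    # matches the caption files described above

[[datasets]]
resolution = 512              # training resolution for SD 1.x models
batch_size = 1

  [[datasets.subsets]]
  image_dir = "./train_data/10_my_subject"  # illustrative path
  num_repeats = 10                          # matches the folder-name prefix
```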
With your virtual environment activated, launch training from the sd-scripts root directory. Replace `<path to SD model>` with the path to your base model checkpoint (`.safetensors` or `.ckpt`).

All training commands use `accelerate launch` rather than calling the script directly. This ensures that the device placement, mixed-precision settings, and CPU thread configuration you chose during `accelerate config` are applied. Never run `python train_network.py` without `accelerate launch`.

Key parameters explained:

| Parameter | Value | Description |
|---|---|---|
| `--network_module` | `networks.lora` | Selects the LoRA network implementation |
| `--network_dim` | 16 | LoRA rank; higher values capture more detail but increase file size |
| `--network_alpha` | 8 | Scaling factor, typically set to half of `network_dim` |
| `--learning_rate` | 1e-4 | Learning rate for the LoRA weights |
| `--max_train_steps` | 1000 | Total number of training steps |

Training progress and loss are logged to TensorBoard. Start TensorBoard in a separate terminal.
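Putting the parameters in the table together, a minimal invocation might look like the following sketch. The `--output_name` and `--logging_dir` values are illustrative, and `--mixed_precision` should match the choice you made during `accelerate config`:

```shell
accelerate launch train_network.py \
  --pretrained_model_name_or_path=<path to SD model> \
  --dataset_config=my_dataset.toml \
  --output_dir=./output \
  --output_name=my_lora \
  --save_model_as=safetensors \
  --network_module=networks.lora \
  --network_dim=16 \
  --network_alpha=8 \
  --learning_rate=1e-4 \
  --max_train_steps=1000 \
  --mixed_precision=fp16 \
  --logging_dir=./output/logs
```

To watch the run, point TensorBoard at the logging directory from a second terminal, e.g. `tensorboard --logdir=./output/logs`.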
## Find your output
When training finishes, your LoRA file is saved to `./output/my_lora.safetensors`. You can load this file in AUTOMATIC1111, ComfyUI, or any other tool that supports LoRA adapters. The output directory also contains TensorBoard logs for reviewing your training curve.

## Next steps
This quickstart uses minimal settings to get you running quickly. Once your first LoRA works, explore these topics to improve your results:
- **Dataset Preparation**: Bucket-based resolution grouping, caption strategies, and regularization images.
- **LoRA Training**: Full parameter reference for `train_network.py`, advanced network architectures (LoHa, LoKr), and SDXL/FLUX.1-specific options.
- **Fine-tuning**: DreamBooth and native fine-tuning for deeper model adaptation.
- **Supported models**: Overview of all supported model architectures and their capabilities.
