Flux LoRA
Flux supports loading LoRA adapters to customize image generation without retraining the full model. Not all LoRA formats have been tested. If a specific LoRA doesn't load, please report it to the team.
Tested LoRA models
Basic usage
First, download the LoRA file to a local directory.
LoRA configuration parameters
The `lora_config` parameter accepts a JSON string with the following fields:
- `lora_model_name_or_path`: Path to the LoRA weights file
- `weight_name`: Name of the weights file
- `adapter_name`: Identifier for the adapter
- `scale`: Strength of the LoRA effect (0.0 to 1.0, typically 0.6-0.8)
- `from_pt`: Whether to load from PyTorch format (use "true" for safetensors)
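Putting the fields above together, a `lora_config` JSON string can be built like this. Only the field names come from the documentation; the path, file name, and adapter name are hypothetical placeholders to substitute with your own downloaded LoRA.

```python
import json

# Build the lora_config JSON string from the fields documented above.
# The path, weight file, and adapter name are hypothetical placeholders.
config = {
    "lora_model_name_or_path": "/path/to/lora_dir",  # placeholder path
    "weight_name": "my_lora.safetensors",            # placeholder file name
    "adapter_name": "my_lora",                       # placeholder identifier
    "scale": 0.7,                                    # typical range is 0.6-0.8
    "from_pt": "true",                               # "true" when loading safetensors
}
lora_config = json.dumps(config)
print(lora_config)
```

Passing the result as a single JSON string keeps all adapter settings in one parameter.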
Wan LoRA
Wan models support LoRA adapters in ComfyUI and AI Toolkit formats for video generation customization. Not all LoRA formats have been tested. Currently supports ComfyUI and AI Toolkit formats. If a specific LoRA doesn't load, please let the team know.
Setup
- Create a copy of the relevant config file (e.g., src/maxdiffusion/configs/base_wan_i2v_14b.yml)
- Update the prompt and LoRA details in the config
- Set `enable_lora: True` in the config
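The edits from the steps above might look like the following fragment of the copied config file. Only `enable_lora` is taken from this documentation; the other keys and values are hypothetical placeholders, so check your config file for the actual key names.

```yaml
# Fragment of a copied base_wan_i2v_14b.yml (illustrative sketch).
# Only enable_lora comes from the documentation; the remaining keys
# and values are hypothetical placeholders.
prompt: "a corgi surfing a wave at sunset"   # placeholder prompt
enable_lora: True
# Hypothetical LoRA details -- verify the real key names in your config:
# lora_model_name_or_path: /path/to/lora.safetensors
# lora_scale: 0.7
```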
Running inference
How it works
MaxDiffusion's LoRA implementation supports:
- Standard LoRA: Low-rank decomposition with down and up projection matrices
- Weight diffs: Direct weight modifications for fine-tuning
- Bias diffs: Bias parameter adjustments
- LoCON: LoRA for convolutional layers with kernel size > 1x1
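The standard LoRA case from the list above can be sketched in a few lines of NumPy. The shapes, names, and zero-initialization convention here are illustrative assumptions, not MaxDiffusion's actual internals.

```python
import numpy as np

# Sketch of applying a standard LoRA update: the frozen weight W is
# augmented by a scaled product of the low-rank "down" and "up"
# projection matrices. Dimensions are arbitrary for illustration.
rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 32, 4

W = rng.normal(size=(d_in, d_out))    # frozen base weight
down = rng.normal(size=(d_in, rank))  # down projection (d_in -> rank)
up = np.zeros((rank, d_out))          # up projection, zero-initialized
scale = 0.7                           # LoRA strength (the scale knob)

# Effective weight: W + scale * (down @ up)
W_eff = W + scale * (down @ up)

# With a zero-initialized up matrix the adapter is a no-op, so the
# merged weight matches the base model exactly.
assert np.allclose(W_eff, W)
```

Weight diffs and bias diffs skip the low-rank factorization and add a full delta directly to the weight or bias; LoCON applies the same low-rank idea to convolution kernels larger than 1x1.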
Related resources
- Multi-LoRA loading - Load multiple LoRAs simultaneously
- Hyper SDXL LoRA - Fast inference with Hyper-SD