# Core Concepts

## Node Types
ComfyUI nodes are categorized by their function:

- Loaders: Load models, checkpoints, LoRAs, VAEs, and other resources
- Conditioning: Process text prompts and control image generation guidance
- Sampling: Generate images through the denoising process
- Latent: Manipulate images in latent space
- Image: Process and save final images
## Data Flow
Nodes pass data between each other through typed connections:

- MODEL: Diffusion models for denoising latents
- CLIP: Text encoder models for processing prompts
- VAE: Variational autoencoder for encoding/decoding between pixel and latent space
- CONDITIONING: Encoded text prompts that guide generation
- LATENT: Images in compressed latent space
- IMAGE: Final pixel-space images
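The typing rule is simple: a connection is valid only when the output type matches the input type. A minimal Python sketch of that rule (illustrative only, not ComfyUI's actual link validator):

```python
# Illustrative sketch of type-checked node connections: each output carries
# a type tag, and a link is only valid when the types match exactly.

VALID_TYPES = {"MODEL", "CLIP", "VAE", "CONDITIONING", "LATENT", "IMAGE"}

def connect(output_type: str, input_type: str) -> bool:
    """Return True if an output of output_type may feed an input of input_type."""
    if output_type not in VALID_TYPES or input_type not in VALID_TYPES:
        raise ValueError("unknown connection type")
    return output_type == input_type

# A KSampler's LATENT output can feed a VAE Decode node's LATENT input...
print(connect("LATENT", "LATENT"))  # True
# ...but not its VAE input.
print(connect("LATENT", "VAE"))     # False
```

This is why the ComfyUI editor only lets you drop a link onto an input socket of the matching type.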
## Basic Workflow Structure

A typical ComfyUI workflow follows this pattern:

1. Load Resources - Load checkpoint, VAE, and other models
2. Encode Prompts - Convert text to conditioning using CLIP
3. Create Latent - Generate empty latent or encode input image
4. Sample - Denoise latent using model and conditioning
5. Decode - Convert latent to pixel-space image
6. Save - Output the final image
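The steps above can be sketched as a workflow graph in ComfyUI's API (JSON) format, where each node has a `class_type` and `inputs`, and a connection is written as `[source_node_id, output_index]`. The node class names below are standard built-ins; the checkpoint filename and parameter values are placeholder assumptions you would adapt:

```python
import json

# Minimal text-to-image workflow in ComfyUI's API (JSON) format.
# CheckpointLoaderSimple outputs MODEL, CLIP, VAE at indices 0, 1, 2.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",          # 1. load resources
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",                  # 2. encode prompts
          "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",                # 3. create latent
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",                        # 4. sample
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",                       # 5. decode
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",                       # 6. save
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

# The graph can be queued by POSTing {"prompt": workflow} to a running
# server's /prompt endpoint (default http://127.0.0.1:8188/prompt).
print(json.dumps(workflow, indent=2))
```

The same graph is what the editor builds visually; exporting a workflow with "Save (API Format)" produces JSON of this shape.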
## Node Categories
### Loaders

Load models and resources into the workflow. See Loaders for details.

### Conditioning

Process text prompts and control how they guide generation. See Conditioning for details.

### Sampling

Generate images through iterative denoising. See Sampling for details.

### Latent Operations

Manipulate images in latent space. See Latent for details.

### Image Processing

Process and save pixel-space images. See Image for details.

## Node Parameters

Most nodes accept parameters that control their behavior:

- Required Parameters: Must be provided for the node to function
- Optional Parameters: Have default values but can be customized
- Advanced Parameters: Hidden by default, for fine-tuning behavior
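The required/optional split can be sketched as a small resolver (an illustrative sketch, not ComfyUI's actual validation code; the parameter names are borrowed from KSampler for flavor):

```python
# Illustrative sketch: fill in defaults for optional parameters and reject
# a call that omits a required one.

def resolve_params(spec, given):
    """spec: {"required": [names], "optional": {name: default}}; given: user values."""
    params = dict(spec.get("optional", {}))   # start from the defaults
    params.update(given)                      # user values override defaults
    missing = [n for n in spec.get("required", []) if n not in params]
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    return params

spec = {"required": ["model", "positive", "negative", "latent_image"],
        "optional": {"seed": 0, "steps": 20, "cfg": 8.0, "denoise": 1.0}}
given = {"model": "m", "positive": "p", "negative": "n",
         "latent_image": "l", "steps": 30}
print(resolve_params(spec, given)["steps"])  # 30: user value overrides default
```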
## Custom Nodes

ComfyUI supports custom nodes through the `custom_nodes` directory. Custom nodes extend functionality beyond the built-in nodes and can be installed from the community.
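A minimal custom node follows ComfyUI's discovery convention: a class declaring `INPUT_TYPES`, `RETURN_TYPES`, and `FUNCTION`, exported through a module-level `NODE_CLASS_MAPPINGS`. The node, file path, and category names below are hypothetical examples:

```python
# custom_nodes/example_invert/__init__.py (hypothetical path) - a minimal
# custom node that inverts an image.

class InvertImageExample:
    """Inverts an image (pixel values assumed normalized to [0, 1])."""

    @classmethod
    def INPUT_TYPES(cls):
        # "required" inputs must be connected or set in the UI;
        # an "optional" key can declare inputs that may be left unconnected.
        return {"required": {"image": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)   # connection type of each output socket
    FUNCTION = "invert"         # method ComfyUI calls to execute the node
    CATEGORY = "example"        # menu category shown in the UI

    def invert(self, image):
        # IMAGE data arrives as a tensor in [0, 1]; node functions return
        # a tuple matching RETURN_TYPES.
        return (1.0 - image,)

# ComfyUI scans custom_nodes/ for these module-level mappings at startup.
NODE_CLASS_MAPPINGS = {"InvertImageExample": InvertImageExample}
NODE_DISPLAY_NAME_MAPPINGS = {"InvertImageExample": "Invert Image (Example)"}
```

After restarting the server, the node appears in the UI under its `CATEGORY` and can be wired up like any built-in node.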