This guide will help you create your first AI-generated image using ComfyUI.

Prerequisites

Before you begin, make sure you have:
  • Installed ComfyUI
  • At least one checkpoint model (e.g., SD 1.5 or SDXL)
  • Basic understanding of text-to-image generation

Starting ComfyUI

Launch ComfyUI using your preferred method (for example, double-click the ComfyUI application icon). ComfyUI will start and open in your web browser at http://127.0.0.1:8188

Your First Workflow

ComfyUI loads with a default text-to-image workflow. Here’s how to use it:

Step 1: Load a checkpoint model

  1. Place your checkpoint files in the models/checkpoints/ directory
  2. Click the CheckpointLoaderSimple node
  3. Select a model from the dropdown (e.g., v1-5-pruned-emaonly.safetensors)

Step 2: Enter your prompts

Positive Prompt (CLIPTextEncode):
masterpiece, best quality, beautiful landscape, mountains, sunset
Negative Prompt (CLIPTextEncode):
blurry, low quality, distorted

Step 3: Configure generation settings

In the KSampler node:
  • Steps: 20-30 (more steps generally mean higher quality but slower generation)
  • CFG Scale: 7-8 (how closely the model follows your prompt)
  • Sampler: euler or dpmpp_2m (the sampling algorithm)
  • Scheduler: normal or karras
  • Seed: a random number (reuse the same seed to reproduce a result)

Step 4: Set image dimensions

In the EmptyLatentImage node:
  • Width: 512 or 768
  • Height: 512 or 768
  • Keep dimensions as multiples of 64
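If you are computing dimensions programmatically, the multiple-of-64 rule can be enforced with a small helper. This is a sketch; round_to_64 is a hypothetical name, not part of ComfyUI:

```python
def round_to_64(value: int) -> int:
    """Snap a pixel dimension to the nearest multiple of 64 (minimum 64)."""
    return max(64, round(value / 64) * 64)

print(round_to_64(500))  # 512
print(round_to_64(768))  # 768
```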

Step 5: Generate your image

Click Queue Prompt or press Ctrl+Enter. Watch the progress bar and wait for generation to complete.

Step 6: View and save your result

  • Your image appears in the SaveImage node
  • Images are automatically saved to output/ directory
  • Right-click the image to download or save

Understanding the Basic Workflow

The default workflow chains the following nodes together:
  • CheckpointLoaderSimple: Loads the AI model
  • CLIPTextEncode: Converts text prompts to embeddings
  • EmptyLatentImage: Creates blank latent space
  • KSampler: Generates image in latent space
  • VAEDecode: Converts latent to viewable image
  • SaveImage: Saves the final result
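The same graph can be written in ComfyUI's API (JSON) workflow format, which is what the HTTP API consumes. A minimal sketch: node IDs, the seed, and the model filename are placeholders, and each link is written as [source_node_id, output_index].

```python
# The default text-to-image graph expressed as an API-format workflow dict.
# CheckpointLoaderSimple outputs: 0 = MODEL, 1 = CLIP, 2 = VAE.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "beautiful landscape, mountains, sunset",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, low quality, distorted",
                     "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "first_image"}},
}
```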

Quick Tips

For higher quality:
  • Increase steps to 30-40
  • Use samplers like dpmpp_2m or dpmpp_sde
  • Add more descriptive keywords to your prompt
  • Use negative prompts to avoid unwanted elements

For faster generation:
  • Reduce steps to 15-20
  • Use smaller image dimensions (512x512)
  • Enable --highvram if you have enough VRAM
  • Use faster samplers like euler_a or lcm

For reproducible results:
  • Use the same seed value
  • Lock your prompt and settings
  • Save your workflow: File → Save

To explore variations:
  • Change the seed number
  • Adjust CFG scale (7-15 range)
  • Try different samplers and schedulers
  • Modify prompt keywords

Keyboard Shortcuts

Essential shortcuts to speed up your workflow:
Shortcut        Action
Ctrl+Enter      Queue prompt for generation
Ctrl+S          Save workflow
Ctrl+O          Load workflow
Space           Pan canvas
Double-Click    Open node search
Ctrl+Z          Undo
Delete          Delete selected nodes

Next Steps

  • Learn Core Concepts: understand how workflows and nodes work together
  • Explore Models: discover supported image generation models
  • Try Tutorials: follow step-by-step guides for common tasks
  • Use the API: integrate ComfyUI into your applications
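ComfyUI's HTTP API accepts an API-format workflow at the /prompt endpoint on the same address the browser UI uses. A minimal standard-library sketch; build_payload and queue_prompt are illustrative helper names, not part of ComfyUI:

```python
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow dict in the body /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> bytes:
    """POST a workflow to a running ComfyUI server; returns its JSON reply."""
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # the reply includes a prompt_id on success
```

Calling queue_prompt(workflow) with the dict from the previous section enqueues a generation just like pressing Queue Prompt in the UI.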

Troubleshooting

If you encounter issues, check the Common Issues guide or visit the Discord community for help.
Common problems:
  • No models available: Place checkpoint files in models/checkpoints/
  • Out of memory: Reduce image size or use --lowvram flag
  • Black images: Try --force-fp32 flag (GTX 16 series cards)
  • Slow generation: Enable --fast for optimizations
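The flags above are passed on the command line when launching ComfyUI from source; the exact path to main.py depends on your install:

```shell
# Reduce VRAM usage on low-memory GPUs (source install assumed).
python main.py --lowvram

# Workaround for black output images on GTX 16-series cards.
python main.py --force-fp32
```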
