Supported Hardware
ComfyUI works on:

- NVIDIA GPUs (CUDA) - Best performance and compatibility
- AMD GPUs (ROCm) - Good performance on supported cards
- Intel GPUs (Arc, Xe) - Experimental support via oneAPI/XPU
- Apple Silicon (M1/M2/M3/M4) - MPS acceleration support
- Ascend NPUs - Huawei Ascend accelerators
- Cambricon MLUs - Cambricon machine learning units
- Iluvatar Corex - Iluvatar AI accelerators
- CPU - Works on any system (slow)
NVIDIA GPUs
NVIDIA GPUs offer the best performance and compatibility with ComfyUI. CUDA-capable hardware is required.

Requirements
- NVIDIA GPU with CUDA compute capability 3.5 or higher
- Updated NVIDIA drivers
- Python 3.10+ (3.13 recommended)
- PyTorch 2.4 or newer
Installation
Install PyTorch with CUDA 13.0 (Recommended)
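The published wheel index for CUDA 13.0 can change between PyTorch releases; a typical command (the cu130 index label is an assumption, check pytorch.org for the current one) looks like:

```shell
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
```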
Install PyTorch with CUDA 12.6 (For Older GPUs)
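Assuming the standard PyTorch wheel index, the CUDA 12.6 install usually looks like:

```shell
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu126
```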
The CUDA 12.6 build is recommended for NVIDIA 10-series and older GPUs.

Install PyTorch Nightly (Latest Features)
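Nightly wheels install with pip's --pre flag from the nightly index (the cu130 label is an assumption, check pytorch.org):

```shell
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu130
```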
Portable Windows Builds
Pre-built portable packages are available for Windows:

- NVIDIA GPUs (CUDA 13.0) - Python 3.13, CUDA 13.0
- NVIDIA GPUs (CUDA 12.6) - Python 3.12, supports 10-series GPUs
Update your NVIDIA drivers if ComfyUI doesn’t start. The portable builds come with Python and PyTorch pre-configured.
Recommended Settings by VRAM
- 24GB+ VRAM
- 12GB VRAM
- 8GB VRAM
- 4-6GB VRAM
- <4GB VRAM
Best performance comes from keeping all models loaded in VRAM; for maximum performance, keep everything on the GPU.
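A rough mapping of the VRAM tiers above to ComfyUI's standard launch flags (the exact tier-to-flag pairing is an assumption):

```shell
python main.py --gpu-only   # 24GB+: maximum performance, everything stays on the GPU
python main.py --highvram   # 24GB+: keep models resident in VRAM
python main.py              # 12GB: defaults are usually fine
python main.py --lowvram    # 4-8GB: split model loading to reduce VRAM use
python main.py --novram     # <4GB: aggressive offloading (very slow)
```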
Troubleshooting
"Torch not compiled with CUDA enabled"
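This error usually means a CPU-only PyTorch build is installed. A typical fix (the cu126 index label is an assumption, match it to your CUDA version):

```shell
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu126
```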
Uninstall PyTorch, then reinstall a CUDA-enabled build.

CUDA malloc errors
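cudaMallocAsync can be turned off at launch with ComfyUI's --disable-cuda-malloc flag:

```shell
python main.py --disable-cuda-malloc
```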
Disable cudaMallocAsync if you hit these errors.

Out of memory errors
Reduce VRAM usage by launching with --lowvram or --novram.

AMD GPUs
AMD GPUs are supported through ROCm on Linux and experimental Windows builds for RDNA 3/4.

Linux (Recommended)
Most modern AMD GPUs are supported with ROCm.

Install PyTorch with ROCm 7.1 (Stable)
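Assuming the standard PyTorch wheel index, the ROCm install usually looks like (the rocm7.1 index label is an assumption, check pytorch.org for the current one):

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm7.1
```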
Install PyTorch with ROCm 7.2 (Nightly)
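Nightly ROCm wheels come from the nightly index with pip's --pre flag (the rocm7.2 label is an assumption):

```shell
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm7.2
```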
Nightly builds may bring performance improvements.

Running ComfyUI
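From the ComfyUI checkout, launching is the same as on other platforms:

```shell
cd ComfyUI
python main.py
```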
Windows (Experimental)
Experimental support covers RDNA 3, 3.5, and 4 GPUs only.

RDNA 3 (RX 7000 Series)
RDNA 3.5 (Strix Halo / Ryzen AI Max+ 365)
RDNA 4 (RX 9000 Series)
Portable Windows Build
Experimental AMD Portable Build

Unsupported AMD Cards (Linux)
For older RDNA2 or unsupported cards, use the HSA_OVERRIDE_GFX_VERSION environment variable:
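For example, for an RDNA2 card (10.3.0 is the override commonly used for RDNA2; verify the value matches your GPU):

```shell
HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py
```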
AMD ROCm Optimizations
Experimental memory-efficient attention can be enabled at launch; it is already on by default on RDNA3 GPUs.
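On recent PyTorch ROCm builds this is commonly done with an environment variable plus ComfyUI's PyTorch attention flag (treat the variable name as an assumption and check ComfyUI's README for the current form):

```shell
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
```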
Intel GPUs
Intel Arc and Xe GPUs are supported via Intel Extension for PyTorch.

Requirements
- Intel Arc GPU (A-series) or Intel Xe integrated graphics
- Windows or Linux
- Python 3.10+
Installation
Install PyTorch XPU (Stable)
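PyTorch publishes XPU wheels on a dedicated index:

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu
```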
Install PyTorch XPU (Nightly)
Nightly builds may bring performance improvements. See the PyTorch XPU documentation for more details.
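The nightly XPU wheels install similarly with pip's --pre flag:

```shell
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
```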
Running ComfyUI
Device Selection
Select a specific Intel device by restricting oneAPI device visibility (for example with the ONEAPI_DEVICE_SELECTOR environment variable).

Disable IPEX Optimizations

If you encounter issues, launch ComfyUI with the --disable-ipex-optimize flag.

Apple Silicon (M1/M2/M3/M4)
ComfyUI supports Apple Silicon Macs with MPS (Metal Performance Shaders) acceleration.

Requirements
- Mac with Apple Silicon (M1, M2, M3, or M4)
- macOS 12.0 or later
- Python 3.10+ (3.13 recommended)
Installation
- Install PyTorch with MPS support:
See Apple’s PyTorch guide for installation instructions.
- Install ComfyUI dependencies:
- Add models to the appropriate folders as described in the installation guide.
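The steps above can be sketched as follows; on Apple Silicon the default macOS arm64 PyTorch wheels already include MPS support (assumes a cloned ComfyUI repo):

```shell
pip install torch torchvision torchaudio
pip install -r requirements.txt
python main.py
```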
Running ComfyUI
Performance Tips
- Unified Memory: Apple Silicon benefits from unified memory architecture
- Model Size: Can handle larger models than GPU VRAM alone would suggest
- Performance: Expect generation to be 2-4x slower than on a comparable NVIDIA GPU, but still very capable
Recommended Models
Apple Silicon works well with:

- SD 1.5 models
- SDXL (on M2 Pro/Max/Ultra or M3/M4)
- Flux (on M3 Max/Ultra or M4 Max/Ultra with 64GB+ RAM)
Ascend NPUs
Huawei Ascend NPUs are supported via Ascend Extension for PyTorch.

Requirements
- Ascend NPU hardware
- Supported Linux kernel (see Ascend documentation)
- Ascend Basekit (driver, firmware, CANN)
Installation
- Install the recommended Linux kernel version from the torch-npu installation page
- Install Ascend Basekit (driver, firmware, and CANN) following platform-specific instructions
- Install torch-npu packages following the installation guide
- Install ComfyUI dependencies:
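The last step is the standard ComfyUI dependency install (assumes a cloned repo):

```shell
cd ComfyUI
pip install -r requirements.txt
```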
Running ComfyUI
Cambricon MLUs
Cambricon Machine Learning Units are supported via Cambricon Extension for PyTorch.

Requirements
- Cambricon MLU hardware
- Cambricon CNToolkit
- PyTorch with torch_mlu
Installation
- Install Cambricon CNToolkit following the installation guide
- Install PyTorch with torch_mlu support following the user guide
- Install ComfyUI dependencies:
Running ComfyUI
Iluvatar Corex
Iluvatar AI accelerators are supported via Iluvatar Extension for PyTorch.

Requirements
- Iluvatar Corex hardware
- Iluvatar Corex Toolkit
Installation
- Install Iluvatar Corex Toolkit following the installation guide
- Install ComfyUI dependencies:
Running ComfyUI
CPU Only
ComfyUI can run on CPU alone, though it will be significantly slower than GPU acceleration.

Requirements
- Any x86_64 or ARM CPU
- 16GB+ RAM recommended (32GB+ for larger models)
- Python 3.10+
Installation
Install the CPU-only PyTorch build.

Running ComfyUI
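A typical CPU-only setup and launch, covering both the install and run steps (standard PyTorch CPU wheel index and ComfyUI's --cpu flag):

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
python main.py --cpu
```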
Performance Expectations
- SD 1.5: 2-5 minutes per image on modern CPUs
- SDXL: 10-30 minutes per image
- Flux: Not recommended (extremely slow)
Optimization Tips
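One common CPU tuning knob is PyTorch's OpenMP thread count (the value shown is an illustrative assumption; match it to your physical core count):

```shell
# illustrative: set to your number of physical cores
export OMP_NUM_THREADS=8
python main.py --cpu
```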
Hardware Selection Guide
Which GPU Should I Buy?
See the ComfyUI GPU recommendations wiki for detailed buying advice.

Quick Recommendations
Budget
NVIDIA RTX 3060 12GB or RTX 4060 Ti 16GB

Good for SD 1.5, SDXL, and smaller models.
Enthusiast
NVIDIA RTX 4070 Ti Super 16GB or RTX 4080 16GB

Great for SDXL, Flux schnell, and most workflows.
Professional
NVIDIA RTX 4090 24GB or RTX 6000 Ada 48GB

Best for all models, including Flux dev, video generation, and heavy workflows.
Server/Multi-GPU
NVIDIA A6000 48GB or H100 80GB

Enterprise solutions for maximum performance and multi-user setups.
VRAM Requirements by Model
| Model Type | Minimum VRAM | Recommended VRAM |
|---|---|---|
| SD 1.5 | 4GB | 8GB |
| SDXL | 6GB | 12GB |
| Flux Schnell | 12GB | 16GB |
| Flux Dev | 16GB | 24GB |
| Video (SVD) | 12GB | 24GB |
| Mochi | 20GB | 32GB |
| Hunyuan Video | 32GB | 60GB |
Lower VRAM amounts can work with --lowvram, --novram, and FP8 quantization, but will be slower.

Multi-GPU Setup
ComfyUI supports multi-GPU systems.

Select Primary GPU
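ComfyUI's --cuda-device flag selects the GPU index to use:

```shell
python main.py --cuda-device 1
```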
Environment Variables
Alternatively, restrict device visibility with the CUDA_VISIBLE_DEVICES environment variable.

Benchmarking Your Hardware
To test your hardware performance:

- Load a standard workflow (e.g., SDXL text-to-image)
- Run with different settings and note generation times:
- Compare execution times in the console output
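A rough benchmarking routine, using standard ComfyUI launch options (run one invocation at a time against the same workflow and compare the console timings):

```shell
# run these one at a time with the same workflow and compare timings
python main.py
python main.py --lowvram
python main.py --use-pytorch-cross-attention
```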
Getting Help
If you encounter hardware-specific issues:

- Check the ComfyUI GitHub Issues
- Ask in the ComfyUI Discord #help channel
- Join the ComfyUI Matrix chat
When asking for help, include:

- Your hardware (GPU model, VRAM, CPU, RAM)
- Operating system and version
- PyTorch version (python -c "import torch; print(torch.__version__)")
- Full error messages and console output
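A short script can gather these details in one go (env_report is a hypothetical helper for bug reports, not part of ComfyUI; it reports PyTorch info only if torch is installed):

```python
import platform
import sys


def env_report():
    """Collect basic environment details for a hardware bug report."""
    info = {
        "os": platform.platform(),
        "python": sys.version.split()[0],
    }
    try:
        import torch  # optional: only reported if PyTorch is installed
        info["torch"] = torch.__version__
        info["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        info["torch"] = "not installed"
    return info


print(env_report())
```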