Requirements

hls4ml requires Python 3.10 or later and is compatible with Linux and macOS.
Windows is not officially supported, but may work using WSL (Windows Subsystem for Linux).

Basic Installation

The simplest way to install hls4ml is using pip:
pip install hls4ml
This installs the core package with minimal dependencies:
  • h5py - HDF5 file format support
  • numpy - Numerical computing
  • pyyaml - YAML configuration files
  • pydigitalwavetools - Waveform handling
  • quantizers - Quantization utilities
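As a quick sanity check after installation, you can verify that the core dependencies are importable (note that the pyyaml distribution installs as the yaml module); a minimal stdlib-only sketch:

```python
import importlib.util

# distribution name -> importable module name
core_deps = {"h5py": "h5py", "numpy": "numpy", "pyyaml": "yaml"}
missing = [dist for dist, mod in core_deps.items()
           if importlib.util.find_spec(mod) is None]
print("Missing core dependencies:", missing or "none")
```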
Do not use conda-forge! Previously available conda packages are outdated and unsupported. Only pip installation is currently maintained.

Development Version

hls4ml evolves rapidly with new features and bug fixes on the development branch:
pip install git+https://github.com/fastmachinelearning/hls4ml@main
The development version may have experimental features that are not yet stable. Use for testing new functionality or bug fixes not yet in a release.

Optional Dependencies

hls4ml uses optional dependency groups for specific features. Install only what you need:
Tools for analyzing model weights and activations and for finding optimal precisions:
pip install hls4ml[profiling]
Includes: matplotlib, pandas, seaborn
Use case: Generate distribution plots for weights and activations to optimize bit widths
from hls4ml.model.profiling import numerical
plots = numerical(model=keras_model, hls_model=hls_model, X=test_data)
Support for Keras 3.x (multi-backend Keras):
pip install hls4ml[keras-v3]
Includes: keras>=3.10
Keras v2 and v3 cannot coexist in the same Python environment. If you need both, use separate virtual environments.
Use case: Convert models trained with Keras 3.x (TensorFlow, JAX, or PyTorch backend)
Quantization-aware training with QKeras (Keras v2):
pip install hls4ml[qkeras]
Includes: qkeras, tensorflow>=2.8,<=2.14.1, tensorflow-model-optimization<=0.7.5
Use case: Train quantized models with QKeras for better FPGA resource efficiency
from qkeras import QDense, QActivation
# Drop-in quantized replacements for Dense/Activation layers,
# e.g. QDense(16, kernel_quantizer='quantized_bits(6, 0)')
Heterogeneous quantization with gradient-based bit-width optimization.
HGQ (Keras v2):
pip install hls4ml[hgq]
Includes: hgq>=0.2.3
HGQ2 (Keras v3):
pip install hls4ml[hgq2]
Includes: hgq2>=0.0.1
Cannot install both hgq and hgq2 simultaneously due to Keras version conflicts.
Support for ONNX model format:
pip install hls4ml[onnx]
Includes: onnx>=1.4
Use case: Convert models from PyTorch, TensorFlow, or any framework supporting ONNX export
DSP-aware pruning and hyperparameter optimization:
pip install hls4ml[optimization]
Includes: keras-tuner==1.1.3, ortools==9.4.1874, packaging
Use case: Automatically find optimal precision and reuse factor configurations
Enable distributed arithmetic for efficient computation:
pip install hls4ml[da]
Includes: da4ml>=0.5.2,<0.6
Use case: Alternative computation method that can reduce DSP usage
Symbolic regression for activation function approximation:
pip install hls4ml[sr]
Includes: sympy>=1.13.1
Use case: Find mathematical expressions to approximate complex activation functions
Parse Intel Quartus synthesis reports:
pip install hls4ml[quartus-report]
Includes: calmjs-parse, tabulate
Use case: Extract resource usage from Quartus reports when using Intel FPGAs

Installing Multiple Options

You can install multiple optional dependencies at once:
# Profiling + ONNX support
pip install hls4ml[profiling,onnx]

# Keras v3 + HGQ2 + optimization
pip install hls4ml[keras-v3,hgq2,optimization]
Conflicting dependencies: the following combinations will NOT work:
  • qkeras + hgq2 (Keras v2 vs v3)
  • keras-v3 + qkeras (Keras v3 vs v2)
  • hgq + hgq2 (Keras v2 vs v3)
Use separate virtual environments if you need both Keras v2 and v3 workflows.
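Before picking extras, you can check which Keras major version is active in the current environment. This small helper (hypothetical, not part of hls4ml) reads the installed package metadata using only the standard library:

```python
import importlib.metadata as md

def keras_major_version():
    """Return the installed Keras major version (2 or 3), or None if absent."""
    try:
        return int(md.version("keras").split(".")[0])
    except md.PackageNotFoundError:
        return None

print(f"Keras major version: {keras_major_version()}")
```

A return value of 2 means the qkeras and hgq extras are usable; 3 means keras-v3 and hgq2.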

HLS Tool Installation

To synthesize FPGA designs (not just C simulation), you need vendor HLS tools:

Vivado Backend

Vivado HLS 2020.1 or later recommended
# Download from AMD/Xilinx website
# After installation, source the settings:
source /tools/Xilinx/Vivado/2020.1/settings64.sh

Vitis Backend

Vitis HLS 2022.2 or later required
# Download Vitis from AMD/Xilinx website  
# Source the settings:
source /tools/Xilinx/Vitis/2022.2/settings64.sh
Vitis backend is recommended over Vivado for new projects. It supports newer devices and has improved optimization.

C/C++ Compiler Requirements

For C simulation (running hls_model.compile() and hls_model.predict()), you need:

Linux

# Most distributions include g++, verify with:
g++ --version

# If not installed:
sudo apt-get install g++  # Debian/Ubuntu
sudo yum install gcc-c++   # RHEL/CentOS

macOS

# Install Xcode command line tools:
xcode-select --install

# If clang-based g++ has issues with ap_types, install GCC:
brew install gcc
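On either platform, you can verify that a C++ compiler is visible on PATH before calling hls_model.compile(); shutil.which from the standard library performs the lookup:

```python
import shutil

# look for common C++ compilers on PATH
compiler = shutil.which("g++") or shutil.which("clang++")
if compiler:
    print(f"C++ compiler found: {compiler}")
else:
    print("No C++ compiler found; install g++ or clang++ first")
```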

oneAPI Backend

For the oneAPI backend, C/SYCL simulation requires the Intel oneAPI compiler:
# After installing the oneAPI toolkit, source the environment:
source /opt/intel/oneapi/setvars.sh

# oneAPI 2025.0 required (2025.1 is known not to work)

Verification

Verify your installation:
import hls4ml
print(f"hls4ml version: {hls4ml.__version__}")

# Check available converters
from hls4ml import converters
print("Converters loaded successfully")

# Test basic functionality
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(10, input_shape=(5,))])
config = hls4ml.utils.config_from_keras_model(model)
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, backend='Vivado'
)
print("✓ Installation successful!")

Python Version Support

hls4ml officially supports:
  • Python 3.10
  • Python 3.11
  • Python 3.12
  • Python 3.13
  • Python 3.14
Older Python versions (3.9 and below) are not supported as of hls4ml 1.1.0+.

Virtual Environments

We strongly recommend using virtual environments:
# Create virtual environment
python -m venv hls4ml-env

# Activate (Linux/macOS)
source hls4ml-env/bin/activate

# Activate (Windows)
hls4ml-env\Scripts\activate

# Install hls4ml
pip install hls4ml[profiling,onnx]

Troubleshooting

TensorFlow is optional - only install if using Keras v2:
pip install tensorflow>=2.8,<=2.14.1
PyTorch is optional - only install if converting PyTorch models:
# Follow instructions at pytorch.org for your system
pip install torch torchvision
Keras v2 and Keras v3 cannot both be installed in the same environment. Create separate environments:
# Environment 1: Keras v2
python -m venv keras2-env
source keras2-env/bin/activate
pip install hls4ml[qkeras]

# Environment 2: Keras v3
python -m venv keras3-env
source keras3-env/bin/activate
pip install hls4ml[keras-v3,hgq2]
Ensure HLS tools are in your PATH:
# Check Vivado HLS
which vivado_hls

# Check Vitis HLS
which vitis_hls

# If not found, source the settings file:
source /tools/Xilinx/Vivado/2020.1/settings64.sh
On macOS, the system clang may not work with the Xilinx ap_types headers. Install GCC:
brew install gcc

# Set compiler explicitly
export CXX=g++-11  # adjust version number

Uninstalling

To remove hls4ml:
pip uninstall hls4ml

# Also remove cache
rm -rf ~/.hls4ml

Next Steps

Quickstart

Build your first FPGA inference model

Tutorials

Hands-on Jupyter notebooks

Build docs developers (and LLMs) love