
Welcome to hls4ml

hls4ml is a Python package for machine learning inference in FPGAs. It creates firmware implementations of machine learning algorithms using high-level synthesis (HLS), translating models from popular open-source machine learning frameworks into HLS code that can be configured for your use case.

Ultra-Low-Latency Inference

hls4ml is designed for ultra-low-latency inference on FPGAs, achieving inference times below 1 microsecond. While it has strong roots in high-energy physics applications (e.g., L1 trigger systems at the CERN Large Hadron Collider), it has also been adopted across diverse scientific and industrial domains.

Real-World Applications

High-Energy Physics

L1 trigger systems at CERN’s Large Hadron Collider for real-time particle detection

Quantum Computing

Control systems for quantum computing with ultra-low-latency requirements

Nuclear Fusion

Feedback loops in nuclear fusion reactors requiring microsecond response times

Satellite Systems

Low-power environmental monitoring on satellites with limited resources

Biomedical

Biomedical signal processing including arrhythmia classification

Autonomous Systems

Real-time inference for autonomous vehicles and robotics

Key Features

Multi-Framework Support

Convert models from Keras, PyTorch, and ONNX to optimized FPGA firmware

Multiple HLS Backends

Support for Vivado, Vitis, Quartus, Catapult, and oneAPI

Precision Optimization

Automated precision optimization and quantization for resource efficiency

Profiling Tools

Built-in profiling and performance analysis tools

Plugin System

Extensible architecture for custom layers and operations

Open Source

Apache 2.0 licensed with an active community
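The precision optimization mentioned above is driven by a nested configuration dictionary. Below is a hedged sketch of what such a configuration can look like; the key names follow hls4ml's configuration conventions, but the specific values are illustrative examples, not defaults:

```python
# Illustrative hls4ml-style configuration: fixed-point precision and
# reuse factor chosen at model granularity (example values only).
hls_config = {
    'Model': {
        'Precision': 'ap_fixed<16,6>',  # 16 bits total, 6 integer bits
        'ReuseFactor': 1,               # 1 = fully parallel multipliers
        'Strategy': 'Latency',          # optimize for latency vs. resources
    },
}

def total_bits(precision: str) -> int:
    """Parse the total bit width from an ap_fixed<W,I> type string."""
    inner = precision.split('<', 1)[1].rstrip('>')
    return int(inner.split(',')[0])

print(total_bits(hls_config['Model']['Precision']))  # 16
```

Shrinking the fixed-point width trades accuracy for FPGA resources, which is why profiling the precision per layer (at finer configuration granularity) matters for resource efficiency.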

Get Started

Quickstart

Get up and running with hls4ml in minutes

Installation

Install hls4ml and its dependencies

Core Concepts

Learn about model conversion and HLS synthesis

API Reference

Explore the complete API documentation

Community and Support

If you have any questions, comments, or ideas regarding hls4ml, or want to show us how you use hls4ml, please reach out through our GitHub Discussions.
For introductory material on FPGAs, HLS, and ML inference using hls4ml, check out this video tutorial.

Citing hls4ml

If you use hls4ml in your research, please cite:
@software{fastml_hls4ml,
  author       = {{FastML Team}},
  title        = {fastmachinelearning/hls4ml},
  year         = 2025,
  publisher    = {Zenodo},
  version      = {v1.2.0},
  doi          = {10.5281/zenodo.1201549},
  url          = {https://github.com/fastmachinelearning/hls4ml}
}
And the original publication:
@article{Duarte:2018ite,
    author = "Duarte, Javier and others",
    title = "{Fast inference of deep neural networks in FPGAs for particle physics}",
    eprint = "1804.06913",
    archivePrefix = "arXiv",
    primaryClass = "physics.ins-det",
    doi = "10.1088/1748-0221/13/07/P07027",
    journal = "JINST",
    volume = "13",
    number = "07",
    pages = "P07027",
    year = "2018"
}