The ONNX frontend converts ONNX (Open Neural Network Exchange) models into optimized FPGA firmware. Because many frameworks can export to ONNX, this frontend provides an interoperable path from PyTorch, TensorFlow, scikit-learn, and others.
```python
# Check which operations are in your model
import onnx

model = onnx.load('model.onnx')
ops = set()
for node in model.graph.node:
    ops.add(node.op_type)
print("Operations in model:", ops)

# Get supported operations
from hls4ml.converters import get_supported_onnx_layers
print("Supported operations:", get_supported_onnx_layers())

# Find unsupported operations
unsupported = ops - set(get_supported_onnx_layers())
print("Unsupported operations:", unsupported)
```
Solutions:

- Use a different opset version when exporting
- Simplify the model to avoid unsupported operations
- Implement a custom layer handler
Channels format error for convolutions
Convolutions require channels-last format:
```python
# Error: "Please convert the model to channels-last format"
# Solution: use the QONNX channels-last transformation
import onnx
import qonnx.util.to_channels_last

import hls4ml

qonnx.util.to_channels_last.to_channels_last(
    'model.onnx',
    make_input_channels_last=True,
    out_file='model_channels_last.onnx',
)

# Then convert
onnx_model = onnx.load('model_channels_last.onnx')
hls_model = hls4ml.converters.convert_from_onnx_model(onnx_model, hls_config=config)
```
Model validation errors
Validate ONNX model before conversion:
```python
import onnx

# Load and check the model
model = onnx.load('model.onnx')
try:
    onnx.checker.check_model(model)
    print("Model is valid")
except onnx.checker.ValidationError as e:
    print(f"Model is invalid: {e}")

# Check for common issues
print(f"IR version: {model.ir_version}")
print(f"Opset version: {model.opset_import[0].version}")

# Fix missing shape information with shape inference
from onnx import shape_inference
inferred_model = shape_inference.infer_shapes(model)
onnx.save(inferred_model, 'model_inferred.onnx')
```
Input/output shape issues
Verify input and output shapes:
```python
import onnx

model = onnx.load('model.onnx')

# Check input shapes
for inp in model.graph.input:
    print(f"Input {inp.name}:")
    shape = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(f"  Shape: {shape}")

# Check output shapes
for out in model.graph.output:
    print(f"Output {out.name}:")
    shape = [d.dim_value for d in out.type.tensor_type.shape.dim]
    print(f"  Shape: {shape}")

# Note: hls4ml removes the batch dimension automatically,
# so an ONNX input shape [1, 10] becomes [10] in hls4ml
```
QONNX preprocessing issues
Proper QONNX model preprocessing:
```python
import onnx
import qonnx.util.cleanup
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.gemm_to_matmul import GemmToMatMul
from qonnx.util.to_channels_last import to_channels_last

import hls4ml

# Full preprocessing pipeline

# 1. Convert Gemm to MatMul (save the transformed model so later
#    steps operate on it, not on the original file)
model = ModelWrapper('qonnx_model.onnx')
model = model.transform(GemmToMatMul())
model.save('step0_gemm_to_matmul.onnx')

# 2. Cleanup
qonnx.util.cleanup.cleanup(
    'step0_gemm_to_matmul.onnx',
    out_file='step1_clean.onnx',
)

# 3. Convert to channels-last
to_channels_last(
    'step1_clean.onnx',
    make_input_channels_last=True,
    out_file='step2_channels_last.onnx',
)

# 4. Final cleanup
qonnx.util.cleanup.cleanup(
    'step2_channels_last.onnx',
    out_file='final_model.onnx',
)

# Now convert to hls4ml
onnx_model = onnx.load('final_model.onnx')
hls_model = hls4ml.converters.convert_from_onnx_model(onnx_model, hls_config=config)
```