Classification Types
ClassificationProps
Props for the useClassification hook.
Model configuration object
Boolean that prevents automatic model loading (and, on first use, downloading the model data) when the hook runs.
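As a sketch, the two props above can be modeled as a plain TypeScript shape; the field names `model` and `preventLoad` are illustrative assumptions, not confirmed API names — check the library's exported types for the exact ones.

```typescript
// Hypothetical shape of ClassificationProps; field names are assumptions
// for illustration only.
interface ClassificationProps {
  model: { modelSource: string }; // model configuration object
  preventLoad?: boolean;          // skip automatic load/download when the hook runs
}

const props: ClassificationProps = {
  model: { modelSource: "https://example.com/classifier.pte" },
  preventLoad: true, // model is not fetched until explicitly requested
};

console.log(props.preventLoad); // → true
```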
ClassificationType
Return type for the useClassification hook. Manages the state and operations for Computer Vision image classification.
Contains the error object if the model failed to load, download, or encountered a runtime error during classification.
Indicates whether the classification model is loaded and ready to process images.
Indicates whether the model is currently processing an image.
Represents the download progress of the model binary as a value between 0 and 1.
Executes the model’s forward pass to classify the provided image.

Parameters:
imageSource (string) - A string representing the image source (e.g., a file path, URI, or base64 string) to be classified.
Throws: RnExecutorchError if the model is not loaded or is currently processing another image.

Object Detection Types
Bbox
Represents a bounding box for a detected object in an image.

The x-coordinate of the bottom-left corner of the bounding box.
The y-coordinate of the bottom-left corner of the bounding box.
The x-coordinate of the top-right corner of the bounding box.
The y-coordinate of the top-right corner of the bounding box.
Detection
Represents a detected object within an image, including its bounding box, label, and confidence score.

The bounding box of the detected object, defined by its top-left (x1, y1) and bottom-right (x2, y2) coordinates.
The class label of the detected object.
The confidence score of the detection, typically ranging from 0 to 1.
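The shapes above can be sketched as plain TypeScript types, together with a small helper that drops low-confidence detections; the field names `bbox`, `label`, and `score` are assumptions for illustration, not confirmed API names.

```typescript
// Local sketches of the Bbox and Detection shapes described above.
interface Bbox {
  x1: number;
  y1: number;
  x2: number;
  y2: number;
}

interface Detection {
  bbox: Bbox;
  label: string;
  score: number; // confidence, typically 0..1
}

// Keep only detections at or above a confidence threshold.
function filterDetections(detections: Detection[], threshold = 0.7): Detection[] {
  return detections.filter((d) => d.score >= threshold);
}

const results: Detection[] = [
  { bbox: { x1: 0, y1: 0, x2: 10, y2: 10 }, label: "cat", score: 0.91 },
  { bbox: { x1: 5, y1: 5, x2: 20, y2: 20 }, label: "dog", score: 0.42 },
];

console.log(filterDetections(results).map((d) => d.label)); // → ["cat"]
```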
ObjectDetectionModelSources
Per-model config for ObjectDetectionModule.fromModelName. Each model name maps to its required fields.
ObjectDetectionModelName
Union of all built-in object detection model names.

ObjectDetectionConfig
Configuration for a custom object detection model.

The label map for the model.
Optional preprocessor configuration.
ObjectDetectionProps
Props for the useObjectDetection hook.
The model config containing modelName and modelSource.

Boolean that prevents automatic model loading (and, on first use, downloading the model data) when the hook runs.
ObjectDetectionType
Return type for the useObjectDetection hook. Manages the state and operations for Computer Vision object detection tasks.
Contains the error object if the model failed to load, download, or encountered a runtime error during detection.
Indicates whether the object detection model is loaded and ready to process images.
Indicates whether the model is currently processing an image.
Represents the download progress of the model binary as a value between 0 and 1.
Executes the model’s forward pass with automatic input type detection.

Parameters:
input (string | PixelData) - Image source (string path/URI or PixelData object)
detectionThreshold (number, optional) - An optional number between 0 and 1 representing the minimum confidence score. Default is 0.7.
Returns: Detection objects.

Throws: RnExecutorchError if the model is not loaded or is currently processing another image.

Synchronous worklet function for real-time VisionCamera frame processing. Automatically handles native buffer extraction and cleanup. Use this for VisionCamera frame processing in worklets. For async processing, use forward() instead. Available after the model is loaded (isReady: true).

Parameters:
frame (Frame) - VisionCamera Frame object
detectionThreshold (number) - The threshold for detection sensitivity.
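Since detections come back as corner coordinates (x1, y1, x2, y2), drawing overlays on camera frames usually requires an {x, y, width, height} rectangle. A minimal converter might look like the following; it is illustrative post-processing, not part of the library API.

```typescript
// Sketch of a Bbox as documented above (corner coordinates).
interface Bbox {
  x1: number;
  y1: number;
  x2: number;
  y2: number;
}

// Convert corner coordinates into an origin-plus-size rectangle,
// tolerating either corner ordering.
function toRect(b: Bbox): { x: number; y: number; width: number; height: number } {
  return {
    x: Math.min(b.x1, b.x2),
    y: Math.min(b.y1, b.y2),
    width: Math.abs(b.x2 - b.x1),
    height: Math.abs(b.y2 - b.y1),
  };
}

console.log(toRect({ x1: 10, y1: 20, x2: 110, y2: 220 }));
// → { x: 10, y: 20, width: 100, height: 200 }
```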
Semantic Segmentation Types
SemanticSegmentationConfig
Configuration for a custom semantic segmentation model.

The enum-like object mapping class names to indices.
Optional preprocessing parameters.
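A hypothetical custom config might look like the following; the field names `labels` and `preprocessorConfig` are assumptions for illustration, so consult the exported SemanticSegmentationConfig type for the real ones.

```typescript
// Illustrative config sketch -- field names are assumptions, not confirmed API.
// The enum-like object maps class names to indices.
const labels = { BACKGROUND: 0, PERSON: 1, SKY: 2 } as const;

const config = {
  labels,                 // enum-like object: class name -> index
  preprocessorConfig: {}, // optional preprocessing parameters
};

console.log(config.labels.PERSON); // → 1
```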
SemanticSegmentationModelSources
Per-model config for SemanticSegmentationModule.fromModelName. Each model name maps to its required fields.
SemanticSegmentationModelName
Union of all built-in semantic segmentation model names.

DeeplabLabel
Labels used in the DeepLab semantic segmentation model.

SelfieSegmentationLabel
Labels used in the selfie semantic segmentation model.

SemanticSegmentationProps
Props for the useSemanticSegmentation hook.
The model config containing modelName and modelSource.

Boolean that prevents automatic model loading (and, on first use, downloading the model data) when the hook runs.
SemanticSegmentationType
Return type for the useSemanticSegmentation hook. Manages the state and operations for semantic segmentation models.
Contains the error object if the model failed to load, download, or encountered a runtime error during segmentation.
Indicates whether the segmentation model is loaded and ready to process images.
Indicates whether the model is currently processing an image.
Represents the download progress of the model binary as a value between 0 and 1.
Executes the model’s forward pass to perform semantic segmentation on the provided image.

Parameters:
imageSource (string) - A string representing the image source (e.g., a file path, URI, or base64 string) to be processed.
classesOfInterest (K[], optional) - An optional array of label keys indicating which per-class probability masks to include in the output. ARGMAX is always returned regardless.
resizeToInput (boolean, optional) - Whether to resize the output masks to the original input image dimensions. If false, returns the raw model output dimensions. Defaults to true.
Returns: 'ARGMAX' Int32Array of per-pixel class indices, and each requested class label mapped to a Float32Array of per-pixel probabilities.

Throws: RnExecutorchError if the model is not loaded or is currently processing another image.
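Because the 'ARGMAX' mask is a flat Int32Array of class indices, a common post-processing step is tallying pixels per class, e.g. to find the dominant class in the frame. This helper is self-contained and independent of the library.

```typescript
// Count how many pixels belong to each class index in an ARGMAX mask.
function classHistogram(argmax: Int32Array): Map<number, number> {
  const counts = new Map<number, number>();
  for (let i = 0; i < argmax.length; i++) {
    const cls = argmax[i];
    counts.set(cls, (counts.get(cls) ?? 0) + 1);
  }
  return counts;
}

// A tiny 2x3 mask: four pixels of class 0, two of class 15.
const mask = Int32Array.from([0, 0, 15, 0, 15, 0]);
const counts = classHistogram(mask);
console.log(counts.get(0), counts.get(15)); // → 4 2
```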