
Overview

ClassificationModule provides a class-based interface for image classification tasks. It loads classification models and performs inference on images to predict categories with confidence scores.

When to Use

Use ClassificationModule when:
  • You need to manage the model lifecycle manually
  • You’re working outside React components
  • You need multiple classification model instances
  • You want direct control over loading and inference
Use useClassification hook when:
  • Building React components
  • You want automatic lifecycle management and cleanup
  • You prefer declarative state management with loading states
  • You need React state integration

Extends

ClassificationModule extends BaseModule, inheriting core functionality like generateFromFrame() for real-time camera frame processing.

Constructor

new ClassificationModule()
Creates a new classification module instance.

Example

import { ClassificationModule } from 'react-native-executorch';

const classifier = new ClassificationModule();

Methods

load()

async load(
  model: { modelSource: ResourceSource },
  onDownloadProgressCallback?: (progress: number) => void
): Promise<void>
Loads the classification model from the specified source.

Parameters

model.modelSource
ResourceSource
required
Resource location of the model binary. Accepts a remote URL, a local file path, or a require() statement for a bundled asset.
onDownloadProgressCallback
(progress: number) => void
Optional callback to monitor download progress (value between 0 and 1).

Example

await classifier.load(
  { modelSource: 'https://example.com/mobilenet_v3.pte' },
  (progress) => {
    console.log(`Download: ${(progress * 100).toFixed(1)}%`);
  }
);
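The progress callback fires on every download chunk, which can flood the console for large models. One option is to throttle reporting so it only fires when progress advances by a chosen step. The helper below is illustrative, not part of the library:

```typescript
// Illustrative helper (not part of react-native-executorch): wraps a
// progress handler so it fires only when progress has advanced by at
// least `step`, and always fires once more when progress reaches 1.
function throttleProgress(
  handler: (progress: number) => void,
  step = 0.1
): (progress: number) => void {
  let last = -Infinity;
  return (progress) => {
    if (progress >= last + step || (progress >= 1 && last < 1)) {
      last = progress;
      handler(progress);
    }
  };
}
```

The wrapped handler can then be passed as the second argument to load(), e.g. `throttleProgress((p) => console.log(`${(p * 100).toFixed(0)}%`), 0.25)`.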

forward()

async forward(imageSource: string): Promise<{ [category: string]: number }>
Executes the model’s forward pass on the provided image.

Parameters

imageSource
string
required
The image source to classify. Can be:
  • A file path (e.g., '/path/to/image.jpg')
  • A URI (e.g., 'file:///path/to/image.jpg' or 'https://example.com/image.jpg')
  • A Base64-encoded string (e.g., 'data:image/jpeg;base64,...')

Returns

An object mapping category names to confidence scores between 0 and 1.

Example

const results = await classifier.forward('file:///path/to/cat.jpg');
console.log(results);
// {
//   'tabby cat': 0.87,
//   'tiger cat': 0.08,
//   'Egyptian cat': 0.03,
//   ...
// }

// Find top prediction
const topCategory = Object.entries(results)
  .sort(([, a], [, b]) => b - a)[0];
console.log(`Top: ${topCategory[0]} (${(topCategory[1] * 100).toFixed(1)}%)`);
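The sort-and-slice pattern above generalizes to a small standalone helper. topK below is hypothetical (not part of the library); it works on any forward() result object:

```typescript
// Hypothetical helper (not part of react-native-executorch): extracts the
// top-k entries from a forward() result object, sorted by descending score.
type ClassificationResult = { [category: string]: number };

function topK(results: ClassificationResult, k: number): [string, number][] {
  return Object.entries(results)
    .sort(([, a], [, b]) => b - a)
    .slice(0, k);
}

// Example with a mock result object:
const mock: ClassificationResult = {
  'tabby cat': 0.87,
  'tiger cat': 0.08,
  'Egyptian cat': 0.03,
};
console.log(topK(mock, 2));
// → [['tabby cat', 0.87], ['tiger cat', 0.08]]
```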

delete()

delete(): void
Unloads the model from memory and releases native resources. Always call this when done with the model to prevent memory leaks.

Example

classifier.delete();

generateFromFrame()

generateFromFrame(frameData: Frame, ...args: any[]): any
Inherited from BaseModule. Processes a camera frame directly for real-time inference. This method is bound to a native JSI function after calling load(), making it worklet-compatible and safe to call from VisionCamera’s frame processor thread.

Parameters

frameData
Frame
required
Frame data object containing either nativeBuffer (zero-copy) or data (an ArrayBuffer), along with width and height.
args
any[]
Additional model-specific arguments.

Returns

Classification results in the same format as forward().

Example

import { useFrameOutput } from 'react-native-vision-camera';

// `classifier` is a ClassificationModule instance whose load() has
// already resolved; generateFromFrame() is only bound after loading.
const frameOutput = useFrameOutput({
  pixelFormat: 'rgb',
  onFrame(frame) {
    'worklet';
    const nativeBuffer = frame.getNativeBuffer();
    const results = classifier.generateFromFrame(
      { 
        nativeBuffer: nativeBuffer.pointer, 
        width: frame.width, 
        height: frame.height 
      }
    );
    nativeBuffer.release();
    frame.dispose();
    
    console.log('Real-time classification:', results);
  }
});

Complete Example

import { ClassificationModule } from 'react-native-executorch';
import RNFS from 'react-native-fs';

class ImageClassifier {
  private classifier: ClassificationModule;

  constructor() {
    this.classifier = new ClassificationModule();
  }

  async initialize() {
    await this.classifier.load(
      { modelSource: 'https://example.com/mobilenet.pte' },
      (progress) => {
        console.log(`Loading model: ${(progress * 100).toFixed(0)}%`);
      }
    );
    console.log('Model ready!');
  }

  async classifyImage(imagePath: string) {
    const results = await this.classifier.forward(imagePath);
    
    // Get top 5 predictions
    const top5 = Object.entries(results)
      .sort(([, a], [, b]) => b - a)
      .slice(0, 5)
      .map(([category, score]) => ({
        category,
        confidence: (score * 100).toFixed(2) + '%'
      }));
    
    return top5;
  }

  cleanup() {
    this.classifier.delete();
  }
}

// Usage
const classifier = new ImageClassifier();
await classifier.initialize();

const predictions = await classifier.classifyImage('/path/to/image.jpg');
console.log('Predictions:', predictions);
// [
//   { category: 'golden retriever', confidence: '92.45%' },
//   { category: 'Labrador retriever', confidence: '5.32%' },
//   ...
// ]

classifier.cleanup();

Performance Considerations

  • For real-time camera processing, use generateFromFrame() with VisionCamera’s zero-copy path
  • For single images, use forward() with a file path, URI, or Base64-encoded string
  • Always call delete() when done to free memory
  • Consider using the useClassification hook for React components to get automatic cleanup
