Installation
First, ensure you have the package installed:
npm install react-native-executorch
Basic Usage
The useTextEmbeddings hook manages the text embeddings model lifecycle and provides methods to generate embeddings.
import { useTextEmbeddings, ALL_MINILM_L6_V2 } from 'react-native-executorch';
import { View, Text, Button } from 'react-native';
function MyComponent() {
const model = useTextEmbeddings({ model: ALL_MINILM_L6_V2 });
const generateEmbedding = async () => {
if (!model.isReady) return;
try {
const embedding = await model.forward('Hello world');
console.log('Embedding dimensions:', embedding.length);
// Output: Embedding dimensions: 384
} catch (error) {
console.error('Error:', error);
}
};
return (
<View>
<Text>Model Status: {model.isReady ? 'Ready' : 'Loading...'}</Text>
<Button title="Generate" onPress={generateEmbedding} />
</View>
);
}
API Reference
useTextEmbeddings
React hook for managing a text embeddings model instance.
useTextEmbeddings(props: TextEmbeddingsProps): TextEmbeddingsType
Parameters

model: Configuration object containing the model and tokenizer sources
  modelSource: The source of the text embeddings model binary (URL or local file)
  tokenizerSource: The source of the tokenizer JSON file (URL or local file)
preventLoad (optional): Prevents automatic model loading on mount. Useful for conditional loading.
Returns

error: Contains error information if model loading or inference fails
isReady: Indicates whether the model has successfully loaded and is ready for inference
isGenerating: Indicates whether the model is currently generating embeddings
downloadProgress: Download progress value between 0 and 1
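Taken together, the returned object can be sketched as the following TypeScript shape. The field names come from the examples in this guide; the interface name and the exact `error` shape are our assumptions, so check the library's exported `TextEmbeddingsType` for the authoritative definition.

```typescript
// Sketch of the object returned by useTextEmbeddings, based on the
// fields used throughout this guide. Not the library's actual type.
interface TextEmbeddingsSketch {
  error: { message: string; code?: string | number } | null; // set on load/inference failure
  isReady: boolean;          // true once the model has loaded
  isGenerating: boolean;     // true while an embedding is being computed
  downloadProgress: number;  // 0..1 while the model downloads
  forward: (input: string) => Promise<Float32Array>; // run inference
}

// A mock object conforming to the sketch, to illustrate the shape.
const mock: TextEmbeddingsSketch = {
  error: null,
  isReady: true,
  isGenerating: false,
  downloadProgress: 1,
  forward: async () => new Float32Array(384).fill(0), // 384-dim, as for MiniLM
};
```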
forward
(input: string) => Promise<Float32Array>
Generates embeddings for the provided text input.

Parameters:
input (string): The text to embed

Returns: Promise resolving to a Float32Array containing the embedding vector
Throws: RnExecutorchError if the model is not loaded or is currently processing
Model Configuration
The library provides pre-configured model constants:
import {
useTextEmbeddings,
ALL_MINILM_L6_V2,
CLIP_VIT_BASE_PATCH32_TEXT
} from 'react-native-executorch';
// MiniLM model (384-dimensional embeddings)
const miniLMModel = useTextEmbeddings({ model: ALL_MINILM_L6_V2 });
// CLIP text model (512-dimensional embeddings)
const clipModel = useTextEmbeddings({ model: CLIP_VIT_BASE_PATCH32_TEXT });
Using Custom Models
You can use your own models by providing custom sources:
const model = useTextEmbeddings({
model: {
modelSource: 'https://example.com/my-model.pte',
tokenizerSource: 'https://example.com/tokenizer.json'
}
});
Handling Model State
Loading Progress
Monitor download progress while the model loads:
function ModelStatus() {
const model = useTextEmbeddings({ model: ALL_MINILM_L6_V2 });
const getStatusText = () => {
if (model.error) {
return `Error: ${model.error.message}`;
}
if (!model.isReady) {
return `Loading model ${(model.downloadProgress * 100).toFixed(2)}%`;
}
return model.isGenerating ? 'Generating...' : 'Model is ready';
};
return <Text>{getStatusText()}</Text>;
}
Conditional Loading
Defer model loading until needed:
function ConditionalEmbeddings() {
const [shouldLoad, setShouldLoad] = useState(false);
const model = useTextEmbeddings({
model: ALL_MINILM_L6_V2,
preventLoad: !shouldLoad
});
return (
<View>
<Button
title="Load Model"
onPress={() => setShouldLoad(true)}
/>
{model.isReady && <Text>Model ready!</Text>}
</View>
);
}
Error Handling
const model = useTextEmbeddings({ model: ALL_MINILM_L6_V2 });
if (model.error) {
return (
<View>
<Text>Failed to load model</Text>
<Text>Error: {model.error.message}</Text>
<Text>Code: {model.error.code}</Text>
</View>
);
}
Generating Embeddings
Single Text Embedding
const embedding = await model.forward('This is a sample sentence');
console.log('Vector length:', embedding.length); // 384 for MiniLM
console.log('First 5 values:', embedding.slice(0, 5));
Batch Processing
Generate embeddings for multiple texts:
const texts = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.'
];
const embeddings = [];
for (const text of texts) {
const embedding = await model.forward(text);
embeddings.push({ text, embedding });
}
console.log(`Generated ${embeddings.length} embeddings`);
The forward() method processes one text at a time. For multiple texts, call it sequentially in a loop.
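Once you have embeddings for several texts, a common next step is comparing them with cosine similarity, which is the basis of the semantic search use case. A minimal sketch; `cosineSimilarity` is a plain TypeScript helper we define here, not part of react-native-executorch:

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  if (a.length !== b.length) {
    throw new Error('Embeddings must have the same dimensionality');
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// With the embeddings collected in the loop above:
// const score = cosineSimilarity(embeddings[0].embedding, embeddings[1].embedding);
// Scores close to 1 indicate semantically similar texts.
```

For the three sample sentences above, the two weather-related texts should score noticeably higher against each other than against the sentence about driving.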
With State Management
function EmbeddingGenerator() {
const model = useTextEmbeddings({ model: ALL_MINILM_L6_V2 });
const [input, setInput] = useState('');
const [embedding, setEmbedding] = useState<Float32Array | null>(null);
const handleGenerate = async () => {
if (!model.isReady || !input.trim()) return;
try {
const result = await model.forward(input);
setEmbedding(result);
} catch (error) {
console.error('Generation failed:', error);
}
};
return (
<View>
<TextInput
value={input}
onChangeText={setInput}
placeholder="Enter text..."
/>
<Button
title="Generate Embedding"
onPress={handleGenerate}
disabled={!model.isReady || model.isGenerating}
/>
{embedding && (
<Text>Embedding dimensions: {embedding.length}</Text>
)}
</View>
);
}
Best Practices
Check isReady before inference
Always verify the model is ready before calling forward():

if (!model.isReady) {
console.log('Model not ready yet');
return;
}
const embedding = await model.forward(text);
Show loading feedback

Provide feedback during model download and loading:

{!model.isReady && (
<ActivityIndicator />
)}
{model.isReady && (
<Button title="Generate" onPress={handleGenerate} />
)}
Handle inference errors

Wrap inference calls in try-catch blocks:

try {
const embedding = await model.forward(text);
} catch (error) {
console.error('Inference error:', error);
// Handle error appropriately
}
Store embeddings efficiently
Embeddings are Float32Arrays. Store them in state or cache for reuse:

const [cache, setCache] = useState<Map<string, Float32Array>>(new Map());
const getEmbedding = async (text: string) => {
if (cache.has(text)) {
return cache.get(text)!;
}
const embedding = await model.forward(text);
setCache(prev => new Map(prev).set(text, embedding));
return embedding;
};
Next Steps
Semantic Search
Build a semantic search feature with embeddings
Overview
Learn more about text embeddings concepts