Overview
The useVerticalOCR hook manages a vertical OCR instance for detecting and recognizing text oriented vertically. It’s specifically designed for languages and scenarios where text is written top-to-bottom.
Import
import { useVerticalOCR } from 'react-native-executorch';
Hook Signature
const verticalOcr = useVerticalOCR({
  model,
  independentCharacters,
  preventLoad,
}: VerticalOCRProps): OCRType
Parameters
model: Object containing model sources and configuration
  detectorSource: Source location of the text detector model binary (.pte)
  recognizerSource: Source location of the text recognizer model binary (.pte)
  language: Language configuration for OCR (e.g., 'ch' for Chinese)
independentCharacters: If true, treats each character independently during recognition. Useful for languages where characters don't form continuous words.
preventLoad: If true, prevents automatic model loading and downloading when the hook mounts.
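These parameters are passed to the hook as a single props object. A minimal configuration sketch is shown below; the URLs are placeholders (not real model locations), and it is hedged as an illustration of the documented fields rather than the library's exact typing:

```typescript
// Hypothetical props object matching the documented parameters.
// The URLs below are placeholders, not real model locations.
const verticalOcrProps = {
  model: {
    detectorSource: 'https://example.com/vertical-detector.pte',
    recognizerSource: 'https://example.com/vertical-recognizer.pte',
    language: 'ch', // Chinese
  },
  independentCharacters: true, // recognize isolated characters
  preventLoad: true, // skip automatic download/load on mount
};
```

Setting preventLoad is useful when you want to gate the (potentially large) model download behind user consent or a Wi-Fi check rather than triggering it as soon as the component mounts.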
Return Value
Returns an object with the following properties and methods:
State Properties
isReady: Indicates whether both detector and recognizer models are loaded and ready to process images.
isGenerating: Indicates whether the OCR pipeline is currently processing an image.
downloadProgress: Combined download progress as a value between 0 and 1.
error: Contains error details if the models fail to load or encounter an error during OCR.
Methods
forward(imageSource: string): Promise<OCRDetection[]>
Executes the complete vertical OCR pipeline on the provided image.
imageSource: Image source as a file path, URI, or base64 string
Returns a promise that resolves to an array of OCRDetection objects.
Types
See useOCR for shared type definitions (OCRDetection, Point).
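The examples below read three fields from each detection: text, bbox, and score. As an illustration only (inferred from that usage, not the library's authoritative declarations, which live with useOCR), the shapes are roughly:

```typescript
// Illustrative shapes inferred from how the examples use detections;
// see useOCR for the authoritative type definitions.
interface Point {
  x: number;
  y: number;
}

interface OCRDetection {
  text: string;  // recognized text for this block
  bbox: Point[]; // corner points of the detected region
  score: number; // confidence in [0, 1]
}

// Hypothetical helper in the spirit of the later examples:
// the mean x-coordinate of a detection, used to order columns.
function averageX(det: OCRDetection): number {
  return det.bbox.reduce((sum, p) => sum + p.x, 0) / det.bbox.length;
}
```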
Usage Examples
Chinese Vertical Text Recognition
import { useVerticalOCR } from 'react-native-executorch';
import { useState } from 'react';
import {
  View,
  Text,
  Button,
  Image,
  ScrollView,
  ActivityIndicator,
} from 'react-native';
import { launchImageLibrary } from 'react-native-image-picker';

function ChineseVerticalTextRecognizer() {
  const [imageUri, setImageUri] = useState<string | null>(null);
  const [detections, setDetections] = useState<any[]>([]);

  const verticalOcr = useVerticalOCR({
    model: {
      detectorSource: 'https://huggingface.co/.../vertical-detector.pte',
      recognizerSource: 'https://huggingface.co/.../vertical-recognizer.pte',
      language: 'ch',
    },
    independentCharacters: false,
  });

  const recognizeText = async (uri: string) => {
    if (!verticalOcr.isReady) return;
    try {
      const results = await verticalOcr.forward(uri);
      setDetections(results);
      console.log('Detected vertical text blocks:', results.length);
      results.forEach((det, idx) => {
        console.log(`[${idx}] "${det.text}" (${(det.score * 100).toFixed(1)}%)`);
      });
    } catch (error) {
      console.error('Vertical OCR failed:', error);
    }
  };

  const pickAndRecognize = async () => {
    const result = await launchImageLibrary({ mediaType: 'photo' });
    if (result.assets?.[0]?.uri) {
      const uri = result.assets[0].uri;
      setImageUri(uri);
      await recognizeText(uri);
    }
  };

  return (
    <View>
      <Text>Status: {verticalOcr.isReady ? 'Ready' : 'Loading...'}</Text>
      <Text>Progress: {(verticalOcr.downloadProgress * 100).toFixed(0)}%</Text>
      <Button
        title="Pick Image & Recognize"
        onPress={pickAndRecognize}
        disabled={!verticalOcr.isReady}
      />
      {imageUri && (
        <Image source={{ uri: imageUri }} style={{ width: 400, height: 400 }} />
      )}
      {verticalOcr.isGenerating && <ActivityIndicator />}
      <ScrollView>
        {detections.map((det, idx) => (
          <View key={idx} style={{ padding: 10, borderBottomWidth: 1 }}>
            <Text style={{ fontWeight: 'bold' }}>{det.text}</Text>
            <Text style={{ color: 'gray' }}>
              Confidence: {(det.score * 100).toFixed(1)}%
            </Text>
          </View>
        ))}
      </ScrollView>
    </View>
  );
}
Independent Character Recognition
import { useVerticalOCR } from 'react-native-executorch';
import { useState } from 'react';
import { View, Text } from 'react-native';

function IndependentCharRecognizer() {
  const [characters, setCharacters] = useState<string[]>([]);

  const verticalOcr = useVerticalOCR({
    model: {
      detectorSource: require('./models/vertical-detector.pte'),
      recognizerSource: require('./models/vertical-recognizer.pte'),
      language: 'ch',
    },
    independentCharacters: true, // Treat each character separately
  });

  const recognizeCharacters = async (imageUri: string) => {
    if (!verticalOcr.isReady) return;
    try {
      const results = await verticalOcr.forward(imageUri);
      // Extract individual characters above a confidence threshold
      const chars = results
        .filter((det) => det.score > 0.7)
        .map((det) => det.text);
      setCharacters(chars);
      console.log('Recognized characters:', chars.join(''));
    } catch (error) {
      console.error('Character recognition failed:', error);
    }
  };

  return (
    <View>
      <Text>Recognized Characters:</Text>
      <View style={{ flexDirection: 'column', alignItems: 'center' }}>
        {characters.map((char, idx) => (
          <Text key={idx} style={{ fontSize: 24, padding: 5 }}>
            {char}
          </Text>
        ))}
      </View>
    </View>
  );
}
Drawing Vertical Text Bounding Boxes
import React, { useState } from 'react';
import { View, Image } from 'react-native';
import { useVerticalOCR } from 'react-native-executorch';
import Svg, { Polygon, Text as SvgText } from 'react-native-svg';

function VerticalTextVisualizer() {
  const [imageUri, setImageUri] = useState<string | null>(null);
  const [imageDimensions, setImageDimensions] = useState({ width: 0, height: 0 });
  const [detections, setDetections] = useState<any[]>([]);

  const verticalOcr = useVerticalOCR({
    model: {
      detectorSource: 'https://example.com/vertical-detector.pte',
      recognizerSource: 'https://example.com/vertical-recognizer.pte',
      language: 'ch',
    },
    independentCharacters: false,
  });

  const processImage = async (uri: string) => {
    // Get the original image dimensions for scaling the overlay
    Image.getSize(uri, (width, height) => {
      setImageDimensions({ width, height });
    });
    if (!verticalOcr.isReady) return;
    try {
      const results = await verticalOcr.forward(uri);
      setDetections(results);
    } catch (error) {
      console.error('Vertical OCR failed:', error);
    }
  };

  return (
    <View>
      {imageUri && (
        <View>
          <Image
            source={{ uri: imageUri }}
            style={{ width: 400, height: 400 }}
          />
          <Svg
            style={{ position: 'absolute', top: 0, left: 0 }}
            width={400}
            height={400}
          >
            {detections.map((det, idx) => {
              const scaleX = 400 / imageDimensions.width;
              const scaleY = 400 / imageDimensions.height;
              const points = det.bbox
                .map((p) => `${p.x * scaleX},${p.y * scaleY}`)
                .join(' ');
              return (
                <React.Fragment key={idx}>
                  <Polygon
                    points={points}
                    stroke="blue"
                    strokeWidth="2"
                    fill="none"
                  />
                  {/* Rotated text label for vertical orientation */}
                  <SvgText
                    x={det.bbox[0].x * scaleX}
                    y={det.bbox[0].y * scaleY - 10}
                    fill="blue"
                    fontSize="10"
                    fontWeight="bold"
                    transform={`rotate(-90 ${det.bbox[0].x * scaleX} ${det.bbox[0].y * scaleY})`}
                  >
                    {det.text}
                  </SvgText>
                </React.Fragment>
              );
            })}
          </Svg>
        </View>
      )}
    </View>
  );
}
Extracting Vertical Text in Reading Order
import { useVerticalOCR } from 'react-native-executorch';
import { useState } from 'react';
import { View, Text } from 'react-native';

function VerticalTextExtractor() {
  const [extractedText, setExtractedText] = useState('');

  const verticalOcr = useVerticalOCR({
    model: {
      detectorSource: require('./models/vertical-detector.pte'),
      recognizerSource: require('./models/vertical-recognizer.pte'),
      language: 'ch',
    },
  });

  const extractVerticalText = async (imageUri: string) => {
    if (!verticalOcr.isReady) return;
    try {
      const detections = await verticalOcr.forward(imageUri);
      // Sort by horizontal position (right to left for traditional Chinese)
      const sorted = detections.sort((a, b) => {
        const avgXA = a.bbox.reduce((sum, p) => sum + p.x, 0) / a.bbox.length;
        const avgXB = b.bbox.reduce((sum, p) => sum + p.x, 0) / b.bbox.length;
        return avgXB - avgXA; // Right to left
      });
      // Concatenate all text columns
      const fullText = sorted.map((det) => det.text).join(' ');
      setExtractedText(fullText);
    } catch (error) {
      console.error('Text extraction failed:', error);
    }
  };

  return (
    <View>
      <Text style={{ fontWeight: 'bold' }}>Extracted Vertical Text:</Text>
      <Text style={{ writingDirection: 'rtl' }}>{extractedText}</Text>
    </View>
  );
}
Reading Multi-Column Scrolls
import { useVerticalOCR } from 'react-native-executorch';
import { useState } from 'react';
import { View, Text, ScrollView } from 'react-native';

function TraditionalScrollReader() {
  const [columns, setColumns] = useState<string[]>([]);

  const verticalOcr = useVerticalOCR({
    model: {
      detectorSource: 'https://example.com/vertical-detector.pte',
      recognizerSource: 'https://example.com/vertical-recognizer.pte',
      language: 'ch',
    },
  });

  const readScroll = async (imageUri: string) => {
    if (!verticalOcr.isReady) return;
    try {
      const results = await verticalOcr.forward(imageUri);
      // Group detections into columns based on x-position
      const grouped = new Map<number, string[]>();
      results.forEach((det) => {
        const avgX = det.bbox.reduce((sum, p) => sum + p.x, 0) / det.bbox.length;
        const columnKey = Math.round(avgX / 50) * 50; // Group by 50px intervals
        if (!grouped.has(columnKey)) {
          grouped.set(columnKey, []);
        }
        grouped.get(columnKey)!.push(det.text);
      });
      // Sort columns right to left
      const sortedColumns = Array.from(grouped.entries())
        .sort(([a], [b]) => b - a)
        .map(([, texts]) => texts.join(''));
      setColumns(sortedColumns);
    } catch (error) {
      console.error('Scroll reading failed:', error);
    }
  };

  return (
    <ScrollView horizontal>
      {columns.map((column, idx) => (
        <View key={idx} style={{ padding: 10 }}>
          <Text style={{ writingDirection: 'rtl' }}>{column}</Text>
        </View>
      ))}
    </ScrollView>
  );
}
Differences from useOCR
The useVerticalOCR hook differs from the standard useOCR hook in several ways:
Text Orientation: Optimized for vertical (top-to-bottom) text
Independent Characters: Option to treat each character independently
Column Detection: Better handling of multi-column vertical layouts
Language Support: Particularly suited for East Asian languages
Notes
Both detector and recognizer models automatically load when the hook mounts unless preventLoad is set to true.
Set independentCharacters to true for languages where characters don’t form continuous words, or when recognizing isolated characters.
For traditional Chinese text, consider sorting detected text columns from right to left for proper reading order.
Best Practices
Image Orientation: Ensure vertical text is properly oriented in the image
Column Spacing: Clear spacing between columns improves detection
Character Size: Larger characters generally produce better results
Language Selection: Choose the correct language for the symbol set
Independent Mode: Use for seal scripts, calligraphy, or isolated characters
See Also
useOCR: The standard horizontal OCR hook, which shares type definitions (OCRDetection, Point) with useVerticalOCR