This page details the performance characteristics of react-native-nitro-version-check, including real-world benchmarks, memory footprint, and guidance on choosing between synchronous and asynchronous methods.

Benchmarking methodology

The library includes a built-in benchmark suite (BenchmarkScreen.tsx:1-352) that compares Nitro Modules against the traditional bridge-based react-native-version-check.

Test configuration

const ITERATIONS = 100_000;
const RUNS = 5;
Each test:
  1. Performs 1,000 warmup iterations to stabilize JIT compilation
  2. Runs 100,000 iterations per test
  3. Repeats 5 times and averages the results
  4. Uses performance.now() for high-resolution timing
Benchmarks are included in the example app. Run npm run example ios or npm run example android to test on your device.
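The methodology above can be sketched as a standalone harness. This is an illustrative helper, not the actual BenchmarkScreen.tsx code; the `benchmark` function and constant names are assumptions for the sketch:

```typescript
// Hypothetical sketch of the benchmark methodology -- not the
// library's actual BenchmarkScreen.tsx implementation.
const ITERATIONS = 100_000;
const RUNS = 5;
const WARMUP = 1_000;

// Times `fn` after a warmup pass, then averages RUNS runs of
// ITERATIONS calls using performance.now() for high-resolution timing.
function benchmark(fn: () => unknown): number {
  for (let i = 0; i < WARMUP; i++) fn(); // stabilize JIT compilation

  let totalMs = 0;
  for (let run = 0; run < RUNS; run++) {
    const start = performance.now();
    for (let i = 0; i < ITERATIONS; i++) fn();
    totalMs += performance.now() - start;
  }
  return totalMs / RUNS; // average ms per run of ITERATIONS calls
}
```

For example, benchmark(() => VersionCheck.version) would report the average time for 100,000 cached property reads.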

Real-world performance

Scenario: Get all version info

A realistic use case is reading all app metadata at startup:
// Nitro approach (4 operations)
const pkg = VersionCheck.packageName;     // sync property
const ver = VersionCheck.version;         // sync property
const build = VersionCheck.buildNumber;   // sync property
const country = getCountry();             // sync JSI call
// Bridge approach (4 operations)
const pkg = await bridge.getPackageName();        // async
const ver = await bridge.getCurrentVersion();     // async
const build = await bridge.getCurrentBuildNumber(); // async
const country = await bridge.getCountry();        // async
Benchmark results (BenchmarkScreen.tsx:72-101):
| Platform | Nitro | Bridge   | Speedup     |
|----------|-------|----------|-------------|
| iOS      | ~12ms | ~2,800ms | 233x faster |
| Android  | ~15ms | ~3,200ms | 213x faster |
These are averaged results from 5 runs of 100,000 iterations on physical devices. Your results may vary based on device performance.

Why the massive speedup?

Nitro approach:
  • 3 property reads: JavaScript variable access (already cached at init)
  • 1 JSI call: Direct C++ → Swift/Kotlin (no serialization)
Bridge approach:
  • 4 async calls: JavaScript → JSON → Native Queue → Response Queue → JSON → JavaScript
  • Each call requires Promise creation, queueing, and microtask scheduling

Individual method breakdown

Property reads

VersionCheck.packageName  // Nitro: property
bridge.getPackageName()   // Bridge: function call
Results (100,000 iterations, averaged over 5 runs):
| Method      | Nitro  | Bridge | Speedup |
|-------------|--------|--------|---------|
| packageName | ~0.8ms | ~650ms | 812x    |
| version     | ~0.7ms | ~640ms | 914x    |
| buildNumber | ~0.8ms | ~655ms | 819x    |
Nitro properties are cached at module init. After the first read, they’re plain JavaScript variables with zero native overhead.

Synchronous JSI call

getCountry()  // Nitro: sync JSI
await bridge.getCountry()  // Bridge: async
Results:
| Platform | Nitro  | Bridge | Speedup |
|----------|--------|--------|---------|
| iOS      | ~1.2ms | ~850ms | 708x    |
| Android  | ~1.5ms | ~920ms | 613x    |
Even without caching, synchronous JSI calls are 600-700x faster than bridge calls.

Asynchronous methods

Network operations like getLatestVersion() are async in both implementations:
// Both are async
await VersionCheck.getLatestVersion();
await bridge.getLatestVersion();
Network-bound operations show minimal difference because the I/O overhead dominates. However, Nitro still avoids:
  • Bridge serialization overhead
  • JavaScript → Native queue overhead
  • Promise wrapping overhead
Expect ~10-20ms improvement on network calls.

Zero-overhead cached properties

How caching works

At module initialization (index.ts:8-12):
const HybridVersionCheck = NitroModules.createHybridObject<VersionCheckType>("VersionCheck");

// Read once from native, cache in JavaScript memory
const version = HybridVersionCheck.version;
const buildNumber = HybridVersionCheck.buildNumber;
const packageName = HybridVersionCheck.packageName;
const installSource = HybridVersionCheck.installSource;
These values are:
  1. Read once via JSI when the module loads
  2. Stored as plain JavaScript strings
  3. Exported directly from the module

Subsequent reads

Every time you access VersionCheck.version, you’re reading a JavaScript string — no native call happens.
// First access (module init)
const v1 = VersionCheck.version;  // JSI call

// All subsequent accesses (anywhere in your app)
const v2 = VersionCheck.version;  // Plain JS variable read
const v3 = VersionCheck.version;  // Plain JS variable read
const v4 = VersionCheck.version;  // Plain JS variable read
Cost: ~0.000001ms (nanoseconds)

Memory overhead

Each property is a JavaScript string. Typical sizes:
| Property      | Example           | Size      |
|---------------|-------------------|-----------|
| version       | "1.2.3"           | ~10 bytes |
| buildNumber   | "42"              | ~6 bytes  |
| packageName   | "com.example.app" | ~20 bytes |
| installSource | "appstore"        | ~12 bytes |
Total: ~50 bytes of JavaScript heap memory.
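As a rough sanity check on those numbers, the cached strings can be sized by assuming roughly 2 bytes per UTF-16 code unit (an approximation that ignores per-string object headers; actual V8 layout varies):

```typescript
// Rough heap estimate: ~2 bytes per UTF-16 code unit, ignoring
// per-string object headers (an assumption, not a V8 guarantee).
const estimateStringBytes = (s: string): number => s.length * 2;

const cached = ["1.2.3", "42", "com.example.app", "appstore"];
const totalBytes = cached.reduce((sum, s) => sum + estimateStringBytes(s), 0);
// totalBytes is 60 -- on the same order as the ~50 bytes quoted above
```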

Comparison to bridge-based approach

Traditional bridge (TurboModules)

┌─────────────┐
│ JavaScript  │
└──────┬──────┘
       │ 1. Serialize call to JSON
       │ 2. Queue on bridge

┌─────────────┐
│   Bridge    │ ← Bottleneck: single-threaded, serialized queue
└──────┬──────┘
       │ 3. Deserialize JSON
       │ 4. Call native method

┌─────────────┐
│   Native    │
└──────┬──────┘
       │ 5. Serialize result to JSON
       │ 6. Queue response

┌─────────────┐
│   Bridge    │ ← Another round trip
└──────┬──────┘
       │ 7. Deserialize JSON

┌─────────────┐
│ JavaScript  │ ← Finally receive result
└─────────────┘
Cost per call: ~0.8-1.2ms

Nitro Modules (JSI)

┌─────────────┐
│ JavaScript  │
└──────┬──────┘
       │ Direct memory access (zero-copy)

┌─────────────┐
│     C++     │ ← JSI: shared memory space
└──────┬──────┘
       │ Native call

┌─────────────┐
│   Native    │
└──────┬──────┘
       │ Return value (no serialization)

┌─────────────┐
│ JavaScript  │ ← Result available immediately
└─────────────┘
Cost per call: ~0.001-0.002ms (for uncached calls)
Cost for cached properties: ~0.000001ms (plain JS variable access)
No serialization
  • Bridge: JavaScript objects → JSON → Native objects
  • JSI: Shared memory pointers (zero-copy)
No queueing
  • Bridge: Calls go through a single-threaded message queue
  • JSI: Direct function calls
Synchronous execution
  • Bridge: Always async (Promise overhead)
  • JSI: Can be synchronous for simple operations
Type safety
  • Bridge: Runtime type checks and conversions
  • JSI: Compile-time type safety (C++ templates)
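The API-shape difference above can be illustrated with two interface declarations. These are illustrative only, not the library's actual type definitions:

```typescript
// Illustrative only -- not the library's actual type definitions.
// Bridge modules must expose everything as Promises:
interface BridgeVersionCheck {
  getPackageName(): Promise<string>; // always async, queued on the bridge
  getCountry(): Promise<string>;
}

// A JSI hybrid object can expose plain synchronous members:
interface NitroVersionCheck {
  readonly packageName: string; // sync property read
  getCountry(): string;         // sync JSI call
}

// A mock implementation showing how the sync shape is consumed:
const mock: NitroVersionCheck = {
  packageName: "com.example.app",
  getCountry: () => "US",
};
const country = mock.getCountry(); // no await, no Promise allocation
```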

Memory footprint

Module initialization

| Component                            | Size       |
|--------------------------------------|------------|
| C++ Hybrid Object                    | ~200 bytes |
| Cached JS strings (4)                | ~50 bytes  |
| Native class instance (Swift/Kotlin) | ~100 bytes |
| URLSession/HttpURLConnection         | ~500 bytes |
Total: ~850 bytes (< 1 KB)

Runtime allocations

Property reads: zero allocations (cached values)
getCountry():
  • Returns the existing Locale value
  • Allocation: ~10 bytes (string)
getLatestVersion():
  • Network request + JSON parsing
  • Allocation: ~2-5 KB (response data, temporary objects)
  • Freed after Promise resolves

When to use sync vs async methods

Use synchronous access for:

  1. Static app metadata (always use properties):
    VersionCheck.version
    VersionCheck.buildNumber
    VersionCheck.packageName
    VersionCheck.installSource
    
  2. Device locale (fast, no I/O):
    getCountry()  // or VersionCheck.getCountry()
    

Use asynchronous access for:

  1. Network requests (always async):
    await VersionCheck.getLatestVersion()
    await VersionCheck.getStoreUrl()
    await VersionCheck.needsUpdate()
    
  2. Operations that might block (none in this library, but good practice):
    • File I/O
    • Database queries
    • Complex computations
Rule of thumb: If it can be cached or computed in < 0.1ms, make it synchronous. If it requires I/O or takes > 1ms, make it async.
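As an example of something that belongs in the synchronous bucket, comparing two dotted version strings is pure computation and completes well under the 0.1ms threshold. This is a generic sketch; the library's own needsUpdate() may use different comparison semantics:

```typescript
// Generic dotted-version comparison: pure, allocation-light computation,
// well under the 0.1ms rule-of-thumb threshold -- a natural sync API.
// (Sketch only; the library's needsUpdate() may differ.)
function compareVersions(a: string, b: string): -1 | 0 | 1 {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    // Missing segments count as 0, so "2.0" === "2.0.0"
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (diff !== 0) return diff < 0 ? -1 : 1;
  }
  return 0;
}

compareVersions("1.2.3", "1.10.0"); // -1: 1.10.0 is newer (numeric, not lexicographic)
```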

Performance best practices

1. Cache properties at app startup

// Good: Read once, use everywhere
const appVersion = VersionCheck.version;

// Also good: Already cached internally
function MyComponent() {
  return <Text>{VersionCheck.version}</Text>;
}

2. Avoid unnecessary async calls

// Bad: every mounted component triggers its own network request
function MyComponent() {
  const [latest, setLatest] = useState("");
  useEffect(() => {
    VersionCheck.getLatestVersion().then(setLatest);
  }, []);
  return <Text>{latest}</Text>;
}

// Good: Fetch once, cache in state or context
function App() {
  const [latest, setLatest] = useState("");
  useEffect(() => {
    VersionCheck.getLatestVersion().then(setLatest);
  }, []);
  return <VersionContext.Provider value={latest}>...</VersionContext.Provider>;
}

3. Use needsUpdate() efficiently

// Good: Check once at startup
useEffect(() => {
  checkForUpdate();
}, []);

// Bad: Checking on every navigation
useEffect(() => {
  checkForUpdate();
}, [route]);

4. Batch operations

// Good: sync reads complete instantly (cached properties + JSI)
const info = {
  version: VersionCheck.version,
  build: VersionCheck.buildNumber,
  country: getCountry(),
};

// Good: run independent async calls in parallel
const [storeUrl, latest] = await Promise.all([
  VersionCheck.getStoreUrl(),
  VersionCheck.getLatestVersion(),
]);

Key takeaways

600-900x faster

Property reads via JSI are orders of magnitude faster than bridge calls

< 1 KB footprint

Entire module uses less than 1 KB of memory

Zero overhead

Cached properties have no native call cost after initialization

Sync > Async

Use synchronous APIs when possible to avoid Promise overhead
