Uniku is designed for performance-sensitive contexts like ORMs and high-throughput services. This guide covers benchmarks, bundle size, and optimization strategies.

Benchmark Results

Comparison of uniku string ID generation against equivalent npm packages:
| Generator | uniku vs npm | Notes |
| --- | --- | --- |
| ULID | 85× faster | Optimized Crockford Base32 encoding |
| CUID2 | 8× faster | Efficient BigInt operations and pooling |
| KSUID | 1.5× faster | Streamlined Base62 encoding |
| UUID v7 | 1.1× faster | Inlined hot path, reusable buffers |
| Nanoid | ~comparable speed | Both use similar pooling strategies |
| UUID v4 | npm is 1.1× faster | Native crypto.randomUUID() is hard to beat |
Benchmarks run on Node.js 22 with tinybench. Results may vary by runtime and hardware. Source code available in the uniku repository.

Why Is ULID 85× Faster?

The npm ulid package has performance bottlenecks in Base32 encoding:
// ❌ Slow: String concatenation in loop
let str = ''
for (let i = 0; i < 26; i++) {
  str += ALPHABET[value & 0x1f]
  value = value >> 5
}

// ✅ Fast: Pre-allocated array with direct indexing
const chars = new Array(26)
for (let i = 0; i < 26; i++) {
  chars[i] = ALPHABET[value & 0x1f]
  value = value >> 5
}
return chars.join('')
Uniku uses optimized algorithms throughout:
  • Pre-allocated arrays instead of string concatenation
  • Bit manipulation instead of division
  • Inline fast paths for common cases
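As a rough illustration (a sketch, not uniku's actual source), here are the two encoding approaches as runnable functions. The BigInt value stands in for ULID's 128-bit timestamp-plus-randomness payload, since a plain number cannot hold 130 bits:

```javascript
// Crockford Base32 alphabet (no I, L, O, U)
const ALPHABET = '0123456789ABCDEFGHJKMNPQRSTVWXYZ'

// String concatenation: each step may allocate a new intermediate string
function encodeConcat(value) {
  let str = ''
  for (let i = 0; i < 26; i++) {
    str = ALPHABET[Number(value & 0x1fn)] + str // prepend lowest 5 bits
    value >>= 5n
  }
  return str
}

// Pre-allocated array: one allocation up front, one join at the end
function encodeArray(value) {
  const chars = new Array(26)
  for (let i = 25; i >= 0; i--) {
    chars[i] = ALPHABET[Number(value & 0x1fn)] // lowest 5 bits go rightmost
    value >>= 5n
  }
  return chars.join('')
}
```

Both functions produce the same 26-character string; the array version simply avoids the repeated intermediate-string allocations that make the concatenation loop slow.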

Why Is CUID2 8× Faster?

The @paralleldrive/cuid2 package creates new random bytes for every operation. Uniku pools random bytes:
// ❌ Slow: Generate fresh random bytes every call
function cuid2() {
  const random = new Uint8Array(32)
  crypto.getRandomValues(random)
  // ... use random bytes
}

// ✅ Fast: Pool random bytes, refill when depleted
const pool = new Uint8Array(4096)
let poolOffset = pool.length // start "empty" so the first call fills the pool

function cuid2() {
  if (poolOffset + 32 > pool.length) {
    crypto.getRandomValues(pool)
    poolOffset = 0
  }
  const random = pool.subarray(poolOffset, poolOffset + 32)
  poolOffset += 32
  // ... use random bytes
}

Bundle Size Comparison

Only import what you use — each entry point is independently tree-shakeable:
| Import | Minified + gzipped | Use Case |
| --- | --- | --- |
| uniku/uuid/v4 | ~940 B | Random UUIDs |
| uniku/uuid/v7 | ~1.1 KB | Time-ordered UUIDs |
| uniku/ulid | ~1.5 KB | Compact time-ordered IDs |
| uniku/cuid2 | ~1.1 KB* | Secure non-sequential IDs |
| uniku/nanoid | ~938 B | URL-friendly short IDs |
| uniku/ksuid | ~1.0 KB | K-sortable IDs |
* CUID2 includes SHA3-512 hashing via @noble/hashes. If you already use @noble/hashes in your project, the incremental cost is minimal.

Tree-Shaking

Uniku uses separate entry points instead of barrel exports:
// ✅ Good: Only bundles UUID v7 code (~1.1 KB)
import { uuidv7 } from 'uniku/uuid/v7'

// ❌ Bad: Would bundle everything if we had a barrel export
// import { uuidv7 } from 'uniku' // Not available!
Bundle Impact Example:
// Scenario 1: Only use UUID v7
import { uuidv7 } from 'uniku/uuid/v7'
const id = uuidv7()
// Bundle size: ~1.1 KB

// Scenario 2: Use UUID v7 and Nanoid
import { uuidv7 } from 'uniku/uuid/v7'
import { nanoid } from 'uniku/nanoid'
const id1 = uuidv7()
const id2 = nanoid()
// Bundle size: ~2.0 KB (1.1 KB + 0.9 KB)

Comparison with Other Libraries

| Package | Size (minified + gzipped) | Tree-shakeable |
| --- | --- | --- |
| uniku/uuid/v4 | 940 B | ✅ |
| uuid@v13 | ~2.5 KB | |
| uniku/nanoid | 938 B | ✅ |
| nanoid | ~940 B | |
| uniku/ulid | 1.5 KB | ✅ |
| ulid | ~2.1 KB | |
| uniku/cuid2 | 1.1 KB | ✅ |
| @paralleldrive/cuid2 | ~3.8 KB | |
| uniku/ksuid | 1.0 KB | ✅ |
| @owpz/ksuid | ~4.2 KB | |
For the smallest possible bundle, use uniku/nanoid or uniku/uuid/v4 — both under 1 KB gzipped.

Performance Best Practices

1. Use the Fast Path

Don’t pass options unless you need to:
import { uuidv7 } from 'uniku/uuid/v7'

// ✅ Fast path: No options
const id = uuidv7()

// ❌ Slower: Options bypass optimized hot path
const id2 = uuidv7({ msecs: Date.now() })
The fast path (no options) uses:
  • Inlined state management
  • Reusable buffers
  • Minimal branching

2. Batch Generation with Buffers

When generating many IDs, write directly to a buffer:
import { uuidv7 } from 'uniku/uuid/v7'

// Generate 100 UUIDs into a single buffer
const buffer = new Uint8Array(100 * 16)
for (let i = 0; i < 100; i++) {
  uuidv7(undefined, buffer, i * 16)
}

// Convert to strings later if needed
for (let i = 0; i < 100; i++) {
  const bytes = buffer.subarray(i * 16, (i + 1) * 16)
  const str = uuidv7.fromBytes(bytes)
}
Writing to buffers avoids a per-ID string allocation, which pays off when you serialize the IDs anyway (e.g., storing in a database or sending over the network).

3. Reuse Buffers

For repeated operations, reuse the same buffer:
import { ulid } from 'uniku/ulid'

// Reuse buffer for multiple operations
const buffer = new Uint8Array(16)

async function generateAndStore() {
  // Generate ULID bytes
  ulid(undefined, buffer, 0)
  
  // Write directly to database/network
  await db.insert({ id: buffer })
  
  // No string allocation needed!
}

4. Choose the Right Format

Different formats have different performance characteristics:
// Fastest string generation
import { nanoid } from 'uniku/nanoid'
const id = nanoid() // Simple Base64 encoding

// Fast with binary support
import { uuidv7 } from 'uniku/uuid/v7'
const id2 = uuidv7() // Fast hot path + byte conversion

// Slower but secure
import { cuid2 } from 'uniku/cuid2'
const id3 = cuid2() // SHA3-512 hashing overhead
Performance ranking (fastest to slowest):
  1. UUID v4 (when using default crypto.randomUUID())
  2. Nanoid (simple pooling)
  3. UUID v7 (inlined hot path)
  4. ULID (Base32 encoding)
  5. KSUID (Base62 encoding)
  6. CUID2 (hashing overhead)

5. Avoid Custom Options in Hot Paths

Custom options disable optimizations:
import { nanoid } from 'uniku/nanoid'

// ✅ Fast: Uses pre-allocated pool
for (let i = 0; i < 1000; i++) {
  const id = nanoid()
}

// ❌ Slower: Custom alphabet disables power-of-2 optimization
for (let i = 0; i < 1000; i++) {
  const id = nanoid({ alphabet: '0123456789' })
}

When Performance Matters

High-Throughput Services

If you’re generating thousands of IDs per second:
import { ulid } from 'uniku/ulid'

// Example: API server creating many records
app.post('/batch', async (req, res) => {
  const items = req.body.items.map(item => ({
    id: ulid(), // Fast generation
    ...item
  }))
  
  await db.insert(items)
  res.json({ created: items.length })
})
Recommendation: Use ULID or UUID v7 for optimal balance of speed and functionality.

ORMs and Database Layers

Many ORMs generate IDs on every insert:
import { uuidv7 } from 'uniku/uuid/v7'

class User {
  id = uuidv7() // Called on every new User()
  email: string
  createdAt = new Date()
}

// Creating 1000 users
const users = Array.from({ length: 1000 }, () => new User())
Uniku’s performance optimizations shine in these scenarios.

Serverless and Edge Functions

Cold starts matter. Smaller bundle = faster cold start:
// Cloudflare Worker
import { nanoid } from 'uniku/nanoid' // Only 938 B

export default {
  async fetch(request) {
    const id = nanoid()
    return new Response(JSON.stringify({ id }))
  }
}

Runtime Compatibility

Uniku works everywhere by relying only on the Web Crypto API:
// Every runtime below exposes the Web Crypto API as globalThis.crypto

// Node.js (v16+)
import { uuidv7 } from 'uniku/uuid/v7'
const id = uuidv7()

// Deno
import { uuidv7 } from 'npm:uniku/uuid/v7'
const id = uuidv7()

// Bun
import { uuidv7 } from 'uniku/uuid/v7'
const id = uuidv7()

// Cloudflare Workers
import { uuidv7 } from 'uniku/uuid/v7'
const id = uuidv7()

// Browsers
import { uuidv7 } from 'uniku/uuid/v7'
const id = uuidv7()
Because uniku avoids Node.js-specific APIs entirely, the same code runs unchanged across all of these runtimes.

Profiling Tips

Measure performance in your own application:
import { uuidv7 } from 'uniku/uuid/v7'

// Simple benchmark
const iterations = 100_000
const start = performance.now()

for (let i = 0; i < iterations; i++) {
  uuidv7()
}

const end = performance.now()
const opsPerSec = (iterations / (end - start)) * 1000

console.log(`${Math.round(opsPerSec).toLocaleString()} ops/sec`)
// Example output: 2,500,000 ops/sec
For accurate benchmarks, use tinybench or benchmark.js to account for warm-up, garbage collection, and statistical variance.
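If you want a slightly more careful hand-rolled measurement before reaching for a benchmarking library, run a warm-up pass first so the JIT has optimized the function, then take several timed samples and report the median (the function name and defaults here are illustrative):

```javascript
function benchmark(fn, { iterations = 100_000, samples = 5, warmup = 10_000 } = {}) {
  for (let i = 0; i < warmup; i++) fn() // let the JIT warm up first

  const results = []
  for (let s = 0; s < samples; s++) {
    const start = performance.now()
    for (let i = 0; i < iterations; i++) fn()
    results.push((iterations / (performance.now() - start)) * 1000)
  }

  results.sort((a, b) => a - b)
  return results[Math.floor(samples / 2)] // median ops/sec
}

// Usage: benchmark(() => crypto.randomUUID())
```

The median is less sensitive than the mean to one-off pauses such as garbage collection, which is the main reason single-run timings mislead.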

Summary

  • ✅ Use the default (no options) for fastest generation
  • ✅ Choose the right format for your use case
  • ✅ Write to buffers for batch operations
  • ✅ Reuse buffers when possible
  • ✅ Import only what you need (tree-shaking)
  • ❌ Don’t pass custom options in hot paths
  • ❌ Don’t use string concatenation for batch operations
Bottom line: Uniku is fast enough for any use case. For most applications, the performance difference between ID formats is negligible compared to network and database latency.
