Learn how to deploy Orama in different environments, from client-side browsers to server-side Node.js and edge runtimes.

Overview

Orama is designed to run anywhere JavaScript runs:
  • Browser: Client-side search with instant results
  • Node.js: Server-side indexing and search
  • Edge: Deploy to Cloudflare Workers, Vercel Edge, etc.
  • Deno: Native Deno support with npm specifiers

Browser

Perfect for documentation sites, catalogs, and client-side filtering

Node.js

Ideal for APIs, large-scale indexing, and backend services

Edge

Ultra-low-latency search at the network edge

Deno

Modern TypeScript runtime with top-level await

Browser Deployment

CDN Import (Quickest Start)

<!DOCTYPE html>
<html>
  <body>
    <input type="text" id="search" placeholder="Search..." />
    <div id="results"></div>

    <script type="module">
      import { create, insert, search } from 'https://cdn.jsdelivr.net/npm/@orama/orama@latest/+esm'

      const db = await create({
        schema: {
          title: 'string',
          description: 'string'
        }
      })

      await insert(db, { title: 'Product 1', description: 'Description 1' })
      await insert(db, { title: 'Product 2', description: 'Description 2' })

      document.getElementById('search').addEventListener('input', async (e) => {
        const results = await search(db, { term: e.target.value })
        document.getElementById('results').innerHTML = results.hits
          .map(hit => `<div>${hit.document.title}</div>`)
          .join('')
      })
    </script>
  </body>
</html>

NPM with Build Tools

1. Install Orama

npm install @orama/orama
2. Create your search instance

// search.js
import { create, insertMultiple, search } from '@orama/orama'

export async function createSearchIndex() {
  const db = await create({
    schema: {
      id: 'string',
      title: 'string',
      content: 'string',
      category: 'enum'
    }
  })
  
  // Load your data
  const response = await fetch('/api/content')
  const documents = await response.json()
  await insertMultiple(db, documents, 1000)
  
  return db
}

export async function searchContent(db, query) {
  return search(db, {
    term: query,
    limit: 20,
    properties: ['title', 'content'],
    boost: {
      title: 2
    }
  })
}
3. Use in your application

// app.js
import { createSearchIndex, searchContent } from './search.js'

let db

async function init() {
  // Initialize on app load
  db = await createSearchIndex()
  console.log('Search index ready!')
}

async function handleSearch(query) {
  const results = await searchContent(db, query)
  renderResults(results.hits)
}

init()

Pre-built Index Strategy

For production, pre-build your index at build time:
// scripts/build-index.js
import { create, insertMultiple, save } from '@orama/orama'
import { writeFile } from 'fs/promises'

async function buildIndex() {
  const db = await create({
    schema: {
      id: 'string',
      title: 'string',
      content: 'string'
    }
  })
  
  // Load your documents
  const documents = await loadDocuments()
  await insertMultiple(db, documents, 1000)
  
  // Serialize and save
  const serialized = await save(db)
  await writeFile('public/search-index.json', JSON.stringify(serialized))
  
  console.log(`Built index with ${documents.length} documents`)
}

buildIndex()
// app.js - Load pre-built index
import { create, load } from '@orama/orama'

async function loadSearchIndex() {
  const db = await create({ schema: { /* must match build schema */ } })
  
  const response = await fetch('/search-index.json')
  const data = await response.json()
  
  load(db, data)
  return db
}
Why pre-build? Loading a serialized index is 10-100x faster than re-indexing documents in the browser. Essential for production deployments!

Browser Performance Considerations

Memory Limits: Browsers have stricter memory constraints than servers. For large datasets (>10,000 documents), consider:
  • Server-side search with API calls
  • Index chunking/lazy loading
  • Reduced vector dimensions
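
One way to implement chunking is to shard your documents and let the browser fetch, index, and search one shard at a time. The helper below is an illustrative sketch in plain JavaScript, not an Orama API:

```javascript
// Split a document list into fixed-size shards so the browser can
// load and index only the shard(s) it needs on demand.
function shardDocuments(documents, shardSize) {
  const shards = []
  for (let i = 0; i < documents.length; i += shardSize) {
    shards.push(documents.slice(i, i + shardSize))
  }
  return shards
}

// Example: 10 documents in shards of 4 produce shard sizes [4, 4, 2].
const docs = Array.from({ length: 10 }, (_, i) => ({ id: String(i) }))
const shards = shardDocuments(docs, 4)
console.log(shards.map(s => s.length)) // [ 4, 4, 2 ]
```

Each shard can then be serialized at build time and loaded on demand, so the initial page load only pays for the data the user actually searches.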

Node.js Deployment

Installation

npm install @orama/orama

Express API Example

import express from 'express'
import { create, insertMultiple, search, save, load } from '@orama/orama'
import { readFile, writeFile } from 'fs/promises'

const app = express()
let db

// Initialize search index
async function initializeSearch() {
  db = await create({
    schema: {
      id: 'string',
      title: 'string',
      content: 'string',
      category: 'string',
      tags: 'string[]'
    }
  })
  
  // Try loading cached index
  try {
    const cached = JSON.parse(await readFile('./search-index.json', 'utf-8'))
    load(db, cached)
    console.log('Loaded cached search index')
  } catch (e) {
    // Build new index
    console.log('Building search index...')
    const documents = await fetchDocuments()
    await insertMultiple(db, documents, 1000)
    
    // Cache for next startup
    const serialized = await save(db)
    await writeFile('./search-index.json', JSON.stringify(serialized))
    console.log('Search index built and cached')
  }
}

// Search endpoint
app.get('/api/search', async (req, res) => {
  const { q, limit = 20, offset = 0 } = req.query
  
  if (!q) {
    return res.status(400).json({ error: 'Query parameter required' })
  }
  
  try {
    const results = await search(db, {
      term: q,
      limit: parseInt(limit),
      offset: parseInt(offset),
      properties: ['title', 'content']
    })
    
    res.json({
      results: results.hits,
      count: results.count,
      elapsed: results.elapsed.formatted
    })
  } catch (error) {
    res.status(500).json({ error: error.message })
  }
})

// Start server
await initializeSearch()
app.listen(3000, () => {
  console.log('Search API running on http://localhost:3000')
})

Next.js API Route

// pages/api/search.js
import { create, insertMultiple, search } from '@orama/orama'

let db

async function getDB() {
  if (!db) {
    db = await create({
      schema: {
        title: 'string',
        content: 'string'
      }
    })
    
    // Load documents
    const documents = await loadDocuments()
    await insertMultiple(db, documents)
  }
  return db
}

export default async function handler(req, res) {
  if (req.method !== 'GET') {
    return res.status(405).json({ error: 'Method not allowed' })
  }
  
  const { q } = req.query
  const db = await getDB()
  const results = await search(db, { term: q, limit: 20 })
  
  res.status(200).json(results)
}
The database instance is cached in memory. For production with multiple instances, use a shared cache (Redis, Memcached) or rebuild on each instance.
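
A sketch of that per-instance caching pattern, with a TTL so long-lived instances eventually refresh. Here `loadIndex` is a stand-in for whatever fetches and deserializes your index (Redis, object storage, a rebuild), not an Orama API:

```javascript
// Memoize the loaded database per process, refreshing after a TTL,
// so each instance pays the load cost at most once per interval.
function createCachedLoader(loadIndex, ttlMs) {
  let cached = null
  let loadedAt = 0
  return async function getDB() {
    const now = Date.now()
    if (cached === null || now - loadedAt > ttlMs) {
      cached = await loadIndex()
      loadedAt = now
    }
    return cached
  }
}

// Usage: the loader only runs when the cache is cold or expired.
async function demo() {
  let loads = 0
  const getDB = createCachedLoader(async () => ({ load: ++loads }), 60_000)
  await getDB()
  await getDB()
  return loads
}
demo().then(loads => console.log(loads)) // 1: the second call hit the cache
```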

Edge Runtime Deployment

Cloudflare Workers

import { create, load, search } from '@orama/orama'

export default {
  async fetch(request, env) {
    const url = new URL(request.url)
    const query = url.searchParams.get('q')
    
    if (!query) {
      return new Response('Query parameter required', { status: 400 })
    }
    
    // Load index from KV storage (guard against a missing key)
    const db = await create({ schema: { title: 'string', content: 'string' } })
    const indexData = await env.SEARCH_INDEX.get('orama-db', 'json')
    if (!indexData) {
      return new Response('Search index not found', { status: 500 })
    }
    load(db, indexData)
    
    // Perform search
    const results = await search(db, {
      term: query,
      limit: 20
    })
    
    return new Response(JSON.stringify(results), {
      headers: { 'Content-Type': 'application/json' }
    })
  }
}
1. Build your index

Use a Node.js script to build and serialize your index.

2. Upload to KV

wrangler kv:key put --binding=SEARCH_INDEX "orama-db" --path ./index.json

3. Deploy worker

wrangler deploy

Vercel Edge Functions

// pages/api/search.js
import { create, load, search } from '@orama/orama'

export const config = {
  runtime: 'edge'
}

// Pre-serialized index (imported at build time)
import searchIndex from './search-index.json'

let db

async function getDB() {
  if (!db) {
    db = await create({
      schema: {
        id: 'string',
        title: 'string',
        content: 'string'
      }
    })
    load(db, searchIndex)
  }
  return db
}

export default async function handler(req) {
  const { searchParams } = new URL(req.url)
  const query = searchParams.get('q')
  
  if (!query) {
    return new Response(JSON.stringify({ error: 'Query parameter required' }), {
      status: 400,
      headers: { 'Content-Type': 'application/json' }
    })
  }
  
  const db = await getDB()
  const results = await search(db, {
    term: query,
    limit: 20
  })
  
  return new Response(JSON.stringify(results), {
    headers: { 'Content-Type': 'application/json' }
  })
}

Edge Optimization Tips

Keep indexes small

Edge runtimes have size limits (1-10MB). Optimize your index size.

Use build-time indexing

Always pre-build indexes, never index at request time.

Cache aggressively

Use edge KV stores to cache serialized indexes.

Monitor cold starts

Test cold start performance with your index size.
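
A quick way to act on the size-limit tip is to measure the serialized index before deploying. The helper below is illustrative; the stand-in object takes the place of real save() output:

```javascript
// Measure how large a serialized index is before shipping it to an
// edge runtime with a hard bundle or KV size limit.
function serializedSizeMB(serialized) {
  const bytes = Buffer.byteLength(JSON.stringify(serialized), 'utf-8')
  return bytes / (1024 * 1024)
}

// Stand-in for the output of save(db); a real index would be larger.
const serialized = { docs: Array(1000).fill({ title: 'example document' }) }
const sizeMB = serializedSizeMB(serialized)
if (sizeMB > 5) {
  console.warn(`Index is ${sizeMB.toFixed(2)} MB, consider trimming stored fields`)
}
```

Running this in your build script lets you fail the build before an oversized index ever reaches the edge.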

Deno Deployment

Using npm specifiers

import { create, insert, search } from 'npm:@orama/orama'

const db = await create({
  schema: {
    title: 'string',
    content: 'string'
  }
})

await insert(db, {
  title: 'Hello Deno',
  content: 'Orama works seamlessly with Deno!'
})

const results = await search(db, { term: 'deno' })
console.log(results)

Deno Deploy

// main.ts
import { serve } from 'https://deno.land/[email protected]/http/server.ts'
import { create, load, search } from 'npm:@orama/orama'

const db = await create({ schema: { title: 'string' } })

// Load a pre-built index shipped with the deployment
// (it could also come from Deno KV or object storage)
const indexData = await Deno.readTextFile('./search-index.json')
load(db, JSON.parse(indexData))

serve(async (req) => {
  const url = new URL(req.url)
  const query = url.searchParams.get('q')
  
  if (!query) {
    return new Response('Query parameter required', { status: 400 })
  }
  
  const results = await search(db, { term: query, limit: 20 })
  
  return new Response(JSON.stringify(results), {
    headers: { 'Content-Type': 'application/json' }
  })
})

Production Deployment Checklist

1. Pre-build indexes: always build indexes at build time, not runtime.
2. Implement caching: use Redis, KV stores, or CDN caching for serialized indexes.
3. Monitor performance: track search latency and index size.
4. Set up error handling: handle edge cases such as empty queries and malformed data.
5. Implement rate limiting: protect your search endpoints from abuse.
6. Enable compression: compress serialized indexes (gzip, brotli).
7. Test cold starts: verify performance in serverless cold start scenarios.
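
The compression item can be handled with Node's built-in zlib before the serialized index is written to disk or a KV store; a minimal sketch:

```javascript
import { gzipSync, gunzipSync } from 'node:zlib'

// Compress a serialized index before persisting it (disk, KV, CDN).
// JSON index dumps are repetitive, so gzip typically shrinks them a lot.
function compressIndex(serialized) {
  return gzipSync(JSON.stringify(serialized))
}

function decompressIndex(buffer) {
  return JSON.parse(gunzipSync(buffer).toString('utf-8'))
}

const index = { docs: Array(500).fill({ title: 'hello world' }) }
const compressed = compressIndex(index)
console.log(compressed.length < Buffer.byteLength(JSON.stringify(index)))
// true: the compressed payload is smaller than the raw JSON
```

On the serving side, remember to set the matching Content-Encoding header (or decompress before calling load()).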

Environment Comparison

| Feature     | Browser           | Node.js                | Edge             | Deno          |
|-------------|-------------------|------------------------|------------------|---------------|
| Index Size  | Small (<5 MB)     | Large (GB+)            | Small (<10 MB)   | Medium        |
| Indexing    | Pre-built only    | Build or load          | Pre-built only   | Build or load |
| Performance | Limited by device | Full resources         | Ultra-fast       | Fast          |
| Best For    | Client search     | APIs, batch processing | Low-latency APIs | Modern apps   |
| Memory      | Limited           | High                   | Limited          | Medium        |

Docker Deployment

# Dockerfile
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# Pre-build search index
RUN node scripts/build-index.js

EXPOSE 3000

CMD ["node", "server.js"]
# docker-compose.yml
version: '3.8'
services:
  search-api:
    build: .
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=production
    restart: unless-stopped

Monitoring and Observability

import { search } from '@orama/orama'

app.get('/api/search', async (req, res) => {
  const startTime = Date.now()
  
  try {
    const results = await search(db, {
      term: req.query.q,
      limit: 20
    })
    
    const duration = Date.now() - startTime
    
    // Log metrics
    console.log({
      query: req.query.q,
      resultsCount: results.count,
      duration: `${duration}ms`,
      searchTime: results.elapsed.formatted
    })
    
    // Send to your monitoring service (`metrics` stands in for your own
    // client, e.g. a StatsD or Datadog SDK)
    metrics.gauge('search.duration', duration)
    metrics.increment('search.requests')
    
    res.json(results)
  } catch (error) {
    metrics.increment('search.errors')
    // Respond instead of re-throwing: Express 4 does not catch errors
    // thrown from async handlers
    res.status(500).json({ error: error.message })
  }
})

Next Steps

Performance Optimization

Optimize your deployment for maximum performance

Plugin System

Extend functionality with plugins

Data Import

Learn how to import data efficiently

Persistence

Save and restore your search index
