This guide provides detailed instructions for installing and configuring OmniSearches, covering prerequisites, setup options, and advanced configuration.

System Requirements

Minimum Requirements

  • Node.js: v18.0.0 or higher (v20+ recommended)
  • RAM: 512 MB minimum, 1 GB recommended
  • Disk Space: 500 MB for dependencies and build artifacts
  • Operating System: Linux, macOS, or Windows with WSL2

Package Manager

OmniSearches uses npm as its primary package manager. While other managers may work, npm is officially supported and tested.
You can check your Node.js and npm versions:
node --version
npm --version

API Keys Setup

OmniSearches requires two free API keys to function:

Google Gemini API Key

1. Visit Google AI Studio

Navigate to Google AI Studio and sign in with your Google account.
2. Create API Key

Click “Get API Key” and create a new key for your project. The free tier includes generous limits:
  • 15 requests per minute
  • 1 million tokens per minute
  • 1,500 requests per day
3. Copy the key

Save your API key securely. You’ll need it for the GOOGLE_API_KEY environment variable.

OpenRouter API Key

1. Visit OpenRouter

Go to OpenRouter and create a free account.
2. Generate key

Navigate to the API Keys section and create a new key. OpenRouter provides free access to Deepseek R1 Distill Llama 70B.
3. Copy the key

Save this key for the REASON_MODEL_API_KEY environment variable.
Keep your API keys confidential. Never commit them to version control or share them publicly.

Installation Methods

Method 1: Local Development

1. Clone the repository

git clone https://github.com/kiwigaze/OmniSearches.git
cd OmniSearches
2. Install dependencies

npm install
This installs:
  • Frontend: React 18.3, Vite 5.4, TailwindCSS, shadcn/ui components
  • Backend: Express 4.21, Google Generative AI SDK, OpenAI SDK
  • TypeScript: Full type safety across the stack
3. Configure environment

Create a .env file in the project root:
# Required: Google Gemini API
GOOGLE_API_KEY=your_google_api_key_here

# Required: OpenRouter for reasoning mode
REASON_MODEL_API_KEY=your_openrouter_api_key_here
REASON_MODEL_API_URL=https://openrouter.ai/api/v1
REASON_MODEL=deepseek/deepseek-r1-distill-llama-70b:free

# Optional: Custom port (defaults to 3000)
PORT=3000

# Optional: Node environment (auto-detected)
NODE_ENV=development
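A quick way to catch a typo in this file is to validate the required variables at startup. The following is a minimal sketch, not the project's actual startup code; the variable names match the .env above.

```typescript
// Required variables from the .env file above.
const REQUIRED = [
  'GOOGLE_API_KEY',
  'REASON_MODEL_API_KEY',
  'REASON_MODEL_API_URL',
  'REASON_MODEL',
];

// Return the names of any required variables that are missing or empty.
function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((name) => {
    const value = env[name];
    // A variable set to an empty (or whitespace-only) string counts as missing.
    return !value || value.trim() === '';
  });
}

// Example: a .env with only the Google key reports the three REASON_MODEL_* names.
console.log(missingEnvVars({ GOOGLE_API_KEY: 'abc123' }));
```

In a real server you would call this with `process.env` and exit with a clear error message before anything else starts.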
4. Start development server

npm run dev
This command uses tsx to run the TypeScript server directly, with no separate compile step. The dev server includes:
  • Hot module replacement (HMR) for frontend
  • Automatic server restart on backend changes
  • Full TypeScript support
5. Verify installation

Open your browser to http://localhost:3000 and perform a test search. Check the terminal for:
--- Environment Setup Debug ---
Environment variables loaded: { GOOGLE_API_KEY: '***', ... }
--- End Debug ---

serving on port 3000

Method 2: Production Build

For production deployment:
1. Complete local installation

Follow Method 1 steps 1-3 above.
2. Build the application

npm run build
This runs a multi-stage build:
  1. Installs production dependencies
  2. Builds the React frontend with Vite (output: dist/public)
  3. Bundles the Express backend with esbuild (output: dist/index.js)
The build process uses:
  • Vite for frontend bundling with optimizations
  • esbuild for fast backend compilation
  • ESM format for modern JavaScript modules
3. Configure production environment

Update your .env file:
NODE_ENV=production
GOOGLE_API_KEY=your_production_key
REASON_MODEL_API_KEY=your_production_key
REASON_MODEL_API_URL=https://openrouter.ai/api/v1
REASON_MODEL=deepseek/deepseek-r1-distill-llama-70b:free
4. Start production server

npm start
This runs the compiled application from the dist directory. The production server:
  • Serves static assets directly (no Vite dev server)
  • Uses optimized builds for better performance
  • Runs on Node.js without development dependencies

Environment Variables Reference

Required Variables

GOOGLE_API_KEY
string
required
Your Google API key with access to Gemini 2.0 Flash API. Get it from Google AI Studio.
REASON_MODEL_API_KEY
string
required
OpenRouter API key for Deepseek reasoning model. Obtain from OpenRouter.
REASON_MODEL_API_URL
string
required
OpenRouter API base URL. Default: https://openrouter.ai/api/v1
REASON_MODEL
string
required
Reasoning model identifier. Default: deepseek/deepseek-r1-distill-llama-70b:free

Optional Variables

PORT
number
default:"3000"
Port number for the HTTP server. The application will listen on 0.0.0.0:[PORT].
NODE_ENV
string
default:"development"
Environment mode. Values: development or production. Affects:
  • Vite dev server vs static serving
  • Logging verbosity
  • Error handling detail
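The two optional variables typically influence startup along these lines. This is an illustrative sketch with hypothetical function names, not the project's actual server code.

```typescript
// Resolve the listening port, falling back to 3000 for unset or invalid values.
function resolvePort(env: Record<string, string | undefined>): number {
  const port = Number(env.PORT ?? '3000');
  return Number.isInteger(port) && port > 0 ? port : 3000;
}

// Anything other than "production" gets the Vite dev server with HMR;
// production serves the prebuilt static assets instead.
function useViteDevServer(env: Record<string, string | undefined>): boolean {
  return env.NODE_ENV !== 'production';
}

console.log(resolvePort({}));                              // -> 3000
console.log(useViteDevServer({ NODE_ENV: 'production' })); // -> false
```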

Project Structure

Understanding the project layout:
OmniSearches/
├── client/                 # Frontend React application
│   ├── src/
│   │   ├── pages/         # Home and Search pages
│   │   ├── components/    # React components (shadcn/ui)
│   │   ├── lib/           # Utility functions
│   │   ├── contexts/      # React contexts (Language)
│   │   └── i18n/          # Internationalization
│   └── index.html         # Entry HTML
├── server/                # Backend Express application
│   ├── index.ts           # Main server file
│   ├── routes.ts          # API route handlers
│   ├── env.ts             # Environment setup
│   └── vite.ts            # Vite dev server integration
├── dist/                  # Production build output
│   ├── public/            # Built frontend assets
│   └── index.js           # Compiled backend
├── package.json           # Dependencies and scripts
├── vite.config.ts         # Vite configuration
├── tsconfig.json          # TypeScript configuration
└── .env                   # Environment variables (create this)

Available Scripts

# Start development server with hot reload
npm run dev

# Build frontend and backend for production
npm run build

# Start the production server (after building)
npm start

# TypeScript type checking
npm run check

Technology Stack Details

Frontend Technologies

  • React 18.3.1: Modern React with hooks and concurrent features
  • Vite 5.4.9: Fast build tool and dev server
  • TypeScript 5.6.3: Type-safe JavaScript
  • TailwindCSS 3.4: Utility-first CSS framework
  • shadcn/ui: Beautiful, accessible component library
  • Radix UI: Unstyled, accessible component primitives
  • Wouter: Lightweight routing (3.3.5)
  • React Query: Data fetching and state management
  • Framer Motion: Animation library
  • Lucide React: Icon library
  • React Markdown: Render markdown responses

Backend Technologies

  • Express.js 4.21.2: Web framework
  • TypeScript: Type-safe server code
  • Google Generative AI 0.21.0: Gemini API SDK
  • OpenAI SDK 4.85.2: Compatible with OpenRouter
  • tsx 4.19.1: TypeScript execution for development
  • esbuild 0.25.0: Fast bundler for production
  • dotenv 16.4.7: Environment variable management
  • Marked 15.0.4: Markdown parsing
  • Axios 1.7.9: HTTP client

Verification & Testing

Test Development Setup

1. Check server logs

After running npm run dev, verify you see:
Environment variables loaded: { GOOGLE_API_KEY: '***', ... }
serving on port 3000
2. Test API endpoints

Open a new terminal and test the health endpoint:
curl http://localhost:3000/api/server-ip
You should receive a JSON response with an IP address.
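If you prefer to script this check, here is a minimal sketch using Node 18+'s built-in fetch. It is illustrative, not part of the project, and only verifies that the endpoint answers with parseable JSON; it makes no assumption about the exact field names in the response.

```typescript
// Returns true if the endpoint responds with HTTP 2xx and a valid JSON body.
async function checkHealth(baseUrl: string): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/api/server-ip`);
    if (!res.ok) return false;
    await res.json(); // throws if the body isn't valid JSON
    return true;
  } catch {
    // Connection refused, timeout, or malformed JSON all count as unhealthy.
    return false;
  }
}

checkHealth('http://localhost:3000').then((ok) =>
  console.log(ok ? 'health check passed' : 'health check failed'),
);
```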
3. Perform test search

In your browser at http://localhost:3000:
  1. Enter a simple query like “test”
  2. Verify you receive AI-generated results
  3. Check that sources appear with links
  4. Confirm related questions are generated
4. Test reasoning mode

  1. Enable reasoning mode toggle
  2. Enter a query
  3. Verify streaming reasoning output appears
  4. Confirm search results follow the reasoning

Test Production Build

# Build for production
npm run build

# Start production server
npm start

# In another terminal, test
curl http://localhost:3000
You should receive the HTML for the built application.

Troubleshooting

Installation Issues

If npm install fails with peer dependency conflicts, try the legacy peer deps flag:
npm install --legacy-peer-deps
If you see type errors during build or check, ensure you're using TypeScript 5.6.3:
npm list typescript
If different, reinstall:
npm install [email protected] --save-dev
Check your Node version:
node --version
If below v18, upgrade using nvm:
nvm install 20
nvm use 20

Runtime Issues

If environment variables aren't loading, verify your .env file:
  1. Is it in the root directory (not client/ or server/)?
  2. Are there any extra spaces around the = signs?
  3. Are values in quotes if they contain special characters?
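The second check above is the most common mistake. As a rough sketch (this is not dotenv's actual parser), a single .env line can be flagged like this:

```typescript
// Flag the common .env mistakes listed above; returns null for a clean line.
function envLineIssue(line: string): string | null {
  const trimmed = line.trim();
  if (trimmed === '' || trimmed.startsWith('#')) return null; // blank or comment
  const eq = line.indexOf('=');
  if (eq === -1) return 'no "=" found';
  if (line[eq - 1] === ' ' || line[eq + 1] === ' ') {
    return 'extra spaces around "="';
  }
  return null;
}

console.log(envLineIssue('GOOGLE_API_KEY = abc')); // -> 'extra spaces around "="'
console.log(envLineIssue('GOOGLE_API_KEY=abc'));   // -> null
```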
Debug by checking server logs:
npm run dev
# Look for "Environment variables loaded:" in output
If Gemini API requests fail, common causes include:
  • API key is invalid or expired
  • Gemini API is not enabled for your project
  • Quota limits exceeded
Verify at Google AI Studio:
  1. Check API key is active
  2. Verify Gemini 2.0 Flash is enabled
  3. Review usage quotas
If reasoning mode fails, check that:
  • API key format is correct
  • Model identifier is exact: deepseek/deepseek-r1-distill-llama-70b:free
  • API URL is correct: https://openrouter.ai/api/v1
Test your OpenRouter key:
curl https://openrouter.ai/api/v1/auth/key \
  -H "Authorization: Bearer $REASON_MODEL_API_KEY"
If port 3000 is already in use, find and kill the process using it. Linux/macOS:
lsof -ti:3000 | xargs kill -9
Windows:
netstat -ano | findstr :3000
taskkill /PID <PID> /F
Or use a different port:
PORT=3001 npm run dev
If the build runs out of memory, increase the Node.js memory limit:
NODE_OPTIONS="--max-old-space-size=4096" npm run build

Performance Issues

If searches are slow, these factors affect speed:
  • Network latency to Google/OpenRouter APIs
  • Search mode (Exhaustive is slower than Concise)
  • Reasoning mode adds overhead
Optimize by:
  • Using Concise mode for simple queries
  • Disabling reasoning for faster results
  • Checking your internet connection
Development mode uses more memory due to:
  • Vite dev server with HMR
  • Source maps
  • Development dependencies
For production, use:
npm run build && npm start

Security Best Practices

Follow these security guidelines to protect your API keys and application:

API Key Protection

  1. Never commit .env files
    • Already configured in .gitignore
    • Use environment variables in production
  2. Use different keys for development and production
    • Create separate API keys for each environment
    • Rotate keys periodically
  3. Implement rate limiting in production
    • Add Express rate limiting middleware
    • Monitor API usage quotas

Production Hardening

// Example: Add rate limiting (install express-rate-limit)
import rateLimit from 'express-rate-limit';

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100 // limit each IP to 100 requests per windowMs
});

app.use('/api/', limiter);

Next Steps

Architecture Guide

Understand how OmniSearches works internally

API Reference

Explore the REST API endpoints and responses

Configuration

Advanced configuration and customization

Deployment

Deploy to Railway, Vercel, or your own infrastructure

Getting Help

If you need assistance:
