Introduction
The OmniSearches API provides powerful endpoints for performing AI-powered web searches, reasoning analysis, and follow-up conversations. Built on Google's Gemini 2.0 Flash model with grounding capabilities, it delivers accurate, cited answers with automatic source extraction.

Base URL

All API requests should be made to `http://localhost:3000` during local development; replace it with your deployed server URL in production.
Authentication
The OmniSearches API requires server-side configuration with the following environment variables:

- Google Generative AI API key for Gemini model access
- API key for the reasoning model (e.g., DeepSeek)
- Base URL for the reasoning model API endpoint
- Model identifier for reasoning (defaults to `deepseek-reasoner`)
The API does not currently require client authentication headers. Authentication is managed through server environment configuration.
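A small startup check can make missing configuration obvious before the server handles traffic. The variable names below are illustrative assumptions only; substitute the names your deployment actually uses:

```python
import os

# Illustrative variable names -- not confirmed by the API docs; replace with
# the names your deployment actually uses.
REQUIRED_VARS = [
    "GOOGLE_GENERATIVE_AI_API_KEY",  # Gemini model access
    "REASONING_API_KEY",             # reasoning model key (e.g., DeepSeek)
    "REASONING_BASE_URL",            # reasoning model API endpoint
]

def missing_vars(env=os.environ):
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

def reasoning_model(env=os.environ):
    """The reasoning model identifier falls back to a default when unset."""
    return env.get("REASONING_MODEL", "deepseek-reasoner")
```

Running a check like this at startup fails fast with a clear list of what is missing, rather than surfacing opaque upstream errors on the first request.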
Request Format
All POST requests must include a JSON body with the `Content-Type: application/json` header.

Response Format
All API responses (except streaming endpoints) return JSON with the following structure:

Success Response
Error Response
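As a rough sketch of the two shapes, the dictionaries below show what a success and an error payload might look like. `sessionId` and the cited sources follow the features described in this guide; the remaining field names are assumptions, not the API's confirmed schema:

```python
# Sketch of plausible response shapes; the exact fields depend on the endpoint.
# `sessionId` and `sources` follow the features described in this guide; the
# other field names are assumptions rather than the confirmed schema.
success_response = {
    "success": True,
    "answer": "Cited answer text...",
    "sources": [{"title": "Example Source", "url": "https://example.com"}],
    "sessionId": "abc123",  # reuse in POST /api/follow-up
}

error_response = {
    "success": False,
    "error": "Missing required parameter",
}
```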
Status Codes

- `200` - Success
- `400` - Bad request (missing or invalid parameters)
- `404` - Resource not found (e.g., invalid session)
- `500` - Server error
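Clients can handle these codes uniformly. A minimal sketch, where the action labels are purely illustrative:

```python
def classify_status(status: int) -> str:
    """Map an OmniSearches status code to a client-side action (illustrative)."""
    if status == 200:
        return "ok"            # success
    if status == 400:
        return "fix_request"   # missing or invalid parameters
    if status == 404:
        return "check_session" # e.g., the sessionId no longer exists
    if status >= 500:
        return "retry_later"   # server error; back off and retry
    return "unexpected"
```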
Rate Limits
Rate limits depend on your Google Generative AI API quota and reasoning model provider limits. Monitor your API usage through the respective provider dashboards.

API Endpoints
- Search: Perform AI-powered web searches with multiple modes
- Reasoning: Generate search strategies and explanation plans
- Follow-up: Continue conversations in existing search sessions
Getting Started
- Start a search: Use `GET /api/search` or `POST /api/search` to perform an initial query
- Get reasoning (optional): Use `GET /api/reasoning` to analyze complex queries before searching
- Follow up: Use `POST /api/follow-up` with the returned `sessionId` to ask related questions
Example Workflow
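The three steps above can be sketched as request construction using only the Python standard library. The query parameter names (`q`, `query`) are assumptions to be checked against the endpoint documentation:

```python
import json
import urllib.request
from urllib.parse import urlencode

BASE_URL = "http://localhost:3000"  # replace with your deployed server URL

def search_url(query: str) -> str:
    """Step 1: initial query via GET /api/search (parameter name assumed)."""
    return f"{BASE_URL}/api/search?{urlencode({'q': query})}"

def reasoning_url(query: str) -> str:
    """Step 2 (optional): GET /api/reasoning to analyze the query first."""
    return f"{BASE_URL}/api/reasoning?{urlencode({'q': query})}"

def follow_up_request(session_id: str, query: str) -> urllib.request.Request:
    """Step 3: POST /api/follow-up with the sessionId returned by the search."""
    body = json.dumps({"sessionId": session_id, "query": query}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/api/follow-up",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Each request can then be sent with `urllib.request.urlopen(...)` once the server is running; the follow-up call reuses the `sessionId` from the initial search response to keep the conversation in the same session.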
Next Steps
- Search Endpoint: Explore search modes and parameters
- Reasoning Endpoint: Learn about streaming reasoning analysis