AI-Powered Search Engine & Vector Database

Build search, recommendation, and RAG applications at scale with Vespa’s unified platform for text search, vector search, and real-time data serving.

Quick Start

Get up and running with Vespa in minutes

1. Install Vespa CLI

Install the Vespa command-line tool (via Homebrew on macOS or Linux) to interact with Vespa instances:
brew install vespa-cli
Verify the installation:
vespa version
2. Create Your First Application

Create a new Vespa application package with a schema definition:
mkdir my-app && cd my-app
mkdir -p schemas
Create a schema file schemas/music.sd:
schema music {
  document music {
    field artist type string {
      indexing: summary | index
    }
    field title type string {
      indexing: summary | index
    }
    field year type int {
      indexing: summary | attribute
    }
  }
}
3. Deploy and Feed Data

Start a local Vespa container with Docker, then deploy the application to it:
docker run --detach --name vespa --hostname vespa-container --publish 8080:8080 --publish 19071:19071 vespaengine/vespa
vespa deploy --wait 300
Feed a document to your application:
vespa document put id:music:music::1 '{
  "fields": {
    "artist": "The Beatles",
    "title": "Hey Jude",
    "year": 1968
  }
}'
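The CLI wraps Vespa's HTTP Document API. As a minimal sketch, the same feed can be done in Python, assuming the default local endpoint on port 8080; the URL path /document/v1/&lt;namespace&gt;/&lt;document-type&gt;/docid/&lt;id&gt; mirrors the document ID used above:

```python
import json
import urllib.request

# Assumed local endpoint; adjust host/port for your deployment.
ENDPOINT = "http://localhost:8080"

def feed_url(namespace: str, doctype: str, docid: str) -> str:
    """Build a /document/v1 URL for a single document."""
    return f"{ENDPOINT}/document/v1/{namespace}/{doctype}/docid/{docid}"

def put_document(namespace: str, doctype: str, docid: str, fields: dict):
    """POST the document fields to the Document API."""
    body = json.dumps({"fields": fields}).encode()
    req = urllib.request.Request(
        feed_url(namespace, doctype, docid),
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(feed_url("music", "music", "1"))
# → http://localhost:8080/document/v1/music/music/docid/1
```

Calling `put_document("music", "music", "1", {...})` against a running instance performs the same operation as the `vespa document put` command above.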
4. Query Your Data

Search for documents with YQL via the Query API:
vespa query 'select * from music where title contains "jude"'
The response contains the matching document:
{
  "root": {
    "id": "toplevel",
    "relevance": 1.0,
    "fields": {
      "totalCount": 1
    },
    "children": [
      {
        "id": "id:music:music::1",
        "relevance": 0.5,
        "fields": {
          "artist": "The Beatles",
          "title": "Hey Jude",
          "year": 1968
        }
      }
    ]
  }
}
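Downstream code typically walks `root.children` of this response. A small Python sketch that extracts the hits from a response shaped like the one above:

```python
import json

# Sample response in the shape shown above.
response = json.loads("""
{
  "root": {
    "id": "toplevel",
    "relevance": 1.0,
    "fields": {"totalCount": 1},
    "children": [
      {
        "id": "id:music:music::1",
        "relevance": 0.5,
        "fields": {"artist": "The Beatles", "title": "Hey Jude", "year": 1968}
      }
    ]
  }
}
""")

def hits(resp: dict) -> list[dict]:
    """Return the field dicts of all top-level hits, best first."""
    children = resp["root"].get("children", [])
    return [hit["fields"] for hit in
            sorted(children, key=lambda h: h["relevance"], reverse=True)]

for fields in hits(response):
    print(f'{fields["artist"]} - {fields["title"]} ({fields["year"]})')
# → The Beatles - Hey Jude (1968)
```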

Explore by Topic

Deep dive into Vespa’s powerful features

Vector Search

Build semantic search with embeddings and approximate nearest neighbor search

Schemas & Documents

Define your data model with Vespa’s schema definition language

Ranking & ML

Deploy machine learning models and custom ranking expressions

Document API

Feed, update, and retrieve documents via HTTP API

Query Language

Master YQL, Vespa’s powerful SQL-like query language

Deployment

Deploy to local Docker, Kubernetes, or Vespa Cloud

Key Features

Everything you need to build production-ready search applications

Hybrid Search

Combine text, vector, and structured data queries in a single request for superior search relevance
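As an illustration of what such a combined request can look like, here is a hedged Python sketch that assembles a hybrid YQL query. The embedding field, query tensor name `q`, and targetHits value are assumptions for a schema that defines them; they are not part of the quick-start schema above:

```python
def hybrid_yql(doctype: str, embedding_field: str, target_hits: int = 10) -> str:
    """Combine approximate nearest-neighbor search, text matching
    (userQuery), and a structured filter in one YQL statement."""
    return (
        f"select * from {doctype} where "
        f"({{targetHits:{target_hits}}}nearestNeighbor({embedding_field}, q)) "
        f"or (userQuery() and year > 1960)"
    )

print(hybrid_yql("music", "embedding"))
```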

Real-Time Indexing

Index and serve data in real-time with millisecond latency, no batch processing required

Tensor Computing

Built-in tensor operations for advanced ranking, embeddings, and machine learning inference
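For intuition only: a common use of tensor operations is ranking documents by the dot product between a query embedding and a document embedding, which a Vespa ranking expression would compute server-side. A plain-Python sketch of that computation, with made-up example vectors:

```python
def dot(query_vec: list[float], doc_vec: list[float]) -> float:
    """Dot-product similarity, the core of many tensor ranking expressions."""
    assert len(query_vec) == len(doc_vec)
    return sum(q * d for q, d in zip(query_vec, doc_vec))

# Made-up 4-dimensional embeddings for two candidate documents.
query = [0.1, 0.3, 0.5, 0.1]
docs = {"doc-a": [0.2, 0.1, 0.4, 0.3], "doc-b": [0.0, 0.9, 0.1, 0.0]}

# Rank candidates by similarity to the query, best first.
ranked = sorted(docs, key=lambda d: dot(query, docs[d]), reverse=True)
print(ranked)
# → ['doc-b', 'doc-a']
```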

Distributed Architecture

Automatically scale across multiple nodes with built-in redundancy and fault tolerance

Resources

Additional resources to help you succeed with Vespa

Sample Applications

Explore working examples and reference implementations

Community Slack

Join the community to ask questions and share feedback

Vespa Blog

Read about feature updates, use cases, and best practices

GitHub Repository

Contribute to Vespa’s open-source development

Ready to Build?

Start building powerful search and recommendation applications with Vespa today