Qdrant is an open-source vector search engine optimized for high-performance similarity search with rich filtering capabilities.
## Installation

```bash
npm install @mastra/qdrant
```
## Configuration

### Start Qdrant

Run Qdrant locally with Docker:

```bash
docker run -p 6333:6333 qdrant/qdrant
```

Or use Qdrant Cloud.

### Import QdrantVector

```typescript
import { QdrantVector } from '@mastra/qdrant';
```

### Create a vector store instance

```typescript
const vectorStore = new QdrantVector({
  id: 'embeddings',
  url: 'http://localhost:6333',
});
```

### Configure Mastra

```typescript
import { Mastra } from '@mastra/core';

const mastra = new Mastra({
  vectors: {
    embeddings: vectorStore,
  },
});
```
## Configuration Options

- `id`: Unique identifier for the vector store instance
- `url`: Qdrant server URL (e.g., `http://localhost:6333` or `https://your-cluster.cloud.qdrant.io`)
- `apiKey`: API key for Qdrant Cloud or secured instances (optional for unsecured local instances)
## Vector Operations

### Create Collection (Index)

```typescript
const vectorStore = new QdrantVector({
  id: 'embeddings',
  url: 'http://localhost:6333',
});

await vectorStore.createIndex({
  indexName: 'documents',
  dimension: 1536,
  metric: 'cosine',
});
```
### Upsert Vectors

```typescript
const vectors = [
  [0.1, 0.2, 0.3, ...], // 1536 dimensions
  [0.4, 0.5, 0.6, ...],
];

const metadata = [
  { text: 'First document', category: 'tech' },
  { text: 'Second document', category: 'business' },
];

const ids = await vectorStore.upsert({
  indexName: 'documents',
  vectors,
  metadata,
});
```
### Query Similar Vectors

```typescript
const queryVector = [0.15, 0.25, 0.35, ...];

const results = await vectorStore.query({
  indexName: 'documents',
  queryVector,
  topK: 5,
  includeVector: false,
});

results.forEach(result => {
  console.log(`Score: ${result.score}`);
  console.log(`Text: ${result.metadata.text}`);
});
```
### Advanced Filtering

Qdrant supports powerful payload filtering:

```typescript
// Must conditions (AND)
const results = await vectorStore.query({
  indexName: 'documents',
  queryVector,
  topK: 5,
  filter: {
    must: [
      { key: 'category', match: { value: 'tech' } },
      { key: 'year', range: { gte: 2020 } },
    ],
  },
});
```

```typescript
// Should conditions (OR)
const results = await vectorStore.query({
  indexName: 'documents',
  queryVector,
  topK: 5,
  filter: {
    should: [
      { key: 'category', match: { value: 'tech' } },
      { key: 'category', match: { value: 'science' } },
    ],
  },
});
```

```typescript
// Must not conditions (NOT)
const results = await vectorStore.query({
  indexName: 'documents',
  queryVector,
  topK: 5,
  filter: {
    must_not: [
      { key: 'status', match: { value: 'archived' } },
    ],
  },
});
```
### Update Vector

```typescript
await vectorStore.updateVector({
  indexName: 'documents',
  id: 'vector-id-123',
  update: {
    vector: [0.2, 0.3, 0.4, ...],
    metadata: { text: 'Updated document' },
  },
});
```
### Delete Operations

```typescript
// Delete by ID
await vectorStore.deleteVector({
  indexName: 'documents',
  id: 'vector-id-123',
});

// Delete by filter
await vectorStore.deleteVectors({
  indexName: 'documents',
  filter: {
    must: [
      { key: 'status', match: { value: 'archived' } },
    ],
  },
});

// Delete multiple by IDs
await vectorStore.deleteVectors({
  indexName: 'documents',
  ids: ['id1', 'id2', 'id3'],
});
```
## Collection Management

```typescript
// List collections
const collections = await vectorStore.listIndexes();
console.log('Collections:', collections);

// Describe collection
const stats = await vectorStore.describeIndex({
  indexName: 'documents',
});
console.log('Dimension:', stats.dimension);
console.log('Vector count:', stats.count);

// Delete collection
await vectorStore.deleteIndex({
  indexName: 'documents',
});

// Clear all vectors from collection
await vectorStore.truncateIndex({
  indexName: 'documents',
});
```
## RAG Integration

```typescript
import { Mastra } from '@mastra/core';
import { QdrantVector } from '@mastra/qdrant';
import { createOpenAI } from '@ai-sdk/openai';
import { embed } from 'ai';

const mastra = new Mastra({
  vectors: {
    embeddings: new QdrantVector({
      id: 'embeddings',
      url: process.env.QDRANT_URL!,
      apiKey: process.env.QDRANT_API_KEY,
    }),
  },
});

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

// Index documents with metadata
const docs = [
  { text: 'Document 1', category: 'tech', date: '2024-01-01' },
  { text: 'Document 2', category: 'business', date: '2024-01-02' },
];

for (const doc of docs) {
  const { embedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: doc.text,
  });

  await mastra.vectors.embeddings.upsert({
    indexName: 'documents',
    vectors: [embedding],
    metadata: [doc],
  });
}

// Query with filters
const { embedding: queryEmbedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'tech query',
});

const results = await mastra.vectors.embeddings.query({
  indexName: 'documents',
  queryVector: queryEmbedding,
  topK: 3,
  filter: {
    must: [
      { key: 'category', match: { value: 'tech' } },
    ],
  },
});
```
## Distance Metrics

Qdrant supports multiple distance metrics:

- `cosine` - Cosine similarity (recommended for most cases)
- `euclidean` - Euclidean distance (L2)
- `dotproduct` - Dot product similarity

```typescript
await vectorStore.createIndex({
  indexName: 'documents',
  dimension: 1536,
  metric: 'cosine', // or 'euclidean' or 'dotproduct'
});
```
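For intuition, cosine similarity measures the angle between two vectors, ignoring their magnitudes. A minimal illustrative implementation (not part of the Mastra API; Qdrant computes this server-side):

```typescript
// Illustrative only: cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Parallel vectors score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Because the result depends only on direction, `cosine` is a good default when embedding magnitudes carry no meaning, which is the case for most text-embedding models.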
## Payload Indexing

Create indexes on payload fields for faster filtering. This is handled automatically by Qdrant: frequently filtered fields are indexed for you, so no manual setup is required in most cases.
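If you prefer to create a payload index explicitly, Qdrant's REST API exposes an endpoint for it. A request sketch (the `documents` collection and `category` field reuse examples from this page; verify the schema types against Qdrant's docs):

```
PUT /collections/documents/index
{
  "field_name": "category",
  "field_schema": "keyword"
}
```

`keyword` suits exact-match string filters like the `category` examples above; numeric fields filtered by `range` would use a schema such as `integer` or `float`.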
## Deployment Options

### Docker (Local Development)

```bash
docker run -p 6333:6333 -p 6334:6334 \
  -v $(pwd)/qdrant_storage:/qdrant/storage \
  qdrant/qdrant
```

### Qdrant Cloud (Managed Service)

```typescript
const vectorStore = new QdrantVector({
  id: 'embeddings',
  url: 'https://your-cluster.cloud.qdrant.io',
  apiKey: process.env.QDRANT_API_KEY!,
});
```

### Kubernetes

Use the official Helm chart for production deployments.
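A typical install with the official chart looks roughly like this (release name and namespace are placeholders; check the chart's README for current values and options):

```
helm repo add qdrant https://qdrant.github.io/qdrant-helm
helm repo update
helm install qdrant qdrant/qdrant --namespace qdrant --create-namespace
```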
## Best Practices

- **Use Qdrant Cloud for production**: The managed service handles scaling, backups, and high availability.
- **Leverage payload filtering**: Combine vector similarity with metadata filtering for precise results.
- **Batch upserts**: Insert vectors in batches for better performance.
- **Monitor collection stats**: Use `describeIndex()` to track collection growth and performance.
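Batching can be as simple as slicing the input arrays before calling `upsert`. A sketch (the `chunk` helper and the batch size of 100 are illustrative choices, not part of `@mastra/qdrant`):

```typescript
// Illustrative helper: split an array into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Sketch of batched upserts against any store exposing the upsert()
// shape used elsewhere on this page.
async function batchUpsert(
  store: {
    upsert(args: {
      indexName: string;
      vectors: number[][];
      metadata?: Record<string, unknown>[];
    }): Promise<string[]>;
  },
  indexName: string,
  vectors: number[][],
  metadata: Record<string, unknown>[],
  batchSize = 100,
): Promise<string[]> {
  const vectorBatches = chunk(vectors, batchSize);
  const metadataBatches = chunk(metadata, batchSize);
  const ids: string[] = [];
  for (let i = 0; i < vectorBatches.length; i++) {
    ids.push(
      ...(await store.upsert({
        indexName,
        vectors: vectorBatches[i],
        metadata: metadataBatches[i],
      })),
    );
  }
  return ids;
}
```

Sequential batches keep memory bounded and avoid oversized requests; tune the batch size to your vector dimension and network limits.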
## HNSW Index Configuration

HNSW parameters are handled automatically with optimal defaults at collection creation: Qdrant uses `m=16` and `ef_construct=100`.
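If you need different values, Qdrant's REST API accepts an `hnsw_config` when creating a collection. A request sketch (the values shown simply restate the defaults; higher `m` and `ef_construct` trade indexing time and memory for recall):

```
PUT /collections/documents
{
  "vectors": { "size": 1536, "distance": "Cosine" },
  "hnsw_config": { "m": 16, "ef_construct": 100 }
}
```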
## Quantization

Enable scalar quantization to reduce memory usage. This is configured on the Qdrant side; see the [quantization guide](https://qdrant.tech/documentation/guides/quantization/).
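Quantization can also be set per collection through Qdrant's REST API. A hedged sketch (the parameter values are examples; verify them against Qdrant's quantization guide):

```
PUT /collections/documents
{
  "vectors": { "size": 1536, "distance": "Cosine" },
  "quantization_config": {
    "scalar": { "type": "int8", "quantile": 0.99, "always_ram": true }
  }
}
```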
## Related

- **Pinecone**: Managed vector database alternative
- **Chroma**: AI-native embedding database
- **Qdrant Docs**: Official Qdrant documentation
- **Qdrant Cloud**: Managed Qdrant service