Pinecone is a fully managed vector database that provides fast, accurate similarity search at scale.
## Installation

```bash
npm install @mastra/pinecone
```
## Configuration

### Get a Pinecone API Key

Sign up at Pinecone and create an API key.

### Import PineconeVector

```typescript
import { PineconeVector } from '@mastra/pinecone';
```

### Create a Vector Store Instance

```typescript
const vectorStore = new PineconeVector({
  id: 'embeddings',
  apiKey: process.env.PINECONE_API_KEY!,
});
```
### Configure Mastra

```typescript
import { Mastra } from '@mastra/core';

const mastra = new Mastra({
  vectors: {
    embeddings: vectorStore,
  },
});
```
## Configuration Options

| Option | Description |
| --- | --- |
| `id` | Unique identifier for the vector store instance |
| `apiKey` | Pinecone API key from your Pinecone dashboard |
## Vector Operations

### Create Index

```typescript
const vectorStore = new PineconeVector({
  id: 'embeddings',
  apiKey: process.env.PINECONE_API_KEY!,
});

await vectorStore.createIndex({
  indexName: 'documents',
  dimension: 1536, // OpenAI text-embedding-3-small
  metric: 'cosine',
});
```
### Upsert Vectors

```typescript
const vectors = [
  [0.1, 0.2, 0.3, ...], // 1536 dimensions
  [0.4, 0.5, 0.6, ...],
];

const metadata = [
  { text: 'First document', source: 'doc1.pdf' },
  { text: 'Second document', source: 'doc2.pdf' },
];

const ids = await vectorStore.upsert({
  indexName: 'documents',
  vectors,
  metadata,
});
```
### Query Similar Vectors

```typescript
const queryVector = [0.15, 0.25, 0.35, ...]; // Query embedding

const results = await vectorStore.query({
  indexName: 'documents',
  queryVector,
  topK: 5,
  includeVector: false,
});

results.forEach(result => {
  console.log(`Score: ${result.score}`);
  console.log(`Metadata: ${JSON.stringify(result.metadata)}`);
});
```
To restrict a query by metadata, pass a `filter`:

```typescript
const results = await vectorStore.query({
  indexName: 'documents',
  queryVector,
  topK: 5,
  filter: {
    source: { $eq: 'doc1.pdf' },
  },
});
```
### Update Vector

```typescript
await vectorStore.updateVector({
  indexName: 'documents',
  id: 'vector-id-123',
  update: {
    vector: [0.2, 0.3, 0.4, ...],
    metadata: { text: 'Updated document' },
  },
});
```
### Delete Vectors

```typescript
// Delete by ID
await vectorStore.deleteVector({
  indexName: 'documents',
  id: 'vector-id-123',
});

// Delete multiple by IDs
await vectorStore.deleteVectors({
  indexName: 'documents',
  ids: ['id1', 'id2', 'id3'],
});

// Delete by filter
await vectorStore.deleteVectors({
  indexName: 'documents',
  filter: {
    source: { $eq: 'doc1.pdf' },
  },
});
```
## Index Management

```typescript
// List all indexes
const indexes = await vectorStore.listIndexes();
console.log('Available indexes:', indexes);

// Describe index
const stats = await vectorStore.describeIndex({
  indexName: 'documents',
});
console.log('Dimension:', stats.dimension);
console.log('Vector count:', stats.count);
console.log('Metric:', stats.metric);

// Delete index
await vectorStore.deleteIndex({
  indexName: 'documents',
});

// Truncate index (delete all vectors)
await vectorStore.truncateIndex({
  indexName: 'documents',
});
```
## RAG Integration

```typescript
import { Mastra } from '@mastra/core';
import { PineconeVector } from '@mastra/pinecone';
import { createOpenAI } from '@ai-sdk/openai';
import { embed } from 'ai';

const mastra = new Mastra({
  vectors: {
    embeddings: new PineconeVector({
      id: 'embeddings',
      apiKey: process.env.PINECONE_API_KEY!,
    }),
  },
});

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

// Store document
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'Your document text here',
});

await mastra.vectors.embeddings.upsert({
  indexName: 'documents',
  vectors: [embedding],
  metadata: [{ text: 'Your document text here' }],
});

// Query for context
const { embedding: queryEmbedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'User query',
});

const results = await mastra.vectors.embeddings.query({
  indexName: 'documents',
  queryVector: queryEmbedding,
  topK: 3,
});

const context = results.map(r => r.metadata.text).join('\n\n');
```
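The joined context string can then be interpolated into a prompt for the model. A minimal sketch, assuming `passages` stands in for `results.map(r => r.metadata.text)` from the query above (the template wording is illustrative, not a Mastra API):

```typescript
// Illustrative only: build a grounded prompt from retrieved passages.
const passages = ['First passage', 'Second passage'];
const context = passages.join('\n\n');
const userQuery = 'User query';

const prompt = [
  'Answer using only the context below.',
  '',
  'Context:',
  context,
  '',
  `Question: ${userQuery}`,
].join('\n');
```

The resulting string can be passed as the user or system message of whichever chat model you call next.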
## Metadata Filtering

Pinecone supports rich metadata filtering:

```typescript
// Equality
filter: { category: { $eq: 'technology' } }

// Inequality
filter: { priority: { $ne: 'low' } }

// Comparison
filter: { score: { $gte: 0.8 } }

// Set membership
filter: { tags: { $in: ['ai', 'ml'] } }

// Logical operators
filter: {
  $and: [
    { category: { $eq: 'technology' } },
    { score: { $gte: 0.7 } },
  ]
}
```
## Distance Metrics

Pinecone supports three distance metrics:

- `cosine`: cosine similarity (default; range -1 to 1)
- `euclidean`: Euclidean distance (L2)
- `dotproduct`: dot product similarity

```typescript
await vectorStore.createIndex({
  indexName: 'documents',
  dimension: 1536,
  metric: 'cosine', // or 'euclidean' or 'dotproduct'
});
```
## Best Practices

### Choose Appropriate Dimensions

Match index dimensions to your embedding model:

- OpenAI `text-embedding-3-small`: 1536
- OpenAI `text-embedding-3-large`: 3072
- OpenAI `text-embedding-ada-002`: 1536
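One way to keep index creation from drifting out of sync with the embedding model is a small lookup table. `EMBEDDING_DIMENSIONS` is an illustrative constant built from the list above, not part of the package:

```typescript
// Dimensions per OpenAI embedding model, matching the list above.
const EMBEDDING_DIMENSIONS: Record<string, number> = {
  'text-embedding-3-small': 1536,
  'text-embedding-3-large': 3072,
  'text-embedding-ada-002': 1536,
};

// Example: look up the dimension once and reuse it for createIndex.
const dimension = EMBEDDING_DIMENSIONS['text-embedding-3-small'];
```

Passing `dimension` to `createIndex` then guarantees the index matches the model used for `embed` calls.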
### Batch Operations

Upsert vectors in batches of 100-1000 for optimal performance.
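A minimal batching sketch: `chunk` is an illustrative helper (not part of `@mastra/pinecone`) that splits an array into fixed-size batches so each upsert call stays within the suggested range.

```typescript
// Split an array into batches of at most `size` elements.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage (assumes `vectors` and `metadata` arrays as in the upsert example):
// for (const [i, batch] of chunk(vectors, 500).entries()) {
//   await vectorStore.upsert({
//     indexName: 'documents',
//     vectors: batch,
//     metadata: chunk(metadata, 500)[i],
//   });
// }
```

Keeping vectors and metadata chunked with the same batch size preserves their one-to-one pairing across calls.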
### Use Metadata Filtering

Filter by metadata to reduce the search space and improve relevance.

### Monitor Index Stats

Regularly check index statistics with `describeIndex()` to track growth.
## Serverless vs Pods

Pinecone offers two deployment options:

### Serverless (recommended for most use cases)

- Pay per request
- Auto-scaling
- No capacity planning
- Possible cold-start latency

### Pods

- Dedicated resources
- Predictable latency
- Fixed cost
- Manual scaling

Choose pods for:

- High-throughput applications
- Latency-sensitive workloads
- Consistent traffic patterns
## Related

- Qdrant: open-source vector database alternative
- pgvector: PostgreSQL vector extension
- Pinecone Docs: official Pinecone documentation
- Pinecone Console: manage indexes and API keys