
TypeScript SDK Guide

The VecLabs TypeScript SDK provides a Pinecone-compatible API for storing and querying vector embeddings on Solana with cryptographic verification.

Installation

1. Install the SDK

npm install solvec@alpha
# or
yarn add solvec@alpha

2. Import the client

import { SolVec } from 'solvec';

Quick Start

import { SolVec } from 'solvec';

// Initialize client
const sv = new SolVec({ network: 'devnet' });

// Create a collection (equivalent to Pinecone index)
const col = sv.collection('agent-memory', { dimensions: 1536 });

// Upsert vectors
await col.upsert([
  { 
    id: 'mem_001', 
    values: [...], // 1536-dim embedding
    metadata: { text: 'User prefers dark mode' } 
  }
]);

// Query for nearest neighbors
const results = await col.query({ 
  vector: [...], 
  topK: 5 
});

Client Configuration

Network Options

const sv = new SolVec({ network: 'devnet' });
The SDK automatically connects to a default RPC endpoint for each network. Use rpcUrl to override it with your own endpoint (Helius, QuickNode, etc.).
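For example, a custom endpoint can be passed alongside the network. The URL below is a placeholder, not a real provider endpoint:

```typescript
import { SolVec } from 'solvec';

// Override the default devnet RPC with your own provider's endpoint
const sv = new SolVec({
  network: 'devnet',
  rpcUrl: 'https://my-provider.example.com/?api-key=YOUR_KEY' // hypothetical URL
});
```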

Wallet Configuration

const sv = new SolVec({ 
  network: 'devnet',
  walletPath: '~/.config/solana/id.json'
});

// Access wallet public key
console.log(sv.walletPublicKey?.toString());
A wallet is required for:
  • Updating on-chain Merkle roots
  • Verifying collection integrity
  • Listing your collections
Query and upsert (without verification) do not sign transactions and work without a wallet.

Collections

Collections are equivalent to Pinecone indexes. Each collection has a fixed dimension and distance metric.

Creating Collections

const col = sv.collection('agent-memory', {
  dimensions: 1536,  // OpenAI text-embedding-3-small
  metric: 'cosine'   // 'cosine', 'euclidean', or 'dot'
});
Default dimension is 1536 (OpenAI embeddings). Default metric is cosine.

Listing Collections

const collections = await sv.listCollections();
console.log(collections); // ['agent-memory', 'user-profiles', ...]

Upserting Vectors

Single Vector

await col.upsert([{
  id: 'mem_001',
  values: [0.1, 0.2, 0.3, ...], // Must match collection dimensions
  metadata: { text: 'User is Alex', timestamp: Date.now() }
}]);

Batch Upsert

const vectors = [
  { id: 'vec_1', values: [...], metadata: { category: 'product' } },
  { id: 'vec_2', values: [...], metadata: { category: 'review' } },
  { id: 'vec_3', values: [...], metadata: { category: 'product' } },
];

const response = await col.upsert(vectors);
console.log(`Upserted ${response.upsertedCount} vectors`);
All vectors must match the collection’s dimension. Mismatched dimensions throw a DimensionMismatch error.

Upsert with OpenAI Embeddings

import OpenAI from 'openai';
import { SolVec } from 'solvec';

const openai = new OpenAI();
const sv = new SolVec({ network: 'devnet' });
const col = sv.collection('documents', { dimensions: 1536 });

const text = "VecLabs is a decentralized vector database on Solana";

const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: text
});

const embedding = response.data[0].embedding;

await col.upsert([{
  id: 'doc_001',
  values: embedding,
  metadata: { text, source: 'docs' }
}]);

Idempotency

// First upsert
await col.upsert([{ id: 'a', values: [1, 0, 0] }]);

// Second upsert with same ID updates the vector
await col.upsert([{ id: 'a', values: [0, 1, 0] }]);

const stats = await col.describeIndexStats();
console.log(stats.vectorCount); // 1 (not 2)
Upsert is idempotent. Inserting a vector with an existing ID updates the vector instead of creating a duplicate.

Querying Vectors

Basic Query

const { matches } = await col.query({
  vector: queryEmbedding,
  topK: 5
});

for (const match of matches) {
  console.log(match.id, match.score, match.metadata);
}

Query Options

const { matches, namespace } = await col.query({
  vector: [...],
  topK: 10,
  includeMetadata: true,  // default: true
  includeValues: false    // default: false (saves bandwidth)
});

matches.forEach(match => {
  console.log(match.id);         // string
  console.log(match.score);      // 0.0 to 1.0
  console.log(match.metadata);   // { text: '...', ... }
  console.log(match.values);     // undefined (not included)
});
Set includeValues: true only when you need the raw vectors. This significantly increases response size.

Semantic Search Example

import OpenAI from 'openai';
import { SolVec } from 'solvec';

const openai = new OpenAI();
const sv = new SolVec({ network: 'devnet' });
const col = sv.collection('knowledge-base', { dimensions: 1536 });

// User query
const userQuery = "How do I verify vector integrity?";

// Generate embedding
const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: userQuery
});

const queryVector = response.data[0].embedding;

// Search
const { matches } = await col.query({
  vector: queryVector,
  topK: 3
});

matches.forEach((match, i) => {
  console.log(`${i + 1}. ${match.metadata.text} (score: ${match.score})`);
});

Score Interpretation

Cosine Similarity

  • Range: 0.0 to 1.0
  • 1.0 = identical direction
  • 0.0 = orthogonal

Dot Product

  • Range: unbounded
  • Higher = more similar
  • Works best with normalized vectors

Euclidean

  • Range: 0.0 to 1.0 (inverted)
  • 1.0 = closest
  • 0.0 = farthest
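The cosine figures above can be reproduced with a few lines of plain TypeScript. This is the standard cosine-similarity formula, not SDK code:

```typescript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|)
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (identical direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal)
```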

Deleting Vectors

Delete by ID

await col.delete(['mem_001', 'mem_002']);

Delete All (Clear Collection)

// There is no "delete all" call, so you must supply every ID yourself
const allIds: string[] = [ /* IDs you have tracked separately */ ];

await col.delete(allIds);
VecLabs does not currently support namespace-wide deletion. You must delete by specific IDs.
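One way to make clearing a collection practical is to track IDs as you upsert. The registry below is a minimal in-memory sketch written for this guide, not part of the SDK:

```typescript
// Minimal in-memory ID registry (our own helper, not part of the SDK).
class IdRegistry {
  private ids = new Set<string>();

  // Call this with every batch you upsert
  track(records: { id: string }[]): void {
    for (const r of records) this.ids.add(r.id);
  }

  // All IDs seen so far, ready to pass to col.delete(...)
  all(): string[] {
    return [...this.ids];
  }
}

const registry = new IdRegistry();
registry.track([{ id: 'vec_1' }, { id: 'vec_2' }]);
registry.track([{ id: 'vec_2' }, { id: 'vec_3' }]); // duplicate IDs collapse

console.log(registry.all()); // ['vec_1', 'vec_2', 'vec_3']
```

In practice you would call `registry.track(vectors)` alongside each `col.upsert(vectors)`, then `await col.delete(registry.all())` to clear the collection. Persist the registry (e.g. to disk or a database) if your process restarts.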

Collection Statistics

const stats = await col.describeIndexStats();

console.log(stats);
// {
//   vectorCount: 1542,
//   dimension: 1536,
//   metric: 'cosine',
//   name: 'agent-memory',
//   merkleRoot: 'a3f2c1...',
//   lastUpdated: 1678901234567,
//   isFrozen: false
// }

Type Definitions

UpsertRecord

interface UpsertRecord {
  id: string;                      // Unique vector ID
  values: number[];                // Embedding vector
  metadata?: Record<string, any>;  // Optional metadata
}

QueryOptions

interface QueryOptions {
  vector: number[];                // Query embedding
  topK: number;                    // Number of results
  filter?: Record<string, any>;    // Metadata filter (coming soon)
  includeMetadata?: boolean;       // Include metadata (default: true)
  includeValues?: boolean;         // Include vectors (default: false)
}

QueryMatch

interface QueryMatch {
  id: string;                      // Vector ID
  score: number;                   // Similarity score (higher = better)
  metadata?: Record<string, any>;  // Metadata (if requested)
  values?: number[];               // Vector values (if requested)
}

QueryResponse

interface QueryResponse {
  matches: QueryMatch[];           // Results sorted by score descending
  namespace: string;               // Collection name
}

Error Handling

// Collection expects 1536 dimensions
const col = sv.collection('test', { dimensions: 1536 });

try {
  // Throws: Dimension mismatch: collection expects 1536, got 3 for id "bad"
  await col.upsert([{ id: 'bad', values: [1, 2, 3] }]);
} catch (error) {
  if (error instanceof Error && error.message.includes('Dimension mismatch')) {
    console.error('Vector dimension does not match collection');
  }
}

Best Practices

1. Use meaningful IDs

Use descriptive IDs like doc_001, user_alex_mem_5 instead of random UUIDs. This helps with debugging.

2. Batch upserts

Upsert multiple vectors at once instead of one-by-one for better performance.

3. Keep metadata small

Store only essential metadata. Large metadata increases storage costs and query latency.

4. Normalize vectors for dot product

If using dot metric, normalize your vectors to unit length for consistent results.

5. Handle errors gracefully

Always validate vector dimensions before upserting to avoid runtime errors.
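Practices 4 and 5 can be combined into a small pre-upsert helper. `prepareVector` is our own function written for this guide, not part of the SDK:

```typescript
// Validate dimension and normalize to unit length before upserting.
// prepareVector is a hypothetical helper, not an SDK export.
function prepareVector(values: number[], dimensions: number): number[] {
  if (values.length !== dimensions) {
    throw new Error(`Expected ${dimensions} dimensions, got ${values.length}`);
  }
  const norm = Math.sqrt(values.reduce((sum, v) => sum + v * v, 0));
  if (norm === 0) throw new Error('Cannot normalize a zero vector');
  return values.map(v => v / norm);
}

console.log(prepareVector([3, 4], 2)); // [0.6, 0.8]
```

Run your embeddings through a helper like this before `col.upsert(...)` so dimension errors surface in your code rather than as SDK exceptions, and dot-product scores stay comparable across vectors.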

Next Steps

Python SDK

Learn how to use VecLabs with Python

Verification

Verify your vectors on-chain with Merkle proofs

Performance Tuning

Optimize query speed and recall

Collections

Advanced collection management
