
Overview

Synapse SDK provides comprehensive storage operations for Filecoin Onchain Cloud. This guide covers uploading files, downloading data, and managing storage contexts.

Upload Flow

The SDK uses a store → pull → commit pipeline for multi-copy durability:
1. Initialize Synapse
import { Synapse } from '@filoz/synapse-sdk'
import { http } from 'viem'
import { privateKeyToAccount } from 'viem/accounts'
import { calibration } from '@filoz/synapse-core/chains'

const account = privateKeyToAccount('0x...')
const synapse = Synapse.create({
  chain: calibration,
  transport: http(),
  account,
})
2. Upload a File

For single files, use the simple upload method:
import fs from 'fs'
import { Readable } from 'stream'

const fileStream = Readable.toWeb(fs.createReadStream('photo.jpg'))

const result = await synapse.storage.upload(fileStream, {
  callbacks: {
    onProviderSelected: (provider) => {
      console.log(`Selected SP ${provider.id}`)
    },
    onProgress: (bytesUploaded) => {
      console.log(`Uploaded: ${bytesUploaded} bytes`)
    },
    onStored: (providerId, pieceCid) => {
      console.log(`Stored on SP ${providerId}: ${pieceCid}`)
    },
    onPiecesConfirmed: (dataSetId, providerId, pieces) => {
      console.log(`Confirmed on SP ${providerId}`)
    },
  },
})

console.log(`PieceCID: ${result.pieceCid}`)
console.log(`Copies: ${result.copies.length}`)
3. Download a File
const data = await synapse.storage.download({ 
  pieceCid: result.pieceCid 
})

fs.writeFileSync('downloaded.jpg', data)
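The store → pull → commit pipeline behind the upload above can be pictured as three phases: the piece is registered with each selected provider, the providers pull the bytes, and only providers that pulled successfully are confirmed. The following is a minimal simulation of that control flow only; the types and function names here are illustrative and are not the SDK's actual API:

```typescript
// Illustrative model of the store -> pull -> commit phases.
// These types and functions are NOT the SDK API; they only model the flow.

interface PieceRecord {
  providerId: bigint
  pieceCid: string
  pulled: boolean
  committed: boolean
}

// Phase 1: store — register the piece with each selected provider.
function store(pieceCid: string, providerIds: bigint[]): PieceRecord[] {
  return providerIds.map((providerId) => ({
    providerId,
    pieceCid,
    pulled: false,
    committed: false,
  }))
}

// Phase 2: pull — each provider fetches the bytes; some may fail.
function pull(records: PieceRecord[], failed: Set<bigint> = new Set()): PieceRecord[] {
  return records.map((r) => ({ ...r, pulled: !failed.has(r.providerId) }))
}

// Phase 3: commit — only providers that pulled successfully are confirmed.
function commit(records: PieceRecord[]): PieceRecord[] {
  return records.map((r) => ({ ...r, committed: r.pulled }))
}

const records = commit(pull(store('example-piece-cid', [1n, 2n, 3n]), new Set([3n])))
const copies = records.filter((r) => r.committed)
console.log(`Committed on ${copies.length} of ${records.length} providers`)
```

Because commit only confirms providers that pulled, a single failed provider reduces the copy count rather than failing the whole upload.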

Upload Options

Multi-Copy Storage

By default, files are replicated to 2 providers for redundancy:
const result = await synapse.storage.upload(data, {
  count: 3, // Upload to 3 providers
})

console.log(`Stored on ${result.copies.length} providers`)

Specify Providers

Use specific storage providers:
const result = await synapse.storage.upload(data, {
  providerIds: [1n, 2n, 3n],
})

Enable CDN

Enable FilBeam CDN for faster retrieval:
const result = await synapse.storage.upload(data, {
  withCDN: true,
})

Preflight Checks

Check costs and allowances before uploading:
const stat = await fs.promises.stat('largefile.mp4')

const preflight = await synapse.storage.preflightUpload({ 
  size: stat.size 
})

console.log('Estimated costs:')
console.log(`  Per epoch: ${preflight.estimatedCost.perEpoch}`)
console.log(`  Per day: ${preflight.estimatedCost.perDay}`)
console.log(`  Per month: ${preflight.estimatedCost.perMonth}`)

if (!preflight.allowanceCheck.sufficient) {
  console.error('Insufficient allowances:', preflight.allowanceCheck.message)
  // Set up allowances first
}
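Building on the preflight result fields shown above (estimatedCost and allowanceCheck), a small gate function can decide whether to proceed. This helper is a sketch, not part of the SDK; the monthly-budget policy is an assumption for illustration:

```typescript
// Hypothetical helper built around the preflight result shape shown above.
// The budget policy is illustrative; only the field names come from this guide.

interface PreflightResult {
  estimatedCost: { perEpoch: bigint; perDay: bigint; perMonth: bigint }
  allowanceCheck: { sufficient: boolean; message?: string }
}

// Decide whether to proceed with an upload given a monthly budget.
function shouldUpload(
  preflight: PreflightResult,
  monthlyBudget: bigint,
): { ok: boolean; reason?: string } {
  if (!preflight.allowanceCheck.sufficient) {
    return { ok: false, reason: preflight.allowanceCheck.message ?? 'insufficient allowances' }
  }
  if (preflight.estimatedCost.perMonth > monthlyBudget) {
    return { ok: false, reason: 'estimated monthly cost exceeds budget' }
  }
  return { ok: true }
}
```

Running the check before calling upload avoids starting a transfer that will fail at payment time.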

Storage Context

For advanced control, use storage contexts:
const context = await synapse.storage.createContext({
  withCDN: true,
  callbacks: {
    onProviderSelected: (provider) => {
      console.log(`Using provider: ${provider.serviceProvider}`)
    },
  },
})

// Upload using context
const result = await context.upload(data)

Metadata

Attach metadata to uploads:
const result = await synapse.storage.upload(data, {
  pieceMetadata: {
    name: 'vacation-photo.jpg',
    category: 'photos',
    owner: 'alice',
  },
})
  • Maximum 5 metadata keys per piece
  • Keys: max 32 characters
  • Values: max 128 characters
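The limits above can be enforced client-side before uploading, so invalid metadata fails fast. A sketch of such a validator (the limits come from this guide; the helper itself is not part of the SDK):

```typescript
// Client-side check for the metadata limits listed above.
// The numeric limits are from this guide; the helper is illustrative.

const MAX_KEYS = 5
const MAX_KEY_LENGTH = 32
const MAX_VALUE_LENGTH = 128

// Returns a list of violations; an empty list means the metadata is valid.
function validatePieceMetadata(metadata: Record<string, string>): string[] {
  const errors: string[] = []
  const entries = Object.entries(metadata)
  if (entries.length > MAX_KEYS) {
    errors.push(`too many keys: ${entries.length} > ${MAX_KEYS}`)
  }
  for (const [key, value] of entries) {
    if (key.length > MAX_KEY_LENGTH) {
      errors.push(`key "${key}" exceeds ${MAX_KEY_LENGTH} characters`)
    }
    if (value.length > MAX_VALUE_LENGTH) {
      errors.push(`value for "${key}" exceeds ${MAX_VALUE_LENGTH} characters`)
    }
  }
  return errors
}
```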

Download Options

Download from Specific Provider

const data = await synapse.storage.download({
  pieceCid,
  providerAddress: '0x...',
})

Download with CDN

const data = await synapse.storage.download({
  pieceCid,
  withCDN: true, // Use FilBeam CDN
})

Error Handling

import { StoreError, CommitError } from '@filoz/synapse-sdk'

try {
  const result = await synapse.storage.upload(data)
} catch (error) {
  if (error instanceof StoreError) {
    console.error('Upload failed:', error.providerId)
  } else if (error instanceof CommitError) {
    console.error('Commit failed:', error.providerId)
  }
}
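Store failures on a single provider are often transient, so wrapping the upload call in a retry with exponential backoff is a common pattern. A generic helper for that (not part of the SDK):

```typescript
// Generic retry helper with exponential backoff; illustrative, not an SDK API.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
      if (i < attempts - 1) {
        // Wait baseDelayMs, 2x baseDelayMs, 4x baseDelayMs, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i))
      }
    }
  }
  throw lastError
}
```

Usage would look like `await withRetries(() => synapse.storage.upload(data))`; a production version might retry only on StoreError and surface CommitError immediately.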

Size Limits

import { SIZE_CONSTANTS } from '@filoz/synapse-sdk'

console.log(`Min upload size: ${SIZE_CONSTANTS.MIN_UPLOAD_SIZE}`)
console.log(`Max upload size: ${SIZE_CONSTANTS.MAX_UPLOAD_SIZE}`)
Files exceeding the maximum size should be split into multiple uploads.
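Splitting an oversized payload can be done by slicing it into pieces no larger than the maximum. A sketch of that chunking step, assuming the real limit is read from SIZE_CONSTANTS.MAX_UPLOAD_SIZE at runtime rather than hard-coded:

```typescript
// Split data into chunks no larger than maxChunkSize, so each chunk can be
// uploaded as its own piece. Read the real limit from SIZE_CONSTANTS.
function splitIntoChunks(data: Uint8Array, maxChunkSize: number): Uint8Array[] {
  if (maxChunkSize <= 0) throw new Error('maxChunkSize must be positive')
  const chunks: Uint8Array[] = []
  for (let offset = 0; offset < data.length; offset += maxChunkSize) {
    // subarray creates a view, so no bytes are copied here.
    chunks.push(data.subarray(offset, offset + maxChunkSize))
  }
  return chunks
}
```

Each chunk is then uploaded separately, and the resulting PieceCIDs must be tracked together (for example in piece metadata) to reassemble the file on download.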

Best Practices

  • Use streaming: stream large files to minimize memory usage.
  • Check preflight: always run preflight checks for cost estimates.
  • Multi-copy: use at least 2 copies for data redundancy.
  • Handle failures: check result.failures for partial upload issues.
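Checking result.failures might look like the following. The copies and pieceCid fields appear earlier in this guide; the exact shape of a failure entry here is an assumption for illustration:

```typescript
// Illustrative summary of a partial upload. The `failures` field is mentioned
// in this guide; its element shape (providerId, error) is assumed.

interface UploadFailure {
  providerId: bigint
  error: string
}

interface UploadResult {
  pieceCid: string
  copies: { providerId: bigint }[]
  failures: UploadFailure[]
}

// Report which providers stored the piece and which failed.
function summarizeUpload(result: UploadResult): string {
  const stored = result.copies.map((c) => c.providerId).join(', ')
  if (result.failures.length === 0) {
    return `stored on providers ${stored} with no failures`
  }
  const failed = result.failures.map((f) => `${f.providerId} (${f.error})`).join(', ')
  return `stored on providers ${stored}; failed on ${failed}`
}
```

If fewer copies succeeded than requested, the summary tells you which providers to retry against.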

Next Steps

  • Multi-Copy Storage: learn advanced multi-provider upload patterns.
  • Split Operations: use store/pull/commit for batch uploads.
