Overview
The StorageManager is the central facade for all storage operations in the Synapse SDK. It manages storage contexts (provider + data set pairs) with intelligent caching and provides both simple auto-managed operations and advanced split-operation workflows.
Access it via `synapse.storage`.
Architecture
StorageManager
├── Auto-managed uploads (upload, download)
├── Context management (createContext, createContexts)
└── Data set operations (findDataSets, terminateDataSet)
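The "intelligent caching" of contexts can be pictured as a map keyed by the provider + data set pair: asking for the same pair twice returns the same context. The sketch below is an illustrative model only, not the SDK's implementation; `ContextCache`, `StorageContext`, and the key format are invented for this example.

```typescript
// Illustrative model of context caching: repeated requests for the same
// provider + data set pair return the same cached context object.
interface StorageContext {
  providerId: bigint
  dataSetId: bigint
}

class ContextCache {
  private cache = new Map<string, StorageContext>()

  getOrCreate(providerId: bigint, dataSetId: bigint): StorageContext {
    const key = `${providerId}:${dataSetId}`
    let ctx = this.cache.get(key)
    if (ctx === undefined) {
      ctx = { providerId, dataSetId }
      this.cache.set(key, ctx)
    }
    return ctx
  }
}

const cache = new ContextCache()
const a = cache.getOrCreate(1n, 42n)
const b = cache.getOrCreate(1n, 42n) // cache hit: same object
const c = cache.getOrCreate(2n, 42n) // different provider: new context
console.log(a === b, a === c) // true false
```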
Simple Upload/Download
upload()
Upload data with automatic multi-copy redundancy across providers.
Parameters:

- `data` (`Uint8Array | ReadableStream<Uint8Array>`, required): Raw bytes or a readable stream to upload. Use streams for large files.
- `options` (`StorageManagerUploadOptions`): Upload configuration options:
  - Number of provider copies (default: 2 for redundancy)
  - Specific provider IDs to use (disables auto-retry on failure)
  - Specific data set IDs to use (mutually exclusive with providerIds)
  - Pre-created contexts to use (skips context creation)
  - Custom metadata for data set matching
  - Custom metadata for the piece
  - Pre-calculated PieceCID to skip CommP calculation
  - Abort signal to cancel the upload
  - Lifecycle callbacks for upload progress
Returns an upload result containing:

- `pieceCid`: Content identifier
- `size`: Data size in bytes
- `copies`: Successful provider copies
- `failures`: Failed copy attempts
```typescript
import { Synapse } from '@filoz/synapse-sdk'

const synapse = await Synapse.create({ account, chain })

// Simple upload with 2 copies (default)
const data = new Uint8Array([1, 2, 3, 4])
const result = await synapse.storage.upload(data)

console.log('PieceCID:', result.pieceCid.toString())
console.log('Copies:', result.copies.length)
result.copies.forEach(copy => {
  console.log(`Provider ${copy.providerId}: ${copy.role}`)
})
```
```typescript
// With progress tracking
const result = await synapse.storage.upload(data, {
  callbacks: {
    onProgress: (bytes) => console.log(`Uploaded ${bytes} bytes`),
    onStored: (providerId, pieceCid) =>
      console.log(`Stored on provider ${providerId}`),
    onPiecesConfirmed: (dataSetId, providerId) =>
      console.log(`Confirmed on chain: dataset ${dataSetId}`)
  }
})
```
```typescript
// Streaming upload for large files
const fileStream = readableStream // ReadableStream<Uint8Array>
const result = await synapse.storage.upload(fileStream, {
  pieceCid: preCalculatedCid // Required for streams
})
```
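If the data is not already a stream, the standard Web Streams API (global in Node 18+ and browsers) can wrap in-memory chunks into the `ReadableStream<Uint8Array>` that `upload()` accepts. This sketch is independent of the SDK; `chunksToStream` and `collect` are helpers written for this example.

```typescript
// Wrap in-memory chunks in a ReadableStream<Uint8Array> (Web Streams API)
function chunksToStream(chunks: Uint8Array[]): ReadableStream<Uint8Array> {
  let i = 0
  return new ReadableStream<Uint8Array>({
    pull(controller) {
      if (i < chunks.length) {
        controller.enqueue(chunks[i++])
      } else {
        controller.close()
      }
    }
  })
}

// Read a stream back into one contiguous buffer (used here to verify round-tripping)
async function collect(stream: ReadableStream<Uint8Array>): Promise<Uint8Array> {
  const parts: Uint8Array[] = []
  const reader = stream.getReader()
  for (;;) {
    const { done, value } = await reader.read()
    if (done || value === undefined) break
    parts.push(value)
  }
  const out = new Uint8Array(parts.reduce((n, p) => n + p.length, 0))
  let offset = 0
  for (const p of parts) {
    out.set(p, offset)
    offset += p.length
  }
  return out
}

const stream = chunksToStream([new Uint8Array([1, 2]), new Uint8Array([3, 4])])
const bytes = await collect(stream)
console.log(bytes.length) // 4
```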
download()
Download data from storage with automatic provider resolution.
Parameters:

- `options` (`StorageManagerDownloadOptions`, required): Download options:
  - `pieceCid` (`string | PieceCID`, required): The piece CID to download
  - Specific context to download from
  - Specific provider address to use
  - Enable CDN retrieval (overrides the instance default)

Returns the downloaded and validated data bytes.
```typescript
// Auto-resolve from any available provider
const data = await synapse.storage.download({
  pieceCid: 'baga6ea4seaq...'
})
```

```typescript
// From a specific provider
const data = await synapse.storage.download({
  pieceCid: 'baga6ea4seaq...',
  providerAddress: '0x...'
})
```

```typescript
// From a specific context
const context = await synapse.storage.createContext()
const data = await synapse.storage.download({
  pieceCid: 'baga6ea4seaq...',
  context
})
```
Context Management
createContext()
Create or retrieve a single storage context with intelligent caching.
Parameters:

- Context creation options:
  - Specific provider ID to use
  - Specific data set ID to use
  - Metadata for data set matching

Returns a storage context for the selected provider.
```typescript
// Auto-select provider and data set
const context = await synapse.storage.createContext()
```

```typescript
// Specific provider
const context = await synapse.storage.createContext({
  providerId: 1n
})
```

```typescript
// Specific data set
const context = await synapse.storage.createContext({
  dataSetId: 42n
})
```
getDefaultContext()
Get or create a default storage context (first endorsed provider with a new or existing data set).
Returns a cached or newly created storage context using an endorsed provider.
```typescript
// Get the default context (cached after first call)
const context = await synapse.storage.getDefaultContext()
await context.upload(data)
```
createContexts()
Create multiple contexts for multi-provider redundancy.
Parameters:

- Multi-context creation options:
  - Number of contexts to create
  - Specific provider IDs (mutually exclusive with dataSetIds)
  - Specific data set IDs (mutually exclusive with providerIds)
  - Provider IDs to exclude from selection
  - Metadata for data set matching

Returns an array of storage contexts.
```typescript
// Create 3 contexts with auto-selection
const contexts = await synapse.storage.createContexts({ count: 3 })
```

```typescript
// Specific providers
const contexts = await synapse.storage.createContexts({
  providerIds: [1n, 2n, 3n]
})
```
Data Set Operations
findDataSets()
Query all data sets owned by the current account.
Parameters:

- Account address (defaults to the current signer)

Returns an array of data set information including:

- `dataSetId`: Data set ID
- `providerId`: Provider ID
- `activePieceCount`: Number of pieces
- `metadata`: Data set metadata
- `withCDN`: CDN status
```typescript
const dataSets = await synapse.storage.findDataSets()

dataSets.forEach(ds => {
  console.log(`Dataset ${ds.dataSetId}: ${ds.activePieceCount} pieces`)
  console.log(`Provider: ${ds.providerId}`)
  console.log(`CDN enabled: ${ds.withCDN}`)
})
```
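The per-data-set fields above are easy to aggregate client-side, for example to see how many active pieces each provider holds. The `DataSetInfo` interface below mirrors the documented fields; the sample values are made up for illustration.

```typescript
// Tally active pieces per provider from findDataSets()-shaped results
interface DataSetInfo {
  dataSetId: bigint
  providerId: bigint
  activePieceCount: number
  withCDN: boolean
}

// Sample data standing in for a real findDataSets() response
const dataSets: DataSetInfo[] = [
  { dataSetId: 1n, providerId: 10n, activePieceCount: 5, withCDN: false },
  { dataSetId: 2n, providerId: 10n, activePieceCount: 3, withCDN: true },
  { dataSetId: 3n, providerId: 20n, activePieceCount: 7, withCDN: false }
]

const piecesByProvider = new Map<bigint, number>()
for (const ds of dataSets) {
  piecesByProvider.set(
    ds.providerId,
    (piecesByProvider.get(ds.providerId) ?? 0) + ds.activePieceCount
  )
}

console.log(piecesByProvider.get(10n), piecesByProvider.get(20n)) // 8 7
```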
terminateDataSet()
Terminate a data set and remove all its pieces.
Parameters:

- ID of the data set to terminate

Returns the transaction hash of the termination.
```typescript
const txHash = await synapse.storage.terminateDataSet({
  dataSetId: 42n
})
```
getStorageInfo()
Get comprehensive storage service information including pricing, providers, and allowances.
Returns complete service information:

- `pricing`: Cost per TiB (per epoch/day/month)
- `providers`: List of approved providers
- `serviceParameters`: Epoch durations, size limits
- `allowances`: Current operator approvals and usage
```typescript
const info = await synapse.storage.getStorageInfo()

console.log('Price per TiB/month:', info.pricing.noCDN.perTiBPerMonth)
console.log('Approved providers:', info.providers.length)
console.log('Min upload size:', info.serviceParameters.minUploadSize)

if (info.allowances) {
  console.log('Service approved:', info.allowances.isApproved)
  console.log('Rate allowance:', info.allowances.rateAllowance)
}
```
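Scaling the per-TiB price to an arbitrary upload size is simple integer arithmetic, assuming cost is linear in size. The sketch below uses a made-up `perTiBPerMonth` value in the token's smallest unit (a bigint, as on-chain amounts usually are); for authoritative estimates use `preflightUpload()` below.

```typescript
// Estimate monthly cost for a given size from a per-TiB monthly price,
// rounding up so partial TiBs are not undercounted.
const TiB = 1n << 40n // 2^40 bytes

function estimateMonthlyCost(sizeBytes: bigint, perTiBPerMonth: bigint): bigint {
  return (sizeBytes * perTiBPerMonth + TiB - 1n) / TiB // ceiling division
}

const perTiBPerMonth = 1_000_000n // sample price, smallest token unit
const cost = estimateMonthlyCost(100n * 1024n * 1024n, perTiBPerMonth) // 100 MiB
console.log(cost) // 96n
```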
preflightUpload()
Run preflight checks for an upload without creating a context.
Parameters:

- Size of the data to upload, in bytes

Returns preflight check results:

- `estimatedCost`: Cost breakdown (per epoch/day/month)
- `allowanceCheck`: Whether the user has sufficient allowance
```typescript
const preflight = await synapse.storage.preflightUpload({
  size: 1024 * 1024 * 100, // 100 MiB
  withCDN: true
})

console.log('Cost per month:', preflight.estimatedCost.perMonth)
if (!preflight.allowanceCheck.sufficient) {
  console.log('Insufficient allowance:', preflight.allowanceCheck.message)
}
```
Upload Flow Details
The `upload()` method orchestrates a multi-step flow:

1. Store - upload to the primary provider
2. Pull - secondary providers fetch from the primary via SP-to-SP transfer
3. Commit - add pieces to on-chain data sets
```typescript
const result = await synapse.storage.upload(data, {
  count: 3, // 1 primary + 2 secondaries
  callbacks: {
    onStored: (providerId) =>
      console.log(`1. Stored on primary: ${providerId}`),
    onPullProgress: (providerId, cid, status) =>
      console.log(`2. Pull to ${providerId}: ${status}`),
    onPiecesAdded: (txHash, providerId) =>
      console.log(`3. Commit tx submitted: ${txHash}`),
    onPiecesConfirmed: (dataSetId, providerId) =>
      console.log(`4. Confirmed on dataset: ${dataSetId}`)
  }
})

// Check results
if (result.copies.length < 3) {
  console.warn('Only', result.copies.length, 'copies succeeded')
  result.failures.forEach(f => {
    console.error(`Provider ${f.providerId} (${f.role}): ${f.error}`)
  })
}
```
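How a store-then-pull flow yields the `copies` and `failures` arrays can be modeled in a few lines. This is an illustrative sketch with mocked providers, not the SDK's implementation; `uploadWithRedundancy` and the provider shape are invented for this example, and the commit step is omitted.

```typescript
// Illustrative model of the store -> pull flow with copy/failure accounting
type Role = 'primary' | 'secondary'
interface CopyResult { providerId: number; role: Role }
interface CopyFailure { providerId: number; role: Role; error: string }

async function uploadWithRedundancy(
  providers: { id: number; pull: () => Promise<void> }[]
): Promise<{ copies: CopyResult[]; failures: CopyFailure[] }> {
  const copies: CopyResult[] = []
  const failures: CopyFailure[] = []

  // 1. Store on the primary (first provider); assumed to succeed here
  const [primary, ...secondaries] = providers
  copies.push({ providerId: primary.id, role: 'primary' })

  // 2. Secondaries pull from the primary; a failed pull becomes a failure entry
  for (const sp of secondaries) {
    try {
      await sp.pull()
      copies.push({ providerId: sp.id, role: 'secondary' })
    } catch (err) {
      failures.push({ providerId: sp.id, role: 'secondary', error: String(err) })
    }
  }
  return { copies, failures }
}

const result = await uploadWithRedundancy([
  { id: 1, pull: async () => {} },
  { id: 2, pull: async () => {} },
  { id: 3, pull: async () => { throw new Error('pull timeout') } }
])
console.log(result.copies.length, result.failures.length) // 2 1
```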
Error Handling
```typescript
import { StoreError, CommitError } from '@filoz/synapse-sdk/errors'

try {
  const result = await synapse.storage.upload(data)
} catch (error) {
  if (StoreError.is(error)) {
    console.error('Primary store failed:', error.message)
    console.error('Provider:', error.providerId)
  } else if (CommitError.is(error)) {
    console.error('All commits failed:', error.message)
  }
}
```
See Also