Overview
A StorageContext represents a connection to a specific service provider and data set. It provides fine-grained control over the storage workflow with split operations: store(), pull(), and commit().
Contexts are created via StorageManager.createContext() or createContexts().
Properties
provider
The service provider information.
```typescript
const context = await synapse.storage.createContext()
console.log(context.provider.id)
console.log(context.provider.pdp.serviceURL)
```
dataSetId
The data set ID if one exists, otherwise undefined.
```typescript
if (context.dataSetId) {
  console.log('Using existing dataset:', context.dataSetId)
} else {
  console.log('Will create new dataset on commit')
}
```
dataSetMetadata
Metadata associated with this context's data set.

```typescript
console.log(context.dataSetMetadata)
// { category: 'videos', cdn: '' }
```
withCDN
Whether CDN services are enabled for this context.
```typescript
if (context.withCDN) {
  console.log('CDN enabled')
}
```
Split Operations Flow
The context provides three core operations that can be used independently:
```
store() → pull() → commit()
   │        │         │
   │        │         └── On-chain registration
   │        └── SP-to-SP transfer
   └── Upload to provider
```
store()
Upload data to the provider without on-chain commitment.
Parameters:

- data (Uint8Array | ReadableStream<Uint8Array>, required) - raw bytes or a readable stream to upload
- store options (optional):
  - a pre-calculated PieceCID, to skip CommP calculation
  - an abort signal, to cancel the store
  - onProgress ((bytesUploaded: number) => void) - progress callback

Returns a store result containing:

- pieceCid - Content identifier
- size - Data size in bytes
```typescript
const context = await synapse.storage.createContext()

// Store data on provider (not yet on-chain)
const { pieceCid, size } = await context.store(data, {
  onProgress: (bytes) => console.log(`Uploaded ${bytes} bytes`)
})
console.log('Stored:', pieceCid.toString())
```
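Because store() also accepts a ReadableStream<Uint8Array>, large payloads can be streamed instead of buffered in memory. The sketch below assumes a Node.js environment and an existing context; the fileToWebStream helper is hypothetical glue, not part of the SDK.

```typescript
import { createReadStream } from 'node:fs'
import { Readable } from 'node:stream'

// Convert a Node file stream into the web ReadableStream<Uint8Array>
// that store() accepts, so the file is never fully buffered in memory.
function fileToWebStream(path: string): ReadableStream<Uint8Array> {
  return Readable.toWeb(createReadStream(path)) as ReadableStream<Uint8Array>
}

// Usage (with a live context):
// const { pieceCid, size } = await context.store(fileToWebStream('./backup.tar'), {
//   onProgress: (bytes) => console.log(`Uploaded ${bytes} bytes`)
// })
```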
presignForCommit()
Pre-sign EIP-712 extraData for pieces to avoid redundant wallet prompts.
Parameters:

- pieces (Array<{ pieceCid, pieceMetadata? }>, required) - pieces to sign for, with optional metadata

Returns signed extraData to pass to pull() or commit().
```typescript
const extraData = await context.presignForCommit([
  { pieceCid, pieceMetadata: { tag: 'important' } }
])

// Use extraData in pull and commit to avoid re-signing
await context.pull({ pieces: [pieceCid], from: sourceUrl, extraData })
await context.commit({ pieces: [{ pieceCid }], extraData })
```
pull()
Request this provider to pull pieces from another provider via SP-to-SP transfer.
Parameters:

- pieces (required) - piece CIDs for the provider to fetch
- from (string | ((pieceCid: PieceCID) => string), required) - source URL, or a function returning the URL for each piece
- extraData (optional) - pre-signed extraData from presignForCommit()
- onProgress ((pieceCid, status) => void, optional) - pull progress callback

Returns a pull result with:

- status - Overall status ('complete' or 'failed')
- pieces - Per-piece status array
```typescript
const primary = contexts[0]
const secondary = contexts[1]

// Store on primary
const { pieceCid } = await primary.store(data)

// Pull to secondary from primary
const pullResult = await secondary.pull({
  pieces: [pieceCid],
  from: (cid) => primary.getPieceUrl(cid),
  onProgress: (cid, status) => console.log(`Pull: ${status}`)
})
if (pullResult.status === 'complete') {
  console.log('Pull successful')
}
```
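When the overall status is 'failed', the per-piece array tells you which transfers to retry. The overall status and pieces array are documented above, but the exact field names on each per-piece entry are an assumption in this sketch.

```typescript
// Assumed shape of each per-piece entry in a pull result.
type PiecePullStatus = { pieceCid: string; status: 'complete' | 'failed' }
type PullResultShape = { status: 'complete' | 'failed'; pieces: PiecePullStatus[] }

// Collect the CIDs of pieces that did not transfer, e.g. to retry them.
function failedPieceCids(result: PullResultShape): string[] {
  return result.pieces
    .filter((p) => p.status !== 'complete')
    .map((p) => p.pieceCid)
}

// Usage:
// if (pullResult.status === 'failed') {
//   console.warn('Retry needed for:', failedPieceCids(pullResult))
// }
```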
commit()
Commit pieces on-chain by calling AddPieces or CreateDataSetAndAddPieces.
Parameters:

- pieces (Array<{ pieceCid, pieceMetadata? }>, required) - pieces to commit, with optional metadata
- extraData (optional) - pre-signed extraData from presignForCommit()
- onSubmitted (optional) - called when the transaction is submitted

Returns a commit result with:

- txHash - Transaction hash
- pieceIds - Piece IDs assigned by the contract
- dataSetId - Data set ID (may be newly created)
- isNewDataSet - Whether a new data set was created
```typescript
// Store data first
const { pieceCid } = await context.store(data)

// Commit on-chain
const result = await context.commit({
  pieces: [{ pieceCid, pieceMetadata: { tag: 'doc' } }],
  onSubmitted: (txHash) => console.log('Tx submitted:', txHash)
})
console.log('Dataset ID:', result.dataSetId)
console.log('Piece ID:', result.pieceIds[0])
console.log('New dataset:', result.isNewDataSet)
```
Convenience Method
upload()
Combines store() and commit() into a single operation.
Parameters:

- data (Uint8Array | ReadableStream<Uint8Array>, required) - data to upload
- upload options (optional) - includes StoreOptions plus callbacks

Returns an upload result with pieceCid, size, a copies array, and failures.
```typescript
const context = await synapse.storage.createContext()

const result = await context.upload(data, {
  pieceMetadata: { category: 'images' },
  onProgress: (bytes) => console.log(`Uploaded ${bytes} bytes`),
  onStored: (providerId, pieceCid) => console.log('Stored'),
  onPiecesConfirmed: (dataSetId, providerId, pieces) =>
    console.log('Confirmed on-chain')
})
console.log('Copy:', result.copies[0])
```
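Since the result carries both a copies array and failures, callers should check for partial failure rather than assume every copy landed. The copies and failures arrays are documented above; the fields on each entry (providerId, error) are assumptions for illustration.

```typescript
// Assumed shape of the documented copies/failures arrays.
type UploadOutcome = {
  copies: Array<{ providerId: number }>
  failures: Array<{ providerId: number; error: string }>
}

// Throw if no provider stored a copy; otherwise just warn about partial failures.
function assertUploadSucceeded(result: UploadOutcome): void {
  if (result.copies.length === 0) {
    const reasons = result.failures.map((f) => f.error).join('; ')
    throw new Error(`Upload failed on all providers: ${reasons}`)
  }
  for (const f of result.failures) {
    console.warn(`Provider ${f.providerId} failed: ${f.error}`)
  }
}

// Usage:
// assertUploadSucceeded(result)
```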
Download & Retrieval
download()
Download data from this specific provider.
Parameters:

- pieceCid (string | PieceCID, required) - the piece CID to download

Returns the downloaded and validated data.

```typescript
const data = await context.download({
  pieceCid: 'baga6ea4seaq...'
})
```
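download() resolves to validated raw bytes, so decoding is up to the caller. This sketch decodes UTF-8 JSON; the stand-in byte array takes the place of a real `await context.download({ pieceCid })` call, and decoding as text only makes sense if text was stored.

```typescript
// Stand-in for bytes returned by download(); here, the UTF-8 encoding of '{"ok":true}'.
const bytes = new Uint8Array([123, 34, 111, 107, 34, 58, 116, 114, 117, 101, 125])

// Decode and parse the payload.
const text = new TextDecoder().decode(bytes)
const parsed = JSON.parse(text)
console.log(parsed.ok) // true
```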
getPieceUrl()
Get the retrieval URL for a piece on this provider.
Returns the full retrieval URL for the piece.

```typescript
const url = context.getPieceUrl(pieceCid)
console.log('Retrieve from:', url)
```
Piece Management
hasPiece()
Check if a piece exists on this provider.
Parameters:

- pieceCid (string | PieceCID, required) - the piece CID to check

Returns true if the piece exists on this provider.

```typescript
const exists = await context.hasPiece({ pieceCid })
if (exists) {
  console.log('Piece is available')
}
```
pieceStatus()
Get comprehensive status including proof timing for a piece.
Parameters:

- pieceCid (string | PieceCID, required) - the piece CID

Returns status information:

- exists - Whether the piece exists on the provider
- pieceId - Piece ID, if the piece is in the data set
- dataSetLastProven - When the data set was last proven
- dataSetNextProofDue - When the next proof is due
- inChallengeWindow - Whether the data set is in its challenge window
- isProofOverdue - Whether the proof is overdue
- retrievalUrl - Piece retrieval URL
```typescript
const status = await context.pieceStatus({ pieceCid })
console.log('Exists:', status.exists)
if (status.dataSetLastProven) {
  console.log('Last proven:', status.dataSetLastProven)
}
if (status.dataSetNextProofDue) {
  console.log('Next proof due:', status.dataSetNextProofDue)
}
if (status.isProofOverdue) {
  console.warn('Proof is overdue!')
}
```
getPieces()
Get all active pieces in this data set as an async generator.
Parameters:

- batchSize (optional) - batch size for pagination

Returns AsyncGenerator<PieceRecord>, yielding piece records with:

- pieceCid - Piece CID
- pieceId - Piece ID
```typescript
for await (const piece of context.getPieces()) {
  console.log(`Piece ${piece.pieceId}: ${piece.pieceCid}`)
}

// With a custom batch size
for await (const piece of context.getPieces({ batchSize: 50n })) {
  // Process piece
}
```
deletePiece()
Schedule a piece for removal from the data set.
Parameters:

- piece (string | PieceCID | bigint, required) - piece CID or piece ID to delete

Returns the transaction hash of the delete operation.
```typescript
// By piece CID
const txByCid = await context.deletePiece({ piece: pieceCid })

// By piece ID
const txById = await context.deletePiece({ piece: 42n })
```
getScheduledRemovals()
Get all pieces scheduled for removal from this data set.
Returns an array of piece IDs scheduled for removal.

```typescript
const scheduled = await context.getScheduledRemovals()
console.log('Pieces to be removed:', scheduled)
```
Data Set Operations
getProviderInfo()
Get information about the service provider for this context.
Returns provider information including pricing and service URLs.

```typescript
const info = await context.getProviderInfo()
console.log('Provider:', info.serviceProvider)
console.log('PDP URL:', info.pdp.serviceURL)
```
preflightUpload()
Run preflight checks for an upload to this context.
Parameters:

- size (required) - size of the data to upload, in bytes

Returns preflight information with cost estimates and allowance checks.

```typescript
const preflight = await context.preflightUpload({
  size: 1024 * 1024 * 50 // 50 MiB
})
console.log('Estimated cost/month:', preflight.estimatedCost.perMonth)
```
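A common use of the preflight result is to gate the upload on a budget before spending gas. The estimatedCost.perMonth field is documented above; representing it as a bigint in token base units, and the monthlyBudget comparison, are assumptions for illustration.

```typescript
// Return true when the estimated monthly cost fits a caller-defined budget
// (both values in the same base units, assumed to be bigint).
function withinBudget(perMonth: bigint, monthlyBudget: bigint): boolean {
  return perMonth <= monthlyBudget
}

// Usage (with a live context):
// const preflight = await context.preflightUpload({ size: 1024 * 1024 * 50 })
// if (!withinBudget(preflight.estimatedCost.perMonth, monthlyBudget)) {
//   throw new Error('Upload would exceed the monthly storage budget')
// }
```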
terminate()
Terminate this data set and remove all pieces.
Returns the transaction hash of the termination.

```typescript
const txHash = await context.terminate()
console.log('Dataset terminated:', txHash)
```
Advanced Multi-Copy Pattern
```typescript
import { Synapse } from '@filoz/synapse-sdk'

const synapse = await Synapse.create({ account, chain })

// Create contexts for 3 providers
const [primary, secondary1, secondary2] =
  await synapse.storage.createContexts({ count: 3 })

// Store on primary
const { pieceCid } = await primary.store(data)

// Pre-sign for both secondaries
const extraData1 = await secondary1.presignForCommit([{ pieceCid }])
const extraData2 = await secondary2.presignForCommit([{ pieceCid }])

// Pull to secondaries in parallel
await Promise.all([
  secondary1.pull({
    pieces: [pieceCid],
    from: (cid) => primary.getPieceUrl(cid),
    extraData: extraData1
  }),
  secondary2.pull({
    pieces: [pieceCid],
    from: (cid) => primary.getPieceUrl(cid),
    extraData: extraData2
  })
])

// Commit all in parallel
const commits = await Promise.all([
  primary.commit({ pieces: [{ pieceCid }] }),
  secondary1.commit({ pieces: [{ pieceCid }], extraData: extraData1 }),
  secondary2.commit({ pieces: [{ pieceCid }], extraData: extraData2 })
])

console.log('All copies committed')
```
See Also