Upload Pipeline
Simple Upload
Upload data with a single call. The SDK selects providers and handles multi-copy replication automatically:

```ts
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x..."), source: "my-app" })

const data = new Uint8Array([1, 2, 3, 4, 5])
const { pieceCid, size, complete, copies, failedAttempts } = await synapse.storage.upload(data)

console.log("PieceCID:", pieceCid.toString())
console.log("Size:", size, "bytes")
console.log("Stored on", copies.length, "providers")
for (const copy of copies) {
  console.log(`  Provider ${copy.providerId}: role=${copy.role}, dataSet=${copy.dataSetId}`)
}
if (!complete) {
  console.warn("Some copies failed:", failedAttempts)
}
```

The result contains:

- `complete` - `true` when all requested copies were stored and committed on-chain. This is the primary field to check.
- `requestedCopies` - the number of copies that were requested (default: 2)
- `pieceCid` - the content address of your data, used for downloads
- `size` - size of the uploaded data in bytes
- `copies` - array of successful copies, each with `providerId`, `dataSetId`, `pieceId`, `role` (`'primary'` or `'secondary'`), `retrievalUrl`, and `isNewDataSet`
- `failedAttempts` - providers that were tried but did not produce a copy. The SDK retries failed secondaries with alternate providers, so a non-empty array often just means a provider was swapped out. These entries are diagnostic; check `complete` for the actual outcome.
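As a quick illustration of consuming these fields, the checks above can be wrapped in a small helper that turns a result into a one-line summary. The interfaces below are a minimal sketch mirroring the fields listed here, not the SDK's exported types:

```typescript
// Minimal mirror of the result fields described above (a sketch, not the SDK's own types)
interface CopyInfo { providerId: bigint; role: "primary" | "secondary" }
interface FailedInfo { providerId: bigint; role: string; error: string }
interface UploadSummaryInput {
  complete: boolean
  requestedCopies: number
  copies: CopyInfo[]
  failedAttempts: FailedInfo[]
}

// Summarize an upload result into a single human-readable line
function summarizeUpload(r: UploadSummaryInput): string {
  if (r.complete) return `ok: ${r.copies.length}/${r.requestedCopies} copies`
  const failed = r.failedAttempts.map((a) => `provider ${a.providerId}`).join(", ")
  return `partial: ${r.copies.length}/${r.requestedCopies} copies (failed: ${failed})`
}

const line = summarizeUpload({
  complete: false,
  requestedCopies: 2,
  copies: [{ providerId: 1n, role: "primary" }],
  failedAttempts: [{ providerId: 7n, role: "secondary", error: "timeout" }],
})
// line === "partial: 1/2 copies (failed: provider 7)"
```

Because `complete` is the authoritative signal, the helper only inspects `failedAttempts` for diagnostics in the partial case.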
Upload with Metadata
Attach metadata to organize uploads. The SDK reuses existing data sets when metadata matches, avoiding duplicate payment rails:

```ts
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x..."), source: "my-app" })

const data = new TextEncoder().encode("Hello, Filecoin!")

const result = await synapse.storage.upload(data, {
  metadata: {
    Application: "My DApp",
    Version: "1.0.0",
    Category: "Documents",
  },
  pieceMetadata: {
    filename: "hello.txt",
    contentType: "text/plain",
  },
})

console.log("Uploaded:", result.pieceCid.toString())
```

Subsequent uploads with the same metadata reuse the same data sets and payment rails.
Controlling Copy Count
Adjust the number of copies for your durability requirements:

```ts
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x..."), source: "my-app" })

const data = new Uint8Array(256)

// Store 3 copies for higher redundancy
const result3 = await synapse.storage.upload(data, { copies: 3 })
console.log("3 copies:", result3.copies.length)

// Store a single copy when redundancy isn't needed
const result1 = await synapse.storage.upload(data, { copies: 1 })
console.log("1 copy:", result1.copies.length)
```

The default is 2 copies. The first copy is stored on an endorsed provider (high trust, curated), and secondary copies are pulled via SP-to-SP transfer from approved providers.
Upload with Callbacks
Track the lifecycle of a multi-copy upload with callbacks:

```ts
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x..."), source: "my-app" })

const data = new Uint8Array(1024) // 1KB of data

const result = await synapse.storage.upload(data, {
  callbacks: {
    onStored: (providerId, pieceCid) => {
      console.log(`Data stored on provider ${providerId}`)
    },
    onCopyComplete: (providerId, pieceCid) => {
      console.log(`Secondary copy complete on provider ${providerId}`)
    },
    onCopyFailed: (providerId, pieceCid, error) => {
      console.warn(`Copy failed on provider ${providerId}:`, error.message)
    },
    onPullProgress: (providerId, pieceCid, status) => {
      console.log(`Pull to provider ${providerId}: ${status}`)
    },
    onPiecesAdded: (txHash, providerId, pieces) => {
      console.log(`On-chain commit submitted: ${txHash}`)
    },
    onPiecesConfirmed: (dataSetId, providerId, pieces) => {
      console.log(`Confirmed on-chain: dataSet=${dataSetId}, provider=${providerId}`)
    },
    onProgress: (bytesUploaded) => {
      console.log(`Uploaded ${bytesUploaded} bytes`)
    },
  },
})
```

Callback lifecycle:

- `onProgress` - fires during upload to the primary provider
- `onStored` - primary upload complete; piece parked on the SP
- `onPullProgress` - SP-to-SP transfer status for secondaries
- `onCopyComplete` / `onCopyFailed` - secondary pull result
- `onPiecesAdded` - commit transaction submitted
- `onPiecesConfirmed` - commit confirmed on-chain
Understanding the Result
`upload()` is designed around partial success over atomicity: it commits whatever succeeded rather than throwing away successful work. This means the return value is the primary interface for understanding what happened.
When upload() throws
`upload()` only throws in these cases:
| Error | What happened | What to do |
|---|---|---|
| `StoreError` | Primary upload failed | Retry the upload |
| `CommitError` | Data is stored on providers but all on-chain commits failed | Use split operations to retry `commit()` without re-uploading |
| Selection error | No endorsed provider available or reachable | Check provider health / network |
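If you want one recovery path per failure mode, the table can be encoded as a small dispatcher. This sketch assumes the thrown errors expose `StoreError`/`CommitError` via `error.name` — an assumption about the SDK's error shape; if the SDK exports the error classes, prefer `instanceof` checks:

```typescript
// Sketch: map a thrown upload error to a recovery action, keyed on error.name.
// ASSUMPTION: StoreError/CommitError set `name` accordingly. If the SDK
// exports these classes, use `err instanceof StoreError` instead.
type Recovery = "retry-upload" | "retry-commit-only" | "check-providers"

function recoveryFor(err: Error): Recovery {
  switch (err.name) {
    case "StoreError":
      return "retry-upload" // primary upload failed; nothing is on-chain yet
    case "CommitError":
      return "retry-commit-only" // bytes are parked on SPs; use split operations
    default:
      return "check-providers" // e.g. no endorsed provider reachable
  }
}

const e = new Error("all commits failed")
e.name = "CommitError"
// recoveryFor(e) === "retry-commit-only"
```

The `CommitError` branch is the one worth handling specially: the data is already parked on providers, so re-running only the commit phase avoids re-uploading the bytes.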
When upload() returns
If `upload()` returns (no throw), at least one copy is committed on-chain, but the result may contain fewer copies than requested. Every copy in `copies[]` represents a committed on-chain data set that the user is now paying for.
```ts
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x..."), source: "my-app" })

const data = new Uint8Array(256)
const result = await synapse.storage.upload(data, { copies: 2 })

// Check overall success: complete === true means all requested copies succeeded
if (!result.complete) {
  console.warn(`Only ${result.copies.length}/${result.requestedCopies} copies succeeded`)
  for (const attempt of result.failedAttempts) {
    console.warn(`  Provider ${attempt.providerId} (${attempt.role}): ${attempt.error}`)
  }
}

// Every copy is committed and being paid for
for (const copy of result.copies) {
  console.log(`Provider ${copy.providerId}, dataset ${copy.dataSetId}, piece ${copy.pieceId}`)
}
```

Auto-retry behavior
For auto-selected providers (no explicit `providerIds` or `dataSetIds`), the SDK automatically retries failed secondaries with alternate providers up to 5 times. If you explicitly specify providers, the SDK respects your choice and does not retry.
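Conceptually, the retry behavior is a simple selection loop. The sketch below is an illustration of the idea, not the SDK's internals: walk a pool of candidate providers, swapping in the next alternate after each failure, until a copy lands or the attempt budget (5 in the SDK) is spent:

```typescript
// Sketch of retry-with-alternates: try candidates in order until one succeeds
// or maxAttempts is exhausted. `attempt` stands in for a secondary pull + commit.
async function placeWithAlternates(
  candidates: bigint[],
  attempt: (providerId: bigint) => Promise<boolean>,
  maxAttempts = 5,
): Promise<bigint | undefined> {
  for (const providerId of candidates.slice(0, maxAttempts)) {
    if (await attempt(providerId)) return providerId // copy placed on this provider
  }
  return undefined // budget spent; the result would record failed attempts instead
}

// Example: providers 10 and 11 fail, 12 succeeds
const chosen = await placeWithAlternates(
  [10n, 11n, 12n, 13n],
  async (id) => id === 12n,
)
// chosen === 12n
```

This is why a non-empty `failedAttempts` array with `complete: true` is normal: the failed providers were simply swapped out mid-upload.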
Split Operations
| | `upload()` | Split operations |
|---|---|---|
| Control | Automatic | Manual per-phase |
| Error recovery | Re-upload on commit failure | Retry commit without re-upload |
| Batch files | One call per file | Store many, commit in batch |
| Wallet prompts | Managed internally | Control via `presignForCommit()` |
| Best for | Most use cases | Production pipelines, custom UX |
The Pipeline
Every upload goes through three phases:

```
store --> pull --> commit
  |        |         |
  |        |         +-- On-chain: create dataset, add piece, start payments
  |        +-- SP-to-SP: secondary provider fetches from primary
  +-- Upload: bytes sent to one provider (no on-chain state yet)
```

- store: Upload bytes to a single SP. Returns `{ pieceCid, size }`. The piece is "parked" on the SP but not yet on-chain, and is subject to garbage collection if not committed.
- pull: SP-to-SP transfer. The destination SP fetches the piece from a source SP. No client bandwidth is used.
- commit: Submit an on-chain transaction to add the piece to a data set. Creates the data set and payment rail if needed.
Store Phase
Upload data to a provider without committing on-chain:

```ts
const contexts = await synapse.storage.createContexts({
  copies: 2,
})
const [primary, secondary] = contexts

const { pieceCid, size } = await primary.store(data, {
  pieceCid: preCalculatedCid, // skip expensive PieceCID (hash digest) calculation (optional)
  signal: abortController.signal, // cancellation (optional)
  onProgress: (bytes) => { // progress callback (optional)
    console.log(`Uploaded ${bytes} bytes`)
  },
})

console.log(`Stored: ${pieceCid}, ${size} bytes`)
```

`store()` accepts `Uint8Array` or `ReadableStream<Uint8Array>`. Use streaming for large files to minimize memory.

After store completes, the piece is parked on the SP and can be:

- Retrieved via the context's `getPieceUrl(pieceCid)`
- Pulled to other providers via `pull()`
- Committed on-chain via `commit()`
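When data is already in memory but you still want to exercise the streaming path, a buffer can be wrapped in a `ReadableStream<Uint8Array>`. This is an illustrative helper (the chunk size and wrapper are not part of the SDK); for real files, stream directly from disk or network instead of buffering:

```typescript
// Split a buffer into fixed-size chunks (illustrative; real uploads should
// stream from the source rather than materialize the whole buffer).
function chunk(data: Uint8Array, chunkSize: number): Uint8Array[] {
  const chunks: Uint8Array[] = []
  for (let i = 0; i < data.length; i += chunkSize) {
    chunks.push(data.subarray(i, i + chunkSize))
  }
  return chunks
}

// Expose the chunks as a ReadableStream<Uint8Array>, the streaming input
// type that store() accepts alongside plain Uint8Array.
function toStream(data: Uint8Array, chunkSize = 64 * 1024): ReadableStream<Uint8Array> {
  const chunks = chunk(data, chunkSize)
  return new ReadableStream<Uint8Array>({
    start(controller) {
      for (const c of chunks) controller.enqueue(c)
      controller.close()
    },
  })
}

// e.g. await primary.store(toStream(bigBuffer))
```

In Node, an on-disk file can be streamed without this helper, for example via `Readable.toWeb(createReadStream(path))`.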
Pull Phase (SP-to-SP Transfer)
Request a secondary provider to fetch pieces from the primary:

```ts
// Pre-sign to avoid double wallet prompts during pull + commit
const extraData = await secondary.presignForCommit([{ pieceCid }])

const pullResult = await secondary.pull({
  pieces: [pieceCid],
  from: (cid) => primary.getPieceUrl(cid), // source URL builder (or URL string)
  extraData, // pre-signed auth (optional, reused for commit)
  signal: abortController.signal, // cancellation (optional)
  onProgress: (cid, status) => { // status callback (optional)
    console.log(`${cid}: ${status}`)
  },
})

if (pullResult.status !== "complete") {
  for (const piece of pullResult.pieces) {
    if (piece.status === "failed") {
      console.error(`Failed to pull ${piece.pieceCid}`)
    }
  }
}
```

The `from` parameter accepts either a URL string (base service URL) or a function that returns a piece URL for a given PieceCID.

Pre-signing: `presignForCommit()` creates an EIP-712 signature that can be reused for both `pull()` and `commit()`. This avoids prompting the wallet twice. Pass the same `extraData` to both calls.
Commit Phase
Add pieces to an on-chain data set. This creates the data set and payment rail if one doesn't exist:

```ts
// Commit on both providers
const [primaryCommit, secondaryCommit] = await Promise.allSettled([
  primary.commit({
    pieces: [{ pieceCid, pieceMetadata: { filename: "doc.pdf" } }],
    onSubmitted: (txHash) => {
      console.log(`Transaction submitted: ${txHash}`)
    },
  }),
  secondary.commit({
    pieces: [{ pieceCid, pieceMetadata: { filename: "doc.pdf" } }],
    extraData, // pre-signed auth from presignForCommit() (optional)
    onSubmitted: (txHash) => {
      console.log(`Transaction submitted: ${txHash}`)
    },
  }),
])

if (primaryCommit.status === "fulfilled") {
  console.log(`Primary: dataSet=${primaryCommit.value.dataSetId}`)
}
if (secondaryCommit.status === "fulfilled") {
  console.log(`Secondary: dataSet=${secondaryCommit.value.dataSetId}`)
}
```

The result:

- `txHash` - transaction hash
- `pieceIds` - assigned piece IDs (one per input piece)
- `dataSetId` - data set ID (may be newly created)
- `isNewDataSet` - whether a new data set was created
Multi-File Batch Example
Upload multiple files to 2 providers with full error handling:

```ts
import { Synapse, type PieceCID } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x.."), source: "my-app" })

const files = [
  new TextEncoder().encode("File 1 content..."),
  new TextEncoder().encode("File 2 content..."),
  new TextEncoder().encode("File 3 content..."),
]

// Create contexts for 2 providers
const [primary, secondary] = await synapse.storage.createContexts({
  copies: 2,
  metadata: { source: "batch-upload" },
})

// Store all files on primary (note: these could be done in parallel w/ Promise.all)
const stored: { pieceCid: PieceCID; size: number }[] = []
for (const file of files) {
  const result = await primary.store(file)
  stored.push(result)
  console.log(`Stored ${result.pieceCid}`)
}

// Pre-sign for all pieces on secondary
const pieceCids = stored.map((s) => s.pieceCid)
const extraData = await secondary.presignForCommit(
  pieceCids.map((cid) => ({ pieceCid: cid })),
)

// Pull all pieces to secondary
const pullResult = await secondary.pull({
  pieces: pieceCids,
  from: (cid) => primary.getPieceUrl(cid),
  extraData,
})

// Commit on both providers
const [primaryCommit, secondaryCommit] = await Promise.allSettled([
  primary.commit({ pieces: pieceCids.map((cid) => ({ pieceCid: cid })) }),
```
pullResult.PullResult.status: "complete" | "failed"
status === "complete" ? const secondary: StorageContext
secondary.StorageContext.commit(options: CommitOptions): Promise<CommitResult>
commit({ CommitOptions.pieces: { pieceCid: PieceCID; pieceMetadata?: MetadataObject;}[]
pieces: const pieceCids: PieceLink[]
pieceCids.Array<PieceLink>.map<{ pieceCid: PieceLink;}>(callbackfn: (value: PieceLink, index: number, array: PieceLink[]) => { pieceCid: PieceLink;}, thisArg?: any): { pieceCid: PieceLink;}[]
Calls a defined callback function on each element of an array, and returns an array that contains the results.
map(cid: PieceLink
cid => ({ pieceCid: PieceLink
pieceCid: cid: PieceLink
cid })), CommitOptions.extraData?: `0x${string}` | undefined
extraData }) : var Promise: PromiseConstructor
Represents the completion of an asynchronous operation
Promise.PromiseConstructor.reject<never>(reason?: any): Promise<never>
Creates a new rejected promise for the provided reason.
reject(new var Error: ErrorConstructornew (message?: string, options?: ErrorOptions) => Error (+1 overload)
Error("Pull failed, skipping secondary commit")), // not advised!])
if (const primaryCommit: PromiseSettledResult<CommitResult>
primaryCommit.status: "rejected" | "fulfilled"
status === "fulfilled") { var console: Console
console.Console.log(...data: any[]): void
The console.log() static method outputs a message to the console.
log(`Primary: dataSet=${const primaryCommit: PromiseFulfilledResult<CommitResult>
primaryCommit.PromiseFulfilledResult<CommitResult>.value: CommitResult
value.CommitResult.dataSetId: bigint
dataSetId}`)}if (const secondaryCommit: PromiseSettledResult<CommitResult>
secondaryCommit.status: "rejected" | "fulfilled"
status === "fulfilled") { var console: Console
console.Console.log(...data: any[]): void
The console.log() static method outputs a message to the console.
log(`Secondary: dataSet=${const secondaryCommit: PromiseFulfilledResult<CommitResult>
secondaryCommit.PromiseFulfilledResult<CommitResult>.value: CommitResult
value.CommitResult.dataSetId: bigint
dataSetId}`)}Error Handling
Each phase’s errors are independent. Failures don’t cascade, and you can retry at any level:
| Phase | Failure | Data state | Recovery |
|---|---|---|---|
| `store` | Upload/network error | No data on SP | Retry `store()` with the same or a different context |
| `pull` | SP-to-SP transfer failed | Data on primary only | Retry `pull()`, try a different secondary, or skip |
| `commit` | On-chain transaction failed | Data on SP but not on-chain | Retry `commit()` (no re-upload needed) |
The key advantage of the split operations: if `commit()` fails, the data is already stored on the SP, so you can retry `commit()` without re-uploading. With the high-level `upload()`, a `CommitError` would require re-uploading the data.
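Because the data already lives on the SP after `store()`, the commit phase is safe to retry. A minimal sketch of a generic retry-with-backoff wrapper — the `retryWithBackoff` helper and its parameters are illustrative, not part of the SDK:

```typescript
// Retries an async operation with exponential backoff. Suitable for the
// commit phase, where the failed step is an on-chain transaction and the
// data itself does not need to be re-uploaded.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1_000,
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < attempts - 1) {
        // Wait base, 2x base, 4x base, ... before the next try
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt))
      }
    }
  }
  throw lastError
}
```

Usage against the example above might look like `await retryWithBackoff(() => primary.commit({ pieces: pieceCids.map(cid => ({ pieceCid: cid })) }))`. The same wrapper works for `store()` and `pull()`, though for `pull()` you may prefer switching to a different secondary over retrying the same one.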
Next Steps
- Storage Operations - Data set management, retrieval, downloads, and lifecycle operations.
- Storage Costs - Calculate your monthly costs and understand funding requirements.
- Synapse Core - Use the core library directly for maximum control over provider selection, uploads, and SP-to-SP transfers.