Overview
For large files, streaming avoids loading the entire file into memory at once. The FTPClient provides streaming methods built on the Web Streams API.
Why use streaming?
Streaming is essential when:
Working with large files that exceed memory limits
Processing data incrementally (e.g., parsing as you download)
Piping data between sources and destinations
Operating in resource-constrained environments like Cloudflare Workers
Cloudflare Workers have memory limits. Streaming allows you to handle files larger than available memory by processing them in chunks.
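The pattern can be sketched with plain Web Streams, independent of FTP. In the sketch below (countBytes and fakeFile are illustrative names, not part of workerd-ftp), the reader loop holds only the current chunk in memory at any time, no matter how large the stream is:

```typescript
// Sketch using plain Web Streams (globals in Workers and Node 18+), no FTP:
// the reader loop holds only the current chunk in memory at any time.
async function countBytes(
  readable: ReadableStream<Uint8Array>,
): Promise<number> {
  const reader = readable.getReader();
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength; // only this chunk is resident, never the whole file
  }
  return total;
}

// Simulate a "large file" as three 64 KB chunks.
const fakeFile = new ReadableStream<Uint8Array>({
  start(controller) {
    for (let i = 0; i < 3; i++) controller.enqueue(new Uint8Array(64 * 1024));
    controller.close();
  },
});
```

The same loop works unchanged whether the stream carries three chunks or three million.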
Streaming downloads
Use downloadReadable() to get a ReadableStream for downloading files:
const readable = await client.downloadReadable('large-file.zip');

// Process the stream
const reader = readable.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Process chunk (value is Uint8Array)
  console.log(`Received ${value.byteLength} bytes`);
}

// IMPORTANT: Finalize the stream
await client.finalizeStream();
Implementation details
The downloadReadable() method is implemented in src/classes/ftp-client.ts:231:
public async downloadReadable(
  fileName: string,
): Promise<ReadableStream<Uint8Array>> {
  await this.lock.lock();
  if (this.conn === undefined) {
    this.lock.unlock();
    throw FTPClient.notInit();
  }

  await this.initializeDataConnection();

  const res = await this.command(Commands.Retrieve, fileName);
  if (
    res.code !== StatusCodes.StartTransferConnection &&
    res.code !== StatusCodes.StartingTransfer
  ) {
    this.assertStatus(StatusCodes.StartingTransfer, res, this.dataConn);
  }

  const conn = await this.finalizeDataConnection();
  return conn.readable;
}
Always call finalizeStream() after consuming a streaming download or upload. This releases locks and closes connections properly.
Streaming uploads
Use uploadWritable() to get a WritableStream for uploading files:
const writable = await client.uploadWritable('output.dat', 1024000);
const writer = writable.getWriter();

// Write chunks
await writer.write(new Uint8Array([1, 2, 3]));
await writer.write(new Uint8Array([4, 5, 6]));

// Close the writer
await writer.close();

// IMPORTANT: Finalize the stream
await client.finalizeStream();
Allocation parameter
Some FTP servers require pre-allocation of file space. Pass the expected file size as the second parameter:
const expectedSize = 1024 * 1024 * 10; // 10 MB
const writable = await client.uploadWritable('large.bin', expectedSize);
The uploadWritable() method signature from src/classes/ftp-client.ts:278:
public async uploadWritable(
  fileName: string,
  allocate?: number,
): Promise<WritableStream<Uint8Array>>
If your server doesn’t require allocation, you can omit the second parameter. The client gracefully handles servers that reply to the allocation command with status code 202 (command not implemented, superfluous at this site).
Finalizing streams
The finalizeStream() method must be called after streaming operations:
public async finalizeStream(): Promise<void> {
  await this.dataConn?.close();
  this.dataConn?.writable.close();
  this.dataConn = undefined;

  const res = await this.getStatus();
  this.assertStatus(StatusCodes.DataClose, res);

  this.lock.unlock();
}
This method:
Closes the data connection
Waits for server confirmation (status code 226)
Releases the internal lock
Cleans up resources
Forgetting to call finalizeStream() will:
Leave connections open
Keep internal locks held
Prevent subsequent operations
Waste server resources
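One way to guarantee cleanup is a small wrapper. The withStream helper below is hypothetical (not part of workerd-ftp); it works with any client exposing finalizeStream() and calls it even when the callback throws:

```typescript
// Hypothetical helper, not part of workerd-ftp's API: run a streaming
// operation and guarantee finalizeStream() runs, even if the callback throws.
interface Finalizable {
  finalizeStream(): Promise<void>;
}

async function withStream<C extends Finalizable, T>(
  client: C,
  fn: (client: C) => Promise<T>,
): Promise<T> {
  try {
    return await fn(client);
  } finally {
    // Runs on both success and failure, releasing the lock and data connection.
    await client.finalizeStream();
  }
}
```

A caller would then write something like await withStream(client, async (c) => { const readable = await c.downloadReadable('file.dat'); /* consume */ }); and cleanup happens regardless of errors.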
Example: Stream download to response
Stream a file directly to an HTTP response in Cloudflare Workers:
import { FTPClient } from 'workerd-ftp';

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const filename = url.searchParams.get('file');
    if (!filename) {
      return new Response('Missing file parameter', { status: 400 });
    }

    const client = new FTPClient(env.FTP_HOST, {
      user: env.FTP_USER,
      pass: env.FTP_PASS
    });
    await client.connect();

    // Get readable stream from FTP
    const readable = await client.downloadReadable(filename);

    // Create a TransformStream to finalize after streaming
    const { readable: output, writable } = new TransformStream();

    // Pipe and finalize
    readable.pipeTo(writable).then(() => {
      client.finalizeStream().then(() => client.close());
    });

    // Return response immediately with stream
    return new Response(output, {
      headers: {
        'Content-Type': 'application/octet-stream',
        'Content-Disposition': `attachment; filename="${filename}"`
      }
    });
  }
};
Example: Stream upload from request
Stream an uploaded file directly to FTP:
import { FTPClient } from 'workerd-ftp';

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== 'POST') {
      return new Response('Method not allowed', { status: 405 });
    }

    const filename = request.headers.get('X-Filename') || 'upload.bin';
    const contentLength = request.headers.get('Content-Length');

    const client = new FTPClient(env.FTP_HOST, {
      user: env.FTP_USER,
      pass: env.FTP_PASS
    });
    await client.connect();

    // Get writable stream from FTP
    const writable = await client.uploadWritable(
      filename,
      contentLength ? parseInt(contentLength, 10) : undefined
    );

    // Pipe request body to FTP
    await request.body?.pipeTo(writable);

    // Finalize and close
    await client.finalizeStream();
    await client.close();

    return new Response('Upload complete', { status: 200 });
  }
};
Example: Process while downloading
Process file contents incrementally during download:
const readable = await client.downloadReadable('data.csv');
const reader = readable.getReader();
const decoder = new TextDecoder();

let buffer = '';
let lineCount = 0;

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Decode chunk and add to buffer
  buffer += decoder.decode(value, { stream: true });

  // Process complete lines
  let newlineIndex;
  while ((newlineIndex = buffer.indexOf('\n')) !== -1) {
    const line = buffer.slice(0, newlineIndex);
    buffer = buffer.slice(newlineIndex + 1);

    // Process line
    lineCount++;
    console.log(`Line ${lineCount}: ${line}`);
  }
}

// Process remaining buffer
if (buffer.length > 0) {
  lineCount++;
  console.log(`Line ${lineCount}: ${buffer}`);
}

await client.finalizeStream();
console.log(`Processed ${lineCount} lines`);
Example: Upload with progress tracking
const fileSize = 1024 * 1024 * 50; // 50 MB
const chunkSize = 1024 * 64; // 64 KB chunks

const writable = await client.uploadWritable('large.bin', fileSize);
const writer = writable.getWriter();

let uploaded = 0;
for (let i = 0; i < fileSize; i += chunkSize) {
  const chunk = new Uint8Array(
    Math.min(chunkSize, fileSize - i)
  );
  // Fill chunk with data (example: random bytes)
  crypto.getRandomValues(chunk);

  await writer.write(chunk);
  uploaded += chunk.byteLength;

  const progress = (uploaded / fileSize) * 100;
  console.log(`Upload progress: ${progress.toFixed(1)}%`);
}

await writer.close();
await client.finalizeStream();
console.log('Upload complete!');
Example: Pipe between streams
Download from one FTP server and upload to another:
import { FTPClient } from 'workerd-ftp' ;
const source = new FTPClient('source.example.com', {
  user: 'source-user',
  pass: 'source-pass'
});
const dest = new FTPClient('dest.example.com', {
  user: 'dest-user',
  pass: 'dest-pass'
});

await source.connect();
await dest.connect();

// Get file size for allocation (before starting the transfer,
// since downloadReadable() holds the lock until finalizeStream())
const size = await source.size('file.dat');

// Get source readable stream
const readable = await source.downloadReadable('file.dat');

// Get destination writable stream
const writable = await dest.uploadWritable('file.dat', size);

// Pipe directly
await readable.pipeTo(writable);

// Finalize both
await source.finalizeStream();
await dest.finalizeStream();

// Close connections
await source.close();
await dest.close();

console.log('Transfer complete!');
AsyncDisposable support
The streaming methods are designed to work with the AsyncDisposable pattern (mentioned in the source comments):
// The comments in src/classes/ftp-client.ts:227 mention:
// "Or, you can use the AsyncDispoable interface."
This allows automatic cleanup with await using (where available in your TypeScript/JavaScript environment):
// Future syntax support
await using stream = await client.downloadReadable('file.dat');
// Automatic finalization when scope exits
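A minimal sketch of what such a wrapper could look like follows. It is hypothetical, not the library's actual implementation: it attaches an async-dispose hook that calls finalizeStream(), with a Symbol.for fallback for runtimes that predate Symbol.asyncDispose:

```typescript
// Hypothetical sketch, not the library's actual implementation: attach an
// async-dispose hook so `await using` (TS 5.2+) can finalize the stream.
const asyncDispose: symbol =
  (Symbol as any).asyncDispose ??
  Symbol.for('Symbol.asyncDispose'); // fallback for older runtimes

function withAutoFinalize<T extends object>(
  stream: T,
  client: { finalizeStream(): Promise<void> },
): T {
  // The hook simply delegates cleanup to finalizeStream().
  return Object.assign(stream, {
    [asyncDispose]: () => client.finalizeStream(),
  });
}
```

When the runtime supports explicit resource management, await using invokes that hook as the scope exits; otherwise you can call it manually.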
Best practices
Always finalize streams
Call finalizeStream() in a finally block to ensure cleanup:

try {
  const readable = await client.downloadReadable('file.dat');
  // ... process stream ...
} finally {
  await client.finalizeStream();
}
Handle backpressure
When piping streams, the Web Streams API handles backpressure automatically:

await readable.pipeTo(writable);
Specify file size for uploads
When known, provide the file size to uploadWritable() for better server compatibility:

const size = await getFileSize();
const writable = await client.uploadWritable('file.dat', size);
Use chunks appropriately
Balance chunk size between memory usage and performance:

const CHUNK_SIZE = 64 * 1024; // 64 KB is often optimal
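As an illustration (chunksOf is a hypothetical helper, not part of workerd-ftp), a buffer can be split into fixed-size chunks for streaming writes without copying any bytes:

```typescript
// Illustrative helper (not part of workerd-ftp): split a buffer into
// fixed-size chunks for streaming writes. subarray() returns views into
// the original buffer, so no bytes are copied.
function* chunksOf(
  data: Uint8Array,
  size = 64 * 1024,
): Generator<Uint8Array> {
  for (let offset = 0; offset < data.byteLength; offset += size) {
    yield data.subarray(offset, Math.min(offset + size, data.byteLength));
  }
}
```

Each yielded chunk can be passed straight to writer.write(); the final chunk is simply shorter when the buffer size isn't an exact multiple.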
Comparison: Regular vs Streaming
Regular download

// Loads entire file into memory
const data = await client.download('large.zip');
// data is Uint8Array with complete file

// Memory usage: Full file size
// Use when: File fits comfortably in memory

Streaming download

// Returns stream for incremental reading
const readable = await client.downloadReadable('large.zip');
const reader = readable.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Process chunk
}
await client.finalizeStream();

// Memory usage: Just the chunk size
// Use when: File is large or processing incrementally
Next steps
File operations: learn about regular file operations
Security: best practices for secure FTP