## Append records
Append records to a stream from stdin. On success, the command reports:

- Sequence number range appended: `0..0` (a single record)
- New tail position: sequence `1`, timestamp `1704067200000`
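A sketch of what such an invocation might look like; the `s2` command name and `my-stream` stream name are assumptions, not taken from this page, so the CLI line appears only as a comment:

```shell
# Hypothetical invocation (CLI name `s2` and stream `my-stream` are assumptions):
#   printf 'hello world\n' | s2 append my-stream
# Locally, the stdin payload is just a newline-terminated record body:
printf 'hello world\n' > record.txt
wc -c < record.txt   # 12 bytes: 11 characters plus the trailing newline
```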
### Append from a file
### Append multiple records
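Since each line of input becomes one record, multi-record input can be prepared with standard tools; a sketch (the file name and the commented CLI invocation are assumptions):

```shell
# Three newline-delimited lines -> three records on append
printf 'record one\nrecord two\nrecord three\n' > records.txt
# Hypothetical append (CLI name and stream name are assumptions):
#   s2 append my-stream < records.txt
wc -l < records.txt   # 3 lines, i.e. 3 records
```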
Records are newline-delimited by default.

### Append JSON records
Use the `--format json` flag for JSON records.
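Preparing JSON input is again one object per line; note that the `"body"` field name below is an assumption about the record schema, not confirmed by this page:

```shell
# One JSON object per line; the "body" field name is an assumption.
printf '%s\n' '{"body":"hello"}' '{"body":"world"}' > records.json
# Hypothetical append (CLI and stream names are assumptions):
#   s2 append my-stream --format json < records.json
wc -l < records.json   # 2 lines, i.e. 2 records
```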
### Append JSON with base64 bodies
Use `--format json-base64` when record bodies contain binary data.
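Binary bytes cannot be placed in a plain JSON string, so they are base64-encoded first. A local sketch (the `"body"` field name and the commented invocation are assumptions):

```shell
# Base64-encode a 9-byte binary payload so it can travel inside JSON
# (octal escapes keep the printf portable).
body=$(printf '\000\001binary\377' | base64)
printf '{"body":"%s"}\n' "$body" > records.json   # "body" field name is an assumption
# Hypothetical append: s2 append my-stream --format json-base64 < records.json
printf '%s' "$body" | base64 -d | wc -c   # decodes back to 9 bytes
```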
### Append with fencing token
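A fencing token is an opaque string that cooperating writers agree on; the token value and the flag name in the comment below are illustrative assumptions, not confirmed by this page:

```shell
# Pick an opaque token shared by cooperating writers; the value is illustrative.
token="writer-epoch-1"
# Hypothetical fenced append (flag name is an assumption):
#   printf 'payload\n' | s2 append my-stream --fencing-token "$token"
echo "$token"
```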
Enforce a fencing token to coordinate writers.

### Append with sequence number match
Only append if the next sequence number matches the expected value.

### Control batching with linger
Adjust how long to wait before flushing a batch, e.g. `5ms`.
Lower values reduce latency but may increase the number of API calls. Higher values improve throughput for bulk appends.
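The tradeoff can be made concrete with a rough bound: under a steady stream of records, batches flush at most about once per linger interval, which caps the API call rate.

```shell
# Approximate upper bound on flushes (API calls) per second for a given linger
linger_ms=5
max_flushes_per_sec=$(( 1000 / linger_ms ))
echo "$max_flushes_per_sec"   # 200 flushes/sec at 5ms linger
```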
## Read records
Read all records from a stream. By default, `read` tails the stream indefinitely, waiting for new records; use `--count` or press Ctrl+C to stop.
### Read a specific number of records
### Read from a specific sequence number
### Read from a timestamp
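Start positions here are Unix timestamps in milliseconds; one way to produce one from epoch seconds, portable across GNU and BSD `date`:

```shell
# Current time as Unix milliseconds (second precision, scaled up to ms)
now_ms=$(( $(date +%s) * 1000 ))
echo "$now_ms"
```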
Read records from a specific Unix timestamp (milliseconds).

### Read the last N records
Read starting from N records before the tail.

### Limit by bytes
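The semantics are analogous to byte-limiting with standard Unix tools; as a purely local illustration of stopping after a byte budget:

```shell
# Consume only the first 4 bytes of a 10-byte input, then stop
printf 'abcdefghij' | head -c 4   # prints "abcd"
```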
Stop reading after consuming a certain number of bytes.

### Read until a timestamp
Read records up to (but not including) a specific timestamp.

### Clamp start position at tail
If the requested start position is beyond the tail, start at the tail instead of returning an error.

### Output to a file
### Read formats
Text format (default).

## Tail a stream
Show the last N records (like Unix `tail`).
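For the analogy, this is the Unix behavior being mirrored:

```shell
# Unix tail as the mental model: keep only the last N lines of the input
printf 'a\nb\nc\nd\ne\n' | tail -n 2   # prints "d" then "e"
```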
### Specify number of records
### Follow mode
Continuously show new records (like `tail -f`).
### Tail output formats
Same format options as `read`.
## Common workflows
### Stream processing pipeline

### Export records to file

### Continuous monitoring

### Bulk import from file

### Replay records to a different stream

### Coordinated append with fencing

### Incremental backup
## Appending command records
Certain operations, like `trim` and `fence`, append special command records to the stream. When reading with `--format text`, these are displayed in a distinguishable form.
## Performance tips
### Batching
- Use `--linger` to control the batch size vs. latency tradeoff
- Larger batches (higher linger) improve throughput
- Smaller batches (lower linger) reduce latency
### Reading
- Use `--count` or `--bytes` to limit reads
- Filter records early in the pipeline to reduce data transfer
- Use `--format text` for better performance if metadata isn’t needed