This guide will help you get S2 Lite running locally in minutes.
Prerequisites
Install S2 CLI
Choose your preferred installation method: Homebrew (macOS/Linux), Cargo, a release binary, or Docker.
With Homebrew:
brew install s2-streamstore/s2/s2
Verify installation: Make sure you have version 0.26 or later for S2 Lite support.
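The 0.26 minimum can also be checked programmatically in setup scripts; a minimal sketch of comparing a dotted version string against that floor (the parsing shown is illustrative, not part of the S2 CLI):

```python
def meets_min_version(version: str, minimum: tuple = (0, 26)) -> bool:
    """Return True if a dotted version string is at least `minimum` (major, minor)."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

# meets_min_version("0.26.1") -> True
# meets_min_version("0.25.9") -> False
```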
Start S2 Lite
Run S2 Lite in in-memory mode (no persistence). You should see output like:
2024-03-03T12:00:00.000000Z INFO s2_lite::server: using in-memory object store
2024-03-03T12:00:00.000000Z INFO s2_lite::server: starting plain http server addr="0.0.0.0:8080"
Configure CLI
Point the S2 CLI to your local Lite instance:
export S2_ACCOUNT_ENDPOINT="http://localhost:8080"
export S2_BASIN_ENDPOINT="http://localhost:8080"
export S2_ACCESS_TOKEN="ignored"
The S2_ACCESS_TOKEN can be any value when using S2 Lite locally.
Verify the server is ready: curl http://localhost:8080/health
You should see:
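In scripts it can be convenient to block until the health endpoint answers before running further commands. A hedged sketch of a readiness poll (the retry policy is an assumption; the /health path comes from the curl check above):

```python
import time
import urllib.request
import urllib.error

def wait_until_ready(probe, timeout_s: float = 10.0, interval_s: float = 0.25) -> bool:
    """Poll `probe()` until it returns True or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False

def http_health_probe(url: str = "http://localhost:8080/health") -> bool:
    """Treat any 2xx response from the health endpoint as ready."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False
```

With the server running, `wait_until_ready(http_health_probe)` returns once it responds, or False after the timeout.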
Create a Basin
Create a basin with auto-creation of streams enabled: s2 create-basin my-basin \
--create-stream-on-append \
--create-stream-on-read
List basins:
Write and Read Data
Write some data to a stream: echo "Hello, S2!" | s2 append s2://my-basin/my-stream
Read it back: s2 read s2://my-basin/my-stream
You should see:
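Conceptually, each stream is an append-only sequence of records addressed by sequence number: appends land at the tail, and reads replay from a given position. A toy in-memory model of that behavior (illustrative only; this is not the S2 API):

```python
class Stream:
    """Toy append-only log: each append is assigned the next sequence number."""

    def __init__(self):
        self._records = []

    def append(self, record: bytes) -> int:
        """Append a record and return its sequence number."""
        self._records.append(record)
        return len(self._records) - 1

    def read(self, from_seq: int = 0):
        """Yield (seq, record) pairs starting at `from_seq`."""
        for seq in range(from_seq, len(self._records)):
            yield seq, self._records[seq]

s = Stream()
s.append(b"Hello, S2!")
print(list(s.read()))  # [(0, b'Hello, S2!')]
```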
Running with Object Storage
For persistent storage, run S2 Lite with an S3-compatible bucket.
AWS S3
Set up AWS credentials
Make sure you have AWS credentials configured, or use environment variables:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"
Start S2 Lite with S3
s2 lite \
--port 8080 \
--bucket my-s2-bucket \
--path s2lite
The --path argument sets a prefix within the bucket. This allows multiple S2 Lite instances to share a bucket.
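The isolation works because every object an instance writes is keyed under its own --path prefix. A sketch of why two instances sharing one bucket never collide (the key layout shown is hypothetical, not S2 Lite's actual scheme):

```python
def object_key(path_prefix: str, basin: str, stream: str, chunk: int) -> str:
    """Build a hypothetical object key under an instance's --path prefix."""
    return f"{path_prefix}/{basin}/{stream}/{chunk:08d}"

# Two instances with different --path values write disjoint key ranges:
a = object_key("s2lite-a", "my-basin", "events", 0)
b = object_key("s2lite-b", "my-basin", "events", 0)
print(a)       # s2lite-a/my-basin/events/00000000
print(a == b)  # False
```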
Tigris, R2, or Other S3-Compatible Storage
Set credentials and endpoint
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_ENDPOINT_URL_S3="https://fly.storage.tigris.dev" # or your endpoint
Start S2 Lite
s2 lite \
--port 8080 \
--bucket my-bucket \
--path s2lite
Running with Local Filesystem
For single-node deployments with local persistence:
s2 lite \
--port 8080 \
--local-root ./s2-data
Local filesystem mode does not provide the same durability guarantees as object storage. Use it only for development or single-node scenarios where you have reliable local storage.
Run the built-in benchmark to test your setup:
# Create a basin first
s2 create-basin benchmark --create-stream-on-append
# Run benchmark
s2 bench benchmark --target-mibps 10 --duration 5s --catchup-delay 0s
You’ll see real-time metrics:
Writing at 10.2 MiB/s, Reading at 10.1 MiB/s
Testing Streaming Sessions
S2 Lite supports bidirectional streaming sessions.
Open a read session
In one terminal: s2 read s2://my-basin/events 2> /dev/null
This will wait for new records.
Write data in real-time
In another terminal: # Stream data line by line
echo -e "event1\nevent2\nevent3" | s2 append s2://my-basin/events
You should see the events appear in the read terminal immediately.
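The echo pipeline works because each input line becomes one record. A sketch of that line-to-record framing (illustrative; the real CLI's framing details may differ):

```python
def frame_lines(data: bytes) -> list:
    """Split newline-delimited input into records, dropping empty trailing lines."""
    return [line for line in data.split(b"\n") if line]

records = frame_lines(b"event1\nevent2\nevent3\n")
print(records)  # [b'event1', b'event2', b'event3']
```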
Initializing Resources at Startup
You can pre-create basins and streams using an init file.
Create an init file
Create resources.json:
{
  "basins": [
    {
      "name": "my-basin",
      "config": {
        "create_stream_on_append": true,
        "default_stream_config": {
          "storage_class": "standard",
          "retention_policy": "7days"
        }
      },
      "streams": [
        {
          "name": "events",
          "config": {
            "retention_policy": "infinite"
          }
        }
      ]
    }
  ]
}
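For larger setups it may be easier to generate the init file than to write it by hand. A sketch that emits the same structure with the standard json module:

```python
import json

# Mirror of the resources.json structure shown above.
resources = {
    "basins": [
        {
            "name": "my-basin",
            "config": {
                "create_stream_on_append": True,
                "default_stream_config": {
                    "storage_class": "standard",
                    "retention_policy": "7days",
                },
            },
            "streams": [
                {"name": "events", "config": {"retention_policy": "infinite"}}
            ],
        }
    ]
}

with open("resources.json", "w") as f:
    json.dump(resources, f, indent=2)
```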
Start S2 Lite with the init file
s2 lite --port 8080 --init-file resources.json
Or use the environment variable: export S2LITE_INIT_FILE=resources.json
s2 lite --port 8080
The init file uses create-or-reconfigure semantics, so it’s safe to use on repeated restarts.
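Create-or-reconfigure means the file is applied as an idempotent upsert: missing resources are created, existing ones have their config overwritten, and applying the same file twice leaves the same end state. A toy sketch of those semantics (not S2's implementation):

```python
def apply_init(state: dict, desired: dict) -> dict:
    """Upsert each desired basin's config into `state`; reapplying is a no-op."""
    for basin in desired.get("basins", []):
        state[basin["name"]] = basin.get("config", {})
    return state

desired = {"basins": [{"name": "my-basin", "config": {"create_stream_on_append": True}}]}
state = {}
apply_init(state, desired)
apply_init(state, desired)  # safe on repeated restarts: same end state
print(state)  # {'my-basin': {'create_stream_on_append': True}}
```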
Next Steps
Deployment Deploy S2 Lite to production
Configuration Learn about all configuration options
Kubernetes Deploy with Helm
Monitoring Set up monitoring and observability