Understanding overflow
The async worker (worker.go:64) uses a buffered channel with a configurable capacity. The submit() method (worker.go:100-118) detects overflow: its default case executes when the channel cannot accept a new entry immediately.
Available strategies
Velo provides three overflow strategies (options.go:42-56) that balance performance, reliability, and latency differently.

OverflowSync (default)
- Guarantees no logs are lost
- Temporarily blocks the calling goroutine during direct writes
- Prevents unbounded memory growth
- Automatically resumes async operation when the buffer drains

Use OverflowSync when:
- Log completeness is critical
- You need guaranteed delivery without manual queue monitoring
- Temporary latency spikes are acceptable
OverflowSync is the default strategy (options.go:91) because it balances reliability and performance. Most applications tolerate occasional synchronous writes during extreme load.
OverflowDrop
- Prioritizes application performance over log completeness
- Never blocks the calling goroutine
- Silently discards entries until buffer space becomes available
- Lost logs are not recoverable

Use OverflowDrop when:
- Maintaining low latency is more critical than logging every event
- Your monitoring can tolerate gaps in log data
- You’re logging high-frequency metrics where sampling is acceptable
OverflowBlock
- Guarantees no logs are lost
- Blocks the calling goroutine indefinitely if the worker cannot keep up
- Can severely impact application latency and throughput
- May cause cascading failures if many goroutines block on logging

Use OverflowBlock when:
- Absolute log completeness is required (audit logs, compliance)
- You have strict guarantees about output stream throughput
- Your application can tolerate blocking on logging operations
Configuring buffer size
The buffer size (options.go:87-88) controls how many log entries can queue before overflow occurs:

- Larger buffers absorb traffic spikes better but use more memory
- Smaller buffers reduce memory usage but overflow more frequently
- Buffer size must be a power of 2
- Default 8192 entries is sufficient for most applications
Each buffer entry is a reference to a pooled byte slice, not the full log message. The memory overhead is approximately
BufferSize * 8 bytes for the channel itself.

Overflow in practice

Here’s how the three strategies behave during a traffic spike:

| Strategy | Behavior |
|---|---|
| OverflowSync | First 8192 entries queue async. Entries 8193+ write synchronously until buffer drains. No data loss. |
| OverflowDrop | First 8192 entries queue async. Entries 8193+ are discarded. Approximately 25% data loss. |
| OverflowBlock | First 8192 entries queue async. Entries 8193+ block until worker processes earlier entries. Severe latency. |
Monitoring overflow
Velo does not currently expose metrics for overflow events. To detect overflow issues:

- OverflowSync: Monitor application latency for spikes correlating with high log volume
- OverflowDrop: Compare log volume at source with log volume at destination
- OverflowBlock: Monitor goroutine counts and application throughput
Choosing a strategy
Decision tree:

1. Do you need every log entry?
   - Yes → OverflowSync or OverflowBlock
   - No → OverflowDrop
2. Can your application tolerate blocking on logging?
   - Yes → OverflowBlock
   - No → OverflowSync
3. Is low latency more important than log completeness?
   - Yes → OverflowDrop
   - No → OverflowSync (default)