Effect is designed for high performance, but following best practices ensures optimal runtime characteristics.
General Principles
- Lazy evaluation: effects are descriptions; they say what to do, not when to do it
- Fiber efficiency: lightweight fibers enable massive concurrency
- Resource pooling: reuse connections and other expensive resources
- Batching: combine similar operations to reduce overhead
- Caching: memoize expensive computations
Concurrency Optimization
Control Concurrency Levels
Limit concurrent operations to prevent resource exhaustion.
```typescript
// Bad: unbounded concurrency can exhaust sockets, memory, or the remote service
const results = yield* Effect.forEach(
  Array.from({ length: 10000 }, (_, i) => i),
  (id) => fetchUser(id),
  { concurrency: "unbounded" }
)

// Good: bounded concurrency keeps resource usage predictable
const results = yield* Effect.forEach(
  Array.from({ length: 10000 }, (_, i) => i),
  (id) => fetchUser(id),
  { concurrency: 10 }
)
```
For I/O operations, set concurrency to 2-4x your CPU cores. For CPU-bound tasks, match your core count.
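As a rough sketch of that sizing heuristic (the multiplier is an assumption, not an Effect API; `node:os` is only used to read the core count):

```typescript
import * as os from "node:os"

// Heuristic only: I/O-bound work tolerates more in-flight operations
// than there are cores; CPU-bound work does not
const cores = os.cpus().length
const ioConcurrency = cores * 3 // somewhere in the suggested 2-4x range
const cpuConcurrency = cores

console.log({ cores, ioConcurrency, cpuConcurrency })
```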
Batch Operations
Reduce overhead by batching similar operations.
```typescript
import { Effect } from "effect"

// Bad: one query per user id
const users = yield* Effect.forEach(
  userIds,
  (id) => db.query("SELECT * FROM users WHERE id = ?", [id])
)

// Good: a single batch query
const users = yield* db.query(
  `SELECT * FROM users WHERE id IN (${userIds.map(() => "?").join(",")})`,
  userIds
)
```
Use Effect.all for Parallel Operations
```typescript
// Sequential: each step waits for the previous one
const user = yield* fetchUser(id)
const orders = yield* fetchOrders(id)
const settings = yield* fetchSettings(id)

// Parallel: independent lookups run at the same time
const [user, orders, settings] = yield* Effect.all(
  [fetchUser(id), fetchOrders(id), fetchSettings(id)],
  { concurrency: "unbounded" }
)
```
Caching and Memoization
Cache Expensive Operations
```typescript
import { Effect, Cache, Duration } from "effect"

const makeUserCache = Cache.make({
  capacity: 1000,
  timeToLive: Duration.minutes(5),
  lookup: (id: string) => fetchUserFromDatabase(id)
})

const program = Effect.gen(function* () {
  const cache = yield* makeUserCache
  // First call runs the lookup against the database
  const user1 = yield* cache.get("123")
  // Second call is served from the cache
  const user2 = yield* cache.get("123")
})
```
Set appropriate cache expiration times. Stale data can cause bugs, while too-short TTLs reduce cache effectiveness.
Memoize Pure Computations
```typescript
import { Effect } from "effect"

const expensiveComputation = (n: number): Effect.Effect<number> =>
  Effect.sync(() => {
    let result = 0
    for (let i = 0; i < n * 1000000; i++) {
      result += Math.sqrt(i)
    }
    return result
  })

const program = Effect.gen(function* () {
  // Effect.cached memoizes the result forever;
  // use Effect.cachedWithTTL to add an expiration
  const memoized = yield* Effect.cached(expensiveComputation(100))
  // First call executes the computation
  const result1 = yield* memoized
  // Subsequent calls return the cached result
  const result2 = yield* memoized
  const result3 = yield* memoized
})
```
Resource Management
Pool Connections
Reuse database connections and HTTP clients.
```typescript
import { Effect, Pool } from "effect"

// Build the pool once and share it (e.g. via a Layer);
// rebuilding it per query would defeat its purpose
const makeConnectionPool = Pool.make({
  acquire: Effect.acquireRelease(
    Effect.tryPromise(() => createDatabaseConnection()),
    (conn) => Effect.promise(() => conn.close())
  ),
  size: 10
})

const query = (pool: Pool.Pool<Connection, Error>, sql: string) =>
  Effect.scoped(
    Effect.gen(function* () {
      // The connection returns to the pool when the scope closes
      const conn = yield* Pool.get(pool)
      return yield* Effect.tryPromise(() => conn.query(sql))
    })
  )
```
Scope Resources Appropriately
```typescript
// Bad: providing the layer per handler builds a fresh connection on every request
const handleRequest = (req: Request) =>
  Effect.gen(function* () {
    const db = yield* Database // new connection each time
    return yield* db.query("SELECT * FROM users")
  }).pipe(
    Effect.provide(DatabaseLive)
  )

// Good: provide the layer once at the application boundary
// so the connection is shared across requests
const handleRequest = (req: Request) =>
  Effect.gen(function* () {
    const db = yield* Database // reuses the shared connection
    return yield* db.query("SELECT * FROM users")
  })
// ...at startup: Effect.provide(app, DatabaseLive)
```
Stream Optimization
Use Chunking for Large Datasets
```typescript
import { Stream, Chunk } from "effect"

// Bad: process one item at a time
const processed = stream.pipe(
  Stream.map(processItem)
)

// Good: process in chunks
const processed = stream.pipe(
  Stream.rechunk(1000),
  Stream.mapChunks((chunk) =>
    Chunk.map(chunk, processItem)
  )
)
```
Buffer for Bursty Streams
```typescript
const buffered = stream.pipe(
  Stream.buffer({ capacity: 1000, strategy: "dropping" })
)
```
Use "dropping" strategy for real-time data where old data can be discarded. Use "sliding" to keep the latest values.
Don’t Create Effects in Loops
```typescript
// Bad: creates and runs a new effect on each loop iteration
for (let i = 0; i < 1000; i++) {
  yield* Effect.sync(() => processItem(items[i]))
}

// Good: a single Effect.forEach over the collection
yield* Effect.forEach(items, (item) =>
  Effect.sync(() => processItem(item))
)
```
Avoid Excessive Flatmapping
```typescript
// Bad: deeply nested flatMaps are hard to read
const result = yield* effect1.pipe(
  Effect.flatMap((a) => effect2(a)),
  Effect.flatMap((b) => effect3(b)),
  Effect.flatMap((c) => effect4(c)),
  Effect.flatMap((d) => effect5(d))
)

// Good: Effect.gen reads top to bottom
const result = yield* Effect.gen(function* () {
  const a = yield* effect1
  const b = yield* effect2(a)
  const c = yield* effect3(b)
  const d = yield* effect4(c)
  return yield* effect5(d)
})
```
Minimize Effect.sync Overhead
```typescript
// Bad: pays effect-construction overhead for a trivial pure function
const double = (n: number) => Effect.sync(() => n * 2)
const doubled = yield* Effect.forEach(numbers, double)

// Good: keep pure transformations as plain functions,
// applied with Effect.map where an effect is already involved
const doubled = numbers.map((n) => n * 2)
const fromEffect = yield* fetchNumbers().pipe(
  Effect.map((ns) => ns.map((n) => n * 2))
)
```
Benchmarking
Measure performance to identify bottlenecks.
```typescript
import { Effect, Console } from "effect"

const benchmark = <A, E>(label: string, effect: Effect.Effect<A, E>) =>
  Effect.gen(function* () {
    const start = Date.now()
    const result = yield* effect
    const duration = Date.now() - start
    yield* Console.log(`${label}: ${duration}ms`)
    return result
  })

const program = Effect.gen(function* () {
  yield* benchmark("Fetch users", fetchUsers())
  yield* benchmark("Process orders", processOrders())
})
```
Use Effect.withSpan for Tracing
```typescript
import { Effect } from "effect"

const tracedEffect = Effect.withSpan(
  fetchUsers(),
  "fetchUsers",
  { attributes: { userId: "123" } }
)
```
Memory Optimization
Reduce Per-Element Overhead in Long-Running Streams
```typescript
// Slower: folds one element at a time, paying per-element overhead
const sum = yield* stream.pipe(
  Stream.runFold(0, (acc, n) => acc + n)
)

// Faster: reduce each chunk first, then fold the partial sums
const sum = yield* stream.pipe(
  Stream.rechunk(10000),
  Stream.mapChunks((chunk) =>
    Chunk.of(Chunk.reduce(chunk, 0, (a, b) => a + b))
  ),
  Stream.runFold(0, (acc, n) => acc + n)
)
```
Release Resources Promptly
```typescript
// Good: the resource is released as soon as the scope closes
const data = yield* Effect.scoped(
  Effect.gen(function* () {
    const resource = yield* acquireResource()
    return yield* useResource(resource)
  })
)
// the resource has been released here
```
Compilation Optimization
Use Type Annotations
Help TypeScript by providing explicit types.
```typescript
// Good: an explicit return type keeps inference fast and errors local
const fetchUser = (id: string): Effect.Effect<User, UserNotFound> =>
  Effect.gen(function* () {
    // ...
  })
```
Avoid Deep Type Inference
```typescript
// Bad: deeply nested pipes force complex type inference
const complex = yield* pipe(
  effect1,
  Effect.flatMap((a) => pipe(
    effect2(a),
    Effect.flatMap((b) => pipe(
      effect3(b),
      Effect.map((c) => ({ a, b, c }))
    ))
  ))
)

// Good: Effect.gen keeps each step's type simple
const complex = yield* Effect.gen(function* () {
  const a = yield* effect1
  const b = yield* effect2(a)
  const c = yield* effect3(b)
  return { a, b, c }
})
```
Runtime Optimization
Create Reusable Runtimes
```typescript
import { ManagedRuntime } from "effect"

// Build the runtime once from the application layer
const runtime = ManagedRuntime.make(AppLive)

// Reuse it for every execution
runtime.runPromise(effect1)
runtime.runPromise(effect2)
runtime.runPromise(effect3)

// Dispose when the application shuts down
// await runtime.dispose()
```
Use runFork for Fire-and-Forget
```typescript
// Inside an effect: fork so non-critical work does not block the main logic
const fiber = yield* Effect.fork(logAnalytics(event))

// Continue with the main logic
const result = yield* processRequest()

// Optionally join later if the outcome is needed
// yield* Fiber.join(fiber)

// At the program's edge, Effect.runFork starts a fiber without awaiting it
Effect.runFork(logAnalytics(event))
```
Performance Testing
Write performance tests to catch regressions.
```typescript
import { it } from "@effect/vitest"
import { expect } from "vitest"
import { Effect } from "effect"

it.effect("should complete within 100ms", () =>
  Effect.gen(function* () {
    const start = Date.now()
    yield* heavyOperation()
    const duration = Date.now() - start
    expect(duration).toBeLessThan(100)
  })
)

it.effect("should handle 1000 concurrent requests", () =>
  Effect.gen(function* () {
    const requests = Array.from({ length: 1000 }, (_, i) => i)
    const start = Date.now()
    yield* Effect.forEach(
      requests,
      (id) => handleRequest(id),
      { concurrency: 100 }
    )
    const duration = Date.now() - start
    yield* Effect.log(`Processed 1000 requests in ${duration}ms`)
  }).pipe(
    Effect.provide(TestLayer)
  )
)
```
Production Monitoring
Monitor performance in production.
```typescript
import { Effect, Metric, Duration } from "effect"

const requestDuration = Metric.timer("http_request_duration")

const handleRequest = (req: Request) =>
  Effect.gen(function* () {
    const start = Date.now()
    const result = yield* processRequest(req)
    const duration = Date.now() - start
    // Metric.timer records Duration values
    yield* Metric.update(requestDuration, Duration.millis(duration))
    return result
  })
```
Best Practices Summary
- Limit concurrency to prevent resource exhaustion
- Cache aggressively for expensive operations
- Pool connections for databases and HTTP clients
- Batch operations to reduce overhead
- Use chunks for large data processing
- Avoid premature optimization: measure first
- Monitor in production to catch real-world issues
- Test performance to prevent regressions
Always profile before optimizing. Premature optimization can make code harder to maintain without significant benefits.
Next Steps