By default, channels are unbuffered, meaning sends block until a receiver is ready. Buffered channels accept a limited number of values without a corresponding receiver, enabling asynchronous communication patterns.
Creating Buffered Channels
Specify the buffer capacity as the second argument to make():
package main

import "fmt"

func main() {
    // Create a channel buffering up to 2 values
    messages := make(chan string, 2)

    // Send values without a concurrent receiver
    messages <- "buffered"
    messages <- "channel"

    // Receive the values later
    fmt.Println(<-messages) // buffered
    fmt.Println(<-messages) // channel
}
Sends on a buffered channel block only when the buffer is full, and receives block only when the buffer is empty.
Unbuffered vs Buffered
Unbuffered Channel
ch := make(chan string) // Capacity: 0
ch <- "value" // BLOCKS until a receiver is ready
Buffered Channel
ch := make(chan string, 2) // Capacity: 2
ch <- "value1" // Does NOT block (buffer has space)
ch <- "value2" // Does NOT block (buffer has space)
ch <- "value3" // BLOCKS (buffer is full)
Buffer Capacity
The buffer capacity determines how many values can be queued:
// Buffer of 1
ch1 := make(chan int, 1)
ch1 <- 42 // OK, buffer has space
ch1 <- 43 // Blocks, buffer is full
// Buffer of 3
ch2 := make(chan int, 3)
ch2 <- 1 // OK
ch2 <- 2 // OK
ch2 <- 3 // OK
ch2 <- 4 // Blocks
Sending to a full buffered channel blocks the sender. Receiving from an empty buffered channel blocks the receiver.
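When blocking on a full buffer is not acceptable, Go's select statement with a default case performs a non-blocking send: the default branch runs immediately if the buffer has no free slot. A minimal sketch (the trySend helper name is illustrative):

```go
package main

import "fmt"

// trySend attempts a non-blocking send and reports whether it succeeded.
func trySend(ch chan int, v int) bool {
    select {
    case ch <- v: // succeeds while the buffer has space
        return true
    default: // buffer is full: give up instead of blocking
        return false
    }
}

func main() {
    ch := make(chan int, 2)
    fmt.Println(trySend(ch, 1)) // true
    fmt.Println(trySend(ch, 2)) // true
    fmt.Println(trySend(ch, 3)) // false: buffer is full
}
```

The same pattern with `case v := <-ch:` gives a non-blocking receive.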
When to Use Buffered Channels
1. Decoupling Sender and Receiver Timing
results := make(chan string, 10)

// Producer can send results without waiting
for _, url := range urls {
    go func(u string) {
        results <- fetch(u)
    }(url)
}

// Consumer processes when ready
for i := 0; i < len(urls); i++ {
    processResult(<-results)
}
2. Preventing Goroutine Leaks
func searchWithTimeout(query string) string {
    result := make(chan string, 1) // Buffered!
    go func() {
        // Even if the timeout fires first, this send completes
        // into the buffer, so the goroutine exits instead of leaking
        result <- slowSearch(query)
    }()
    select {
    case r := <-result:
        return r
    case <-time.After(1 * time.Second):
        return "timeout"
    }
}
Use a buffer of 1 when the sender should never block, even if the receiver times out or stops listening.
3. Rate Limiting
// Limit concurrent operations to 5
semaphore := make(chan struct{}, 5)

for _, task := range tasks {
    semaphore <- struct{}{} // Acquire a slot (blocks while 5 tasks are in flight)
    go func(t Task) {
        defer func() { <-semaphore }() // Release the slot
        process(t)
    }(task)
}
Blocking Behavior
| Operation | Unbuffered | Buffered (not full) | Buffered (full) |
|---|---|---|---|
| Send | Blocks until a receiver is ready | Does NOT block | Blocks |
| Receive | Blocks until a sender is ready | Blocks only if empty | Does NOT block |
Practical Example: Worker Queue
func main() {
    jobs := make(chan int, 100) // Buffer 100 jobs
    results := make(chan int, 100)

    // Start workers
    for w := 1; w <= 3; w++ {
        go worker(w, jobs, results)
    }

    // Send jobs (non-blocking until buffer fills)
    for j := 1; j <= 9; j++ {
        jobs <- j
    }
    close(jobs)

    // Collect results
    for a := 1; a <= 9; a++ {
        <-results
    }
}

func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * 2
    }
}
Checking Channel Length and Capacity
ch := make(chan int, 5)
ch <- 1
ch <- 2
fmt.Println(len(ch)) // 2 (current items in buffer)
fmt.Println(cap(ch)) // 5 (total buffer capacity)
The len() and cap() functions return the current state, which may change immediately in concurrent code. Use these only for monitoring, not for control flow.
Buffer Size Guidelines
Small Buffers (1-10)
- Signal channels with goroutine leak prevention
- Small batch operations
- Rate limiting with semaphore pattern
Medium Buffers (10-1000)
- Producer-consumer queues
- Batch processing pipelines
- Buffering between processing stages
Large Buffers (1000+)
- High-throughput data processing
- Burst handling in network servers
- Decoupling fast producers from slow consumers
Start with unbuffered channels for simplicity. Add buffering only when you have a specific performance or design reason.
Common Pitfalls
Over-buffering
// Bad: Excessive buffer hides synchronization issues
ch := make(chan int, 10000)
Under-buffering
// Bad: May cause unexpected blocking
results := make(chan Result) // Unbuffered
for i := 0; i < 100; i++ {
    go func() {
        results <- compute() // Goroutines leak if not all results are read
    }()
}
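The standard fix is to size the buffer to the number of senders, so every goroutine can complete its send and exit even if the reader stops early. A runnable sketch, where result and compute are placeholders for the Result type and compute() call above:

```go
package main

import "fmt"

// result and compute stand in for the Result type and compute()
// call from the snippet above.
type result struct{ n int }

func compute(i int) result { return result{n: i * i} }

func main() {
    const senders = 100
    // One buffer slot per sender: no goroutine can block on its send.
    results := make(chan result, senders)
    for i := 0; i < senders; i++ {
        go func(i int) {
            results <- compute(i)
        }(i)
    }

    // Reading only half the results is now safe; the remaining 50
    // goroutines still finish their sends and exit.
    sum := 0
    for i := 0; i < senders/2; i++ {
        sum += (<-results).n
    }
    fmt.Println("consumed", senders/2, "results")
}
```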
Best Practices
- Use unbuffered channels by default: only add buffering when needed
- Size buffers appropriately: match the buffer size to your use case
- Prevent goroutine leaks: use a buffer size of 1 in timeout scenarios
- Document buffer size choices: explain why a specific size was chosen
- Avoid using len() for control flow: checking channel length is inherently racy in concurrent code