
Overview

Concurrency is Go’s superpower. Unlike OS threads, which are heavy (often 1 MB+ of stack), goroutines start with a tiny stack (around 2 KB) that grows as needed, so a single Go program can easily run tens of thousands of concurrent tasks. This chapter explores goroutines and channels, and demonstrates the Worker Pool pattern, a production-ready approach to managing concurrent tasks.

Core Concurrency Components

  • Goroutine (go func()): a lightweight thread of execution managed by the Go runtime.
  • Channel (make(chan T)): a pipe that connects concurrent goroutines. You send values into one end and receive from the other.
  • WaitGroup (sync.WaitGroup): a counter used to wait for a collection of goroutines to finish.

Goroutines: Lightweight Concurrency

Basic Goroutine Example

concurrency/task1/main.go
func say(s string) {
	for i := 0; i < 5; i++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(s)
	}
}

func main() {
	go say("world") // Runs concurrently
	say("hello")    // Runs in the main goroutine
}
The go keyword launches a new goroutine. The function executes concurrently with the rest of the program.

Multiple Goroutines

concurrency/task3/main.go
func printNumbers() {
	for i := 0; i < 5; i++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(i)
	}
}

func printLetters() {
	for ch := 'a'; ch <= 'e'; ch++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(string(ch))
	}
}

func main() {
	go printNumbers()
	go printLetters()
	time.Sleep(1 * time.Second) // Wait for goroutines to finish (unreliable!)
}
Problem: Using time.Sleep() to wait for goroutines is unreliable. What if they take longer than expected? This is where channels and WaitGroups come in.

Channels: Communication Between Goroutines

Channels allow goroutines to communicate safely without explicit locks.

Creating Channels

// Unbuffered channel (blocks until receiver is ready)
ch := make(chan int)

// Buffered channel (can hold values without blocking)
ch := make(chan int, 100)

Channel Operations

ch <- value   // Send value to channel
value := <-ch // Receive value from channel
v, ok := <-ch // Receive; ok is false once the channel is closed and drained
close(ch)     // Close the channel (only the sender should do this)

Synchronizing with Channels

concurrency/task4/main.go
func numbers(ch chan bool) {
	for i := 0; i < 5; i++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(i)
	}
	ch <- true // Send signal: "I'm done"
}

func character(ch chan bool) {
	for i := 'a'; i <= 'e'; i++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(string(i))
	}
	ch <- true
}

func main() {
	ch := make(chan bool, 1)
	go numbers(ch)
	go character(ch)
	<-ch // Wait for first completion
	<-ch // Wait for second completion
}
Channels provide a way to synchronize goroutines without explicit locks. The <-ch operation blocks until a value is available.

Worker Pool Pattern

Instead of spawning a new goroutine for every single job (which can crash a system under load), we start a fixed number of workers that pick jobs from a queue.

Why Worker Pools?

  • Resource Control: Limit concurrent operations (e.g., max 10 concurrent HTTP requests)
  • Efficiency: Reuse goroutines instead of creating/destroying them
  • Backpressure: Queue jobs when all workers are busy

Architecture: Fan-Out / Fan-In

Jobs are fanned out from a single jobs channel to multiple workers running concurrently, and each worker's results are fanned back in to a single result channel.

Complete Worker Pool Implementation

Step 1: Define the Worker Function
func worker(wg *sync.WaitGroup, resultChan chan string, jobsChan chan string) {
	defer wg.Done() // Ensure Done is called when function exits

	for url := range jobsChan {
		// Simulate network delay
		time.Sleep(time.Millisecond * 50)
		fmt.Println("Fetching URL:", url)
		resultChan <- "Fetched " + url
	}
}
The for url := range jobsChan loop automatically stops when the channel is closed.
Step 2: Set Up Channels and WaitGroup
jobs := []string{
	"http://example.com/image1.jpg",
	"http://example.com/image2.jpg",
	"http://example.com/image3.jpg",
	"http://example.com/image4.jpg",
	"http://example.com/image5.jpg",
}

var wg sync.WaitGroup
totalWorkers := 5
resultChan := make(chan string, len(jobs))
jobsChan := make(chan string, len(jobs))
Step 3: Start Workers
for i := 0; i < totalWorkers; i++ {
	wg.Add(1)
	go worker(&wg, resultChan, jobsChan)
}
Step 4: Send Jobs and Close Job Channel
for _, job := range jobs {
	jobsChan <- job
}
close(jobsChan)  // Signal: no more jobs coming
Step 5: Wait and Close Result Channel
go func() {
	wg.Wait()           // Wait for all workers to finish
	close(resultChan)   // Close result channel
}()
Critical: We wait in a separate goroutine to avoid deadlock. If the main goroutine called wg.Wait() before reading results and the result channel filled up, workers would block on their sends, Done would never be called, and the program would deadlock.
Step 6: Collect Results
for result := range resultChan {
	fmt.Println("Result received:", result)
}

fmt.Println("Total time taken:", time.Since(start))

Full Worker Pool Code

concurrency/main.go
package main

import (
	"fmt"
	"sync"
	"time"
)

func worker(wg *sync.WaitGroup, resultChan chan string, jobsChan chan string) {
	defer wg.Done()

	for url := range jobsChan {
		time.Sleep(time.Millisecond * 50)
		fmt.Println("Fetching URL:", url)
		resultChan <- "Fetched " + url
	}
}

func main() {
	jobs := []string{
		"http://example.com/image1.jpg",
		"http://example.com/image2.jpg",
		"http://example.com/image3.jpg",
		"http://example.com/image4.jpg",
		"http://example.com/image5.jpg",
	}

	var wg sync.WaitGroup
	totalWorkers := 5
	resultChan := make(chan string, len(jobs))
	jobsChan := make(chan string, len(jobs))

	start := time.Now()

	// Start workers
	for i := 0; i < totalWorkers; i++ {
		wg.Add(1)
		go worker(&wg, resultChan, jobsChan)
	}

	// Send jobs to workers
	for _, job := range jobs {
		jobsChan <- job
	}
	close(jobsChan)

	// Wait for all workers to finish
	go func() {
		wg.Wait()
		close(resultChan)
	}()

	// Collect results
	for result := range resultChan {
		fmt.Println("Result received:", result)
	}

	fmt.Println("Total time taken:", time.Since(start))
}

Key Concepts Explained

Buffered vs Unbuffered Channels

  Type        Declaration           Behavior
  Unbuffered  make(chan int)        Sender blocks until a receiver is ready
  Buffered    make(chan int, 100)   Sender blocks only when the buffer is full

WaitGroup Pattern

var wg sync.WaitGroup

wg.Add(1)        // Increment counter
go func() {
    defer wg.Done()  // Decrement when done
    // ... work ...
}()

wg.Wait()        // Block until counter reaches 0

Channel Closing Rules

  • Only the sender should close a channel
  • Closing an already-closed channel causes panic
  • Sending to a closed channel causes panic
  • Receiving from a closed channel drains any remaining buffered values, then yields the zero value; the two-value form v, ok := <-ch reports ok == false once the channel is empty

Common Patterns

Pattern 1: Fan-Out (Distribute Work)

for i := 0; i < numWorkers; i++ {
    go worker(jobsChan, resultsChan)
}

Pattern 2: Fan-In (Collect Results)

for result := range resultsChan {
    // Process result
}

Pattern 3: Pipeline

stage1 := generate(input)
stage2 := process(stage1)
stage3 := output(stage2)

Performance Benefits

Example: Fetching 5 URLs sequentially would take 250ms (5 × 50ms). With 5 workers running concurrently, it takes just ~50ms — a 5x speedup!

Running the Examples

cd concurrency
go run main.go
You’ll see workers processing jobs concurrently and the total execution time.

Best Practices

  • Always close channels when done sending: this prevents goroutine leaks and lets range loops exit cleanly.
  • Use buffered channels for known workloads: a buffer sized to the number of jobs prevents unnecessary blocking.
  • Use WaitGroups for synchronization: avoid time.Sleep(), which is unreliable and wastes resources.
  • Handle panics in goroutines: an unrecovered panic in any goroutine crashes the entire program, so call recover() inside a deferred function in critical workers.
