# Why Patterns Matter
Go gives you goroutines and channels. Patterns tell you how to structure them so systems stay fast, safe, and easy to stop. Real-world concurrency is about backpressure, cancellation, and owning the lifecycle of every goroutine you launch.
# Worker Pool (Backpressure)
A worker pool caps concurrency and prevents overload. It is the default pattern for CPU- and IO-heavy jobs.
```go
jobs := make(chan Job)
results := make(chan Result)

// Each worker drains jobs until the channel is closed, then exits.
worker := func() {
	for job := range jobs {
		results <- handle(job)
	}
}

// Four workers cap the number of in-flight jobs at four.
for i := 0; i < 4; i++ {
	go worker()
}
```

The producer signals completion by closing `jobs`; a separate goroutine should close `results` once all workers have returned.

# Pipeline Stages
Pipelines model work as stages. Each stage reads from an input channel and writes to an output channel. The most important rule is cancellation: every stage must stop on context cancellation and close its output.
```go
func stage(ctx context.Context, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for {
			select {
			case v, ok := <-in:
				if !ok {
					return
				}
				// The send must also honor cancellation, or this
				// goroutine leaks if downstream stops receiving.
				select {
				case out <- v * 2:
				case <-ctx.Done():
					return
				}
			case <-ctx.Done():
				return
			}
		}
	}()
	return out
}
```

# Fan-out and Fan-in
Fan-out spreads work to multiple workers. Fan-in merges their outputs into one channel. The safe fan-in closes the output only after all forwarders finish.
```go
func fanIn(ctx context.Context, inputs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	// output forwards one input channel into out, stopping on cancellation.
	output := func(ch <-chan int) {
		defer wg.Done()
		for v := range ch {
			select {
			case out <- v:
			case <-ctx.Done():
				return
			}
		}
	}
	wg.Add(len(inputs))
	for _, ch := range inputs {
		go output(ch)
	}
	// The single closer waits for every forwarder before closing out.
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}
```

# Bounded Parallelism (Semaphore)
A buffered channel can act as a semaphore. Each goroutine acquires a token and releases it when done, limiting concurrency to the channel capacity.
```go
var wg sync.WaitGroup
sem := make(chan struct{}, 4)
for _, job := range jobs {
	sem <- struct{}{} // acquire a token; blocks once 4 jobs are in flight
	wg.Add(1)
	go func(j Job) {
		defer wg.Done()
		defer func() { <-sem }() // release the token
		process(j)
	}(job)
}
wg.Wait() // all jobs finished
```

# errgroup (Fail Fast)
If you need parallel work with error propagation, use errgroup. It collects the first error and cancels the shared context so other goroutines can stop quickly.
```go
g, ctx := errgroup.WithContext(context.Background())
for _, url := range urls {
	u := url // per-iteration copy (unnecessary as of Go 1.22)
	g.Go(func() error {
		return fetch(ctx, u)
	})
}
// Wait returns the first non-nil error; ctx is cancelled as soon as
// any call fails, so the remaining fetches can stop early.
if err := g.Wait(); err != nil {
	return err
}
```

# Rate Limiting with Ticker
A ticker can feed a token channel. Each token allows one unit of work, which gives you a simple and reliable rate limiter.
```go
ticker := time.NewTicker(200 * time.Millisecond)
defer ticker.Stop()

// Refill one token per tick; drop the tick if a token is already waiting.
limiter := make(chan struct{}, 1)
go func() {
	for range ticker.C {
		select {
		case limiter <- struct{}{}:
		default:
		}
	}
}()

for _, job := range jobs {
	<-limiter // at most one job starts per tick
	go process(job)
}
```

# Common Pitfalls
- Starting goroutines without a shutdown signal.
- Closing channels from the wrong side (receivers should not close).
- Unbounded fan-out that overwhelms dependencies.
- Range loops that never exit because the channel is never closed.
- Busy-waiting with `default` in select loops, which spins the CPU.
# ⚡ Key Takeaways
- Every goroutine must have a clear exit path.
- Use worker pools for bounded concurrency and backpressure.
- Pipelines must propagate cancellation.
- Fan-in requires one closer that waits for all senders.
- errgroup is for fail-fast parallel work.