# Start With the Mental Model
Concurrency means structuring a program as independently executing tasks; whether they actually run in parallel is up to the runtime. In Go, you start tasks with goroutines and coordinate them with channels or sync primitives. The goal is not "more threads"; it is correct, predictable, cancelable work.
- Goroutine = lightweight concurrent function
- Channel = typed pipe for coordination
- WaitGroup = wait for completion
- Mutex/Atomic = protect shared state
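
The four primitives above can be seen working together in one toy sketch (the `sumSquares` name and structure are illustrative, not a fixed API; the mutex is overkill with a single consumer and is shown only for completeness):

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares fans n goroutines into a channel and tallies the results.
func sumSquares(n int) int {
	var wg sync.WaitGroup
	results := make(chan int) // channel: a typed pipe between goroutines

	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(k int) { // goroutine: a lightweight concurrent function
			defer wg.Done()
			results <- k * k
		}(i)
	}

	go func() {
		wg.Wait() // WaitGroup: wait for all producers to finish
		close(results)
	}()

	var mu sync.Mutex // mutex: unnecessary with one consumer, shown for illustration
	total := 0
	for v := range results {
		mu.Lock()
		total += v
		mu.Unlock()
	}
	return total
}

func main() {
	fmt.Println(sumSquares(3)) // 1 + 4 + 9 = 14
}
```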
# Goroutines
A goroutine is started with the `go` keyword. It runs concurrently with the caller.

```go
package main

import (
	"fmt"
	"time"
)

func say(name string) {
	fmt.Println("hi", name)
}

func main() {
	go say("Alice")
	go say("Bob")
	time.Sleep(50 * time.Millisecond) // crude: gives the goroutines time to run
}
```

Avoid using `time.Sleep` for synchronization in real code; use a `sync.WaitGroup` or channels instead.
# Waiting for Work to Finish (WaitGroup)
```go
var wg sync.WaitGroup
for i := 0; i < 3; i++ {
	wg.Add(1)
	go func(n int) {
		defer wg.Done()
		fmt.Println("worker", n)
	}(i)
}
wg.Wait()
```

# Channels
Channels are typed pipes for communication. Unbuffered channels synchronize sender and receiver: a send blocks until a receiver is ready.

```go
ch := make(chan string)
go func() {
	ch <- "hello"
}()
msg := <-ch
fmt.Println(msg)
```

# Buffered Channels

A buffered channel accepts up to its capacity of sends without a waiting receiver.

```go
ch := make(chan int, 2)
ch <- 1
ch <- 2
fmt.Println(<-ch)
fmt.Println(<-ch)
```

# select, Timeouts, and Cancellation
Use `select` to wait on multiple channel events.

```go
select {
case msg := <-ch:
	fmt.Println(msg)
case <-time.After(200 * time.Millisecond):
	fmt.Println("timeout")
}
```

For production, prefer `context.WithTimeout` so timeouts can be shared across call chains.
# Context Cancellation (Production Pattern)
Contexts let you cancel a tree of goroutines. Every worker should select on `ctx.Done()` and return quickly.

```go
ctx, cancel := context.WithCancel(context.Background())

go func() {
	time.Sleep(200 * time.Millisecond)
	cancel()
}()

go func() {
	for {
		select {
		case <-ctx.Done():
			return
		default:
			// do work
		}
	}
}()
```

Note that the `default` branch makes this a busy loop; real workers usually block on a jobs channel or ticker instead.

# Fan-out and Fan-in with WaitGroup
Fan-out spreads work across workers. Fan-in merges their results back into a single stream. The safe pattern is: start the workers, have each one range over its input, and close the shared output only after `Wait()` reports that all workers are done.
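
The inputs to a fan-in are channels that their producers close when done. A minimal producer sketch (the `gen` helper name is a common convention, not a standard API):

```go
package main

import "fmt"

// gen emits the given values on a fresh channel and closes it when
// done, so downstream stages can simply range over it.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

func main() {
	total := 0
	for v := range gen(1, 2, 3) {
		total += v
	}
	fmt.Println(total) // 6
}
```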
```go
func fanIn(ctx context.Context, inputs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup

	output := func(ch <-chan int) {
		defer wg.Done()
		for v := range ch {
			select {
			case out <- v:
			case <-ctx.Done():
				return
			}
		}
	}

	wg.Add(len(inputs))
	for _, ch := range inputs {
		go output(ch)
	}

	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}
```

# Error Groups (errgroup)
`errgroup.Group` (from `golang.org/x/sync/errgroup`) is a WaitGroup plus error propagation and optional cancellation. It is ideal for "do N things in parallel or fail fast".

```go
g, ctx := errgroup.WithContext(context.Background())
for _, url := range urls {
	u := url // per-iteration copy (no longer needed as of Go 1.22)
	g.Go(func() error {
		return fetch(ctx, u)
	})
}
if err := g.Wait(); err != nil {
	// first error wins; ctx is canceled for the rest
}
```

# Worker Pool (Most Asked Pattern)
A worker pool limits concurrency and provides backpressure: only as many jobs run at once as there are workers.

```go
jobs := make(chan int)
results := make(chan int)

worker := func(id int) {
	for j := range jobs { // exits when jobs is closed
		results <- j * 2
	}
}

for w := 0; w < 3; w++ {
	go worker(w)
}

go func() {
	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)
}()

for i := 0; i < 5; i++ { // read exactly as many results as jobs sent
	fmt.Println(<-results)
}
```

# Data Races and the -race Tool
A data race happens when multiple goroutines access the same memory concurrently and at least one of the accesses is a write.

```go
// run: go test -race
// or:  go run -race main.go
```

Use a mutex or channel ownership to protect shared state. The race detector is essential for production-grade code.
# Common Pitfalls
- Closure capture in loops (pass loop variable as parameter).
- Forgetting to close channels used with range.
- Using `time.Sleep` for synchronization.
- Goroutine leaks due to missing cancellation paths.
- Assuming map or slice operations are thread-safe.
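
The first pitfall above can be sketched as follows: before Go 1.22, goroutines that closed over the loop variable directly could all observe its final value, while passing it as a parameter gives each goroutine its own copy (the `collect` helper is illustrative only):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// collect launches n goroutines, passing the loop variable as a
// parameter so each goroutine sees a distinct value.
func collect(n int) []int {
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		got []int
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(k int) { // k is this goroutine's own copy of i
			defer wg.Done()
			mu.Lock()
			got = append(got, k)
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	sort.Ints(got) // completion order is nondeterministic, so sort for display
	return got
}

func main() {
	fmt.Println(collect(3)) // [0 1 2]
}
```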
# ⚡ Key Takeaways
- Goroutines are cheap, but they still need clear shutdown paths.
- Channels coordinate work; unbuffered channels synchronize.
- WaitGroup waits; Mutex protects shared state.
- Use `select` for timeouts and cancellation.
- Always run with `-race` when concurrency matters.