# Why Goroutines Matter
Goroutines make it practical to structure software around concurrent tasks: request fan-out, streaming pipelines, background cleanup, and bounded worker pools.
They are lightweight, but not free. The core production skill is not just "start goroutines"—it is managing lifecycle, cancellation, and ownership so nothing leaks or races.
# Concurrency vs Parallelism
Concurrency is about structuring independent tasks; parallelism is about running tasks simultaneously on multiple cores. Goroutines always give you concurrency; you get parallelism only when the runtime has multiple CPUs to schedule them on (bounded by GOMAXPROCS).
```go
// Concurrency: many tasks in progress
// Parallelism: tasks executing at the same instant
go fetchUser()
go fetchOrders()
go fetchRecommendations()
```

# Starting Goroutines Safely
The `go` keyword schedules a function to run in a new goroutine and returns immediately. When `main` returns, the program exits and any goroutines still running are terminated, finished or not.
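A runnable sketch of the hazard, assuming a trivial `say` helper; the `done` channel is one way to wait for the goroutine (`sync.WaitGroup`, covered in the next section, is another):

```go
package main

import "fmt"

func say(s string) { fmt.Println(s) }

func main() {
	done := make(chan struct{})
	go func() {
		say("world")
		close(done)
	}()
	say("hello")
	<-done // without this, main may exit before "world" is ever printed
}
```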
```go
go say("world")
say("hello") // the current goroutine continues
```

# Lifecycle Management with WaitGroup
Use sync.WaitGroup to guarantee all launched tasks complete before exiting a scope. This prevents partial work and hidden shutdown bugs.
```go
var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	// work
}()
wg.Wait()
```

# Cancellation and Leak Prevention
Every long-running goroutine should have an exit path (context cancellation, done channel, or input close). Without this, you risk goroutine leaks that accumulate memory and CPU overhead.
```go
func worker(done <-chan struct{}, jobs <-chan int) {
	for {
		select {
		case j, ok := <-jobs:
			if !ok {
				return // input channel closed: normal completion
			}
			_ = j // process
		case <-done:
			return // explicit shutdown signal
		}
	}
}
```

# Common Pitfall: Loop Variable Capture
Before Go 1.22, a `for` loop reused a single loop variable across iterations, so goroutine closures that captured it often all observed its final value. Go 1.22 gives each iteration a fresh variable, but passing the value explicitly into the closure still keeps behavior consistent across toolchain versions and makes the data flow obvious.
```go
for i := 0; i < 3; i++ {
	go func(i int) { fmt.Println(i) }(i)
}
```

# Bounded Concurrency Pattern
Limit parallel work to avoid resource spikes. A semaphore channel is a simple and effective pattern.
```go
sem := make(chan struct{}, 8) // at most 8 workers at once
for _, task := range tasks {
	sem <- struct{}{} // acquire a slot; blocks when 8 are in flight
	go func(task Task) {
		defer func() { <-sem }() // release the slot on every exit path
		process(task)
	}(task)
}
// Note: the semaphore bounds concurrency but does not wait for
// completion; pair it with a sync.WaitGroup when callers need results.
```

# Practice Challenge
Build ParallelSum by splitting input, summing halves in separate goroutines, then extend it with cancellation and bounded concurrency for large workloads.
```go
package main

import "fmt"

func ParallelSum(nums []int) int {
	// TODO: sum the two halves in separate goroutines, then combine
	return 0 // placeholder so the stub compiles
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6, 7, 8}
	fmt.Println(ParallelSum(nums))
}
```

# ⚡ Key Takeaways
- Goroutines are easy to start; lifecycle management is the real engineering challenge
- Coordinate completion with WaitGroup and coordinate shutdown with cancellation
- Always design explicit goroutine exit paths to prevent leaks
- Use bounded concurrency to protect databases, APIs, and CPU under load
- Prefer explicit loop-variable capture to keep goroutine behavior predictable