
Go Performance Code Review: Memory Leaks and Concurrency Patterns

Tony Dong
June 21, 2025
10 min read

Go’s concurrency model makes it easy to spin up goroutines and channels, but performance bugs can hide in seemingly innocuous pull requests. This guide gives reviewers a structured checklist for catching memory leaks, blocked goroutines, and misuse of synchronization primitives before the code lands in production.

Why Performance Issues Slip Past Review

Performance regressions rarely present as obvious syntax errors. They emerge from resource lifecycles and data flow, which makes them harder to spot in a diff. When reviewing Go code, center the conversation on workload characteristics rather than syntax.

  • Context-free diffs hide lifetime problems. Goroutines launched deep in a helper function can outlive the request scope unless they are tied to a context.
  • Benchmarks rarely run in CI. Without load tests, reviewers must reason about allocation frequency, object pooling, and channel contention by inspection.
  • Go’s garbage collector masks leaks until traffic spikes. Backpressure in queues or forgotten timers may only surface when the GC cannot reclaim memory fast enough.

Checklist for Memory Leaks in Go Reviews

Watch for Detached Goroutines

Every goroutine should be tied to an owner. Confirm that goroutines exit when the parent request is canceled and that buffered channels drain on error paths.

// Derive the goroutine's lifetime from the request context.
ctx, cancel := context.WithTimeout(req.Context(), 2*time.Second)
defer cancel()

results := make(chan Result) // producer below owns and closes this channel

go func() {
    defer close(results) // signal consumers on every exit path
    for _, item := range batch {
        select {
        case <-ctx.Done():
            return // request canceled or timed out: stop producing
        case results <- process(item):
        }
    }
}()

In reviews, insist on passing a context.Context into goroutines and using it to exit on cancellation. Detached goroutines that merely poll a boolean flag are a leak risk, and reading such a flag across goroutines without synchronization is also a data race.

Verify Channel Ownership

The goroutine that creates a channel should close it. If a producer depends on multiple consumers to drain the channel, verify that all error and timeout paths still close it; a leaked channel keeps goroutines alive and retains buffered messages.
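
For example, a minimal sketch of the producer-owns-close convention, where Result and fetch stand in for the real types and work:

func produce(ctx context.Context, inputs []string) <-chan Result {
    out := make(chan Result)
    go func() {
        defer close(out) // runs on every return path, including errors
        for _, in := range inputs {
            r, err := fetch(in)
            if err != nil {
                return // deferred close still runs on the error path
            }
            select {
            case <-ctx.Done():
                return
            case out <- r:
            }
        }
    }()
    return out
}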

Guard Against Timer and Ticker Leaks

Reviewers should flag time.NewTicker and time.NewTimer usage that lacks a matching Stop() call. Unstopped timers keep references alive and can delay GC on runtimes before Go 1.23 (which made unreferenced timers collectible); calling Stop() remains the explicit, version-independent fix.
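
A sketch of the shape to ask for, assuming ctx is in scope and flush stands in for the periodic work:

ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop() // pair every NewTicker/NewTimer with a Stop

for {
    select {
    case <-ctx.Done():
        return
    case <-ticker.C:
        flush() // hypothetical periodic work
    }
}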

Scrutinize Object Pools

If the PR introduces a sync.Pool or custom freelist, validate that objects are reset before being returned and that the pool is sized appropriately. Overly aggressive pooling can pin large allocations in memory; lack of pooling in hot loops creates GC churn.
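
For reference in review comments, a minimal sync.Pool sketch that resets the buffer before reuse; render is an illustrative caller:

var bufPool = sync.Pool{
    New: func() any { return new(bytes.Buffer) },
}

func render(w io.Writer, data []byte) error {
    buf := bufPool.Get().(*bytes.Buffer)
    buf.Reset()            // clear stale bytes from the previous user
    defer bufPool.Put(buf) // return to the pool on every path
    buf.Write(data)
    _, err := w.Write(buf.Bytes())
    return err
}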

Concurrency Patterns That Degrade Throughput

Unbounded Work Queues

A buffered channel used as a queue should have a capacity justified by metrics. If the code uses an unbounded slice to accumulate work, ask for backpressure handling or explicit limits.
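
One way to make that backpressure explicit; Job and the capacity of 128 are illustrative, and the real capacity should come from metrics:

var queue = make(chan Job, 128) // capacity justified by measured load

func enqueue(ctx context.Context, j Job) error {
    select {
    case queue <- j:
        return nil
    case <-ctx.Done():
        return ctx.Err() // shed load instead of queueing forever
    }
}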

Misused Mutexes

Look for large critical sections guarded by sync.Mutex where most work could run outside the lock. Point out double locking and panic paths that skip Unlock(). Prefer sync.RWMutex when read-heavy workloads dominate, but confirm that upgrade patterns do not lead to deadlocks.
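
A shape worth suggesting in review, with Cache, decode, and items as stand-ins:

func (c *Cache) Set(key string, raw []byte) {
    val := decode(raw) // expensive work stays outside the lock

    c.mu.Lock()
    defer c.mu.Unlock() // released even if the map write panics
    c.items[key] = val
}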

Channel-Based State Machines

Reviewers should evaluate whether a channel loop can block if the consumer exits early. For request-scoped state machines, a buffered channel with capacity 1 plus a default case usually prevents deadlocks during shutdown.
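
A sketch of that shutdown-safe shape, with run and handle as hypothetical work and error handling:

done := make(chan error, 1)

go func() {
    err := run(ctx) // hypothetical unit of work
    select {
    case done <- err:
    default: // no reader left; drop the result rather than block forever
    }
}()

select {
case err := <-done:
    handle(err)
case <-ctx.Done():
    // shutdown path: the producer above still exits cleanly
}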

ErrGroup and WaitGroup Hygiene

With sync.WaitGroup, ensure Add() is called before the goroutine starts, so that a concurrent Wait() cannot return early or panic. With errgroup.Group, confirm that the derived context is used to stop siblings when one returns an error.
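
The errgroup half of that checklist, sketched with hypothetical urls and fetch:

g, ctx := errgroup.WithContext(ctx)
for _, u := range urls {
    u := u // rebind for pre-Go 1.22 loop-variable semantics
    g.Go(func() error {
        return fetch(ctx, u) // must honor ctx so siblings stop early
    })
}
if err := g.Wait(); err != nil {
    return err // first non-nil error; ctx was canceled for the rest
}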

Performance Signals to Request from Authors

When a PR alters concurrency primitives or memory allocation, request supporting data:

  • Benchmarks for hot paths. A small go test -bench snippet (see the sketch after this list) gives reviewers confidence about latency and allocations per operation.
  • pprof captures. CPU and heap profiles before/after the change quickly surface new hotspots or leak patterns.
  • Dashboard screenshots. Charts showing goroutines in flight, heap usage, and GC pause times are persuasive evidence that the change is safe.
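
A minimal allocation-aware benchmark to request, with makeBatch and process standing in for the code under review:

func BenchmarkProcess(b *testing.B) {
    batch := makeBatch(1024) // hypothetical fixture
    b.ReportAllocs()         // adds allocs/op and B/op to the output
    b.ResetTimer()           // exclude fixture setup from the timing
    for i := 0; i < b.N; i++ {
        process(batch)
    }
}

Running it with go test -bench=Process prints ns/op alongside allocations per operation, which is usually enough to compare before and after the change.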

Red Flags in Diff Reviews

Use the following list while skimming a diff to quickly identify risky code:

  • New map[string][]byte or []byte caches without eviction logic.
  • Loop-local slices passed to goroutines without copy semantics; this can lead to data races or stale references (sketched after this list).
  • Channel sends in select statements lacking a default case in high-volume loops.
  • Goroutines spawned inside HTTP handlers without a context check.
  • Usage of runtime.GC() or manual tuning as a “fix” rather than addressing the root issue.
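
For the slice-capture flag above, a fix worth suggesting; chunks and upload are illustrative:

for _, chunk := range chunks {
    chunk := append([]byte(nil), chunk...) // copy before capture
    go func() {
        upload(chunk) // now owns its own backing array
    }()
}

Note that before Go 1.22 the loop variable itself is also reused across iterations, so rebinding matters even without the deep copy.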

Suggested Review Questions

Close reviews with targeted questions that prompt authors to share operational context:

  • What protects this goroutine from running after the request completes?
  • How large can this buffer grow and what happens when it fills up?
  • Do we have benchmarks or load-test data showing allocation deltas?
  • Is there a runbook entry for monitoring goroutine counts and queue depth?

When to Block the PR

Ship velocity matters, but reviewers should block until fixes land when they see unbounded queues, detached goroutines, or evidence that the change increases heap usage per request. Offer concrete suggestions such as introducing timeouts, adding pooling, or pairing the change with a dashboard alert.

Bringing AI into the Review Loop

AI code review assistants excel at pattern matching across repositories. Configure them to flag unbounded channels, missing Stop() calls on timers, and goroutines that ignore context cancellation. Use the tooling to surface candidates, then apply human judgment on trade-offs and design.

Go’s simplicity does not exempt teams from robust performance reviews. By layering static reasoning, targeted benchmarks, and operational guardrails, reviewers can stop leaks and contention before customers feel the impact.

