Resource Leak Detection in Code Review: A Comprehensive Guide for Engineering Teams

Tony Dong
August 23, 2025
16 min read

Resource leaks are silent killers that can crash production systems, degrade performance, and cost thousands in infrastructure spend. Yet most code reviewers struggle to identify them. Learn the patterns, techniques, and checklists that will make you a resource leak detection expert.

Key Takeaways

  • Resource leaks cost real money: A single database connection leak can exhaust connection pools and crash services
  • 5 critical resource types: Memory, file handles, database connections, network sockets, and thread pools
  • Try-with-resources pattern: The #1 technique for preventing resource leaks in modern languages
  • Code review checklist: 12 specific patterns to look for during manual review
  • Language-specific gotchas: Common leak patterns in JavaScript, Python, Java, and Go

What Are Resource Leaks?

Resource leaks occur when your application acquires system resources (memory, file handles, network connections) but fails to properly release them. Unlike syntax errors or logic bugs, resource leaks are insidious—they often work fine in development but cause production systems to slowly degrade over time until they crash.

Analyses of production incidents consistently rank resource leaks among the most common causes of system failures, yet they remain one of the hardest bug categories to catch during code review.

The 5 Critical Resource Types Every Reviewer Must Know

1. Memory Leaks

Memory leaks happen when objects are allocated but never freed, causing memory usage to grow until the system runs out of available memory.

Common patterns to watch for:

  • Event listeners that are never removed
  • Circular references preventing garbage collection
  • Static collections that grow indefinitely
  • Closures holding references to large objects

2. File Handle Leaks

Every opened file consumes a file handle from the OS. Most systems have limits (typically 1,024-65,536 open files per process). When you hit this limit, your application can't open new files.

Red flags during review:

  • File operations without explicit close() calls
  • Missing try-finally blocks around file operations
  • File streams opened in loops without proper cleanup
  • Temporary files created without deletion logic
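The first two red flags can be illustrated with a short Python sketch (the helper names are hypothetical):

```python
import os
import tempfile

def read_config_leaky(path):
    # Red flag: open() with no close() on any path. If read() raises, or
    # the interpreter delays finalization, the handle stays open.
    f = open(path)
    return f.read()

def read_config_safe(path):
    # `with` guarantees close() even when an exception propagates.
    with open(path) as f:
        return f.read()

# Temporary files also need explicit deletion logic.
fd, path = tempfile.mkstemp()
os.write(fd, b"timeout=30\n")
os.close(fd)
try:
    content = read_config_safe(path)
finally:
    os.remove(path)
```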

3. Database Connection Leaks

Database connections are expensive resources, and connection pools typically hold only 10-100 of them. A code path that leaks even one connection per request will steadily drain the pool and can cascade into complete service failure.

Critical review points:

  • Connections obtained but never closed
  • Early returns that bypass connection cleanup
  • Exception handling that doesn't include connection cleanup
  • Transactions without proper commit/rollback logic
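The "early return that bypasses cleanup" case is worth seeing concretely. A sketch using Python's built-in sqlite3 module, with a hypothetical `users` table:

```python
import sqlite3

def fetch_user_leaky(db_path, user_id):
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    if row is None:
        return None          # early return: the connection is never closed
    conn.close()
    return row[0]

def fetch_user_safe(db_path, user_id):
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None
    finally:
        conn.close()         # runs on every exit path, including exceptions
```

One nuance reviewers often miss: sqlite3's `with conn:` form manages transactions (commit/rollback), not closing, so the `finally` block above is still needed.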

4. Network Socket Leaks

Each network connection consumes a socket, and each socket consumes a file descriptor. Descriptor limits cap the total number of sockets a process can hold, and the 16-bit port space caps connections between a single local and remote address pair at 65,535. Socket leaks can prevent your application from making new network requests.

5. Thread Pool Leaks

Thread pools have fixed sizes. If threads are created but never properly shut down, or if tasks submitted to thread pools never complete, you can exhaust the thread pool and block all future work.
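In Python, the standard `concurrent.futures` API makes proper shutdown easy to verify in review, since it can be tied to a `with` block:

```python
from concurrent.futures import ThreadPoolExecutor

def process_batch(items):
    # The `with` block calls executor.shutdown(wait=True) on exit, so
    # worker threads are reclaimed even if a task raises.
    with ThreadPoolExecutor(max_workers=4) as executor:
        return list(executor.map(lambda x: x * x, items))

# Anti-pattern to flag in review: constructing an executor per request
# and never calling shutdown(); each call strands idle worker threads.
```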

Language-Specific Resource Leak Patterns

JavaScript/Node.js

JavaScript's garbage collector handles memory automatically, but resource leaks still occur frequently:

Common JavaScript leak patterns:

  • Event listeners on DOM elements that are never removed
  • Timers (setInterval) without corresponding clearInterval
  • Callbacks holding references to large objects
  • Node.js streams not properly closed

Python

Python has automatic memory management, but external resources require manual cleanup:

Python leak patterns to catch:

  • Files opened without with statements
  • Database connections not using context managers
  • Subprocess objects without proper cleanup
  • Thread objects that are started but never joined
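The subprocess and thread patterns above can be sketched in a few lines; `subprocess.Popen` has been a context manager since Python 3.2, which handles both waiting for the child and closing its pipes:

```python
import subprocess
import sys
import threading

# Subprocesses: the `with` block waits for the child on exit and
# closes its stdin/stdout/stderr pipes.
with subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
    text=True,
) as proc:
    out, _ = proc.communicate()

# Threads: join every thread you start, or it may outlive the request
# that created it.
results = []
worker = threading.Thread(target=lambda: results.append(out.strip()))
worker.start()
worker.join()  # without this, the worker may still run at teardown
```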

Java

Java's try-with-resources pattern was specifically designed to prevent resource leaks:

Java anti-patterns:

  • Resources not implementing AutoCloseable
  • Manual resource management instead of try-with-resources
  • Static collections growing without bounds
  • Thread pools not properly shut down

Go

Go requires explicit resource management with deferred cleanup:

Go leak indicators:

  • Missing defer statements for resource cleanup
  • Goroutines that never terminate
  • Channels that are never closed
  • HTTP client connections without timeout

The Resource Leak Detection Checklist

Use this checklist during every code review where resource allocation occurs:

Resource Acquisition Review Checklist

  • Every resource acquisition has a corresponding cleanup call
  • Cleanup occurs in finally blocks or defer statements
  • Early returns don't bypass resource cleanup
  • Exception handling includes resource cleanup paths
  • Loops that allocate resources also clean them up
  • Static/global collections have size limits or cleanup mechanisms
  • Database transactions have proper commit/rollback logic
  • Thread pools and executors are properly shut down
  • Event listeners are removed when no longer needed
  • Timers and intervals have corresponding clear functions
  • HTTP connections specify timeouts and connection limits
  • Temporary files are deleted after use
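Several of these checklist items (cleanup in finally blocks, early returns, loops that allocate) can be satisfied at once in Python with `contextlib.ExitStack`, which ties every resource acquired in a loop to a single cleanup point. A sketch with a hypothetical `merge_files` helper:

```python
from contextlib import ExitStack

def merge_files(paths, out_path):
    # Every handle registered on the stack is closed when the `with`
    # block exits, so early returns and exceptions cannot bypass cleanup.
    with ExitStack() as stack:
        out = stack.enter_context(open(out_path, "w"))
        for path in paths:
            src = stack.enter_context(open(path))  # safe to open in a loop
            out.write(src.read())
```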

Advanced Detection Techniques

Pattern Recognition

Train yourself to recognize these high-risk code patterns during review:

  • Resource allocation in loops: Any loop that opens files, creates connections, or allocates memory
  • Early returns: Functions with multiple exit points often miss cleanup on some paths
  • Exception-heavy code: Complex error handling often has resource cleanup gaps
  • Recursive functions: Stack-based resource allocation can quickly exhaust resources
  • Callback-heavy code: Asynchronous patterns can make resource lifetimes unclear

Static Analysis Integration

While manual review is crucial, combine it with static analysis tools for comprehensive coverage:

  • Java: SpotBugs, PMD, and SonarQube detect common resource leak patterns
  • JavaScript: ESLint rules for event listener cleanup and memory management
  • Python: Bandit and PyLint catch file handling and resource management issues
  • Go: go vet and golangci-lint identify goroutine and resource leaks

Real-World Production Incidents

Resource leaks have caused significant production outages across major technology companies. Here are documented incidents that highlight the importance of resource leak detection:

Database Connection Pool Exhaustion

A major web platform experienced a system-wide outage affecting 75% of users globally due to a database connection leak introduced through a code deployment. The leak caused gradual exhaustion of available database connections, leading to slowed response times and eventual system failure. This incident, documented in their post-mortem, demonstrates how connection leaks can cascade into complete service failures.

Key lesson: Database connection leaks often manifest gradually but can cause sudden, complete system failures under load.

File Descriptor Leaks in Container Runtime

CVE-2024-21626 revealed a critical file descriptor leak in runc (the container runtime used by Docker and Kubernetes) that allowed container escape: internal file descriptors, including a handle to the host filesystem, were inadvertently leaked into container processes. The vulnerability affected millions of containerized applications worldwide.

Key lesson: File descriptor leaks can have severe security implications beyond just resource exhaustion.

Resource Exhaustion Under High Load

Google's SRE team documented a cascading failure in their Shakespeare search service caused by resource leaks that only appeared under exceptional load. The leak occurred when searches failed due to terms not being in the corpus, and file descriptor exhaustion was confirmed in system logs.

Key lesson: Resource leaks may be undetectable under normal conditions but become critical during traffic spikes.

Handle Leaks in System Agents

The CheckMK monitoring agent was found to be leaking Windows handles across multiple Windows servers, posing a significant threat to system stability. Handle leaks of this type can cause performance degradation and eventually system crashes if left unchecked.

Key lesson: System-level software requires especially careful resource management as leaks can affect entire servers.

Tools and Automation for Resource Leak Detection

While manual code review is essential, these tools can help catch what humans miss:

Runtime Detection Tools

  • Java: VisualVM, Eclipse MAT (Memory Analyzer Tool), JProfiler
  • JavaScript: Chrome DevTools Memory tab, heap snapshots
  • Python: memory_profiler, tracemalloc, objgraph
  • Go: pprof for memory and goroutine leak detection
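For Python, `tracemalloc` ships in the standard library and can confirm a suspected leak by diffing heap snapshots. A minimal sketch with a simulated leak:

```python
import tracemalloc

tracemalloc.start()

leaked = []  # simulated leak: a module-level list that only grows

def handle_request():
    leaked.append(bytearray(100_000))

before = tracemalloc.take_snapshot()
for _ in range(50):
    handle_request()
after = tracemalloc.take_snapshot()

# Diffing snapshots shows which source lines retained the most memory,
# sorted from biggest to smallest difference.
top = after.compare_to(before, "lineno")[0]
print(top)
```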

CI/CD Integration

Integrate resource leak detection into your development workflow:

  • Run static analysis tools in CI pipelines
  • Set up memory usage monitoring in staging environments
  • Create alerts for resource consumption trends
  • Include resource leak tests in your test suite

Building a Resource-Aware Engineering Culture

Team Education

Resource leak prevention starts with team awareness:

  • Conduct training sessions on resource management patterns
  • Share post-mortems of production resource leak incidents
  • Include resource management in your coding standards
  • Create team-specific checklists for high-risk code areas

Code Review Process Enhancement

Integrate resource leak detection into your standard review process:

  • Require senior developer review for resource-heavy changes
  • Use dedicated resource management review templates
  • Flag PRs that touch resource allocation code for extra scrutiny
  • Create automated PR comments for high-risk patterns

Measuring Success: Resource Leak Detection Metrics

Track these metrics to measure your team's resource leak detection improvement:

  • Production resource leak incidents: Target zero per quarter
  • Resource leak detection in PR review: Track catches before production
  • Static analysis tool adoption: Percentage of projects with resource leak detection
  • Memory usage trends: Monitor application memory usage over time
  • Resource cleanup code coverage: Ensure cleanup paths are tested

Frequently Asked Questions

How do I know if my application has resource leaks?

Monitor memory usage, file descriptor counts, and database connection pool metrics over time. Gradual increases that never decrease indicate potential leaks. Tools like lsof (Linux/Mac) show open file handles, while application monitoring tools can track memory trends.
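On Linux, a quick spot check is possible from inside the process itself; this sketch is Linux-specific because of the `/proc` path (use `lsof -p <pid>` on macOS):

```python
import os
import resource

# Compare the current open-descriptor count against the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
open_fds = len(os.listdir("/proc/self/fd"))  # Linux-specific
print(f"{open_fds} open descriptors (soft limit {soft}, hard limit {hard})")
if open_fds > soft * 0.8:
    print("WARNING: nearing the file descriptor limit")
```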

Should I always use try-with-resources patterns?

Yes, for any resource that implements AutoCloseable (Java) or context manager protocols (Python). These patterns guarantee resource cleanup even when exceptions occur. The slight code overhead is worth the reliability improvement.
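In Python, any acquire/release pair can be wrapped this way with `contextlib.contextmanager`; the resource name here is hypothetical, and the `events` list stands in for real side effects:

```python
from contextlib import contextmanager

events = []  # records the acquire/release order for illustration

@contextmanager
def acquired(name):
    # Hypothetical acquire/release pair: any open/close, connect/close,
    # or lock/unlock fits this shape.
    events.append(f"acquire {name}")
    try:
        yield name
    finally:
        events.append(f"release {name}")  # runs even on exception

try:
    with acquired("db-conn"):
        raise RuntimeError("request failed mid-flight")
except RuntimeError:
    pass
```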

Can garbage collectors prevent all memory leaks?

No. Garbage collectors only free objects that are no longer referenced. If your code maintains references to objects it no longer needs (like in static collections or event listeners), the GC cannot free that memory.
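This is easy to demonstrate with `weakref`, which lets you observe whether an object was actually freed (the `Session` class and `registry` list are illustrative):

```python
import gc
import weakref

class Session:
    pass

registry = []           # a static-style collection, as in the leak pattern

s = Session()
probe = weakref.ref(s)  # lets us observe whether the object was freed
registry.append(s)      # the "forgotten" reference

del s
gc.collect()
still_alive = probe() is not None  # True: the registry pins the object

registry.clear()
gc.collect()
freed = probe() is None            # True: nothing references it now
```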

How often should we review code specifically for resource leaks?

Every PR that touches resource allocation should get resource-focused review. Additionally, conduct quarterly reviews of high-traffic services and any services that have experienced resource-related incidents.

What's the difference between memory leaks and memory bloat?

Memory leaks are unintentional—memory that should be freed but isn't. Memory bloat is inefficient but intentional memory usage, like loading entire datasets into memory when streaming would be better. Both can cause production issues but require different solutions.

Conclusion: Making Resource Leak Detection Second Nature

Resource leak detection is a learnable skill that can save your team from costly production incidents. By systematically applying the patterns, checklists, and techniques in this guide, you'll develop the instinct to spot resource management issues before they reach production.

Remember: every resource allocated in your code needs a clear path to cleanup. When you can mentally trace the lifecycle of every resource in a code change, you've mastered resource leak detection.

Ready to catch resource leaks before they hit production? Propel's AI-powered code review automatically detects resource management issues across multiple programming languages, saving your team hours of manual review time.

Join 500+ engineering teams using Propel to prevent resource leaks and improve code quality.

© 2025 Propel Platform, Inc. All rights reserved.