
Fixing High CPU Usage in AWS Elastic Beanstalk: A Debugging Journey

Written by Namit Jain · April 19, 2025 · 3 min read


Deploying to AWS Elastic Beanstalk often feels seamless—until it doesn’t.

Not long ago, I pushed a Node.js application to Elastic Beanstalk. Everything ran smoothly during initial tests and moderate usage. But once traffic spiked, the app began to crumble.

Symptoms:

  • CPU usage climbing rapidly
  • Requests timing out
  • Eventually, the instance would crash and restart

CloudWatch metrics weren’t pointing to anything helpful. There were no recent deployments, no environment changes, and nothing obvious in the application logs.

Yet, something was clearly wrong.


The Investigation: When CloudWatch Isn’t Enough

The first instinct was to check CloudWatch for clues—CPU metrics, memory usage, disk IO. But everything looked mostly normal except for one outlier: CPU usage spiking intermittently and then flatlining after a crash.

What didn’t make sense? The app itself wasn’t logging any issues. No errors, no warnings—just silence. That’s when I realized I needed to dig deeper.

I SSHed into the EC2 instance behind the Beanstalk environment and started tailing logs manually from within the box.

That’s when I found it.


The Real Issue: A Background Process Gone Rogue

Deep inside the instance logs, I found a rogue background worker process running independently of the main application threads. It had spun out of control—eating up CPU and silently tanking performance.

This process wasn’t part of the core app logic. It was meant to be lightweight, fire-and-forget—but under load, it snowballed and became a resource hog.

Because it wasn’t tied to the main app lifecycle, it also didn’t trigger typical logs or alerts.
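
To make the failure mode concrete, here is a minimal, hypothetical Node.js/Express sketch of the kind of pattern that can cause this (the route and the reprocessThumbnails function are illustrative, not the actual code from this incident):

    const express = require("express");
    const app = express();

    // Hypothetical CPU-heavy task. Each call runs synchronously on the event loop,
    // so under load the backlog grows faster than it drains, pegging the CPU and
    // starving request handling.
    function reprocessThumbnails(userId) {
      // ...synchronous image resizing, large JSON parsing, etc.
    }

    app.post("/upload", (req, res) => {
      res.status(202).send("accepted"); // respond right away
      setImmediate(() => reprocessThumbnails(req.query.userId)); // fire-and-forget, no cap
    });

    app.listen(process.env.PORT || 8080);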


The Fix: Containing the Chaos

Here’s how I fixed it:

  • Terminated the runaway background process
  • Adjusted instance configuration to better isolate and monitor background tasks
  • Implemented custom CloudWatch CPU alarms with a lower threshold for quicker detection (sketched below)
  • Set up centralized log shipping with the Amazon CloudWatch Logs agent to capture lower-level system logs automatically

These changes gave me much-needed visibility into the system, and once the environment was restarted, the app stabilized almost immediately.
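
For reference, here is roughly what the lower-threshold CPU alarm looks like when created with the AWS SDK for JavaScript (v3). The alarm name, region, threshold, and SNS topic ARN are placeholders, not the exact values from this environment:

    const {
      CloudWatchClient,
      PutMetricAlarmCommand,
    } = require("@aws-sdk/client-cloudwatch");

    const client = new CloudWatchClient({ region: "us-east-1" }); // placeholder region

    async function createCpuAlarm(instanceId) {
      await client.send(
        new PutMetricAlarmCommand({
          AlarmName: "eb-high-cpu-early-warning", // placeholder name
          Namespace: "AWS/EC2",
          MetricName: "CPUUtilization",
          Dimensions: [{ Name: "InstanceId", Value: instanceId }],
          Statistic: "Average",
          Period: 60,                  // check every minute
          EvaluationPeriods: 3,        // three consecutive breaches
          Threshold: 60,               // alert well before the instance melts down
          ComparisonOperator: "GreaterThanThreshold",
          AlarmActions: ["arn:aws:sns:us-east-1:123456789012:ops-alerts"], // placeholder topic
        })
      );
    }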


Key Takeaways

1. Don’t Rely Solely on CloudWatch

CloudWatch is a great start, but it doesn’t capture everything—especially not low-level processes or rogue system behavior. Sometimes, SSH access and manual log inspection are necessary.

2. Make Observability a First-Class Citizen

Debugging in Elastic Beanstalk can be frustrating because of its “black-box” feel. Build observability into your stack from the beginning:

  • Use detailed application and system logging
  • Ship logs to an external service (CloudWatch Logs, Datadog, etc.)
  • Monitor background processes separately from the main app (see the sketch below)
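
One way to handle that last point in Node.js is to run the worker as its own forked process and have it report its CPU usage back over IPC, so spikes are attributable to the worker rather than the app as a whole. A rough sketch, with worker.js as a hypothetical module:

    const { fork } = require("child_process");

    // Run the background worker in its own process instead of inside the web app.
    const worker = fork("./worker.js");

    worker.on("message", (msg) => {
      if (msg.type === "cpu") {
        console.log(`worker cpu: user=${msg.user}us system=${msg.system}us`);
        // ...ship this as a custom metric, alert if it keeps climbing, etc.
      }
    });

    // Inside worker.js (hypothetical), report usage periodically:
    // setInterval(() => {
    //   const { user, system } = process.cpuUsage(); // microseconds since process start
    //   process.send({ type: "cpu", user, system });
    // }, 10000);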

3. Prepare for Traffic Spikes

Performance issues often don’t show up until real users hit your app at scale. Add autoscaling, proper load testing, and resource isolation for background jobs.
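
On the resource-isolation point, even a simple concurrency cap on background jobs helps: during a spike, work queues up instead of running in unbounded numbers. A bare-bones, illustrative sketch (not a production-grade queue):

    // Runs at most maxConcurrent jobs at a time; the rest wait in line.
    class JobQueue {
      constructor(maxConcurrent = 2) {
        this.maxConcurrent = maxConcurrent;
        this.running = 0;
        this.pending = [];
      }

      push(job) {
        this.pending.push(job);
        this.drain();
      }

      drain() {
        while (this.running < this.maxConcurrent && this.pending.length > 0) {
          const job = this.pending.shift();
          this.running += 1;
          Promise.resolve()
            .then(job)
            .catch((err) => console.error("background job failed:", err))
            .finally(() => {
              this.running -= 1;
              this.drain();
            });
        }
      }
    }

    // Usage: no matter how many requests arrive, only two jobs run at once.
    const queue = new JobQueue(2);
    for (let i = 0; i < 10; i++) {
      queue.push(() => new Promise((resolve) => setTimeout(resolve, 50))); // stand-in for real work
    }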


TL;DR

  • Deployed a Node.js app to AWS Elastic Beanstalk; it crashed under load due to high CPU usage.
  • Root cause: an unmonitored background process consuming CPU silently.
  • Solution: killed the process, reconfigured instances, added monitoring and log shipping.
  • Lesson: CloudWatch isn’t always enough. Full observability is critical, especially on Elastic Beanstalk.