Advice for Rails Performance Optimization
Recently, our team was preparing a release to a large set of new users and needed to ensure that our application could meet their performance needs. Launch day was a month away. Months of steady Agile feature development needed to be matched with a healthy amount of performance engineering.
We started with a few goals in mind. We wanted:
- data-driven improvements
- to prefer simple performant code to complex caching strategies
- to use available tools to visualize badly performing requests
Plan a Solution
We brainstormed ideas on what would reliably lead us in the right direction. Requests per second (and seconds per request) were useful data points that we could get from ApacheBench (ab). Using ab, we could make a change, run the benchmark, and confirm that the change had a positive performance impact. One obvious way to increase requests per second is caching. Our team was hesitant to use caching for several reasons:
- there are many user-specific features
- caching increases the complexity of a codebase once you have to deal with cache population and invalidation strategies
- we felt that we could meet our performance needs without caching by employing a few other tools
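As a sketch of that measure-change-measure loop: ab reports requests per second for a URL (e.g. `ab -n 1000 -c 10 http://localhost:3000/`), and the same before/after comparison can be scripted in plain Ruby. The helper below is hypothetical and times an arbitrary block rather than real HTTP requests:

```ruby
require "benchmark"

# Hypothetical helper: derive a requests-per-second figure from running a
# block n times, mirroring the summary number ab prints for a URL.
def requests_per_second(n)
  elapsed = Benchmark.realtime { n.times { yield } }
  n / elapsed
end

# Stand-in workload; in practice this would be an HTTP request to the app.
rps = requests_per_second(100) { (1..5_000).reduce(:+) }
puts format("%.1f req/s", rps)
```

Run once before a change and once after, and the two numbers tell you whether the change helped.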
To track down specific bottlenecks we used Ruby's Benchmark module. We could perform an outside-in performance analysis of specific actions, benchmarking deeper and deeper until specific methods or database calls stood out as obvious problems.
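The outside-in idea can be sketched with the standard Benchmark module: time the action as a whole, then time each of its parts, and keep subdividing whichever report dominates. The method names below are stand-ins, not code from our app:

```ruby
require "benchmark"

# Stand-ins for the pieces of a controller action; in a real app these
# would be the database query, the view render, etc. (names are hypothetical).
def load_records; (1..10_000).map { |i| i * 2 }; end
def render_view;  Array.new(1_000) { "row" }.join("\n"); end

# Outside-in: time the whole action first, then each part, and drill into
# whichever line accounts for most of the total.
Benchmark.bm(14) do |x|
  x.report("whole action") { load_records; render_view }
  x.report("load_records") { load_records }
  x.report("render_view")  { render_view }
end
```

Whichever report dominates becomes the next thing to subdivide, until a single method or query is left standing out.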
The Big Wins
A few things stood out as Big Wins for us:
- Missing database indexes often came up as a cause of slowness. Often there were non-foreign-key columns that had not been indexed but were being used in a WHERE clause or a JOIN. The MySQL slow query log led us to several major culprits.
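The fix for a missing index is usually a one-line migration per column. A hypothetical sketch, assuming a Rails app (the table and column names are made up, not from our codebase):

```ruby
# Hypothetical migration: index non-foreign-key columns that show up in
# WHERE clauses or JOINs. Table and column names are examples only.
class AddMissingIndexes < ActiveRecord::Migration
  def self.up
    add_index :posts, :published_at          # used in WHERE published_at > ?
    add_index :posts, [:account_id, :state]  # composite index for a common filter
  end

  def self.down
    remove_index :posts, :published_at
    remove_index :posts, [:account_id, :state]
  end
end
```

The slow query log points at the query; EXPLAIN on that query confirms which column the index should cover.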
- We could see from our New Relic graphs that while repeatedly rendering the exact same view for different users, we would see a large spike in time spent rendering. This smelled to us like garbage collection. We turned on Rack::Bug and quickly noticed that we were performing 6 garbage collections during one request. We had read a blog article by Sam Coward about using Ruby Enterprise Edition's garbage collection statistics to track down object allocation and garbage collection problems. (https://blog.pivotal.io/users/scoward/blog/articles/identify-memory-abusing-partials-with-gc-stats) We installed his patch to Rack::Bug, which displays memory usage information for each ActionView template that is rendered. From there we were able to use our outside-in benchmarking to find our offending memory hogs and make them friendlier. We found out later that New Relic has released a version of RPM that also graphs time spent in garbage collection.
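The "how many GCs did this request trigger?" check that Rack::Bug gave us can be approximated in a few lines. Ruby Enterprise Edition exposed this through its GC stats patch; modern Ruby ships `GC.count` in the standard library, so a rough equivalent (a sketch, not our actual instrumentation) looks like:

```ruby
# Count how many garbage collections a block of work triggers, similar in
# spirit to the per-request number the Rack::Bug panel reported. GC.count
# is standard in modern Ruby; REE exposed comparable data via its GC patch.
def gc_runs_during
  before = GC.count
  yield
  GC.count - before
end

# Example: allocating many short-lived strings forces collections.
runs = gc_runs_during { 500_000.times { "tmp" * 10 } }
puts "GC ran #{runs} times"
```

Wrapping a suspect partial's render in a helper like this quickly shows which templates are the memory hogs.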
- This led us to our final optimization. We had removed most of our GC time, but still had not taken advantage of the Ruby Enterprise Edition GC variables. Our servers are hosted with Blue Box Group (http://www.blueboxgrp.com/), and their service staff helped us eke out the last bit of memory performance. They worked with us to tune our Ruby Enterprise Edition garbage collection variables to optimal settings.
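For reference, REE's garbage collector is tuned through environment variables exported before the Ruby process boots. The variable names below are the ones REE documents; the values are only commonly cited starting points, not the settings we ended up with:

```shell
# Illustrative REE GC tuning (example starting points, not our final values).
# These must be exported in the environment before Ruby starts.
export RUBY_HEAP_MIN_SLOTS=500000        # start with a larger heap
export RUBY_HEAP_SLOTS_INCREMENT=250000  # grow the heap in bigger steps
export RUBY_HEAP_SLOTS_GROWTH_FACTOR=1   # linear rather than exponential growth
export RUBY_GC_MALLOC_LIMIT=50000000     # allow more C allocation between GCs
export RUBY_HEAP_FREE_MIN=4096           # slots that must be free after a GC
```

The right values depend on your app's allocation profile, which is why benchmarking each change (as above) matters more than copying anyone's numbers.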
What We Learned
In the end, we learned a few things about optimization.
- There are many things you can do easily to improve performance without resorting to complex solutions.
- A data-driven approach can lead you quickly to a good solution.
- Use tools that can give you visibility into your performance bottlenecks.
- Set a realistic performance goal to meet. Your site can always perform better. The real question is, how much performance do you need?
These are the obvious, well-covered topics of performance enhancement. There’s a reason: they can take your application a long way.
Thanks to Evan Farrar for much of the wisdom that went into our performance optimization thought process.
About the Author: Adam Berlin