Monday, August 25, 2014

Caching Ruby Sequel Models to Reduce Object Allocations and Database Load

Caching seems to be an inevitable part of most applications, and the strategies you employ will differ from application to application depending on use. Write-intensive areas will see little benefit, while read-intensive areas can see significant performance boosts. As you design and build your application, you should be able to name off data stores that remain fairly static and can readily be held in memory with little risk of becoming stale.

I've been using the Sequel ORM to build my models on PostgreSQL, and my number one candidate for caching is contact details. Just about everything the application outputs includes at least one, if not 20 or more, references to the contact model. Further digging showed that within a single request, many contact lookups were for the same contact record, which made it a perfect candidate for caching, at least within the request.

Since most of these references are done through a primary key lookup on the table:

   ...
   contact = Contact[other_object.contact_id]
   ...


I figured I could proxy the [] method on the class and add my caching logic there.

I found a gem called lru_redux which, on Ruby 1.9+, takes advantage of the fact that a Hash preserves insertion order. That makes for a very efficient and easy-to-use LRU cache.
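
To see why insertion order is enough, here's a minimal sketch of the idea (my own illustration, not the gem's actual implementation): on every read, delete and re-insert the key so it moves to the back of the Hash, and on every write, evict the entry at the front, the least recently used, once the cache is over capacity:

class TinyLru
   def initialize(max_size)
      @max_size = max_size
      @data = {}
   end

   def [](key)
      return nil unless @data.key?(key)
      # Delete and re-insert so the key moves to the back (most recent)
      value = @data.delete(key)
      @data[key] = value
   end

   def []=(key, value)
      @data.delete(key)
      @data[key] = value
      # The front of the Hash is the least recently used entry
      @data.delete(@data.first[0]) if @data.size > @max_size
   end
end


With lru_redux providing a cache like this, I can keep a finite set of very active models: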

require 'lru_redux'
require 'active_support/time' # for 5.minutes.ago below

class Contact < Sequel::Model
   # Tracks when this instance was last loaded from the database
   attr_accessor :last_reload_time

   class << self

      # Hold the 100 most recently used contacts in memory
      @@cache = LruRedux::Cache.new(100)

      # Override [] to add in caching logic
      def []( rid )
         if ( rec = @@cache[rid] )
            # Model found in cache, no need to load it from the DB
            if rec.last_reload_time < 5.minutes.ago
               # Cached model instance is stale, reload it
               rec.reload
               rec.last_reload_time = Time.now.utc
            end
         else
            rec = super(rid)
            if rec
               # Don't cache nil models (id not found in DB)
               rec.last_reload_time = Time.now.utc
               @@cache[rid] = rec
            end
         end
         rec
      end
   end

end

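Call sites don't need to change at all; the caching is transparent behind the familiar primary key lookup. A quick illustration (the id 42 is just for show):

contact_a = Contact[42]         # first lookup runs the SELECT and caches the model
contact_b = Contact[42]         # within five minutes, served straight from @@cache
contact_a.equal?(contact_b)     # => true, the very same object instance
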

Since I'm going to leave the models in memory between requests, I still need to ensure a model doesn't get stuck there indefinitely without a periodic reload. Maintaining the last_reload_time instance variable keeps the data fairly up-to-date. Since I know these records don't change often, five minutes is probably conservatively low. You might ask: why not just reload the model when it changes? That reload would only be local to the current process, not to the multiple processes spread over the several servers I have running. Those other processes would have no knowledge of the change, so in an effort to keep things simple, I settled on a plain timeout and a fairly short LRU list.

With the above caching strategy, I chopped a few hundred milliseconds off my most used endpoint. The savings were really two-fold. First, caching removes the round trip to the database along with the processing time required to parse the result and build the model. Second, because these objects persist between requests, there are fewer object allocations on each request and thus less garbage collection. GC gets expensive, so reducing it alone made a significant improvement.
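
If you want to verify savings like these yourself, a rough measurement with Ruby's built-in Benchmark module goes a long way (the id and iteration count here are arbitrary):

require 'benchmark'

# Warm the cache with one lookup, then time repeated cache hits
Contact[42]
puts Benchmark.realtime { 10_000.times { Contact[42] } }

Run the same timing with the [] override commented out to see the difference the cache makes.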

The above example won't solve every problem, and there's no replacement for studying your application's specific request patterns, execution paths, and data structures. Performance tuning is not a simple task, but having as many tools as possible at your disposal for various bottlenecks can make the process a whole lot smoother.