Since most of these references are done through a primary key lookup on the table:
```ruby
contact = Contact[other_object.contact_id]
```
I figured I could proxy the `[]` method on the class and add my caching logic there.
I found a gem called lru_redux which, on Ruby 1.9+, takes advantage of the fact that Hashes are insertion-ordered. This makes for a very efficient and easy-to-use LRU cache for keeping a finite set of very active models:
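The ordered-Hash trick lru_redux relies on is worth seeing in isolation. The sketch below is illustrative only, not the gem's actual implementation: reading a key deletes and re-inserts it so it moves to the "most recent" end of the Hash, and eviction simply drops the first (least recently used) key.

```ruby
# Minimal LRU sketch using Ruby's insertion-ordered Hash.
# Illustrative only; lru_redux handles more edge cases.
class TinyLru
  def initialize(max_size)
    @max  = max_size
    @data = {} # Ruby 1.9+ Hashes preserve insertion order
  end

  def [](key)
    return nil unless @data.key?(key)
    # Delete and re-insert so the key moves to the "most recent" end
    val = @data.delete(key)
    @data[key] = val
  end

  def []=(key, val)
    @data.delete(key)
    @data[key] = val
    # Evict the least recently used entry (the first key in the Hash)
    @data.delete(@data.first[0]) if @data.size > @max
  end

  def count
    @data.size
  end
end

lru = TinyLru.new(2)
lru[:a] = 1
lru[:b] = 2
lru[:a]     # touch :a, so :b is now least recently used
lru[:c] = 3 # evicts :b
lru[:b]     # => nil
lru[:a]     # => 1
```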
```ruby
class Contact < Sequel::Model
  attr_accessor :last_reload_time

  class << self
    @@cache = LruRedux::Cache.new(100)

    # Override [] to add in caching logic
    def [](rid)
      if (rec = @@cache[rid])
        # Model is found in cache, no need to load it from the DB
        if rec.last_reload_time < 5.minutes.ago
          # Cached model instance is stale, reload it
          rec.reload
          rec.last_reload_time = Time.now.utc
        end
      else
        rec = super(rid)
        if rec # Don't cache nil models (id not found in DB)
          rec.last_reload_time = Time.now.utc
          @@cache[rid] = rec
        end
      end
      rec
    end
  end
end
```
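The same pattern can be exercised without Sequel or the gem. This is a hypothetical stand-in: `FakeDb`, `lookup`, and the plain-Hash cache are all invented here to make the cache hit and the don't-cache-nil behavior observable by counting actual fetches.

```ruby
# Hypothetical stand-in for the database; counts how often a record
# is actually fetched so cache hits can be observed.
class FakeDb
  attr_reader :fetches
  def initialize
    @rows    = { 1 => { name: 'Alice' }, 2 => { name: 'Bob' } }
    @fetches = 0
  end

  def fetch(id)
    @fetches += 1
    @rows[id]
  end
end

DB          = FakeDb.new
CACHE       = {}  # plain Hash standing in for LruRedux::Cache
STALE_AFTER = 300 # seconds, mirroring the five-minute window

Record = Struct.new(:id, :attrs, :last_reload_time)

def lookup(id)
  if (rec = CACHE[id])
    if Time.now.utc - rec.last_reload_time > STALE_AFTER
      rec.attrs            = DB.fetch(id) # stale: reload from the "DB"
      rec.last_reload_time = Time.now.utc
    end
  else
    attrs = DB.fetch(id)
    if attrs # don't cache missing ids
      rec = Record.new(id, attrs, Time.now.utc)
      CACHE[id] = rec
    end
  end
  rec
end

lookup(1); lookup(1); lookup(1)
DB.fetches # => 1 : only the first call touched the "DB"
lookup(99) # => nil, and the missing id is not cached
```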
Since I'm going to leave the models in memory between requests, I still need to ensure a model doesn't get stuck there indefinitely without being reloaded periodically. Maintaining the `last_reload_time` instance variable ensures the data stays fairly up-to-date. Since I know these records don't change often, five minutes is probably conservatively low. You might also ask: why not just reload the model when it changes? That reload would only be local to the current process, not across the multiple processes I have running spread over several servers. Those other processes have no knowledge of the change, so in an effort to keep things simple, I settled on a short timeout and a fairly small LRU list.
With the above caching strategy, I chopped a few hundred milliseconds off my most-used endpoint. The savings were really two-fold. First, the caching removes the round trip to the database and the processing time required to parse the data and build the model. Second, because these objects are more persistent, there are fewer object allocations on each request and thus less garbage collection. Since GC is expensive, reducing it alone made a significant improvement.
The above example won't solve every problem. There's no replacement for studying your application's specific request patterns, execution paths, and data structures. Performance tuning is not a simple task, but having as many tools as possible at your disposal for the various bottlenecks you'll hit makes the process a whole lot smoother.