Monday, November 4, 2013

Considerations on Client-Side Javascript Caching Strategies

If you look at any library that implements a type-ahead style search widget, you will inevitably see a solution for caching remote search results. It's a problem that pops up just about anywhere you have a list of data you'd like to reuse without making another round trip to the server to fetch it. Recently, I've been working on several projects that required a variety of caching techniques. I spent some time trying to isolate the commonalities between them so I could reuse parts of the logic and avoid writing a bunch of redundant code. Below, I've tried to highlight the approach I took to solving some of these common problems.

Before anything is cached, you need to make a request to fetch data from the server to start filling that cache. Let's assume you have a text box into which the user types a search string and, as they type, it triggers the lookup - either in the cache or on the server. Since you'd rather avoid firing multiple concurrent requests to the server while the user is typing, you'd like to employ a method of throttling the search requests. I already have Underscore loaded, so I prefer to use as much of its functionality as possible before writing my own solution or looking elsewhere. The library contains several handy "function" functions which essentially wrap functions with enhancements. In the case of _.throttle(), it takes a passed-in function and wraps it with throttling logic so the function fires at most once per specified interval; the leading and trailing options control whether it fires at the start of that interval, at the end, or both (see the documentation for a full explanation):

   this.fetchData = _.throttle( function( p ) {

         /* if search is already in cache */
            /* return cached data set */
         /* else */
            /* $.ajax() to server */
      },
      500,               // Minimum interval in milliseconds
      { leading: false } // Only trigger on the trailing edge
   );


Now that requests are protected from firing too close together, let's first assume nothing is in the cache and we need to make a request to the server. Using the jQuery $.ajax() function with the parameters specific to your application seems like the most likely solution. However, something to be mindful of while performing searches remotely (especially with slower queries) is the volatility of the search variables. From the point when the search starts to the point when it completes, the search term, offset, or any other input to the request may have changed. Remember, your throttling is not detecting whether a request has completed before allowing another request; it's simply forcing requests to be spaced out by a minimum time interval (500ms in this example). You have to somehow capture all the values the response handler requires so that they won't be affected by events that occur after the request started. Since jQuery passes its own arguments (the response data first) to the done() callback and you can't add your own, you can set up a partial function using Underscore at the point of initiating the request. Your callback will then receive those captured values along with the returned data:

   function callback( offset, term, data ) {
      ...
      /* update cache, fire events, etc */
      ...
   }

   $.ajax({
       url: endPoint,
       /* let jQuery serialize and URL-encode the query parameters */
       data: { term: term, offset: offset, count: count },
       dataType: 'json',
       method: 'GET'
    }).done( _.partial( callback, offset, term ) );



jQuery is happy, you're happy. Let's move on to caching the data.

We have some data from the server and we want to cache it so that, if the same conditions that caused it to be retrieved occur again, we can just return that data instead of issuing another server request. In general, most caching uses a hash where the key represents some meaningful state that matches a specific set of data. This could be any kind of criteria, from a search string to a page in a list. When I was only dealing with a search string, I was just using a hash of arrays. Each hash entry represented a unique search and the value was the array of results. However, once I started paginating the results, I found that I had to handle loading arbitrary parts of the result set (due to skipping pages). In this case, the best option for storing data in the cache was a hash of hashes. As in the first approach, the search term indexes the outer hash; however, instead of an array of results, each row of data is indexed in a nested hash by its position in the overall result set. As results are loaded from the server, they can be integrated into the cache using the offset of the paginated data:

 
   /* update the cache with new data */
   set: function( key, offset, data ) {

      /* first results for this key - initialize the entry */
      if ( !this._cache[key] )
         this._cache[key] = { data: {}, last: -1, full: false };

      /* index each row by its absolute position in the result set */
      if ( data.length > 0 ) {

         _.each( data, function( d, i ) {
            this._cache[key].data[offset + i] = d;
         }, this );

      }
   }
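
For reference, the done() callback from the earlier $.ajax example is the natural place to feed the cache. In this sketch, localCache is an assumed variable holding the cache object that exposes the set() method above:

   function callback( offset, term, data ) {
      /* store this page of results under the search term, indexed by offset */
      localCache.set( term, offset, data );
      /* fire change events, render the results, etc. */
   }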



With data in the cache, I can detect whether, for a given key, offset, and number of records, the local cache contains all the data. I wrote this assuming the start/end intervals would never be consistent, which means there could be holes anywhere in the cache that would cause a miss:

   has: function( key, offset, count ) {

      var start = offset,
          end = offset + count;

      /* nothing cached for this key at all */
      if ( !this._cache[key] ) return false;

      /* don't look past the known end of the result set */
      if ( this._cache[key].last > -1 && end > this._cache[key].last )
         end = this._cache[key].last;

      /* a hit means the cache is full or every index in the range has data */
      return this._cache[key].full || _.all( _.range( start, end ), function( o ) {
         return !!this._cache[key].data[o];
      }, this );
   }


As you can see, I make heavy use of Underscore to reduce the amount of code necessary to iterate and query the data. It significantly reduces bulky looping logic and the number of extra test variables. The end result of the above code is to ensure that, for the given range, the cache has data set on each record index.

You'll notice I short-circuit the check above by evaluating the "full" property on the cache object. As data is loaded into the cache, I check whether the returned dataset is smaller than the expected page size. If it is, we've loaded the last page of data and now know exactly how many records should be in the cache. Once we know this, we can record that total, check the cache against it, and mark it full to avoid the iteration:


   /* a short page means we've reached the end of the result set */
   if ( data.length < this._pageSize ) {
      this._cache[key].last = offset + data.length;
   }

   /* once the total is known, check whether every record is present */
   if ( this._cache[key].last > -1 ) {
      this._cache[key].full = this.has( key, 0, this._cache[key].last );
   }
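
Reading a page back out of the cache is the other half that these snippets don't show. A minimal get() along the same lines - assuming the same _cache layout and that has() was checked first so every index in the range is present - might look like this:

   /* return the cached rows for a key as an array, starting at offset */
   get: function( key, offset, count ) {

      var end = offset + count;

      /* don't read past the known end of the result set */
      if ( this._cache[key].last > -1 && end > this._cache[key].last )
         end = this._cache[key].last;

      return _.map( _.range( offset, end ), function( o ) {
         return this._cache[key].data[o];
      }, this );
   }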



Once I know that the cache is complete, I also know that all the possible data has been returned for that specific key. This extra knowledge can be useful as a search is refined. For a type ahead, as the user types, the search term becomes more specific. If you have a complete cache for a shorter prefix of that string, then the results for a longer, more specific string should be contained in that data. As an example, if the search "bana" has a complete cache, then "banan" or "banana" would only refine that cache further. In our naive caching logic so far, we've only used the current term as a key to determine if there is a hit on the cache. Obviously, "bana" and "banan" are different, so the code will fire off a new server request for each search term. However, if we were to chop off the last character and check the cache, we'd get a hit. If, in addition to that hit, we also knew that all the data was loaded from the server, then we could just use that data and perform the search locally:


   while ( term ) {
      /* drop the last character and see if the shorter term has a complete cache */
      term = term.length > 1 ? term.slice( 0, -1 ) : null;
      if ( term && this.localCache.full( term ) )
         break;
   }

   /* local search on cached data to refine the data set 
   *  create a new cache entry for the longer term with the 
   *  search results
   */
   ...
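
To make that placeholder concrete, here's a rough sketch of the local filtering step. It assumes each cached row carries a label field and that the server performs a case-insensitive substring match - both assumptions about your own data and search semantics rather than anything the caching code can know:

   /* filter an already-complete result set the same way the server would */
   function refineLocally( rows, newTerm ) {

      var needle = newTerm.toLowerCase();

      return _.filter( rows, function( row ) {
         /* "label" and the substring test mirror an assumed server-side search */
         return row.label.toLowerCase().indexOf( needle ) !== -1;
      });
   }

The returned array can then be handed to set() under the longer term and the entry marked full, so subsequent keystrokes for that term resolve entirely against the local cache.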



This solution requires you to replicate the search logic the server uses so you produce the same results. It also requires knowledge of the search domain to ensure the refinement will actually be contained in the larger dataset. However, where applicable, it can greatly improve response time for the user and reduce the number of requests sent to the server.

One final concern when dealing with cached data is deciding when the cache is no longer valid. There are two problems to consider here. First, the remote data may change, leaving stale, out-of-date data in your cache. In this situation, you have no way of knowing the data has changed. Your best option is to keep track of a timestamp and, based on the turnover of your data, decide how long is too long to hold the cache. The second problem is when local data is changed. Client-side MVC frameworks like Backbone make single page applications easy to build, with immediate updates to the UI and the local data. However, any edit to that data can significantly affect the cache state. If an edit was made to a field that's part of the search criteria, the record may no longer match. If you're working with sorted data, the page on which the record resides may have changed. The easy solution is to just clear the whole cache whenever an edit is performed. This brute force approach doesn't seem like the best solution, but the alternatives lend themselves to a lot more code and the potential for bugs that are much harder to track down.
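
As a rough illustration of both ideas, the cache object could stamp each entry when it's populated and expose a blanket clear. The touched property and the maxAge parameter below are my own additions rather than part of the code above:

   /* record when a key was last filled (call from set() once the entry exists) */
   touch: function( key ) {
      this._cache[key].touched = new Date().getTime();
   },

   /* treat entries older than maxAge (in milliseconds) as cache misses */
   isFresh: function( key, maxAge ) {
      return !!this._cache[key] &&
             ( new Date().getTime() - this._cache[key].touched ) < maxAge;
   },

   /* brute force: throw everything away after a local edit */
   clear: function() {
      this._cache = {};
   }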

These are some of the considerations I made when working on a caching solution. Some may not be applicable to a given situation, while others may not be capable of handling another case. I've found that sometimes a really simple solution is fine while, at other times, something much more complex is required. A one-size-fits-all solution could probably be built to handle a large number of cases. However, as I explained above, there are opportunities to enhance performance that may not specifically belong in the main caching logic. In the end, these considerations will probably be on your list of design topics and, given the problem you're solving, may lead to any number of possible outcomes for a functioning solution.