Monday, August 25, 2014

Caching Ruby Sequel Models to Reduce Object Allocations and Database Load

Caching seems to be an inevitable part of most applications. The strategies you employ will differ from application to application depending on use. Write-intensive areas will receive little benefit, while read-intensive areas can see significant performance boosts. As you design and build your application, you should be able to name data stores that remain fairly static and can readily be held in memory with little risk of becoming stale. I've been using the Sequel ORM to build my models on PostgreSQL, and my number one candidate for caching is contact details. Just about everything output from the application includes at least one, if not 20 or more, references to the contact model. Further research found that, within one request, many contact lookups were actually for the same contact record, which made it a perfect candidate for caching at least within the request.

Since most of these references are done through a primary key lookup on the table:

   ...
   contact = Contact[other_object.contact_id]
   ...


I figured I could proxy the [] method on the class and add my caching logic there.

I found a gem called lru_redux which, on Ruby 1.9+, takes advantage of the fact that a Hash is ordered. This makes for a very efficient and easy-to-use LRU cache to keep a finite set of very active models in memory:

require 'lru_redux'

class Contact < Sequel::Model
   attr_accessor :last_reload_time

   class << self

      @@cache = LruRedux::Cache.new(100)

      # Override [] to add in caching logic
      def []( rid )
         if ( rec = @@cache[rid] )
            # Model is found in cache, no need to load it from the DB.
            # Note: 5.minutes.ago comes from ActiveSupport; use
            # Time.now.utc - 300 if that isn't loaded.
            if rec.last_reload_time < 5.minutes.ago
               # Cached model instance is stale, reload it
               rec.reload
               rec.last_reload_time = Time.now.utc
            end
         else
            rec = super(rid)
            # Don't cache nil models (id not found in DB)
            if rec
               rec.last_reload_time = Time.now.utc
               @@cache[rid] = rec
            end
         end
         rec
      end
   end

end


Since I'm going to leave the models in memory between requests, I still need to ensure a model doesn't get stuck there indefinitely without reloading on a periodic basis. Maintaining the last_reload_time instance variable ensures the data stays fairly up-to-date. Since I know these records don't change too often, five minutes is probably conservatively low. You might ask: why not just reload the model when it changes? That reload would only be local to the current process, not across the multiple processes I have running spread over several servers. Those other processes have no knowledge of the change, so, in an effort to keep things simple, I settled on a short timeout and a fairly small LRU list.

With the above caching strategy, I chopped a few hundred milliseconds off my most used endpoint. The savings were really two-fold. First, the caching removes the round trip to the database and the processing time required to parse the data and build the model. Second, because these objects are more persistent, there are fewer object allocations on each request and thus less garbage collection. Since GC gets expensive, reducing that alone made a significant improvement.

The above example may not solve every problem. There's no replacement for studying your application's specific request patterns, execution paths, and data structures. Performance tuning is not a simple task, but having as many tools as possible at your disposal to address various bottlenecks can make the process a whole lot smoother.

Saturday, June 28, 2014

Learning to Graph with D3

I've been meaning to play with D3 for quite some time. Maybe the only thing stopping me was the seemingly overwhelming number of features available in the library. There's a definite learning curve involved in getting into the right mindset required to use D3. However, once you get there, it's actually really easy to use and, surprisingly, does a lot of the heavy lifting for you. As you navigate through the documentation, you need to separate the basics from the advanced features. If you need a static graph, you only need to learn a subset of the library to be successful. Once you have that foundation in place, you can build into transitions, interactivity, and layouts.

For my first foray into the world of D3, I chose to keep things as simple as possible and focus on deconstructing the multivariate example graph to isolate the basic components of a graph. I was really interested in what was actually happening at each step and how the pieces fit together. Below, I'll attempt to iteratively build the example one feature at a time to get to the final product. I consider each step a basic component of any graph, so you should be able to take the pieces and assemble them to rapidly start your own "first" graph.
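To give a sense of the foundation involved, here is a minimal sketch of a static line graph using the v3-era D3 API. The data, dimensions, and the #chart container are illustrative assumptions, not the multivariate example itself:

// A minimal static line graph (D3 v3 API); data and selector are made up
var data = [ { x: 0, y: 3 }, { x: 1, y: 7 }, { x: 2, y: 5 } ],
    width = 400,
    height = 200;

// Scales map data values to pixel coordinates
var xScale = d3.scale.linear()
      .domain( d3.extent( data, function( d ) { return d.x; } ) )
      .range( [ 0, width ] );

var yScale = d3.scale.linear()
      .domain( [ 0, d3.max( data, function( d ) { return d.y; } ) ] )
      .range( [ height, 0 ] );  // inverted so larger values render higher

// A line generator converts the data array into an SVG path string
var line = d3.svg.line()
      .x( function( d ) { return xScale( d.x ); } )
      .y( function( d ) { return yScale( d.y ); } );

d3.select( '#chart' ).append( 'svg' )
      .attr( 'width', width )
      .attr( 'height', height )
   .append( 'path' )
      .datum( data )
      .attr( 'd', line )
      .attr( 'fill', 'none' )
      .attr( 'stroke', 'steelblue' );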

Sunday, June 8, 2014

Searching for Patterns when Developing Browser Extensions

Browser plugin development has become as easy as writing a plain vanilla web application. The only difference is that the result is installed on the user's computer and has a lot more access to local resources. As such, many libraries and design patterns utilized in browser-based web applications can be reused in extension development. There are some quirks here and there to learn related to scope and security. Most importantly, though, you'll have to rewrite portions of your extension to deploy it in different browsers (if they even support an HTML/JS-based plugin). Both Chrome and Firefox offer APIs supporting HTML/JS-based plugins, which is a good enough reason for me to explore what use cases I can find for them. I started with Chrome because Firefox has more installation requirements and, in my opinion, more development hurdles when you want to test your creation. Chrome simply requires you to make a directory and tell it to load the extension (and reload it after every change). That's the kind of simplicity I expect in my development environment.

Maybe the biggest learning curve when building an extension for the first time is deciphering the terminology for the different execution contexts available in the application. You also have to identify what functionality exists in each execution scope, since that affects how to organize the extension's logic, as well as any limitations that prevent adopting techniques from traditional web application development. Using the diagram on the left as a guide, you can see there are really only two scopes to manage in an extension: the extension context and the page context. Each browser platform names these differently, but the purpose of each is essentially the same. The challenge is decomposing several key components and identifying their role within each context. I've named a few pieces that I think make sense and, personally, would find useful when building an extension.

As I worked on my extension, it became apparent that a small abstraction layer would be useful to facilitate interaction between the different contexts, normalize the creation of UI components, and create a consistent mechanism for managing communication between scopes. Since I was starting in Chrome with aspirations of also deploying to Firefox, I wanted to make as much of my extension as possible reusable across different browser extension platforms. Chrome and Firefox have many similarities in how extensions are structured, context isolation, security, and API capability. However, there are nuances, and the more those can be abstracted, the easier it will be to port between browsers.

The goal in this discussion is to investigate how to break up the extension into logical chunks and stub out a few key code blocks to create some concrete examples of what parts of a final solution might look like. In subsequent posts, I'll build off those pieces to create a small library encapsulating the core components and demonstrate how to use them to build a simple extension.

Extension Context

Code that runs in the extension context has access to the full browser API; however, it has no access to the page DOM. Chrome uses the terminology "background pages," which have been confusingly split into "persistent" background pages and event pages. The responsibilities placed on this area of the program include providing access to the browser API, activating the extension, managing extension-level UI pages, and storing/retrieving global settings and data. Visually, you can add an icon to the toolbar which enables user interaction at the browser level regardless of the site or content displayed in the browser.
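For reference, an event page is declared in manifest.json by marking the background scripts as non-persistent. A minimal sketch (the file name is an illustrative assumption):

   "background": {
      "scripts": [ "background.js" ],
      "persistent": false
   },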

Extension Controller

Early on, it was apparent that the primary role of the "background page" is to act as the main messaging hub and maintain application state. As such, all the other components will ask for services or information through the browser's messaging services (in Chrome, it's part of the runtime API) using a request/response pattern:

chrome.runtime.onMessage.addListener(

   function( request, sender, sendResponse ) {
      // request has two keys - topic and data
      // topic is a string, data is a hash relevant to the topic
      // which the handler will understand
      switch ( request.topic ) {

         case 'subject.scope':

            sendResponse({ foo: bar });
            break;

         ...
      }
   }

);


This chunk of code implements the line labeled "A" in the diagram above. The important thing to note is the structure of the message: the request is an object containing "topic" and related "data" keys. The topic is a string structured with both a subject and a scope to help avoid collisions. You might use topics like "user.data" to fetch data about the current user, "user.login" to start the authentication process, or "site.monitor" to register a callback when certain conditions are met using the Chrome events API. In any case, it's an opinionated structure for the messaging that is not inherently enforced by the browser API, so we want something that can help ensure consistency across requests.
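As a sketch of how the sending side might enforce that convention (the helper name and validation are my own assumptions layered over the real chrome.runtime.sendMessage API):

// Hypothetical wrapper that validates the "subject.scope" topic format
function sendRequest( topic, data, callback ) {
   if ( !/^\w+\.\w+$/.test( topic ) ) {
      throw new Error( 'Expected topic in "subject.scope" form, got: ' + topic );
   }
   chrome.runtime.sendMessage( { topic: topic, data: data }, callback );
}

// Usage:
sendRequest( 'user.data', {}, function( response ) { /* ... */ } );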

Popup

A popup in the extension context is a UI component that can run any web page content inside a browser-provided window outside the context of the page. These pages are useful for collecting settings or information required by the extension to customize the user experience. Technically, a popup is not even required. You can load these pages into a new tab and offer a complete web application inside the browser without connecting to a server to download the content. Your extension could run as a locally installed web application activated by clicking a button on the browser toolbar. Granted, distributing updates to your application requires more effort, but it's an interesting concept for building client-side web apps.

Activating your popup or page can happen in one of two ways. Either it can be automatically triggered based on the manifest.json configuration:

  "browser_action": {
      "default_title": "My Bookmarks",
      "default_icon": "icon.png",
      "default_popup": "popup.html"
  },


Or, you can manually open a page in a tab or popup from the background page:

chrome.browserAction.onClicked.addListener(function(tab) {
  var manager_url = chrome.extension.getURL("manager.html");
  focusOrCreateTab(manager_url);
});


The former is obviously easier and works well if you only have one page that needs to be displayed. The latter offers significantly more control and comes in handy if different pages should be displayed under different conditions. I'd guess that most extensions can implement one popup page to enable extension-level user interaction, which is why there is a section to configure it in the manifest. In the extension I'll be building, there is only one popup page to collect some user-specific data. Everything else operates inside the page context.
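The second snippet above assumes a focusOrCreateTab helper. One possible implementation, assuming the "tabs" permission is declared in the manifest, might look like:

function focusOrCreateTab( url ) {
   chrome.tabs.query( { url: url }, function( tabs ) {
      if ( tabs.length ) {
         // The page is already open - bring its tab to the front
         chrome.tabs.update( tabs[0].id, { active: true } );
      } else {
         chrome.tabs.create( { url: url } );
      }
   });
}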

Page Context

Code running inside the page scope can access the DOM of the currently loaded page. Anything you would do in a normal web page to query and manipulate the DOM, you can do in this scope. The biggest difference is that all the code loaded in this context is isolated from any code running on the page. If jQuery is loaded on the current site, your extension code can't use it. You must load your own copy of jQuery. The only thing shared between the page loaded in the browser window and the extension is the DOM.
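Code enters this context through the content_scripts section of manifest.json. A minimal sketch (the file names and match pattern are illustrative):

   "content_scripts": [ {
      "matches": [ "<all_urls>" ],
      "js": [ "jquery.js", "page-controller.js" ]
   } ],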

Since you can manipulate the DOM, you can inject content into the current site. While this may seem great, in practice it may not be the best approach. Anything injected into the DOM is subject to the current styling of the page, which may produce undesirable results in the injected markup. Vice versa, any style sheets injected into the page may adversely affect it and disrupt the functionality of the current site. As such, only minor changes should be made, and maybe only when you know exactly how the site reacts to those changes. More elaborate UI elements need to be isolated from the current page.

Based on these considerations, I've broken the page context into three pieces. First, a controller manages all the components in the page context and any communication with the extension context. Second, a monitoring agent helps identify interesting events on the current site which the extension may want to use to trigger an action. Finally, an iframe container builds the foundation for UI components that will be instantiated to allow user interaction within the page.

Page Controller

This part of the extension manages any logic required at the page context. Its primary purpose is to monitor events from page content detectors, manage the UI frames' life-cycle, and bridge requests to the extension context. From a framework perspective, the UI frame component is broken into two pieces. On the controller side, there is a portion to send and receive messages from the iframe window and handle life-cycle events. These parts will be mirrored on the iframe side of the library to handle the same activities, but from the frame's perspective. Using messages, we can communicate between windows:

   
   // Receive messages from parent controller
   window.addEventListener( 'message', ... )

   // Send messages to parent controller
   // This is inside an abstraction layer to normalize
   // identifying each iframe to ensure proper message
   // routing
   function notify( topic, data ) {
      var win = this.$iframe[0].contentWindow,
          message = JSON.stringify({ target: this.cid, topic: topic, data: data });

      win.postMessage( message, this.$iframe[0].src );
   };

   // Send message to extension context for processing
   function request( topic, data ){
      chrome.runtime.sendMessage( { topic: topic, data: data }, function( response ) {
         ...
      });
   };


This API can only pass strings between the windows, so all the data needs to be serialized and deserialized when working with the method and events. I chose a message structure similar to the one I used in the extension context to keep things consistent. The library can wrap this logic to allow passing basic JavaScript object hashes between the main window and its iframe children. The messaging will broadcast to all the child frames, so it's necessary to identify which one should respond to an event. Since the main window side of the iframe view is an object instance representing the iframe, it can identify itself and creatively set the URL on the iframe so each side knows its identity and responds accordingly. The cid above accomplishes this, along with the src in the postMessage call. Between the two mechanisms, you can ensure secure communication is maintained between the two windows.
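For completeness, here is a sketch of the controller-side receiving end. The findViewBySrc registry lookup and the handle method are assumed stand-ins for whatever the library actually provides:

   // Route messages from child frames back to the view instance
   // that owns the originating iframe
   window.addEventListener( 'message', function( event ) {
      var message = JSON.parse( event.data );

      // Hypothetical lookup matching the message's source URL
      // to the iframe view that created it
      var view = findViewBySrc( message.source );
      if ( view ) {
         view.handle( message.topic, message.data );
      }
   });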

UI Frame

The iframe component enables isolating styling from the site's content. Since it's its own window as well, you can run anything inside it to create the view displayed to the user. If you want to build something fancy, you can use Backbone, Angular, or any other framework you'd like. Minimally, you will want to add a little logic to wrap the messaging between the main window and the iframe window to ensure consistent communication:

   
   // Receive messages from parent controller
   window.addEventListener( 'message', ... )

   // Send messages to parent controller
   function notify( topic, data ) {
      window.parent.postMessage( JSON.stringify({ source: window.location.href, topic: topic, data: data }), '*' );
   }


The only other issue to be aware of when running code in the iframe is working around cross-domain security policies. When I tried to render a Backbone view in an iframe I created, I ran into issues because the Underscore micro-template uses eval to inject data into the compiled template and render the view. To enable this feature, you have to add this line to your manifest file:

   "content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'"


Monitor/Detect

The final piece of the puzzle, which may not be important to every plugin, is watching the target page for interesting changes. Since most sites dynamically load and generate content, it's not good enough to wait for the page to load and then check it for certain content. For instance, if you're creating an extension to perform actions on images found in a page, those images may only load as the user scrolls. If it's a single-page app, the page loads once and everything else renders inside that page dynamically. Instead of binding to the loaded event, you have to bind to a mutation event. However, these events fire often and are prone to bog down or crash the page if not used wisely. It was fortuitous that Addy Osmani wrote an article about DOM mutation observers, because before I considered that approach, I was manually binding/unbinding and throttling the event:

function monitorChange() {
   $( document.body ).bind( 'DOMSubtreeModified', detectContent);
}

function unmonitorChange() {
   $( document.body ).unbind( 'DOMSubtreeModified', detectContent);
}

var detectContent = _.throttle(
      function() {

          var realChanges = 0;

          unmonitorChange();

          console.log( 'tree changed' );
          ...
          /* Find changes and do something, which may modify the DOM */
          ...

          monitorChange();
      },
      500,
      { trailing: false }
   );

detectContent();



Using the observer avoids both of those workarounds and even provides detail about the changes made. I'd still like to abstract this part slightly to provide selectors that trigger different actions if that content is among the changes:

Detect.monitor({
   'insert img': function() {
      ...
   },

   'remove img': function() {
      ...
   }
});

Integrating that back into the page context's controller enables it to act on changes of interest and perform an appropriate action. Since the mutation observer API doesn't provide a robust query selector for exactly what to observe, this layer can provide that capability and dispatch an appropriate subset of targeted events.
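Under the hood, a monitor like that could be backed by a MutationObserver. A minimal sketch, with the dispatch into the registered handlers left as comments:

var observer = new MutationObserver( function( mutations ) {
   mutations.forEach( function( mutation ) {
      // addedNodes/removedNodes describe exactly what changed
      if ( mutation.addedNodes.length ) {
         // match added nodes against 'insert' selectors and dispatch
      }
      if ( mutation.removedNodes.length ) {
         // match removed nodes against 'remove' selectors and dispatch
      }
   });
});

observer.observe( document.body, { childList: true, subtree: true } );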

Next Steps

So far I've only made a broad outline of the types of components I'd like to have when building a browser extension. Now it's time to flesh those pieces out so something useful can be built with them. I'm still early in my research and will definitely refine these concepts a bit. But after cobbling together a simple plugin, these were the main themes I saw emerging in my work.

Monday, May 12, 2014

State of the Stack: What Javascript Libraries Do You Use?

Every now and then I take a moment to reflect on the state of the art and make sure I'm not heading the way of the dinosaurs. Every five years or so, the Internet seems to head in another direction (or at least starts to shift). Technologies and techniques get old fast, and staying on top of the latest trends is important to remain relevant. Building your client stack probably starts with the selection of an MV*-style library. What you choose will impact all aspects of your project. InfoQ has built a survey of the adoption of various MVC libraries. You have to vote to see the results, but it probably shouldn't come as too big a surprise that Angular, Backbone, and Knockout are at the top of the list.

Interested in how that compares to another source, I turned to GitHub to look at popularity. The number of watches, stars, and forks is a pretty good indicator of relative adoption:



The "related" graph depicts how many results you get on GitHub when you search for one of these libraries. It's a reasonable indicator of the supporting ecosystem that has evolved around each library. Why is this important? By themselves, these libraries are not complete frameworks for building web apps. They do help solve the problem of separating data and logic from the DOM and establishing good design patterns. However, depending on your application, you'll need a lot more than these solutions to create the desired user experience. I've attempted to generalize the basic components you'll probably consider when building your application:



Each of the top three MVC libraries addresses parts of these areas and provides meaningful support to extend or integrate other solutions as necessary. The supporting community around these tools becomes important as you try to find existing solutions to meet your needs. The less you have to write and test, the better. If something already exists and is in reasonably high use, then you can feel comfortable that it will work, and you can focus on solving problems unique to your application.

Today, I primarily use Backbone as my MVC. Its strength lies on the data side, with a focus on abstracting the acquisition and persistence of data to the server. I typically swap out Underscore in favor of Lodash and drop in MomentJS to make date manipulation a breeze. On the UI side of things, I start with Bootstrap and only use the interaction and autocomplete widgets in jQuery UI. RequireJS takes care of modularization and dependency management. Everything else is pulled from one of the ecosystems surrounding those main libraries and adapted to play well with the rest. Most of the glue code I have written is related to streamlining two-way bindings between form controls and data sources. It's probably the greatest weakness of Backbone, and although the community has some solutions, it still required some work to fill the gap and create something that worked within the context of the applications I need to build.
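As an illustration of the kind of glue involved (a minimal sketch, not my actual library), binding a single form field to a model attribute in both directions might look like:

var BoundView = Backbone.View.extend({
   events: { 'change input[name=email]': 'onInput' },

   initialize: function() {
      // model -> view: re-render whenever the attribute changes
      this.listenTo( this.model, 'change:email', this.render );
   },

   onInput: function( e ) {
      // view -> model: push the form value into the model
      this.model.set( 'email', e.target.value );
   },

   render: function() {
      this.$( 'input[name=email]' ).val( this.model.get( 'email' ) );
      return this;
   }
});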

These MVC libraries are all entering their fourth year of existence. At this point, we can probably call them mature and reasonably stable. The question is: if technology keeps changing at its historical pace, how relevant are these libraries in building the future of the web? What are the latest innovations in building better technologies for developers, enabling leaner, more efficient code that creates a rich user experience and performs well across both desktop and mobile devices? We'll always discover better ways to do the same thing and be faced with finding new solutions as technology changes. I'd expect these established libraries to be capable of adapting to those changes for some time to come. Maybe in another 10 years, we'll look back and say that these solutions are outdated and no longer represent the cutting-edge, de facto standard developers turn to when building applications.

Monday, April 28, 2014

A Basic Star Rating Bar jQuery Widget

The first web-based rating system I remember using was provided by Netflix when picking movies to watch. I can remember inspecting the HTML to learn how they built it. At the time, it seemed like an amazing trick given the available browser technology. Today, a rating bar UI widget is pretty commonplace. Whether you build your own or pick from the pool of existing options depends on your needs. Interested in what was out there, I did some quick research into what existed related to libraries I already work with, like Bootstrap and jQuery UI. On the Bootstrap GitHub issue log, an external project was recommended in the discussion for those interested in an implementation. I also found there's a plan to build this type of widget into jQuery UI, and found a project that already provides such an implementation.

Looking through these projects, it became evident that they are a bit larger than I really need. I was curious how little code I could write to make a very simple rating bar that would work in a variety of situations. If it's small enough, it costs very little in the way of bandwidth and would be easy to extend and adapt to other use cases. The final product I built is about 120 lines of code (both JS and CSS), including comments and blank lines. Follow along through this post to see how I got there. Not interested in how I built it? Then scroll to the end for a jsFiddle demo and links to the final code.
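To give a taste of the approach, here is a hedged sketch of a bare-bones version built on the jQuery UI widget factory. It is not the final code from this post, and the widget name and CSS classes are illustrative:

$.widget( 'demo.starbar', {
   options: { stars: 5, value: 0 },

   _create: function() {
      var self = this;
      this.element.addClass( 'starbar' );

      // Render one clickable star per option
      for ( var i = 1; i <= this.options.stars; i++ ) {
         $( '<span>', { 'class': 'star', 'data-value': i, text: '\u2605' } )
            .on( 'click', function() {
               self.option( 'value', $( this ).data( 'value' ) );
            })
            .appendTo( this.element );
      }
      this._refresh();
   },

   _setOption: function( key, value ) {
      this._super( key, value );
      if ( key === 'value' ) { this._refresh(); }
   },

   _refresh: function() {
      var value = this.options.value;
      this.element.children( '.star' ).each( function( i ) {
         $( this ).toggleClass( 'on', i < value );
      });
   }
});

// Usage: $( '#rating' ).starbar( { value: 3 } );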