Sunday, June 30, 2013

Using Google Maps Drawing Manager to Create User Selectable Areas


The Google Maps API has been on my list of things to learn for some time now. I recently had an opportunity to dive in and try a few features out. There's a lot of good documentation and small demos available from Google which makes it pretty easy to get started. I decided to create a simple scenario where, given a set of places marked on the map, the user can select a region of interest and list more details about the points inside the selected area.

I started this experiment with the Drawing Tools demo, tweaking it, and adding various different features. There are several pieces needed to make it work:


  1. Create the map and center it

  2. Plot all the points the user can select

  3. Create the DrawingManager, tell it what to draw, and then listen for it to complete drawing that object

  4. Each time a selection is made, find the points inside the area and list them next to the map

  5. Maintain only one selection area. Clear the existing one when drawing a new one and, as a convenience, make a completed selection draggable/resizable



From that list of goals, I started digging through the documentation and examples, looking for what I needed to realize the functionality. Creating the map and DrawingManager was already handled in the demo I started from. However, I didn't want to draw multiple overlays - just a circle. I tweaked the configuration options to only draw circles and not show any controls:



drawingManager = new google.maps.drawing.DrawingManager({
  drawingMode: google.maps.drawing.OverlayType.CIRCLE,
  drawingControl: false,
  circleOptions: {
    fillColor: '#ffff00',
    fillOpacity: 0.3,
    strokeWeight: 1,
    clickable: false,
    editable: false,
    zIndex: 1
  }
});



Now, I could only draw circles but there was no limit on how many I could draw. The next step was to listen for when a draw operation completed and track the circle object that was drawn so I could ensure only one was drawn at a time:



google.maps.event.addListener(drawingManager, 'circlecomplete', function( circle ) {
  selectedArea = circle;
});



Once I capture the selected region, I know there is something selected. However, I still need to either remove the current circle before drawing a new one or prevent a new circle from being drawn. I attempted the former by listening to the map's click event:



google.maps.event.addListener(map, 'click', function() {

  if ( selectedArea ) {
    selectedArea.setMap(null);
    google.maps.event.clearInstanceListeners(selectedArea);
  }

  selectedArea = null;
});


But that handler was never called. It seems the DrawingManager was preventing map click events. My solution was to use jQuery to listen for mousedown events on the map container DIV with the same handler function as above:



$('#map-canvas').on('mousedown', function() {
  ...
});


Now, as the next circle is drawn, the current one is removed from the map.

Before anything can be selected, there needs to be something to select. I created a simple array of points and added them to the map:


var sites = [
  { location: 'Alfond Swimming Pool', lat: 28.5903, lng: -81.3484 },
  { location: 'Cahall Sandspur Field', lat: 28.5928, lng: -81.35 },
  ...
];

function plotMarkers () {

  $.each( sites, function () {

    if ( this.marker ) this.marker.setMap(null);

    this.position = new google.maps.LatLng(this.lat, this.lng);

    this.marker = new google.maps.Marker({
      position: this.position,
      map: map,
      title: this.location
    });

  });
}


I saved the LatLng object since I'll need it later to determine whether a location is inside the selected area. That process starts in the DrawingManager's circlecomplete event handler, where I add a call to a function that determines which points fall inside the circle:



google.maps.event.addListener(drawingManager, 'circlecomplete', function( circle ) {

  selectedArea = circle;

  listSelected();

});



That function uses the geometry library's computeDistanceBetween() helper to measure the distance from the center of the circle to each site on the map and check whether it's less than the circle's radius:


function listSelected () {

  var r = selectedArea.getRadius(),
      c = selectedArea.getCenter();

  var inside = $.map( sites, function ( s ) {

    var d;

    if ( (d = google.maps.geometry.spherical.computeDistanceBetween( s.position, c )) <= r )
      return s.location + ' (' + (Math.round(d/100)/10) + ' km)';

  });

  $('#map-selected').html( inside.sort().join('<br/>') );
}


If a site is within the circle, it is added to the list of locations displayed next to the map. As this is a proof of concept, I've made no attempt to make it work with larger sets of data; clearly, a different strategy would be required to accommodate anything more than a few hundred points. Also, the "extra" information is not exactly spectacular, but it shows the idea.
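As an illustration of one such strategy, independent of the Maps API, a cheap bounding-box prefilter can reject most points before doing the expensive spherical math. The helpers below are hypothetical stand-ins (a hand-rolled haversine plays the role of computeDistanceBetween), not part of the demo code:

```javascript
// Hypothetical helper, not part of the Maps API: great-circle distance in meters
function distanceMeters(a, b) {
  var R = 6371000, toRad = Math.PI / 180;
  var dLat = (b.lat - a.lat) * toRad,
      dLng = (b.lng - a.lng) * toRad;
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(a.lat * toRad) * Math.cos(b.lat * toRad) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Cheap bounding-box prefilter: skip the trig for points clearly outside
function pointsInCircle(sites, center, radius) {
  var latDelta = radius / 111320;  // meters per degree of latitude (approx.)
  var lngDelta = latDelta / Math.cos(center.lat * Math.PI / 180);
  return sites.filter(function (s) {
    if (Math.abs(s.lat - center.lat) > latDelta) return false;
    if (Math.abs(s.lng - center.lng) > lngDelta) return false;
    return distanceMeters(s, center) <= radius;
  });
}
```

For a few hundred points the two cheap comparisons eliminate most candidates; beyond that, a spatial index (grid or quadtree) over the bounding boxes would be the next step.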

Right now the map is stuck in edit mode, so you can't interact with it; every action results in a drawing activity. You can't drag the map around or use the mouse wheel to zoom (only the on-map controls work). For now, I decided to toggle the map between interact and select modes, using a button to enable the DrawingManager for creating a selection. Upon drawing the circle, the DrawingManager is disabled and the resulting circle can be moved and resized. A few changes were needed to enable this feature. First, the circleOptions in the DrawingManager needed to change:


drawingManager = new google.maps.drawing.DrawingManager({
  drawingMode: google.maps.drawing.OverlayType.CIRCLE,
  drawingControl: false,
  circleOptions: {
    fillColor: '#ffff00',
    fillOpacity: 0.3,
    strokeWeight: 1,
    clickable: false,
    editable: true,
    zIndex: 1
  }
});


Setting editable to true allows the circle to be modified after being drawn. Next, I added the button and toggling logic:



$('#map-controls').children().button().click(toggleSelector);

function toggleSelector () {

  var $el = $('#map-controls button');

  if ( $el.button('option', 'label') == 'Select' ) {

    $el.button('option', 'label', 'Interact');
    drawingManager.setMap(map);
  } else {

    $el.button('option', 'label', 'Select');
    drawingManager.setMap(null);
  }

  selecting = !selecting;
}


And added the toggleSelector() function to the circlecomplete event handler so the DrawingManager is disabled after a circle is drawn:


google.maps.event.addListener(drawingManager, 'circlecomplete', function( circle ) {

  selectedArea = circle;

  google.maps.event.addListener(circle, 'center_changed', listSelected);
  google.maps.event.addListener(circle, 'radius_changed', listSelected);

  listSelected();
  toggleSelector();

});


Additionally, I added listeners to watch for changes to the circle that may affect the selected region. When that happens, I want listSelected() to be called to rebuild the list of locations inside the selected area.
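One caveat: center_changed and radius_changed fire continuously while the circle is dragged or resized, so listSelected() can run many times per second. A debounce wrapper, sketched below in plain Javascript (this helper is not part of the demo code), would coalesce those bursts into a single rebuild:

```javascript
// Minimal debounce: delay fn until `wait` ms after the last call
function debounce(fn, wait) {
  var timer = null;
  return function () {
    var ctx = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function () { fn.apply(ctx, args); }, wait);
  };
}

// Usage would then be, e.g.:
// google.maps.event.addListener(circle, 'center_changed', debounce(listSelected, 150));
```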

A working demo is on my sandbox along with the complete source. Although relatively simple, the example exercises several different features of the Google Maps API and shows how they can be tied together to create an interactive user experience.

Sunday, June 23, 2013

Backbone Router-Friendly jQuery UI Tabs Widget

Tabs are a basic page layout tool that helps organize and separate content. The jQuery UI library provides a fairly feature-rich tabs solution, complete with AJAX content loading. I had originally thought I could just drop that widget on my page, set up the links for each tab, and use a Backbone router to capture the tab changes and load my content into the appropriate tab. However, I quickly discovered that the tabs widget prevented the click event from bubbling up to the point where the browser location would change, so the router never saw the change.

Upon further review, I realized that I didn't need a whole lot of elaborate functionality. While the jQuery UI widgets are great, they are designed to provide functionality that may not make sense in a Backbone-based application. In the case of tabs, you are mostly interested in the layout and not much else. Depending on the use case, you'll probably tie the tabs to routes and then use Backbone to handle loading and generating all the content for each tab. Now, you could just listen for the "activate" event from the tab widget and rewrite the location yourself:


var myTabsView = Backbone.View.extend({

  render: function () {

    this.$el.html(...);
    this.$el.tabs();

    return this;
  },

  events: {
    'tabsactivate' : function ( e, ui ) {

      var hash = ui.newTab.find('a[href^=#]').attr('href');

      if ( location.hash.length == 0 )
        location.href += hash;
      else if ( location.hash != hash )
        location.href = location.href.replace( /#.*$/, hash );

    }
  }

});


But, somehow, that just seems to defeat the purpose. You're essentially building a router to trigger your router. I thought I'd at least explore what would be necessary to build a basic tabs layout that would provide the same behavior without any intervention from external code. Given that, I decided to build a tabs widget using jQuery UI as a foundation so it would have a similar look-and-feel, leverage the existing framework, and provide flexibility for using it in several contexts.

My markup looks very close to the markup required by the jQuery UI tabs widget. However, I did not link each tab LI element to its panel DIV element via the HREF/ID attributes. Instead, I assumed the order (and number) of each would match:



<div id="tab-layout">
<ul>
<li><a href="#overview">Overview</a></li>
<li><a href="#usage">Usage</a></li>
<li><a href="#example">Example</a></li>
<li><a href="#lorum">Lorum Ipsum</a></li>
</ul>
<div></div>
<div></div>
<div></div>
<div></div>
</div>



Each of those LI/A elements needs to look like a tab. I was trying to keep things as simple as possible, so I thought I could use a combination of button widgets and a few well-placed styles to reproduce the look and functionality with as little code as possible. After all, the button widget can style an anchor tag to look almost exactly like a tab. However, again, there's a lot of extra functionality in the button widget that worked against me. A button will not keep the active (pushed) state after it's clicked. I tried to defeat this, but the widget attaches a single-execution click handler on the document node to remove that state. Since the only reason I wanted the widget was for the styles, I just wrote code to replicate the results:



// Generate jQuery UI Button markup on an anchor
// without using a button widget
$el.find('a')
  .addClass( 'ui-button ui-widget ui-state-default ui-button-text-only ui-corner-top' )
  .each(function () {
    var $button = $( this );
    $( '<span>' )
      .addClass( 'ui-button-text' )
      .text( $button.text() )
      .appendTo( $button.empty() );
  });



The next consideration was to expose a way to both programmatically change and retrieve the current tab and panel node. This allows the widget to decide what constitutes the container for a tab panel and neatly encapsulates that part of the design away from the rest of the code. Additionally, in cases where you may not use a router to detect tab changes, I decided to publish both the beforeActivate and activate events, just like the existing jQuery UI tabs widget. With those considerations in mind, I organized the widget to cache the tabs and panels DOM nodes, build a consistent object to represent a tab in both the events and the get/set method, and add classes to style the tabs. Here are some of the highlights:




/**
 * Listen for clicks, trigger events, and use active() to change the tab
 */
events: {
  'click li' : function ( event ) {

    // Get the index amongst the LI's siblings
    var idx = $( event.currentTarget ).index(),

        // Make normalized objects for the tab we're leaving
        // and the tab we're changing to. We don't need to know
        // the index of the current tab; the function will figure
        // it out.
        oTab = this._getTabInfo(),
        nTab = this._getTabInfo( idx ),

        eventData = {
          oldTab: oTab.tab,
          oldPanel: oTab.panel,
          newTab: nTab.tab,
          newPanel: nTab.panel
        };

    // Provide a way to cancel the change
    if ( oTab.tab.index != nTab.tab.index &&
         this._trigger( 'beforeActivate', event, eventData ) !== false ) {

      // Use the setter to change the tab
      this.active( idx );
      this._trigger( 'activate', event, nTab );
    } else {

      event.preventDefault();
    }

  }
},

/**
 * Get/Set the current tab. Accepts an index or a string matching the hash (less #)
 */
active: function ( tab ) {

  var idx = 0;

  if ( arguments.length > 0 ) {

    // Resolve the argument type and find the tab
    if ( typeof(tab) == 'string' && tab.length > 0 ) {
      idx = this.tabs.index( this.tabs.find( '[href=#'+tab+']' ).closest( 'li' ) );
    } else if ( typeof(tab) == 'number' && tab >= 0 ) {
      idx = tab;
    }

    this.panels.hide().eq(idx).show();

  } else {

    return this._getTabInfo();

  }
},

/**
 * Assemble a tab info object from the provided index. No argument means
 * get the currently active tab.
 */
_getTabInfo: function ( idx ) {

  var idx = arguments.length > 0 ? idx : this.tabs.find( 'a.ui-state-active' ).closest( 'li' ).index(),
      tab = this.tabs.eq( idx ).find( 'a' );

  return {
    tab: { index: idx, hash: tab.attr( 'href' ).slice(1), label: tab.text() },
    panel: this.panels.eq( idx )
  };
}



Now it's time to use the widget to build a simple test Backbone app. All this does is create a router for the changing tabs and fill each tab with some static content.



$(function () {

  var Router = Backbone.Router.extend({

    routes: {
      "overview" : "showAsText",
      "usage"    : "showAsCode",
      "example"  : "showAsCode",
      "lorum"    : "showAsText"
    },

    showAsText: function () {
      var selected = $('#tab-layout').simpletabs('active');

      selected.panel.html($('#tab-'+selected.tab.hash).html());
    },

    showAsCode: function () {
      var selected = $('#tab-layout').simpletabs('active');

      selected.panel.html('<pre><code class="prettyprint">'+htmlEncode($('#tab-'+selected.tab.hash).html())+'</code></pre>');

      PR.prettyPrint();
    }

  });

  $('#tab-layout').simpletabs();

  var router = new Router();
  Backbone.history.start();

});



I have a demo setup on my sandbox which has additional usage information. If you're interested in using the widget or want to use it as a starting point, you'll need both the Javascript and CSS files. The widget has no dependencies on Backbone. It only requires jQuery and jQuery UI to work.

Several features of the jQuery UI widget are missing that might make sense to have available. I did not implement the ARIA attributes to enable screen readers, the keyboard navigation, or the ability to enable/disable individual tabs. Depending on your needs, these features may be desirable. My initial goal was simply to determine what a minimal setup providing the tabs functionality looks like.


Now that I have this widget built, I can use it in several different ways that work well in my Backbone apps. It maintains the visual style of the other jQuery UI widgets I use on my pages and keeps the familiar jQuery call interface for creating and managing instances. Keeping it light-weight reduces the likelihood that the widget will behave in a way that conflicts with the functionality I want to build. The jQuery UI widgets can be a great platform for building applications. Sometimes, however, you may need to either tweak an existing component or build something similar so that it integrates better with Backbone.

Saturday, June 15, 2013

Simplify Loading Backbone Objects with the RequireJS Namespace Plugin

Generally, your Backbone project is going to be broken out into different files and directories to keep things organized and modular. If you weren't using RequireJS to load everything, you might declare a namespace in the global scope and, in each file, add the object definition to its place in the namespace object so it can be used when necessary. If you're using RequireJS, then you're probably aware of some of the issues with hard-coding the namespace and cluttering the global scope. However, even as you avoid those issues, you still want to build a local object that maps all your view, collection, and model definitions into a similar namespace so they're readily available. The RequireJS Namespace Plugin does just this and can significantly reduce the number of intermediate definition modules and other featureless boilerplate code.
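For contrast, the hand-rolled global-namespace pattern mentioned above typically looks something like this (the names here are hypothetical):

```javascript
// Each file guards and extends one shared global object
var App = (typeof App !== 'undefined') ? App : {};
App.Models = App.Models || {};

// models/model1.js would then attach its definition to the shared object
App.Models.Model1 = function () {
  this.type = 'model1';
};
```

Every file repeats the guard boilerplate, and the global `App` is visible to all other scripts on the page - exactly the clutter the plugin helps avoid.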

The GitHub README does a good job introducing the plugin, so I'm not going to rehash that. I took my example Backbone + RequireJS project and experimented with using the plugin to manage loading my Backbone elements. There were a few adjustments I needed to make along the way to ensure my optimized component worked properly. I decided to branch the project to make it easier to compare and tinker. Below, I'll discuss the changes I made to get everything working.

First, following the example in the Namespace README, I was able to flatten my directory structure and eliminate some files.

I had this layout:

- lib
  |- main.js
  |- scripts
     |- models.js
     |- views.js
     |- models
     |  |- model1.js
     |  |- model2.js
     |- views
        |- view1.js
        |- layout
           |- view1.html


And after namespace had this:

- lib
  |- main.js
  |- models
  |  |- model1.js
  |  |- model2.js
  |- views
     |- view1.js
     |- layout
        |- view1.html


That removed the intermediate scripts directory and the models.js and views.js files, which simply declared more dependencies and returned object hashes. The ramification of removing those items is that the dependencies must be declared with the "namespace!" prefix and the mappings defined in the RequireJS configuration object. It took a try or two to figure out the nomenclature for each, but it seems easier to define and maintain the namespace from the configuration object than to create a bunch of files to define it.

So the main file changes to add the plugin, which triggers loading and mapping the dependencies per the configuration.

/component1/lib/main.js:

define(['namespace!./views', 'namespace!./models'], function(views, models) {

  return {
    Views: views,
    Models: models
  };

});


Without the configuration, namespace won't know what to include in each object. The left side should correspond to a dependency declared with the namespace! prefix, and the right side lists the modules, relative to that dependency, that namespace will require internally before mapping them into an object.

component1/test/lib.js:

(function () {

  require.config({

    ...

    config: {
      namespace: {
        "models": "Model1,Model2",
        "views" : "View1"
      }
    }

    ...

  });

})();


Since the plugin needs to be part of the optimized build, it will expect the configuration to be available even after being built. I've designed my component to be self-contained using almond, so I don't want downstream users to have to define that namespace mapping. I was hoping to avoid including it at all but couldn't see an easy way to rebuild the dependencies and write them to the optimized file. As such, the best solution was to include the plugin and alter the wrapper files to add the configuration to the built component. Now the namespace is internally managed by the component and nobody needs to have any knowledge of its existence.

component1/build/wrap.end:

...

require.config({
config: {
namespace: {
"models": "Model1,Model2",
"views" : "View1"

}
}
});

...


The end result of all this was to remove the basically unnecessary files, flatten the directory structure, and build object hashes with all the Backbone object definitions indexed by name. In its current state, the component only returns a final object with all those elements combined. That's probably an oversimplification of what would happen in a real application, but it provides a reasonable demonstration of the idea.

Sunday, June 9, 2013

Building Modular Backbone Apps with RequireJS and Bower

One goal in application design is to properly architect separation of concerns by carefully compartmentalizing different functionality and avoiding coupling components such that it's difficult to reuse or change them in future projects. While it's easier to achieve this within a specific component by properly defining models and views, it becomes more challenging when different logical components need to interact. As a simple example, a products component can clearly be separated from a customer component. However, an order component would have an interest in consuming some of the functionality available in both the product and customer components. Defining clear public interfaces to these different components is all part of modular design and, as projects grow, this methodology sometimes breaks down in the interest of time or for various other reasons.


An Idea



The idea I wanted to try was to completely separate each component's development from the main application. Each component would be built as a library that could be used in any other project as needed. The final application would essentially load the required components, manage the navigation, and act as a mediator between components. I knew that RequireJS was probably a good candidate for managing the definition of modules and handling dynamic loading of components. However, I still needed something to help manage the project dependencies so all the required libraries could be easily refreshed as needed. It was less searching and more accident that I found this tutorial on using Bower with RequireJS and Backbone to set up a project environment with all the dependencies automatically downloaded and available. Basically, it's Ruby's Bundler for Javascript projects, which is exactly what I wanted. Of course, it's still pretty new and not every library out there is using it. However, the basic Backbone requirements are available and, more importantly, it can be used to define my different Backbone-based component libraries.


Proof of Concept



A basic (if useless) proof of concept follows and shows the principal ideas behind the layout of the different projects. I've already mentioned that I want different logical application components to be self-contained libraries physically separated from the main application project. Additionally, I want to ensure I can both test while developing and generate an optimized build for production. Below is the project structure I'll be referring to throughout the rest of the discussion. I've created a project on GitHub with this structure and all the files used to create the sample application/component.


projects
|
|- webapp1 # The main application
| |- bower.json # Bower package file. Includes component1 dependency
| |
| |- app # Application code goes here
| | |- css
| | |- scripts
| | |- main.js # Main entry point. Requires app and starts
| | |- app.js # Defines navigation, mediates, requires components
| |
| |- build
| | |- build.js # Defines single optimized file
| | |- build.sh # Run r.js, copy dependencies to dist directory
| | |- index.html # Production main page. Loads the final optimized bundle.
| |
| |- components # Bower installs dependencies here
| |- vendor # Put non-bower things here
| |- tpl.js
| |
| |- dist # Target of the production build
| |
| |- test
| |- config.js # Loads everything with no optimization
| |- visual.html # Browse here to test
|
|- component1
|
|- bower.json # Defines the component and its dependencies
|- component1.js # Built component after r.js optimization
|- component1.min.js # Minified version
|
|- build # Create the stand-alone AMD compatible component
| |- almond.js
| |- build.js
| |- build.sh
| |- wrap.start
| |- wrap.end
|
|- components
|- vendor
| |- tpl.js
|
|- lib # Define Backbone component here main.js is the
| |- main.js # top level entry point for testing and building
| |- scripts # the final self-contained component. Each level
| |- models.js # builds the top-level Component1 object so the
| |- views.js # final component can be referenced globally
| |- models # or via RequireJS as component1
| | |- model1.js
| | |- model2.js
| |- views
| |- view1.js
| |- layout # The optimized version will use tpl to pre-compile
| |- view1.html # all the micro-templates and include them in the build
|
|- test # Test each variation of the component:
|- index-build.html # The built component1.js, using RequireJS to load
|- build.js
|- index-lib.html # Not built, load all the individual files one-by-one
|- lib.js
|- index-global.html # Built, but does not use AMD, just the global object



Creating a Component Library



Before writing the application, I'm going to need a component or two that will be used in the application. I'm using the word "component" here to not confuse it with module or package which are terms you'll see in RequireJS and Bower. A component could be really anything. I'm using concepts like customer, product, and order as examples, but the point is to create something that can stand on its own and provide meaningful functionality.

Define the Package

Before writing any code, I'm going to create my Bower definition. The package file actually works in two ways: first, while you're developing the package, it makes it easy to fetch and refresh all your dependencies; second, once the package is built, it can be referenced in other projects that use Bower to manage external dependencies.

bower.json:

{
  "name": "component1",
  "version": "0.0.1",
  "main": "component1.js",
  "ignore": [
    "components",
    "test",
    "build"
  ],
  "dependencies": {
    "jquery": "latest",
    "backbone-amd": "latest",
    "underscore-amd": "latest"
  },
  "devDependencies": {
    "requirejs": "latest"
  }
}


Saving this file in the root of my component project and running bower install from the command line will cause bower to fetch the declared dependencies and place them in the components directory. Since the dependencies I'm requesting are registered, I don't need to provide any location details. However, bower does allow specific locations to be declared; these can be git repositories or paths on the local filesystem. I'll leverage the latter feature when creating the application's bower definition. The other parts of the bower.json file define the package you're building. This information is used when the package is installed as part of another bower dependency map. I'm using the ignore directive to ensure only the lib directory is pulled into a dependent project. There's no need to publish the other parts since they are only part of the development process.

Unfortunately, not everything has a Bower package file, so you may still have to manually grab libraries you need to use. You can't put them in the components directory since it's basically wiped out every time bower refreshes. I created a separate vendor directory to hold those non-bower libraries. For this example, I needed requirejs-tpl to handle loading view template files. Since it is not bower-aware, I downloaded it and added it to this directory.

Write the Code

Once my dependencies are available, I can start developing the library. The normal Backbone project layout can be used here; the only difference is that I'm going to wrap everything in RequireJS define() functions. Level by level, I'll declare the next level down's definitions as dependencies so they are loaded and available in the definition function.

So, in the example component project, the main.js entry point would look like this:


define(['./scripts/models', './scripts/views'], function(models, views) {

  Component1 = {
    Models: models,
    Views: views
  };

  return Component1;

});


main.js is in the library base directory (lib), so it requires anything in the directory immediately below it - scripts. In the scripts directory, there are two files - models.js and views.js. These take care of the next level of directories. The resulting objects are added to the main object definition, Component1, and returned. Note the use of relative directories. This will become important when using this as a package while developing the application later.

I'll follow the views definition down all the way to one of the actual views as an example.

./scripts/views.js:

define(
  [
    './views/View1'
  ],
  function( View1 ) {

    var Views = {
      View1: View1
    };

    return Views;

});


You can see how that can get pretty lengthy if you have a lot of views. In the example project, I started playing with an idea to map all the arguments of the function to the final Views object. However, for simplicity, I left that out of this discussion.
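The idea I was playing with can be sketched outside of RequireJS: pair a list of names with the loaded modules (i.e., the define() callback's arguments) so the hash doesn't have to be typed out by hand. The helper name here is made up for illustration:

```javascript
// Zip module names with the values RequireJS passed to the callback
function buildHash(names, modules) {
  var hash = {};
  for (var i = 0; i < names.length; i++) {
    hash[names[i]] = modules[i];
  }
  return hash;
}

// Inside a define() callback you could then write something like:
// return buildHash(['View1', 'View2'], arguments);
```

Since `arguments` is array-like, the loop indexes it the same way it would a real array, so the dependency list and the name list only need to stay in the same order.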

./scripts/views/view1.js:

define(['jquery', 'underscore', 'backbone', 'tpl!./layout/view1.html'],
  function( $, _, Backbone, template ) {

    return Backbone.View.extend({

      render: function () {

        this.$el.html(template());

        return this;
      }

    });

});


This is the first time I need to require any of the external libraries like jQuery and Backbone. Everything else has been just to build up the final component. I also made use of the requirejs-tpl plugin to load a plain-text Underscore template from the layout directory and compile it before passing it to the definition function. Later, I'll use this to generate pre-compiled templates in the final build.
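To make "compile" concrete: _.template() turns a template string into a reusable function. The toy version below handles only <%= %> interpolation and is nothing like Underscore's real implementation, but it shows the shape of what the plugin hands to the definition function:

```javascript
// Toy stand-in for _.template(): supports only <%= name %> interpolation
function compile(src) {
  return function (data) {
    return src.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
      return data[key];
    });
  };
}

var template = compile('<h1><%= title %></h1>');
template({ title: 'Hello' }); // yields '<h1>Hello</h1>'
```

The "pre-compiled" build simply inlines the result of this step, so no template parsing happens at runtime.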

Make Sure it Works

Once there are some definitions in place, it might be nice to test them and ensure they work. In this setup, I've placed several alternative RequireJS configuration definitions in the test directory and created a few HTML pages that refer to them so I can test the various uses of the component. For development purposes, I want to load all the files individually, using RequireJS to solve the load order and ensure dependencies are loaded before the next module.

test/lib.js:

(function () {

  require.config({

    baseUrl: "../lib",

    paths: {
      "jquery": "../components/jquery/jquery",
      "underscore": "../components/underscore-amd/underscore",
      "backbone": "../components/backbone-amd/backbone",
      "tpl": "../vendor/tpl"
    }

  });

  require(["jquery", "main"], function( $, mod ) {

    var view1 = new mod.Views.View1();

    $(document.body).append(view1.render().$el);

  });

})();



Then, in my HTML file, I just need to load RequireJS and set the data-main to "lib".

test/index-lib.html:

<script data-main="lib" src="../components/requirejs/require.js"></script>


The example project on GitHub also has definitions for testing the built component using RequireJS and as a global variable without using RequireJS.

Build and Optimize

Now, the final goal of creating a component (i.e., order, customer, etc.) is to have a single, AMD-compliant file that can be consumed in a downstream component or application. It's similar to making a widget in jQuery UI. Once built, there should be a file in the root of the project with all the optimized code, excluding any external vendor libraries. There is an example library in the RequireJS repository that illustrates how to set up an optimization build that packages only the library code, leaves the dependencies out, and uses almond as a stand-in for RequireJS. I used that as a starting point for the optimizer build definition.

build.js:


{
  baseUrl: "../lib",
  include: ["../build/almond", "main"],
  exclude: ["jquery", "underscore", "backbone"],
  stubModules: ['tpl'],
  out: "component1.js",

  wrap: {
    "startFile": "wrap.start",
    "endFile": "wrap.end"
  },

  paths: {
    "jquery": "../components/jquery/jquery",
    "underscore": "../components/underscore-amd/underscore",
    "backbone": "../components/backbone-amd/backbone",
    "component1": "../component1",
    "tpl": "../vendor/tpl"
  },

  optimize: "none"
}


Here, you can see it follows the example of using almond to emulate an AMD loader inside the component's definition. This lets us internally leverage the modular encapsulation built into the design of RequireJS, without changing how any of our code is written or organized, while providing the outside world with both an AMD-compliant definition and a global object if no AMD loader is detected. Additionally, we're not including jQuery, Underscore, or Backbone in the build because that will be handled by the final consumer of the library; it would be wasteful to include those libraries here and then have them potentially included somewhere else. Finally, since I'm using requirejs-tpl to load the Underscore micro-templates, I can have the optimizer pre-compile all the templates into the final component build. This eliminates the need for the plugin at runtime, so stubModules: ['tpl'] excludes the plugin's definition but leaves a stub in place so the dependencies still load properly.
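For reference, the wrap.start/wrap.end pair generally forms a UMD-style closure roughly like the sketch below (simplified and assumed; the actual wrap files in the repository differ):

```javascript
// Simplified UMD-style wrapper: register via AMD if present, else expose a global
(function (root, factory) {
  if (typeof define === 'function' && define.amd) {
    define(factory);                  // AMD consumer gets the module
  } else {
    root.Component1 = factory();      // otherwise fall back to a global
  }
}(typeof window !== 'undefined' ? window : global, function () {
  // ... almond plus the optimized module definitions live here ...
  return { Views: {}, Models: {} };   // the object main.js returns
}));
```

The optimizer drops everything between wrap.start and wrap.end, so the component works with RequireJS, another AMD loader, or a plain script tag.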


An Application of Components



The next step is to create an application to use the component(s) we've built. Again, we'll start with the Bower definition. This is where the dependency on component1 is declared so it's copied to our components directory.

bower.json:

{
"name": "webapp1-test",
"version": "0.0.1",
"dependencies": {
"requirejs": "latest",
"jquery": "latest",
"component1": "../component1"
}
}


Running bower install will copy component1 and install all of its dependencies (if not already defined in the application's own). Once all the dependencies are in place, we're going to want to test our application differently from how it will run in production. I've put an HTML file in the test directory and created this RequireJS config to enable loading all the files individually, including those from component1.

test/config.js:

(function () {

require.config({

baseUrl: "../app/scripts",
paths: {
"jquery": "../../components/jquery/jquery",
"underscore": "../../components/underscore-amd/underscore",
"backbone": "../../components/backbone-amd/backbone",
"tpl": "../../vendor/tpl"
},

packages: [
{
name: 'component1',
location: '../../components/component1/lib'
}
]

});

require(["main"], function(main) {});

})();


Since I told Bower to keep the lib folder when installing the component1 package as a dependency, I can set up a packages hash in the RequireJS config and point to the main.js file. Now, all definitions requiring component1 will load component1's main.js, which in turn causes all of its other files to load individually. This is where the relative paths become important: if I had not used them in the component1 define() functions, RequireJS would try to resolve those dependencies relative to the baseUrl defined here (../app/scripts). Obviously, those files are not there, but in the components directory. Although not shown, if I decided not to load all the component1 files individually, I could switch to loading just the optimized component1.js file by removing the packages hash and adding component1 to the paths hash.
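For reference, that alternative config might look something like this (a sketch; the component1 path assumes the optimized component1.js sits in the package root, as built earlier):

```javascript
// Hypothetical test/config.js variant that loads the single optimized
// component1 build instead of its individual source files.
require.config({
    baseUrl: "../app/scripts",
    paths: {
        "jquery": "../../components/jquery/jquery",
        "underscore": "../../components/underscore-amd/underscore",
        "backbone": "../../components/backbone-amd/backbone",
        "tpl": "../../vendor/tpl",
        "component1": "../../components/component1/component1"
    }
});
```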

Ready for Production

Once development is complete and the application is ready for production, you're going to want to minimize the number of round trips to the server to load the application. In the example I've built here, I have the main application which includes jQuery, Underscore, Backbone, and RequireJS. Component1 is an additional package I want to load, but perhaps defer so it isn't fetched right when the main application loads.

For my first attempt, I added component1 to the paths hash in the build.js file and required it immediately in my main.js file.

build/build.js:


({

baseUrl: "../app/scripts",
name: 'main',
out: "../dist/scripts/bundle.js",

paths: {
"requireLib": "../../components/requirejs/require",
"jquery": "../../components/jquery/jquery",
"underscore": "../../components/underscore-amd/underscore",
"backbone": "../../components/backbone-amd/backbone",
"component1": "../../components/component1/component1"
},

include: ['requireLib'],

optimize: 'uglify2'
})


app/scripts/main.js:

require(["jquery", "underscore", "backbone", "component1"], function($, _, Backbone, c1) {

var view1 = new c1.Views.View1();

$(document.body).append(view1.render().$el);

});


This method caused the optimizer to attempt to package everything into one bundled file. However, running r.js using this setup resulted in the following error:


Tracing dependencies for: main
Error: Error: nope
at check (/usr/local/lib/node_modules/requirejs/bin/r.js:2789:23)


Not the most descriptive error, which made it quite hard to find what might have caused the issue. After several failed searches, I finally found this discussion, which pointed to the wrapper around the component1 library using a define/factory pattern. Really, I didn't want the optimizer to include component1 at all, just skip it and load it from a separate file. This would give me the flexibility to load it whenever I wanted as the application grew larger. I found this discussion that showed how to declare the dependency but have the optimizer skip it entirely.

build.js:

paths: {
...
"component1": "empty:" // Note the colon at the end - it's important!
},


When I do that, I'll need to copy the file over to the dist directory manually so it's available for RequireJS to load as the page loads.

build.sh:

cp ../components/component1/component1.min.js ./scripts/component1.js


Another option that avoids this problem, and gains the advantage of not requiring the dependency on initial load, is to push the require down into the application logic and load the component only when necessary. By doing this, the optimizer will ignore it and not try to pull it into the build. This is probably the more likely use case anyway since, if you made this component, you probably don't want it loaded until the user needs it. Instead of using the above setup, I created a separate app.js module with a simple start function that loads the component and renders an instance of a view to the DOM.

app/scripts/app.js:

define(["jquery", "underscore", "backbone"], function($, _, Backbone) {
return {

start: function () {

require(['component1'], function ( c1 ) {
// Not the best example but it's just a
// proof of concept after all...
var view1 = new c1.Views.View1();

$(document.body).append(view1.render().$el);

});

}
}
});


Now main.js just needs to require app.js and call the start() method. This approach does not need component1 defined in the paths hash in build.js.

app/scripts/main.js:

define(["app"], function(app) {

app.start();

});


In either version, the final build will create a bundle.js file in the dist directory. The build.sh script copies the component1.min.js file from the components directory and renames it to component1.js so it's found when the application tries to load it later. Finally, a production index.html is copied which has the correct script tag defined to load the bundle and start the application.

build/index.html:

<script data-main="scripts/main" src="scripts/bundle.js"></script>


I took advantage of the option to build RequireJS right into the bundled file, skipping the extra round trip to the server to first load RequireJS and then have it load the optimized bundle. Now I have just one JS file loading when the application starts and, when necessary, another request to load component1 (which in this demo happens basically immediately).

While this setup doesn't force you to design properly encapsulated, modular applications, it does go a long way toward promoting those design decisions. Developing parts of the application in isolation helps reinforce the intended goal of keeping each part distinct from the others. RequireJS and Bower provide a lot of flexibility to simplify the development workflow and to package libraries and applications for deployment. The example provided here is a good starting point for exploring other ideas for building modular Backbone applications.

Saturday, June 1, 2013

OAuth, Simple?

I really don't think it's possible. Any mechanism that requires multiple steps, including ones outside the control of your application, is going to add some layer of complexity. As I've been building functionality that consumes OAuth-protected resources, I've narrowed down my list of what makes it so difficult to manage:

  1. The authorization flow requires you to maintain state since there is a transfer of control

  2. At least for OAuth 1.0, a very specific series of steps must be followed to sign each request

  3. There is more than one version (even OAuth 1.0 has a revision A and extensions for sessions), and there is room for provider-specific requirements



Of the three, I'd say I've had the most headaches with the second point. Signing a request isn't necessarily difficult; signing it correctly, and determining what is wrong with a rejected signature, is the real challenge.

While learning, I had the good fortune to stumble upon an article that described how to build a Rack middleware to verify the signed header on incoming API requests. At the heart of the solution was a library called Simple OAuth, whose sole purpose is to create and validate the Authorization header. Given the required OAuth parameters, it will generate the signature and, if an existing header is present, will parse it and compare the header's signature with the one it generated to ensure they match.

After seeing this, I thought it would be great if something similar existed as a JavaScript library so I could just make requests without having to build a proxy to sign everything for me. What if generating a request to an OAuth-protected API could be as easy as this:



var keys = {
consumer_key: 'R1Y3QW1L15uw8X0t5ddJbQ',
consumer_secret: '7xKJvmTCKm97WBQQllji9Oz8DRQHJoN1svhiY8vo'
},

base = location.protocol + '//' + location.hostname + (location.port ? ':' + location.port : ''),
url = '/do_something',

oauth = new SimpleOAuth.Header('get', base+url, null, keys);

/**
* Map requests to /api path for reverse proxy
* They must be signed using the path the server will
* see when checking the signature.
*/
$.ajax({
url: '/api' + url,
type: "GET",
processData: false,
headers: { 'Authorization' : oauth.build() }
})
.done( function (data, textStatus, jqXHR) {
console.log('Success: ' + data);
})
.fail( function (jqXHR, textStatus, errorThrown) {
console.log('Fail: ' + jqXHR.status);
});



I thought that would help alleviate that required portion of OAuth by providing a simple, consistent method to build the signed header. So I opened up the OAuth specification and the Ruby implementation and started trying to understand what was required to create a JavaScript version of the library.

After some studying, I determined that UnderscoreJS could be my best friend for mimicking some of the nice features of the Ruby language. Most of the library is Hash/Array mapping that applies transforms to the input parameters to satisfy each step outlined in the specification, and Underscore parallels most of these functions, enabling a similar processing style. Unfortunately, JavaScript lacks some of the basic libraries available in Ruby; something was still needed to parse and build URI strings and to actually produce the HMAC signature. I managed to cobble together a minimum set of URI-processing functions and found some JavaScript-based cryptography implementations that provided the necessary functionality. My current converted JavaScript version is available on GitHub.

The next question was: would it create valid signatures? I created a series of test cases that compared the output of this library against the output of the Ruby version using the same inputs. Once I worked out the discrepancies those tests exposed, I used the library to sign a request to a real endpoint, Yahoo's Social API.

What's really nice about this library is that it will automatically generate the correct timestamp and nonce values and deal with normalizing the URL and other parameters. The only thing you really need to be aware of is which URL and request parameters to pass to the library so it generates the same signature that the service provider's verification logic will produce. Generally, you'll need to use the full URL of the service and only pass parameters for form posts. Unfortunately, as I found out, those two remaining items can also lead to a great deal of pain and suffering.

To avoid cross-domain requests, I set up a reverse proxy on my Apache server to pass requests to the remote service. I had started by testing against a local service written in Ruby using Sinatra. I originally thought that the URL the service would see was the same as the real endpoint's full path. However, it appears that Rack's Request sees the HTTP_X_FORWARDED_HOST header that Apache sets on proxied requests and uses that for the request URL. In that case, I had to sign everything using the path the browser thought it was requesting. However, when I did the same thing to send a request to the Yahoo Social API, it failed. It turns out that Yahoo sees the path of the real request, not the forwarded host. I only came to this conclusion after a lot of trial and error, since the responses from Yahoo didn't just say "hey, your URL is different!". These issues are what really make implementing OAuth tedious: something as simple as the URL can cost you hours of work trying to find the one reason among many that the service rejects the request.

I can say that just working on this project gave me a much clearer understanding of OAuth's signing mechanism. As much as I wanted a solution that just worked, it seems the true way to fully understand some concepts is to sit down and write the code required to implement them. It made me a lot wiser when debugging all the "Bad Request" responses from the various endpoints I wanted to interact with. Knowing what could be wrong helps narrow down the possible reasons quickly and, even though it's still difficult, makes it a little bit easier than if I didn't understand it as well.