Wednesday, October 31, 2012

Building SVG Paths with Raphaël

SVG enables building really complex vector graphics. Most of that power comes from path shapes, which are constructed through a series of special commands that define points in the shape and how to connect them. Any primitive shape (i.e., circle, rectangle, line, etc.) can be expressed as a path, so paths can be used universally to draw all of the SVG shapes you define. The problem is learning and reading the path strings - they can get quite cryptic. Even if you use a library like Raphael, you will still need to know how to define paths using the SVG syntax.

If that's the case, then why even use Raphael? Well, even though you'll still need to learn how to define paths, Raphael does have some functionality that makes them easier to manage. As I was digging around the library, it became evident that many of its utilities are designed to work with paths. There are functions for primitive shapes like circles and rectangles that will generate those SVG objects; however, they will not be compatible with the path utilities. As much as I've avoided learning the path commands, it seemed a good time to learn the basics and see how Raphael could help me leverage them.

Once you've defined your SVG canvas via the Raphael constructor, you can use it to define a path via the cleverly named path() function. The documentation states this function takes a string that follows the same syntax as the standard SVG path commands. After reading the intro to that document, there are a few things to note:

  • There are only four basic commands - move, line, curve, and close path.  Some of these have specialized commands that make some assumptions thus simplifying the syntax.

  • Case matters - upper case commands use absolute coordinates whereas lower case commands are calculated relative to the last command's end point (see the short example after this list).

  • These commands have parallels to the Canvas API functions.  If you've used those, then the inputs to these commands might be easier to understand.

  • The SVG documentation states that commas are not necessary, however, the Raphael reference indicates that command parameters should be separated by commas.
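
For example, these two (hypothetical) path strings draw the same open "L" shape - the first with absolute coordinates, the second with relative offsets:

// Absolute commands: every point is given in canvas coordinates
paper.path('M10,10 L10,60 L60,60');

// Relative commands: each point is an offset from the previous end point
paper.path('M10,10 l0,50 l50,0');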


One thing I noticed when working with the path() function is that it will also take an array of arrays that define the path.  So instead of writing "M450,175 l0,90 a 10,10,0,0,1,-10,10 l-390,0 a 10,10,0,0,1,-10,-10 l0,-190 a 10,10,0,0,1,10,-10 l390,0 a 10,10,0,0,1,10,10 l0,90 z" to create a rectangle with rounded corners, I can write:



path = paper.path( [
    ['M',450,175],
    ['l',0,90],
    ['a',10,10,0,0,1,-10,10],
    ['l',-390,0],
    ['a',10,10,0,0,1,-10,-10],
    ['l',0,-190],
    ['a',10,10,0,0,1,10,-10],
    ['l',390,0],
    ['a',10,10,0,0,1,10,10],
    ['l',0,90],
    ['z']
] );



This is much more readable and feels more like chaining function calls together to construct a path. As best I can tell, all of the path handling functions in Raphael use the array syntax to represent the path definition.

Although not documented (which means it might not be the same in future releases), there are functions that will create primitive shapes as paths. Inside the Raphael._getPath object, there are several functions that take similar arguments to the built-in shape functions:


path3 = paper.path( Raphael._getPath.circle({ attrs: { cx:120, cy:120, r:50 } }))
.attr( 'stroke-width', 3 )
.attr( 'stroke', 'rgb(80,80,255)' );


As a final example that illustrates the power of Raphael's path functions, I took a fairly complex path string and transformed it using Raphael.transformPath() prior to creating the SVG element:


pstr = "M295.186,122.908c12.434,18.149,32.781,18.149,45.215,0l12.152-17.736c12.434-18.149,22.109-15.005,21.5,6.986l-0.596,21.49c-0.609,21.992,15.852,33.952,36.579,26.578l20.257-7.207c20.728-7.375,26.707,0.856,13.288,18.29l-13.113,17.037c-13.419,17.434-7.132,36.784,13.971,43.001l20.624,6.076c21.103,6.217,21.103,16.391,0,22.608l-20.624,6.076c-21.103,6.217-27.39,25.567-13.971,43.001l13.113,17.037c13.419,17.434,7.439,25.664-13.287,18.289l-20.259-7.207c-20.727-7.375-37.188,4.585-36.578,26.576l0.596,21.492c0.609,21.991-9.066,25.135-21.5,6.986L340.4,374.543c-12.434-18.148-32.781-18.148-45.215,0.001l-12.152,17.736c-12.434,18.149-22.109,15.006-21.5-6.985l0.595-21.492c0.609-21.991-15.851-33.951-36.578-26.576l-20.257,7.207c-20.727,7.375-26.707-0.855-13.288-18.29l13.112-17.035c13.419-17.435,7.132-36.785-13.972-43.002l-20.623-6.076c-21.104-6.217-21.104-16.391,0-22.608l20.623-6.076c21.104-6.217,27.391-25.568,13.972-43.002l-13.112-17.036c-13.419-17.434-7.439-25.664,13.288-18.29l20.256,7.207c20.728,7.374,37.188-4.585,36.579-26.577l-0.595-21.49c-0.609-21.992,9.066-25.136,21.5-6.986L295.186,122.908z",

path1 = paper.path( Raphael.transformPath(pstr, 't-50,-60r125s0.5') )
.attr( 'stroke-width', 3 )
.attr( 'stroke', 'rgb(255,80,80)' );


There might not seem to be any benefit to doing this because you could just as easily create the element and then transform it once it's attached to the DOM. However, some of the Raphael functions don't always account for the transform on the element. By altering the path prior to creating it, those functions will work as expected.

I've added the above examples to my sandbox. In my next post, I'll dig a little deeper into how to leverage some of the other Raphael path functions to build something a little more complex.

Saturday, October 27, 2012

HTML5 Canvas and the Cracked Glass Effect - Revisited

After much toiling and tinkering, I think I have found a reasonably good method to produce a cracking glass effect. While not absolutely perfect, you can achieve some pretty interesting results. I used the same path construction as described in my original attempt. However, I broke the rendering into five different layers. Each layer adds a different dimension to the effect. These include refracting the image, adding reflections, the main cracking lines, fracture points, and some noise. In the demo I created, you can tinker with all the input options. The demo is on my sandbox and below are some examples I created with it.

The default settings will create something that looks like this first image. Since everything is random, you might have to draw the cracks a few times to find a variation you like.



I added a solid background to see what it would look like. Most of the samples out there have a black background so I wanted to see something to compare.



The white background is fun for playing with colors.



Most of the algorithm is devoted to finding places to draw lines. All of the line parameters are precalculated as part of the path generation logic, and each layer uses parts of those calculations to place various canvas elements (lines, gradients, and curves) around those calculated lines.

From a performance perspective, it's not too bad. Chrome seems to render the fastest. The Noise layer can really slow it down because of the number of tiny lines it can draw (depending on the decay and frequency selected). Each layer can be completely disabled by turning the opacity all the way down (the show/hide button will render the layer, it's just not visible). Overall, there are a lot of Canvas API calls being generated, but the rendering times are not unacceptable. It's not something you'd want to call over and over again, but it can function in a real-time setting.

I suppose the next task is to package it up into something that can actually be called on an image.  Maybe make it a jQuery plugin.  However, that's another project for another day ...

Friday, October 26, 2012

Thoughts on Extending jQuery UI Spinner: Adding a Slider Widget

I've been thinking more about the spinner and how to best extend its functionality beyond the basic options it currently offers.  I really like the simplicity of the current design and don't really want to disrupt that basic implementation. 

Most enhancements probably fall into two categories:


  • Additional visual components to assist in picking values. These might include a slider to quickly move to a different point in the valid range or a drop down menu with common choices.


  • Other formatted values that are incremental in nature.  Anything that can be represented as a number can be used with the spinner widget. Dates and times seem like excellent candidates to use with the spinner.



The question becomes how much functionality should be included/supported by the base spinner. The current approach seems to extend spinner to make more specialized versions. Although not in the current release, development on the timepicker includes a spinner to allow stepping through each part of the time value (hours, minutes, seconds). The same theme can be used to add extra visual controls to the spinner to provide alternative ways of selecting values.

Enforcing certain values in the field is a little bit more challenging. The primary design goal of the Spinner is to provide globalized formatting of numbers, etc. That validation is deferred to the Globalize library. For different formats, different values are allowable. Another widget in development right now - Mask - provides a means to add validation to any field. Combine a mask with the spinner and you get incremental stepping with the spin controls and enforcement of manually input numbers that agree with the selected locale.

I created an example that combines a slider and spinner. The code is not too complicated, however, you can see there is some work to ensure all the visual components stay in sync. Additionally, the demo uses the alternative layout enhancements I made to the spinner.
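
The gist of that synchronization looks something like the sketch below (a simplified, hypothetical version - #qty and #qty-slider are not the demo's actual markup):

// Keep a spinner and a slider pointing at the same value
$('#qty').spinner({
    min: 0,
    max: 100,
    spin: function (event, ui) {
        // spin buttons / keyboard changed the value
        $('#qty-slider').slider('value', ui.value);
    },
    change: function () {
        // value changed by typing and then leaving the field
        $('#qty-slider').slider('value', $(this).spinner('value'));
    }
});

$('#qty-slider').slider({
    min: 0,
    max: 100,
    slide: function (event, ui) {
        $('#qty').spinner('value', ui.value);
    }
});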

Keeping with the example set in the demos and development of jQuery UI, I created an extension to encapsulate the code required to draw and coordinate the spinner and slider controls. I called it SlideSpinner and you can see how it works on the project page. The implementation does not address restricting manually entered values. I did make an attempt, but keeping it generic enough to work with different locales requires a lot more effort to avoid interfering with the user's typing. However, I was able to get a feel for how to extend the functionality of a base widget to provide enhanced features like adding the slider. The SlideSpinner handles all the details of drawing the two widgets, positioning them, and then keeping the values in sync when the user interacts with either widget.

The jQuery UI library provides an excellent starting point for building customized widgets for more specific situations. Instead of bloating the Spinner widget with features that may only be useful in certain applications, it provides the basic framework to enable incremental controls that can be easily extended to create more specific enhancements as required. As long as the library includes appropriate hooks to build these extensions, then the available feature set is adequate to allow these new widgets to be constructed. As the library continues to mature, you can see that there will be a lot of extensible components that can be combined in specific ways to perform special tasks. Depending on the circumstances, you may need to build new widgets that extend the library or just combine the existing basic widgets in creative ways to build your solution.

Wednesday, October 24, 2012

Drawing Curves with the HTML5 Canvas quadraticCurveTo() Function

The HTML5 Canvas API has a built-in Bézier function to draw smoothly curved lines. However, it requires you to specify a control point to draw the line. If you have points you want the line to pass through, you will need to find the control point that produces that curve. Visually, you can see on the left that I have three green points that I want to draw a curve through. The red point represents the control point required to draw that curve through those points. The question is: how do I find that point so I can draw the line I want using the Canvas quadraticCurveTo() function? Intuitively, it doesn't seem too difficult. You can look at the points and see the relationship between the second point and the control point: it's about twice the distance from the line formed between the first and last points. Additionally, it appears to be at a certain angle from that line.

If we draw some lines on the sample, you can see that the control point is found by translating P2 up and over by L1 and L2.  The question is how do we find L1 and L2?  The problem can be broken down into two steps:

1) L1 is the length of the line that is perpendicular to P1,P3 and runs to P2.

2) L2 is the length of the line from the midpoint of P1,P3 to the intersection found in step 1.

The first step requires us to project P2 onto P1,P3 to find the intersection point. Once we have that point, the two lengths are easy to calculate to perform the translation on P2 to determine the control point. Below is the implementation of this process:


function findControlPoint(s1, s2, s3)
{
    var // Unit vector, length of line s1,s3
        ux1 = s3.x - s1.x,
        uy1 = s3.y - s1.y,
        ul1 = Math.sqrt(ux1*ux1 + uy1*uy1),
        u1 = { x: ux1/ul1, y: uy1/ul1 },

        // Unit vector, length of line s1,s2
        ux2 = s2.x - s1.x,
        uy2 = s2.y - s1.y,
        ul2 = Math.sqrt(ux2*ux2 + uy2*uy2),
        u2 = { x: ux2/ul2, y: uy2/ul2 },

        // Dot product
        k = u1.x*u2.x + u1.y*u2.y,

        // Project s2 onto s1,s3
        il1 = { x: s1.x+u1.x*k*ul2, y: s1.y+u1.y*k*ul2 },

        // Unit vector, length of s2,il1
        dx1 = s2.x - il1.x,
        dy1 = s2.y - il1.y,
        dl1 = Math.sqrt(dx1*dx1 + dy1*dy1),
        d1 = { x: dx1/dl1, y: dy1/dl1 },

        // Midpoint
        mp = { x: (s1.x+s3.x)/2, y: (s1.y+s3.y)/2 },

        // Control point on s2,il1
        cpm = { x: s2.x+d1.x*dl1, y: s2.y+d1.y*dl1 },

        // Translate based on distance from midpoint
        tx = il1.x - mp.x,
        ty = il1.y - mp.y,
        cp = { x: cpm.x+tx, y: cpm.y+ty };

    return cp;
}


Here, points are described by objects with an x and y property (i.e., { x: 150, y: 210 }). To use the function, just pass in the points you want used to draw the curve and use the returned control point in the quadraticCurveTo() function:


function drawCurve($canvas, p1, p2, p3)
{
    var ctx = $canvas[0].getContext('2d'),
        cp = findControlPoint(p1, p2, p3);

    ctx.strokeStyle = 'black';
    ctx.lineWidth = 1;

    ctx.beginPath();
    ctx.moveTo(p1.x, p1.y);
    ctx.quadraticCurveTo(cp.x, cp.y, p3.x, p3.y);
    ctx.stroke();
}
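
As a side note, the geometric construction in findControlPoint() collapses to a direct formula: since the curve should pass through P2 at t=0.5, the control point is simply twice P2 minus the midpoint of P1,P3. A minimal equivalent sketch:

// cp = 2*P2 - midpoint(P1, P3); should give the same result as the longer derivation above
function findControlPointSimple(p1, p2, p3)
{
    return {
        x: 2 * p2.x - (p1.x + p3.x) / 2,
        y: 2 * p2.y - (p1.y + p3.y) / 2
    };
}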


I created a demo in my sandbox to allow moving the points around and visually see the resulting control point and curve.

There is a small caveat to keep in mind - the second point won't always be at the extreme point of the curve. You can see this best by moving the point far to one side, near the start or end points. I believe this has to do with the function assuming that the second point lies at the midpoint of the curve (in terms of the parametric function, this is t=0.5). Unfortunately, you have no control over that selection. One way to work around it is to use path interpolation to draw the curve.

The demo draws the same curve using the PathJS library which will put the second point much closer to the extreme of the curve. This might cause some sharper angles in some cases but will put the second point very close to where the curve changes directions. The image on the left uses the same points as the example above but used PathJS to interpolate the path.

With the examples here, you can now draw Bézier curves with whatever information you have available. If you already have the control point, then you're all set. If not, the function presented here will provide an accurate point to use as the control point required by the quadraticCurveTo() function. The same basic approach can be extended to find the two control points required by the cubic bezierCurveTo() function as well.

Thursday, October 18, 2012

Exploring the New jQuery UI Spinner - Beyond the Basics

jQuery UI 1.9 is out and with it a bunch of new goodies to play with. I thought I'd start by digging into the new Spinner widget and see what kind of features it offers. Very simply, Spinner adds up/down arrows to the right of an input box, allowing a user to increment/decrement the value in the input box. It adds keyboard support so you can use the up/down arrows and page up/down to move through values. It also has a step feature to skip values. In addition to the basic numeric features, it also enables globalized formatting options (i.e., currency, thousand separators, decimals, etc.), thus providing a convenient internationalized masked entry box. The demo on the jQuery UI site even extends Spinner to enable time-aware "spinning".
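
A bare-bones example of those options (the values and the #donation selector are just illustrative; numberFormat relies on the Globalize library being loaded):

// Currency spinner: steps by 25, constrained to 0-500, formatted per locale
$('#donation').spinner({
    min: 0,
    max: 500,
    step: 25,
    numberFormat: 'C',    // currency formatting via Globalize
    culture: 'en-US'
});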

As I started using the widget, I noticed it scaled nicely with different sized input fields. The CSS enables you to use any font size in the input field and still maintain a visually appealing layout. The optional min/max values provide a way to prevent the spinner from going outside a valid range. However, the min/max only controls the spinning functionality - you can still type an invalid value into the box and it won't revert the entry. It seems that this would be important feedback to the user that they manually entered a value that the spin control would not allow. Additionally, the spin controls can only be placed on the right of the input box. In some situations, it may be desirable to have the buttons above and below the input box.

In light of this, I thought I'd try extending the Spinner widget to add some of these enhancements.   I created a demo that implements the min/max constraint on entries manually entered in the input box by the user and also provides a means to move the spin buttons to be positioned above and below the input box.  The screenshot on the left shows the default field and how it looks compared to two different examples of a top/bottom alternative.  The last example extends the base Spinner widget to allow it to step through the alphabet.

Top/Bottom Spin Buttons



I started by inspecting the base Spinner widget's CSS properties.  I was curious how difficult it would be to shift the UI components around to achieve this look without having to change any functionality.   The input field is wrapped in a span and then the buttons are added to this wrapper.  CSS controls the positioning of the buttons to align them to the right.  To move them, I just needed to override the default CSS with rules that would position the buttons above and below the input.


.c-topbottom .ui-spinner-input {
margin: 0;
margin-top: 10px;
margin-bottom: 10px;
text-align: center;
}

.c-topbottom .ui-spinner-button {
height: 10px;
left: 0px;
width: 100%;
}

.c-topbottom a.ui-spinner-button {
border: none;
}

.c-topbottom .ui-spinner .ui-icon {
margin-left: -7px;
top: 5px;
left: 50%;
}


These rules follow along with the original definitions but change the necessary properties to position the buttons above and below the input field.

Those changes move the buttons to the correct place; however, if you zoom into the page, the left corners of the buttons are not rounded. This is because they are squared off in the base widget to fit nicely against the right side of the input box. However, now they need to be rounded to look correct in their new home. Looking at the generated HTML, the corners are rounded using a CSS class assigned to the anchor tag that represents each button (ui-corner-tr and ui-corner-br). These two classes only round the right corners, not the left corners. I could have just added a corner radius rule to my new CSS, but then there would be a chance that a future change to the other jQuery UI ui-corner-* rules would make my custom rule break and not look right. Instead, I'd like to just add the ui-corner-tl and ui-corner-bl classes to the button elements. This required some Javascript to call the jQuery addClass() method on the elements after creating the widget:


$('#topbottom input').spinner()
.parent()
.find('.ui-spinner-up')
.addClass('ui-corner-tl')
.end()
.find('.ui-spinner-down')
.addClass('ui-corner-bl');


So after making the CSS tweaks and adding the new classes, the spin buttons are now in the correct place and format.

Extending Spinner



I wanted to try extending the Spinner widget to enable it to understand how to spin through the alphabet. Since each letter is just a sequential ASCII code, it should be fairly simple to add logic to convert to/from the string/numeric representation so the spinner can iterate through the letters. Working from the example time extension source found on the jQuery UI site, I came up with the following implementation:


$.widget( "ui.alphaspinner", $.ui.spinner, {
options: {
max: 'Z',
min: 'A'
},

_create: function( ) {

this._super();

// Make this a top/bottom spinner. Add rounded corners.
this.uiSpinner
.addClass('ui-spinner-alpha')
.find('.ui-spinner-up').addClass('ui-corner-tl').end()
.find('.ui-spinner-down').addClass('ui-corner-bl');

},

_parse: function( value ) {
if ( typeof value === "string" ) {
// Only one letter is valid
if (value.length > 1) {
return "";
}
return value.toUpperCase().charCodeAt(0);
}
return value;
},

_format: function( value ) {
return String.fromCharCode(value);
}

});


Here I added an override for _create to add classes to the base Spinner so my widget will automatically have the spin controls above and below the input box. Additionally, _parse() is used to convert from the value in the input box so it can be manipulated by the spin control and _format() is used to convert back to the value displayed in the input field. Overriding those two functions is all that is needed to enable the remaining spinner widget to understand how to "spin" through the alphabet.

Validating Manual Input



So far, the new widget does not address validating manual entry in the field to ensure it is A) a single valid letter, and B) inside the range specified by min/max. The _parse() function does parse strings that are only 1 character long and will convert everything to uppercase. However, this only affects the internal representation and not the value in the input box. The user is not provided any feedback that they are entering bad values.

Looking at the example, I output the value of the three controls to the right of the widgets. I first attempted to enter a letter into the box. Since this widget only wants numeric values, the internal value will be null. You can see that the output on the right does not show the "j" because of this parsing.



Next, the field is only supposed to accept 0-9 as a valid input range. However, I can type 12 into the box, it will accept the value, and it will appear in the output on the right.



In my extension, I added an override to the _stop() function to check the value to ensure it is one that parses correctly. _stop() is called on each keyup event so it seemed like a good place to add this code. There are probably half a dozen other ways to add this logic. In the end, this is where I decided to place it:



_stop: function( event ) {
    var value = this.value() || 0;

    if (event.type == 'keyup' && value != this._adjustValue(value) ) {
        this.value(this.previous);
        this._trigger( "invalid", event );
    }
    else {
        this.previous = value;
        this.element.val(this._format(value));
    }

    this._super(event);
}



The _adjustValue() function is used in a similar fashion when the spin functionality occurs. If the adjusted value is not the same, something is wrong, so the value is set back to the previous value. I also trigger a custom "invalid" event that can be caught so an error/help box can be shown to guide the user to enter a correct value. Additionally, this code writes the formatted value back to the input box, so if you type a lower case "g", the input will update with an upper case "G" for consistency.

The full source and demo are in my sandbox. Some of these features seem like they might be a good addition to the base Spinner widget. Enabling multiple layouts and checking manual entries seem like good options that could make building solutions with the widget a little bit easier.

Tuesday, October 16, 2012

Color Made Easy with the jQuery Color Plugin

The jQuery Color Plugin powers the color animation in jQuery UI.  In general, you don't even realize it's there.  There's only a small reference to it in the API docs for jQuery UI.  However, the plugin is a self-contained library full of useful features that you can leverage when trying to dynamically manipulate colors.

The GitHub home has a good writeup on the plugin so I won't regurgitate what's already been said.  However, I was working on something that required me to find a darker version of the color a user selected so I could create a highlight (or low-light) effect.  Without the Color plugin, I'd have to do some work to find a way to darken the color.  However, with the plugin, it was as easy as retrieving a color object, calling lightness(), and then grabbing the new color value.
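
A rough sketch of that pattern (the specific lightness adjustment and the #swatch selector are just illustrative):

// Derive a darker low-light version of a user-selected color
var base = $.Color('#3399cc'),
    dark = base.lightness(base.lightness() * 0.6);   // roughly 40% darker

$('#swatch').css('border-color', dark.toHexString());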

In my sandbox, I tested this concept on several colors.  I passed in color names, hash color values, and RGB/HSL space values to the plugin, stepped through the lightness values, and rendered those colors into little color swatches with their corresponding RGB/HSL values.

As a final test, I used a color picker widget to pick the main color and then calculate a highlight color that would contrast nicely with the main color.  I used that calculated color as the border to my div.  Depending on the desired contrast, you can adjust the math used to find the new lightness value.

Overall, the jQuery Color plugin is a nice utility that can save you a lot of extra work.  If you need to perform extensive color operations in your code, this might be a good library to check out.  If you're already using jQuery UI, then it's already included in your site and you can leverage its goodness now.

Friday, October 12, 2012

HTML Canvas ImageData: Creating Layers and Blending Pixels

Drawing images using layers is a feature that provides a lot of flexibility. Generally, once you place an object on the canvas, it's not movable, nor can you draw something under it. Hence, the order in which you perform the drawing becomes important. This differs from SVG or other HTML elements, which can either be moved around the DOM or have their z-index adjusted. As I worked on my cracked glass effect, I realized that using layers would be beneficial for helping me achieve the desired result. Animations and games would also benefit from using multiple canvases to create various effects and ease drawing operations. I built a simple demo in my sandbox that partially mimics how layers are set up in an image editor like Photoshop to construct the picture on the right from several canvas objects and to experiment with different blending effects. In the end, there is an easy and a hard way to make this work - the harder way adds more features and flexibility at the cost of performance.


Creating Layers


The first part of the process is fairly easy.  You can stack canvas tags by positioning them absolutely inside a relatively positioned parent.  Now, anything you draw will look like one image, but you're actually working with distinct layers that you can reorder, change the opacity of, and draw shapes or images on:


<style>
.wrapper {
position: relative;
}

.wrapper > * {
position: absolute;
}
</style>

<div id="drawing" class="wrapper">
<canvas id="flowers"></canvas>
<canvas id="gradient"></canvas>
<canvas id="circle"></canvas>
<canvas id="inverted"></canvas>
</div>


I labeled each canvas so as I debugged moving layers around, I could see in Firebug that they moved where I expected. Now, I can draw each layer:


// Setup background
// -----------------------------------------------------
ctx = $canvas[0].getContext('2d');
ctx.drawImage($('#baseimg')[0], 0, 0, w, h);


This copies the hidden image I placed on the page into the first layer.



Now, create a gradient. This will create a transparent area so the flowers show through:


// Add a fill layer
// -----------------------------------------------------
ctx = $canvas[1].getContext('2d');

grd = ctx.createLinearGradient(100,100,150,300);

grd.addColorStop(0, 'rgba(0,255,0,0)');
grd.addColorStop(1, 'rgba(0,0,255,1)');

ctx.fillStyle = grd;
ctx.fillRect(0,0,w,h);




A canvas is transparent by default, so drawing shapes will only fill those areas allowing everything else to be visible:


// Add a shape layer
// -----------------------------------------------------
ctx = $canvas[2].getContext('2d');

ctx.fillStyle = 'yellow';
ctx.strokeStyle = 'black';
ctx.lineWidth = 1;
ctx.shadowColor = 'purple';
ctx.shadowBlur = 15;
ctx.shadowOffsetX = -5;
ctx.shadowOffsetY = 5;

ctx.beginPath();
ctx.arc(200,200,100,0,2*Math.PI);
ctx.stroke();
ctx.fill();

ctx.lineWidth = 3;
ctx.shadowColor = 'white';
ctx.shadowBlur = 1;
ctx.shadowOffsetX = 2;
ctx.shadowOffsetY = 2;

ctx.beginPath();
ctx.moveTo(0,150);
ctx.lineTo(250,0);
ctx.stroke();


I got a little fancy with the shapes because I wanted to try out the shadow options available in the canvas API:



For the last layer, I want to copy a part of the original flower image and invert it. This process demonstrates accessing the ImageData to apply a filter by manipulating the individual pixels contained in the canvas:



function invert(ctx)
{
var imd = ctx.getImageData(0,0,ctx.canvas.width,ctx.canvas.height),
imp = imd.data,
len = imp.length;

for (var i=0;i<len;i+=4)
{
imp[i] = 255 - imp[i];
imp[i+1] = 255 - imp[i+1];
imp[i+2] = 255 - imp[i+2];
}

ctx.putImageData(imd,0,0);
}

// Add a layer, copy part of image and invert it
// -----------------------------------------------------
ctx = $canvas[3].getContext('2d');

ctx.drawImage($('#baseimg')[0], 200, 200, 200, 200, 200, 200, 200, 200);
invert(ctx);




Now I have four individual canvas layers that can be independently manipulated and the layering effect will be maintained. At this point, you could stop. If all you wanted was to be able to layer some drawings, this setup would work fine. However, I want to work with some pixel blending algorithms, so I will need to merge these canvases to accomplish that goal.

Combining Canvases



Flattening the canvases into one canvas is just a simple process of creating a temporary destination canvas and then copying each canvas layer, bottom up, using the context drawImage() function. Each canvas is treated as an image and can be passed directly to the function. Once complete, you can either add the canvas to the DOM or call toDataURL() on the final canvas to extract the image data and set it as the source of an image tag:


function merge2(layers)
{
    var tmpc = $('<canvas>')[0],
        dstc,
        h = layers[0].height,
        w = layers[0].width;

    tmpc.height = h;
    tmpc.width = w;
    dstc = tmpc.getContext('2d');

    layers.each(function (idx)
    {
        dstc.globalAlpha = +$(this).css('opacity');
        dstc.drawImage(this, 0, 0);
    });

    return tmpc;
}

var merged_canvas = merge2($('canvas'));
$('#mergeimg')[0].src = merged_canvas.toDataURL();



You may notice that I alter the globalAlpha setting prior to each draw operation. In my demo, I added sliders to change the opacity of the canvases. I draw a completely opaque circle. If I'd like it to show some of the background, I don't need to redraw it, just alter the opacity of the canvas. When I merge the layers, I need to account for the opacity on the element (instead of the pixels in the canvas) by copying the opacity of the canvas into the globalAlpha setting. This will cause all the pixels to inherit this additional alpha value when the blending is performed in the drawImage() operation.

Blending Modes



So far we've only performed a standard blend - the top image is drawn over the underlying image. The alpha channel is considered in the drawing process to composite each image. However, if you've used Photoshop, you know there are quite a few other methods to blend multiple layers together. Multiply, screen, and burn are just a few of the options. Each of these blend modes calculates the final pixel color by performing a series of basic math operations on each pair of pixels in the two layers being merged. For instance, I can use a difference function to blend the four layers into the following image:



To achieve this type of blending, we need to iterate over each pixel and find the absolute value of the difference:


newr = Math.abs(srcr - dstr);
newg = Math.abs(srcg - dstg);
newb = Math.abs(srcb - dstb);


In this snippet, src is a pixel from the current layer (canvas) being merged and dst is the pixel at the same relative location in the destination canvas that contains all the layers already merged. We have to calculate each color channel individually, so three similar lines of code are repeated, one per color component. I skipped ahead a little at this point, so the unanswered question remains: how do we get to the pixels so we can manipulate them? As I quickly pointed out in the invert() function above, the canvas context has a getImageData() function which allows you to access the raw pixel data. Before running our blending math, we need to do some setup to create variables that point to this data:



var tmpc = $('<canvas>')[0],
    dstc, dstd, dstpx, dsta,
    tmpm, len, cnt, srcpx, alppx,
    h = layers[0].height,
    w = layers[0].width;

tmpc.height = h;
tmpc.width = w;

dstc = tmpc.getContext('2d');

dstc.createImageData(w,h);
dstd = dstc.getImageData(0,0,w,h);
dstpx = dstd.data;

len = dstpx.length;
tmpm = new Array(4);

cnt = layers.length;
srcpx = new Array(cnt);
alppx = new Array(cnt);

layers.each(function (idx)
{
    var ctx = this.getContext('2d'),
        imd = ctx.getImageData(0,0,w,h);

    srcpx[idx] = imd.data;
    alppx[idx] = +$(this).css('opacity');
});



Here, I'm creating my destination canvas and then creating variables to reference the ImageData.data value for the destination and all the source layers. An important point to notice in this code is the use of createImageData() on the destination canvas. Since this canvas has had nothing drawn on it, the pixels are not setup. The createImageData() function initializes all the pixels to black and fully transparent. If you accidentally forget to do this, nothing will be drawn in the canvas and you will spend a lot of time trying to figure out why.

The pixel data is stored in one continuous array where each pixel is represented by 4 consecutive elements ordered red, green, blue, alpha. So any loop needs to step by 4 on each iteration, and you need to reference the pixel data array using data[i] for the red channel, data[i+1] for green, etc.:



...

for (i=0;i<len;i+=4)
{

r = srcpx[i];
g = srcpx[i+1];
b = srcpx[i+2];
a = srcpx[i+3];

...



In my setup process, you'll note that I'm copying the canvas opacity into the variable alppx[idx]. This is needed because the transparency is not automatically handled like it is in drawImage(). We will need to perform all the alpha compositing as part of the other blending calculations:


srca = srcpx[l][i+3] / 255 * alppx[l];
dsta = tmpm[3] / 255*(1-srca);
outa = (srca + tmpm[3]*(1-srca)/255);

newr = newr*srca + dstr*dsta;
newg = newg*srca + dstg*dsta;
newb = newb*srca + dstb*dsta;

newr = outa == 0 ? 0 : newr/outa;
newg = outa == 0 ? 0 : newg/outa;
newb = outa == 0 ? 0 : newb/outa;


In this portion of the code, the new colors have already been found by performing the selected blend mode; this step takes the alpha channels (and the opacity from the canvas) and combines them based on the Porter and Duff "over" method described here.

The important thing to keep in mind is that the pixel data uses 0-255 (0=transparent, 255=opaque) for the alpha range while the CSS opacity is 0-1 (0=transparent, 1=opaque). The compositing operation assumes values between 0 and 1, so I converted them accordingly. Once the new RGB values are found with the correct alpha compositing, they need to be clipped, rounded, and then set into the destination's pixel data:


tmpm[0] = (newr > 255) ? 255 : ( (newr < 0) ? 0 : newr ) | 0;
tmpm[1] = (newg > 255) ? 255 : ( (newg < 0) ? 0 : newg ) | 0;
tmpm[2] = (newb > 255) ? 255 : ( (newb < 0) ? 0 : newb ) | 0;
tmpm[3] = (255*outa) | 0;

dstpx[i] = tmpm[0];
dstpx[i+1] = tmpm[1];
dstpx[i+2] = tmpm[2];
dstpx[i+3] = tmpm[3];


Clipping is required for some of the blending methods since they might calculate a value over 255 or under 0. I used a bit-wise OR with zero to quickly truncate the value to a whole integer. Once all the pixels are calculated in this manner, they just need to be placed back into the destination canvas using putImageData():


dstc.putImageData(dstd,0,0);



In my demo, I implemented several different blend modes. Wikipedia provides a great reference and compendium of the various modes in common use. Here is the full source of the merge function:


function merge(layers, mode)
{
var tmpc = $('<canvas>')[0],
mode = mode || 'normal',
dstc, dstd, dstpx, dsta,
tmpm, len, srcpx, alppx,
h = layers[0].height,
w = layers[0].width,
i, l, cnt, wt, srca, outa,
srcr, srcg, srcb,
dstr, dstg, dstb,
newr, newg, newb;

tmpc.height = h;
tmpc.width = w;
dstc = tmpc.getContext('2d');

dstc.createImageData(w,h);
dstd = dstc.getImageData(0,0,w,h);
dstpx = dstd.data;

len = dstpx.length;
tmpm = new Array(4);

cnt = layers.length;
srcpx = new Array(cnt);
alppx = new Array(cnt);

layers.each(function (idx)
{
var ctx = this.getContext('2d'),
imd = ctx.getImageData(0,0,w,h);

srcpx[idx] = imd.data;
alppx[idx] = +$(this).css('opacity');
});

for (i=0;i<len;i+=4)
{
// Seed with first layer
tmpm[0] = srcpx[0][i];
tmpm[1] = srcpx[0][i+1];
tmpm[2] = srcpx[0][i+2];
tmpm[3] = srcpx[0][i+3] * alppx[0];

/*
Now merge each layer from the bottom up:
1) Find each alpha value (convert to 0-1)
2) Perform blend mode calculation on each channel
3) Perform alpha compositing between current background and new RGB values
4) Clip (if necessary) and set final color and alpha
*/

for (l=1;l<cnt;l++)
{
srca = srcpx[l][i+3] / 255 * alppx[l];
dsta = tmpm[3] / 255*(1-srca);
outa = (srca + tmpm[3]*(1-srca)/255);

srcr = srcpx[l][i];
srcg = srcpx[l][i+1];
srcb = srcpx[l][i+2];

dstr = tmpm[0];
dstg = tmpm[1];
dstb = tmpm[2];

switch (mode)
{
case 'normal' :

newr = srcr;
newg = srcg;
newb = srcb;
break;

case 'multiply' :

newr = srcr * dstr / 255;
newg = srcg * dstg / 255;
newb = srcb * dstb / 255;
break;

case 'screen' :

newr = 255 - ( ( (255 - srcr) * (255 - dstr) ) / 255);
newg = 255 - ( ( (255 - srcg) * (255 - dstg) ) / 255);
newb = 255 - ( ( (255 - srcb) * (255 - dstb) ) / 255);
break;

case 'overlay' :

newr = dstr < 128 ? (2 * srcr * dstr / 255) : (255 - ( ( 2 * (255 - srcr) * (255 - dstr) ) / 255));
newg = dstg < 128 ? (2 * srcg * dstg / 255) : (255 - ( ( 2 * (255 - srcg) * (255 - dstg) ) / 255));
newb = dstb < 128 ? (2 * srcb * dstb / 255) : (255 - ( ( 2 * (255 - srcb) * (255 - dstb) ) / 255));
break;

case 'soft light' :

newr = dstr < 128 ? (2 * ((srcr>>1)+64) * dstr / 255) : (255 - ( ( 2 * (255 - ((srcr>>1)+64)) * (255 - dstr) ) / 255));
newg = dstg < 128 ? (2 * ((srcg>>1)+64) * dstg / 255) : (255 - ( ( 2 * (255 - ((srcg>>1)+64)) * (255 - dstg) ) / 255));
newb = dstb < 128 ? (2 * ((srcb>>1)+64) * dstb / 255) : (255 - ( ( 2 * (255 - ((srcb>>1)+64)) * (255 - dstb) ) / 255));
break;

case 'hard light' :

newr = srcr < 128 ? (2 * srcr * dstr / 255) : (255 - ( ( 2 * (255 - srcr) * (255 - dstr) ) / 255));
newg = srcg < 128 ? (2 * srcg * dstg / 255) : (255 - ( ( 2 * (255 - srcg) * (255 - dstg) ) / 255));
newb = srcb < 128 ? (2 * srcb * dstb / 255) : (255 - ( ( 2 * (255 - srcb) * (255 - dstb) ) / 255));
break;

case 'dodge' :

newr = srcr + dstr;
newg = srcg + dstg;
newb = srcb + dstb;
break;

case 'burn' :

newr = srcr + dstr - 255;
newg = srcg + dstg - 255;
newb = srcb + dstb - 255;
break;

case 'difference' :

newr = Math.abs(srcr - dstr);
newg = Math.abs(srcg - dstg);
newb = Math.abs(srcb - dstb);
break;
}

newr = newr*srca + dstr*dsta;
newg = newg*srca + dstg*dsta;
newb = newb*srca + dstb*dsta;

newr = outa == 0 ? 0 : newr/outa;
newg = outa == 0 ? 0 : newg/outa;
newb = outa == 0 ? 0 : newb/outa;

tmpm[0] = (newr > 255) ? 255 : ( (newr < 0) ? 0 : newr ) | 0;
tmpm[1] = (newg > 255) ? 255 : ( (newg < 0) ? 0 : newg ) | 0;
tmpm[2] = (newb > 255) ? 255 : ( (newb < 0) ? 0 : newb ) | 0;
tmpm[3] = (255*outa) | 0;

}

dstpx[i] = tmpm[0];
dstpx[i+1] = tmpm[1];
dstpx[i+2] = tmpm[2];
dstpx[i+3] = tmpm[3];
}

dstc.putImageData(dstd,0,0);
return tmpc;
}


The code is significantly longer than the merge function that uses drawImage(). However, this variation can perform the same functionality as the other version plus provide different pixel blending modes.

Performance



Performing pixel-level manipulation can be quite expensive. The number of iterations can easily be in the hundreds of thousands for larger canvases. Every extra arithmetic operation is magnified by all these loops, and the operations are generally performed three times - once per color channel. If you are counting on using pixel manipulation in anything that is time sensitive like animations or games, the code will have to be optimized to perform as few operations as possible. I did not try to do that in this demo; I felt clarity was more important than speed. There are several opportunities for performance increases in my merge function. In the demo, I compared the speed of the drawImage() merging versus the pixel data merge, and the former was four times faster on my computer. This would be expected since the browser is doing the work in drawImage() while the pixel data method is all in Javascript.

Concluding Thoughts



Utilizing these techniques provides a lot of power when working with images or just drawing basic shapes. The demo I built does not recreate the complete layering/merge functionality you might see in Photoshop. However, the pieces are there that would enable it. Since the merge function returns a canvas, I could have added a select box to each layer to allow a different blend mode on each layer. The merge function could then be called to merge the bottom two layers into a resulting canvas, that canvas could be added to the DOM and the source canvases hidden so the resulting blend was visible, and that canvas could then be subsequently merged into the next layer, and so on.
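
For instance, a hedged sketch of that idea (reusing the merge() function above; the modes array pairing a blend mode with each layer is purely illustrative):

// Fold the layers bottom-up, two at a time, so each layer can be blended
// with its own mode. merge() returns a canvas, which is fed back in as the
// new "bottom" layer for the next pass.
function mergeWithModes($layers, modes)
{
    var result = $layers[0], i;

    for (i = 1; i < $layers.length; i++)
    {
        result = merge($([result, $layers[i]]), modes[i]);
    }

    return result;
}

$('#mergeimg')[0].src =
    mergeWithModes($('canvas'), ['normal', 'multiply', 'screen', 'difference']).toDataURL();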

Wednesday, October 10, 2012

Understanding HTML5 Canvas Gradients

If you use a paint program, you can fill an object with a solid color, a pattern, or a gradient between two or more colors.  These features exist when using the HTML Canvas API to draw shapes.  However, there are some differences to keep in mind when trying to use these functions.

The first time I used them, I was not fully aware of the concept so it took me a few tries before getting the results I was expecting.  The example on the right illustrates several shapes drawn with gradient fills.  If I were using a drawing program, I'd draw the shape and fill it with a custom gradient defined for each shape.  However, you'll notice each shape has a different gradient that seems to continue through each shape.  That would be pretty difficult to create by filling each shape individually.  In fact, there are actually only two gradients defined in this example:

  1. A red radial gradient that is assigned to the squares running diagonally from the top/left to the bottom/right.

  2. A linear gradient fading from transparent green to opaque blue that is assigned to the remaining shapes.


The gradients are actually defined in the canvas' coordinate space, and each shape assigned that gradient as a fill style will display the portion of the gradient that would be visible at that shape's coordinates.  It's like creating a layer, flood filling it with a gradient, and then cutting holes in the shapes you want through an opaque layer above it.  This is not an exact analogy because, if you'll notice, the overlapping shapes interact with the transparency defined in the gradient.  So the gradient seems to be copied into each object as well so it can interact with underlying shapes.
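
A quick illustration of that idea (a minimal sketch assuming a 2D context named ctx): one gradient is defined in canvas coordinates and assigned to two different shapes, and each shape only shows the slice of the gradient that falls within its own location.

// One gradient shared by two rectangles - each displays only the portion
// of the gradient that lies under its own coordinates.
var shared = ctx.createLinearGradient(0, 0, 300, 0);
shared.addColorStop(0, 'red');
shared.addColorStop(1, 'blue');

ctx.fillStyle = shared;
ctx.fillRect(10, 10, 100, 100);    // near the red end of the gradient
ctx.fillRect(190, 10, 100, 100);   // near the blue end of the gradient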

To better understand the concept, I decided to render each gradient I used into the whole canvas so I could examine them individually.  Additionally, I plotted some reference shapes on the images to illustrate the control points used in the function calls to create the gradients.

For a radial gradient, I used the following code to construct it:


rfl = ctx.createRadialGradient(100, 100, 10, 150, 150, 250);

rfl.addColorStop(0, 'rgba(255,0,0,1)');
rfl.addColorStop(0.7, 'rgba(255,0,0,0.5)');
rfl.addColorStop(1, 'rgba(255,0,0,0.2)');


Which looks like this:



The function takes two circles to define the start and end of the gradient.  I plotted those on the reference image.  The function itself is pretty useless without the addColorStop() function to set up the actual gradient colors.  The first argument defines the relative position between the two circles at which the color transition is interpolated, while the second argument is the color.  In the simplest case, you need two color stop points - the start and the end.  I used three in this example to move the transition of the transparency further toward the edge of the gradient.  The important thing to notice is that the gradient doesn't just start and end at the defined points.  It actually fills the whole canvas.  What the gradient defines is the space between the start and end.  Anything before the start (inside the small circle) will be the same color that is defined in the addColorStop(0, ...) call.  Anything after the end (outside the larger circle) will be the same color as defined in the addColorStop(1, ...) call.

The linear gradient allows you to really see this concept.  This code defines the gradient on the left:


grd = ctx.createLinearGradient(100,100,150,300);

grd.addColorStop(0, 'rgba(0,255,0,0)');
grd.addColorStop(1, 'rgba(0,0,255,1)');


Which looks like this:



On the image, I plotted the yellow line to illustrate the points passed to the gradient function.  It starts at the top (100,100) and ends at the bottom (150,300).  The two silver lines are perpendicular to the yellow line and define the actual gradient region.  The line simply defines the transition space and could be arbitrarily placed anywhere along those two silver lines to achieve the exact same effect.  Notice how outside the transition region, the color is the same as that defined by the two color stops.  If you assigned this gradient to a shape's fill style, and positioned the shape in the bottom/right corner, it would just look solid blue.  You'll only get the gradient by placing the shape in that area between the silver lines.

The code and example are available in my sandbox. The gradient functionality is quite powerful. You just have to know how to leverage it to achieve the desired effect.

Tuesday, October 9, 2012

Using HTML5 Canvas to Create Cracked Glass Effect

I decided to spend some time working with the HTML Canvas object and see what kind of fun I could have with it.  I stumbled upon an article about creating a cracked glass effect in the context of gaming and how you might approach the problem in an optimal way.  There's no code accompanying that description, so I thought I'd give it a try and see what I could devise.  As I began working out the details, it became apparent there was more to this problem than figuring out how to construct all the cracks in the glass.  I found myself spending a lot more time figuring out how to draw just one cracked line.  As a result, I broke this algorithm into two pieces:

  1. Find all the points to draw a crack from/to.  This follows the concept of the algorithm described in the article above.

  2. Draw a crack between two points defined in step 1.  This function attempts to replicate the look of an individual crack segment.


Before diving into both of these pieces, I decided to start by looking at some real-life examples.  Here's one particular image I thought I would use as a reference:



I was attempting to find more examples of broken glass with more detailed items in the background to see how it looked.  The closest I could find were broken screens of tablets, computers, and other mobile devices.  So I decided to just take an image of cracked glass and overlay it on the image I was testing with, just to see what it might look like:

[caption id="attachment_422" align="alignnone" width="400"> Base sample image[/caption]

After some fussing around in an image editor, I achieved the following result.  I figured this was a good target, so I set out to write some code that could make something that looked close to this:

[caption id="attachment_424" align="alignnone" width="400"> Overlay real broken glass onto the image using photo editing software.[/caption]

The end result after writing the algorithm looks like this:

[caption id="attachment_423" align="alignnone" width="397"> Use the algorithm to create a cracked glass effect on a HTML5 Canvas and overlay it on the image.[/caption]

It's pretty close and completely generated by drawing lines on an HTML5 Canvas.  The demo is in my sandbox; each time you click the "Add Cracks" button, a center point is randomly chosen on the image and the cracks are drawn.  Each run is completely different since everything is randomized.

Now, let's dive into the code and see how it all works.  The function that is called by clicking on "Add Cracks" creates the network of paths that define where cracks will be drawn.  It starts by constructing an array that represents a table of concentric circles growing outward from the center point and lines running outward like spokes on a wheel.  Each row of the table is considered to be one of these circles and each column is one of the lines.  Each cell of the table then represents the point where that line intersects that circle.  The radius of each circle and the angle of each line are randomly generated, so each time the function is called, it creates a different set of points.
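
The actual code is in the sandbox, but a rough, hedged sketch of building that table might look like this (the names are hypothetical):

// Rough sketch: rows are concentric circles, columns are spoke lines, and
// each cell is the point where that spoke crosses that circle. Radii and
// angles are randomized so every run produces a different pattern.
function buildCrackPoints(cx, cy, numCircles, numLines, maxRadius)
{
    var angles = [], radii = [], points = [], r, c;

    for (c = 0; c < numLines; c++)
    {
        // roughly even spokes with some random jitter
        angles.push((c + Math.random()) * 2 * Math.PI / numLines);
    }

    for (r = 0; r < numCircles; r++)
    {
        // radii grow outward with some random jitter
        radii.push((r + 1 + Math.random()) * maxRadius / (numCircles + 1));
    }

    for (r = 0; r < numCircles; r++)
    {
        points[r] = [];
        for (c = 0; c < numLines; c++)
        {
            points[r][c] = {
                x: cx + radii[r] * Math.cos(angles[c]),
                y: cy + radii[r] * Math.sin(angles[c])
            };
        }
    }

    return points;   // points[row][column], matching the table below
}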

Our table of points in the array is now set up in a convenient way to iterate through them and create lines between adjacent points:


 level / line  |  line 0   |  line 1   |  line 2   |  line 3   |  line 4   |
---------------+-----------+-----------+-----------+-----------+-----------+--
 circle r0     | (x0,y0)   | (x1,y1)   | (x2,y2)   | (x3,y3)   | (x4,y4)   |
---------------+-----------+-----------+-----------+-----------+-----------+--
 circle r1     | (x5,y5)   | (x6,y6)   | (x7,y7)   | (x8,y8)   | (x9,y9)   |
---------------+-----------+-----------+-----------+-----------+-----------+--
 circle r2     | (x10,y10) | (x11,y11) | (x12,y12) | (x13,y13) | (x14,y14) |
---------------+-----------+-----------+-----------+-----------+-----------+--
 circle r3     | (x15,y15) | (x16,y16) | (x17,y17) | (x18,y18) | (x19,y19) |
---------------+-----------+-----------+-----------+-----------+-----------+--


Once that base table of points is generated, we simply need to draw individual cracking lines between each point in the table.  There are three types of lines we want to draw:

[caption id="attachment_428" align="alignnone" width="489"> Three types of lines are drawn between each circle in the cracking pattern[/caption]

  1. Line segment A is always drawn to connect each point along the path from the center out to the last circle (or edge of the image).  In the table, these are the points in the same column but in two consecutive rows.

  2. Line segment B also connects points between two rows of the table but instead of points in the same column, two adjacent columns are connected.

  3. Line segment C connects two points in the same row but in two adjacent columns.


Lines B and C are not always drawn; a random cutoff causes them to be drawn more frequently near the center.
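
Continuing the earlier sketch, the traversal over that table might look roughly like this (cutoff and drawCrack() are assumed placeholders, not the demo's actual names):

// Hedged sketch: walk the table of points and connect neighbors.
// A: same column, consecutive rows; B: consecutive rows, adjacent columns;
// C: same row, adjacent columns. B and C get rarer further from the center.
for (var r = 1; r < points.length; r++)
{
    for (var c = 0; c < points[r].length; c++)
    {
        var prev = points[r - 1][c],
            cur  = points[r][c],
            next = points[r][(c + 1) % points[r].length];

        drawCrack(prev, cur);                                      // segment A
        if (Math.random() < cutoff / r) { drawCrack(prev, next); } // segment B
        if (Math.random() < cutoff / r) { drawCrack(cur, next); }  // segment C
    }
}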

So far, we haven't drawn anything on the canvas.  The next step is to figure out how to actually render a crack in our fictitious glass.  The method I chose does the following steps:

  1. Randomly define a slight curve to the line so the crack isn't perfectly straight.

  2. Create a clipping region along the crack line and copy a portion of the image into the canvas, offsetting it slightly to create the refraction you would see in cracked glass (see the sketch after this list).

  3. Step along the path drawing three types of lines along that line at random intervals:

    1. A rectangular shape filled using a radial gradient that fades out to blur the line.  This emulates the opaqueness created in the glass when it's cracked.

    2. A white solid line along the path to create a highlight.  This better defines the actual crack.

    3. Short lines perpendicular to the crack path which simulate the fracturing in the glass along the cracking line segment.



  4. Add some random noise around in the rectangle defined by the bounds of the crack.
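
The clipping-plus-offset step in item 2 is probably the least obvious, so here is a rough, hedged sketch of that idea (not the demo's actual code; img is the source image element and p1/p2 are the ends of one crack segment):

// Hedged sketch of the refraction idea: clip to a narrow band around the
// crack segment, then redraw the source image slightly offset so the area
// inside the band appears shifted, like light bending in cracked glass.
function drawRefraction(ctx, img, p1, p2, width, offset)
{
    ctx.save();

    // Rough clipping band around the segment p1 -> p2
    ctx.beginPath();
    ctx.moveTo(p1.x - width, p1.y - width);
    ctx.lineTo(p2.x - width, p2.y - width);
    ctx.lineTo(p2.x + width, p2.y + width);
    ctx.lineTo(p1.x + width, p1.y + width);
    ctx.closePath();
    ctx.clip();

    // Draw the image shifted a few pixels; only the clipped band is affected
    ctx.drawImage(img, offset, offset);

    ctx.restore();
}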


By controlling the variables used to define the color, length, width, and frequency of those drawn objects, you can create a whole array of visual cracking effects that can mimic various lighting conditions, types of glass, etc.  I tinkered for quite a while adjusting different values.  As a separate project, I might add all those options to the function arguments so I can externally control the styling of the cracks.  For now, I stuck with mimicking my reference image.

The final result is not perfect and there is definitely some room for improvement.  One problem occurs if you try to copy larger parts of the image into the canvas to create a more pronounced refraction effect.  Each call to the drawing function draws on top of the drawing already completed by previous calls.  What happens is the lines drawn in step 3 above get drawn over by the copied sections and become less visible, thus losing part of the effect (this is really bad near the center).  Really, the base image sections need to be copied first and then all the lines drawn on top of them.  Additionally, the reference image has a lot of extra "noise" and fogginess between the actual cracking lines.  I did add some noise to the image but it's not creating the effect as well as I'd like. Overall, this was a fun first pass at the algorithm and gave me a chance to learn a little more about using the HTML5 Canvas object. I'll probably revisit it after spending more time delving into more specific areas of the Canvas API.

EDIT:  Here's my updated post on an improved version of the effect.  An updated demo is in my sandbox.

Saturday, October 6, 2012

Javascript Physics: The Exploding DIVs Experiment

I decided to have some fun and experiment with creating an explosion effect (try it out in my sandbox). For this demo, I just created a bunch of divs of various sizes on the page, placed a blast point, and then using TweenLite, animated the effect of the large magnitude force applied to those divs.

[caption id="attachment_408" align="alignnone" width="595"> Initial setup of divs[/caption]

In the end, everything boils down to these two animations applied to each div that needs to be affected by the explosion:



timeline.insertMultiple(
[
TweenLite.to(this, (o.mmt - o.sst), {
ease: Power2.easeIn,
css: {
top: off.top+o.my,
left: off.left+o.mx,
rotation: o.rota+'deg'
}
}),

TweenLite.to(this, (o.eet - o.mmt), {
ease: Power2.easeOut,
css: {
top: off.top+o.fy,
left: off.left+o.fx,
rotation: o.rotb+'deg'
}
})
], o.sst, 'sequence', 0);



The first is the acceleration phase and the second is the deceleration. The variables in the calls identify the timing and distance. The acceleration curve is handled by the easing feature, so all we need to figure out is when to start moving, when to stop, and how far to go.

An explosion is characterized by a wave of energy radiating out from a source point at a certain speed. The energy and speed are incredibly large in magnitude. For this demo, the initial values of the energy and speed are 10^5 kg*px^2/s^2 and 3000 px/s, respectively. As a result, there's a very fast displacement of the divs in the first millisecond of the effect.

[caption id="attachment_407" align="alignnone" width="595"> Displacement 0.046 seconds[/caption]

The speed of the explosive wave remains constant as it sweeps out; however, the energy decreases in proportion to the inverse square of the distance from the initial point of the blast. This basic information leads to several important points that we need to consider:


  • Each div will need a fictitious mass to determine the effect of the force

  • We need to know how far each div is from the blast point to find the loss in energy

  • The size of the div affects how long the blast force acts on the div to accelerate it



Now we can find the maximum speed our div will achieve due to the force of the explosion:



// pin: energy of explosion (kg * px^2 / s^2)
// dr: distance from blast point (px)
// surf: surface area being affected by force (px)
// psd: speed of the wave (px/s)
// mass: mass of the div (g)

o.osd = (pin * 1000 / (o.dr * o.dr)) * (o.surf / psd) / o.mass;



In addition to the above values, we need to know how much drag is being applied to slow down our divs so they don't keep moving forever. Using that number and the top speed of the div calculated above, we can find how long the div will travel and how far it will go until stopping. Now we have all the information required to find the timing and distance of our divs. The remaining calculations are just finding points on the line from the mid-point of the div to the blast point. We just need to extrapolate the line out based on the distance the div will travel.
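
A hedged sketch of that last step (drag, midX/midY, and blastX/blastY are hypothetical names for the deceleration rate, the div's midpoint, and the blast point; o.osd and o.dr come from the snippet above):

// Constant-deceleration kinematics: how long the div coasts and how far
var coastTime = o.osd / drag,                    // seconds until it stops (feeds the tween durations)
    coastDist = (o.osd * o.osd) / (2 * drag);    // total distance traveled (px)

// Extrapolate the end point along the line from the blast point through the
// div's midpoint (similar triangles; o.dr is the current distance between them)
o.fx = (midX - blastX) * (coastDist / o.dr);     // final x offset
o.fy = (midY - blastY) * (coastDist / o.dr);     // final y offset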

The meat of the code is at the very bottom of the page source and isn't really that much. It's just a lot of math to solve the physics calculations and then some basic trigonometry to find the proportional triangles that define the distance traveled.

The only remaining part that adds another touch of realism is making the divs rotate as they fly apart. That calculation is just a random angle amplified by the distance traveled. The farther/longer the div moves, the more it rotates.
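
Something along those lines might look like this sketch; the scaling is arbitrary and o.td is the hypothetical travel distance from the sketch above:


// Pick a random direction and amount of spin, amplified by how far the div
// travels; the divisor just keeps the rotation in a reasonable range.
var spin = (Math.random() * 2 - 1) * (o.td / 10);   // degrees, + or -

o.rota = spin / 2;   // rotation reached by the end of the acceleration phase
o.rotb = spin;       // total rotation once the div comes to rest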

Overall, the effect is pretty good. You can see the divs closest to the blast point move the fastest and furthest (due to the quick loss of energy in the blast). Smaller divs move faster and further than larger divs, as would be expected in an actual explosion (due to the difference in mass). The initial blast could be made more abrupt by removing the acceleration animation (which generally lasts about a hundredth of a second with the higher speed wave) and performing the first movement in the onStart of the deceleration animation. This would avoid any overhead added by the animation calculations since the acceleration is basically instantaneous. However, because I wanted to be able to step through the explosion or use a slower speed blast, I left the animation in this example. I also found that Firefox seemed to skip more than either Chrome or Opera.
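
As a rough idea of that variation (not what the demo actually does), the jump to the intermediate position could be handled in the deceleration tween's onStart callback; o and off are the same per-div values used earlier:


var el = this;  // the div being animated, as in the tweens above

timeline.insert(
    TweenLite.to(el, (o.eet - o.mmt), {
        ease: Power2.easeOut,
        onStart: function ()
        {
            // jump straight to the top-speed position instead of tweening to it
            $(el).css({top: off.top + o.my, left: off.left + o.mx});
        },
        css: {
            top: off.top + o.fy,
            left: off.left + o.fx,
            rotation: o.rotb + 'deg'
        }
    }), o.mmt);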

If you play with the demo, you can click on a div and watch its motion throughout the animation. I also dumped out all the variables used in the calculations on the left as a reference. Because the demo is built on the GreenSock TweenLite library, you can also move the slider back and forth and see the effect at a slower speed. Additionally, you can move the blast point around by dragging the green square.

This experiment is a nice starting point for other projects that need this type of effect. The demo shows that it is possible to animate a relatively large set of elements fast enough to look reasonably realistic. Additionally, the easing feature in the animation library simplifies the calculations, significantly reducing the complexity of the code. Finally, the TimelineLite object enables building the effect in a convenient container that could be combined with other animations to create a more complex series of movements.

Thursday, October 4, 2012

Using Timelines to Manage Animations with TweenLite

Yesterday, I was writing about how to manage multiple animation sequences in jQuery. The issue I was attempting to solve was how to organize all the animations so you can clearly see the sequence, easily reuse sections, and, in general, just make your life easier. My conclusion, after attempting to build my SVG-based impulse wave animation, was that I had hit the point where jQuery wasn't going to be the best tool for the problem.

So I decided to rewrite the whole thing using GreenSock's JS Animation Platform. Included in this library is the handy TimelineLite (or TimelineMax, depending on your needs), which can organize and control a collection of animation sequences. It's a rather robust little tool that can manage nested timelines, labels, and arbitrary functions, and then lets you control the whole animation from one convenient object. I figure an example is better than a bunch of words, so let's look at the new and improved version of the code.

First, TweenLite has an SVG plugin which depends on Raphael. That meant I needed to swap out the SVG creation I was doing and start using the Raphael library. That support alone made this an easy sell to switch over and we're not even at the timeline yet:


var MAX_R = 25,

paper = Raphael(0, 0, $(window).width(), $(window).height()),
$svg = $('svg'),

wave = paper.circle(-1, -1, 1)
.attr('id', 'wave')
.attr('stroke', 'black')
.attr('fill-opacity', 0.4)
.attr('stroke-opacity', 0.3)
.hide(),

charge = paper.circle(-1, -1, 1)
.attr('id', 'charge')
.attr('fill', 'red')
.attr('stroke', 'none')
.attr('fill-opacity', 0.4)
.hide(),

pulse_x, pulse_y, intensity,
timeline, pulse;

timeline = new TimelineLite({
paused: true
});



Now we have the two SVG circles, created as Raphael objects, that will represent the charging wave and the pulse burst. Additionally, we create a paused timeline since nothing should happen until the mouse is clicked somewhere on the page.

Now, let's define the animations using timeline as the container to manage everything:


/*
Setup the timeline. There are 4 distinct phases to
our pulse animation:
1 - MouseDown Initial charging - wave expands out to MAX_R
2 - Mouse still down and fully charged, show pulsing red circle
3 - Mouse up - collapse wave back to 0 radius
4 - Release the wave expanding it until it animates off the screen

Each part has a label so we can move the play head to that
part of the animation. Additionally, functions are added to
the timeline to setup, repeat, or complete various stages of the
animation.
*/
timeline.insertMultiple(
[

/*
Part 1: Charge the wave.
* Add the label for playback control
* Add function to reset wave attributes and show
* Add the animation
* Add a function to show the pulsing circle
*/

'charging',
function ()
{
wave
.attr('fill', 'white')
.attr('fill-opacity', 0.4)
.attr('stroke-opacity', 0.3)
.show();
},

TweenLite.fromTo(wave, 0.5, {
raphael: {
r: 1,
'stroke-width': 1
}
}, {
ease: Linear.easeNone,
raphael: {
r: MAX_R,
'stroke-width': 5
}
}),

function () {charge.show();},

/*
Part 2: Charged and waiting.
* Add the label for playback control
* Add the animation
* Add a function to repeat the animation
*/

'charged',
TweenLite.fromTo(charge, 0.5, {
raphael: {
r: 1,
'fill-opacity': 0.8
}
}, {
ease: Linear.easeNone,
raphael: {
r: MAX_R,
'fill-opacity': 0.05
}
}),

function () {timeline.play('charged')},

/*
Part 3: Start discharge process.
* Add the label for playback control
* Add function to hide the pulsing circle
* Add the animation
*/

'discharge',
function () {charge.hide();},

TweenLite.to(wave, 0.25, {
ease: Linear.easeNone,
raphael: {
r: 1,
'fill-opacity': 0,
'stroke-width': 1
}
}),

/*
Part 4: Emit the wave.
* Add the label for playback control
* Add the animation - save a reference for later
* Add function to hide everything
*/

'pulse',
(pulse = TweenLite.fromTo(wave, 0.15, {
raphael: {
r: 1,
'stroke-width': 1,
'stroke-opacity': 0.3
}
}, {
ease: Linear.easeNone,
raphael: {
r: 1000,
'stroke-width': 100,
'stroke-opacity': 0.1
}
})),

function ()
{
charge.hide();
wave.hide();

pulse_x = -1;
pulse_y = -1;
}

], 0, 'sequence', 0.001);


Ok, that's a fairly large chunk of code, so let's break it down a bit. First, there are several ways to add elements to a timeline. I chose insertMultiple() since I already knew how everything was being sequenced ahead of time. I highly recommend reading through the documentation to see all the different alternatives available. Next, if you look at the first three elements added to the timeline, you can see how much power this tool has for building complex animations:



/* Add a label so we can refer to it in timeline.play() */
'charging',

/* Add a function that will ensure the SVG element is
reset to starting values and visible */

function ()
{
wave
.attr('fill', 'white')
.attr('fill-opacity', 0.4)
.attr('stroke-opacity', 0.3)
.show();
},

/* Now add the actual animation of growing the circle outward
to the MAX_R defined at the beginning of the code. Since
this is an SVG object, we need to pass a raphael object to
TweenLite so the plugin is used to do the animation */

TweenLite.fromTo(wave, 0.5, {
raphael: {
r: 1,
'stroke-width': 1
}
}, {
ease: Linear.easeNone,
raphael: {
r: MAX_R,
'stroke-width': 5
}
}),


Using the timeline, I can add labels to define important points in the animation that I can later refer to in other functions like play(). Additionally, I can add functions that perform various tasks as the playback passes through that point in the timeline. These have a duration of zero, so they will not affect the timing of the animation, but they can be quite helpful for executing tasks that are not animated. In this case, I ensure the SVG element is properly styled before showing it on the page. Later in the timeline, I use a function to restart the charged pulsing animation in a loop until the user releases the mouse button. I could have put these functions in the onStart and onComplete callbacks of the actual TweenLite animations; however, I like seeing them inline with the rest of the timeline because it's clear in what order everything runs.
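
For comparison, here's roughly what that callback-based alternative would look like for the charging step (a sketch, not the demo's code) - the reset moves into onStart and the charge reveal into onComplete:


TweenLite.fromTo(wave, 0.5, {
    raphael: {
        r: 1,
        'stroke-width': 1
    }
}, {
    ease: Linear.easeNone,
    raphael: {
        r: MAX_R,
        'stroke-width': 5
    },
    onStart: function ()
    {
        // reset the wave's styling just before it starts growing
        wave
            .attr('fill', 'white')
            .attr('fill-opacity', 0.4)
            .attr('stroke-opacity', 0.3)
            .show();
    },
    onComplete: function ()
    {
        // fully charged - show the pulsing red circle
        charge.show();
    }
});


Functionally the result is the same; keeping the functions inline in the timeline just makes the overall sequence easier to read.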

There's one minor detail I'd like to point out in the insertMultiple() call - the last parameter is used to stagger the individual elements in the timeline. I passed 0.001 here because I was having an issue with the function immediately after the 'discharge' label not getting called. Without the stagger, several zero-duration elements pile up at exactly the same time, and moving the play head to that point seemed to prevent the function from being called. A small stagger did the trick and is something to keep in mind when building timelines with an assortment of labels and functions.

Now that my timeline is setup, I just need to control it from a few mouse events:


$svg
.on('mousedown', function (e)
{
var pos = $(document.body).offset();

pulse_x = e.pageX - pos.left;
pulse_y = e.pageY - pos.top;

wave
.attr('cx', pulse_x)
.attr('cy', pulse_y);

charge
.attr('cx', pulse_x)
.attr('cy', pulse_y);

timeline.play('charging');

})
.on('mouseup', function (e)
{
intensity = wave.attr('r') / MAX_R;

/*
Update target values based on run-time
intensity calculation. Need to use invalidate
to clean out any caching and force recalculation
*/

pulse.invalidate();
pulse.vars.raphael['stroke-width'] = 100*intensity;
pulse.vars.raphael['stroke-opacity'] = 0.3*intensity;

timeline.play('discharge');
});


When the user holds the mouse down, the code moves the SVG elements to the point they clicked and starts the charging phase of the animation using the predefined label in the timeline. The timeline runs the first function, the charging animation, and the second function; then, after the 'charged' label, it runs the next animation and the third function, which replays that animation by calling timeline.play('charged'). This repeats over and over as long as the user holds the mouse down. Once they release the mouse button, the playback is moved to the 'discharge' label to finish the animation.

You'll notice I added an intensity calculation in the mouseup event. If the user does not hold the mouse down for the full half second required to charge the wave to full strength (25 pixels), the resulting pulse gets a thinner, less visible line to make it appear less intense. Since the animation was already set up for the full-strength effect in the timeline (or from a previous animation cycle), I need to invalidate it and then reset the target values using the reference I saved when I created the timeline.

So that was my little voyage into learning how to use the TimelineLite object. I just barely scratched the surface of what can be achieved with this part of the animation library. I think you can clearly see the advantages it provides over trying to do the equivalent functionality in jQuery. If you're planning a project that depends heavily on sequencing and controlling the animation of multiple elements, these features will not only simplify the construction of the animation, but your code will also be more readable and reusable, making your life as a developer just a little bit easier.

Wednesday, October 3, 2012

Managing Multiple Animation Sequences in jQuery

The animate function in jQuery provides a quick and convenient method of animating an element's styles. A problem you might run into when working with animate() is how to set up sequences of animations to run based on certain events or on other animations completing. The challenge is coding the animation sequences so you can read the code and understand the flow of the animation. If it's just one element (or a group that needs to do the same thing), then you can just chain multiple animate() calls together:



// Starting at 0,0
$('#myDiv')
.animate({left: 50}, 1000)
.delay(1000)
.animate({top: 50}, 1000);



This simply animates the div's left property to 50px over 1 second, waits 1 second, and then animates its top property to 50px over 1 second. So far, nothing we can't understand. We could get a little fancier and perform the animation based on a click event:



$('#goDiv').click(function (e) {

$('#myDiv')
.css({left: e.pageX, top: e.pageY})
.show()
.animate({left: 0}, 1000)
.delay(1000)
.animate({top: 0}, {
duration: 1000,
complete: function () {
$(this).hide();
}
});

});


Now we're animating from the mouse coordinates of the click to the top left of the screen. Additionally, the element is only made visible during the animation and then hidden upon finishing via the complete callback.

Again, so far everything is pretty readable and not overly complex. But what if we want to coordinate the animation of two divs based on the click event and stagger their animations:



$('.box').css('display', 'none');

$('#goDiv').click(function (e) {

var pos = $(this).offset();

$('#myDiv1')
.stop(true)
.css({left: e.pageX-pos.left, top: e.pageY-pos.top})
.show(250)
.animate({left: 0}, 1000)
.delay(1000)
.animate({top: 0}, {
duration: 1000,
complete: function () {
$(this).hide(250);
}
});

$('#myDiv2')
.stop(true)
.css({left: e.pageX-pos.left, top: e.pageY-pos.top})
.delay(1000)
.show(250)
.animate({top: 0}, 1000)
.delay(1000)
.animate({left: 0}, {
duration: 1000,
complete: function () {
$(this).hide(250);
}
});

});



It's not readily obvious how these animations are timed or even how they work together. Here's a demo of the functionality the above code implements; it's easier to see it than to visualize it from the code. You could try placing the myDiv2 animation in the complete callback of myDiv1's first animation. This would make it more obvious that it starts after myDiv1 moves to the left:



$('.box').css('display', 'none');

$('#goDiv').click(function (e) {

var pos = $(this).offset();

$('#myDiv1')
.stop(true)
.css({left: e.pageX-pos.left, top: e.pageY-pos.top})
.show(250)
.animate({left: 0}, {
duration: 1000,
complete: function () {
$('#myDiv2')
.stop(true)
.css({left: e.pageX-pos.left, top: e.pageY-pos.top})
.show(250)
.animate({top: 0}, 1000)
.delay(1000)
.animate({left: 0}, {
duration: 1000,
complete: function () {
$(this).hide(250);
}
});
}
})
.delay(1000)
.animate({top: 0}, {
duration: 1000,
complete: function () {
$(this).hide(250);
}
});

});



I can't say this is really any more elegant or readable. And if more elements were involved with more complex timings, the code would be virtually impossible to understand. This is the problem I encountered when working on an SVG-based animation that simulates a pulse wave when the mouse is clicked. As I built the sequences of animations, it became apparent that I was going to need to organize my code in a way that made it easier to understand what was happening. Otherwise, in a month, when I wanted to do something with it, I would have no idea. I also intended to reuse parts of the effect in other projects and wanted something reasonably modular that I could plug in without much effort. The problem was further compounded by the fact that I needed to use the step callback to manually manage the animated properties of the SVG elements.

My solution, while not perfect, was to move the sequencing code out of the control code by creating functions that could be used in the step and complete callbacks of the animate function. This enabled me to keep the event handler code as short as possible and organize the animation logic in an order that approximated the sequencing of the actual animation.

Using that same approach with the example I developed in this post:



$('.box').css('display', 'none');

function div1Show()
{
$('#myDiv1').show(250, div1Left);
}

function div1Left()
{
$('#myDiv1').animate({left: 0}, {
duration: 1000,
complete: function ()
{
div2Show();
div1Top();
}
});
}

function div2Show()
{
$('#myDiv2').show(250, div2Top);
}

function div2Top()
{
$('#myDiv2').animate({top: 0}, {
duration: 1000,
complete: div2Left
});
}

function div1Top()
{
$('#myDiv1')
.delay(1000)
.animate({top: 0}, {
duration: 1000,
complete: div1Hide
});
}

function div1Hide()
{
$('#myDiv1').hide(250);
}

function div2Left()
{
$('#myDiv2')
.delay(1000)
.animate({left: 0}, {
duration: 1000,
complete: div2Hide
});
}

function div2Hide()
{
$('#myDiv2').hide(250);
}


$('#goDiv').click(function (e) {

var pos = $(this).offset();

$('#myDiv1')
.stop(true)
.css({left: e.pageX-pos.left, top: e.pageY-pos.top});

$('#myDiv2')
.stop(true)
.css({left: e.pageX-pos.left, top: e.pageY-pos.top});

div1Show();

});


Now, the click event only handles setting the initial position and kicking off the animation sequence. Each part of the animation is organized in a different function, in approximately the order it will run. However, without even looking at the code in the functions, you can get an idea of the flow; the function names provide a reasonable outline of what's happening. The other advantage is that the animation is now modular - I can use any part of it, repeat a section, skip parts, etc. The demo is running on this last version and I left all the other variations in the demo's source for reference.
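
As a quick illustration of that modularity (the #replayDiv2 button is hypothetical, not part of the demo), another handler could reuse just the second div's slice of the sequence:


$('#replayDiv2').click(function () {

    // Reposition myDiv2 somewhere arbitrary, then run only its portion of
    // the sequence: div2Show -> div2Top -> div2Left -> div2Hide
    $('#myDiv2')
        .stop(true)
        .css({left: 200, top: 200});

    div2Show();

});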

There's certainly no requirement to organize the code in this fashion. However, if you are building complex animation sequences, it may be advantageous to consider a similar approach to constructing your animations. Ultimately, it will provide flexible and readable code that you will probably appreciate in the future when you need to understand what you wrote and why you wrote it.

Monday, October 1, 2012

Raphaël: Getting/Setting SVG Element Attributes

Raphaël is a nice library for working with SVG objects.  It uses a syntax similar to jQuery's for getting/setting attribute properties on an element. Unlike normal DOM elements, which use the style attribute to manage their appearance, SVG uses individual attributes on the tag to define the styling options (some of these can also be placed in the style attribute, but for consistency, I prefer using attributes). As a result, when working with SVG elements, you make heavy use of the attr() function instead of the css() function. Both have an identical syntax; however, the css() function only affects the style attribute of the element. Technically, you can use attr() to set the style attribute, but you can't set individual styles, just the whole string. As in jQuery, Raphaël has an attr() function to get/set the individual attributes on an SVG element. There is more than one way to call attr(), and the flexibility of the function in both jQuery and Raphaël allows you to pick the method appropriate for what you're trying to achieve.

Using jQuery, one variation allows you to set one attribute at a time by passing a string as the first argument and the value as the second:



$('#mySVG')
.attr('stroke-width', 2);



Here, we use the actual attribute name with the hyphen. Raphaël allows the same syntax. If you do not pass the second argument, both libraries will return the current value of stroke-width.
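
For example, reading the value back in each library (rect being a Raphaël element, as in the examples that follow):


// jQuery - returns the attribute value from the DOM element
var w1 = $('#mySVG').attr('stroke-width');

// Raphaël - returns the element's current stroke-width
var w2 = rect.attr('stroke-width');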

If you have several attributes to set, you can pass an object hash to attr():



$('#mySVG')
.attr({
strokeWidth: 2,
fill: 'blue'
});



Notice how you can use camel case in this situation. This is where jQuery and Raphaël differ: it turns out you can't use camel case when passing an object hash to Raphaël's attr(). Instead, you must quote the object keys that contain hyphens to avoid a Javascript error:



// rect is a Raphaël Element:

rect.attr({
'stroke-width': 2,
fill: 'blue'
});



Similarly, if you get the attributes of an element, the keys will not be camel cased, so you will not be able to use dot access on the returned object for hyphenated names. You will need to use square bracket notation instead:



var a = rect.attr({
'stroke-width': 2,
fill: 'blue'
});

// Will not work and will throw an error
// ReferenceError: invalid assignment left-hand side
sw = a.stroke-width;

// Use this instead
sw = a['stroke-width'];



It was not clear from the Raphaël documentation whether the camel case alternative was valid. However, after a significant amount of time rechecking my code for issues and digging through the Raphaël source, it does appear that camel case is not a valid syntax. Maybe this is a no-brainer for those who aren't using jQuery, but it's something to be aware of if you are familiar with the jQuery approach. The nice thing about having the camel case alternative is the ability to work with objects using any of the available Javascript object access methods without having to quote keys or run into errors caused by the hyphen character.