Friday, October 12, 2012

HTML Canvas ImageData: Creating Layers and Blending Pixels

Drawing images using layers provides a lot of flexibility. Generally, once you place an object on the canvas, it's not movable, nor can you draw something under it. Hence, the order in which you perform the drawing becomes important. This differs from SVG or other HTML elements, which can either be moved around the DOM or have their z-index adjusted. As I worked on my cracked glass effect, I realized that using layers would help me achieve the desired result. Animations and games would also benefit from using multiple canvases to create various effects and ease drawing operations. I built a simple demo in my sandbox that partially mimics how layers are set up in an image editor like Photoshop, constructing the picture on the right from several canvas objects and experimenting with different blending effects. In the end, there is an easy way and a hard way to make this work - the harder way adds more features and flexibility at the cost of performance.


Creating Layers


The first part of the process is fairly easy. You can stack canvas tags by positioning them absolutely inside a relatively positioned parent. Now, anything you draw will look like one image, but you're actually working with distinct layers that you can reorder, change the opacity of, and draw shapes or images onto:


<style>
.wrapper {
position: relative;
}

.wrapper > * {
position: absolute;
}
</style>

<div id="drawing" class="wrapper">
<canvas id="flowers"></canvas>
<canvas id="gradient"></canvas>
<canvas id="circle"></canvas>
<canvas id="inverted"></canvas>
</div>
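The snippets that follow reference a jQuery collection named $canvas along with the base image's dimensions w and h. That setup isn't shown in the original listing, but a minimal sketch (assuming the hidden base image has already loaded) might look like this:


// Assumed setup, not part of the demo source: collect the layer canvases
// and size each one to match the hidden base image before drawing.
var img = $('#baseimg')[0],
w = img.naturalWidth,
h = img.naturalHeight,
$canvas = $('#drawing canvas');

$canvas.each(function ()
{
this.width = w;
this.height = h;
});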


I labeled each canvas so that, as I debugged moving layers around, I could see in Firebug that they moved where I expected. Now I can draw each layer:


// Setup background
// -----------------------------------------------------
ctx = $canvas[0].getContext('2d');
ctx.drawImage($('#baseimg')[0], 0, 0, w, h);


This copies the hidden image I placed on the page into the first layer.
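One caveat worth noting, which the demo glosses over: drawImage() silently draws nothing if the source image hasn't finished loading, so the drawing code should only run once the image's load event has fired. A minimal guard might look like the following, where drawLayers() is a hypothetical function wrapping the drawing code in this post:


// Hypothetical guard: start drawing only after the base image is ready.
$('#baseimg').on('load', function ()
{
drawLayers();
}).each(function ()
{
// Handle the case where the image was already cached and loaded
// before the handler was attached.
if (this.complete) { $(this).trigger('load'); }
});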



Now, create a gradient. One end of the gradient is fully transparent, so the flowers show through that part of the layer:


// Add a fill layer
// -----------------------------------------------------
ctx = $canvas[1].getContext('2d');

grd = ctx.createLinearGradient(100,100,150,300);

grd.addColorStop(0, 'rgba(0,255,0,0)');
grd.addColorStop(1, 'rgba(0,0,255,1)');

ctx.fillStyle = grd;
ctx.fillRect(0,0,w,h);




A canvas is transparent by default, so drawing shapes only fills those areas, leaving everything else visible:


// Add a shape layer
// -----------------------------------------------------
ctx = $canvas[2].getContext('2d');

ctx.fillStyle = 'yellow';
ctx.strokeStyle = 'black';
ctx.lineWidth = 1;
ctx.shadowColor = 'purple';
ctx.shadowBlur = 15;
ctx.shadowOffsetX = -5;
ctx.shadowOffsetY = 5;

ctx.beginPath();
ctx.arc(200,200,100,0,2*Math.PI);
ctx.stroke();
ctx.fill();

ctx.lineWidth = 3;
ctx.shadowColor = 'white';
ctx.shadowBlur = 1;
ctx.shadowOffsetX = 2;
ctx.shadowOffsetY = 2;

ctx.beginPath();
ctx.moveTo(0,150);
ctx.lineTo(250,0);
ctx.stroke();


I got a little fancy with the shapes because I wanted to try out the shadow options available in the canvas API.



For the last layer, I want to copy a part of the original flower image and invert it. This process demonstrates accessing the ImageData to apply a filter by manipulating the individual pixels contained in the canvas:



function invert(ctx)
{
var imd = ctx.getImageData(0,0,ctx.canvas.width,ctx.canvas.height),
imp = imd.data,
len = imp.length;

for (var i=0;i<len;i+=4)
{
imp[i] = 255 - imp[i];
imp[i+1] = 255 - imp[i+1];
imp[i+2] = 255 - imp[i+2];
}

ctx.putImageData(imd,0,0);
}

// Add a layer, copy part of image and invert it
// -----------------------------------------------------
ctx = $canvas[3].getContext('2d');

ctx.drawImage($('#baseimg')[0], 200, 200, 200, 200, 200, 200, 200, 200);
invert(ctx);




Now I have four individual canvas layers that can be independently manipulated (as sketched below) and the layering effect will be maintained. At this point, you could stop. If all you wanted was to be able to layer some drawings, this setup would work fine. However, I want to work with some pixel blending algorithms, so I will need to merge these canvases to accomplish that goal.
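To illustrate the "independently manipulated" part: since each layer is just a DOM element, it can be reordered, hidden, or faded with ordinary DOM and CSS calls rather than a redraw (a small sketch, not part of the demo):


$('#circle').insertAfter('#inverted');   // move the shape layer to the top of the stack
$('#flowers').hide();                    // temporarily hide the flower layer
$('#gradient').css('opacity', 0.5);      // fade the gradient layer without touching its pixels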

Combining Canvases



Flattening the canvases into one canvas is just a simple process of creating a temporary destination canvas and then copying each canvas layer, bottom up, using the context drawImage() function. Each canvas is treated as an image and can be passed directly to the function. Once complete, you can either add the canvas to the DOM or call toDataURL() on the final canvas to extract the image data and set it as the source of an image tag:


function merge2(layers)
{
var tmpc = $('<canvas>')[0],
dstc,
h = layers[0].height,
w = layers[0].width;

tmpc.height = h;
tmpc.width = w;
dstc = tmpc.getContext('2d');

layers.each(function (idx)
{
dstc.globalAlpha = +$(this).css('opacity');
dstc.drawImage(this, 0, 0);
});

return tmpc;
}

var merged_canvas = merge2($('canvas'));
$('#mergeimg')[0].src = merged_canvas.toDataURL();



You may notice that I alter the globalAlpha setting prior to each draw operation. In my demo, I added sliders to change the opacity of the canvases. The circle, for example, is drawn completely opaque; if I'd like it to show some of the background, I don't need to redraw it, I just lower the opacity of its canvas. When I merge the layers, I need to account for the opacity on the element (as opposed to the alpha of the pixels in the canvas) by copying the opacity of the canvas into the globalAlpha setting. All the pixels then inherit this additional alpha value when the blending is performed by the drawImage() operation.
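The slider wiring itself is not shown above; a simplified version (the input id here is made up) amounts to setting the element's opacity and re-merging:


// Hypothetical slider handler: change the layer's CSS opacity, then
// rebuild the flattened image so the change shows up in the result.
$('#circle-opacity').on('change', function ()
{
$('#circle').css('opacity', this.value / 100);   // assumes a 0-100 slider
$('#mergeimg')[0].src = merge2($('canvas')).toDataURL();
});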

Blending Modes



So far we've only performed a standard blend - the top image is drawn over the underlying image, and the alpha channel is used to composite them. However, if you've used Photoshop, you know there are quite a few other methods of blending multiple layers together; multiply, screen, and burn are just a few of the options. Each of these blend modes calculates the final pixel color by performing a series of basic math operations on each pixel of the two layers being merged. For instance, I can use a Difference function to blend the four layers into the following image:



To achieve this type of blending, we need to iterate over each pixel and find the absolute value of the difference:


newr = Math.abs(srcr - dstr);
newg = Math.abs(srcg - dstg);
newb = Math.abs(srcb - dstb);


In this snippet, src is a pixel from the current layer (canvas) being merged and dst is a pixel at the same relative location in the destination canvas, which holds all the layers merged so far. Each color channel has to be calculated individually, so the same operation is repeated for the red, green, and blue components. I skipped ahead a little at this point, so the unanswered question remains: how do we get to the pixels so we can manipulate them? As I quickly pointed out in the invert() function above, the canvas context has a getImageData() function which allows you to access the raw pixel data. Before running our blending math, we need to do some setup to create variables that point to this data:



var tmpc = $('<canvas>')[0],
dstc, dstd, dstpx, dsta,
tmpm, len, cnt, srcpx, alppx,
h = layers[0].height,
w = layers[0].width;

tmpc.height = h;
tmpc.width = w;

dstc = tmpc.getContext('2d');

dstd = dstc.createImageData(w,h);
dstpx = dstd.data;

len = dstpx.length;
tmpm = new Array(4);

cnt = layers.length;
srcpx = new Array(cnt);
alppx = new Array(cnt);

layers.each(function (idx)
{
var ctx = this.getContext('2d'),
imd = ctx.getImageData(0,0,w,h);

srcpx[idx] = imd.data;
alppx[idx] = +$(this).css('opacity');
});



Here, I'm creating my destination canvas and then creating variables to reference the ImageData.data value for the destination and all the source layers. An important point to notice in this code is the use of createImageData() for the destination. Since nothing has been drawn on this canvas yet, createImageData() is used to get a blank ImageData object of the right size, with every pixel initialized to transparent black. If you forget this step, nothing will be drawn in the canvas and you will spend a lot of time trying to figure out why.

The pixel data is stored in one contiguous array where each pixel is represented by 4 consecutive elements, ordered red, green, blue, alpha. So any loop needs to step by 4 on each iteration, and you reference the pixel data array using data[i] for the red channel, data[i+1] for green, etc.:



...

for (i=0;i<len;i+=4)
{

r = srcpx[l][i];     // red channel of layer l
g = srcpx[l][i+1];   // green
b = srcpx[l][i+2];   // blue
a = srcpx[l][i+3];   // alpha

...



In my setup process, you'll note that I'm copying the canvas opacity into the variable alppx[idx]. This is needed because the element's transparency is not automatically handled the way it is in drawImage(); we will need to perform all of the alpha compositing ourselves as part of the other blending calculations:


srca = srcpx[l][i+3] / 255 * alppx[l];
dsta = tmpm[3] / 255*(1-srca);
outa = (srca + tmpm[3]*(1-srca)/255);

newr = newr*srca + dstr*dsta;
newg = newg*srca + dstg*dsta;
newb = newb*srca + dstb*dsta;

newr = outa == 0 ? 0 : newr/outa;
newg = outa == 0 ? 0 : newg/outa;
newb = outa == 0 ? 0 : newb/outa;


In this portion of the code, the new colors have already been found by applying the selected blend mode; this step takes the alpha channels (and the opacity from the canvas) and combines them using the Porter-Duff "over" compositing method.
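For reference, with all alpha values scaled to the 0-1 range, the unpremultiplied "over" equations the code implements are:


outA = srcA + dstA * (1 - srcA)
outC = (srcC * srcA + dstC * dstA * (1 - srcA)) / outA


Here srcC is the blended source color and dstC is the accumulated destination color; the code's dsta variable already holds dstA * (1 - srcA), which is why only the final division by outa remains.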

The important thing to keep in mind is that the pixel data uses 0-255 (0 = transparent, 255 = opaque) for the alpha range, while the CSS opacity uses 0-1 (0 = transparent, 1 = opaque). The compositing operation assumes values between 0 and 1, so I converted them accordingly. Once the new RGB values are found with the correct alpha compositing, they need to be clipped, rounded, and then set into the destination's pixel data:


tmpm[0] = (newr > 255) ? 255 : ( (newr < 0) ? 0 : newr ) | 0;
tmpm[1] = (newg > 255) ? 255 : ( (newg < 0) ? 0 : newg ) | 0;
tmpm[2] = (newb > 255) ? 255 : ( (newb < 0) ? 0 : newb ) | 0;
tmpm[3] = (255*outa) | 0;

dstpx[i] = tmpm[0];
dstpx[i+1] = tmpm[1];
dstpx[i+2] = tmpm[2];
dstpx[i+3] = tmpm[3];


Clipping is required for some of the blending methods since they might calculate a value over 255 or under 0. I used a bitwise OR with zero to quickly truncate the value to a whole integer. Once all the pixels are calculated in this manner, they just need to be placed back into the destination canvas using putImageData():


dstc.putImageData(dstd,0,0);



In my demo, I implemented several different blend modes. Wikipedia provides a great reference and compendium of the various modes in common use. Here is the full source of the merge function:


function merge(layers, mode)
{
var tmpc = $('<canvas>')[0],
mode = mode || 'normal',
dstc, dstd, dstpx, dsta,
tmpm, len, srcpx, alppx,
h = layers[0].height,
w = layers[0].width,
i, l, cnt, wt, srca, outa,
srcr, srcg, srcb,
dstr, dstg, dstb,
newr, newg, newb;

tmpc.height = h;
tmpc.width = w;
dstc = tmpc.getContext('2d');

dstd = dstc.createImageData(w,h);
dstpx = dstd.data;

len = dstpx.length;
tmpm = new Array(4);

cnt = layers.length;
srcpx = new Array(cnt);
alppx = new Array(cnt);

layers.each(function (idx)
{
var ctx = this.getContext('2d'),
imd = ctx.getImageData(0,0,w,h);

srcpx[idx] = imd.data;
alppx[idx] = +$(this).css('opacity');
});

for (i=0;i<len;i+=4)
{
// Seed with first layer
tmpm[0] = srcpx[0][i];
tmpm[1] = srcpx[0][i+1];
tmpm[2] = srcpx[0][i+2];
tmpm[3] = srcpx[0][i+3] * alppx[0];

/*
Now merge each layer from the bottom up:
1) Find each alpha value (converted to 0-1)
2) Perform the blend mode calculation on each channel
3) Perform alpha compositing between the current background and the new RGB values
4) Clip (if necessary) and set the final color and alpha
*/

for (l=1;l<cnt;l++)
{
srca = srcpx[l][i+3] / 255 * alppx[l];
dsta = tmpm[3] / 255*(1-srca);
outa = (srca + tmpm[3]*(1-srca)/255);

srcr = srcpx[l][i];
srcg = srcpx[l][i+1];
srcb = srcpx[l][i+2];

dstr = tmpm[0];
dstg = tmpm[1];
dstb = tmpm[2];

switch (mode)
{
case 'normal' :

newr = srcr;
newg = srcg;
newb = srcb;
break;

case 'multiply' :

newr = srcr * dstr / 255;
newg = srcg * dstg / 255;
newb = srcb * dstb / 255;
break;

case 'screen' :

newr = 255 - ( ( (255 - srcr) * (255 - dstr) ) / 255);
newg = 255 - ( ( (255 - srcg) * (255 - dstg) ) / 255);
newb = 255 - ( ( (255 - srcb) * (255 - dstb) ) / 255);
break;

case 'overlay' :

newr = dstr < 128 ? (2 * srcr * dstr / 255) : (255 - ( ( 2 * (255 - srcr) * (255 - dstr) ) / 255));
newg = dstg < 128 ? (2 * srcg * dstg / 255) : (255 - ( ( 2 * (255 - srcg) * (255 - dstg) ) / 255));
newb = dstb < 128 ? (2 * srcb * dstb / 255) : (255 - ( ( 2 * (255 - srcb) * (255 - dstb) ) / 255));
break;

case 'soft light' :

newr = dstr < 128 ? (2 * ((srcr>>1)+64) * dstr / 255) : (255 - ( ( 2 * (255 - ((srcr>>1)+64)) * (255 - dstr) ) / 255));
newg = dstg < 128 ? (2 * ((srcg>>1)+64) * dstg / 255) : (255 - ( ( 2 * (255 - ((srcg>>1)+64)) * (255 - dstg) ) / 255));
newb = dstb < 128 ? (2 * ((srcb>>1)+64) * dstb / 255) : (255 - ( ( 2 * (255 - ((srcb>>1)+64)) * (255 - dstb) ) / 255));
break;

case 'hard light' :

newr = srcr < 128 ? (2 * srcr * dstr / 255) : (255 - ( ( 2 * (255 - srcr) * (255 - dstr) ) / 255));
newg = srcg < 128 ? (2 * srcg * dstg / 255) : (255 - ( ( 2 * (255 - srcg) * (255 - dstg) ) / 255));
newb = srcb < 128 ? (2 * srcb * dstb / 255) : (255 - ( ( 2 * (255 - srcb) * (255 - dstb) ) / 255));
break;

case 'dodge' :

newr = srcr + dstr;
newg = srcg + dstg;
newb = srcb + dstb;
break;

case 'burn' :

newr = srcr + dstr - 255;
newg = srcg + dstg - 255;
newb = srcb + dstb - 255;
break;

case 'difference' :

newr = Math.abs(srcr - dstr);
newg = Math.abs(srcg - dstg);
newb = Math.abs(srcb - dstb);
break;
}

newr = newr*srca + dstr*dsta;
newg = newg*srca + dstg*dsta;
newb = newb*srca + dstb*dsta;

newr = outa == 0 ? 0 : newr/outa;
newg = outa == 0 ? 0 : newg/outa;
newb = outa == 0 ? 0 : newb/outa;

tmpm[0] = (newr > 255) ? 255 : ( (newr < 0) ? 0 : newr ) | 0;
tmpm[1] = (newg > 255) ? 255 : ( (newg < 0) ? 0 : newg ) | 0;
tmpm[2] = (newb > 255) ? 255 : ( (newb < 0) ? 0 : newb ) | 0;
tmpm[3] = (255*outa) | 0;

}

dstpx[i] = tmpm[0];
dstpx[i+1] = tmpm[1];
dstpx[i+2] = tmpm[2];
dstpx[i+3] = tmpm[3];
}

dstc.putImageData(dstd,0,0);
return tmpc;
}


The code is significantly longer than the merge function that uses drawImage(). However, this variation provides the same functionality as the other version plus the different pixel blending modes.
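Calling it mirrors the earlier merge2() usage, with the blend mode passed as the second argument:


// Blend all of the layers with the 'difference' mode and show the result.
var blended = merge($('canvas'), 'difference');
$('#mergeimg')[0].src = blended.toDataURL();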

Performance



Performing pixel-level manipulation can be quite expensive. The number of iterations can easily be in the hundreds of thousands for larger canvases. Every extra arithmetic operation is magnified by all of these loops, and most operations are performed three times - once for each color channel. If you are counting on pixel manipulation in anything time sensitive, like animations or games, the code will have to be optimized to perform as few operations as possible. I did not try to do that in this demo; I felt clarity was more important than speed, and there are several opportunities for performance increases in my merge function. In the demo, I compared the speed of the drawImage() merge versus the pixel data merge, and the former was four times faster on my computer. This is expected, since the browser does the work in drawImage() while the pixel data method runs entirely in JavaScript.
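As one example of an optimization that the demo does not implement, the switch statement above runs for every pixel even though the mode never changes; picking a per-channel blend function once, before the loop, removes that per-pixel branching:


// Sketch: select the blend function a single time, outside the pixel loop.
var blendFns = {
'normal': function (s, d) { return s; },
'multiply': function (s, d) { return s * d / 255; },
'difference': function (s, d) { return Math.abs(s - d); }
},
blend = blendFns[mode] || blendFns['normal'];

// Inside the pixel loop, the entire switch then collapses to:
// newr = blend(srcr, dstr);
// newg = blend(srcg, dstg);
// newb = blend(srcb, dstb);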

Concluding Thoughts



Utilizing these techniques provides a lot of power when working with images or just drawing basic shapes. The demo I built does not recreate the complete layering and merging functionality you might see in Photoshop, but the pieces that would enable it are there. Since the merge function returns a canvas, I could have added a select box to each layer so that each layer could use its own blend mode. The merge function would then be called to merge the bottom two layers, the resulting canvas would be added to the DOM with the source canvases hidden so the blend was visible, and that result would then be merged into the next layer up, and so on.
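A rough sketch of that idea (the flattenWithModes() function and the mode list are made up here, not part of the demo):


// Hypothetical: flatten the layers upward two at a time so that each
// layer can be composited with its own blend mode (modes[i] goes with layer i).
function flattenWithModes($layers, modes)
{
var result = $layers[0], i;

for (i = 1; i < $layers.length; i++)
{
// $([a, b]) preserves array order: the running result is the bottom
// layer and the next source canvas is drawn on top of it.
result = merge($([result, $layers[i]]), modes[i]);

// merge() reads each layer's CSS opacity, so park the intermediate
// canvas in the DOM, hidden and fully opaque, before the next pass.
$(result).css('opacity', 1).hide().appendTo('#drawing');
}

return result;
}

$('#mergeimg')[0].src = flattenWithModes($('#drawing canvas'), ['normal', 'multiply', 'screen', 'difference']).toDataURL();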