Sunday, January 26, 2014

Resizing Images in the Browser using an HTML Canvas

Nothing pleases me more than not having to upload an image from someone's computer and use my server to resize it. Being able to use the browser to acquire the file, resize it to something reasonable, and only upload it after the user previews it and decides it's correct saves both bandwidth and server resources. However, as great as it sounds, there's always a catch. On the server side, you can control the resources and processes used to resize the image. On the client side, you have less control. Different resources and browsers limit your ability to use any method you can conjure up to perform the resize operation. Users tend to be fickle about their browser locking up or, worse, crashing when they attempt to open an unusually large 15Mb JPEG image of their dog, Sammy. Even if you can avoid that problem, you may find the resized image looks undesirable due to how the browser resampled it.
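
As a quick illustration of that first step, the file can be acquired and previewed locally with a FileReader. The element IDs here are illustrative, not from an actual page:

   // Read the selected file into an <img> without touching the server
   $( '#file-input' ).on( 'change', function() {
      var file = this.files[0],
          reader = new FileReader();

      reader.onload = function( e ) {
         // The image can now be previewed and resized entirely client-side
         $( '#base' ).attr( 'src', e.target.result );
      };

      reader.readAsDataURL( file );
   });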

The easiest way to resize an image is to draw it into a smaller canvas object and then pull a data URL for the resized image back out of the canvas:


   var baseImg = $( '#base' )[0],
       outImg = $ctr.find( '.resize1' )[0],
       canvas, context;

   // width/height report the rendered (CSS-scaled) size, not the
   // intrinsic size, so the canvas is created at the display size
   canvas = document.createElement( 'canvas' );
   canvas.width = baseImg.width;
   canvas.height = baseImg.height;
   context = canvas.getContext( '2d' );

   // Draw the full image into the smaller canvas; the browser's
   // built-in resampling performs the resize
   context.drawImage( baseImg, 0, 0, baseImg.width, baseImg.height );

   // Pull the resized image back out as a data URL
   outImg.src = canvas.toDataURL();




Below is a test image with a resolution of 1024 X 768 pixels and a file size of about 2.4 Mb. I placed it in an image tag with a max-width of 400px, which forces the browser to scale the image. When the width/height properties are queried in the resize operation above, they report the size of the rendered image on the screen, not the original size. Because of this, the resulting canvas image will be sized at the desired 400 X 300 pixels, and the new file size will be 78 Kb:
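
To make the distinction concrete, here is roughly what the test image reports once it has been scaled by the stylesheet (the values follow from the 1024 X 768 source and the 400px constraint described above):

   // The 1024 X 768 test image constrained to max-width: 400px
   var img = $( '#base' )[0];

   img.width;          // 400  (rendered, CSS-scaled width)
   img.height;         // 300
   img.naturalWidth;   // 1024 (intrinsic width of the source file)
   img.naturalHeight;  // 768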



While fast (Chrome, Firefox, and IE all perform this operation in about a hundredth of a second), the resulting image does have some artifacts from the resampling. You can see in the box that the penguin's wing now has some jaggies. I found that some images looked better than others, but clearly, if I was going to rely on this for client-side resizing, I would need a solution that consistently produced better-looking images.

A little searching dug up a few alternatives, like Hermite or Lanczos resampling algorithms. The image produced by the Hermite implementation looked a lot better than the attempt above; however, its performance was considerably slower in Firefox (Chrome and IE were slower too, but not nearly as bad):

   var baseImg = $( '#base' )[0],
       outImg = $ctr.find( '.resize2' )[0],
       canvas, context,
       copy, cctx;

   // Draw the image at its full intrinsic size so the resampler
   // has all of the original pixel data to work with
   canvas = document.createElement( 'canvas' );
   canvas.width = baseImg.naturalWidth;
   canvas.height = baseImg.naturalHeight;
   context = canvas.getContext( '2d' );
   context.drawImage( baseImg, 0, 0, baseImg.naturalWidth, baseImg.naturalHeight );

   // Resample in place, from the intrinsic size down to the rendered size
   resample_hermite( canvas, baseImg.naturalWidth, baseImg.naturalHeight, baseImg.width, baseImg.height );

   // Copy the resized pixels into a canvas of the final dimensions
   // so toDataURL() doesn't include the unused area
   copy = document.createElement( 'canvas' );
   copy.width = baseImg.width;
   copy.height = baseImg.height;
   cctx = copy.getContext( '2d' );

   cctx.putImageData( context.getImageData( 0, 0, baseImg.width, baseImg.height ), 0, 0 );

   outImg.src = copy.toDataURL();



On the image above, these were the runtimes for the Hermite algorithm:

Firefox: 0.491s
Chrome: 0.128s
IE: 0.204s

And on a larger, 2048 X 3072 (2.2Mb) image, the Firefox time was significantly worse:

Firefox: 2.113s
Chrome: 0.411s
IE: 0.778s

The issue with these algorithms is that they access the pixel data directly and loop over all of it to perform the resampling. On large images, that's a lot of looping, and it's really a test of how fast the browser can run JavaScript. Given the disparity in run times, my preference is to avoid accessing the pixel data and find a solution that takes advantage of the browser's built-in algorithms.
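
For a sense of what that pixel-level looping looks like, here is a simplified sketch of a Hermite-style resampler. It is illustrative, not the exact implementation I used, but the nested loops over every output and source pixel are the same basic shape:

   // Simplified Hermite-style downsampler (illustrative only).
   // Matches the call signature used above: (canvas, srcW, srcH, dstW, dstH)
   function resample_hermite( canvas, srcW, srcH, dstW, dstH ) {
      var ctx = canvas.getContext( '2d' ),
          src = ctx.getImageData( 0, 0, srcW, srcH ).data,
          out = ctx.createImageData( dstW, dstH ),
          dst = out.data,
          ratioW = srcW / dstW,
          ratioH = srcH / dstH,
          i, j, x, y;

      // Visit every output pixel...
      for ( j = 0; j < dstH; j++ ) {
         for ( i = 0; i < dstW; i++ ) {
            var r = 0, g = 0, b = 0, a = 0, total = 0,
                yEnd = Math.min( srcH, Math.ceil( ( j + 1 ) * ratioH ) ),
                xEnd = Math.min( srcW, Math.ceil( ( i + 1 ) * ratioW ) );

            // ...and every source pixel that maps onto it
            for ( y = Math.floor( j * ratioH ); y < yEnd; y++ ) {
               for ( x = Math.floor( i * ratioW ); x < xEnd; x++ ) {
                  // Distance from the output pixel's center, normalized to [0,1)
                  var dx = ( ( i + 0.5 ) * ratioW - ( x + 0.5 ) ) / ratioW,
                      dy = ( ( j + 0.5 ) * ratioH - ( y + 0.5 ) ) / ratioH,
                      d = Math.sqrt( dx * dx + dy * dy );

                  if ( d >= 1 ) continue;

                  // Hermite kernel: smooth falloff from 1 at the center to 0 at the edge
                  var w = 2 * d * d * d - 3 * d * d + 1,
                      p = ( y * srcW + x ) * 4;

                  r += src[p] * w; g += src[p + 1] * w;
                  b += src[p + 2] * w; a += src[p + 3] * w;
                  total += w;
               }
            }

            var q = ( j * dstW + i ) * 4;
            dst[q]     = r / total;
            dst[q + 1] = g / total;
            dst[q + 2] = b / total;
            dst[q + 3] = a / total;
         }
      }

      // Write the result back to the top-left of the canvas, where the
      // calling code above copies it out with getImageData()
      ctx.putImageData( out, 0, 0 );
   }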

Based on some additional research, it appears the drawImage() function uses a fast linear-interpolation (bilinear) resampling method. That works well as long as you're not resizing down by more than half the original size. Out of curiosity, I used the basic drawImage() approach from above, but looped it to resize in half steps down to the desired size. The resulting image looked a lot better than the first attempt and only took a little longer to run than resizing in one step:



On the image above, the runtimes were:

Firefox: 0.218s
Chrome: 0.122s
IE: 0.035s

Running that new version on the larger 2048 X 3072 (2.2Mb) image resulted in more acceptable execution times:

Firefox: 0.456s
Chrome: 0.51s
IE: 0.217s

Here is the function that produced that image:


function resize_image( src, dst, type, quality ) {
   var tmp = new Image(),
       canvas, context, cW, cH;

   type = type || 'image/jpeg';
   quality = quality || 0.92;

   // Start from the intrinsic size; the rendered width/height
   // of the source image are the target size
   cW = src.naturalWidth;
   cH = src.naturalHeight;

   // Assign the handler before setting src so a cached image
   // can't fire onload before we're listening
   tmp.onload = function() {

      canvas = document.createElement( 'canvas' );

      // Halve the dimensions, but never go below the target size
      cW /= 2;
      cH /= 2;

      if ( cW < src.width ) cW = src.width;
      if ( cH < src.height ) cH = src.height;

      canvas.width = cW;
      canvas.height = cH;
      context = canvas.getContext( '2d' );
      context.drawImage( tmp, 0, 0, cW, cH );

      dst.src = canvas.toDataURL( type, quality );

      // Done once the target size is reached
      if ( cW <= src.width || cH <= src.height )
         return;

      // Feed the half-sized result back in; onload fires again
      // and performs the next halving step
      tmp.src = dst.src;
   };

   tmp.src = src.src;
}



The function halves the width/height and draws a new image until it reaches the final size. It loops by waiting for the onload event to fire on the updated image and exits as soon as the new width/height are less than or equal to the final size. To resize an image, simply pass a source and a destination image. One or both of these can be in the DOM or just Image objects:


   resize_image( $( '#original' )[0], $( '#smaller' )[0] );
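
Once the user approves the preview, the resized data URL still needs to get to the server. A small helper can convert it into a Blob for a normal form upload; this helper is my own illustration, not part of the resize code above:

   // Hypothetical helper: decode a base64 data URL into a Blob
   function data_url_to_blob( dataURL ) {
      var parts = dataURL.split( ',' ),
          mime = parts[0].match( /:(.*?);/ )[1],
          binary = atob( parts[1] ),
          bytes = new Uint8Array( binary.length ),
          i;

      for ( i = 0; i < binary.length; i++ )
         bytes[i] = binary.charCodeAt( i );

      return new Blob( [ bytes ], { type: mime } );
   }

   // e.g. append to a FormData and POST it once the user approves
   var fd = new FormData();
   fd.append( 'image', data_url_to_blob( $( '#smaller' )[0].src ), 'resized.jpg' );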



While the era of server-side image processing may not have completely come to an end, there is a lot of potential to push this work down to the client. To make it work well, you may have to experiment, and you should verify that the user's environment is capable of handling the task. Depending on what you need to support, you could take a hybrid approach: resize on the client when possible, but fall back to the server when conditions don't look optimal for successfully completing the processing.
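
For that fallback decision, even a simple capability and size check can gate the client-side path. The 10Mb cap and the helper names below are assumptions for illustration, not measured limits:

   // Hypothetical gate for choosing the client-side path; the size cap
   // is an arbitrary guess at what weaker devices can handle
   function can_resize_client_side( file ) {
      var canvas = document.createElement( 'canvas' );

      // Require canvas 2d support and toDataURL()
      if ( !canvas.getContext || !canvas.getContext( '2d' ) || !canvas.toDataURL )
         return false;

      return file.size <= 10 * 1024 * 1024;
   }

   if ( can_resize_client_side( file ) ) {
      resize_image( $( '#original' )[0], $( '#smaller' )[0] );
   } else {
      upload_original( file );   // hypothetical server-side fallback
   }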