Tag Archives: canvas

Diffing Two Canvases

03 Sep 2016

Below is a script I wrote that takes two canvases and finds the differences between them – a canvas diff. It returns the bounding area – upper-left and lower-right coordinates – with which you can do as you like.

The challenge here is iterating through the pixel data. The data itself is an array of each pixel’s RGBA values in sequence. For example, if we are looking at four pixels then the pixel array representing them would have a length of 16 (4 array elements per pixel x 4 pixels = 16). This imaginary array could look like:

• 0,0,122,5,100,12,123,6,16,100,43,123,55,55,100,50

With a little formatting to make it easier to read, we can see the groupings:

• 0,0,122,5,    100,12,123,6,    16,100,43,123,    55,55,100,50

Looking at the first group of 4 numbers we can see that the pixel they represent has these RGBA values:

• R: 0
• G: 0
• B: 122
• A: 5
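To make the grouping concrete, here’s a small helper of my own (not part of the original script) that pulls the RGBA values of the nth pixel out of a flat data array:

```javascript
// Return the RGBA values of pixel n from a flat RGBA data array.
// "data" can be a plain array or the Uint8ClampedArray from getImageData.
function getPixel(data, n){
    var i = n * 4; // each pixel occupies 4 consecutive elements
    return { r: data[i], g: data[i+1], b: data[i+2], a: data[i+3] };
}

var data = [0,0,122,5, 100,12,123,6, 16,100,43,123, 55,55,100,50];
getPixel(data, 0); // { r: 0, g: 0, b: 122, a: 5 }
getPixel(data, 2); // { r: 16, g: 100, b: 43, a: 123 }
```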

Imagine a 1024×768 image represented by a single data array of RGBA values. Such an image would have a data array length of 3,145,728 (1024 x 768 x 4). To manipulate the pixel data you’d need a way of looping through the array, keeping in mind that every group of 4 array elements is itself a single pixel, and that each group’s position in the array corresponds to a specific pixel somewhere within your image/canvas.
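That looping can be sketched as follows – again my own illustration, assuming the same flat RGBA layout – converting each quadruplet’s array index into the pixel’s x/y coordinate with the modulus operator:

```javascript
// Walk a flat RGBA array 4 elements at a time, recovering each
// pixel's x/y position within an image that is "w" pixels wide.
function mapPixels(data, w){
    var coords = [];
    for (var i = 0; i < data.length; i += 4){
        var pixelIndex = i / 4;              // which pixel this quadruplet represents
        var x = pixelIndex % w;              // column: remainder within a row
        var y = Math.floor(pixelIndex / w);  // row: how many full rows precede it
        coords.push({x: x, y: y});
    }
    return coords;
}

// a 2x2 image: 4 pixels, 16 array elements
var coords = mapPixels(new Array(16).fill(0), 2);
// coords -> [{x:0,y:0}, {x:1,y:0}, {x:0,y:1}, {x:1,y:1}]
```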

Image Comparison Example

In the example shown here I’m comparing the pixel data between two images, keeping track of the differences, and returning a bounding area that encompasses them.

This example is in an iframe – view source to see the script that loads the images into Image objects, draws them into two canvas elements, extracts the image data from the canvases, compares it, and finally draws the red bounding boxes to illustrate where the images diverge from each other.

Diff Function – Comparing Two Canvases

The function I wrote below compares all the pixels and notes the coordinates of the ones that don’t match. I then use the array sort method to sort the resulting “diff” array by either the X or Y of each differing coordinate so that I can find the extremes of each axis.

While looking for the differences I also keep track of the X and Y coordinates represented by each RGBA quadruplet. Note the use of the modulus operator, as that is what makes the coordinate-tracking work.

To use this function, create two image data objects using the canvas getImageData() method, then pass them to the canvasDiff function, where the first data object is the original image and the second is the one that has changed. Refer to the iframed example above – view source to see how the diff function below was used to produce the bounding differential boxes.

When using images as the source data they need to be of identical size, and ideally in PNG format. PNG is optimal because JPG’s lossy compression makes it hard to do a legitimate diff. That’s assuming you are using images at all – you could just as easily diff programmatically-generated canvas art. The point is that canvasDiff needs two image data objects where the images have the same physical dimensions.

function canvasDiff(imageData1,imageData2){
    // www.rickluna.com - please leave the attribution!
    var w = imageData1.width;
    var h = imageData1.height;
    var diffs = [];
    var start = {x:null,y:null};
    var end   = {x:null,y:null};
    var pA1_r,pA1_g,pA1_b,pA1_a,
        pA2_r,pA2_g,pA2_b,pA2_a;
    var y = 0;
    var x = 0;
    var len = imageData1.data.length;
    for (var i=0;i<len;i+=4){
        // every group of 4 elements is one pixel - derive its x/y
        // coordinate from the pixel index via the modulus operator
        x = (i/4) % w;
        y = Math.floor((i/4) / w);
        pA1_r = imageData1.data[i];
        pA1_g = imageData1.data[i+1];
        pA1_b = imageData1.data[i+2];
        pA1_a = imageData1.data[i+3];
        pA2_r = imageData2.data[i];
        pA2_g = imageData2.data[i+1];
        pA2_b = imageData2.data[i+2];
        pA2_a = imageData2.data[i+3];
        if (pA1_r !== pA2_r || pA1_g !== pA2_g ||
            pA1_b !== pA2_b || pA1_a !== pA2_a){
            diffs.push({x:x,y:y});
        }
    }
    // sort by x to find the left/right extremes
    diffs.sort(function(a,b){
        if (a.x < b.x){
            return -1;
        } else if (a.x > b.x){
            return 1;
        } else {
            return 0;
        }
    });
    start.x = diffs.length ? diffs[0].x : 0;
    end.x = diffs.length ? diffs[diffs.length-1].x : w;
    // sort by y to find the top/bottom extremes
    diffs.sort(function(a,b){
        if (a.y < b.y){
            return -1;
        } else if (a.y > b.y){
            return 1;
        } else {
            return 0;
        }
    });
    start.y = diffs.length ? diffs[0].y : 0;
    end.y = diffs.length ? diffs[diffs.length-1].y : h;

    // DONE
    // "start" and "end" have the bounding coordinates
    console.log(start,end);
    return [start,end];
}

Creating Image Maps From Canvas-Derived Coordinates

10 Jun 2015

Here’s a cool thing – I came across a situation where I was stacking identically-sized transparent PNGs on top of each other but needed to be able to select their visible areas. The layered nature of z-ordering the images prevented us from getting beyond the top-most layer (actually not image tags but z-ordered divs with background-images, but for all intents and purposes it’s the same issue).

The first thought was to use HTML5 Canvas and track the coordinates of the user’s click/touch event to figure out what they were clicking on. A nice start, but our browser requirements included old IE, which prevented us from using Canvas at runtime, so we were stuck with the PNG stack. In addition to dealing with old IE, the requirements of the project meant we couldn’t simply merge all the PNGs and create an image map because:

• Our client would be uploading the UI-related PNGs into the system – both on and off states
• We couldn’t rely on the client to draw image maps within the admin console each time an image was uploaded, especially since some of the art bumped into or “went under” other art within the PNG stack.

I thought we could still use Canvas if we applied it to the process of uploading the images – knowing that we essentially had small bitmaps “floating” within a larger transparent PNG, I realized that we might be able to get some useful data from a Canvas, maybe enough to determine what should be clickable and what shouldn’t.

A quick Google search revealed this post on Stack Overflow. It describes a “Marching Squares” edge detection algorithm that, applied to my needs, would give me an array of coordinates that could easily be converted into an image map.

Detecting Multiple Edges

Looking at the edge detection algorithm showed that it detects the edge of the first shape it encounters, ignoring anything else in the Canvas even though other shapes may exist. The fix was to remove the shape it finds and then run the algorithm again, repeating the find/remove process until nothing else is found. The boundary of each shape is saved along the way so that when the entire process is done we can use that data to create the image maps.
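The find/remove loop can be sketched generically – this is my own illustration, with findShape, removeShape, and isEmpty as stand-ins for the real marching squares and canvas calls:

```javascript
// Repeatedly find a shape's boundary, save it, and remove the shape,
// until the canvas reports itself empty. The three functions are
// injected so the control flow can be shown (and tested) in isolation.
function collectShapes(findShape, removeShape, isEmpty){
    var allPaths = [];
    while (!isEmpty()){
        var points = findShape();   // e.g. marching squares edge detection
        allPaths.push(points);
        removeShape(points);        // e.g. destination-out fill over the path
    }
    return allPaths;
}

// exercising the loop with stubs over a plain list of "shapes":
var shapes = [[[0,0],[1,0]], [[5,5],[6,5]]];
var paths = collectShapes(
    function(){ return shapes[0]; },
    function(){ shapes.shift(); },
    function(){ return shapes.length === 0; }
);
// paths now holds both boundaries, in the order they were found
```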

As for removing the first shape encountered, I wrote the following – it traces a path on the canvas exactly where the found shape is, using that shape’s boundary, and then fills it. Since I set the Canvas globalCompositeOperation to destination-out, the fill removes the shape from the canvas, allowing me to find the next shape’s boundary since the previous shape no longer exists.

function _removeBitmap(){
    var i, len, point;
    // draw outline path along the found shape's boundary
    _ctx.globalCompositeOperation = 'destination-out';
    _ctx.beginPath();
    _ctx.moveTo(_points[0][0],_points[0][1]);
    for (i=1,len=_points.length;i<len;i++){
        point = _points[i];
        _ctx.lineTo(point[0],point[1]);
    }
    _ctx.closePath();
    _ctx.fill();
    _ctx.globalCompositeOperation = 'source-over';
}

Detecting if the Canvas is Empty

Next, before I call the edge detection function again I need to know if the Canvas is empty – so here’s another function that checks whether the alpha of any pixel is above a certain threshold. Why a threshold? Even though the bitmaps removed with the destination-out composite operation are no longer visible, there may still be pixels here and there with very low, effectively invisible alpha values. A threshold settles that particular issue; you may need to tune it if you use this code.

ns.isEmpty = function(){
    var data = _ctx.getImageData(0,0,_canvas.width,_canvas.height).data;
    var emptyThreshold = 20; // maximum allowed alpha before a pixel is considered "empty"
    var i, l;
    var retVal = true;
    // maxAlpha: what is the highest alpha? most times it's not zero.
    // log this to the console to see what the max alpha is,
    // then set "emptyThreshold" accordingly.
    var maxAlpha = 0;
    for (i=0,l=data.length; i < l; i += 4){
        // for debugging purposes
        maxAlpha = data[i + 3] > maxAlpha ? data[i + 3] : maxAlpha;
        if (data[i + 3] > emptyThreshold){
            retVal = false;
        }
    }
    return retVal;
}

Yes, I know about loading a blank canvas, getting its base64 via toDataURL, and comparing against that to see if a Canvas is empty – but note again that the composite operation leaves some pixels behind, which means comparing against a truly blank canvas wouldn’t work.
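To illustrate the difference, here’s a self-contained sketch of mine that operates on a plain array of alpha values rather than a real canvas – the near-invisible leftover pixels fail an exact-blank check but pass the thresholded one:

```javascript
// "alphas" stands in for every 4th element of an RGBA data array.
function isExactlyBlank(alphas){
    return alphas.every(function(a){ return a === 0; });
}
function isEffectivelyEmpty(alphas, threshold){
    return alphas.every(function(a){ return a <= threshold; });
}

// residue left behind by the destination-out fill: tiny non-zero alphas
var residue = [0, 3, 0, 1, 2, 0];
isExactlyBlank(residue);          // false - an exact blank-canvas comparison fails
isEffectivelyEmpty(residue, 20);  // true  - the threshold ignores the residue
```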

Working Example

Here’s a working example – inspect the iFramed page and note the absence of any image maps, then click “Start”. What you will see is:

• Each image is loaded into the Canvas
• The edge detection script finds the first “floating” shape, I remove it and then run the edge detection again, repeating until I determine that the image is now empty of any “solid” shapes
• The next image is loaded and the process repeats
• Once all edges have been found the image map is added to the DOM

NOTE: click Start and let the sample run through all of the edge detection for everything (slower on mobile as all of the images load synchronously). It will be done when all the pieces display. From there you can click the other buttons.

You will note that the Canvas is still present in this proof-of-concept after all of the edge detection is completed and the image maps are added – the final implementation stacks all images via absolute positioning without any Canvas elements, and the top-most image of the stack has the image map applied to it. In this way we automate the creation of the image maps within the browser as each image is uploaded into the system via the purpose-built CMS, with no need to worry about the client using some sort of drawing tool to create the image maps themselves.

Finished Code

Here’s the result, separated from the edge detection code, which I broke out into its own file that you can download from here. View the source of the example to see how the parts were assembled.

;(function(ns,$){

var _canvas, _ctx, _cw, _points, _imgData, _data;
var _allPaths = [];
var _img = new Image();
_img.crossOrigin = 'anonymous';

function _drawImgToCanvas(){
    _ctx.drawImage(_img,_canvas.width/2-_img.width/2,_canvas.height/2-_img.height/2);
    _findArea();
}

function _findArea(){
    _imgData = _ctx.getImageData(0,0,_canvas.width,_canvas.height);
    _data = _imgData.data;
    _points = marchingSquares.contour();   // call the marching squares algorithm
    _allPaths[_allPaths.length] = _points; // store the boundary in the _allPaths collection
    _removeBitmap();                       // remove the shape so we can move on to the next one
}

function _removeBitmap(){
    var i, len, point;
    // draw outline path along the found shape's boundary
    _ctx.globalCompositeOperation = 'destination-out';
    _ctx.beginPath();
    _ctx.moveTo(_points[0][0],_points[0][1]);
    for (i=1,len=_points.length;i<len;i++){
        point = _points[i];
        _ctx.lineTo(point[0],point[1]);
    }
    _ctx.closePath();
    _ctx.fill();
    _ctx.globalCompositeOperation = 'source-over';

    if (ns.isEmpty()){
        _createMap();
    } else {
        _findArea();
    }
}

ns.isEmpty = function(){
    var data = _ctx.getImageData(0,0,_canvas.width,_canvas.height).data;
    var emptyThreshold = 20; // maximum allowed alpha before a pixel is considered "empty"
    var i, l;
    var retVal = true;
    var maxAlpha = 0; // what is the highest alpha? most times it's not zero. log this to the console, then set "emptyThreshold" accordingly.
    for (i=0,l=data.length; i < l; i += 4){
        maxAlpha = data[i + 3] > maxAlpha ? data[i + 3] : maxAlpha; // for debugging purposes
        if (data[i + 3] > emptyThreshold){
            retVal = false;
        }
    }
    return retVal;
}

function _createMap(){
    var mapTPL = '<map name="clickMap" id="clickMap">%areas%</map>';
    var areasTpl = '<area shape="poly" coords="%coords%" href="#" alt="" />';
    var areas = '';
    var map = '';
    var coordsList = '';
    for (var h=0,len=_allPaths.length;h<len;h++){
        coordsList = '';
        for (var i=0,len2=_allPaths[h].length;i<len2;i++){
            coordsList += _allPaths[h][i].join(',');
            coordsList += i != len2-1 ? ',' : '';
        }
        areas += areasTpl.replace('%coords%',coordsList);
    }
    map = mapTPL.replace('%areas%',areas);
    $('#mapWrapper').html(map);
}

ns.returnData = function(){
    return _data;
}

ns.returnCW = function(){
    return _cw;
}

ns.init = function(canvasID,imgSrc){
    _canvas = document.getElementById(canvasID);
    _ctx = _canvas.getContext('2d');
    _cw = _canvas.width;
    _img.onload = _drawImgToCanvas;
    _img.src = imgSrc;
}

})(this.mapFromCanvas = this.mapFromCanvas || {}, jQuery);

HTML5 Canvas and Particle Systems

13 Mar 2015

I’ve been doing a lot of canvas stuff lately which reminded me of some things that I’ve always wanted to try. In particular I’ve always meant to find time to try writing a particle system using HTML5 Canvas.

It’s pretty easy to do – the idea is that we render a shape or a number of shapes to a canvas, using window.requestAnimationFrame to recalculate the positions of the particles at each iteration. Before each render we wipe the canvas clean, then draw the shapes at their new locations. That’s all there is to it.
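That loop boils down to very little code. Here’s a minimal sketch of mine – the browser pieces (the 2D context, requestAnimationFrame) are assumed to exist, and the position update is split out into its own function:

```javascript
// Advance every particle by its velocity - the pure part of the loop.
function step(particles){
    for (var i = 0; i < particles.length; i++){
        particles[i].x += particles[i].vx;
        particles[i].y += particles[i].vy;
    }
}

// One frame: wipe the canvas, recalculate positions, redraw, repeat.
// "ctx" and "canvas" are assumed to come from getContext('2d') in a browser.
function frame(ctx, canvas, particles){
    ctx.clearRect(0, 0, canvas.width, canvas.height); // wipe the canvas clean
    step(particles);                                  // recalculate all positions
    for (var i = 0; i < particles.length; i++){
        ctx.fillRect(particles[i].x, particles[i].y, 2, 2);
    }
    requestAnimationFrame(function(){ frame(ctx, canvas, particles); });
}
```

Kicking it off is a single `frame(ctx, canvas, particles)` call once the canvas is set up.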

I wrote two experiments at creating a particle system – in one, each particle takes care of itself, separate from anything else, essentially moving itself around the canvas. The second has a system that updates all of the new particle coordinates before rendering them all to the canvas. There are subtle differences in the effects that can be achieved, but in almost all cases the second “style” of updating the canvas is preferred.

Before we go any further I’m assuming that you are using a modern web browser – I’ve not bothered with supporting lesser browsers.

Experiment One

The methodology in this one is that each particle takes care of itself. That is, it calculates its own position within the canvas and writes itself to the canvas regardless of whatever else might be happening. The caveat here is that we always need to remove the previously rendered shape before plotting the new one, or else we end up drawing lines on the screen.

You might think this would be easy, since we always know the coordinates of the previous shape and can simply erase it. Shapes, however, are anti-aliased. The outer-most anti-aliased edge of the shape (a “ghost”) is left behind when we erase only the exact portion of the canvas where the shape was previously plotted. You can enlarge the bounding area of the erase to be sure to remove all of it, but then you see “empty borders” around shapes as they cross each other.
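Enlarging the bounding area is just padding the shape’s bounding box before erasing – a tiny helper of my own to show the idea:

```javascript
// Pad a shape's bounding box so an erase also catches the
// anti-aliased "ghost" pixels just outside the shape's edge.
function padRect(x, y, w, h, pad){
    return { x: x - pad, y: y - pad, w: w + pad * 2, h: h + pad * 2 };
}

// erase slightly more than the previously plotted 10x10 shape:
var r = padRect(50, 50, 10, 10, 2);
// in a real canvas: ctx.clearRect(r.x, r.y, r.w, r.h);
```

The trade-off is exactly the “empty borders” described above: the padded erase also removes pixels belonging to any shape it overlaps.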

The point is that even though this looks cool, it’s impractical for most purposes.

The first example doesn’t bother to erase the previously plotted shape. As a result we have a series of lines – but lines with opacity and compositing, so we end up with something cool.

The example on the right does attempt to erase the previously plotted shape but as I mentioned above you can still see the “ghost” of that previous shape which leaves a sort of trail behind it as it moves about the screen.

Experiment Two

This one approaches Canvas animation the way it’s usually done: calculate the new positions of all shapes, wipe the entire canvas clean, write all the shapes to the canvas, and repeat.

I won’t go through an exhaustive description of how to do things – the workflow described above and the source code below should be all you need to try it yourself.

;(function(ns){

var _parts = [];
var _cvs = null;
var _ctx = null;
var _bgColor = null;

ns.setupParts = function(cvsID,bgColor){
    _cvs = document.getElementById(cvsID);
    _ctx = _cvs.getContext('2d');
    _bgColor = bgColor;
}

ns.addPart = function(o){
    _parts.push(o);
}

ns.updateCanvasWithParts = function(){
    _ctx.clearRect(0,0,_cvs.width,_cvs.height);
    if (_bgColor){
        _ctx.fillStyle = _bgColor;
        _ctx.fillRect(0,0,_cvs.width,_cvs.height);
    }
    for (var i=0;i<_parts.length;i++){
        _ctx.fillStyle = _parts[i].color;
        _ctx.globalCompositeOperation = _parts[i].comp;
        _ctx.globalAlpha = _parts[i].alpha;
        _ctx.fillRect(_parts[i].x,_parts[i].y,_parts[i].width,_parts[i].height);
        _parts[i].update();
    }
    requestAnimationFrame(ns.updateCanvasWithParts);
}

ns.particle = function(config){
    var that = this;
    this.vx = config.omni ? (Math.random() < 0.5 ? config.vx * -1 : config.vx) : config.vx;
    this.vy = config.omni ? (Math.random() < 0.5 ? config.vy * -1 : config.vy) : config.vy;
    this.x = config.x;
    this.y = config.y;
    this.originX = config.x;
    this.originY = config.y;
    this.starfield = config.starfield;
    this.color = config.color;
    this.bgColor = config.bgColor;
    this.alpha = config.alpha;
    this.comp = config.comp;
    this.size = config.size;
    this.height = config.uniform ? config.size : Math.round(Math.random() * config.size);
    this.width = config.uniform ? config.size : Math.round(Math.random() * config.size);
    this.update = function(){
        if (!that.starfield){
            // bounce off the edges: x against the width, y against the height
            if (that.x > _cvs.width - that.width){
                that.vx = that.vx * -1;
            } else if (that.x < 0){
                that.vx = Math.abs(that.vx);
            }
            if (that.y > _cvs.height - that.height){
                that.vy = that.vy * -1;
            } else if (that.y < 0){
                that.vy = Math.abs(that.vy);
            }
        } else {
            // starfield mode: once fully off-canvas, reset to the origin point
            if (that.x > _cvs.width + that.size || that.y > _cvs.height + that.size ||
                that.x < -that.size || that.y < -that.size){
                that.x = that.originX;
                that.y = that.originY;
            }
        }
        that.x = that.x + that.vx;
        that.y = that.y + that.vy;
    }
}

})(this.particles2 = this.particles2 || {});

particles2.setupParts('cvs1','#000');
for (var i=0;i<500;i++){
var color = Math.floor(Math.random()*16777215).toString(16);
var p = new particles2.particle({
color: '#' + color,
comp: null,
alpha:1,
x:(Math.random() * 400),
y:(Math.random() * 400),
vx:(Math.random() * 2),
vy:(Math.random() * 2),
size:(Math.random() * 6),
uniform: true,
omni:false,
starfield:false
});