# Category Archives: canvas

## Diffing Two Canvases

03 Sep 2016

Below is the script I wrote that will take two canvases and find the differences between them – a canvas diff. What it returns is the bounding area – upper left coordinates and lower-right coordinates – with which you can do as you like.

The challenge here is iterating through the pixel data. The data itself is an array of each pixel’s RGBA values in sequence. For example, if we are looking at four pixels then the pixel array representing them would have a length of 16 (4 array elements per RGBA pixel x 4 pixels = 16). This imaginary array could look like:

• 0,0,122,5,100,12,123,6,16,100,43,123,55,55,100,50

With a little formatting to make the groupings easier to see:

• 0,0,122,5,    100,12,123,6,    16,100,43,123,    55,55,100,50

Looking at the first group of 4 numbers we can see that the pixel they represent has these RGBA values:

• R: 0
• G: 0
• B: 122
• A: 5

Imagine a 1024×768 image represented by a single data array of RGBA values. Such an image would have a data array length of 3,145,728 (1024 x 768 x 4). In order to manipulate the pixel data you’d have to discover a way of looping through the array keeping in mind that every group of 4 array elements is itself a single pixel. You would also need to realize that any given group of array elements represents a specific pixel somewhere within your image/canvas.
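As a quick sketch of that arithmetic, here is a small helper (the name is mine, not part of the diff script below) that maps a position in the data array back to an (x, y) pixel coordinate:

```
function pixelIndexToCoord(i, width){
    var pixel = Math.floor(i / 4);    // every 4 array elements = 1 pixel
    return {
        x: pixel % width,             // modulus gives the column
        y: Math.floor(pixel / width)  // integer division gives the row
    };
}

// in a 1024-wide image, data index 4096 begins the first pixel of the second row
console.log(pixelIndexToCoord(4096, 1024)); // x: 0, y: 1
```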

### Image Comparison Example

In the example shown here I’m comparing the pixel data between two images, keeping track of the differences, and returning a bounding area that encompasses them.

This example is in an iframe – view source to see the script that loads the images into image objects, then loads those images into two canvas elements, extracts the image data from the canvases, compares them, and then finally draws the red bounding boxes to illustrate where the images diverge from each other.

### Diff Function – Comparing Two Canvases

The function I wrote below compares all the pixels and notes the coordinates of the ones that don’t match. I then use the sort array method to sort the resulting “diff” array by either the X or Y of each diff’d coordinate so that I can find the extremes of each one.

While I’m looking for the differences I am also keeping track of the X and Y coordinate representing each RGBA quadruplet. Note the use of the modulus operator as that is what makes the coordinate-tracking work.

To use this function all you have to do is create two image data objects using the canvas getImageData() method then pass them to the canvasDiff function where the first data object is the original image and the second is the one that has changed. Refer to the iframed example above – view source to see how the diff function seen below was used within the example to produce the bounding differential boxes.

When using images as the source data they need to be identical in size, and ideally in PNG format. PNG is optimal because JPG is a lossy compression algorithm that will make it hard to do a legitimate diff. That is, provided you are using images at all – you could just as easily diff programmatically-generated canvas art. The point is that canvasDiff needs two image data objects where the images are the same physical dimensions.

```
function canvasDiff(imageData1,imageData2){
var w = imageData1.width;
var h = imageData1.height;
var diffs = [];
var start = {x:null,y:null};
var end   = {x:null,y:null};
var pA1_r,pA1_g,pA1_b,pA1_a,
pA2_r,pA2_g,pA2_b,pA2_a;
var y = 0;
var x = 0;
var len = imageData1.data.length;
for (var i=0;i<len;i+=4){
// every group of 4 array elements is a single pixel's RGBA
pA1_r = imageData1.data[i];
pA1_g = imageData1.data[i+1];
pA1_b = imageData1.data[i+2];
pA1_a = imageData1.data[i+3];
pA2_r = imageData2.data[i];
pA2_g = imageData2.data[i+1];
pA2_b = imageData2.data[i+2];
pA2_a = imageData2.data[i+3];
// the modulus operator makes the coordinate-tracking work
x = (i/4) % w;
y = Math.floor((i/4) / w);
if (pA1_r !== pA2_r || pA1_g !== pA2_g || pA1_b !== pA2_b || pA1_a !== pA2_a){
diffs.push({x:x,y:y});
}
}
if (!diffs.length){
return null; // the two canvases are identical
}
diffs.sort(function(a,b){
if (a.x < b.x){
return -1;
} else if (a.x > b.x){
return 1;
} else {
return 0;
}
});
start.x = diffs[0].x || 0;
end.x = diffs[diffs.length-1].x || w;
diffs.sort(function(a,b){
if (a.y < b.y){
return -1;
} else if (a.y > b.y){
return 1;
} else {
return 0;
}
});
start.y = diffs[0].y || 0;
end.y = diffs[diffs.length-1].y || h;

// DONE
// "start" and "end" have the bounding coordinates
console.log(start,end);
return [start,end];
}
```
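As a rough sketch of how this might be wired up in a page (canvas ids and the red stroke here are illustrative – view the iframed example’s source for the actual wiring):

```
var cvs1 = document.getElementById('original');
var cvs2 = document.getElementById('changed');
var data1 = cvs1.getContext('2d').getImageData(0, 0, cvs1.width, cvs1.height);
var data2 = cvs2.getContext('2d').getImageData(0, 0, cvs2.width, cvs2.height);

var bounds = canvasDiff(data1, data2); // [start, end] or null

if (bounds){
    // outline the differing region on the changed canvas
    var ctx = cvs2.getContext('2d');
    ctx.strokeStyle = '#f00';
    ctx.strokeRect(bounds[0].x, bounds[0].y,
        bounds[1].x - bounds[0].x, bounds[1].y - bounds[0].y);
}
```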

## Loading an Image from an iOS Device’s Library

23 Aug 2016

While working through one of my personal projects I’ve figured out how to load an image from an iOS device’s Library. There are two steps – first use the Camera plugin to provide a UI for the user to select a file. The next is to take the file path the Camera plugin provides and use the File plugin to load it.

### Requirements

This was tested via PhoneGap Build using the following setup:

• CLI 6.3.0
• iOS deploy target: 9.0.0
• Camera plugin version 2.2.0
• File plugin version 3.0.0

If you’re using PhoneGap Build this is what should be added to your config.xml (assuming the standard Cordova plugin IDs for the versions listed above):

```
<plugin name="cordova-plugin-camera" spec="2.2.0" />
<plugin name="cordova-plugin-file" spec="3.0.0" />
```

### Using the Camera Plugin to Access the Library

It might seem counterintuitive to use the Camera plugin since it seems logical to look at the File API first… but unlike the File API, where you would need to write your own file browser and UI, the Camera plugin uses native functionality and so makes it trivial to pick an image from a user’s Library. The Camera plugin presents a native UI to the end-user so that they can navigate their Library’s folder structure, locate the image they want to use, and in the end provide a path to that image on the device.

This code will do what is described above:

```
navigator.camera.getPicture(
function (fileURI){
console.log(fileURI);
// remove the comment below to pass the path on to the File plugin code
//convertPath(fileURI);
},
function (err){
// fail
console.log('fail',err);
},
{
allowEdit: true,
correctOrientation: true,
destinationType: Camera.DestinationType.FILE_URI,
sourceType: Camera.PictureSourceType.PHOTOLIBRARY,
targetHeight: window.innerHeight,
targetWidth: window.innerWidth
}
);
```

You can literally copy and paste the above. Here are the things to note about the configuration object:

• allowEdit – this is a flag that tells the native Library picker UI to allow scaling/positioning of the resource that the user selects.
• correctOrientation – as the name implies, use the image in the correct orientation relative to how the device is being held
• destinationType – this is the part that tells the plugin to return the path to the image
• sourceType – tells the plugin to display UI to allow the user to select the image from the library
• targetHeight – the desired height of the image – iOS creates a temporary image and passes that path back to you based on any edits and the Height and Width settings. Here I just assume that you would want an image that is the size of the viewport.
• targetWidth – see above

That’s it. Dead simple. Now we need to load up the file using the path that the Camera plugin returns which requires the use of the File plugin.

### Using the File plugin to Load an Image

This part is trickier and the source of much frustration among developers – during my search for documentation there was no single source that explained how this should work. I was left to piece the parts together from various sources, as the “official” documentation didn’t directly explain how to do it. Anyway, I’ll do the explaining here within the code comments.

In short, these are the steps that result in a Base64 serialization of the image from which you can do whatever you like:

1. Convert the image path to a file Entry Object
2. Pass the FileEntry Object to a function that converts it to a File Object
3. Pass the File Object to a FileReader to read the file
4. Handle the response containing the image data

Here is all of the code:

```
/**
 * This takes a file:// URI and creates a file entry object. The operation is asynch,
 * so the resulting fileEntry object is passed to the success callback.
 * @type {Function}
 * @name convertPath
 * @param {String} fileURI - the file:// path to the resource
 * @return {} Returns nothing
 */
function convertPath(fileURI){
window.resolveLocalFileSystemURL(
fileURI,
function(fileEntry){
getFileSuccess(fileEntry);
}
);
}

/**
 * This starts the read process via the file entry object. This is asynch, so the file is passed to the success callback
 * @type {Function}
 * @name getFileSuccess
 * @param {Object} fileEntry - the file entry object
 * @return {} Returns nothing
 */
function getFileSuccess(fileEntry){
fileEntry.file(
function(file){ // success
console.log('got file...',file);
readFile(file);
},
function(err){ // failure
console.log('Failed to get file.',err);
}
);
}

/**
 * This creates a file reader using the file object that is passed to it.
 * Note how similar this is to programmatically creating an image and loading data into it.
 * @type {Function}
 * @name readFile
 * @param {Object} file - file object
 * @return {} Returns nothing
 */
function readFile(file){
var reader = new FileReader();
reader.onloadend = function(fileObject){
console.log('we have the file:',fileObject);
console.log('the image data is in fileObject.target._result');
};
reader.readAsDataURL(file); // serializes the file to a Base64 data URL
}
```

You can use the fileObject.target._result to populate the background of a div, for example:

`$('#myDiv').css('background-image','url(' + fileObject.target._result + ')');`

Or insert it into a canvas:

```
var image = new Image();
var canvas = document.getElementById('canvas');
var context = canvas.getContext('2d'); //retrieve context

image.onload = function(){
context.drawImage(this, 0, 0, canvas.width, canvas.height);
}
image.src = fileObject.target._result; // load the image data
```

It’s worth noting that of course you’ll need the appropriate styling for your DIVs if using the resulting image data as a background image. Also, if loading the data into a canvas your aspect ratio may be off – you’ll need to figure out how to scale the data to fit the canvas without distortion.
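If it helps, the usual “aspect fit” arithmetic looks something like this (a sketch – the helper name is mine, not from the original code):

```
function aspectFit(srcW, srcH, dstW, dstH){
    var scale = Math.min(dstW / srcW, dstH / srcH); // scale to the tighter axis
    var w = srcW * scale;
    var h = srcH * scale;
    return {
        width: w,
        height: h,
        x: (dstW - w) / 2, // center horizontally
        y: (dstH - h) / 2  // center vertically
    };
}

// e.g. a 1024x768 image in a 500x500 canvas becomes 500x375, letterboxed:
var fit = aspectFit(1024, 768, 500, 500);
// context.drawImage(image, fit.x, fit.y, fit.width, fit.height);
```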

## CanvasPainter

27 Jul 2016

One of the ways a developer shows the passion he or she has for their work is how they spend their free time. I do my best to spend mine learning more about what I like to do. As a fellow with a degree in Graphic Design, that means I have spent a lot of time over the years slaving over self-assigned projects in an effort to learn the things I never learned in college.

A few years ago I wrote a canvas signature widget for a PhoneGap’d Sencha Touch-based mobile app. That tiny canvas signature pad was the genesis for the thing that I today call CanvasPainter. While it grew slowly over the ensuing years via bursts of productive energy it mostly languished in the dusty corners of hard-drives and USB memory sticks. That has changed and a lot of time is now being applied to CanvasPainter.

The collage at the top of this page illustrates portions of the CanvasPainter UI – a fully functional web app. A group of beta testers is currently running it through its paces.

You will be able to experience CanvasPainter online. If you are inclined, you will eventually be able to buy it from the App Store. The hybrid app version will do things that the web app version does not. What those features will be I’ll share at a later date as I get closer to launch.

Anyway, this page serves as my way of sharing my excitement at the approaching V1.0 milestone (huzzah!! applause!!!).

[edit 7/27/2016]

Below is a sample image that I made with CanvasPainter… as a result of having painted this I realized I needed a color picker and that the swatches needed to be modified a little, which led to a new thing in the settings panel…

[edit 10/26/2016]

CanvasPainter is now available for purchase at the App Store. Took less than 48hrs for it to be accepted – apparently they didn’t find any issue with it. Visit the app’s website at www.canvaspainter.io to learn more about what it does.

I will also do a breakdown here at my website running through how the app was made as time allows.

## HTML5 Canvas and Particle Systems

13 Mar 2015

I’ve been doing a lot of canvas stuff lately which reminded me of some things that I’ve always wanted to try. In particular I’ve always meant to find time to try writing a particle system using HTML5 Canvas.

It’s pretty easy to do – the idea is that we render a shape or a number of shapes to a canvas, using window.requestAnimationFrame to recalculate the positions of the particles at each iteration. Before each render we wipe the canvas clean, then draw the shapes at their new locations. That’s all there is to it.
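The recalculation step can be isolated from the drawing, which makes the cycle easy to see (a sketch; the names are mine):

```
// advance every particle by its velocity - this is the "recalculate" step
function stepParticles(particles){
    for (var i = 0; i < particles.length; i++){
        particles[i].x += particles[i].vx;
        particles[i].y += particles[i].vy;
    }
    return particles;
}

// each animation frame would then be:
//   ctx.clearRect(0, 0, cvs.width, cvs.height);  // wipe
//   stepParticles(shapes);                       // recalculate
//   ...fillRect each shape...                    // redraw
//   requestAnimationFrame(tick);                 // schedule the next frame
console.log(stepParticles([{x: 10, y: 10, vx: 2, vy: -1}]));
```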

I wrote two experiments in creating a particle system – in the first, the particles take care of themselves, separate from anything else – they essentially move themselves around the canvas. The second has a system that updates all of the new particle coordinates before rendering them all to the canvas. There are some subtle differences in the effects each can achieve, but in almost all cases the second “style” of updating the canvas is preferred.

Before we go any further I’m assuming that you are using a modern web browser – I’ve not bothered with supporting lesser browsers.

### Experiment One

The methodology in this one is that each particle takes care of itself. That is, it calculates its own position within the canvas and writes itself to the canvas regardless of whatever else might be happening. The caveat here is that we always need to remove the previously rendered shape before plotting the new one else we end up drawing lines on the screen.

You might think that this would be easy to do as we always know the coordinate of the previous shape and can simply erase it. Shapes, however, are anti-aliased. The outer-most anti-aliased edge of the shape (a “ghost”) is always left behind when we attempt to erase only the portion of the canvas where the previously plotted shape was. You can enlarge the bounding area of the shape to be sure to remove all of it but then you see “empty borders” around shapes as they cross each other.
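The enlarged-bounding-area workaround mentioned above amounts to padding the erased rectangle by a pixel or two (a sketch, with a hypothetical helper name and an illustrative pad value):

```
// compute a rectangle slightly larger than the shape so the
// anti-aliased edge is erased along with it
function paddedEraseRect(x, y, w, h, pad){
    return {
        x: x - pad,
        y: y - pad,
        width: w + pad * 2,
        height: h + pad * 2
    };
}

// then, before plotting the shape at its new position:
// var r = paddedEraseRect(prevX, prevY, size, size, 2);
// ctx.clearRect(r.x, r.y, r.width, r.height);
```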

The point is that even though this looks cool it’s impractical for most purposes.

The first example doesn’t bother to erase the previously plotted shape. As a result we have a series of lines – but lines with opacity and compositing applied, so we end up with something cool.

The example on the right does attempt to erase the previously plotted shape but as I mentioned above you can still see the “ghost” of that previous shape which leaves a sort of trail behind it as it moves about the screen.

### Experiment Two

This one approaches Canvas animation the way it’s usually done: calculate the new positions of all shapes, wipe the entire canvas clean, write all the shapes to the canvas, and repeat.

I won’t go through an exhaustive description of how to do things – the workflow described above and the source code below should be all you need to give it a try yourself.

```
;(function(ns){

var _parts = [];
var _cvs = null;
var _ctx = null;
var _bgColor = null;

ns.setupParts = function(cvsID,bgColor){
_cvs = document.getElementById(cvsID);
_ctx = _cvs.getContext('2d');
_bgColor = bgColor;
}

ns.addPart = function(o){ // add a particle to the private collection
_parts.push(o);
}

ns.updateCanvasWithParts = function(){
_ctx.clearRect(0,0,_cvs.width,_cvs.height);
if (_bgColor){
_ctx.fillStyle = _bgColor;
_ctx.fillRect(0,0,_cvs.width,_cvs.height);
}
for (var i=0;i<_parts.length;i++){
_ctx.fillStyle = _parts[i].color;
_ctx.globalCompositeOperation = _parts[i].comp;
_ctx.globalAlpha = _parts[i].alpha;
_ctx.fillRect(_parts[i].x, _parts[i].y, _parts[i].width, _parts[i].height);
_parts[i].update();
}
requestAnimationFrame(ns.updateCanvasWithParts);
}

ns.particle = function(config){
var that = this;
this.vx = config.omni ? (Math.random() < 0.5 ? config.vx * -1: config.vx) : config.vx;
this.vy = config.omni ? (Math.random() < 0.5 ? config.vy * -1: config.vy) : config.vy;
this.x = config.x;
this.y = config.y;
this.originX = config.x;
this.originY = config.y;
this.starfield = config.starfield;
this.color = config.color;
this.bgColor = config.bgColor;
this.alpha = config.alpha;
this.comp = config.comp;
this.size = config.size;
this.height = config.uniform ? config.size : Math.round(Math.random() * config.size);
this.width = config.uniform ? config.size : Math.round(Math.random() * config.size);
this.update = function(){
if (!that.starfield){
if (that.x > _cvs.width - that.width){
that.vx = that.vx * -1;
} else if (that.x < 0){
that.vx = Math.abs(that.vx);
}
if (that.y > _cvs.height - that.height){
that.vy = that.vy * -1;
} else if (that.y < 0){
that.vy = Math.abs(that.vy);
}
} else {
if (that.x > _cvs.width + that.size || that.y > _cvs.height + that.size ||
that.x < -that.size || that.y < -that.size){
that.x = that.originX;
that.y = that.originY;
}
}
that.x = that.x + that.vx;
that.y = that.y + that.vy;
}
}

})(this.particles2 = this.particles2 || {});

particles2.setupParts('cvs1','#000');
for (var i=0;i<500;i++){
var color = Math.floor(Math.random()*16777215).toString(16);
var p = new particles2.particle({
color: '#' + color,
comp: null,
alpha:1,
x:(Math.random() * 400),
y:(Math.random() * 400),
vx:(Math.random() * 2),
vy:(Math.random() * 2),
size:(Math.random() * 6),
uniform: true,
omni:false,
starfield:false
});
particles2.addPart(p);
}
particles2.updateCanvasWithParts();
```