I have been trying to render an image in the browser that is built like this:
A bunch of rectangles are each filled with a radial gradient (ideally Gaussian, but it can be approximated with a few color stops)
Each rectangle is rotated and translated before being deposited on a drawing area
The image is flattened by summing the intensities of all the rectangles (and cropping to the drawing area's dimensions)
The intensity is rescaled so that the highest intensity maps to 255 and the lowest to 0 (ideally I can apply some sort of gamma correction too)
Finally, an image is drawn where the color of each pixel is taken from a palette of 256 colors.
The reason I cannot do this easily with a canvas object is that I need to work in floating point or I'll lose precision. I do not know in advance what the maximum and minimum intensities will be, so I cannot merely draw transparent rectangles and hope for the best.
Is there a way to do this in WebGL? If so, how would I go about it?
You can use the regular canvas to perform this task:
1) Check the min/max of your rects, so you can build a mapping function float -> [0-255] out of that range.
2) Draw the rects in 'lighter' mode, which adds the component values.
3) You might get saturation where several rects overlap: if so, double the mapping range and go to 2).
If you don't have saturation, just adjust the mapping to use the full [0-255] range of the canvas, and you're done.
Since this algorithm makes use of getImageData, it might not reach 60 fps on all browsers/devices. But more than 10 fps on desktop Chrome seems perfectly possible.
Hopefully the code below will clarify my description:
// boilerplate
var cv = document.getElementById('cv');
var ctx = cv.getContext('2d');
// rectangle collection
var rectCount = 30;
var rects = buildRandRects(rectCount);
iterateToMax();
// --------------------------------------------
function iterateToMax() {
var limit = 10; // loop protection
// initialize min/max mapping based on rects min/max
updateMapping(rects);
//
while (true) {
// draw the scene using current mapping
drawScene();
// get the max int value from the canvas
var max = getMax();
if (max == 255) {
// saturation? double the min-max interval
globalMax = globalMin + 2 * (globalMax - globalMin);
} else {
// no saturation? just adjust the min-max interval
globalMax = globalMin + (max / 255) * (globalMax - globalMin);
drawScene();
return;
}
limit--;
if (limit <= 0) return;
}
}
// --------------------------------------------
// --------------------------------------------
// Oriented rectangle Class.
function Rect(x, y, w, h, rotation, min, max) {
this.min = min;
this.max = max;
this.draw = function () {
ctx.save();
ctx.fillStyle = createRadialGradient(min, max);
ctx.translate(x, y);
ctx.rotate(rotation);
ctx.scale(w, h);
ctx.fillRect(-1, -1, 2, 2);
ctx.restore();
};
var that = this;
function createRadialGradient(min, max) {
var gd = ctx.createRadialGradient(0, 0, 0, 0, 0, 1);
var start = map(that.min);
var end = map(that.max);
gd.addColorStop(0, 'rgb(' + start + ',' + start + ',' + start + ')');
gd.addColorStop(1, 'rgb(' + end + ',' + end + ',' + end + ')');
return gd;
}
}
// Mapping : float value -> 0-255 value
var globalMin = 0;
var globalMax = 0;
function map(value) {
return 0 | (255 * (value - globalMin) / (globalMax - globalMin));
}
// create initial mapping
function updateMapping(rects) {
globalMin = rects[0].min;
globalMax = rects[0].max;
for (var i = 1; i < rects.length; i++) {
var thisRect = rects[i];
if (thisRect.min < globalMin) globalMin = thisRect.min;
if (thisRect.max > globalMax) globalMax = thisRect.max;
}
}
// Random rect collection
function buildRandRects(rectCount) {
var rects = [];
for (var i = 0; i < rectCount; i++) {
var thisMin = Math.random() * 1000;
var newRect = new Rect(Math.random() * 400, Math.random() * 400, 10 + Math.random() * 50, 10 + Math.random() * 50, Math.random() * 2 * Math.PI, thisMin, thisMin + Math.random() * 1000);
rects.push(newRect);
}
return rects;
}
// draw all rects in 'lighter' mode (=sum values)
function drawScene() {
ctx.save();
ctx.globalCompositeOperation = 'source-over';
ctx.clearRect(0, 0, cv.width, cv.height);
ctx.globalCompositeOperation = 'lighter';
for (var i = 0; i < rectCount; i++) {
var thisRect = rects[i];
thisRect.draw();
}
ctx.restore();
}
// get maximum value for r for this canvas
// ( == max r, g, b value for a gray-only drawing. )
function getMax() {
var data = ctx.getImageData(0, 0, cv.width, cv.height).data;
var max = 0;
for (var i = 0; i < data.length; i += 4) {
if (data[i] > max) max = data[i];
if (max == 255) return 255;
}
return max;
}
<canvas id='cv' width="400" height="400"></canvas>
I'm new to p5.js and I want to create a noise effect in an image with it. I created a working sketch with Java in Processing, but when I port it to p5.js something goes wrong.
The image loads into the HTML page, but the pixel-location stuff doesn't work.
Can anyone help me?
This is my sketch:
function setup()
{
createCanvas(400,300);
img = loadImage("data/monja.jpg");
//surface.setResizable(true);
//surface.setSize(img.width, img.height);
background(0);
}
function draw()
{
loadPixels();
img.loadPixels();
for (let x = 0; x < img.width; x++)
{
for (let y = 0; y < img.height; y++)
{
let loc = x+y*width;
let c = brightness(img.pixels[loc]);
let r = red(img.pixels[loc]);
let g = green(img.pixels[loc]);
let b = blue(img.pixels[loc]);
if (c < 70){
img.pixels[loc]= color(random(255));
}
else {
img.pixels[loc] = color(r, g, b);
}
}
}
updatePixels();
//image(img, 0, 0);
}
To modify the color of certain pixels in an image, here are some things to keep in mind.
When we call loadPixels, the pixels array is a flat array of numbers.
How many numbers each pixel gets is determined by the pixel density.
If the pixel density is 1, then each pixel gets 4 numbers in the array, each with a value from 0 to 255.
The first number determines the amount of red in the pixel, the second green, the third blue, and the fourth is the alpha value for transparency.
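For instance, here is a small helper (my own sketch, not a p5.js function) that returns the index of the red channel for logical pixel (x, y), assuming pixelDensity() returned d:
// The backing store is (width*d) x (height*d) device pixels, with 4 array
// slots each (R, G, B, A). This picks the top-left device pixel of the
// d x d block that represents logical pixel (x, y).
function pixelIndex(x, y, d, width) {
  return 4 * ((y * d) * (width * d) + (x * d));
}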
Here is an example that changes pixels with a high red value to a random grayscale value, creating a glitch effect.
var img;
var c;
function preload(){
img = loadImage("https://i.imgur.com/rpQdRoY.jpeg");
}
function setup()
{
createCanvas(img.width, img.height);
background(0);
let d = pixelDensity();
img.loadPixels();
for (let i = 0; i < 4 * (img.width*d * img.height*d); i += 4) {
if (img.pixels[i] > 150 && img.pixels[i+1] <100&&img.pixels[i+2] < 100){
let rColor = random(255);
img.pixels[i] = rColor;
img.pixels[i + 1] = rColor;
img.pixels[i + 2] = rColor;
img.pixels[i + 3] = rColor;
}
}
img.updatePixels();
}
function draw() {
image(img,0,0);
}
<script src="https://cdn.jsdelivr.net/npm/p5@1.3.0/lib/p5.js"></script>
I'm making spritefonts and I currently have tint implemented for them on WebGL!
But on canvas 2D I tried to do it via ctx.globalCompositeOperation, and it shows the following.
As you can see, black pixels are also filled...
Here is my code...
var size = 32;
var x = 200;
var y = 200;
var spacing = 0;
for (var i = 0; i < txt.length; i++) {
var q = fonts[0].info[txt[i]];
ctx.save();
if (q) ctx.drawImage(fonts[0].src, q.x, q.y, q.w, q.h, x + (spacing || 0) + (i * size), y, size, size);
ctx.globalCompositeOperation = "source-in";
ctx.fillStyle = "green";
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.restore();
}
When trying "darken" mode instead, it fills correctly, but it also fills the background (which I don't want).
I also tried ctx.getImageData() and ctx.putImageData(), but then the letters are not shown:
var size = 32;
var x = 200;
var y = 200;
var spacing = 0;
for (var i = 0; i < txt.length; i++) {
var q = fonts[0].info[txt[i]];
if (q) {
ctx.drawImage(fonts[0].src, q.x, q.y, q.w, q.h, x + (spacing || 0) + (i * size), y, size, size);
f = ctx.getImageData(x + (spacing || 0) + (i * size), y, size, size);
for (var i = 0; i < f.data.length; i += 4) {
f.data[i + 0] = 100;
f.data[i + 1] = 100;
f.data[i + 2] = 255;
f.data[i + 3] = 255;
}
ctx.putImageData(f, x + (spacing || 0) + (i * size), y, 0, 0, size, size);
}
}
The image I'm using is from here.
Fixed by filling the background in "lighten" mode (which takes care of the black pixels), then applying "darken" mode instead of "source-in", and all done!
var size = 32;
var x = 200;
var y = 200;
var spacing = 0;
for (var i = 0; i < txt.length; i++) {
var q = fonts[0].info[txt[i]];
ctx.save();
ctx.globalCompositeOperation = "lighten";
ctx.fillStyle = ctx.canvas.style.backgroundColor;
ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
if (q) ctx.drawImage(fonts[0].src, q.x, q.y, q.w, q.h, x + (spacing || 0) + (i * size), y, size, size);
ctx.globalCompositeOperation = "darken";
ctx.fillStyle = "green";
ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.restore();
}
This is a better way I found:
Create a canvas whose dimensions match the spritefont image dimensions
Save the context state of the created canvas
Set the fillStyle of the created canvas context to the spritefont text color (tint)
Set the globalAlpha of the created canvas context to the opacity
Fill the created canvas background with the spritefont text color (tint)
Apply the "destination-atop" composite mode in the created canvas context
Reset the globalAlpha of the created canvas context to 1 (the default)
Draw the spritefont image onto the created canvas
Restore the context state of the created canvas
Then let the default canvas context (not the created one) draw characters from the spritefont image, i.e. draw the relevant part of the canvas we created (note that the spritefont image fills the whole created canvas)
Done!
var size = 32;
var x = 200;
var y = 200;
var spacing = 0;
var opacity = 0.8;
var color = "green";
for (var i = 0; i < txt.length; i++) {
var q = fonts[0].info[txt[i]];
var c = document.createElement("canvas").getContext("2d");
c.canvas.width = fonts[0].src.width;
c.canvas.height = fonts[0].src.height;
c.save();
c.fillStyle = color;
c.globalAlpha = opacity || 0.8;
c.fillRect(0, 0, c.canvas.width, c.canvas.height);
c.globalCompositeOperation = "destination-atop";
c.globalAlpha = 1;
c.drawImage(fonts[0].src, 0, 0);
c.restore();
if (q) ctx.drawImage(c.canvas, q.x, q.y, q.w, q.h, x + (i * (size + spacing)), y, size, size);
}
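A possible refinement (my own suggestion, not part of the answer above): since the tinted sheet does not change between characters, you can build it once before the loop and reuse it, instead of creating a new canvas for every character:
// Build the tinted spritefont sheet once...
var tint = document.createElement("canvas").getContext("2d");
tint.canvas.width = fonts[0].src.width;
tint.canvas.height = fonts[0].src.height;
tint.fillStyle = color;
tint.globalAlpha = opacity || 0.8;
tint.fillRect(0, 0, tint.canvas.width, tint.canvas.height);
tint.globalCompositeOperation = "destination-atop";
tint.globalAlpha = 1;
tint.drawImage(fonts[0].src, 0, 0);
// ...then reuse it for every character.
for (var i = 0; i < txt.length; i++) {
  var q = fonts[0].info[txt[i]];
  if (q) ctx.drawImage(tint.canvas, q.x, q.y, q.w, q.h, x + (i * (size + spacing)), y, size, size);
}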
I'm generating an image programmatically inside a canvas.
var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');
// here I have some code in loop setting individual pixels
// ...
//
// save image to variable
var dataURL = canvas.toDataURL();
How can I rotate created image by 90 degrees?
EDIT:
This is not a duplicate because I don't draw an image; it is never visible. I only want to generate it, rotate it, and save it to a variable.
EDIT2:
I'm trying to rotate it with this code:
ctx.translate(canvas.width / 2, canvas.height / 2)
ctx.rotate(90 * Math.PI / 180)
But it doesn't work.
EDIT3:
This is more complex example of my code:
var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');
canvas.setPixel = function (x, y, color) {
ctx.fillStyle = color;
ctx.fillRect(x, y, 1, 1);
}
for (var i in data) {
for (var j in data[i]) {
switch (data[i][j]) {
case 1:
var color = '#ffff00',
type = 'w'
break
case 3:
var rgb = (256 - parseInt(pixels[i][j]) - minus.grass).toString(16),
color = '#00' + rgb + '00',
type = 'g'
break
case 4:
var rgb = (256 - parseInt(pixels[i][j]) - minus.hills).toString(16),
color = '#' + rgb + rgb + '00',
type = 'h'
break
case 5:
var rgb = (parseInt(pixels[i][j]) + minus.mountains).toString(16),
color = '#' + rgb + rgb + rgb,
type = 'm'
break
case 6:
var rgb = (parseInt(pixels[i][j]) + minus.snow).toString(16),
color = '#' + rgb + rgb + rgb,
type = 'm'
break
}
if (i % fieldSize == 0 && j % fieldSize == 0) {
if (notSet(fields[y])) {
fields[y] = []
}
fields[y][x] = type
x++
}
canvas.setPixel(i, j, color)
}
if (i % fieldSize == 0) {
x = 0
y++
}
}
ctx.translate(canvas.width / 2, canvas.height / 2)
ctx.rotate(90 * Math.PI / 180)
var token = {
type: 'save',
map: canvas.toDataURL('image/png')
}
ws.send(JSON.stringify(token))
To rotate the image by 90 degrees I had to put
ctx.translate(0, canvas.height)
ctx.rotate(270 * Math.PI / 180)
before
for (var i in data) {
for (var j in data[i]) {
switch (data[i][j]) {
// ... drawing pixels
}
}
}
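An alternative sketch (assuming the drawing is already on `canvas`): instead of transforming before drawing, copy the finished canvas onto a second, rotated canvas and export that one.
var rotated = document.createElement('canvas');
rotated.width = canvas.height; // dimensions swap for a 90-degree turn
rotated.height = canvas.width;
var rctx = rotated.getContext('2d');
rctx.translate(rotated.width / 2, rotated.height / 2);
rctx.rotate(90 * Math.PI / 180); // 90 degrees clockwise
rctx.drawImage(canvas, -canvas.width / 2, -canvas.height / 2);
var dataURL = rotated.toDataURL('image/png');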
Consider this binary image:
A normal edge detection algorithm (like Canny) takes the binary image as input and produces the contour shown in red. I need another algorithm that takes a point "P" as a second piece of input data. "P" is the black point in the previous image. This algorithm should produce the blue contour. The blue contour represents the part of the binary image's edge that is in point "P"'s line of sight.
I searched a lot for an image processing algorithm that achieves this, but didn't find any. I also tried to come up with a new one, but I still have a lot of difficulties.
Since you've got a bitmap, you could use a bitmap algorithm.
Here's a working example (in JSFiddle or see below). (Firefox, Chrome, but not IE)
Pseudocode:
// part 1: occlusion
mark all pixels as 'outside'
for each pixel on the edge of the image
draw a line from the source pixel to the edge pixel and
for each pixel on the line starting from the source and ending with the edge
if the pixel is gray mark it as 'inside'
otherwise stop drawing this line
// part 2: edge finding
for each pixel in the image
if pixel is not marked 'inside' skip this pixel
if pixel has a neighbor that is outside mark this pixel 'edge'
// part 3: draw the edges
highlight all the edges
At first this sounds pretty terrible... But really, it's O(p) where p is the number of pixels in your image.
Full code here, works best full page:
var c = document.getElementById('c');
c.width = c.height = 500;
var x = c.getContext("2d");
//////////// Draw some "interesting" stuff ////////////
function DrawScene() {
x.beginPath();
x.rect(0, 0, c.width, c.height);
x.fillStyle = '#fff';
x.fill();
x.beginPath();
x.rect(c.width * 0.1, c.height * 0.1, c.width * 0.8, c.height * 0.8);
x.fillStyle = '#000';
x.fill();
x.beginPath();
x.rect(c.width * 0.25, c.height * 0.02 , c.width * 0.5, c.height * 0.05);
x.fillStyle = '#000';
x.fill();
x.beginPath();
x.rect(c.width * 0.3, c.height * 0.2, c.width * 0.03, c.height * 0.4);
x.fillStyle = '#fff';
x.fill();
x.beginPath();
var maxAng = 2.0;
function sc(t) { return t * 0.3 + 0.5; }
function sc2(t) { return t * 0.35 + 0.5; }
for (var i = 0; i < maxAng; i += 0.1)
x.lineTo(sc(Math.cos(i)) * c.width, sc(Math.sin(i)) * c.height);
for (var i = maxAng; i >= 0; i -= 0.1)
x.lineTo(sc2(Math.cos(i)) * c.width, sc2(Math.sin(i)) * c.height);
x.closePath();
x.fill();
x.beginPath();
x.moveTo(0.2 * c.width, 0.03 * c.height);
x.lineTo(c.width * 0.9, c.height * 0.8);
x.lineTo(c.width * 0.8, c.height * 0.8);
x.lineTo(c.width * 0.1, 0.03 * c.height);
x.closePath();
x.fillStyle = '#000';
x.fill();
}
//////////// Pick a point to start our operations: ////////////
var v_x = Math.round(c.width * 0.5);
var v_y = Math.round(c.height * 0.5);
function Update() {
if (navigator.appName == 'Microsoft Internet Explorer'
|| !!(navigator.userAgent.match(/Trident/)
|| navigator.userAgent.match(/rv 11/))
|| $.browser.msie == 1)
{
document.getElementById("d").innerHTML = "Does not work in IE.";
return;
}
DrawScene();
//////////// Make our image binary (white and gray) ////////////
var id = x.getImageData(0, 0, c.width, c.height);
for (var i = 0; i < id.width * id.height * 4; i += 4) {
id.data[i + 0] = id.data[i + 0] > 128 ? 255 : 64;
id.data[i + 1] = id.data[i + 1] > 128 ? 255 : 64;
id.data[i + 2] = id.data[i + 2] > 128 ? 255 : 64;
}
// Adapted from http://rosettacode.org/wiki/Bitmap/Bresenham's_line_algorithm#JavaScript
function line(x1, y1) {
var x0 = v_x;
var y0 = v_y;
var dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
var dy = Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
var err = (dx>dy ? dx : -dy)/2;
while (true) {
var d = (y0 * c.width + x0) * 4; // index = (row * width + column) * 4
if (id.data[d] === 255) break;
id.data[d] = 128;
id.data[d + 1] = 128;
id.data[d + 2] = 128;
if (x0 === x1 && y0 === y1) break;
var e2 = err;
if (e2 > -dx) { err -= dy; x0 += sx; }
if (e2 < dy) { err += dx; y0 += sy; }
}
}
for (var i = 0; i < c.width; i++) line(i, 0);
for (var i = 0; i < c.width; i++) line(i, c.height - 1);
for (var i = 0; i < c.height; i++) line(0, i);
for (var i = 0; i < c.height; i++) line(c.width - 1, i);
// Outline-finding algorithm
function gb(x, y) {
var v = id.data[(y * id.width + x) * 4];
return v !== 128 && v !== 0;
}
for (var y = 0; y < id.height; y++) {
var py = Math.max(y - 1, 0);
var ny = Math.min(y + 1, id.height - 1);
for (var z = 0; z < id.width; z++) {
var d = (y * id.width + z) * 4; // index = (row * width + column) * 4
if (id.data[d] !== 128) continue;
var pz = Math.max(z - 1, 0);
var nz = Math.min(z + 1, id.width - 1);
if (gb(pz, py) || gb(z, py) || gb(nz, py) ||
gb(pz, y) || gb(z, y) || gb(nz, y) ||
gb(pz, ny) || gb(z, ny) || gb(nz, ny)) {
id.data[d + 0] = 0;
id.data[d + 1] = 0;
id.data[d + 2] = 255;
}
}
}
x.putImageData(id, 0, 0);
// Draw the starting point
x.beginPath();
x.arc(v_x, v_y, c.width * 0.01, 0, 2 * Math.PI, false);
x.fillStyle = '#800';
x.fill();
}
Update();
c.addEventListener('click', function(evt) {
var x = evt.pageX - c.offsetLeft,
y = evt.pageY - c.offsetTop;
v_x = x;
v_y = y;
Update();
}, false);
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.2.3/jquery.min.js"></script>
<center><div id="d">Click on image to change point</div>
<canvas id="c"></canvas></center>
I would just estimate P's line of sight contour with ray collisions.
RESOLUTION = PI / 720;
For rad = 0 To PI * 2 Step RESOLUTION
ray = CreateRay(P, rad)
hits = Intersect(ray, contours)
If Len(hits) > 0
Add(hits[0], lineOfSightContour)
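A rough JavaScript translation of that pseudocode (a sketch only: it assumes `id` is the ImageData of the binary image, the shape is dark on a white background, and P = {x, y} lies inside the shape):
function lineOfSightByRays(id, P, resolution) {
  resolution = resolution || Math.PI / 720;
  var pts = [];
  for (var rad = 0; rad < 2 * Math.PI; rad += resolution) {
    var dx = Math.cos(rad), dy = Math.sin(rad);
    var x = P.x, y = P.y, lastX = -1, lastY = -1;
    while (x >= 0 && y >= 0 && x < id.width && y < id.height) {
      var px = Math.floor(x), py = Math.floor(y);
      if (id.data[(py * id.width + px) * 4] > 128) break; // ray left the dark shape
      lastX = px; lastY = py; // remember the last shape pixel the ray crossed
      x += dx; y += dy;
    }
    if (lastX >= 0) pts.push({ x: lastX, y: lastY }); // line-of-sight boundary point
  }
  return pts;
}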
Hidden surface determination (https://en.wikipedia.org/wiki/Hidden_surface_determination) with e.g. a Z-buffer is relatively easy. Edge detection looks a lot trickier and probably needs a bit of tuning. Why not take an existing edge detection algorithm from a library that somebody else has tuned, and then stick in some Z-buffering code to compute the blue contour from the red?
First approach
Main idea
Run an edge detection algorithm (Canny should do it just fine).
For each contour point C compute the triplet (slope, dir, dist), where:
slope is the slope of the line that passes through P and C
dir is a bit which is set if C is to the right of P (on the x axis) and cleared if it is to the left; it is used to distinguish between points having the same slope but lying on opposite sides of P
dist is the distance between P and C.
Classify the set of contour points such that a class contains the points with the same key (slope, dir) and keep the one point from each such class having the minimum dist. Let S be the set of these closest points.
Sort S in clockwise order.
Iterate once more through the sorted set and, whenever two consecutive points are too far apart, draw a segment between them; otherwise just draw the points.
Notes
You do not really need to compute the real distance between P and C, since you only use dist to determine the closest point to P in step 3. Within a class (points on the same line through P, on the same side), |C.x - P.x| is proportional to the true distance, so you can keep C.x - P.x as dist. This piece of information also tells you which of two points with the same slope is closest to P. Moreover, C.x - P.x swallows the dir parameter (in its sign bit), so you do not really need dir either.
The classification in step 3 can ideally be done by hashing (thus, in a linear number of steps), but since doubles/floats are subject to rounding, you might need to tolerate small errors by rounding the slope values.
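A rough sketch of steps 2-4 with those ideas folded in (illustrative names; `contour` is assumed to be an array of {x, y} points; the angle from P is used as a single key that combines slope and dir, rounded to tolerate floating-point noise; the true distance is used for simplicity rather than the C.x - P.x shortcut):
// For each direction around P, keep only the closest contour point,
// then return the survivors sorted by angle (i.e. in rotational order).
function closestPerDirection(contour, P) {
  var best = {};
  for (var k = 0; k < contour.length; k++) {
    var C = contour[k];
    var key = Math.atan2(C.y - P.y, C.x - P.x).toFixed(3); // rounded direction
    var dist = Math.hypot(C.x - P.x, C.y - P.y);
    if (!(key in best) || dist < best[key].dist) best[key] = { C: C, dist: dist };
  }
  return Object.keys(best).sort(function (a, b) { return a - b; })
    .map(function (key) { return best[key].C; });
}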
Second approach
Main idea
You can perform a sort of BFS starting from P, like when trying to determine the country/zone that P resides in. For each pixel, look at the pixels around it that were already visited by the BFS (its neighbors). Depending on the distribution of the neighbor pixels that are in the line of sight, decide whether the currently visited pixel is in the line of sight too. You could probably apply a sort of convolution operator to the neighbor pixels (like with any other filter). Also, you do not really need to decide right away whether a pixel is definitely in the line of sight; you could instead compute some probability of that being true. A bare-bones skeleton of this traversal is sketched after the notes below.
Notes
Because your graph is a 2D image, the BFS should be pretty fast (the number of edges is linear in the number of vertices).
This second approach eliminates the need to run an edge detection algorithm. Also, if the country/zone P resides in is considerably smaller than the image, the overall performance should be better than running an edge detection algorithm alone.
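A bare-bones skeleton of that traversal (a sketch only; the `inLineOfSight` predicate over the already-visited neighbors is precisely the open design question described above, so it is left as a placeholder):
function lineOfSightBFS(id, P, inLineOfSight) {
  var W = id.width, H = id.height;
  var visible = new Uint8Array(W * H); // 1 = judged in line of sight
  visible[P.y * W + P.x] = 1;
  var queue = [[P.x, P.y]];
  for (var head = 0; head < queue.length; head++) {
    var x = queue[head][0], y = queue[head][1];
    var dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
    for (var k = 0; k < dirs.length; k++) {
      var nx = x + dirs[k][0], ny = y + dirs[k][1];
      if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
      var i = ny * W + nx;
      if (visible[i] || !inLineOfSight(id, nx, ny, visible)) continue;
      visible[i] = 1;
      queue.push([nx, ny]);
    }
  }
  return visible;
}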
I'm looking at a paper named "Shape Based Image Retrieval Using Generic Fourier Descriptors", but I only have rudimentary knowledge of Fourier descriptors. I am attempting to implement the algorithm on page 12 of the paper, and have some results that I can't make much sense of.
If I create a small image, calculate the FD for it, and compare that FD to the one for the same image translated by a single pixel in the x and y directions, the descriptors are completely different, except for the first entry, which is exactly the same. So my first question is: should these descriptors be exactly the same (given that the descriptor is apparently scale, rotation, and translation invariant) for the two images?
Secondly, the paper mentions that the descriptors of two separate images are compared by a simple Euclidean distance; therefore, the Euclidean distance between the two descriptors mentioned above should apparently be 0.
I quickly put together some JavaScript code to test the algorithm, which is below.
Does anybody have any input, ideas, ways to move forward?
Thanks,
Paul
var iShape = [
0, 0, 0, 0, 0,
0, 0, 255, 0, 0,
0, 255, 255, 255, 0,
0, 0, 255, 0, 0,
0, 0, 0, 0, 0
];
var ImageWidth = 5, ImageHeight = 5, MaxRFreq = 5, MaxAFreq = 5;
// Calculate centroid
var cX = 0, cY = 0, pCount = 0;
for (x = 0; x < ImageWidth; x++) {
for (y = 0; y < ImageHeight; y++) {
if (iShape[y * ImageWidth + x]) {
cX += x;
cY += y;
pCount++;
}
}
}
cX = cX / pCount;
cY = cY / pCount;
console.log("cX = " + cX + ", cY = " + cY);
// Calculate the maximum radius
var maxR = 0;
for (x = 0; x < ImageWidth; x++) {
for (y = 0; y < ImageHeight; y++) {
if (iShape[y * ImageWidth + x]) {
var r = Math.sqrt(Math.pow(x - cX, 2) + Math.pow(y - cY, 2));
if (r > maxR) {
maxR = r;
}
}
}
}
// Initialise real / imaginary table
var i;
var FR = [ ];
var FI = [ ];
for (r = 0; r < (MaxRFreq); r++) {
var rRow = [ ];
FR.push(rRow);
var aRow = [ ];
FI.push(aRow);
for (a = 0; a < (MaxAFreq); a++) {
rRow.push(0.0);
aRow.push(0.0);
}
}
var rFreq, aFreq, x, y;
for (rFreq = 0; rFreq < MaxRFreq; rFreq++) {
for (aFreq = 0; aFreq < MaxAFreq; aFreq++) {
for (x = 0; x < ImageWidth; x++) {
for (y = 0; y < ImageHeight; y++) {
var radius = Math.sqrt(Math.pow(x - maxR, 2) +
Math.pow(y - maxR, 2));
var theta = Math.atan2(y - maxR, x - maxR);
if (theta < 0.0) {
theta += (2 * Math.PI);
}
var iPixel = iShape[y * ImageWidth + x];
FR[rFreq][aFreq] += iPixel * Math.cos(2 * Math.PI * rFreq *
(radius / maxR) + aFreq * theta);
FI[rFreq][aFreq] -= iPixel * Math.sin(2 * Math.PI * rFreq *
(radius / maxR) + aFreq * theta);
}
}
}
}
// Initialise fourier descriptor table
var FD = [ ];
for (i = 0; i < (MaxRFreq * MaxAFreq); i++) {
FD.push(0.0);
}
// Calculate the fourier descriptor
for (rFreq = 0; rFreq < MaxRFreq; rFreq++) {
for (aFreq = 0; aFreq < MaxAFreq; aFreq++) {
if (rFreq == 0 && aFreq == 0) {
FD[0] = Math.sqrt(Math.pow(FR[0][0], 2) + Math.pow(FR[0][0], 2) /
(Math.PI * maxR * maxR));
} else {
FD[rFreq * MaxAFreq + aFreq] = Math.sqrt(Math.pow(FR[rFreq][aFreq], 2) +
Math.pow(FI[rFreq][aFreq], 2) / FD[0]);
}
}
}
for (i = 0; i < (MaxRFreq * MaxAFreq); i++) {
console.log(FD[i]);
}
There are three separate normalization techniques applied here in order to make the final descriptor invariant to 1) translation, 2) scale and 3) rotation.
For the translation invariance part, you need to find the centroid of the shape and calculate the vector of every contour point with the centroid as the origin. This is done by subtracting the centroid's x and y coordinates from each point's coordinates, respectively. So in your code the radius and theta of each point should be computed as follows:
var radius = Math.sqrt(Math.pow(x - cX, 2) + Math.pow(y - cY, 2));
var theta = Math.atan2(y - cY, x - cX);
For the scale invariance part, you need to find the maximum magnitude (or radius, as you say) among all the vectors (already normalized for translation invariance) and divide the magnitude of each point by that maximum value. An alternative way of achieving this is to divide every Fourier coefficient by the zero-frequency coefficient (the first coefficient), since the scale information is represented there. As far as I can see in your code and in the paper, it is implemented in the second way.
Finally, rotation invariance is achieved by keeping only the magnitude of the Fourier coefficients, as you can see in step 6 of the paper's pseudo-code.
In addition to all this, keep in mind that in order to compare descriptors by Euclidean distance, the descriptor length must be the same for every shape. In an FFT, the number of final coefficients depends on the number of contour points of the shape. The solution I have found is to interpolate between contour points in order to reach a fixed number of points for every shape.
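As a final trivial sketch, the comparison step itself (assuming fd1 and fd2 are equal-length descriptor arrays, per the note above):
function descriptorDistance(fd1, fd2) {
  var sum = 0;
  for (var i = 0; i < fd1.length; i++) {
    var d = fd1[i] - fd2[i];
    sum += d * d; // squared difference per coefficient
  }
  return Math.sqrt(sum);
}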
Hope I helped,
Lazaros