Adjust flat map image for other projections - d3.js

I'm using D3.js to create a globe. I have a working SVG wire-frame version, and I'm also trying to create a more detailed textured one, as a two-mode thing.
The image I'm using from an API is square:
This doesn't work out well when projected to orthographic; the result is a lot more "squished" towards the equator than it should be:
I'm not doing anything particularly special:
const dx = 2048;
const dy = 2048;
const width = 2048;
const height = 2048;

let sourceData = mapImage.getImageData(0, 0, dx, dy).data,
    target = ctx.createImageData(width, height),
    targetData = target.data;

for (let y = 0, i = -1; y < height; ++y) {
  for (let x = 0; x < width; ++x) {
    let p = projection.invert([x, y]);
    if (p[0] > 180 || p[0] < -180 || p[1] > 90 || p[1] < -90) {
      i += 4;
      continue;
    }
    // linear (equirectangular) lookup of the source pixel for this (lon, lat)
    let q = ((90 - p[1]) / 180 * dy | 0) * dx + ((180 + p[0]) / 360 * dx | 0) << 2;
    targetData[++i] = sourceData[q];
    targetData[++i] = sourceData[++q];
    targetData[++i] = sourceData[++q];
    targetData[++i] = 255;
  }
}
ctx.clearRect(0, 0, width, height);
ctx.putImageData(target, 0, 0);
I'm wondering if there's a straightforward way to make the additional adjustment for the stretching of the map image?
(Bonus points if you can also point me to why the space around the globe is not transparent? But that's not the main question here.)
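If the square source image is actually Web Mercator, which is common for square images from map APIs but is only an assumption here, the latitude lookup has to go through the Mercator formula instead of being linear. Below is a sketch of the adjusted inner loop. It also guards against the [NaN, NaN] that projection.invert returns for pixels off the globe: those NaNs fail every comparison in the range check above, so off-globe pixels get sampled and painted opaque, which would explain the non-transparent space.

for (let y = 0, i = -1; y < height; ++y) {
  for (let x = 0; x < width; ++x) {
    const p = projection.invert([x, y]);
    // Off-globe pixels invert to [NaN, NaN]; NaN fails every comparison,
    // so test for it explicitly and leave those pixels transparent.
    if (!p || isNaN(p[0]) || isNaN(p[1]) ||
        p[0] > 180 || p[0] < -180 || p[1] > 90 || p[1] < -90) {
      i += 4;
      continue;
    }
    const phi = p[1] * Math.PI / 180;
    // Web Mercator row lookup (an assumption about the source image):
    // y = (1 - ln(tan(pi/4 + phi/2)) / pi) / 2, clamped for the poles
    const mercY = Math.min(1, Math.max(0,
        (1 - Math.log(Math.tan(Math.PI / 4 + phi / 2)) / Math.PI) / 2));
    const sy = Math.min(dy - 1, mercY * dy | 0);
    const sx = Math.min(dx - 1, (180 + p[0]) / 360 * dx | 0);
    let q = (sy * dx + sx) << 2;
    targetData[++i] = sourceData[q];
    targetData[++i] = sourceData[++q];
    targetData[++i] = sourceData[++q];
    targetData[++i] = 255;
  }
}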

Related

Providing canvas2d image tint for Spritefonts

I'm implementing sprite fonts and currently have tinting for them working in WebGL!
But on canvas2d, I tried to do it via ctx.globalCompositeOperation and it shows the following:
As you can see, black pixels are also filled...
Here is my code:
var size = 32;
var x = 200;
var y = 200;
var spacing = 0;

for (var i = 0; i < txt.length; i++) {
  var q = fonts[0].info[txt[i]];
  ctx.save();
  if (q) ctx.drawImage(fonts[0].src, q.x, q.y, q.w, q.h, x + (spacing || 0) + (i * size), y, size, size);
  ctx.globalCompositeOperation = "source-in";
  ctx.fillStyle = "green";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.restore();
}
When trying "darken" mode instead, it fills correctly, but it also fills the background (which I don't want).
I also tried ctx.getImageData() and ctx.putImageData(), but the letters are not shown:
var size = 32;
var x = 200;
var y = 200;
var spacing = 0;

for (var i = 0; i < txt.length; i++) {
  var q = fonts[0].info[txt[i]];
  if (q) {
    ctx.drawImage(fonts[0].src, q.x, q.y, q.w, q.h, x + (spacing || 0) + (i * size), y, size, size);
    f = ctx.getImageData(x + (spacing || 0) + (i * size), y, size, size);
    // (note: this inner loop reuses `i`, clobbering the outer loop's counter)
    for (var i = 0; i < f.data.length; i += 4) {
      f.data[i + 0] = 100;
      f.data[i + 1] = 100;
      f.data[i + 2] = 255;
      f.data[i + 3] = 255;
    }
    ctx.putImageData(f, x + (spacing || 0) + (i * size), y, 0, 0, size, size);
  }
}
The image i'm using is from here
Fixed by using "lighten" mode for the black pixels while filling the background, then applying "darken" mode instead of "source-in", and all done!
var size = 32;
var x = 200;
var y = 200;
var spacing = 0;

for (var i = 0; i < txt.length; i++) {
  var q = fonts[0].info[txt[i]];
  ctx.save();
  ctx.globalCompositeOperation = "lighten";
  ctx.fillStyle = ctx.canvas.style.backgroundColor;
  ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  if (q) ctx.drawImage(fonts[0].src, q.x, q.y, q.w, q.h, x + (spacing || 0) + (i * size), y, size, size);
  ctx.globalCompositeOperation = "darken";
  ctx.fillStyle = "green";
  ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.restore();
}
This is a better way I found:
1. Create a canvas whose dimensions match the spritefont image dimensions.
2. Save the context state of the created canvas.
3. Set the fillStyle of the created canvas context to the spritefont text color (the tint).
4. Set the globalAlpha of the created canvas context to the opacity.
5. Fill the created canvas background with the spritefont text color (the tint).
6. Apply the "destination-atop" composite mode in the created canvas context.
7. Reset the globalAlpha of the created canvas context to 1 (the default).
8. Draw the spritefont image onto the created canvas.
9. Restore the context state of the created canvas.
Then let the default canvas context (not the created one) draw each character from the canvas we created instead of the spritefont image (note that the spritefont image fills the whole created canvas).
Done!
var size = 32;
var x = 200;
var y = 200;
var spacing = 0;
var opacity = 0.8;
var color = "green";

for (var i = 0; i < txt.length; i++) {
  var q = fonts[0].info[txt[i]];
  var c = document.createElement("canvas").getContext("2d");
  c.canvas.width = fonts[0].src.width;
  c.canvas.height = fonts[0].src.height;
  c.save();
  c.fillStyle = color;
  c.globalAlpha = opacity || 0.8;
  c.fillRect(0, 0, c.canvas.width, c.canvas.height);
  c.globalCompositeOperation = "destination-atop";
  c.globalAlpha = 1;
  c.drawImage(fonts[0].src, 0, 0);
  c.restore();
  if (q) ctx.drawImage(c.canvas, q.x, q.y, q.w, q.h, x + (i * (size + spacing)), y, size, size);
}
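One possible refinement, my note rather than part of the original answer: the tinted copy of the atlas never changes between characters, so it could be built once before the loop and reused, avoiding a new canvas per character. A sketch using the same names as above:

// build the tinted atlas once
var c = document.createElement("canvas").getContext("2d");
c.canvas.width = fonts[0].src.width;
c.canvas.height = fonts[0].src.height;
c.fillStyle = color;
c.globalAlpha = opacity || 0.8;
c.fillRect(0, 0, c.canvas.width, c.canvas.height);
c.globalCompositeOperation = "destination-atop";
c.globalAlpha = 1;
c.drawImage(fonts[0].src, 0, 0);

// then just blit characters from it
for (var i = 0; i < txt.length; i++) {
  var q = fonts[0].info[txt[i]];
  if (q) ctx.drawImage(c.canvas, q.x, q.y, q.w, q.h, x + (i * (size + spacing)), y, size, size);
}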

Unity - Improve Mesh generation and rendering performance

Since I just recently started looking into meshes, how they work, what they do and so on, I decided to use my own calculations to create a mesh of a circle. Unfortunately, though, this is really, really slow!
So I am looking for tips on improvements to make it less slow (because that's probably the best it will get...).
Here is the code I use to generate a circle:
public static void createCircle(MeshFilter meshFilter, float innerRadius, float outerRadius, Color color, float xPosition = 0, float yPosition = 0, float startDegree = 0, float endDegree = 360, int points = 100)
{
    Mesh mesh = meshFilter.mesh;
    mesh.Clear();

    // These values will result in no circle (or a very ugly one when points < 10), so let's save the calculation and just return an empty mesh!
    if (startDegree == endDegree || points < 10 || innerRadius >= outerRadius || innerRadius < 0 || outerRadius <= 0)
    {
        return;
    }

    // The given point count is for the full circle; if it's not a full circle, we don't need all the points!
    points = (int)(Mathf.Abs(endDegree - startDegree) / 360f * points);
    // We always need an even number of points!
    if (points % 2 != 0) { points++; }

    Vector3[] vertices = new Vector3[points];
    float degreeStepSize = (endDegree - startDegree) * 2 / (points - 3);
    float halfRadStepSize = degreeStepSize * Mathf.Deg2Rad / 2f;
    float startRad = Mathf.Deg2Rad * startDegree;
    float endRad = Mathf.Deg2Rad * endDegree;

    // Let's set the vertex at the beginning and the one at the end to make a perfectly straight line
    vertices[0] = new Vector3(Mathf.Sin(startRad) * outerRadius + xPosition, Mathf.Cos(startRad) * outerRadius + yPosition, 0);
    vertices[vertices.Length - 1] = new Vector3(Mathf.Sin(endRad) * innerRadius + xPosition, Mathf.Cos(endRad) * innerRadius + yPosition, 0);

    for (int i = 1; i < vertices.Length - 1; i++)
    {
        // Pure coincidence that saved some calculations: the half step size is the same as what would need to be calculated here!
        float rad = (i - 1) * halfRadStepSize + startRad;
        if (i % 2 == 0)
        {
            vertices[i] = new Vector3(Mathf.Sin(rad) * outerRadius + xPosition, Mathf.Cos(rad) * outerRadius + yPosition, 0);
        }
        else
        {
            vertices[i] = new Vector3(Mathf.Sin(rad) * innerRadius + xPosition, Mathf.Cos(rad) * innerRadius + yPosition, 0);
        }
    }
    mesh.vertices = vertices;

    int[] tri = new int[(vertices.Length - 2) * 3];
    for (int i = 0; i < (vertices.Length - 2); i++)
    {
        int index = i * 3;
        if (i % 2 == 0)
        {
            tri[index + 0] = i + 0;
            tri[index + 1] = i + 2;
            tri[index + 2] = i + 1;
        }
        else
        {
            tri[index + 0] = i + 0;
            tri[index + 1] = i + 1;
            tri[index + 2] = i + 2;
        }
    }
    mesh.triangles = tri;

    Vector3[] normals = new Vector3[vertices.Length];
    Color[] colors = new Color[vertices.Length];
    for (int i = 0; i < vertices.Length; i++)
    {
        normals[i] = Vector3.forward;
        colors[i] = color;
    }
    mesh.normals = normals;
    mesh.colors = colors;

    meshFilter.mesh = mesh;
}
I know I "could just use the LineRenderer shipped with Unity, it is faster than anything you'll ever write", but that's not the point here.
I am trying to understand meshes and see where I can tweak my code to improve its performance.
Thanks for your help in advance!
You can almost double the speed by removing extra memory allocation. Since Vector3 is a value type, the elements are already allocated when you allocate the array, so you can write into them directly. Vector3.forward also returns a fresh Vector3 on every call, so we can cache and re-use it.
public static void createCircle(MeshFilter meshFilter, float innerRadius, float outerRadius, Color color, float xPosition = 0, float yPosition = 0, float startDegree = 0, float endDegree = 360, int points = 100)
{
    Mesh mesh = meshFilter.mesh;
    mesh.Clear();

    // These values will result in no circle (or a very ugly one when points < 10), so let's save the calculation and just return an empty mesh!
    if (startDegree == endDegree || points < 10 || innerRadius >= outerRadius || innerRadius < 0 || outerRadius <= 0)
    {
        return;
    }

    // The given point count is for the full circle; if it's not a full circle, we don't need all the points!
    points = (int)(Mathf.Abs(endDegree - startDegree) / 360f * points);
    // We always need an even number of points!
    if (points % 2 != 0) { points++; }

    Vector3[] vertices = new Vector3[points];
    float degreeStepSize = (endDegree - startDegree) * 2 / (points - 3);
    float halfRadStepSize = degreeStepSize * Mathf.Deg2Rad / 2f;
    float startRad = Mathf.Deg2Rad * startDegree;
    float endRad = Mathf.Deg2Rad * endDegree;

    // Let's set the vertex at the beginning and the one at the end to make a perfectly straight line
    vertices[0] = new Vector3(Mathf.Sin(startRad) * outerRadius + xPosition, Mathf.Cos(startRad) * outerRadius + yPosition, 0);
    vertices[vertices.Length - 1] = new Vector3(Mathf.Sin(endRad) * innerRadius + xPosition, Mathf.Cos(endRad) * innerRadius + yPosition, 0);

    for (int i = 1; i < vertices.Length - 1; i++)
    {
        // Pure coincidence that saved some calculations: the half step size is the same as what would need to be calculated here!
        float rad = (i - 1) * halfRadStepSize + startRad;
        if (i % 2 == 0)
        {
            vertices[i].x = Mathf.Sin(rad) * outerRadius + xPosition;
            vertices[i].y = Mathf.Cos(rad) * outerRadius + yPosition;
            vertices[i].z = 0;
        }
        else
        {
            vertices[i].x = Mathf.Sin(rad) * innerRadius + xPosition;
            vertices[i].y = Mathf.Cos(rad) * innerRadius + yPosition;
            vertices[i].z = 0;
        }
    }
    mesh.vertices = vertices;

    int[] tri = new int[(vertices.Length - 2) * 3];
    for (int i = 0; i < (vertices.Length - 2); i++)
    {
        int index = i * 3;
        if (i % 2 == 0)
        {
            tri[index + 0] = i + 0;
            tri[index + 1] = i + 2;
            tri[index + 2] = i + 1;
        }
        else
        {
            tri[index + 0] = i + 0;
            tri[index + 1] = i + 1;
            tri[index + 2] = i + 2;
        }
    }
    mesh.triangles = tri;

    Vector3[] normals = new Vector3[vertices.Length];
    Color[] colors = new Color[vertices.Length];
    var f = Vector3.forward;   // cache instead of calling the property every iteration
    for (int i = 0; i < vertices.Length; i++)
    {
        normals[i].x = f.x;
        normals[i].y = f.y;
        normals[i].z = f.z;
        colors[i] = color;
    }
    mesh.normals = normals;
    mesh.colors = colors;

    meshFilter.mesh = mesh;
}

Binary Image "Lines-of-Sight" Edge Detection

Consider this binary image:
A normal edge detection algorithm (like Canny) takes the binary image as input and produces the contour shown in red. I need another algorithm that takes a point "P" as a second piece of input data. "P" is the black point in the previous image. This algorithm should produce the blue contour, which represents the point P's lines-of-sight edge of the binary image.
I searched a lot for an image processing algorithm that achieves this, but didn't find any. I also tried to think up a new one, but I still have a lot of difficulties.
Since you've got a bitmap, you could use a bitmap algorithm.
Here's a working example (in JSFiddle, or see below); it works in Firefox and Chrome, but not IE.
Pseudocode:
// part 1: occlusion
mark all pixels as 'outside'
for each pixel on the edge of the image
    draw a line from the source pixel to the edge pixel and
    for each pixel on the line starting from the source and ending with the edge
        if the pixel is gray mark it as 'inside'
        otherwise stop drawing this line

// part 2: edge finding
for each pixel in the image
    if pixel is not marked 'inside' skip this pixel
    if pixel has a neighbor that is outside mark this pixel 'edge'

// part 3: draw the edges
highlight all the edges
At first this sounds pretty terrible... But really, it's O(p), where p is the number of pixels in your image: there are O(√p) border pixels, and each ray traces O(√p) pixels.
Full code here, works best full page:
var c = document.getElementById('c');
c.width = c.height = 500;
var x = c.getContext("2d");

//////////// Draw some "interesting" stuff ////////////
function DrawScene() {
  x.beginPath();
  x.rect(0, 0, c.width, c.height);
  x.fillStyle = '#fff';
  x.fill();
  x.beginPath();
  x.rect(c.width * 0.1, c.height * 0.1, c.width * 0.8, c.height * 0.8);
  x.fillStyle = '#000';
  x.fill();
  x.beginPath();
  x.rect(c.width * 0.25, c.height * 0.02, c.width * 0.5, c.height * 0.05);
  x.fillStyle = '#000';
  x.fill();
  x.beginPath();
  x.rect(c.width * 0.3, c.height * 0.2, c.width * 0.03, c.height * 0.4);
  x.fillStyle = '#fff';
  x.fill();
  x.beginPath();
  var maxAng = 2.0;
  function sc(t) { return t * 0.3 + 0.5; }
  function sc2(t) { return t * 0.35 + 0.5; }
  for (var i = 0; i < maxAng; i += 0.1)
    x.lineTo(sc(Math.cos(i)) * c.width, sc(Math.sin(i)) * c.height);
  for (var i = maxAng; i >= 0; i -= 0.1)
    x.lineTo(sc2(Math.cos(i)) * c.width, sc2(Math.sin(i)) * c.height);
  x.closePath();
  x.fill();
  x.beginPath();
  x.moveTo(0.2 * c.width, 0.03 * c.height);
  x.lineTo(c.width * 0.9, c.height * 0.8);
  x.lineTo(c.width * 0.8, c.height * 0.8);
  x.lineTo(c.width * 0.1, 0.03 * c.height);
  x.closePath();
  x.fillStyle = '#000';
  x.fill();
}

//////////// Pick a point to start our operations: ////////////
var v_x = Math.round(c.width * 0.5);
var v_y = Math.round(c.height * 0.5);

function Update() {
  if (navigator.appName == 'Microsoft Internet Explorer'
      || !!(navigator.userAgent.match(/Trident/)
      || navigator.userAgent.match(/rv 11/))
      || $.browser.msie == 1) {
    document.getElementById("d").innerHTML = "Does not work in IE.";
    return;
  }
  DrawScene();

  //////////// Make our image binary (white and gray) ////////////
  var id = x.getImageData(0, 0, c.width, c.height);
  for (var i = 0; i < id.width * id.height * 4; i += 4) {
    id.data[i + 0] = id.data[i + 0] > 128 ? 255 : 64;
    id.data[i + 1] = id.data[i + 1] > 128 ? 255 : 64;
    id.data[i + 2] = id.data[i + 2] > 128 ? 255 : 64;
  }

  // Adapted from http://rosettacode.org/wiki/Bitmap/Bresenham's_line_algorithm#JavaScript
  function line(x1, y1) {
    var x0 = v_x;
    var y0 = v_y;
    var dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    var dy = Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    var err = (dx > dy ? dx : -dy) / 2;
    while (true) {
      // (the canvas is square, so c.height works as the row stride here)
      var d = (y0 * c.height + x0) * 4;
      if (id.data[d] === 255) break;   // hit a filled pixel: stop this ray
      id.data[d] = 128;                // mark the pixel as 'inside' (visible)
      id.data[d + 1] = 128;
      id.data[d + 2] = 128;
      if (x0 === x1 && y0 === y1) break;
      var e2 = err;
      if (e2 > -dx) { err -= dy; x0 += sx; }
      if (e2 < dy) { err += dx; y0 += sy; }
    }
  }
  for (var i = 0; i < c.width; i++) line(i, 0);
  for (var i = 0; i < c.width; i++) line(i, c.height - 1);
  for (var i = 0; i < c.height; i++) line(0, i);
  for (var i = 0; i < c.height; i++) line(c.width - 1, i);

  // Outline-finding algorithm
  function gb(x, y) {
    // true if the pixel is not 'inside' and not already an edge
    var v = id.data[(y * id.height + x) * 4];
    return v !== 128 && v !== 0;
  }
  for (var y = 0; y < id.height; y++) {
    var py = Math.max(y - 1, 0);
    var ny = Math.min(y + 1, id.height - 1);
    for (var z = 0; z < id.width; z++) {
      var d = (y * id.height + z) * 4;
      if (id.data[d] !== 128) continue;
      var pz = Math.max(z - 1, 0);
      var nz = Math.min(z + 1, id.width - 1);
      if (gb(pz, py) || gb(z, py) || gb(nz, py) ||
          gb(pz, y) || gb(z, y) || gb(nz, y) ||
          gb(pz, ny) || gb(z, ny) || gb(nz, ny)) {
        id.data[d + 0] = 0;
        id.data[d + 1] = 0;
        id.data[d + 2] = 255;   // mark edge pixels blue
      }
    }
  }
  x.putImageData(id, 0, 0);

  // Draw the starting point
  x.beginPath();
  x.arc(v_x, v_y, c.width * 0.01, 0, 2 * Math.PI, false);
  x.fillStyle = '#800';
  x.fill();
}

Update();

c.addEventListener('click', function(evt) {
  var x = evt.pageX - c.offsetLeft,
      y = evt.pageY - c.offsetTop;
  v_x = x;
  v_y = y;
  Update();
}, false);
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.2.3/jquery.min.js"></script>
<center><div id="d">Click on image to change point</div>
<canvas id="c"></canvas></center>
I would just estimate P's line of sight contour with ray collisions.
RESOLUTION = PI / 720
For rad = 0 To PI * 2 Step RESOLUTION
    ray = CreateRay(P, rad)
    hits = Intersect(ray, contours)
    If Len(hits) > 0
        Add(hits[0], lineOfSightContour)
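A JavaScript sketch of the same idea, stepping each ray through the bitmap instead of intersecting analytic contours (isFilled, width, height, and P are assumed names):

var RESOLUTION = Math.PI / 720;
var lineOfSightContour = [];
for (var rad = 0; rad < 2 * Math.PI; rad += RESOLUTION) {
  var dx = Math.cos(rad), dy = Math.sin(rad);
  // march outward from P until we hit a filled pixel or leave the image
  for (var t = 1; ; t++) {
    var px = Math.round(P.x + dx * t), py = Math.round(P.y + dy * t);
    if (px < 0 || py < 0 || px >= width || py >= height) break;
    if (isFilled(px, py)) {               // first hit along this ray
      lineOfSightContour.push({ x: px, y: py });
      break;
    }
  }
}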
https://en.wikipedia.org/wiki/Hidden_surface_determination with e.g. a Z-Buffer is relatively easy. Edge detection looks a lot trickier and probably needs a bit of tuning. Why not take an existing edge detection algorithm from a library that somebody else has tuned, and then stick in some Z-buffering code to compute the blue contour from the red?
First approach
Main idea
1. Run an edge detection algorithm (Canny should do just fine).
2. For each contour point C compute the triplet (slope, dir, dist), where:
   - slope is the slope of the line that passes through P and C
   - dir is a bit which is set if C is to the right of P (on the x axis) and cleared if it is to the left; it is used to distinguish between points having the same slope but on opposite sides of P
   - dist is the distance between P and C.
3. Classify the set of contour points such that a class contains the points with the same key (slope, dir), and from each class keep only the point with the minimum dist. Let S be the set of these closest points.
4. Sort S in clockwise order.
5. Iterate once more through the sorted set and, whenever two consecutive points are too far apart, draw a segment between them; otherwise just draw the points.
Notes
You do not really need to compute the real distance between P and C, since you only use dist to determine the point closest to P in step 3. Instead you can keep C.x - P.x in dist. This piece of information also tells you which of two points with the same slope is closer to P, and C.x - P.x swallows the dir parameter (in its sign bit), so you do not really need dir either.
The classification in step 3 can ideally be done by hashing (thus, in a linear number of steps), but since doubles/floats are subject to rounding, you might need to tolerate small errors by rounding the slope values.
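A compact sketch of steps 2 through 5 (an illustration only, with assumed names): it folds slope and dir into a single atan2 angle, uses squared distance for dist, and handles the rounding note above by bucketing angles into a fixed number of bins:

// contourPoints: [{x, y}, ...] from the edge detector; P: {x, y}
function closestPointsPerDirection(P, contourPoints, bins) {
  bins = bins || 1440;                             // angular resolution, 0.25 degrees
  var nearest = new Array(bins);
  for (var k = 0; k < contourPoints.length; k++) {
    var C = contourPoints[k];
    var ang = Math.atan2(C.y - P.y, C.x - P.x);    // encodes slope + dir together
    var bin = Math.floor((ang + Math.PI) / (2 * Math.PI) * bins) % bins;
    var d2 = (C.x - P.x) * (C.x - P.x) + (C.y - P.y) * (C.y - P.y);
    if (!nearest[bin] || d2 < nearest[bin].d2) {
      nearest[bin] = { x: C.x, y: C.y, d2: d2 };
    }
  }
  // bins are already in angular order, i.e. the set comes out sorted
  return nearest.filter(Boolean);
}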
Second approach
Main idea
You can perform a sort of BFS starting from P, like when trying to determine the country/zone that P resides in. For each pixel, look at the pixels around it that were already visited by the BFS (call them its neighbors). Depending on the distribution of the neighbor pixels that are in the line of sight, decide whether the currently visited pixel is in the line of sight too. You could probably apply a sort of convolution operator to the neighbor pixels (like with any other filter). Also, you do not need to decide right away whether a pixel is definitely in the line of sight; you could instead compute some probability of that being true. A sketch of the BFS backbone follows after the notes below.
Notes
Because your graph is a 2D image, the BFS should be pretty fast (since the number of edges is linear in the number of vertices).
This second approach eliminates the need to run an edge detection algorithm. Also, if the country/zone that P resides in is considerably smaller than the image, the overall performance should be better than running an edge detection algorithm alone.
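As an illustration of the backbone only (the visibility heuristic itself is left open above), a plain BFS flood fill over the zone containing P might look like this; the marked line is where the neighbor-based line-of-sight decision would go:

// image: Uint8Array with 0 = free, 1 = filled; P: {x, y}
function bfsZone(image, width, height, P) {
  var visited = new Uint8Array(width * height);
  var queue = [P.y * width + P.x];
  visited[queue[0]] = 1;
  for (var head = 0; head < queue.length; head++) {
    var idx = queue[head];
    var x = idx % width, y = (idx / width) | 0;
    var neighbors = [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]];
    for (var k = 0; k < neighbors.length; k++) {
      var nx = neighbors[k][0], ny = neighbors[k][1];
      if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
      var nidx = ny * width + nx;
      if (visited[nidx] || image[nidx] !== 0) continue;
      visited[nidx] = 1;   // <-- the line-of-sight probability/decision goes here
      queue.push(nidx);
    }
  }
  return visited;          // 1 = reachable from P within its free zone
}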

Fourier Shape Descriptors

I'm looking at a paper named "Shape Based Image Retrieval Using Generic Fourier Descriptors", but I only have rudimentary knowledge of Fourier descriptors. I am attempting to implement the algorithm on page 12 of the paper, and I have some results which I can't really make much sense of.
If I create a small image, calculate the FD for it, and compare that FD to the FD of the same image translated by a single pixel in the x and y directions, the descriptors are completely different, except for the first entry, which is exactly the same. So the first question is: should these descriptors be exactly the same (since the descriptor is apparently scale, rotation, and translation invariant) between the two images?
Secondly, the paper mentions that descriptors of two separate images are compared by a simple Euclidean distance; therefore the Euclidean distance between the two descriptors mentioned above should apparently be 0.
I quickly put together some Javascript code to test out the algorithm, which is below.
Does anybody have any input, ideas, ways to move forward?
Thanks,
Paul
var iShape = [
  0, 0, 0, 0, 0,
  0, 0, 255, 0, 0,
  0, 255, 255, 255, 0,
  0, 0, 255, 0, 0,
  0, 0, 0, 0, 0
];
var ImageWidth = 5, ImageHeight = 5, MaxRFreq = 5, MaxAFreq = 5;

// Calculate centroid
var cX = 0, cY = 0, pCount = 0;
for (x = 0; x < ImageWidth; x++) {
  for (y = 0; y < ImageHeight; y++) {
    if (iShape[y * ImageWidth + x]) {
      cX += x;
      cY += y;
      pCount++;
    }
  }
}
cX = cX / pCount;
cY = cY / pCount;
console.log("cX = " + cX + ", cY = " + cY);

// Calculate the maximum radius
var maxR = 0;
for (x = 0; x < ImageWidth; x++) {
  for (y = 0; y < ImageHeight; y++) {
    if (iShape[y * ImageWidth + x]) {
      var r = Math.sqrt(Math.pow(x - cX, 2) + Math.pow(y - cY, 2));
      if (r > maxR) {
        maxR = r;
      }
    }
  }
}

// Initialise real / imaginary table
var i;
var FR = [];
var FI = [];
for (r = 0; r < MaxRFreq; r++) {
  var rRow = [];
  FR.push(rRow);
  var aRow = [];
  FI.push(aRow);
  for (a = 0; a < MaxAFreq; a++) {
    rRow.push(0.0);
    aRow.push(0.0);
  }
}

var rFreq, aFreq, x, y;
for (rFreq = 0; rFreq < MaxRFreq; rFreq++) {
  for (aFreq = 0; aFreq < MaxAFreq; aFreq++) {
    for (x = 0; x < ImageWidth; x++) {
      for (y = 0; y < ImageHeight; y++) {
        var radius = Math.sqrt(Math.pow(x - maxR, 2) + Math.pow(y - maxR, 2));
        var theta = Math.atan2(y - maxR, x - maxR);
        if (theta < 0.0) {
          theta += (2 * Math.PI);
        }
        var iPixel = iShape[y * ImageWidth + x];
        FR[rFreq][aFreq] += iPixel * Math.cos(2 * Math.PI * rFreq *
            (radius / maxR) + aFreq * theta);
        FI[rFreq][aFreq] -= iPixel * Math.sin(2 * Math.PI * rFreq *
            (radius / maxR) + aFreq * theta);
      }
    }
  }
}

// Initialise fourier descriptor table
var FD = [];
for (i = 0; i < (MaxRFreq * MaxAFreq); i++) {
  FD.push(0.0);
}

// Calculate the fourier descriptor
for (rFreq = 0; rFreq < MaxRFreq; rFreq++) {
  for (aFreq = 0; aFreq < MaxAFreq; aFreq++) {
    if (rFreq == 0 && aFreq == 0) {
      FD[0] = Math.sqrt(Math.pow(FR[0][0], 2) + Math.pow(FR[0][0], 2) /
          (Math.PI * maxR * maxR));
    } else {
      FD[rFreq * MaxAFreq + aFreq] = Math.sqrt(Math.pow(FR[rFreq][aFreq], 2) +
          Math.pow(FI[rFreq][aFreq], 2) / FD[0]);
    }
  }
}

for (i = 0; i < (MaxRFreq * MaxAFreq); i++) {
  console.log(FD[i]);
}
There are three separate normalization techniques applied here in order to make the final descriptor invariant to 1) translation, 2) scale, and 3) rotation.
For the translation invariance part, you need to find the centroid of the shape and compute the vector of every contour point with the centroid as the origin. This is done by subtracting the centroid's x and y coordinates from each point's coordinates, respectively. So in your code the radius and theta of each point should be computed as follows:
var radius = Math.sqrt(Math.pow(x - cX, 2) + Math.pow(y - cY, 2));
var theta = Math.atan2(y - cY, x - cX);
For the scale invariance part, you need to find the maximum magnitude (or radius, as you say) among all the vectors (already normalized for translation invariance) and divide the magnitude of each point by that maximum value. An alternative way of achieving this is to divide every Fourier coefficient by the zero-frequency coefficient (the first coefficient), since the scale information is represented there. As far as I can see in your code and in the paper, this is implemented in the second way I described.
Finally, rotation invariance is achieved by keeping only the magnitudes of the Fourier coefficients, as you can see in step 6 of the paper's pseudo-code.
In addition to all this, keep in mind that in order to apply the Euclidean distance for descriptor comparison, the length of the descriptor must be the same for every shape. In an FFT, the number of final coefficients depends on the number of contour points of the shape. The solution I have found for this is to interpolate between points in order to reach a fixed number of points for every shape.
Hope I helped,
Lazaros
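Putting the three normalizations together, the descriptor step from the question's code might be rewritten as below. This is a sketch using the question's variable names, assuming radius and theta are computed from the centroid as shown in the snippet above; it also fixes the misplaced parentheses in the original, so the magnitude is computed first and the division applied afterwards:

var FD = [];
for (var rFreq = 0; rFreq < MaxRFreq; rFreq++) {
  for (var aFreq = 0; aFreq < MaxAFreq; aFreq++) {
    // rotation invariance: keep only the magnitude of each coefficient
    var mag = Math.sqrt(Math.pow(FR[rFreq][aFreq], 2) +
                        Math.pow(FI[rFreq][aFreq], 2));
    if (rFreq === 0 && aFreq === 0) {
      FD.push(mag / (Math.PI * maxR * maxR));   // DC term normalized by area
    } else {
      // scale invariance: divide by the zero-frequency magnitude |F(0,0)|
      FD.push(mag / Math.sqrt(Math.pow(FR[0][0], 2) + Math.pow(FI[0][0], 2)));
    }
  }
}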

html5 canvas, how to export a 2x image?

I designed a web app with html5 canvas. To export an image, the code is below:
var img = canvas.toDataURL("image/png");
Is there any way to export a 2x image?
It is for hi-dpi displays like the Apple Retina display.
Yes, there are a few ways, but any time you stretch a non-vector image you will get some pixel distortion. However, if it's only two times the size, you can get away with it using nearest neighbor. The example below shows two different methods: one just stretches the image, the other uses nearest neighbor with a zoom factor of two.
Live Demo
var canvas = document.getElementById("canvas"),
    ctx = canvas.getContext("2d"),
    canvas2 = document.getElementById("canvas2"),
    ctx2 = canvas2.getContext("2d"),
    tempCtx = document.createElement('canvas').getContext('2d'),
    img = document.getElementById("testimg"),
    zoom = 2;

tempCtx.drawImage(img, 0, 0);
var imgData = tempCtx.getImageData(0, 0, img.width, img.height).data;
canvas.width = img.width * zoom;
canvas.height = img.height * zoom;

// nearest neighbor
for (var x = 0; x < img.width; ++x) {
  for (var y = 0; y < img.height; ++y) {
    var i = (y * img.width + x) * 4;
    var r = imgData[i];
    var g = imgData[i + 1];
    var b = imgData[i + 2];
    var a = imgData[i + 3];
    ctx.fillStyle = "rgba(" + r + "," + g + "," + b + "," + (a / 255) + ")";
    ctx.fillRect(x * zoom, y * zoom, zoom, zoom);
  }
}

// stretched
ctx2.drawImage(img, 0, 0, 140, 140);
#phrogz has a great example of this here as well, showing a few different ways you can accomplish image re-sizing.
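As a side note (not from the answer above): if the goal is exporting the app's own canvas at 2x, one option is to blit it onto a double-sized offscreen canvas and export that; modern browsers let you disable smoothing to get the nearest-neighbor look. A minimal sketch:

var exportCanvas = document.createElement("canvas");
exportCanvas.width = canvas.width * 2;
exportCanvas.height = canvas.height * 2;
var ectx = exportCanvas.getContext("2d");
ectx.imageSmoothingEnabled = false;   // nearest neighbor instead of bilinear
ectx.drawImage(canvas, 0, 0, exportCanvas.width, exportCanvas.height);
var img2x = exportCanvas.toDataURL("image/png");

For truly sharp 2x output, though, the app would need to redraw its scene at double scale (e.g. via ctx.scale(2, 2) on the larger canvas) rather than stretch already-rendered pixels.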
