I need an algorithm that gives me the coordinates of the cells nearest to another cell in a 2D grid, in order of distance. It's for a search algorithm that then checks those coordinates for all sorts of suitability criteria. Anyway, so far I came up with this:
function testy(cx, cy, idx) {
var radius = Math.floor(Math.sqrt(idx / Math.PI));
var segment = Math.round(idx - (radius * Math.PI));
var angle = segment / radius;
var x = Math.round(cx + radius * Math.cos(angle));
var y = Math.round(cy + radius * Math.sin(angle));
return [x, y];
}
addEventListener("load", function() {
var canv = document.createElement("canvas");
document.body.appendChild(canv);
canv.width = 800;
canv.height = 600;
var ctx = canv.getContext("2d");
var scale = 5;
var idx = 0;
var idx_end = 10000;
var func = function() {
var xy = testy(0,0,idx++);
var x = xy[0] * scale + canv.width / 2;
var y = xy[1] * scale + canv.height / 2;
ctx.rect(x, y, scale, scale);
ctx.fill();
if (idx < idx_end) setTimeout(func, 0);
}
func();
});
but as you can tell, it's kinda crap because it skips some cells. There are a few assumptions I'm making here:
That the circumference of a circle of a certain radius corresponds to the number of cells on the path of that circle. I didn't think that would be too big a problem, since the actual number of cells along such a ring should be lower than the circumference, leading to duplication (which in small amounts is OK) but not exclusion (not OK).
That the radius of the circle reached by the n-th index would be slightly more than Math.floor(Math.sqrt(idx / Math.PI)), because each increase of 1 in the radius adds 2 * Math.PI to the circumference. Again, this should lead to slight duplication but no exclusion.
Other than that I have no idea what could be wrong with it; I fail at math any more complex than this, so it's probably something to do with that.
Perhaps there is another algorithm like this already out there, though? One that doesn't skip cells? Language doesn't really matter; I'm using JS to prototype it but it can be whatever.
Instead of thinking about the full circle, think about a quadrant. Adapting that to the full circle later should be fairly easy. Use (0,0) as the center of the circle for convenience. So you want to list grid cells with x,y ≥ 0 in order of non-decreasing x² + y².
One useful data structure is a priority queue. It can be used to keep track of the next y value for every x value, and you can extract the one with minimal x² + y² easily.
q = empty priority queue, for easy access to element with minimal x²+y²
Insert (0,0) into queue
while queue is not empty:
    remove minimal element from queue and call it (x,y)
    insert (x,y+1) into queue unless y+1 is off canvas
    if y = 0:
        insert (x+1,0) into queue unless x+1 is off canvas
    do whatever you want to do with (x,y)
So for a canvas of size n this will enumerate all the n² points, but the priority queue will only contain n elements at most. The whole loop runs in O(n² log(n)). And if you abort the loop early because you found what you were looking for, it gets cheaper still, in contrast to simply sorting all the points. Another benefit is that you can use integer arithmetic exclusively, so numeric errors won't be an issue. One drawback is that JavaScript does not come with a priority queue out of the box, but I'm sure you can find an implementation you can reuse, e.g. tinyqueue.
When doing full circle, you'd generate (−x,y) unless x=0, and likewise for (x,−y) and (−x,−y). You could exploit symmetry a bit more by only having the loop over ⅛ of the circle, i.e. not inserting (x,y+1) if x=y, and then also generating (y,x) as a separate point unless x=y. Difference in performance should be marginal for many use cases.
"use strict";
function distCompare(a, b) {
const a2 = a.x*a.x + a.y*a.y;
const b2 = b.x*b.x + b.y*b.y;
return a2 < b2 ? -1 : a2 > b2 ? 1 : 0;
}
// Yields points in the range -w <= x <= w and -h <= y <= h
function* aroundOrigin(w,h) {
const q = TinyQueue([{x:0, y:0}], distCompare);
while (q.length) {
const p = q.pop();
yield p;
if (p.x) yield {x:-p.x, y:p.y};
if (p.y) yield {x:p.x, y:-p.y};
if (p.x && p.y) yield {x:-p.x, y:-p.y};
if (p.y < h) q.push({x:p.x, y:p.y+1});
if (p.y == 0 && p.x < w) q.push({x:p.x + 1, y:0});
}
}
// Yields points around (cx,cy) in range 0 <= x < w and 0 <= y < h
function* withOffset(cx, cy, w, h) {
const delegate = aroundOrigin(
Math.max(cx, w - cx - 1), Math.max(cy, h - cy - 1));
for(let p of delegate) {
p = {x: p.x + cx, y: p.y + cy};
if (p.x >= 0 && p.x < w && p.y >= 0 && p.y < h) yield p;
}
}
addEventListener("load", function() {
const canv = document.createElement("canvas");
document.body.appendChild(canv);
const cw = 800, ch = 600;
canv.width = cw;
canv.height = ch;
const ctx = canv.getContext("2d");
const scale = 5;
const w = Math.ceil(cw / scale);
const h = Math.ceil(ch / scale);
const cx = w >> 1, cy = h >> 1;
const pointgen = withOffset(cx, cy, w, h);
let cntr = 0;
var func = function() {
const {value, done} = pointgen.next();
if (done) return;
if (cntr++ % 16 === 0) {
// lighten older parts so that recent activity is more visible
ctx.fillStyle = "rgba(255,255,255,0.01)";
ctx.fillRect(0, 0, cw, ch);
ctx.fillStyle = "rgb(0,0,0)";
}
ctx.fillRect(value.x * scale, value.y*scale, scale, scale);
setTimeout(func, 0);
}
func();
});
<script type="text/javascript">module={};</script>
<script src="https://cdn.rawgit.com/mourner/tinyqueue/54dc3eb1/index.js"></script>
I am working on a project where four randomly placed robots each have unique Cartesian coordinates. I need to find a way to transform these coordinates into the coordinates of a square with side length defined by the user of the program.
For example, let's say I have four coordinates (5,13), (8,17), (13,2), and (6,24) that represent the coordinates of four robots. I need to find a square's coordinates such that the four robots are closest to these coordinates.
Thanks in advance.
As far as I understand your question, you are looking for the centroid of the four points, the point that minimizes the total squared distance to them. It is calculated as the average of the coordinates:
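For the four example points from the question, a quick sketch of the calculation (plain JavaScript, assuming simple {x, y} objects):
// Average the x and y coordinates to get the centroid.
var robots = [{x:5, y:13}, {x:8, y:17}, {x:13, y:2}, {x:6, y:24}];
var c = {x: 0, y: 0};
for (var i = 0; i < robots.length; i++) {
    c.x += robots[i].x / robots.length;
    c.y += robots[i].y / robots.length;
}
// c is now {x: 8, y: 14}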
The square's edge length is irrelevant to the position, though.
Update
If you additionally want to minimize the square corners' distance to a robot position, you can do the following:
Calculate the centroid c as described above and place the square there.
Imagine a circle centered at c whose radius is half of the square's diagonal (edge length times √2 / 2), i.e. the circle through the square's corners.
For each robot position calculate the point on the circle with shortest distance to the robot and use that as a corner of the square.
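A literal sketch of those three steps (my own illustration, not the poster's code), assuming {x, y} points and a user-given edge length size; note that the four projected points only form an exact square when the robots happen to lie in the corner directions:
// Project each robot onto the circle through the square's corners
// (radius = half the square's diagonal), centered at the centroid c.
function cornersTowardRobots(c, robots, size) {
    var radius = size * Math.SQRT2 / 2;
    return robots.map(function(r) {
        var dx = r.x - c.x, dy = r.y - c.y;
        var len = Math.sqrt(dx * dx + dy * dy) || 1; // guard against a robot exactly at c
        return {x: c.x + radius * dx / len, y: c.y + radius * dy / len};
    });
}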
It looks as if the original poster is not coming back to share his solution here, so I'll post what I was working on.
Finding the center point of the four robots and then drawing the square around this point is indeed a good way to start, but it doesn't necessarily give the optimal result. For the example given in the question, the center point is (8,14) and the total distance is 22.688 (assuming a square size of 10).
When you draw the vector from a corner of the square to the closest robot, this vector shows you in which direction the square should move to reduce the distance from that corner to its closest robot. If you calculate the sum of the direction of these four vectors (by changing the vectors to size 1 before adding them up) then moving the square in the resulting direction will reduce the total distance.
I dreaded venturing into differential equation territory here, so I devised a simple algorithm which repeatedly calculates the direction to move in, and moves the square in ever decreasing steps, until a certain precision is reached.
For the example in the question, the optimal location it finds is (10,18), and the total distance is 21.814, which is an improvement of 0.874 over the center position (assuming a square size of 10).
Press "run code snippet" to see the algorithm in action with randomly generated positions. The scattered green dots are the center points that are considered while searching the optimal location for the square.
function positionSquare(points, size) {
var center = {x: 0, y:0};
for (var i in points) {
center.x += points[i].x / points.length;
center.y += points[i].y / points.length;
}
paintSquare(canvas, square(center), 1, "#D0D0D0");
order(points);
textOutput("<P>center position: " + center.x.toFixed(3) + "," + center.y.toFixed(3) + "<BR>total distance: " + distance(center, points).toFixed(3) + "</P>");
for (var step = 1; step > 0.0001; step /= 2)
{
var point = {x: center.x, y: center.y}; // copy, so that center keeps the last accepted position
var shortest, dist = distance(center, points);
do
{
center = point;
shortest = dist;
var dir = direction();
paintDot(canvas, center.x, center.y, 1, "green");
point = {x: center.x + Math.cos(dir) * step, y: center.y + Math.sin(dir) * step};
dist = distance(point, points);
}
while (dist < shortest);
}
textOutput("<P>optimal position: " + center.x.toFixed(3) + "," + center.y.toFixed(3) + "<BR>total distance: " + distance(point, points).toFixed(3) + "</P>");
return square(center);
function order(points) {
var clone = [], best = 0;
for (var i = 0; i < 2; i++) {
clone[i] = points.slice();
for (var j in clone[i]) clone[i][j].n = j;
if (i) {
clone[i].sort(function(a, b) {return b.y - a.y});
if (clone[i][0].x > clone[i][1].x) swap(clone[i], 0, 1);
if (clone[i][2].x < clone[i][3].x) swap(clone[i], 2, 3);
} else {
clone[i].sort(function(a, b) {return a.x - b.x});
swap(clone[i], 1, 3);
if (clone[i][0].y < clone[i][3].y) swap(clone[i], 0, 3);
if (clone[i][1].y < clone[i][2].y) swap(clone[i], 1, 2);
}
}
if (distance(center, clone[0]) > distance(center, clone[1])) best = 1;
for (var i in points) points[i] = {x: clone[best][i].x, y: clone[best][i].y};
function swap(a, i, j) {
var temp = a[i]; a[i] = a[j]; a[j] = temp;
}
}
function direction() {
var d, dx = 0, dy = 0, corners = square(center);
for (var i in points) {
d = Math.atan2(points[i].y - corners[i].y, points[i].x - corners[i].x);
dx += Math.cos(d);
dy += Math.sin(d);
}
return Math.atan2(dy, dx);
}
function distance(center, points) {
var d = 0, corners = square(center);
for (var i in points) {
var dx = points[i].x - corners[i].x;
var dy = points[i].y - corners[i].y;
d += Math.sqrt(Math.pow(dx, 2) + Math.pow(dy, 2));
}
return d;
}
function square(center) {
return [{x: center.x - size / 2, y: center.y + size / 2},
{x: center.x + size / 2, y: center.y + size / 2},
{x: center.x + size / 2, y: center.y - size / 2},
{x: center.x - size / 2, y: center.y - size / 2}];
}
}
// PREPARE CANVAS
var canvas = document.getElementById("canvas");
canvas.width = 200; canvas.height = 200;
canvas = canvas.getContext("2d");
// GENERATE TEST DATA AND RUN FUNCTION
var points = [{x:5, y:13}, {x:8, y:17}, {x:13, y:2}, {x:6, y:24}];
for (var i = 0; i < 4; i++) {
points[i].x = 1 + 23 * Math.random(); points[i].y = 1 + 23 * Math.random();
}
for (var i in points) textOutput("point: " + points[i].x.toFixed(3) + "," + points[i].y.toFixed(3) + "<BR>");
var size = 10;
var square = positionSquare(points, size);
// SHOW RESULT ON CANVAS
for (var i in points) {
paintDot(canvas, points[i].x, points[i].y, 5, "red");
paintLine(canvas, points[i].x, points[i].y, square[i].x, square[i].y, 1, "blue");
}
paintSquare(canvas, square, 1, "green");
function paintDot(canvas, x, y, size, color) {
canvas.beginPath();
canvas.arc(8 * x, 200 - 8 * y, size, 0, 6.2831853);
canvas.closePath();
canvas.fillStyle = color;
canvas.fill();
}
function paintLine(canvas, x1, y1, x2, y2, width, color) {
canvas.beginPath();
canvas.moveTo(8 * x1, 200 - 8 * y1);
canvas.lineTo(8 * x2, 200 - 8 * y2);
canvas.strokeStyle = color;
canvas.stroke();
}
function paintSquare(canvas, square, width, color) {
canvas.rect(8 * square[0].x , 200 - 8 * square[0].y, 8 * size, 8 * size);
canvas.strokeStyle = color;
canvas.stroke();
}
// TEXT OUTPUT
function textOutput(t) {
var output = document.getElementById("output");
output.innerHTML += t;
}
<BODY STYLE="margin: 0; border: 0; padding: 0;">
<CANVAS ID="canvas" STYLE="width: 200px; height: 200px; float: left; background-color: #F8F8F8;"></CANVAS>
<DIV ID="output" STYLE="width: 400px; height: 200px; float: left; margin-left: 10px;"></DIV>
</BODY>
Further improvements: I haven't yet taken into account what happens when a corner and a robot are in the same spot, but the overall position isn't optimal. Since the direction from the corner to the robot is undefined, it should probably be taken out of the equation temporarily.
I have a rectangular grid of variable size but averaging 500x500 with a small number of x,y points in it (less than 5). I need to find an algorithm that returns an x,y pair that is the farthest away possible from any of the other points.
Context: App's screen (grid) and a set of x,y points (enemies). The player dies and I need an algorithm that respawns them as far away from the enemies so that they don't die immediately after respawning.
What I have so far: The algorithm I wrote works but doesn't perform that well on slower phones. I'm basically dividing up the grid into squares (much like a tic-tac-toe board) and I assign each square a number. I then check every single square against all enemies and store, for each square, the distance to its closest enemy. The square with the highest number is the one whose closest enemy is furthest away. I also tried averaging the existing points and doing something similar to this, and while the performance was acceptable, the reliability of the method was not.
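For reference, here is a rough reconstruction (not the asker's actual code) of the brute-force approach just described; it runs in O(width * height * enemies), which is what makes it slow on a 500x500 grid:
// For every cell, find the squared distance to its closest enemy,
// then return the cell where that distance is largest.
function furthestCellBruteForce(enemies, width, height) {
    var best = {x: 0, y: 0}, bestDist = -1;
    for (var x = 0; x < width; x++) {
        for (var y = 0; y < height; y++) {
            var closest = Infinity;
            for (var i = 0; i < enemies.length; i++) {
                var dx = x - enemies[i].x, dy = y - enemies[i].y;
                if (dx * dx + dy * dy < closest) closest = dx * dx + dy * dy;
            }
            if (closest > bestDist) {
                bestDist = closest;
                best = {x: x, y: y};
            }
        }
    }
    return best;
}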
This is the simplest algorithm I could think of that still gives good results. It only checks 9 possible positions: the corners, the middle of the sides, and the center point. Most of the time the player ends up in a corner, but you obviously need more positions than enemies.
The algorithm runs in 0.013ms on my i5 desktop. If you replace the Math.pow() by Math.abs(), that comes down to 0.0088ms, though obviously with less reliable results. (Oddly enough, that's slower than my other answer which uses trigonometry functions.)
Running the code snippet (repeatedly) will show the result with randomly positioned enemies in a canvas element.
function furthestFrom(enemy) {
var point = [{x:0,y:0},{x:250,y:0},{x:500,y:0},{x:0,y:250},{x:250,y:250},{x:500,y:250},{x:0,y:500},{x:250,y:500},{x:500,y:500}];
var dist2 = [500000,500000,500000,500000,500000,500000,500000,500000,500000];
var max = 0, furthest;
for (var i in point) {
for (var j in enemy) {
var d = Math.pow(point[i].x - enemy[j].x, 2) + Math.pow(point[i].y - enemy[j].y, 2);
if (d < dist2[i]) dist2[i] = d;
}
if (dist2[i] > max) {
max = dist2[i];
furthest = i;
}
}
return(point[furthest]);
}
// CREATE TEST DATA
var enemy = [];
for (var i = 0; i < 5; i++) enemy[i] = {x: Math.round(Math.random() * 500), y: Math.round(Math.random() * 500)};
// RUN FUNCTION
var result = furthestFrom(enemy);
// SHOW RESULT ON CANVAS
var canvas = document.getElementById("canvas");
canvas.width = 500; canvas.height = 500;
canvas = canvas.getContext("2d");
for (var i = 0; i < 5; i++) {
paintDot(canvas, enemy[i].x, enemy[i].y, 10, "red");
}
paintDot(canvas, result.x, result.y, 20, "blue");
function paintDot(canvas, x, y, size, color) {
canvas.beginPath();
canvas.arc(x, y, size, 0, 6.2831853);
canvas.closePath();
canvas.fillStyle = color;
canvas.fill();
}
<BODY STYLE="margin: 0; border: 0; padding: 0;">
<CANVAS ID="canvas" STYLE="width: 200px; height: 200px; background-color: #EEE;"></CANVAS>
</BODY>
This is similar to what you are already doing, but with two passes where the first pass can be fairly crude. First decrease resolution. Divide the 500x500 grid into 10x10 grids each of which is 50x50. For each of the resulting 100 subgrids -- determine which have at least one enemy and locate the subgrid that is furthest away from a subgrid which contains an enemy. At this stage there is only 100 subgrids to worry about. Once you find the subgrid which is furthest away from an enemy -- increase resolution. That subgrid has 50x50 = 2500 squares. Do your original approach with those squares. The result is 50x50 + 100 = 2600 squares to process rather than 500x500 = 250,000. (Adjust the numbers as appropriate for the case in which there isn't 500x500 but with the same basic strategy).
Here is a Python3 implementation. It uses two functions:
1) fullResSearch(a,b,n,enemies) This function takes a set of enemies, a corner location (a,b) and an int, n, and finds the point in the n x n square of positions whose upper-left corner is (a,b) that has the maximum min-distance to an enemy. The enemies are not assumed to be in this n x n grid (although they certainly can be)
2) findSafePoint(n, enemies, mesh = 20) This function takes a set of enemies which are assumed to be in the n x n grid starting at (0,0). mesh determines the size of the subgrids, defaulting to 20. The overall grid is split into mesh x mesh subgrids (or slightly smaller along the boundaries if mesh doesn't divide n) which I think of as territories. I call a territory an enemy territory if it has an enemy in it. I create the set of enemy territories and pass it to fullResSearch with parameter n divided by mesh rather than n. The return value gives me the territory which is farthest from any enemy territory. Such a territory can be regarded as fairly safe. I feed that territory back into fullResSearch to find the safest point in that territory, which is the overall return value. The resulting point is either optimal or near-optimal and is computed very quickly. Here is the code (together with a test function):
import random
def fullResSearch(a,b,n,enemies):
    minDists = [[0]*n for i in range(n)]
    for i in range(n):
        for j in range(n):
            minDists[i][j] = min((a+i - x)**2 + (b+j - y)**2 for (x,y) in enemies)
    maximin = 0
    for i in range(n):
        for j in range(n):
            if minDists[i][j] > maximin:
                maximin = minDists[i][j]
                farthest = (a+i,b+j)
    return farthest

def findSafePoint(n, enemies, mesh = 20):
    m = n // mesh
    territories = set() #enemy territories
    for (x,y) in enemies:
        i = x//mesh
        j = y//mesh
        territories.add((i,j))
    (i,j) = fullResSearch(0,0,m,territories)
    a = i*mesh
    b = j*mesh
    k = min(mesh,n - a,n - b) #in case mesh doesn't divide n
    return fullResSearch(a,b,k,enemies)

def test(n, numEnemies, mesh = 20):
    enemies = set()
    count = 0
    while count < numEnemies:
        i = random.randint(0,n-1)
        j = random.randint(0,n-1)
        if not (i,j) in enemies:
            enemies.add((i,j))
            count += 1
    for e in enemies: print("Enemy at", e)
    print("Safe point at", findSafePoint(n,enemies, mesh))
A typical run:
>>> test(500,5)
Enemy at (216, 67)
Enemy at (145, 251)
Enemy at (407, 256)
Enemy at (111, 258)
Enemy at (26, 298)
Safe point at (271, 499)
(I verified by using fullResSearch on the overall grid that (271,499) is in fact optimal for these enemies)
This method looks at all the enemies from the center point, checks the direction they're in, finds the emptiest sector, and then returns a point on a line through the middle of that sector, 250 away from the center.
The result isn't always perfect, and the safe spot is never in the center (though that could be added), but maybe it's good enough.
The algorithm runs more than a million times per second on my i5 desktop, but a phone may not be that good with trigonometry. The algorithm uses 3 trigonometry functions per enemy: atan2(), cos() and sin(). Those will probably have the most impact on the speed of execution. Maybe you could replace the cos() and sin() with a lookup table.
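If the trigonometry ever does turn out to be the bottleneck, a coarse lookup table is easy to sketch (purely illustrative, one-degree resolution):
// Precomputed sine/cosine at one-degree steps; plenty accurate for picking a direction.
var SIN = [], COS = [];
for (var deg = 0; deg < 360; deg++) {
    SIN[deg] = Math.sin(deg * Math.PI / 180);
    COS[deg] = Math.cos(deg * Math.PI / 180);
}
function lookupIndex(rad) {
    return ((Math.round(rad * 180 / Math.PI) % 360) + 360) % 360;
}
// Usage: COS[lookupIndex(bisect)] and SIN[lookupIndex(bisect)] instead of Math.cos/Math.sin.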
Run the code snippet to see an example with randomly positioned enemies.
function furthestFrom(e) {
var dir = [], widest = 0, bisect;
for (var i = 0; i < 5; i++) {
dir[i] = Math.atan2(e[i].y - 250, e[i].x - 250);
}
dir.sort(function(a, b){return a - b});
dir.push(dir[0] + 6.2831853);
for (var i = 0; i < 5; i++) {
var angle = dir[i + 1] - dir[i];
if (angle > widest) {
widest = angle;
bisect = dir[i] + angle / 2;
}
}
return({x: 250 * (1 + Math.cos(bisect)), y: 250 * (1 + Math.sin(bisect))});
}
// CREATE TEST DATA
var enemy = [];
for (var i = 0; i < 5; i++) enemy[i] = {x: Math.round(Math.random() * 500), y: Math.round(Math.random() * 500)};
// RUN FUNCTION AND SHOW RESULT ON CANVAS
var result = furthestFrom(enemy);
var canvas = document.getElementById("canvas");
canvas.width = 500; canvas.height = 500;
canvas = canvas.getContext("2d");
for (var i = 0; i < 5; i++) {
paintDot(canvas, enemy[i].x, enemy[i].y, "red");
}
paintDot(canvas, result.x, result.y, "blue");
// PAINT DOT ON CANVAS
function paintDot(canvas, x, y, color) {
canvas.beginPath();
canvas.arc(x, y, 10, 0, 6.2831853);
canvas.closePath();
canvas.fillStyle = color;
canvas.fill();
}
<BODY STYLE="margin: 0; border: 0; padding: 0">
<CANVAS ID="canvas" STYLE="width: 200px; height: 200px; background-color: #EEE;"CANVAS>
</BODY>
Here's an interesting solution; however, I cannot test its efficiency. For each enemy, propagate lines of numbers outward from its position, starting with one and increasing by one for each increase in distance. Four initial lines go out from the enemy toward the four edges, and each time a line advances one step further out, it spawns another line branching off at a 90 degree angle, which also increases its number with each step. If a line encounters an already written number that is smaller than its own, it does not overwrite it and stops reaching further. Essentially, this makes it so that if a line finds a smaller number, it won't check any further grid cells, eliminating the need to check the entire grid for all of the enemies.
<<<<<<^^^^^^^
<<<<<<^^^^^^^
<<<<<<X>>>>>>
vvvvvvv>>>>>>
vvvvvvv>>>>>>
public void map(int posX, int posY)
{
//left up right down
makeLine(posX, posY, -1, 0, 0, -1);
makeLine(posX, posY, 0, 1, -1, 0);
makeLine(posX, posY, 1, 0, 0, 1);
makeLine(posX, posY, 0, -1, 1, 0);
grid[posX][posY] = 1000;
}
public void makeLine(int posX, int posY, int dirX, int dirY, int dir2X, int dir2Y)
{
int currentVal = 1;
posX += dirX;
posY += dirY;
while (0 <= posX && posX < maxX && posY < maxY && posY >= 0 && currentVal < grid[posX][posY])
{
int secondaryPosX = posX + dir2X;
int secondaryPosY = posY + dir2Y;
int secondaryVal = currentVal + 1;
makeSecondaryLine( secondaryPosX, secondaryPosY, dir2X, dir2Y, secondaryVal);
makeSecondaryLine( secondaryPosX, secondaryPosY, -dir2X, -dir2Y, secondaryVal);
grid[posX][posY] = currentVal;
posX += dirX;
posY += dirY;
currentVal++;
}
}
public void makeSecondaryLine(int secondaryPosX, int secondaryPosY, int dir2X, int dir2Y, int secondaryVal)
{
while (0 <= secondaryPosX && secondaryPosX < maxX && secondaryPosY < maxY &&
secondaryPosY >= 0 && secondaryVal < grid[secondaryPosX][secondaryPosY])
{
grid[secondaryPosX][secondaryPosY] = secondaryVal;
secondaryPosX += dir2X;
secondaryPosY += dir2Y;
secondaryVal++;
}
}
Here is the code I used to map out the entire grid. The nice thing about this is that the number of times a cell is checked/written does not depend much on the number of enemies on the screen. Using a counter and randomly generated enemies, I was able to get this: 124 enemies and 1528537 checks, 68 enemies and 1246769 checks, 15 enemies and 795695 checks, 500 enemies and 1747452 checks. This is a huge difference compared to your earlier code, which would do number of enemies * number of spaces.
For 124 enemies you'd have done 31000000 checks, while instead this did 1528537, which is less than 5% of the number of checks normally done.
You can choose some random point at the grid and then move it iteratively from the enemies, here is my implementation in python:
from numpy import array
from numpy.linalg import norm
from random import random as rnd
def get_pos(enem):
    # choose a random start position
    pos = array([rnd() * 500., rnd() * 500.])
    # make several steps away from the enemies
    for i in range(25): # 25 steps
        s = array([0., 0.]) # step direction
        for e in enem:
            vec = pos - array(e) # direction from enemy
            dist = norm(vec) # distance from enemy
            vec /= dist # normalize vector
            # calculate size of step
            step = (1000. / dist) ** 2
            vec *= step
            s += vec
        # update position
        pos += s
        # ensure that pos is in bounds
        pos[0] = min(max(0, pos[0]), 500.)
        pos[1] = min(max(0, pos[1]), 500.)
    return pos

def get_dist(enem, pos):
    dists = [norm(pos - array(e)) for e in enem]
    print('Min dist: %f' % min(dists))
    print('Avg dist: %f' % (sum(dists) / len(dists)))

enem = [(0., 0.), (250., 250.), (500., 0.), (0., 500.), (500., 500.)]
pos = get_pos(enem)
print('Position: %s' % pos)
get_dist(enem, pos)
Output:
Position: [ 0. 250.35338215]
Min dist: 249.646618
Avg dist: 373.606883
Triangulate the enemies (there's less than 5?); and triangulate each corner of the grid with the closest pair of enemies to it. The circumcenter of one of these triangles should be a decent place to re-spawn.
Below is an example in JavaScript. I used the canvas method from m69's answer for demonstration. The green dots are the candidates tested to arrive at the blue-colored suggestion. Since we are triangulating the corners, they are not offered as solutions here (perhaps the randomly-closer solutions can be exciting for a player? Alternatively, just test for the corners as well..).
// http://stackoverflow.com/questions/4103405/what-is-the-algorithm-for-finding-the-center-of-a-circle-from-three-points
function circumcenter(x1,y1,x2,y2,x3,y3)
{
var offset = x2 * x2 + y2 * y2;
var bc = ( x1 * x1 + y1 * y1 - offset ) / 2;
var cd = (offset - x3 * x3 - y3 * y3) / 2;
var det = (x1 - x2) * (y2 - y3) - (x2 - x3)* (y1 - y2);
var idet = 1/det;
var centerx = (bc * (y2 - y3) - cd * (y1 - y2)) * idet;
var centery = (cd * (x1 - x2) - bc * (x2 - x3)) * idet;
return [centerx,centery];
}
var best = 0,
candidates = [];
function better(pt,pts){
var temp = Infinity;
for (var i=0; i<pts.length; i+=2){
var d = (pts[i] - pt[0])*(pts[i] - pt[0]) + (pts[i+1] - pt[1])*(pts[i+1] - pt[1]);
if (d <= best)
return false;
else if (d < temp)
temp = d;
}
best = temp;
return true;
}
function f(es){
if (es.length < 2)
return "farthest corner";
var corners = [0,0,500,0,500,500,0,500],
bestcandidate;
// test enemies only
if (es.length > 2){
for (var i=0; i<es.length-4; i+=2){
for (var j=i+2; j<es.length-2; j+=2){
for (var k=j+2; k<es.length; k+=2){
var candidate = circumcenter(es[i],es[i+1],es[j],es[j+1],es[k],es[k+1]);
if (candidate[0] < 0 || candidate[1] < 0 || candidate[0] > 500 || candidate[1] > 500)
continue;
candidates.push(candidate[0]);
candidates.push(candidate[1]);
if (better(candidate,es))
bestcandidate = candidate.slice();
}
}
}
}
//test corners
for (var i=0; i<8; i+=2){
for (var j=0; j<es.length-2; j+=2){
for (var k=j+2; k<es.length; k+=2){
var candidate = circumcenter(corners[i],corners[i+1],es[j],es[j+1],es[k],es[k+1]);
if (candidate[0] < 0 || candidate[1] < 0 || candidate[0] > 500 || candidate[1] > 500)
continue;
candidates.push(candidate[0]);
candidates.push(candidate[1]);
if (better(candidate,es))
bestcandidate = candidate.slice();
}
}
}
best = 0;
return bestcandidate;
}
// SHOW RESULT ON CANVAS
var canvas = document.getElementById("canvas");
canvas.width = 500; canvas.height = 500;
context = canvas.getContext("2d");
//setInterval(function() {
// CREATE TEST DATA
context.clearRect(0, 0, canvas.width, canvas.height);
candidates = [];
var enemy = [];
for (var i = 0; i < 8; i++) enemy.push(Math.round(Math.random() * 500));
// RUN FUNCTION
var result = f(enemy);
for (var i = 0; i < 8; i+=2) {
paintDot(context, enemy[i], enemy[i+1], 10, "red");
}
for (var i = 0; i < candidates.length; i+=2) {
paintDot(context, candidates[i], candidates[i+1], 7, "green");
}
paintDot(context, result[0], result[1], 18, "blue");
function paintDot(context, x, y, size, color) {
context.beginPath();
context.arc(x, y, size, 0, 6.2831853);
context.closePath();
context.fillStyle = color;
context.fill();
}
//},1500);
<BODY STYLE="margin: 0; border: 0; padding: 0;">
<CANVAS ID="canvas" STYLE="width: 200px; height: 200px; background:
radial-gradient(rgba(255,255,255,0) 0, rgba(255,255,255,.15) 30%, rgba(255,255,255,.3) 32%, rgba(255,255,255,0) 33%) 0 0,
radial-gradient(rgba(255,255,255,0) 0, rgba(255,255,255,.1) 11%, rgba(255,255,255,.3) 13%, rgba(255,255,255,0) 14%) 0 0,
radial-gradient(rgba(255,255,255,0) 0, rgba(255,255,255,.2) 17%, rgba(255,255,255,.43) 19%, rgba(255,255,255,0) 20%) 0 110px,
radial-gradient(rgba(255,255,255,0) 0, rgba(255,255,255,.2) 11%, rgba(255,255,255,.4) 13%, rgba(255,255,255,0) 14%) -130px -170px,
radial-gradient(rgba(255,255,255,0) 0, rgba(255,255,255,.2) 11%, rgba(255,255,255,.4) 13%, rgba(255,255,255,0) 14%) 130px 370px,
radial-gradient(rgba(255,255,255,0) 0, rgba(255,255,255,.1) 11%, rgba(255,255,255,.2) 13%, rgba(255,255,255,0) 14%) 0 0,
linear-gradient(45deg, #343702 0%, #184500 20%, #187546 30%, #006782 40%, #0b1284 50%, #760ea1 60%, #83096e 70%, #840b2a 80%, #b13e12 90%, #e27412 100%);
background-size: 470px 470px, 970px 970px, 410px 410px, 610px 610px, 530px 530px, 730px 730px, 100% 100%;
background-color: #840b2a;"></CANVAS>
<!-- http://lea.verou.me/css3patterns/#rainbow-bokeh -->
</BODY>
I'm looking at a paper named "Shape Based Image Retrieval Using Generic Fourier Descriptors", but only have rudimentary knowledge of Fourier Descriptors. I am attempting to implement the algorithm on page 12 of the paper, and have some results which I can't really make too much sense out of.
If I create a small image, calculate the FD for it, and compare that FD to the one of the same image translated by a single pixel in the x and y directions, the descriptors are completely different, except for the first entry, which is exactly the same. So firstly: should these descriptors be exactly the same (given that the descriptor is apparently scale, rotation, and translation invariant) for the two images?
Secondly, the paper mentions that descriptors of two separate images are compared by a simple Euclidean distance; therefore, the Euclidean distance between the two descriptors mentioned above should apparently be 0.
I quickly put together some JavaScript code to test out the algorithm, which is below.
Does anybody have any input, ideas, ways to move forward?
Thanks,
Paul
var iShape = [
0, 0, 0, 0, 0,
0, 0, 255, 0, 0,
0, 255, 255, 255, 0,
0, 0, 255, 0, 0,
0, 0, 0, 0, 0
];
var ImageWidth = 5, ImageHeight = 5, MaxRFreq = 5, MaxAFreq = 5;
// Calculate centroid
var cX = 0, cY = 0, pCount = 0;
for (x = 0; x < ImageWidth; x++) {
for (y = 0; y < ImageHeight; y++) {
if (iShape[y * ImageWidth + x]) {
cX += x;
cY += y;
pCount++;
}
}
}
cX = cX / pCount;
cY = cY / pCount;
console.log("cX = " + cX + ", cY = " + cY);
// Calculate the maximum radius
var maxR = 0;
for (x = 0; x < ImageWidth; x++) {
for (y = 0; y < ImageHeight; y++) {
if (iShape[y * ImageWidth + x]) {
var r = Math.sqrt(Math.pow(x - cX, 2) + Math.pow(y - cY, 2));
if (r > maxR) {
maxR = r;
}
}
}
}
// Initialise real / imaginary table
var i;
var FR = [ ];
var FI = [ ];
for (r = 0; r < (MaxRFreq); r++) {
var rRow = [ ];
FR.push(rRow);
var aRow = [ ];
FI.push(aRow);
for (a = 0; a < (MaxAFreq); a++) {
rRow.push(0.0);
aRow.push(0.0);
}
}
var rFreq, aFreq, x, y;
for (rFreq = 0; rFreq < MaxRFreq; rFreq++) {
for (aFreq = 0; aFreq < MaxAFreq; aFreq++) {
for (x = 0; x < ImageWidth; x++) {
for (y = 0; y < ImageHeight; y++) {
var radius = Math.sqrt(Math.pow(x - maxR, 2) +
Math.pow(y - maxR, 2));
var theta = Math.atan2(y - maxR, x - maxR);
if (theta < 0.0) {
theta += (2 * Math.PI);
}
var iPixel = iShape[y * ImageWidth + x];
FR[rFreq][aFreq] += iPixel * Math.cos(2 * Math.PI * rFreq *
(radius / maxR) + aFreq * theta);
FI[rFreq][aFreq] -= iPixel * Math.sin(2 * Math.PI * rFreq *
(radius / maxR) + aFreq * theta);
}
}
}
}
// Initialise fourier descriptor table
var FD = [ ];
for (i = 0; i < (MaxRFreq * MaxAFreq); i++) {
FD.push(0.0);
}
// Calculate the fourier descriptor
for (rFreq = 0; rFreq < MaxRFreq; rFreq++) {
for (aFreq = 0; aFreq < MaxAFreq; aFreq++) {
if (rFreq == 0 && aFreq == 0) {
FD[0] = Math.sqrt(Math.pow(FR[0][0], 2) + Math.pow(FR[0][0], 2) /
(Math.PI * maxR * maxR));
} else {
FD[rFreq * MaxAFreq + aFreq] = Math.sqrt(Math.pow(FR[rFreq][aFreq], 2) +
Math.pow(FI[rFreq][aFreq], 2) / FD[0]);
}
}
}
for (i = 0; i < (MaxRFreq * MaxAFreq); i++) {
console.log(FD[i]);
}
There are three separate normalization techniques applied here in order to make the final descriptor invariant to 1) translation, 2) scale and 3) rotation.
For the translation invariance part you need to find the centroid of the shape and calculate the vector of every contour point with the centroid as the origin. This is done by subtracting the x and y coordinates of the centroid from each point's coordinates, respectively. So in your code the radius and theta of each point should be computed as follows:
var radius = Math.sqrt(Math.pow(x - cX, 2) + Math.pow(y - cY, 2));
var theta = Math.atan2(y - cY, x - cX);
For the scale invariance part you need to find the maximum magnitude (or radius, as you say) over all the vectors (already normalized for translation invariance) and divide the magnitude of each point by that maximum value. An alternative way of achieving this is to divide every Fourier coefficient by the zero-frequency coefficient (the first coefficient), as the scale information is represented there. As I can see in your code and in the paper, this is implemented according to the second way I described.
Finally, the rotation invariance is achieved by keeping only the magnitude of the Fourier coefficients, as you can see in step 6 of the paper's pseudo-code.
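As a rough sketch of those last two steps (reusing the FR, FI, maxR, MaxRFreq and MaxAFreq names from the question's code; this normalizes every coefficient by the zero-frequency magnitude, as described above):
// Rotation invariance: keep only magnitudes. Scale invariance: divide by the
// zero-frequency magnitude; the zero-frequency term itself is normalized by the area.
var FD = [];
var dc = Math.sqrt(FR[0][0] * FR[0][0] + FI[0][0] * FI[0][0]);
for (var rFreq = 0; rFreq < MaxRFreq; rFreq++) {
    for (var aFreq = 0; aFreq < MaxAFreq; aFreq++) {
        var mag = Math.sqrt(FR[rFreq][aFreq] * FR[rFreq][aFreq] +
                            FI[rFreq][aFreq] * FI[rFreq][aFreq]);
        FD.push(rFreq === 0 && aFreq === 0 ? mag / (Math.PI * maxR * maxR) : mag / dc);
    }
}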
In addition to all these, keep in mind that in order to apply the Euclidean distance for the descriptor comparison, the length of the descriptor must be the same for every shape. In an FFT, the number of final coefficients depends on the number of contour points of the shape. The solution I have found for this is to interpolate between points in order to reach a fixed number of points for every shape.
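One possible way to do that interpolation (my own sketch, not from the paper) is to walk the contour by arc length and emit a fixed number of evenly spaced points:
// Resample a closed contour (array of {x, y}) to exactly n evenly spaced points.
function resampleContour(contour, n) {
    var lengths = [0]; // cumulative arc length, including the closing segment
    for (var i = 1; i <= contour.length; i++) {
        var a = contour[i - 1], b = contour[i % contour.length];
        var dx = b.x - a.x, dy = b.y - a.y;
        lengths.push(lengths[i - 1] + Math.sqrt(dx * dx + dy * dy));
    }
    var total = lengths[contour.length], result = [], seg = 1;
    for (var k = 0; k < n; k++) {
        var target = k * total / n;
        while (lengths[seg] < target) seg++;
        var t = (target - lengths[seg - 1]) / ((lengths[seg] - lengths[seg - 1]) || 1);
        var p = contour[seg - 1], q = contour[seg % contour.length];
        result.push({x: p.x + t * (q.x - p.x), y: p.y + t * (q.y - p.y)});
    }
    return result;
}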
Hope I helped,
Lazaros
I have an application that defines a real world rectangle on top of an image/photograph, of course in 2D it may not be a rectangle because you are looking at it from an angle.
The problem is, say that the rectangle needs to have grid lines drawn on it, for example if it is 3x5 so I need to draw 2 lines from side 1 to side 3, and 4 lines from side 2 to side 4.
As of right now I am breaking up each line into equidistant parts, to get the start and end point of all the grid lines. However the more of an angle the rectangle is on, the more "incorrect" these lines become, as horizontal lines further from you should be closer together.
Does anyone know the name of the algorithm that I should be searching for?
Yes I know you can do this in 3D, however I am limited to 2D for this particular application.
Here's the solution.
The basic idea is you can find the perspective correct "center" of your rectangle by connecting the corners diagonally. The intersection of the two resulting lines is your perspective correct center. From there you subdivide your rectangle into four smaller rectangles, and you repeat the process. The number of times depends on how accurate you want it. You can subdivide to just below the size of a pixel for effectively perfect perspective.
Then in your subrectangles you just apply your standard uncorrected "textured" triangles, or rectangles or whatever.
You can perform this algorithm without going to the complex trouble of building a 'real' 3D world. It's also good if you do have a real 3D world modeled, but your textured triangles are not perspective corrected in hardware, or you need a performant way to get perspective-correct planes without per-pixel rendering trickery.
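As a rough illustration of the first step (my own sketch, not the poster's code): the perspective-correct center of a quad is the intersection of its diagonals, assuming plain {x, y} corner objects:
// Intersection point of the infinite lines p1-p2 and p3-p4 (assumes they are not parallel).
function lineIntersection(p1, p2, p3, p4) {
    var d = (p1.x - p2.x) * (p3.y - p4.y) - (p1.y - p2.y) * (p3.x - p4.x);
    var a = p1.x * p2.y - p1.y * p2.x;
    var b = p3.x * p4.y - p3.y * p4.x;
    return {x: (a * (p3.x - p4.x) - (p1.x - p2.x) * b) / d,
            y: (a * (p3.y - p4.y) - (p1.y - p2.y) * b) / d};
}
// Perspective-correct center of a quad given as [topLeft, topRight, bottomRight, bottomLeft].
function perspectiveCenter(q) {
    return lineIntersection(q[0], q[2], q[1], q[3]);
}
From there each of the four sub-rectangles can be subdivided the same way until the pieces are small enough.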
Image: Example of Bilinear & Perspective Transform (Note: The height of top & bottom horizontal grid lines is actually half of the rest lines height, on both drawings)
========================================
I know this is an old question, but I have a generic solution, so I decided to publish it, hoping it will be useful to future readers.
The code below can draw an arbitrary perspective grid without the need for repetitive computations.
I actually began with a similar problem: to draw a 2D perspective grid and then transform the underlying image to restore the perspective.
I started to read here:
http://www.imagemagick.org/Usage/distorts/#bilinear_forward
and then here (the Leptonica Library):
http://www.leptonica.com/affine.html
where I found this:
When you look at an object in a plane from some arbitrary direction at
a finite distance, you get an additional "keystone" distortion in the
image. This is a projective transform, which keeps straight lines
straight but does not preserve the angles between lines. This warping
cannot be described by a linear affine transformation, and in fact
differs by x- and y-dependent terms in the denominator.
The transformation is not linear, as many people already pointed out in this thread. It involves solving a linear system of 8 equations (once) to compute the 8 required coefficients and then you can use them to transform as many points as you want.
To avoid including all Leptonica library in my project, I took some pieces of code from it, I removed all special Leptonica data-types & macros, I fixed some memory leaks and I converted it to a C++ class (mostly for encapsulation reasons) which does just one thing:
It maps a (Qt) QPointF float (x,y) coordinate to the corresponding Perspective Coordinate.
If you want to adapt the code to another C++ library, the only thing to redefine/substitute is the QPointF coordinate class.
I hope some future readers would find it useful.
The code below is divided into 3 parts:
A. An example on how to use the genImageProjective C++ class to draw a 2D perspective Grid
B. genImageProjective.h file
C. genImageProjective.cpp file
//============================================================
// C++ Code Example on how to use the
// genImageProjective class to draw a perspective 2D Grid
//============================================================
#include "genImageProjective.h"
// Input: 4 Perspective-Tranformed points:
// perspPoints[0] = top-left
// perspPoints[1] = top-right
// perspPoints[2] = bottom-right
// perspPoints[3] = bottom-left
void drawGrid(QPointF *perspPoints)
{
(...)
// Setup a non-transformed area rectangle
// I use a simple square rectangle here because in this case we are not interested in the source-rectangle,
// (we want to just draw a grid on the perspPoints[] area)
// but you can use any arbitrary rectangle to perform a real mapping to the perspPoints[] area
QPointF topLeft = QPointF(0,0);
QPointF topRight = QPointF(1000,0);
QPointF bottomRight = QPointF(1000,1000);
QPointF bottomLeft = QPointF(0,1000);
float width = topRight.x() - topLeft.x();
float height = bottomLeft.y() - topLeft.y();
// Setup Projective trasform object
genImageProjective imageProjective;
imageProjective.sourceArea[0] = topLeft;
imageProjective.sourceArea[1] = topRight;
imageProjective.sourceArea[2] = bottomRight;
imageProjective.sourceArea[3] = bottomLeft;
imageProjective.destArea[0] = perspPoints[0];
imageProjective.destArea[1] = perspPoints[1];
imageProjective.destArea[2] = perspPoints[2];
imageProjective.destArea[3] = perspPoints[3];
// Compute projective transform coefficients
if (imageProjective.computeCoeefficients() != 0)
return; // This can actually fail if any 3 points of Source or Dest are colinear
// Initialize Grid parameters (without transform)
float gridFirstLine = 0.1f; // The normalized position of first Grid Line (0.0 to 1.0)
float gridStep = 0.1f; // The normalized grid size (=distance between grid lines: 0.0 to 1.0)
// Draw Horizonal Grid lines
QPointF lineStart, lineEnd, tempPnt;
for (float pos = gridFirstLine; pos <= 1.0f; pos += gridStep)
{
// Compute Grid Line Start
tempPnt = QPointF(topLeft.x(), topLeft.y() + pos*height);
imageProjective.mapSourceToDestPoint(tempPnt, lineStart);
// Compute Grid Line End
tempPnt = QPointF(topRight.x(), topLeft.y() + pos*height);
imageProjective.mapSourceToDestPoint(tempPnt, lineEnd);
// Draw Horizontal Line (use your prefered method to draw the line)
(...)
}
// Draw Vertical Grid lines
for (float pos = gridFirstLine; pos <= 1.0f; pos += gridStep)
{
// Compute Grid Line Start
tempPnt = QPointF(topLeft.x() + pos*width, topLeft.y());
imageProjective.mapSourceToDestPoint(tempPnt, lineStart);
// Compute Grid Line End
tempPnt = QPointF(topLeft.x() + pos*width, bottomLeft.y());
imageProjective.mapSourceToDestPoint(tempPnt, lineEnd);
// Draw Vertical Line (use your prefered method to draw the line)
(...)
}
(...)
}
==========================================
//========================================
//C++ Header File: genImageProjective.h
//========================================
#ifndef GENIMAGE_H
#define GENIMAGE_H
#include <QPointF>
// Class to transform an Image Point using Perspective transformation
class genImageProjective
{
public:
genImageProjective();
int computeCoeefficients(void);
int mapSourceToDestPoint(QPointF& sourcePoint, QPointF& destPoint);
public:
QPointF sourceArea[4]; // Source Image area limits (Rectangular)
QPointF destArea[4]; // Destination Image area limits (Perspectivelly Transformed)
private:
static int gaussjordan(float **a, float *b, int n);
bool coefficientsComputed;
float vc[8]; // Vector of Transform Coefficients
};
#endif // GENIMAGE_H
//========================================
//========================================
//C++ CPP File: genImageProjective.cpp
//========================================
#include <math.h>
#include "genImageProjective.h"
// ----------------------------------------------------
// class genImageProjective
// ----------------------------------------------------
genImageProjective::genImageProjective()
{
sourceArea[0] = sourceArea[1] = sourceArea[2] = sourceArea[3] = QPointF(0,0);
destArea[0] = destArea[1] = destArea[2] = destArea[3] = QPointF(0,0);
coefficientsComputed = false;
}
// --------------------------------------------------------------
// Compute projective transform coeeeficients
// RetValue: 0: Success, !=0: Error
/*-------------------------------------------------------------*
* Projective coordinate transformation *
*-------------------------------------------------------------*/
/*!
* computeCoeefficients()
*
* Input: this->sourceArea[4]: (source 4 points; unprimed)
* this->destArea[4]: (transformed 4 points; primed)
* this->vc (computed vector of transform coefficients)
* Return: 0 if OK; <0 on error
*
* We have a set of 8 equations, describing the projective
* transformation that takes 4 points (sourceArea) into 4 other
* points (destArea). These equations are:
*
* x1' = (c[0]*x1 + c[1]*y1 + c[2]) / (c[6]*x1 + c[7]*y1 + 1)
* y1' = (c[3]*x1 + c[4]*y1 + c[5]) / (c[6]*x1 + c[7]*y1 + 1)
* x2' = (c[0]*x2 + c[1]*y2 + c[2]) / (c[6]*x2 + c[7]*y2 + 1)
* y2' = (c[3]*x2 + c[4]*y2 + c[5]) / (c[6]*x2 + c[7]*y2 + 1)
* x3' = (c[0]*x3 + c[1]*y3 + c[2]) / (c[6]*x3 + c[7]*y3 + 1)
* y3' = (c[3]*x3 + c[4]*y3 + c[5]) / (c[6]*x3 + c[7]*y3 + 1)
* x4' = (c[0]*x4 + c[1]*y4 + c[2]) / (c[6]*x4 + c[7]*y4 + 1)
* y4' = (c[3]*x4 + c[4]*y4 + c[5]) / (c[6]*x4 + c[7]*y4 + 1)
*
* Multiplying both sides of each eqn by the denominator, we get
*
* AC = B
*
* where B and C are column vectors
*
* B = [ x1' y1' x2' y2' x3' y3' x4' y4' ]
* C = [ c[0] c[1] c[2] c[3] c[4] c[5] c[6] c[7] ]
*
* and A is the 8x8 matrix
*
* x1 y1 1 0 0 0 -x1*x1' -y1*x1'
* 0 0 0 x1 y1 1 -x1*y1' -y1*y1'
* x2 y2 1 0 0 0 -x2*x2' -y2*x2'
* 0 0 0 x2 y2 1 -x2*y2' -y2*y2'
* x3 y3 1 0 0 0 -x3*x3' -y3*x3'
* 0 0 0 x3 y3 1 -x3*y3' -y3*y3'
* x4 y4 1 0 0 0 -x4*x4' -y4*x4'
* 0 0 0 x4 y4 1 -x4*y4' -y4*y4'
*
* These eight equations are solved here for the coefficients C.
*
* These eight coefficients can then be used to find the mapping
* (x,y) --> (x',y'):
*
* x' = (c[0]x + c[1]y + c[2]) / (c[6]x + c[7]y + 1)
* y' = (c[3]x + c[4]y + c[5]) / (c[6]x + c[7]y + 1)
*
*/
int genImageProjective::computeCoeefficients(void)
{
int retValue = 0;
int i;
float *a[8]; /* 8x8 matrix A */
float *b = this->vc; /* rhs vector of primed coords X'; coeffs returned in vc[] */
b[0] = destArea[0].x();
b[1] = destArea[0].y();
b[2] = destArea[1].x();
b[3] = destArea[1].y();
b[4] = destArea[2].x();
b[5] = destArea[2].y();
b[6] = destArea[3].x();
b[7] = destArea[3].y();
for (i = 0; i < 8; i++)
a[i] = NULL;
for (i = 0; i < 8; i++)
{
if ((a[i] = (float *)calloc(8, sizeof(float))) == NULL)
{
retValue = -100; // ERROR_INT("a[i] not made", procName, 1);
goto Terminate;
}
}
a[0][0] = sourceArea[0].x();
a[0][1] = sourceArea[0].y();
a[0][2] = 1.;
a[0][6] = -sourceArea[0].x() * b[0];
a[0][7] = -sourceArea[0].y() * b[0];
a[1][3] = sourceArea[0].x();
a[1][4] = sourceArea[0].y();
a[1][5] = 1;
a[1][6] = -sourceArea[0].x() * b[1];
a[1][7] = -sourceArea[0].y() * b[1];
a[2][0] = sourceArea[1].x();
a[2][1] = sourceArea[1].y();
a[2][2] = 1.;
a[2][6] = -sourceArea[1].x() * b[2];
a[2][7] = -sourceArea[1].y() * b[2];
a[3][3] = sourceArea[1].x();
a[3][4] = sourceArea[1].y();
a[3][5] = 1;
a[3][6] = -sourceArea[1].x() * b[3];
a[3][7] = -sourceArea[1].y() * b[3];
a[4][0] = sourceArea[2].x();
a[4][1] = sourceArea[2].y();
a[4][2] = 1.;
a[4][6] = -sourceArea[2].x() * b[4];
a[4][7] = -sourceArea[2].y() * b[4];
a[5][3] = sourceArea[2].x();
a[5][4] = sourceArea[2].y();
a[5][5] = 1;
a[5][6] = -sourceArea[2].x() * b[5];
a[5][7] = -sourceArea[2].y() * b[5];
a[6][0] = sourceArea[3].x();
a[6][1] = sourceArea[3].y();
a[6][2] = 1.;
a[6][6] = -sourceArea[3].x() * b[6];
a[6][7] = -sourceArea[3].y() * b[6];
a[7][3] = sourceArea[3].x();
a[7][4] = sourceArea[3].y();
a[7][5] = 1;
a[7][6] = -sourceArea[3].x() * b[7];
a[7][7] = -sourceArea[3].y() * b[7];
retValue = gaussjordan(a, b, 8);
Terminate:
// Clean up
for (i = 0; i < 8; i++)
{
if (a[i])
free(a[i]);
}
this->coefficientsComputed = (retValue == 0);
return retValue;
}
/*-------------------------------------------------------------*
* Gauss-jordan linear equation solver *
*-------------------------------------------------------------*/
/*
* gaussjordan()
*
* Input: a (n x n matrix)
* b (rhs column vector)
* n (dimension)
* Return: 0 if ok, 1 on error
*
* Note side effects:
* (1) the matrix a is transformed to its inverse
* (2) the vector b is transformed to the solution X to the
* linear equation AX = B
*
* Adapted from "Numerical Recipes in C, Second Edition", 1992
* pp. 36-41 (gauss-jordan elimination)
*/
#define SWAP(a,b) {temp = (a); (a) = (b); (b) = temp;}
int genImageProjective::gaussjordan(float **a, float *b, int n)
{
int retValue = 0;
int i, icol=0, irow=0, j, k, l, ll;
int *indexc = NULL, *indexr = NULL, *ipiv = NULL;
float big, dum, pivinv, temp;
if (!a)
{
retValue = -1; // ERROR_INT("a not defined", procName, 1);
goto Terminate;
}
if (!b)
{
retValue = -2; // ERROR_INT("b not defined", procName, 1);
goto Terminate;
}
if ((indexc = (int *)calloc(n, sizeof(int))) == NULL)
{
retValue = -3; // ERROR_INT("indexc not made", procName, 1);
goto Terminate;
}
if ((indexr = (int *)calloc(n, sizeof(int))) == NULL)
{
retValue = -4; // ERROR_INT("indexr not made", procName, 1);
goto Terminate;
}
if ((ipiv = (int *)calloc(n, sizeof(int))) == NULL)
{
retValue = -5; // ERROR_INT("ipiv not made", procName, 1);
goto Terminate;
}
for (i = 0; i < n; i++)
{
big = 0.0;
for (j = 0; j < n; j++)
{
if (ipiv[j] != 1)
{
for (k = 0; k < n; k++)
{
if (ipiv[k] == 0)
{
if (fabs(a[j][k]) >= big)
{
big = fabs(a[j][k]);
irow = j;
icol = k;
}
}
else if (ipiv[k] > 1)
{
retValue = -6; // ERROR_INT("singular matrix", procName, 1);
goto Terminate;
}
}
}
}
++(ipiv[icol]);
if (irow != icol)
{
for (l = 0; l < n; l++)
SWAP(a[irow][l], a[icol][l]);
SWAP(b[irow], b[icol]);
}
indexr[i] = irow;
indexc[i] = icol;
if (a[icol][icol] == 0.0)
{
retValue = -7; // ERROR_INT("singular matrix", procName, 1);
goto Terminate;
}
pivinv = 1.0 / a[icol][icol];
a[icol][icol] = 1.0;
for (l = 0; l < n; l++)
a[icol][l] *= pivinv;
b[icol] *= pivinv;
for (ll = 0; ll < n; ll++)
{
if (ll != icol)
{
dum = a[ll][icol];
a[ll][icol] = 0.0;
for (l = 0; l < n; l++)
a[ll][l] -= a[icol][l] * dum;
b[ll] -= b[icol] * dum;
}
}
}
for (l = n - 1; l >= 0; l--)
{
if (indexr[l] != indexc[l])
{
for (k = 0; k < n; k++)
SWAP(a[k][indexr[l]], a[k][indexc[l]]);
}
}
Terminate:
if (indexr)
free(indexr);
if (indexc)
free(indexc);
if (ipiv)
free(ipiv);
return retValue;
}
// --------------------------------------------------------------
// Map a source point to destination using projective transform
// --------------------------------------------------------------
// Params:
// sourcePoint: initial point
// destPoint: transformed point
// RetValue: 0: Success, !=0: Error
// --------------------------------------------------------------
// Notes:
// 1. You must call once computeCoeefficients() to compute
// the this->vc[] vector of 8 coefficients, before you call
// mapSourceToDestPoint().
// 2. If there was an error or the 8 coefficients were not computed,
// a -1 is returned and destPoint is just set to sourcePoint value.
// --------------------------------------------------------------
int genImageProjective::mapSourceToDestPoint(QPointF& sourcePoint, QPointF& destPoint)
{
if (coefficientsComputed)
{
float factor = 1.0f / (vc[6] * sourcePoint.x() + vc[7] * sourcePoint.y() + 1.);
destPoint.setX( factor * (vc[0] * sourcePoint.x() + vc[1] * sourcePoint.y() + vc[2]) );
destPoint.setY( factor * (vc[3] * sourcePoint.x() + vc[4] * sourcePoint.y() + vc[5]) );
return 0;
}
else // There was an error while computing coefficients
{
destPoint = sourcePoint; // just copy the source to destination...
return -1; // ...and return an error
}
}
//========================================
Using Breton's subdivision method (which is related to Mongo's extension method), will get you accurate arbitrary power-of-two divisions. To split into non-power-of-two divisions using those methods you will have to subdivide to sub-pixel spacing, which can be computationally expensive.
However, I believe you may be able to apply a variation of Haga's Theorem (which is used in origami to divide a side into Nths given a side divided into (N-1)ths) to the perspective-square subdivisions to produce arbitrary divisions from the closest power of 2 without having to continue subdividing.
The most elegant and fastest solution would be to find the homography matrix, which maps rectangle coordinates to photo coordinates.
With a decent matrix library it should not be a difficult task, as long as you know your math.
Keywords: Collineation, Homography, Direct Linear Transformation
However, the recursive algorithm above should work too, but if your resources are limited, projective geometry is probably the only way to go.
I think the selected answer is not the best solution available. A better solution is to apply a perspective (projective) transformation of a rectangle onto a simple grid, as the following Matlab script and image show. You can implement this algorithm with C++ and OpenCV as well.
function drawpersgrid
sz = [ 24, 16 ]; % [x y]
srcpt = [ 0 0; sz(1) 0; 0 sz(2); sz(1) sz(2)];
destpt = [ 20 50; 100 60; 0 150; 200 200;];
% make rectangular grid
[X,Y] = meshgrid(0:sz(1),0:sz(2));
% find projective transform matching corner points
tform = maketform('projective',srcpt,destpt);
% apply the projective transform to the grid
[X1,Y1] = tformfwd(tform,X,Y);
hold on;
%% find grid
for i=1:sz(2)
for j=1:sz(1)
x = [ X1(i,j);X1(i,j+1);X1(i+1,j+1);X1(i+1,j);X1(i,j)];
y = [ Y1(i,j);Y1(i,j+1);Y1(i+1,j+1);Y1(i+1,j);Y1(i,j)];
plot(x,y,'b');
end
end
hold off;
In the special case when you look perpendicular to sides 1 and 3, you can divide those sides in equal parts. Then draw a diagonal, and draw parallels to side 1 through each intersection of the diagonal and the dividing lines drawn earlier.
This is a geometric solution I thought out. I do not know whether the 'algorithm' has a name.
Say you want to start by dividing the 'rectangle' into n pieces with vertical lines first.
The goal is to place points P1..Pn-1 on the top line through which we can draw lines to the point where the left and right lines meet, or parallel to those lines when no such point exists.
If the top and bottom lines are parallel to each other, just place those points so that they split the top line between the corners equidistantly.
Otherwise, place n points Q1..Qn on the left line so that these and the top-left corner are equidistant, and i < j => Qi is closer to the top-left corner than Qj.
To map the Q-points to the top line, find the intersection S of the line from Qn through the top-right corner with the parallel to the left line through the intersection of the top and bottom lines. Now connect S with Q1..Qn-1. The intersections of these new lines with the top line are the wanted P-points.
Do the analogous construction for the horizontal lines.
Given a rotation around the y axis, especially if rotation surfaces are planar, the perspective is generated by vertical gradients. These get progressively closer in perspective. Instead of using diagonals to define four rectangles, which can work given powers of two... define two rectangles, left and right. They'll be higher than wide, eventually, if one continues to divide the surface into narrower vertical segments. This can accommodate surfaces that are not square. If a rotation is around the x axis, then horizontal gradients are needed.
What you need to do is represent it in 3D (world) and then project it down to 2D (screen).
This will require you to use a 4x4 transformation matrix which projects a 4D homogeneous vector down to a 3D homogeneous vector, which you can then convert down to a 2D screen-space vector.
I couldn't find it in Google either, but a good computer graphics book will have the details.
Keywords are projection matrix, projection transformation, affine transformation, homogeneous vector, world space, screen space, perspective transformation, 3D transformation
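For a rough idea of what those keywords boil down to, here is a toy sketch in JavaScript (my own illustration, with a made-up focal length f; a real projection matrix also handles the viewport, the near/far planes, and so on):
// Multiply a 4x4 row-major matrix with a 4D homogeneous vector.
function mulMat4Vec4(m, v) {
    var r = [0, 0, 0, 0];
    for (var i = 0; i < 4; i++)
        for (var j = 0; j < 4; j++)
            r[i] += m[i * 4 + j] * v[j];
    return r;
}
// Very simple pinhole projection: x' = f*x/z, y' = f*y/z (screen offset/scaling omitted).
function project(point, f) {
    var m = [f, 0, 0, 0,
             0, f, 0, 0,
             0, 0, 1, 0,
             0, 0, 1, 0]; // last row copies z into w for the perspective divide
    var h = mulMat4Vec4(m, [point.x, point.y, point.z, 1]);
    return {x: h[0] / h[3], y: h[1] / h[3]};
}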
And by the way, this usually takes a few lectures to explain all of that. So good luck.