Here is an example:
Suppose there are 4 points: A, B, C, and D
Given that Point A is at (0,0):
and the distances:
A to B: 7
A to C: 5
A to D: 9
B to C: 6
B to D: 5
C to D: 7
The goal would be to find a solution to points B(x,y), C(x,y) and D(x,y)
What is an algorithm to find the points (up to 50 of them) given the distances between them?
OK, you have 4 points A, B, C, and D separated from one another such that the distances between each pair of points are AB=7, AC=5, BC=6, AD=9, BD=5, and CD=7. Then Axyz=(0,0,0), Bxyz=(7,0,0), Cxyz=(2.7,4.2,0), Dxyz=(7.5,1.9,4.6) (rounding to the first decimal).
We set point A at the origin Axyz= (0,0,0).
We set point B at x=7,y=0,z=0 Bxyz= (7,0,0).
We find the x coordinate for point C by using the law of cosines:
((AB^2+AC^2-BC^2)/2)/Bx = Cx
((7^2+5^2-6^2)/2)/7=
((49+25-36)/2)/7= 38/14 = 2.714286
We then use the Pythagorean theorem to find Cy:
sqrt(AC^2-Cx^2)=Cy
sqrt(25-7.367347)=4.199
So Cxyz=(2.714,4.199,0)
We find Dx in much the same way we found Cx:
((AB^2+AD^2-BD^2)/2)/Bx =Dx
((49+81-25)/2)/7= 7.5 = Dx
We find Dy by a slightly different formula:
(((AC^2+AD^2-CD^2)/2)-(Cx*Dx))/Cy = Dy
(((25+81-49)/2)-(2.714*7.5))/4.199= 1.94 (approx)
Having found Dx and Dy, we can find Dz by using the Pythagorean theorem:
sqrt(AD^2-Dx^2-Dy^2)=
sqrt(9^2-7.5^2-1.94^2) = 4.58
So Dxyz=(7.5, 1.94, 4.58)
If you have pairwise distances between each of a set of 50 points, then you might need as many as 49 dimensions in order to obtain coordinates for all the points. If A, B, C, D, and E are all separated by 10 length units from every other point, then you need 4 spatial dimensions; if you introduce another point (F) which is also equidistant from all the other points, then you need 5 dimensions. The algorithm works the same no matter how many dimensions are necessary (and in fact it works best when the maximum number of dimensions IS required). The algorithm also works when the distances violate the triangle inequality - for example, if AB=3, AC=4, and BC=13, the coordinates are A=(0,0); B=(3,0); and C=(-24, 23.66i). If the triangle inequality is violated, then some of the coordinates will simply be imaginary-valued. No big deal.
In general for point G, the coordinates (x1st, x2nd, x3rd, x4th, x5th, and x6th) can be found as follows:
G1st=((AB^2+AG^2-BG^2)/2)/(B1st)
G2nd=(((AC^2+AG^2-CG^2)/2)-(C1st*G1st))/(C2nd)
G3rd=(((AD^2+AG^2-DG^2)/2)-(D1st*G1st)-(D2nd*G2nd))/(D3rd)
G4th=(((AE^2+AG^2-EG^2)/2)-(E1st*G1st)-(E2nd*G2nd)-(E3rd*G3rd))/(E4th)
G5th=(((AF^2+AG^2-FG^2)/2)-(F1st*G1st)-(F2nd*G2nd)-(F3rd*G3rd)-(F4th*G4th))/(F5th)
G6th=sqrt(AG^2-G1st^2-G2nd^2-G3rd^2-G4th^2-G5th^2)
For the 5th point you find the first three coordinates with law-of-cosines calculations and the 4th coordinate with a Pythagorean-theorem calculation. For the 6th point you find the first 4 coordinates with 4 law-of-cosines calculations and then obtain the final coordinate with a Pythagorean-theorem calculation. For the 50th point, you find the first 48 coordinates with 48 law-of-cosines calculations and the 49th coordinate with a Pythagorean-theorem calculation. So for 50 points, there are 48 Pythagorean-theorem calculations altogether (one each for points 3 through 50) plus 1+2+...+48 = 1176 law-of-cosines calculations.
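As a concrete sketch of this procedure in Python (the function name `embed` and the distance-matrix input format are my own choices, not part of the original description; complex math is used so that triangle-inequality violations come out imaginary, as described above):

```python
import cmath

def embed(d):
    """Place n points in up to (n-1)-dimensional space given the full
    pairwise distance matrix d, using law-of-cosines steps for all but the
    last coordinate of each point and a Pythagorean step for the last."""
    n = len(d)
    dim = max(n - 1, 1)
    pts = [[0.0] * dim for _ in range(n)]       # point 0 (A) is the origin
    if n > 1:
        pts[1][0] = d[0][1]                     # point 1 (B) on the first axis
    for k in range(2, n):
        for j in range(k - 1):                  # law-of-cosines coordinates
            ref = pts[j + 1]                    # point whose last nonzero axis is j
            s = (d[0][j + 1] ** 2 + d[0][k] ** 2 - d[j + 1][k] ** 2) / 2.0
            s -= sum(ref[m] * pts[k][m] for m in range(j))
            pts[k][j] = s / ref[j]
        rem = d[0][k] ** 2 - sum(c * c for c in pts[k][:k - 1])
        pts[k][k - 1] = cmath.sqrt(rem)         # Pythagorean step
        if abs(pts[k][k - 1]) < 1e-9:           # the "deliberate error" trick:
            pts[k][k - 1] = 0.01                # avoid an exact zero that would
                                                # later divide a law-of-cosines step
    return pts
```

Running `embed([[0,7,5,9],[7,0,6,5],[5,6,0,7],[9,5,7,0]])` reproduces the worked example: C ≈ (2.714, 4.199, 0) and D ≈ (7.5, 1.939, 4.581).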
The algorithm is fairly straightforward:
A is always set at the origin and B is set at x=AB (or rather B1st=AB)
C1st is found by using the law of cosines ((AB^2+AC^2-BC^2)/2)/(B1st)
C2nd is then found with pythagorean theorem (sqrt(AC^2-C1st^2))
BUT WHAT IF C2nd = 0? This is not necessarily a problem, but it can become a problem for finding D2nd, D3rd, E2nd, E3rd, E4th, etc.
If AB=4, AC=8, BC=4, then we will obtain A (0,0), B (4,0), and C (8,0). If AD=4, BD=8, and CD=12, then there will be no problem for finding coordinates for D which would be D (-4,0).
However, if CD is not equal to 12, then we WILL have a problem. For instance, if CD=5, then we might find that we should go back and calculate coordinates for the points in a different order, such as ACDB; that way we get A=(0,0,0); C=(8,0,0); D=(3.44,2.04,0); and B=(4,-14.55,14.55i). This is a fairly intuitive solution, but it interrupts the flow of the algorithm because we have to go backwards and start over in a different order.
Another solution to the problem, which does not require interrupting the flow of computations, is to deliberately introduce an error whenever a Pythagorean-theorem calculation gives us a zero: instead of a zero, put 0.1 or 0.01 as the C2nd coordinate. This allows one to proceed with calculating coordinates for the remaining points without interruption, and the accuracy of the final results will suffer only a little (truth be told, the algorithm is subject to cumulative rounding errors anyhow, so it's no big deal). The deliberate introduction of error is also the only way to obtain a solution at all in some cases:
Consider once again 4 points A, B, C, and D with distances such that AB=4, AC=8, BC=4, AD=4, BD=8, and CD=4 (we previously had CD=12, and then CD=5). When CD=4, there IS NO exact solution no matter what order you calculate the points in. Go ahead and try.
A=(0,0,0), B=(4,0,0), C=(8,0,0)... If you introduce an error at C2nd so that instead of zero you put 0.1, such that C=(8,0.1,0), then you can obtain a solution for point D's coordinates: D=(-4,640,640i). If you introduce a smaller error for C2nd, such that C=(8,0.01,0), then you get D=(-4,6400,6400i). As C2nd gets closer and closer to zero, D2nd and D3rd just get farther and farther away along the same direction. A similar result sometimes occurs when the distance between two points is close to zero. The algorithm of course will not work with a distance that is actually equal to zero, such as with AB=5, AC=8, and BC=0. But it will work with BC=0.000001.
Anyway, I think this has answered the question you asked a year ago.
Most of the implementations of the algorithm to find the closest pair of points in the plane that I've seen online have one of two deficiencies: either they fail to achieve an O(n log n) runtime, or they fail to accommodate the case where some points share an x-coordinate. Is a hash map (or equivalent) required to solve this problem optimally?
Roughly, the algorithm in question is (per CLRS Ch. 33.4):
For an array of points P, create additional arrays X and Y such that X contains all points in P, sorted by x-coordinate and Y contains all points in P, sorted by y-coordinate.
Divide the points in half - drop a vertical line so that you split X into two arrays, XL and XR, and divide Y similarly, so that YL contains all points left of the line and YR contains all points right of the line, both sorted by y-coordinate.
Make recursive calls for each half, passing XL and YL to one and XR and YR to the other, and finding the minimum distance, d in each of those halves.
Lastly, determine if there's a pair with one point on the left and one point on the right of the dividing line with distance smaller than d; through a geometric argument, we find that we can adopt the strategy of just searching through the next 7 points for every point within distance d of the dividing line, meaning the recombination of the divided subproblems is only an O(n) step (even if it looks O(n²) at first glance).
This has some tricky edge cases. One way people deal with this is sorting the strip of points of distance d from the dividing line at every recombination step (e.g. here), but this is known to result in an O(n log² n) solution.
Another way people deal with edge cases is by assuming each point has a distinct x-coordinate (e.g. here): note the snippet in closestUtil which adds to Pyl (or YL as we call it) if the x-coordinate of a point in Y is <= the line, or to Pyr (YR) otherwise. Note that if all points lie on the same vertical line, this would result in us writing past the end of the array in C++, as we write all n points to YL.
So the tricky bit when points can have the same x-coordinate is dividing the points in Y into YL and YR depending on whether a point p in Y is in XL or XR. The pseudocode in CLRS for this is (edited slightly for brevity):
for i = 1 to Y.length
    if Y[i] in X_L
        Y_L.length = Y_L.length + 1
        Y_L[Y_L.length] = Y[i]
    else
        Y_R.length = Y_R.length + 1
        Y_R[Y_R.length] = Y[i]
However, outside of pseudocode, if we're working with plain arrays, we don't have a magic function that can determine whether Y[i] is in X_L in O(1) time. If we're assured that all x-coordinates are distinct, sure - we know that anything with an x-coordinate less than the dividing line is in XL, so with one comparison we know which array to partition any point p in Y into. But in the case where x-coordinates are not necessarily distinct (e.g. where they all lie on the same vertical line), do we require a hash map to determine whether a point in Y is in XL or XR and successfully break down Y into YL and YR in O(n) time? Or is there another strategy?
Yes, there are at least two approaches that work here.
The first, as Bing Wang suggests, is to apply a rotation. If the angle is sufficiently small, this amounts to breaking ties by y coordinate after comparing by x, no other math needed.
The second is to adjust the algorithm on G4G to use a linear-time partitioning algorithm to divide the instance, and a linear-time sorted merge to conquer it. Presumably this was not done because the author valued the simplicity of sorting relative to the previously mentioned algorithms in most programming languages.
Kleinberg & Tardos suggest annotating each point with its position (index) in X.
You could do this in O(n) time, or, if you really, really want to, you could do it "for free" during the sorting operation.
With this annotation, you can do your O(1) partitioning test: take the position pr of the rightmost point in XL in O(1), and use it to determine whether a point in Y goes in YL (position <= pr) or YR (position > pr). This does not require an extra data structure like a hash map, but it does require that the same positions are used in X and Y.
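As a sketch of that annotation trick in Python (points carried as (x, y, idx) triples, where idx is the precomputed position in the x-sorted array X; this representation is my choice, not from the book):

```python
def split_y(Y, pr):
    """Partition Y (points sorted by y) into YL and YR in O(n).
    Each point is a triple (x, y, idx), idx being its position in the
    x-sorted array X; pr is the position of the rightmost point in XL."""
    YL, YR = [], []
    for p in Y:
        (YL if p[2] <= pr else YR).append(p)
    return YL, YR
```

Because the test uses the index rather than the x-coordinate, it works even when every point lies on the same vertical line.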
NB:
It is not immediately obvious to me that the partitioning of Y is the only problem that arises when multiple points share the same x-coordinate. It seems to me that the proof of linearity of the comparisons necessary across partitions breaks, but I have seen only the proof that you need at most 15 comparisons, not the proof for the stricter 7-point version, so I cannot be sure.
Recently I attended a programming competition. There was a problem from it that I am still mulling over. The programming language does not matter, but I wrote my solution in C++. The task was this:
As you already know, Flatland is located on the plane. There are n cities in Flatland; the i-th of these cities is located at the point (xi, yi), and ai citizens live in it. The king of Flatland has decided to divide the kingdom between his two sons. He wants to build a wall in the form of an infinite straight line; each of the parts will be ruled by one of the sons. The wall cannot pass through any city. To avoid envy between the brothers, the populations of the two parts must be as close as possible: formally, if a and b are the total numbers of citizens living in cities of the first and the second part respectively, the value of |a - b| must be minimized. Help the king find the optimal division. The number of cities is less than 1000, and all coordinates are integers. The output of the algorithm should be the integer minimal value of |a - b|.
Okay, if I knew the direction of the line, it would be a really easy task - binary search:
I don't want code, I want ideas because I don't have any. If I catch idea I can write code!
I don't know the optimal direction, but I think it could be found somehow. So can it be found, or is this task solved another way?
An example where the horizontal/vertical line is not optimal:
1
\
\
2 \ 1
The Ansatz
A brute force method would be to check all possible divisions...
First, it should be noted that the exact orientation of the line does not matter: it can always be shifted by small amounts, and there are cases with more than one minimum. What matters is which cities go to which side of the kingdom. Even when simply trying all such possible combinations, it is not trivial to enumerate them. To do so, I propose the following algorithm:
How to find all possible divisions
For each pair of cities x and y, the line connecting them, divides the kingdom in "left" and "right". Then consider all possible combinations of left, right, x and y:
left + x + y vs right (C)
left + x vs right + y (A)
left + y vs right + x (D)
left vs right + x + y (B)
Actually, I am not 100% sure, but I think in this way you can find all possible divisions with a finite number of trials. As the cities have no size (I assumed radius 0), the line connecting x and y can be shifted slightly to include either city on either side of the kingdom.
One counterexample where this simple method will definitely fail is when more than 2 cities lie on a straight line.
Example
This picture illustrates one step of my above algorithm for the example from the OP. x and y are the two cities with 1 inhabitant each. In fact, with this pair of cities one already gets all possible divisions. (However, 3 points are trivial anyhow, as there is no geometrical restriction on which combinations are possible. Interestingly, only starting with 4 points does their location on the plane really matter.)
Colinear points
Following some discussion and fruitful comments, I came to the conclusion that colinear points are not really a problem. One just has to consider these points when evaluating the 4 possible divisions (for each pair of points). E.g., assume that in the above example there is another point at (-1,2). Then this point would lie on the left for A and C and on the right for B and D.
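The enumeration can be sketched in Python as follows (the function `divisions` and the encoding of cities as ((x, y), population) pairs are my own; colinear cities are grouped with the right side here, in the spirit of the remark above):

```python
from itertools import combinations

def divisions(cities):
    """For each pair (x, y), classify the remaining cities as left/right of
    the line through x and y, then try all four assignments of x and y
    themselves; return the minimum |a - b| over every division generated."""
    best = None
    for i, j in combinations(range(len(cities)), 2):
        (xi, yi), ai = cities[i]
        (xj, yj), aj = cities[j]
        left = right = 0
        for k, ((xk, yk), a) in enumerate(cities):
            if k in (i, j):
                continue
            cross = (xj - xi) * (yk - yi) - (yj - yi) * (xk - xi)
            if cross > 0:
                left += a
            else:
                right += a      # colinear cities grouped with the right side
        for dx in (0, ai):      # city x on the right or on the left
            for dy in (0, aj):  # city y on the right or on the left
                diff = abs((left + dx + dy) - (right + (ai - dx) + (aj - dy)))
                best = diff if best is None else min(best, diff)
    return best
```

On the OP's triangle example (weights 1, 2, 1) this finds the perfect split of 2 vs 1+1.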
For each angle A, consider the family of parallel lines which make an angle of A with the x-axis, with special case A=0 corresponding to the family of lines parallel to the X-axis.
Given A, you can use a binary search to find the line in the family which divides the kingdom most nearly equally. So we have a function f from angles to integers, mapping each angle A to the minimum value of |a-b| for lines in the family corresponding to A.
How many angles do we need to try? The situation changes materially only when A is an angle corresponding to a line between two points, an angle which I will call a "jump angle". The function is continuous, and therefore constant, away from jump angles. We have to try jump angles, of which there are about n choose 2, approximately 500,000 at most. We also have to try intervals of angles between jump angles, doubling the size, to 1,000,000 at most.
Instead of angles, it's probably more sensible to use slopes. I just like thinking in terms of angles.
The time complexity of this approach is O(n^2 log n), n^2 for the number of angles, log n for the binary search. If we can learn more about the function f, it may be possible to use a faster method to minimize f than checking every possibility. For example, it seems reasonable that the minimum of f can be found at an angle not equal to a jump angle.
It may also be possible to eliminate the binary search by using the centroid of the cities. We calculate the weighted average
(a1(x1,y1) + a2(x2,y2) + ... + an(xn,yn))/(a1+a2+...+an)
I think that lines balancing the population will pass through that point. (Hmm.) If that's the case, we only have to think about angles.
Case where n is less than 3
The base case is where there are two cities: in that case you simply take the line perpendicular to the line that connects the two cities.
Case with three or more cities
You can discretize the tangent by taking every pair of cities and treating the line that connects them as the direction of the infinite line.
Why this works
If you split the set of cities in two parts, there is at least one half with two or more cities. For that part, there are two points that are the closest to the border. Whether the border passes very close to that line or coincides with it does not matter, because a slightly different tangent will not swap any city (otherwise those cities would not be the closest). Since we try every border, we will eventually generate a border with the given tangent.
Example:
Say you have the following scenario:
1
\
2\ 1
With the numbers showing the values. In this case the two closest points at the border are the one at the top and the right. So we construct a line that points 45 degrees downwards. Now we use binary search to find the most optimal split: we rotate all points, order them by ascending rotated x-value, then perform binary search on the weights. The optimal one is to split it between the origin and the two other points.
Now with four points:
1 2
2 1
Here we will investigate the following lines:
\ 1\|/2 /
\ /|\ /
----+----
/ \|/ \
/ 2/|\1 \
And this will return either the horizontal or the vertical line.
There is a single edge case - as pointed out by @Nemo - where all the points lie on the same line. In such a case there is no tangent that makes sense. In that case, one can use the perpendicular tangent as well.
Pseudocode:
for v in V
    for w in V\{v}
        calculate the tangent
        for the tangent and the perpendicular tangent
            rotate all points such that the tangent is rotated to the y-axis
            look for a rotated line in the y-direction that splits the cities optimally
return the best split found
Furthermore, as with nearly all geometrical approaches, this method can suffer from the fact that multiple points lie on the same line, in which case adding a small rotation can include or exclude one of the points. This is indeed a dirty hack for the problem.
This Haskell program calculates the "optimal direction" (if the above solution is correct) for a given list of points:
import Data.List
type Point = (Int,Int)
type WPoint = (Point,Int)
type Direction = Point
dirmul :: Direction -> WPoint -> Int
dirmul (dx,dy) ((xa,ya),_) = xa*dx+ya*dy
dirCompare :: Direction -> WPoint -> WPoint -> Ordering
dirCompare d pa pb = compare (dirmul d pa) (dirmul d pb)
optimalSplit :: [WPoint] -> Direction
optimalSplit pts = (-dy,dx)
where wsum = sum $ map snd pts
(dx,dy) = argmin (bestSplit pts wsum) $ concat [splits pa pb | pa <- pts, pb <- pts, pa /= pb]
splits :: WPoint -> WPoint -> [Direction]
splits ((xa,ya),_) ((xb,yb),_) = [(xb-xa,yb-ya),(ya-yb,xb-xa)]
bestSplit :: [WPoint] -> Int -> Direction -> Int
bestSplit pts wsum d = bestSplitScan cmp ordl 0 wsum
where cmp = dirCompare d
ordl = sortBy cmp pts
bestSplitScan :: ((a,Int) -> (a,Int) -> Ordering) -> [(a,Int)] -> Int -> Int -> Int
bestSplitScan _ [] l r = abs $ l-r
bestSplitScan cmp ((x1,w1):xs) l r = min (abs $ l-r) (bestSplitScan cmp (dropWhile eqf xs) (l+d) (r-d))
where eqf = (==) EQ . cmp (x1,w1)
d = w1+(sum $ map snd $ takeWhile eqf xs)
argmin :: (Ord b) => (a -> b) -> [a] -> a
argmin _ [x] = x
argmin f (x:xs) | (f x) <= f ax = x
| otherwise = ax
where ax = argmin f xs
For instance:
*Main> optimalSplit [((0,0),2),((0,1),1),((1,0),1)]
(-1,1)
*Main> optimalSplit [((0,0),2),((0,1),1),((1,0),1),((1,1),2)]
(-1,0)
So the direction is a line such that if the line moves one unit to the left, it moves one unit to the top as well; this is the first example. For the second case, it picks a line that moves in the x-direction, so it splits horizontally. This algorithm allows only integral points and does not take into account slightly tweaking the line in case several points lie on the same parallel line: those are all in or all out.
[Edit: Bold-faced text is relevant to concerns expressed previously in comments.]
[Edit 2: As I should have pointed out earlier, this answer is a supplement to the earlier answer by tobi303, which gives a similar algorithm. The main purpose was to show that the basic idea of that algorithm is sound and sufficiently general.
Despite minor differences in the details of the algorithms proposed in the two answers, I think a careful reading of the "why it works" section, applied to either algorithm, will show that the algorithm is in fact complete.]
If all the cities are in one straight line (including the case where there are only one or two cities), then the solution is simple. I assume you can detect and solve this case, so the rest of the answer will deal with all other cases.
If there are more than two cities and they are not all collinear, the "brute force" solution is:

for each city X,
    for each city Y where Y is not X,
        construct a directed line that passes through X and then Y.
        Divide the cities into two subsets:
            S1 = all the cities to the left of this line
            S2 = all the other cities (including cities exactly on the line)
        Evaluate the "unfairness" of this division.

Among all subdivisions of cities found in this way, choose the one with the least unfairness. Return the difference. Done.
Note that the line found in this way is not the line that divides the cities "fairly"; it is merely parallel to some such line. If we had to find the actual dividing line, we would have to do a little more work to figure out exactly where to put that parallel line. But the requested return value is merely |a-b|.
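A Python sketch of this brute force (the name `least_unfairness` and the city encoding are my own; it assumes the non-collinear case handled above):

```python
def least_unfairness(cities):
    """For each ordered pair of distinct cities (p, q), split the cities
    into those strictly left of the directed line p->q versus everything
    else (including cities on the line), and track the smallest |a - b|.
    cities: list of ((x, y), population) pairs."""
    total = sum(a for _, a in cities)
    best = total
    for p, _ in cities:
        for q, _ in cities:
            if p == q:
                continue
            left = 0
            for r, a in cities:
                # positive cross product -> r is strictly left of p->q
                cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
                if cross > 0:
                    left += a
            best = min(best, abs(left - (total - left)))
    return best
```

On the four-city square example below (populations 1, 2 on top and 2, 1 on the bottom) this returns 0, matching the horizontal or vertical split.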
Why this works:

Suppose that the line L1 divides the cities in the fairest way possible. There is not a unique line that does this; there will be (mathematically speaking) an infinite number of lines that achieve the same "best" division. But such lines exist, and all we need to suppose is that L1 is one of them.

Let the city A be the closest to L1 on one side of the line and the city B be the closest to L1 on the other side. (If A and B are not uniquely identified, that is, if there are two or more cities on one side of L1 that are tied for "closest to L1", we can set L2 = L1 and skip forward to the procedure for L2, below.)

Consider rotations of L1 in each direction, using the point where L1 crosses the line AB as a pivot point. In at least one direction of rotation, a rotated image of L1 will "hit" one of the other cities, call it C, without touching either A or B. (This follows from the fact that the cities are not all along one line.) At that point, C is closer to the image of L1 than A or B (whichever of those cities is on the same side of the original L1 as C was). The Intermediate Value Theorem tells us that at some point during the rotation, C was exactly as close to the rotated image of L1 as the city A or B, whichever is on the same side of that line.

What this shows is that there is always a line L2 that divides the cities as fairly as possible, such that there are two cities, D and E, on the same side of L2 and tied for "closest city to L2" among all cities on that side of L2.

Now consider two directed lines through D and E: L3, which passes through D and then E, and L4, which passes through E and then D. The cities that are on the other side of L2 from D and E consist either of all the cities to the left of L3, or of all the cities to the left of L4. (Note that this works even if L3 and L4 happen to pass through more than two cities.)

The procedure described before is simply a way to find all possible lines that could be line L3 or line L4 in any execution of this procedure starting from a line L1 that solves the problem. (Note that while there are always infinitely many possible choices of L1, every such L1 results in lines L3 and L4 selected from the finite set of lines that pass through two or more cities.) So the procedure will find the division of cities described by L1, which is the solution to the problem.
I need to superimpose two groups of 3D points on top of each other; i.e. find rotation and translation matrices to minimize the RMSD (root mean square deviation) between their coordinates.
I currently use the Kabsch algorithm, which is not very useful for many of the cases I need to deal with. Kabsch requires an equal number of points in both data sets; plus, it needs to know beforehand which point is going to be aligned with which. For my case, the numbers of points will differ, and I don't care which point corresponds to which in the final alignment, as long as the RMSD is minimized.
So, the algorithm will (presumably) find a 1-1 mapping between the subsets of two point sets such that AFTER rotation&translation, the RMSD is minimized.
I know some algorithms that deal with different number of points, however they all are protein-based, that is, they try to align the backbones together (some continuous segment is aligned with another continuous segment etc), which is not useful for points floating in space, without any connections. (OK, to be clear, some points are connected; but there are points without any connections which I don't want to ignore during superimposition.)
The only algorithm I found is DIP-OVL, in the STRAP software module (open source). I tried the code, but the behaviour seems erratic: sometimes it finds good alignments, and sometimes it can't align a small set of points with itself after a simple X translation.
Does anyone know of an algorithm that deals with such limitations? I'll have at most ~10^2 to ~10^3 points, if performance is an issue.
To be honest, the objective function to use is not very clear. RMSD is defined as the RMS of the distances between corresponding points. If I have two sets with 50 and 100 points, and the algorithm matches only one or a few points within the sets, the resulting RMSD between those few points will be zero, while the overall superposition may not be so great. RMSD between all pairs of points is not a better solution either (I think).
The only thing I can think of is to find the closest point in set X for each point in set Y (so there will be exactly min(|X|,|Y|) matches, e.g. 50 in that case) and calculate the RMSD from those matches. But the distance calculation and bipartite matching seem too computationally complex to call in a batch fashion. Any help in that area will help as well.
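That closest-point objective could be sketched like this (brute force, O(|X|·|Y|); the function name and point encoding are illustrative):

```python
import math

def match_rmsd(X, Y):
    """Pair each point of the smaller set with its nearest neighbour in the
    larger set and return the RMSD over those min(|X|, |Y|) pairs.
    X and Y are lists of 3-tuples."""
    small, big = (X, Y) if len(X) <= len(Y) else (Y, X)
    sq = [min(sum((a[i] - b[i]) ** 2 for i in range(3)) for b in big)
          for a in small]
    return math.sqrt(sum(sq) / len(sq))
```

With a spatial index (k-d tree) the nearest-neighbour queries drop to O(log n) each, which is how ICP-style methods make this step cheap enough to iterate.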
Thanks!
What you describe looks like a "cloud to cloud registration" task. Take a look at http://en.wikipedia.org/wiki/Iterative_closest_point and http://www.willowgarage.com/blog/2011/04/10/modular-components-point-cloud-registration for example. You can play with your data in the open source Point Cloud Library to see if it works for you.
If you know which pairs of points correspond to each other, you can recover the transformation matrix with Linear Least Squares (LLS).
When considering LLS, you normally would want to find an approximation of x in A*x = b. With a transpose, you can solve for A instead of x.
Extend each source and target vector with "1", so they look like ⟨x, y, z, 1⟩.
Equation: A·xᵢ = bᵢ
Extend to multiple vectors: A·X = B
Transpose: (A·X)ᵀ = Bᵀ
Simplify: Xᵀ·Aᵀ = Bᵀ
Substitute P = Xᵀ, Q = Aᵀ and R = Bᵀ. The result is: P·Q = R
Apply the formula for LLS: Q ≈ (Pᵀ·P)⁻¹·Pᵀ·R
Substitute back: Aᵀ ≈ (X·Xᵀ)⁻¹·X·Bᵀ
Solve for A, and simplify: A ≈ B·Xᵀ·(X·Xᵀ)⁻¹
(B·Xᵀ) and (X·Xᵀ) can be computed iteratively by summing up the outer products of the individual vector pairs:
B·Xᵀ = Σᵢ bᵢ·xᵢᵀ
X·Xᵀ = Σᵢ xᵢ·xᵢᵀ
A ≈ (Σᵢ bᵢ·xᵢᵀ)·(Σᵢ xᵢ·xᵢᵀ)⁻¹
No matrix will be bigger than 4×4, so the algorithm does not use any excessive memory.
The result is not necessarily affine, but probably close. With some further processing, you can make it affine.
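A plain-Python sketch of this accumulation (no matrix library; a small Gauss-Jordan routine stands in for the matrix inverse, and all names are my own):

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def gauss_solve(M, B):
    """Solve M·X = B by Gauss-Jordan elimination with partial pivoting.
    M is n x n, B is n x m; both are modified in place."""
    n = len(M)
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        B[c], B[piv] = B[piv], B[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
                B[r] = [a - f * b for a, b in zip(B[r], B[c])]
    return [[B[r][j] / M[r][r] for j in range(len(B[0]))] for r in range(n)]

def fit_affine(src, dst):
    """A ≈ (Σ bᵢxᵢᵀ)·(Σ xᵢxᵢᵀ)⁻¹ with points extended to ⟨x, y, z, 1⟩.
    Returns the 4x4 matrix A with A·xᵢ ≈ bᵢ."""
    XXt = [[0.0] * 4 for _ in range(4)]
    BXt = [[0.0] * 4 for _ in range(4)]
    for s, d in zip(src, dst):
        x, b = list(s) + [1.0], list(d) + [1.0]
        for i in range(4):
            for j in range(4):
                XXt[i][j] += x[i] * x[j]
                BXt[i][j] += b[i] * x[j]
    # A·XXt = BXt  <=>  XXt·Aᵀ = BXtᵀ (XXt is symmetric), so solve for Aᵀ
    return transpose(gauss_solve(XXt, transpose(BXt)))
```

For example, four non-degenerate correspondences related by a pure translation recover the translation matrix exactly (up to float error).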
The best algorithms for discovering alignments through superimposition are Procrustes analysis and Horn's method. Please follow this Stack Overflow link.
I have two sets of three (non-collinear) points, in three dimensions. I know the correspondence between the points - i.e. set 1 is {A, B, C} and set 2 is {A', B', C'}.
I want to find the combination of translation and rotation that will transform A' to A, B' to B, and C' to C. Note: There is no scaling involved. (I know this for certain, although I am curious about how to handle it if it did exist.)
I found what looks like a solid explanation while trying to work out how to do this. Section 2 (page 3) entitled "Three Point Registration" appears to be what I need to do. I understand steps 1 through 4 and 6 through 7 just fine, but 5 has me stumped.
5. Build the rotation matrices for both point sets:
Rl = [xl, yl, zl], Rr = [xr, yr, zr]
How do I do that???
Later I plan to implement a least squares solution, but I want to do this first.
This document appears to have an identical copy of that section, but following it is a worked example. I must admit that it's still not clear to me how the step works, but you may find it clearer than I did.
Update: column 1 of Rl is the x axis constructed earlier ([0,1,0] in terms of the original axes). So I imagine that x, y and z are the axes, as column vectors, which makes sense... and I assume Rr is the same in the other coordinate system.
Is that clear?
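If that reading is right, step 5 might be sketched like this in Python (I am assuming the usual axis construction from the earlier steps: x along p1→p2, z normal to the triangle's plane, y = z × x; the helper names are mine):

```python
def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def unit(a):
    n = sum(c * c for c in a) ** 0.5
    return [c / n for c in a]

def rotation_from_triplet(p1, p2, p3):
    """Build R = [x, y, z]: its columns are the orthonormal axes
    constructed from three non-collinear points."""
    x = unit(sub(p2, p1))
    z = unit(cross(x, sub(p3, p1)))
    y = cross(z, x)                 # completes the right-handed frame
    return [[x[i], y[i], z[i]] for i in range(3)]   # axes as columns
```

Since the columns are orthonormal, R is a proper rotation matrix, and Rr is built the same way from the second point set.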
I'll take a stab at it.
Each point gets an equation: a_1x + b_1y + c_1z = d_1, right? So make two 3x3 matrices of the a, b, c values.
then, since each point is independent of one another, you can solve for the transform between the two matrices, A and A'
T A = A'
After some linear algebra,
T = A' inv(A)
Try it in MATLAB and let us know.
I'm practicing coding problems, and there is one that I am having trouble solving. Could you give me some tips on how to solve it?
The problem is taken from here:
https://www.ieee.org/documents/IEEEXtreme2008_Competitition_book_2.pdf
Problem 12: Cynical Times.
The problem is something like this (but do refer to above link of the source problem, it has a diagram!):
Your task is to find the sequence of points on the map that the bomber is expected to travel such that it hits all vital links. A link from A to B is vital when its absence isolates completely A from B. In other words, the only way to go from A to B (or vice versa) is via that link.
Due to enemy counter-attack, the plane may have to retreat at any moment, so the plane should follow, at each moment, to the closest vital link possible, even if in the end the total distance grows larger.
Given all coordinates (the initial position of the plane and the nodes in the map) and the range R, you have to determine the sequence of positions in which the plane has to drop bombs.
This sequence should start (takeoff) and finish (landing) at the initial position. Except for the start and finish, all the other positions have to fall exactly in a segment of the map (i.e. it should correspond to a point in a non-hit vital link segment).
The coordinate system used will be UTM (Universal Transverse Mercator) northing and easting, which basically corresponds to a Euclidian perspective of the world (X=Easting; Y=Northing).
Input
Each input file will start with three floating point numbers indicating the X0 and Y0 coordinates of the airport and the range R. The second line contains an integer, N, indicating the number of nodes in the road network graph. Then, the next N (<10000) lines will each contain a pair of floating point numbers indicating the Xi and Yi coordinates (1 < i<=N). Notice that the index i becomes the identifier of each node. Finally, the last block starts with an integer M, indicating the number of links. Then the next M (<10000) lines will each have two integers, Ak and Bk (1 < Ak,Bk <=N; 0 < k < M) that correspond to the identifiers of the points that are linked together.
No two links will ever cross with each other.
Output
The program will print the sequence of coordinates (pairs of floating point numbers with exactly one decimal place), each one at a line, in the order that the plane should visit (starting and ending in the airport).
Sample input 1
102.3 553.9 0.2
14
342.2 832.5
596.2 638.5
479.7 991.3
720.4 874.8
744.3 1284.1
1294.6 924.2
1467.5 659.6
1802.6 659.6
1686.2 860.7
1548.6 1111.2
1834.4 1054.8
564.4 1442.8
850.1 1460.5
1294.6 1485.1
17
1 2
1 3
2 4
3 4
4 5
4 6
6 7
7 8
8 9
8 10
9 10
10 11
6 11
5 12
5 13
12 13
13 14
Sample output 1
102.3 553.9
720.4 874.8
850.1 1460.5
102.3 553.9
Pre-process the input first, so you identify the choke points. Algorithms like Floyd-Warshall would help you.
Model the problem as a heuristic search problem; you can compute an MST which covers all choke points and take the sum of the edge costs as a heuristic.
As the commenters said, try to make concrete questions, either here or to the TA supervising your class.
Don't forget to mention where you got these hints.
The problem can be broken down into two parts.
1) Find the vital links.
These are nothing but the bridges in the graph described. See the wiki page (linked in the previous sentence); it mentions an algorithm by Tarjan to find the bridges.
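A compact recursive sketch of that bridge-finding DFS in Python (for graphs near the stated ~10,000-node limit you would raise Python's recursion limit or rewrite it iteratively; the names are mine):

```python
from collections import defaultdict

def bridges(n, edges):
    """Return the bridges of an undirected graph via DFS low-links:
    a tree edge u->v is a bridge iff low[v] > disc[u]."""
    adj = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    disc, low = [0] * n, [0] * n
    timer, out = [1], []

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, idx in adj[u]:
            if idx == parent_edge:
                continue
            if disc[v]:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                            # tree edge
                dfs(v, idx)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    out.append(edges[idx])

    for s in range(n):
        if not disc[s]:
            dfs(s, -1)
    return out
```

For instance, in a triangle 0-1-2 with a pendant vertex 3 attached to node 2, only the edge (2, 3) is a bridge.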
2) Once you have the vital links, you need to find the smallest number of points which, given the radius of the bomb, will cover all the links. For this, for each link you create a region around it where dropping the bomb will destroy it. Now you form a graph of these regions (two regions are adjacent if they intersect). You probably need to find a minimum clique partition of this graph.
Haven't thought it through (especially part 2), but hope it helps.
And good luck in the contest!
I think Moron is right about the first part, but on the second part...
The problem description does not say anything about a "smallest number of points". It says that the plane flies to the closest vital link.
So, I think the part 2 will be much simpler:
Find the closest non-hit segment to the current location.
Travel to the closest point on the closest segment.
Bomb the current location (remove all segments intersecting a circle of radius R).
Repeat until there are no non-hit vital links left.
This straightforward algorithm has a complexity of O(N²), but that should be sufficient given the input constraints.
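Those four steps might be sketched in Python like this (segments given directly as endpoint pairs; `math.dist` requires Python 3.8+; all names are my own):

```python
import math

def closest_point_on_segment(p, a, b):
    """Orthogonal projection of p onto segment ab, clamped to the segment."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    den = dx * dx + dy * dy
    t = 0.0 if den == 0 else max(0.0, min(1.0, ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / den))
    return (a[0] + t * dx, a[1] + t * dy)

def bombing_run(start, segments, R):
    """Greedy loop from the answer above: fly to the nearest point on a
    surviving vital segment, bomb there, remove every segment whose
    closest point to the blast centre is within R, and repeat."""
    pos, route, left = start, [start], list(segments)
    while left:
        best = min((closest_point_on_segment(pos, a, b) for a, b in left),
                   key=lambda q: math.dist(pos, q))
        route.append(best)
        pos = best
        left = [s for s in left
                if math.dist(best, closest_point_on_segment(best, *s)) > R]
    route.append(start)             # return to the airport
    return route
```

Each iteration scans all surviving segments, giving the O(N²) total mentioned above.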