I have a loosely connected graph. For every edge in this graph, I know the approximate distance d(v,w) between nodes v and w at positions p(v) and p(w) as a vector in R3, not only as a Euclidean distance. The error is small (let's say < 3%) and the first node is at <0,0,0>.
If there were no errors at all, I could calculate the node positions this way:
set p(first_node) = <0,0,0>
calculate_position(first_node)

calculate_position(v):
    for (v,w) in Edges:
        if p(w) is not set:
            set p(w) = p(v) + d(v,w)
            calculate_position(w)
    for (u,v) in Edges:
        if p(u) is not set:
            set p(u) = p(v) - d(u,v)
            calculate_position(u)
The errors of the distances are not all equal. But to keep things simple, assume the relative error (d(v,w)-d'(v,w))/E(v,w) is N(0,1)-normally distributed. I want to minimize the sum of the squared errors
sum( ((p(v)-p(w)) - d(v,w) )^2/E(v,w)^2 ) for all edges
The graph may have a moderate number of nodes (> 100) but only a few connections between them, and it has been "prefiltered" (split into subgraphs if there is only one connection between these subgraphs).
I have tried a simplistic "physical model" with Hooke's law, but it's slow and unstable. Is there a better algorithm or heuristic for this kind of problem?
This looks like linear regression. Take error terms of the following form, i.e. without squares and split into separate coordinates:
(px(v) - px(w) - dx(v,w))/E(v,w)
(py(v) - py(w) - dy(v,w))/E(v,w)
(pz(v) - pz(w) - dz(v,w))/E(v,w)
If I understood you correctly, you are looking for values px(v), py(v) and pz(v) for all nodes v such that the sum of squares of the above terms is minimized.
You can do this by creating a matrix A and a vector b in the following way: every row corresponds to one equation of the above form, and every column of A corresponds to one variable, i.e. a single coordinate. For n vertices and m edges, the matrix A will have 3m rows (since you separate coordinates) and 3n-3 columns (since you also fix the first node: px(0)=py(0)=pz(0)=0).
The row for (px(v) - px(w) - dx(v,w))/E(v,w) would have an entry 1/E(v,w) in the column for px(v) and an entry -1/E(v,w) in the column for px(w). All other columns would be zero. The corresponding entry in the vector b would be dx(v,w)/E(v,w).
Now solve the normal equations (A^T·A)x = A^T·b, where A^T denotes the transpose of A. The solution vector x will contain the coordinates for your vertices. You can break this into three independent problems, one for each coordinate direction, to keep the size of the linear equation system down.
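For illustration, a minimal Python/SciPy sketch of this setup (names made up; edges follow the sign convention of the error terms above, i.e. p(v) - p(w) ≈ d(v,w), and node 0 is pinned at the origin):

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def solve_positions(n_nodes, edges):
    # edges: list of (v, w, d, E) with d a length-3 numpy array
    m = len(edges)
    positions = np.zeros((n_nodes, 3))
    for axis in range(3):                 # x, y and z are independent problems
        A = lil_matrix((m, n_nodes - 1))  # node 0 is fixed, so n-1 unknowns
        b = np.zeros(m)
        for row, (v, w, d, E) in enumerate(edges):
            if v > 0:
                A[row, v - 1] = 1.0 / E
            if w > 0:
                A[row, w - 1] = -1.0 / E
            b[row] = d[axis] / E
        x = lsqr(A.tocsr(), b)[0]         # least-squares solution of Ax = b
        positions[1:, axis] = x
    return positions

# toy example: three collinear nodes, exact measurements, d = p(v) - p(w)
edges = [(0, 1, np.array([-1.0, 0.0, 0.0]), 0.03),
         (1, 2, np.array([-1.0, 0.0, 0.0]), 0.03)]
print(solve_positions(3, edges))          # rows ~ (0,0,0), (1,0,0), (2,0,0)

Solving with lsqr minimizes ||Ax - b||^2 directly, which avoids explicitly forming A^T·A and is numerically better behaved.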
I have a 2D matrix with boolean values, which is updated highly frequently. I want to choose a 2D index {x, y} within the matrix, and find the nearest element that is "true" in the table, without going through all the elements (the matrix is massive).
For example, if I have the matrix:
0000100
0100000
0000100
0100001
and I choose a coordinate {x1, y1} such as {4, 3}, I want the location of the closest "true" value returned, which in this case is {5, 3} (1-based indexing). The distance between the elements is measured using the standard Pythagorean equation:
distance = sqrt(distX * distX + distY * distY) where distX = x1 - x and distY = y1 - y.
I can go through all the elements in the matrix and keep a list of "true" values and select the one with the shortest distance result, but it's extremely inefficient. What algorithm can I use to reduce search time?
Details: The matrix size is 1920x1080, and around 25 queries will be made every frame. The entire matrix is updated every frame. I am trying to maintain a reasonable framerate; anything more than 7 fps is enough.
If the matrix is always being updated, then there is no need to build an auxiliary structure like a distance transform, Voronoi diagram, etc.
You can just execute a search like BFS (breadth-first search) propagating from the query point. The only difference from the usual BFS is the Euclidean metric. So you can generate (u, v) pairs ordered by (u^2 + v^2) and check the symmetric points shifted by the (+-u,+-v), (+-v,+-u) combinations (four points when u or v is zero, eight points otherwise).
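A minimal Python sketch of that ordered ring search, assuming a 2-D numpy boolean array grid indexed as grid[y, x] and a search cutoff max_radius (both names made up):

import numpy as np

def nearest_true(grid, x, y, max_radius=64):
    h, w = grid.shape
    # offsets with 0 <= v <= u, sorted by squared Euclidean distance
    offsets = sorted(((u, v) for u in range(max_radius + 1)
                             for v in range(u + 1)),
                     key=lambda p: p[0] ** 2 + p[1] ** 2)
    for u, v in offsets:
        # expand (u, v) into its symmetric images (+-u, +-v) and (+-v, +-u)
        candidates = {(x + du, y + dv)
                      for a, b in ((u, v), (v, u))
                      for du in (a, -a) for dv in (b, -b)}
        for cx, cy in candidates:
            if 0 <= cx < w and 0 <= cy < h and grid[cy, cx]:
                return cx, cy
    return None  # nothing found within max_radius

grid = np.zeros((4, 7), dtype=bool)
grid[0, 4] = grid[1, 1] = grid[2, 4] = grid[3, 1] = grid[3, 6] = True
print(nearest_true(grid, 4, 3))  # (4, 2) in 0-based indexing

Offsets are visited in order of increasing u^2 + v^2, so the first hit is guaranteed to be a nearest true cell; with 0-based indexing the example returns (4, 2), which matches the question's {5, 3} under 1-based indexing.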
You could use a tree data structure like a quad-tree (see https://en.wikipedia.org/wiki/Quadtree) to store all locations with value "true". In this way it should be possible to quickly iterate over all "true" values in the neighborhood of a given location. Furthermore, the tree can be updated in logarithmic time, if the value of a location changes.
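The answer above proposes a quadtree with incremental updates; as a quick stand-in for experiments (rebuilding the index each frame instead of updating it incrementally), scipy's cKDTree answers the same nearest-true query:

import numpy as np
from scipy.spatial import cKDTree

grid = np.zeros((1080, 1920), dtype=bool)
grid[500, 900] = grid[10, 10] = True

true_cells = np.argwhere(grid)      # (row, col) coordinates of all true cells
tree = cKDTree(true_cells)          # rebuilt once per frame
dist, idx = tree.query([(3, 4)])    # query point, also in (row, col) order
print(true_cells[idx[0]], dist[0])  # nearest true cell and its distance

Whether a per-frame rebuild is fast enough for a 1920x1080 grid depends on how many cells are true; the quadtree's logarithmic updates avoid that cost.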
Given a bunch of arbitrary vectors (stored in a matrix A) and a radius r, I'd like to find all integer-valued linear combinations of those vectors which land inside a sphere of radius r. I would then store the necessary coordinates in a matrix V. So, for instance, if the linear combination
K=[0; 1; 0]
lands inside my sphere, i.e. something like
if norm(A*K) <= r then
V(:,1)=K
end
etc.
The vectors in A are sure to be the simplest possible basis for the given lattice, and the largest vector will have length 1. I'm not sure if that restricts the vectors in any useful way, but I suspect it might: they won't have directions as similar to one another as a less ideal basis would have.
I tried a few approaches already but none of them seem particularly satisfying. I can't seem to find a nice pattern to traverse the lattice.
My current approach involves starting in the middle (i.e. with the linear combination of all 0s) and going through the necessary coordinates one by one. It involves storing a bunch of extra vectors to keep track of, so I can go through all the octants (in the 3D case) of the coordinates and find them one by one. This implementation seems awfully complex and not very flexible; in particular, it doesn't seem easily generalizable to an arbitrary number of dimensions (although that isn't strictly necessary for the current purpose, it'd be a nice-to-have).
Is there a nice* way to find all the required points?
(*Ideally both efficient and elegant**. If REALLY necessary, it wouldn't matter THAT much to have a few extra points outside the sphere, but preferably not that many more. I definitely do need all the vectors inside the sphere. If it makes a large difference, I'm most interested in the 3D case.)
**I'm pretty sure my current implementation is neither.)
Similar questions I found:
Find all points in sphere of radius r around arbitrary coordinate - this is actually a much more general case than what I'm looking for. I am only dealing with periodic lattices and my sphere is always centered at 0, coinciding with one point on the lattice.
Also, I don't have a list of points, but rather a matrix of vectors with which I can generate all the points.
How to efficiently enumerate all points of sphere in n-dimensional grid - the case for a completely regular hypercubic lattice and the Manhattan distance. I'm looking for completely arbitrary lattices and Euclidean distance (or, for efficiency purposes, obviously the square of that).
Offhand, without proving any assertions, I think that 1) if the set of vectors is not of maximal rank then the number of solutions is infinite; 2) if the set is of maximal rank, then the image of the linear transformation generated by the vectors is a subspace (e.g., plane) of the target space, which intersects the sphere in a lower-dimensional sphere; 3) it follows that you can reduce the problem to a 1-1 linear transformation (kxk matrix on a k-dimensional space); 4) since the matrix is invertible, you can "pull back" the sphere to an ellipsoid in the space containing the lattice points, and as a bonus you get a nice geometric description of the ellipsoid (principal axis theorem); 5) your problem now becomes exactly one of determining the lattice points inside the ellipsoid.
The latter problem is related to an old problem (counting the lattice points inside an ellipse) which was considered by Gauss, who derived a good approximation. Determining the lattice points inside an ellipse(oid) is probably not such a tidy problem, but it probably can be reduced one dimension at a time (the cross-section of an ellipsoid and a plane is another ellipsoid).
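To make point 4 concrete (my elaboration, in the question's notation): for a full-rank square A, the condition norm(A*K) <= r is equivalent to K'*(A'*A)*K <= r^2, so K ranges over the lattice points inside the ellipsoid defined by the positive-definite matrix A'*A; its principal axes are the eigenvectors of A'*A, with semi-axis lengths r/sqrt(lambda_i) for the corresponding eigenvalues lambda_i.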
I found a method that makes me a lot happier for now. There may still be possible improvements, so if you have a better method or find an error in this code, definitely please share. Here is what I have for now (all written in Scilab):
Step 1: Figure out the maximal ranges as defined by a bounding n-parallelotope aligned with the axes of the lattice vectors. Thanks to ElKamina for the vague suggestion, as well as to chappers for this reply to another of my questions over on math.se: https://math.stackexchange.com/a/1230160/49989
function I=findMaxComponents(A,r) //given a matrix A of lattice basis vectors
//and a sphere radius r,
//find the corners of the bounding parallelotope
//built from the lattice, and store it in I.
[dims,vecs]=size(A); //figure out how many vectors there are in A (and, unnecessarily, how long they are)
U=eye(vecs,vecs); //builds matching unit matrix
iATA=pinv(A'*A); //finds the (pseudo-)inverse of A^T A
iAT=pinv(A'); //finds the (pseudo-)inverse of A^T
I=[]; //initializes I as an empty vector
for i=1:vecs //for each lattice vector,
t=r*(iATA*U(:,i))/norm(iAT*U(:,i)) //find the maximum component such that
//it fits in the bounding n-parallelotope
//of a (n-1)-sphere of radius r
I=[I,t(i)]; //and append it to I
end
I=[-I;I]; //also append the minima (by symmetry, the negative maxima)
endfunction
In my question I only asked for a general basis, i.e., for n dimensions, a set of n arbitrary but linearly independent vectors. The above code, by virtue of using the pseudo-inverse, works for matrices of arbitrary shapes; similarly, Scilab's A' returns the conjugate transpose rather than just the transpose of A, so it should equally work for complex matrices.
In the last step I append the corresponding minimal components (by symmetry, the negatives of the maxima).
For one such A as an example, this gives me the following in Scilab's console:
A =
0.9701425 - 0.2425356 0.
0.2425356 0.4850713 0.7276069
0.2425356 0.7276069 - 0.2425356
r=3;
I=findMaxComponents(A,r)
I =
- 2.9494438 - 3.4186986 - 4.0826424
2.9494438 3.4186986 4.0826424
I=int(I)
I =
- 2. - 3. - 4.
2. 3. 4.
The values found by findMaxComponents are the largest possible coefficients of each lattice vector such that a linear combination with that coefficient still lands on the sphere. Since I'm looking for the largest such combinations with integer coefficients, I can safely drop the part after the decimal point to get the maximal plausible integer ranges. So for the given matrix A, I'll have to go from -2 to 2 in the first component, from -3 to 3 in the second and from -4 to 4 in the third, and I'm sure to land on all the points inside the sphere (plus superfluous extra points, but importantly definitely every valid point inside). Next up:
Step 2: using the above information, generate all the candidate combinations.
function K=findAllCombinations(I) //takes a matrix of the form produced by
//findMaxComponents() and returns a matrix
//which lists all the integer linear combinations
//in the respective ranges.
v=I(1,:); //starting from the minimal vector
K=[];
next=1; //keeps track of what component to advance next
changed=%F; //keeps track of whether to add the vector to the output
while or(v~=I(2,:)) //as long as not all components of v match all components of the maximum vector
if v <= I(2,:) then //if each current component is smaller than each largest possible component
if ~changed then
K=[K;v]; //store the vector and
end
v(next)=v(next)+1; //advance the component by 1
next=1; //also reset next to 1
changed=%F;
else
v(1:next)=I(1,1:next); //reset all components smaller than or equal to the current one and
next=next+1; //advance the next larger component next time
changed=%T;
end
end
K=[K;I(2,:)]'; //while loop ends a single iteration early so add the maximal vector too
//also transpose K to fit better with the other functions
endfunction
So now that I have that, all that remains is to check whether a given combination actually does lie inside or outside the sphere. All I gotta do for that is:
Step 3: Filter the combinations to find the actually valid lattice points
function points=generatePoints(A,K,r)
possiblePoints=A*K; //explicitly generates all the possible points
points=[];
for i=possiblePoints
if i'*i<=r*r then //filter those that are too far from the origin
points=[points i];
end
end
endfunction
And I get all the combinations that actually do fit inside the sphere of radius r.
For the above example, the output is rather long: of the originally 315 possible points for a sphere of radius 3, I get 163 remaining points.
The first 4 are: (each column is one)
- 0.2425356 0.2425356 1.2126781 - 0.9701425
- 2.4253563 - 2.6678919 - 2.4253563 - 2.4253563
1.6977494 0. 0.2425356 0.4850713
So the remainder of the work is optimization. Presumably some of those loops could be made faster, and, especially as the number of dimensions goes up, I have to generate an awful lot of points which I then have to discard, so maybe there is a better way than taking the bounding n-parallelotope of the (n-1)-sphere as a starting point.
Let us just represent K as X.
The problem can be represented as:
(a11*x1 + a12*x2 + ...)^2 + (a21*x1 + a22*x2 + ...)^2 + ... < r^2
The feasible region in (x1, x2, ...) will not form a sphere.
This can be done with recursion on dimension--pick a lattice hyperplane direction and index all such hyperplanes that intersect the r-radius ball. The ball intersection of each such hyperplane itself is a ball, in one lower dimension. Repeat. Here's the calling function code in Octave:
function lat_points(lat_bas_mx,rr)
% **globals for hyperplane lattice point recursive function**
clear global; % this seems necessary/important between runs of this function
global MLB;
global NN_hat;
global NN_len;
global INP; % matrix of interior points, each point(vector) a column vector
global ctr; % integer counter, for keeping track of lattice point vectors added
% in the pre-allocated INP matrix; will finish iteration with actual # of points found
ctr = 0; % counts number of ball-interior lattice points found
MLB = lat_bas_mx;
ndim = size(MLB)(1);
% **create hyperplane normal vectors for recursion step**
% given full-rank lattice basis matrix MLB (each vector in lattice basis a column),
% form set of normal vectors between successive, nested lattice hyperplanes;
% store them as columnar unit normal vectors in NN_hat matrix and their lengths in NN_len vector
NN_hat = [];
for jj=1:ndim-1
tmp_mx = MLB(:,jj+1:ndim);
tmp_mx = [NN_hat(:,1:jj-1),tmp_mx];
NN_hat(:,jj) = null(tmp_mx'); % null space of transpose = orthogonal to columns
tmp_len = norm(NN_hat(:,jj));
NN_hat(:,jj) = NN_hat(:,jj)/tmp_len;
NN_len(jj) = dot(MLB(:,jj),NN_hat(:,jj));
if (NN_len(jj)<0) % NN_hat(:,jj) and MLB(:,jj) must have positive dot product
% for cutting hyperplane indexing to work correctly
NN_hat(:,jj) = -NN_hat(:,jj);
NN_len(jj) = -NN_len(jj);
endif
endfor
NN_len(ndim) = norm(MLB(:,ndim));
NN_hat(:,ndim) = MLB(:,ndim)/NN_len(ndim); % the lowest recursion level normal
% is just the last lattice basis vector
% **estimate number of interior lattice points, and pre-allocate memory for INP**
vol_ppl = prod(NN_len); % the volume of the ndim dimensional lattice paralellepiped
% is just the product of the NN_len's (they amount to the nested altitudes
% of hyperplane "paralellepipeds")
vol_bll = exp( (ndim/2)*log(pi) + ndim*log(rr) - gammaln(ndim/2+1) ); % volume of ndim ball, radius rr
est_num_pts = ceil(vol_bll/vol_ppl); % estimated number of lattice points in the ball
err_fac = 1.1; % error factor for memory pre-allocation--assume max of err_fac*est_num_pts columns required in INP
INP = zeros(ndim,ceil(err_fac*est_num_pts));
% **call the (recursive) function**
% for output, global variable INP (matrix of interior points)
% stores each valid lattice point (as a column vector)
clp = zeros(ndim,1); % confirmed lattice point (start at origin)
bpt = zeros(ndim,1); % point at center of ball (initially, at origin)
rd = 1; % initial recursion depth must always be 1
hyp_fun(clp,bpt,rr,ndim,rd);
printf("%i lattice points found\n",ctr);
INP = INP(:,1:ctr); % trim excess zeros from pre-allocation (if any)
endfunction
Regarding the NN_len(jj)*NN_hat(:,jj) vectors--they can be viewed as successive (nested) altitudes in the ndim-dimensional "parallelepiped" formed by the vectors in the lattice basis, MLB. The volume of the lattice basis parallelepiped is just prod(NN_len)--for a quick estimate of the number of interior lattice points, divide the volume of the ndim-ball of radius rr by prod(NN_len). Here's the recursive function code:
function hyp_fun(clp,bpt,rr,ndim,rd)
%{
clp = the lattice point we're entering this lattice hyperplane with
bpt = location of center of ball in this hyperplane
rr = radius of ball
rd = recursion depth--from 1 to ndim
%}
global MLB;
global NN_hat;
global NN_len;
global INP;
global ctr;
% hyperplane intersection detection step
nml_hat = NN_hat(:,rd);
nh_comp = dot(clp-bpt,nml_hat);
ix_hi = floor((rr-nh_comp)/NN_len(rd));
ix_lo = ceil((-rr-nh_comp)/NN_len(rd));
if (ix_hi<ix_lo)
return % no hyperplane intersections detected w/ ball;
% get out of this recursion level
endif
hp_ix = [ix_lo:ix_hi]; % indices are created wrt the received reference point
hp_ln = length(hp_ix);
% loop through detected hyperplanes (updated)
if (rd<ndim)
bpt_new_mx = bpt*ones(1,hp_ln) + NN_len(rd)*nml_hat*hp_ix; % an ndim by length(hp_ix) matrix
clp_new_mx = clp*ones(1,hp_ln) + MLB(:,rd)*hp_ix; % an ndim by length(hp_ix) matrix
dd_vec = nh_comp + NN_len(rd)*hp_ix; % a length(hp_ix) row vector
rr_new_vec = sqrt(rr^2-dd_vec.^2);
for jj=1:hp_ln
hyp_fun(clp_new_mx(:,jj),bpt_new_mx(:,jj),rr_new_vec(jj),ndim,rd+1);
endfor
else % rd=ndim--so at deepest level of recursion; record the points on the given 1-dim
% "lattice line" that are inside the ball
INP(:,ctr+1:ctr+hp_ln) = clp + MLB(:,rd)*hp_ix;
ctr += hp_ln;
return
endif
endfunction
This has some Octave-y/Matlab-y things in it, but most should be easily understandable: M(:,jj) references column jj of matrix M; the tick (') means transpose; [A B] concatenates matrices A and B; A=[] declares an empty matrix.
Updated / better optimized from original answer:
"vectorized" the code in the recursive function, to avoid most "for" loops (those slowed it down a factor of ~10; the code now is a bit more difficult to understand though)
pre-allocated memory for the INP matrix-of-interior points (this speeded it up by another order of magnitude; before that, Octave was having to resize the INP matrix for every call to the innermost recursion level--for large matrices/arrays that can really slow things down)
Because this routine was part of a project, I also coded it in Python. From informal testing, the Python version is another 2-3 times faster than this (Octave) version.
For reference, here is the old, much slower code in the original posting of this answer:
% (OLD slower code, using for loops, and constantly resizing
% the INP matrix) loop through detected hyperplanes
if (rd<ndim)
for jj=1:length(hp_ix)
bpt_new = bpt + hp_ix(jj)*NN_len(rd)*nml_hat;
clp_new = clp + hp_ix(jj)*MLB(:,rd);
dd = nh_comp + hp_ix(jj)*NN_len(rd);
rr_new = sqrt(rr^2-dd^2);
hyp_fun(clp_new,bpt_new,rr_new,ndim,rd+1);
endfor
else % rd=ndim--so at deepest level of recursion; record the points on the given 1-dim
% "lattice line" that are inside the ball
for jj=1:length(hp_ix)
clp_new = clp + hp_ix(jj)*MLB(:,rd);
INP = [INP clp_new];
endfor
return
endif
I have a few Cartesian points of the form (x, y), where x and y are both non-negative integers. For example: (0,0), (1,1), (0,1).
I need an algorithm to arrange the above points in such a way that going from one point to the next changes either x or y by 1. In other words, I would like to avoid diagonal movement.
So the above-mentioned points would be arranged like: (0,0), (0,1), (1,1).
Similarly, for (0,0), (1,1), (0,2) there is no such arrangement possible.
I am not sure what to call this, but I would call it Manhattan ordering. Can anyone help?
If you are looking for an arrangement that does not repeat vertices:
What you seem to be looking for is a Hamiltonian Path in a Grid Graph.
This is known to be NP-Complete for general Grid Graphs, see Hamilton Paths in Grid Graphs.
So you can probably try your luck with any of the approximate/heuristic/etc algorithms known for Hamiltonian Path/Euclidean Traveling Salesman Problem.
If you are looking for an arrangement that can repeat, but want the minimum possible number of points in the arrangement:
This is again NP-Complete; the above problem can be reduced to it, because the minimum possible walk has n vertices if and only if the graph has a Hamiltonian path.
If you are just looking for some arrangement of points (repeats allowed):
All you need to do is check whether the graph is connected. If it is not connected, there can be no such arrangement.
You can do a depth-first search to figure that out. The depth-first search will also give you such an arrangement when the graph is connected; see the sketch after this answer.
If you want something closer to optimal (but in reasonably fast time), you can probably use approximation algorithms for the Euclidean Traveling Salesman problem.
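A minimal Python sketch of that check-then-walk idea, assuming repeated points are acceptable and that edges connect points at Manhattan distance 1 (all names made up):

def manhattan_walk(points):
    # build adjacency: edges connect points at Manhattan distance 1
    pts = set(points)
    adj = {p: [q for q in ((p[0] + 1, p[1]), (p[0] - 1, p[1]),
                           (p[0], p[1] + 1), (p[0], p[1] - 1)) if q in pts]
           for p in pts}
    start = next(iter(pts))
    seen, walk = {start}, [start]
    def dfs(p):
        for q in adj[p]:
            if q not in seen:
                seen.add(q)
                walk.append(q)       # step forward along an edge
                dfs(q)
                walk.append(p)       # step back so the walk stays contiguous
    dfs(start)
    return walk if len(seen) == len(pts) else None  # None: not connected

print(manhattan_walk([(0, 0), (1, 1), (0, 1)]))  # e.g. [(0,0), (0,1), (1,1), (0,1), (0,0)]
print(manhattan_walk([(0, 0), (1, 1), (0, 2)]))  # None

Recording each point on entry and again on backtrack keeps every pair of consecutive entries exactly one unit apart, at the cost of repeated points.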
You can construct a graph with the vertices being your points, and the edges being the valid steps.
What you're then looking for is a Hamiltonian path for this graph. This, in its general form, is an NP-complete problem, which means there is no known efficient solution (i.e. one that scales well with the number of points). Wikipedia describes a randomized algorithm that is "fast on most graphs" and might be of use:
Start from a random vertex, and continue if there is a neighbor not visited. If there are no more unvisited neighbors, and the path formed isn't Hamiltonian, pick a neighbor uniformly at random, and rotate using that neighbor as a pivot. (That is, add an edge to that neighbor, and remove one of the existing edges from that neighbor so as not to form a loop.) Then, continue the algorithm at the new end of the path.
A more efficient solution might exist for this particular situation, though.
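A rough Python sketch of the quoted rotation heuristic, taking an adjacency dict like the one built in the previous sketch; being a heuristic, the attempt is capped and may fail even when a Hamiltonian path exists:

import random

def ham_path_heuristic(adj, tries=100, max_steps=1000):
    # randomized search with rotations; restart on stagnation
    for _ in range(tries):
        path = [random.choice(list(adj))]
        on_path = {path[0]}
        for _ in range(max_steps):
            if len(path) == len(adj):
                return path                  # every vertex visited exactly once
            end = path[-1]
            fresh = [q for q in adj[end] if q not in on_path]
            if fresh:                        # extend while possible
                nxt = random.choice(fresh)
                path.append(nxt)
                on_path.add(nxt)
            elif not adj[end]:
                break                        # isolated endpoint: restart
            else:                            # rotate: pick a neighbor on the
                pivot = random.choice(adj[end])   # path as pivot and reverse
                i = path.index(pivot)             # the tail after it
                path[i + 1:] = reversed(path[i + 1:])
    return None

On the three-point example above, the random restarts make success overwhelmingly likely; on a graph with no edges at all (such as (0,0), (1,1), (0,2)) every attempt dead-ends and it returns None.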
Think of it as a graph where each node has at most four edges. Then do a depth/breadth-first search.
This could be simplified as minimizing the distance between each consecutive point. Going from (0,0) to (0,1) is simply 1 unit, but going from (0,0) to (1,1) is actually sqrt(2). So if you arrange the points into a graph, and then perform a simple minimum-total-distance traversal (traveling salesman), it should arrange them correctly.
Edit: If you never want a step that would be larger than 1, simply do not add any edges that are greater than 1. The traversal will still work correctly, and ignore any paths that require a movement > 1.
Edit 2: To clarify further, you can use any edge selection algorithm you wish. If you're ok with it moving 2 spaces, as long as the space is not diagonal, then you may choose to put an edge between (0,2) and (0,4). The minimum distance algorithm will still work. Simply place the edges in a smart way, and you can use any number of selection criteria to determine the outcome.
One way to do this is to create two sorted lists of the coordinates. One sorted by x and one sorted by y. Then, recursively find a solution.
Code coming (not sure what language yet; maybe pseudocode?)... Edit - never mind, since my answer isn't as good as some of the others anyway.
How about a brute force recursive REXX routine... Try all possible paths and print those that work out.
/* rexx */
point. = 0 /* Boolean for each existing point */
say 'Enter origin x and y coordinate:'
pull xo yo
point.xo.yo = 1 /* Point exists... */
say 'Enter destination x and y coordinate:'
pull xd yd
point.xd.yd = 1 /* Point exists... */
say 'Enter remaining x and y coordinates, one pair per line:'
do forever
pull x y
if x = '' then leave
point.x.y = 1
end
path = ''
call findpath xo yo path
say 'All possible paths have been displayed'
return
findpath: procedure expose point. xd yd
arg x y path
if \point.x.y then return /* no such point */
newpoint = '(' || x y || ')'
if pos(newpoint, path) > 0 then return /* abandon on cycle */
if x = xd & y = yd then /* found a path */
say path newpoint
else do /* keep searching */
call findpath x+1 y path newpoint
call findpath x-1 y path newpoint
call findpath x y+1 path newpoint
call findpath x y-1 path newpoint
end
return
Example session:
Path.rex
Enter origin x and y coordinate:
0 0
Enter destination x and y coordinate:
2 2
Enter remaining x and y coordinates, one pair per line:
0 1
1 1
2 1
1 2
(0 0) (0 1) (1 1) (2 1) (2 2)
(0 0) (0 1) (1 1) (1 2) (2 2)
All possible paths have been displayed
Don't try this on anything big though - could take a very long time! But then the question never mentioned anything about being an optimal solution.
I'm not sure how to solve the following problem:
I have a triangle with each of the three known vertex positions A, B, C being inaccurate, meaning they can each deviate by up to certain known radii rA, rB, rC in arbitrary directions.
Given such a triangle, I want to calculate how much the difference of two specific edge lengths (for instance, the difference between the lengths of edge a and edge b) may change in the worst case. Is there any elegant mathematical solution to this problem?
The naive way I thought of is calculating all 360^3 angle combinations and measuring the edge differences for each case, which has rather high overhead.
The following image illustrates the solution:
MinMaxEdgeDiff.png http://www.freeimagehosting.net/uploads/b0f0f84635.png
Some points to note:
Line segments AC1 and BC1 represent the largest possible value of |BC| - |AC|, while lines AC2 and BC2 represent the smallest possible value. At C1, the tangent to the circle must bisect the angle made by AC1 and BC1; likewise for C2.
AC1 (when extended via the dashed line) and AC2 both go through A. Likewise, BC1 and BC2 go through B. Any deviation from the center, and the lines would no longer be maximally long or minimally short.
The largest and smallest differences are:
d1 = |BC1| - |AC1| = (|B->C1| + rB) - (|A->C1| - rA)
                  = |B->C1| - |A->C1| + (rA + rB)
d2 = |BC2| - |AC2| = (|B->C2| - rB) - (|A->C2| + rA)
                  = |B->C2| - |A->C2| - (rA + rB)
Thus the variation between the largest and smallest differences is:
d1 - d2 = (|B->C1| - |A->C1|) - (|B->C2| - |A->C2|) + 2*(rA + rB)
The last point hints that the solution can be found by solving from the centers A and B, and then adding the radii rA and rB. Thus the locations of C1 and C2 may be discovered iteratively (and separately, since they are independent of each other) by varying just a single angle around C's bounding circle.
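For illustration, a minimal numeric sketch of that single-angle search in Python (2-D, matching the figure; the positions and radii here are made up):

import numpy as np

A = np.array([0.0, 0.0]); rA = 0.1
B = np.array([4.0, 0.0]); rB = 0.2
C = np.array([1.0, 3.0]); rC = 0.3

# slide the candidate C1/C2 around C's bounding circle
theta = np.linspace(0.0, 2.0 * np.pi, 100000)
Cs = C + rC * np.stack([np.cos(theta), np.sin(theta)], axis=1)
diff = np.linalg.norm(Cs - B, axis=1) - np.linalg.norm(Cs - A, axis=1)

d1 = diff.max() + (rA + rB)  # largest possible |BC| - |AC|
d2 = diff.min() - (rA + rB)  # smallest possible |BC| - |AC|
print(d1, d2, d1 - d2)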
I suspect that there's an analytical solution. It's an interesting problem, but not enough for me to bash my head against this particular task. Sorry. ;-)
I have two line segments: (X1,Y1,Z1)-(X2,Y2,Z2) and (X3,Y3,Z3)-(X4,Y4,Z4).
I am trying to find the shortest distance between the two segments.
I have been looking for a solution for hours, but all of the ones I found seem to work with lines rather than line segments.
Any ideas how to go about this, or any sources of formulae?
I'll answer this in terms of Matlab, but other programming environments can be used. I'll add that this solution is valid for the problem in any number of dimensions (>= 3).
Assume that we have two line segments in space, PQ and RS. Here are a few random sets of points.
> P = randn(1,3)
P =
-0.43256 -1.6656 0.12533
> Q = randn(1,3)
Q =
0.28768 -1.1465 1.1909
> R = randn(1,3)
R =
1.1892 -0.037633 0.32729
> S = randn(1,3)
S =
0.17464 -0.18671 0.72579
The infinite line PQ(u) is easily defined as
PQ(u) = P + u*(Q-P)
Likewise, we have
RS(v) = R + v*(S-R)
Note that for each line, a parameter value of 0 or 1 returns one of the original endpoints: PQ(0) == P, PQ(1) == Q, RS(0) == R, and RS(1) == S.
This way of defining a line parametrically is very useful in many contexts.
Next, imagine we were looking down along line PQ. Can we find the point of smallest distance from the line segment RS to the infinite line PQ? This is most easily done by a projection into the null space of line PQ.
> N = null(P-Q)
N =
-0.37428 -0.76828
0.9078 -0.18927
-0.18927 0.61149
Thus, null(P-Q) is a pair of basis vectors that span the two dimensional subspace orthogonal to the line PQ.
> r = (R-P)*N
r =
0.83265 -1.4306
> s = (S-P)*N
s =
1.0016 -0.37923
Essentially what we have done is to project the segment RS into the 2-dimensional subspace (plane) orthogonal to the line PQ. By subtracting off P (a point on line PQ) to get r and s, we ensure that the infinite line PQ projects onto the origin of this plane.
So really, we have reduced this to finding the minimum distance from the line rs(v) to the origin (0,0) in the projection plane. Recall that the line rs(v) is defined by the parameter v as:
rs(v) = r + v*(s-r)
The normal vector to the line rs(v) will give us what we need. Since we have reduced this to 2 dimensions because the original space was 3-d, we can do it simply. Otherwise, I'd just have used null again. This little trick works in 2-d:
> n = (s - r)*[0 -1;1 0];
> n = n/norm(n);
n is now a vector with unit length. The distance from the infinite line rs(v) to the origin is simple.
> d = dot(n,r)
d =
1.0491
Note that I could also have used s to get the same distance. The actual distance is abs(d), but as it turns out, d was positive here anyway.
> d = dot(n,s)
d =
1.0491
Can we determine v from this? Yes. Recall that the origin is a distance of d units from the line that connects points r and s, along the normal n. Therefore we can write d*n = r + v*(s-r) for some value of the scalar v. Form the dot product of each side of this equation with the vector (s-r), and solve for v.
> v = dot(s-r,d*n-r)/dot(s-r,s-r)
v =
1.2024
This tells us that the closest approach of the line segment rs to the origin happened outside the end points of the line segment. So really the closest point on rs to the origin was the point rs(1) = s.
Backing out from the projection, this tells us that the closest point on line segment RS to the infinite line PQ was the point S.
There is one more step in the analysis to take. What is the closest point on the line segment PQ? Does this point fall inside the line segment, or does it too fall outside the endpoints?
We project the point S onto the line PQ. (This expression for u is easily enough derived from similar logic as I did before. Note here that I've used \ to do the work.)
> u = (Q-P)'\((S - (S*N)*N') - P)'
u =
0.95903
See that u lies in the interval [0,1]. We have solved the problem. The point on line PQ is
> P + u*(Q-P)
ans =
0.25817 -1.1677 1.1473
And, the distance between closest points on the two line segments was
> norm(P + u*(Q-P) - S)
ans =
1.071
Of course, all of this can be compressed into just a few short lines of code. But it helps to expand it all out to gain understanding of how it works.
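For instance, the whole procedure compresses to a few lines of Python/NumPy (same data as above; the clip calls reproduce the endpoint handling, and the sketch assumes the segments are not parallel):

import numpy as np
from scipy.linalg import null_space

P = np.array([-0.43256, -1.6656, 0.12533])
Q = np.array([0.28768, -1.1465, 1.1909])
R = np.array([1.1892, -0.037633, 0.32729])
S = np.array([0.17464, -0.18671, 0.72579])

N = null_space((P - Q).reshape(1, -1))   # plane orthogonal to line PQ
r, s = (R - P) @ N, (S - P) @ N          # project R and S into that plane
n = np.array([-(s - r)[1], (s - r)[0]])  # 2-d normal to the line rs
n /= np.linalg.norm(n)
v = np.clip((s - r) @ ((n @ r) * n - r) / ((s - r) @ (s - r)), 0.0, 1.0)
closest_RS = R + v * (S - R)             # closest point on segment RS
u = np.clip((Q - P) @ (closest_RS - P) / ((Q - P) @ (Q - P)), 0.0, 1.0)
print(np.linalg.norm(P + u * (Q - P) - closest_RS))  # ~1.071, as above

Clipping v to [0,1] reproduces the "use the endpoint" step of the walkthrough: here v clamps from 1.2024 to 1, i.e. the closest point on RS is S.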
One basic approach is the same as computing the shortest distance between 2 lines, with one exception.
If you look at most algorithms for finding the shortest distance between 2 lines, you'll find that it finds the points on each line that are the closest, then computes the distance from them.
The trick to extend this to segments (or rays), is to see if that point is beyond one of the end points of the line, and if so, use the end point instead of the actual closest point on the infinite line.
For a concrete sample, see:
http://softsurfer.com/Archive/algorithm_0106/algorithm_0106.htm
More specifically:
http://softsurfer.com/Archive/algorithm_0106/algorithm_0106.htm#dist3D_Segment_to_Segment()
I would parameterize both line segments to use one parameter each, bounded between 0 and 1, inclusive. Then find the difference between both line functions and use the norm of that difference as the objective function in a constrained optimization problem with the parameters as variables.
So say you have a line from (0,0,0) to (1,0,0) and another from (0,1,0) to (0,0,0) (Yeah, I'm using easy ones). The lines can be parameterized like (1*t,0*t,0*t) where t lies in [0,1] and (0*s,1*s,0*s) where s lies in [0,1], independent of t.
Then you need to minimize ||(1*t, -1*s, 0)||, i.e. sqrt(t^2 + s^2), where t and s lie in [0,1]. That's a pretty simple problem to solve.
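A sketch of that formulation with scipy (the endpoint names a0, a1, b0, b1 are made up; the squared distance is a convex quadratic in (t, s), minimized over the box [0,1] x [0,1]):

import numpy as np
from scipy.optimize import minimize

a0, a1 = np.array([0.0, 0, 0]), np.array([1.0, 0, 0])   # first segment
b0, b1 = np.array([0.0, 1, 0]), np.array([0.0, 0, 0])   # second segment

def sq_dist(p):
    t, s = p
    return np.sum((a0 + t * (a1 - a0) - (b0 + s * (b1 - b0))) ** 2)

res = minimize(sq_dist, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print(res.x, np.sqrt(res.fun))   # optimal (t, s) and the distance (~0 here)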
How about extending the line segments into infinite lines and finding the shortest distance between the two lines. Then find the points on each line that are the end points of the shortest-distance line segment.
If the point for each line is on the original line segment, then you have the answer. If a point for each line is not on the original segment, then the point is one of the original line segments' end points.
Finding the distance between two finite segments by first finding the distance between the two infinite lines and then bounding those lines to the finite segments doesn't always work. For example, try these points:
Q=[5 2 0]
P=[2 2 0]
S=[3 3.25 0]
R=[0 3 0]
Based on the infinite-line approach, the algorithm selects R and P for the distance calculation (distance = 2.2361), but a point somewhere in the middle of R and S is closer to P. Selecting P and [2 3.166] on the R-S segment gives a smaller distance of 1.1666, and even this answer could be improved by a precise calculation, finding the foot of the perpendicular from P onto the R-S line.
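A quick numeric check of this counterexample in NumPy, using the standard perpendicular-foot projection onto the R-S segment:

import numpy as np

P, Q = np.array([2.0, 2, 0]), np.array([5.0, 2, 0])
R, S = np.array([0.0, 3, 0]), np.array([3.0, 3.25, 0])

print(np.linalg.norm(R - P))    # 2.2361, the naive endpoint pairing
v = np.clip((S - R) @ (P - R) / ((S - R) @ (S - R)), 0.0, 1.0)
foot = R + v * (S - R)          # perpendicular foot from P onto segment RS
print(foot, np.linalg.norm(foot - P))   # ~[1.90 3.16 0], distance ~1.163

The exact foot lands near (1.90, 3.16) at distance ~1.163, slightly better than the 1.1666 quoted above, just as the answer anticipated.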
First, find the closest approach Line Segment bridging between their extended lines. Let's call this LineSeg BR.
If BR.endPt1 falls on LS1 and BR.endPt2 falls on LS2, you're done...just calculate the length of BR.
If the bridge BR intersects LS1 but not LS2, use the shorter of these two distances: smallerOf(dist(BR.endPt1, LS2.endPt1), dist(BR.endPt1, LS2.endPt2))
If the bridge BR intersects LS2 but not LS1, use the shorter of these two distances: smallerOf(dist(BR.endPt2, LS1.endPt1), dist(BR.endPt2, LS1.endPt2))
If none of these conditions hold, the closest distance is the closest pairing of endpoints on opposite Line Segs.
This question is the topic of the article On fast computation of distance between line segments by Vladimir J. Lumelsky, 1985. It goes even further by finding not only the minimal Euclidean distance (MinD) but a point on each segment separated by that distance.
The general algorithm is as follows:
Compute the global MinD (global means the distance between two infinite lines containing the segments) and coordinates of both points (bases) of the line of minimum distances, see skew lines; if both bases lie inside the segments, then actual MinD is equal to the global MinD; otherwise, continue.
Compute distances between the endpoints of both segments (a total of four distances).
Compute coordinates of the base points of perpendiculars from the endpoints of one segment onto the other segment; compute the lengths of those perpendiculars whose base points lie inside the corresponding segments (up to four base points, and four distances).
Out of the remaining distances, the smallest is the sought actual MinD.
Altogether, this represents the computation of six points and of nine distances.
The article then describes and proves how to reduce the amount of tests based on the data received in initial steps of the algorithm and how to handle degenerate cases (e.g. equal endpoints of a segment).
C-language implementation by Eric Larsen can be found here, see SegPoints() function.
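For reference, here is a compact NumPy sketch of the clamped-parameter approach; this is not a transcription of Lumelsky's routine or of SegPoints(), just the same idea in its common textbook form (it assumes non-degenerate segments):

import numpy as np

def seg_seg_dist(p1, p2, q1, q2):
    d1, d2, w = p2 - p1, q2 - q1, p1 - q1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    den = a * c - b * b                          # ~0 iff segments are parallel
    u = np.clip((b * e - c * d) / den, 0, 1) if den > 1e-12 else 0.0
    v = np.clip((b * u + e) / c, 0, 1)           # best v for that u, clamped
    u = np.clip((b * v - d) / a, 0, 1)           # re-solve u for the clamped v
    return np.linalg.norm((p1 + u * d1) - (q1 + v * d2))

# the counterexample from the previous answer: ~1.163 rather than 2.2361
print(seg_seg_dist(np.array([2.0, 2, 0]), np.array([5.0, 2, 0]),
                   np.array([0.0, 3, 0]), np.array([3.0, 3.25, 0])))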