modify distances to violate triangle inequality - algorithm

I would like to test a conjecture that says that the performance of a certain algorithm depends on the degree to which c violates the triangle inequality, where c is the distance matrix of the instance being solved by the algorithm.
My question is, what is a good way to modify the distance matrix c so that I can control the degree to which it violates the triangle inequality?
So far the best I have come up with is something like this: For parameters p and q, randomly choose p% of the elements of c, and for each chosen element, multiply it by 1+u, where u is chosen uniformly from [0,q].
I suspect we can do better. Any ideas?
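The perturbation scheme described in the question might be sketched like this (the function and parameter names are illustrative, not from any library; only the upper triangle is perturbed and then mirrored so the matrix stays symmetric):

```python
import numpy as np

def perturb(c, p, q, rng=None):
    """Multiply a random p% of the off-diagonal entries of the symmetric
    distance matrix c by 1+u with u ~ Uniform[0, q], keeping symmetry.
    (Function and parameter names are illustrative.)"""
    rng = np.random.default_rng() if rng is None else rng
    c = np.array(c, dtype=float)                  # work on a copy
    n = c.shape[0]
    iu, ju = np.triu_indices(n, k=1)              # one index per unordered pair
    k = int(round(len(iu) * p / 100.0))           # how many entries to inflate
    chosen = rng.choice(len(iu), size=k, replace=False)
    c[iu[chosen], ju[chosen]] *= 1.0 + rng.uniform(0.0, q, size=k)
    c[ju[chosen], iu[chosen]] = c[iu[chosen], ju[chosen]]   # restore symmetry
    return c
```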

I would run a number of tests. For each test I would generate two matrices. One completely random but symmetrical. This will almost certainly violate the triangle inequality. The other the distance matrix based on calculating the distances between random points in n-dimensional space, so it obeys the triangle inequality. Now use a weighted average of these two, using a variety of different weights. Repeat with different base matrices.
Just to add something not in the comment: if you consider any triple of points, you can work out the weight, if any, at which the triangle inequality is violated for that triangle, because you are just checking three inequalities whose two sides are affine functions of the weight. Considering all triples isn't great, since it is O(n^3) in the number of points, but the matrices have O(n^2) elements, so you can always claim it's only O(n^1.5) in the amount of input data.
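The weighted-average idea above can be sketched as follows; `mixed_instance` and `violation_fraction` are illustrative names, and the triple check is the plain O(n^3) version:

```python
import numpy as np

def mixed_instance(n, w, dim=3, rng=None):
    """Weighted average of a metric matrix (distances between random points)
    and a random symmetric matrix; w=0 gives a metric, larger w violates
    the triangle inequality more often."""
    rng = np.random.default_rng() if rng is None else rng
    pts = rng.random((n, dim))
    metric = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    r = rng.random((n, n))
    noise = (r + r.T) / 2.0          # symmetrize the random part
    np.fill_diagonal(noise, 0.0)
    return (1.0 - w) * metric + w * noise

def violation_fraction(c):
    """Fraction of ordered triples (i, j, k) with c[i,k] > c[i,j] + c[j,k]."""
    n = c.shape[0]
    viol = total = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if i != j and j != k and i != k:
                    total += 1
                    if c[i, k] > c[i, j] + c[j, k] + 1e-12:
                        viol += 1
    return viol / total
```

Sweeping `w` from 0 to 1 then gives a family of instances whose degree of violation can be measured directly with `violation_fraction`.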

Related

Two salesmen - one always visits the nearest neighbour, the other the farthest

Consider this question relative to graph theory:
Let G be a complete undirected graph on N vertices (every vertex is connected to all the other vertices) with an N x N distance matrix. Two "salesmen" travel this way: the first always visits the nearest unvisited vertex, the second the farthest, until both have visited all the vertices. We must generate a matrix of distances and the starting points for the two salesmen (they can be different) such that:
All the distances are unique (Edit: positive integers).
The distance from a vertex to itself is always 0.
The difference between the total distance covered by the two salesmen must be a specific number, D.
The distance from A to B is equal to the distance from B to A.
What efficient algorithms can be useful here? I can only think of backtracking, but I don't see any way to reduce the work to be done by the program.
Geometry is helpful.
Using the distances between points on a circle seems like it would work, and it seems like you could adjust D by making the circle radius larger or smaller.
Alternatively, really any 2D shape where the distances are all different could probably be used as well. In this case you would scale the shape up or down to obtain the correct D.
Edit: Now that I think about it, the simplest solution may be to simply pick N random 2D points, say with 32-bit integer coordinates to lower the chances of any two distances being too close to equal. If two distances are too close, just pick a different point for one of them until the set is valid.
Ideally, you'd then just need a formula for the relationship between D and the scaling factor, which I'm not sure of offhand. If nothing else, you could use binary search or interpolation search to find the scaling factor that obtains the required D, but that's a slower method.
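Since every distance scales linearly with the scaling factor, the difference between the two greedy tours does too, so no search is even needed for real-valued distances. A sketch under those assumptions (rounding to unique positive integers is left out; all names are illustrative):

```python
import math
import random

def greedy_tour(dist, start, nearest=True):
    """Total length of the greedy tour that always moves to the
    nearest (or farthest) unvisited vertex."""
    unvisited = set(range(len(dist))) - {start}
    total, cur = 0.0, start
    while unvisited:
        pick = min if nearest else max
        nxt = pick(unvisited, key=lambda v: dist[cur][v])
        total += dist[cur][nxt]
        cur = nxt
        unvisited.discard(nxt)
    return total

def scaled_instance(D, n=8, seed=42):
    """Random 2D points, scaled so that (farthest tour) - (nearest tour),
    both started at vertex 0, equals D exactly.  Distances are unique with
    high probability; rounding to unique integers is a separate step."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, 1000), rng.uniform(0, 1000)) for _ in range(n)]
    dist = [[math.dist(p, q) for q in pts] for p in pts]
    diff = greedy_tour(dist, 0, False) - greedy_tour(dist, 0, True)
    scale = D / diff      # distances scale linearly, so the difference does too
    return [[x * scale for x in row] for row in dist]
```

Scaling does not change which vertex the greedy salesmen pick at each step, which is why the difference hits D exactly.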

List all sets of points that are enclosed by a circle with given radius

My problem is: given N points in a plane and a number R, list/enumerate all subsets of the points such that the points in each subset are enclosed by a circle of radius R. Two subsets should be distinct, and neither should contain the other.
Efficiency may not be important, but the algorithm should not be too slow.
In a special case, can we find K subsets with most points? Approximation algorithm can be accepted.
Thanks,
Edit: It seems that the statement is not clear to understand. My bad!
So I restate my question as follows: given N points and a circle with fixed radius R, use the circle to scan the whole space. At any moment the circle covers some subset of the points. The goal is to list all possible subsets of points that can be covered by such an R-radius circle. One subset cannot be a superset of another.
I am not sure I get what you mean by 'not covered'. If you drop this condition, what you are looking for is exactly a Čech complex, whose complexity is high; you won't have an efficient algorithm unless you have conditions on the sampling (the sampling should be sparse enough and R not too big, otherwise you could have 2^n subsets, with n your number of points). You have to enumerate all subsets and check whether their minimal enclosing ball has radius at most R. You can reduce the search to subsets whose diameter is lower than R (i.e. all pairwise distances lower than R), which may be sufficient in your case.
If 'not covered' for two subsets means that one is not included in the other, you can have many different decompositions. One of interest is the alpha-complex, as it can be computed efficiently in O(n log n) in dimensions 2-3 (I would suggest using CGAL to compute it; you can also see what it means with pictures). If your points are high-dimensional, then you will probably end up computing a Čech complex.
Without loss of generality, we can assume that the enclosing circles considered pass through at least two points (ignoring the trivial cases of no points or one point and assuming that your motivation is maximizing density, so that you don't care if non-maximal subsets are omitted). Build a proximity structure (kd-tree, cover tree, etc.) on the input points. For each input point p, use the structure to find all points q such that d(p, q) ≤ 2R. For each point q, there are one or two circles that contain p and q on their boundary. Find their centers by solving some quadratic equations and then look among the other choices of q to determine the subset.
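A brute-force sketch of the two-point-circle idea, without the proximity structure (so O(n^3) rather than output-sensitive; `covered_subsets` is an illustrative name, and singleton/empty subsets are ignored as in the answer):

```python
import math
from itertools import combinations

def covered_subsets(points, R, eps=1e-9):
    """Enumerate maximal subsets coverable by a radius-R circle, assuming
    a maximal enclosing circle can be taken to pass through two points.
    Brute force over pairs; a k-d tree would speed up the range queries."""
    subsets = set()
    for i, j in combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        d = math.dist(points[i], points[j])
        if d > 2 * R or d == 0:
            continue                                  # no radius-R circle through both
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2         # midpoint of the pair
        h = math.sqrt(R * R - (d / 2) ** 2)           # center offset along the bisector
        ux, uy = (y2 - y1) / d, -(x2 - x1) / d        # unit normal to the segment
        for s in (1, -1):                             # the one or two candidate centers
            cx, cy = mx + s * h * ux, my + s * h * uy
            subsets.add(frozenset(k for k, p in enumerate(points)
                                  if math.dist(p, (cx, cy)) <= R + eps))
    # keep only maximal subsets (none strictly contained in another)
    return [s for s in subsets if not any(s < t for t in subsets)]
```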

Sorting multidimensional vectors

If I have a set of k vectors of n dimensions, how can I sort these such that the distance between each consecutive pair of vectors is the minimal possible? The distance can be calculated using the Euclidean distance, but how is the "sorting" then implemented efficiently?
I'm thinking one approach would be to select a vector at random, calculate the distance to all other vectors, pick the vector that minimizes the distance as the next vector and repeat until all vectors have been "sorted". However, this greedy search would probably render different results depending on which vector I start with.
Any ideas on how to do this?
If you really want just 'that the distance between each consecutive pair of vectors is the minimal possible' without randomness, you can first find the 2 closest points (by an O(n log n) closest-pair algorithm) - let's say, p and q - then search for the closest point to p (let's say, r) and to q (let's say, s), then compare the distances (p,r) and (q,s); if the first is smaller, start with q,p,r and use your greedy algo (in the other case, obviously, start with p,q,s).
However, if your goal is actually to arrange the points so that the sum of all consecutive distances is smallest, you should choose an approximation algorithm for the Travelling Salesman Problem. Note this trick in order to reduce your task to TSP.
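The first suggestion (closest pair, then greedy extension) can be sketched like this, with the closest pair found by brute force rather than the O(n log n) algorithm; `order_vectors` is an illustrative name and at least 3 vectors are assumed:

```python
import numpy as np

def order_vectors(vectors):
    """Order vectors: find the closest pair (p, q), find the nearest other
    point to each endpoint, start from the side with the longer next hop,
    then extend greedily with the nearest unused vector."""
    v = np.asarray(vectors, dtype=float)
    n = len(v)
    d = np.linalg.norm(v[:, None] - v[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # ignore self-distances
    p, q = np.unravel_index(np.argmin(d), d.shape)   # closest pair (brute force)

    def nearest(i, used):
        return min((j for j in range(n) if j not in used), key=lambda j: d[i, j])

    r = nearest(p, {p, q})                           # closest to p besides q
    s = nearest(q, {p, q})                           # closest to q besides p
    order = [q, p, r] if d[p, r] < d[q, s] else [p, q, s]
    while len(order) < n:                            # greedy extension
        order.append(nearest(order[-1], set(order)))
    return [int(i) for i in order]
```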

Intersection of line and convex set

Let X be a collection of n points in some moderate-dimensional space - for now, say R^5. Let S be the convex hull of X, let p be a point in S, and let v be any direction. Finally, let L = {p + lambda v : lambda a real number} be the line passing through p in direction v.
I am interested in finding a reasonably efficient algorithm for computing the intersection of S with L. I'd also be interested in hearing if it is known that no such algorithm exists! Note that this intersection can be represented by the (extreme) two points of intersection of L with the boundary of S. I'm particularly interested in finding an algorithm that behaves well when n is large.
I should say that it is easy to do this very efficiently in two dimensions. In that case, one can order the points of X in 'clockwise order' as seen from p, and then do binary search. So, the initial ordering takes O(n log(n)) steps and then further lookups take O(log(n)) steps. I don't see what the analogous algorithm should be in higher dimensions. Part of the problem is that a convex body in two dimensions has n vertices and n faces, while a convex body in 3 or higher dimensions can have n vertices but many, many more than n faces.
You can write a simple linear program for this. You want to minimise/maximise lambda subject to the constraint that p + lambda v lies in the convex hull of your input points. "Lies in the convex hull of" is coordinatewise equality between two points, one of which is a nonnegative weighted average of your input points with weights summing to 1.
As a practical matter, it may be useful to start with a handful of randomly chosen points, get a convex combination or a certificate of infeasibility, then interpret the certificate of infeasibility as a linear inequality and find the input point that most violates it. If you're using a practical solver, this means you want to formulate the dual, switch a bunch of presolve things off, and run the above essentially as a cutting plane method using certificates of unboundedness instead. It is likely that, unless you have pathological data, you will only need to tell the LP solver about a small handful of your input points.
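The LP formulation can be written down directly with `scipy.optimize.linprog` (a sketch, without the cutting-plane refinement; it assumes p really lies in conv(X) so both LPs are feasible, and the function name is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def line_hull_intersection(X, p, v):
    """Extreme lambda values such that p + lambda*v lies in conv(X).
    Variables are (lambda, w_1..w_n); the equalities enforce
    X^T w = p + lambda*v and sum(w) = 1, with w >= 0."""
    X = np.asarray(X, float)
    p = np.asarray(p, float)
    v = np.asarray(v, float)
    n, d = X.shape
    # equality rows: [-v | X^T] @ (lambda, w) = p, and [0 | 1..1] @ (lambda, w) = 1
    A_eq = np.vstack([np.hstack([-v[:, None], X.T]),
                      np.hstack([[0.0], np.ones(n)])])
    b_eq = np.concatenate([p, [1.0]])
    bounds = [(None, None)] + [(0.0, None)] * n      # lambda free, weights >= 0
    lams = []
    for sign in (1.0, -1.0):                         # minimise, then maximise lambda
        c = np.concatenate([[sign], np.zeros(n)])
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        lams.append(res.x[0] if res.success else None)
    return lams[0], lams[1]                          # (lambda_min, lambda_max)
```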

Choose rectangles with maximal intersection area

In this problem r is a fixed positive integer. You are given N rectangles in the plane, all the same size, with sides either vertical or horizontal. We assume the intersection of all N rectangles has non-zero area. The problem is how to find N-r of these rectangles so as to maximize the area of their intersection. This problem arises in practical microscopy when one repeatedly images a given biological specimen, and alignment changes slightly during this process, due to physical reasons (e.g. differential expansion of parts of the microscope and camera). I have expressed the problem for dimension d=2. There is a similar problem for each d>0. For d=1, an O(N log(N)) solution is obtained by sorting the left-hand endpoints of the intervals. But let's stick with d=2. If r=1, one can again solve the problem in time O(N log(N)) by sorting the coordinates of the corners.
So, is the original problem solved by solving first the case (N,1) obtaining N-1 rectangles, then solving the case (N-1,1), getting N-2 rectangles, and so on, until we reduce to N-r rectangles? I would be interested to see an explicit counter-example to this optimistic attempted procedure. It would be even more interesting if the procedure works (proof please!), but that seems over-optimistic.
If r is fixed at some value r>1, and N is large, is this problem in one of the NP classes?
Thanks for any thoughts about this.
David
Since the intersection of axis-aligned rectangles is an axis-aligned rectangle, there are O(N^4) possible intersections (O(N) lefts, O(N) rights, O(N) tops, O(N) bottoms). The obvious O(N^5) algorithm is to try all of these, checking for each whether it's contained in at least N - r rectangles.
An improvement to O(N^3) is to try all O(N^2) intervals in the X dimension and run the 1D algorithm in the Y dimension on those rectangles that contain the given X-interval. (The rectangles need to be sorted only once.)
How large is N? I expect that fancy data structures might lead to an O(N^2 log N) algorithm, but it wouldn't be worth your time if a cubic algorithm suffices.
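For checking any of these algorithms on small instances, a brute-force reference that tries every choice of r rejects might look like this (exponential in r; each rectangle is a hypothetical `(x1, y1, x2, y2)` tuple):

```python
from itertools import combinations

def best_intersection(rects, r):
    """Maximum intersection area over all ways to drop r of the rectangles.
    Brute force for validating faster algorithms on small inputs."""
    best = -1.0
    for drop in combinations(range(len(rects)), r):
        keep = [rects[i] for i in range(len(rects)) if i not in drop]
        x1 = max(a for a, _, _, _ in keep)   # intersection = [max lefts, min rights]
        x2 = min(c for _, _, c, _ in keep)
        y1 = max(b for _, b, _, _ in keep)   # x [max bottoms, min tops]
        y2 = min(d for _, _, _, d in keep)
        best = max(best, max(0.0, x2 - x1) * max(0.0, y2 - y1))
    return best
```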
I think I have a counter-example. Let's say you have r := N-2, i.e. you want to find two rectangles with maximum overlap. Let's say you have two rectangles covering the same area (= maximum overlap). Those two will be the optimal result in the end.
Now we need to construct some more rectangles, such that at least one of those two get removed in a reduction step.
Let's say we have three rectangles which overlap a lot, but are not optimal: they have a very small overlapping area with the other two rectangles.
Now if you want to optimize the area for four rectangles, you will remove one of the two optimal rectangles, right? Or maybe you don't HAVE to, but you're not sure which decision is optimal.
So I think your reduction algorithm is not quite correct. At the moment I'm not sure whether there is a good algorithm for this, or which complexity class it belongs to. If I have time I'll think about it :)
Postscript. This is pretty defective, but may spark some ideas. It's especially defective where there are outliers in a quadrant that are near the X and Y axes - they will tend to reinforce each other, as if they were both at 45 degrees, pushing the solution away from that quadrant in a way that may not make sense.
-
If r is a lot smaller than N, and N is fairly large, consider this:
Find the average center.
Sort the rectangles into 2 sequences by (X - center.x) + (Y - center.y) and (X - center.x) - (Y - center.y), where X and Y are the center of each rectangle.
For any solution, all of the reject rectangles will be members of up to 4 subsequences, each of which is a head or tail of one of the 2 sequences. Assuming N is a lot bigger than r, most of the time will be in sorting the sequences - O(n log n).
To find the solution, first find the intersection given by removing the r rectangles at the head and tail of each sequence. Use this base intersection to eliminate consideration of the "core" set of rectangles that you know will be in the solution. This will reduce the intersection computations to just working with up to 4*r + 1 rectangles.
Each of the 4 sequence heads and tails should be associated with an array of r rectangles, each entry representing the intersection given by intersecting the "core" with the i innermost rectangles from the head or tail. This precomputation reduces the complexity of finding the solution from O(r^4) to O(r^3).
This is not perfect, but it should be close.
Defects with a small r will come from should-be-rejects that are at off angles, with alternatives that are slightly better but on one of the 2 axes. The maximum error is probably computable. If this is a concern, use a real area-of-non-intersection computation instead of the simple "X+Y" difference formula I used.
Here is an explicit counter-example (with N=4 and r=2) to the greedy algorithm proposed by the asker.
The maximum intersection between three of these rectangles is between the black, blue, and green rectangles. But, it's clear that the maximum intersection between any two of these three is smaller than intersection between the black and the red rectangles.
I now have an algorithm, pretty similar to Ed Staub's above, with the same time estimates. It's a bit different from Ed's, since it is valid for all r.
The counter-example by mhum to the greedy algorithm is neat. Take a look.
I'm still trying to get used to this site. Somehow an earlier answer by me was truncated to two sentences. Thanks to everyone for their contributions, particularly to mhum whose counter-example to the greedy algorithm is satisfying. I now have an answer to my own question. I believe it is as good as possible, but lower bounds on complexity are too difficult for me. My solution is similar to Ed Staub's above and gives the same complexity estimates, but works for any value of r>0.
One of my rectangles is determined by its lower left corner. Let S be the set of lower left corners. In time O(N log(N)) we sort S into Sx according to the sizes of the x-coordinates. We don't care about the order within Sx between two lower left corners with the same x-coord. Similarly the sorted sequence Sy is defined by using the sizes of the y-coords. Now let u1, u2, u3 and u4 be non-negative integers with u1+u2+u3+u4=r. We compute what happens to the area when we remove various rectangles that we now name explicitly. We first remove the u1-sized head of Sx and the u2-sized tail of Sx. Let Syx be the result of removing these u1+u2 entries from Sy. We remove the u3-sized head of Syx and the u4-sized tail of Syx. One can now prove that one of these possible choices of (u1,u2,u3,u4) gives the desired maximal area of intersection. (Email me if you want a pdf of the proof details.) The number of such choices is equal to the number of integer points in the regular tetrahedron in 4-d euclidean space with vertices at the 4 points whose coordinate sum is r and for which 3 of the 4 coordinates are equal to 0. This is bounded by the volume of the tetrahedron, giving a complexity estimate of O(r^3).
So my algorithm has time complexity O(N log(N)) + O(r^3).
I believe this produces a perfect solution.
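A direct sketch of this procedure, under the stated claim that some split u1+u2+u3+u4 = r of head/tail removals is optimal (equal W-by-H rectangles given by their lower-left corners; this sketch rebuilds each candidate in O(N) rather than using the prefix precomputation needed for the full O(N log(N)) + O(r^3) bound, and all names are illustrative):

```python
def max_intersection_area(corners, W, H, r):
    """Maximum intersection area after dropping r of N equal W-by-H
    rectangles, trying every head/tail removal split (u1,u2,u3,u4)
    with u1+u2+u3+u4 = r.  Assumes N > r."""
    n = len(corners)
    xs_sorted = sorted(range(n), key=lambda i: corners[i][0])  # Sx
    ys_sorted = sorted(range(n), key=lambda i: corners[i][1])  # Sy
    best = 0.0
    for u1 in range(r + 1):
        for u2 in range(r + 1 - u1):
            # drop the u1-sized head and u2-sized tail of Sx
            removed = set(xs_sorted[:u1]) | set(xs_sorted[n - u2:] if u2 else [])
            sy = [i for i in ys_sorted if i not in removed]    # Syx
            rem = r - u1 - u2
            for u3 in range(rem + 1):
                u4 = rem - u3
                keep = sy[u3:len(sy) - u4]                     # drop head/tail of Syx
                xs = [corners[i][0] for i in keep]
                ys = [corners[i][1] for i in keep]
                # equal-size rectangles: intersection width is W minus the x-spread
                w = max(0.0, W - (max(xs) - min(xs)))
                h = max(0.0, H - (max(ys) - min(ys)))
                best = max(best, w * h)
    return best
```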
David's solution is easier to implement, and should be faster in most cases.
This relies on the assumption that for any solution, at least one of the rejects must be a member of the convex hull. Applying this recursively leads to:
Compute a convex hull.
Gather the set of all candidate solutions produced by:
{Remove a hull member, repair the hull} r times
(The hull doesn't really need to be repaired the last time.)
If h is the number of initial hull members, then the complexity is less than
h^r, plus the cost of computing the initial hull. I am assuming that a hull algorithm is chosen such that the sorted data can be kept and reused in the hull repairs.
This is just a thought, but if N is very large, I would probably try a Monte-Carlo algorithm.
The idea would be to generate random points (say, uniformly in the convex hull of all rectangles) and score how each random point performs. If a random point lies in N-r or more rectangles, then update the hit count of each (N-r)-subset of the rectangles containing it.
In the end, the N-r rectangle subset with the most random points in it is your answer.
This algorithm has many downsides, the most obvious being that the result is random and thus not guaranteed to be correct. But like most Monte-Carlo algorithms it scales well, and you should be able to use it in higher dimensions as well.
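A minimal sketch of the Monte-Carlo idea (it credits each point's whole coverage set rather than every (N-r)-subset, and returns the most-hit coverage set of size at least N-r; the function name is illustrative and the result is random, not guaranteed optimal):

```python
import random
from collections import Counter

def monte_carlo_best_subset(rects, r, samples=20000, seed=0):
    """Sample points in the bounding box of the rectangles; for each point
    covered by at least N-r rectangles, credit the set of rectangles
    covering it.  The most-hit set approximates the best N-r subset."""
    rng = random.Random(seed)
    n = len(rects)
    x1 = min(a for a, _, _, _ in rects)
    x2 = max(c for _, _, c, _ in rects)
    y1 = min(b for _, b, _, _ in rects)
    y2 = max(d for _, _, _, d in rects)
    hits = Counter()
    for _ in range(samples):
        px, py = rng.uniform(x1, x2), rng.uniform(y1, y2)
        cover = frozenset(i for i, (a, b, c, d) in enumerate(rects)
                          if a <= px <= c and b <= py <= d)
        if len(cover) >= n - r:
            hits[cover] += 1
    return max(hits, key=hits.get) if hits else None
```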