Given a set of intervals on the real line and a parameter d > 0, find a sequence of points with gaps between neighbors less than or equal to d, such that the number of intervals that contain any of the points is minimized.
To prevent trivial solutions, we require that the first point of the sequence lies before the first interval and the last point lies after the last interval. The intervals can be thought of as right-open.
Does this problem have a name? Maybe even an algorithm and a complexity bound?
Some background:
This is motivated by a question from topological data analysis, but it seems so general that it could be interesting for other fields, e.g. task scheduling (given a factory that has to shut down at least once a year and wants to minimize the number of tasks affected by the maintenance...).
We were thinking of integer programming and minimum cuts, but the d-parameter does not quite fit. We also implemented approximate greedy solutions in O(n^2) and O(n log n) time, but they can run into very bad local optima.
Show me a picture
I draw intervals as lines. The following diagram shows 7 intervals. d is such that you have to cut at least every fourth character. At the bottom of the diagram you see two solutions (marked with x and y). x cuts through the four intervals at the top, whereas y cuts through the three intervals at the bottom. y is optimal.
0.......8
 ——— ———
 ——— ———
   ———
   ———
   ———
x x   x x
y   y   y
Show me some code:
How should we define fun in the following snippet?
intervals = [(0, 1), (0.5, 1.5), (0.5, 1.5)]
d = 1.1
fun(intervals, d)
>>> [-0.55, 0.45, 1.55] # Or something close to it
In this small example the optimal solution will cut the first interval, but not the second and third. Obviously, the algorithm should work with more complicated examples as well.
A tougher test can be the following: Given a uniform distribution of interval start times on [0, 100] and lengths uniform on [0, d], one can compute the expected number of cuts by a regular grid [0, d, 2d, 3d,..] to be slightly below 0.5*n. And the optimal solution should be better:
import numpy as np

n = 10000
delta = 1
starts = np.random.uniform(low=0., high=99, size=n)
lengths = np.random.uniform(low=0., high=1, size=n)
rand_intervals = np.array([starts, starts + lengths]).T
regular_grid = np.arange(0, 101, 1)
optimal_grid = fun(rand_intervals, delta)
# This computes the number of intervals cut by any of the points.
def cuts(intervals, grid):
    bins = np.digitize(intervals, grid)
    return np.sum(bins[:, 0] != bins[:, 1])
cuts(rand_intervals, regular_grid)
>>> 4987 # Expected to be slightly below 0.5*n
assert cuts(rand_intervals, optimal_grid) <= cuts(rand_intervals, regular_grid)
You can solve this optimally through dynamic programming by maintaining a table S[k], where S[k] is the best partial solution (the one covering the largest amount of space) among those having k intervals with a point in them. Then you repeatedly remove your lowest S[k], extend it in all possible ways (limiting yourself to the relevant endpoints of intervals plus the last point in S[k] plus d), and update S with those new partial solutions.
When the lowest possible S[k] in your table covers the entire range, you are done.
A Python 3 solution using intervaltree from pip:
from intervaltree import Interval, IntervalTree

def optimal_points(intervals, d, epsilon=1e-9):
    intervals = [Interval(lr[0], lr[1]) for lr in intervals]
    tree = IntervalTree(intervals)
    start = min(iv.begin for iv in intervals)
    stop = max(iv.end for iv in intervals)
    # The best partial solution with k intervals containing a point.
    # We also store the intervals that these points are contained in as a set.
    sols = {0: ([start], set())}
    while True:
        lowest_k = min(sols.keys())
        s, contained = sols.pop(lowest_k)
        # print(lowest_k, s[-1])  # For tracking progress in slow instances.
        if s[-1] >= stop:
            return s
        relevant_intervals = tree[s[-1]:s[-1] + d]
        relevant_points = [iv.begin - epsilon for iv in relevant_intervals]
        relevant_points += [iv.end + epsilon for iv in relevant_intervals]
        extensions = {s[-1] + d} | {p for p in relevant_points if s[-1] < p < s[-1] + d}
        for ext in sorted(extensions, reverse=True):
            new_s = s + [ext]
            new_contained = set(tree[ext]) | contained
            new_k = len(new_contained)
            if new_k not in sols or new_s[-1] > sols[new_k][0][-1]:
                sols[new_k] = (new_s, new_contained)
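For the small example from the question, a call looks like this (the exact points differ from [-0.55, 0.45, 1.55] by epsilon-sized shifts, but the number of cut intervals is the same):

optimal_points([(0, 1), (0.5, 1.5), (0.5, 1.5)], 1.1)
>>> [0, 0.499999999, 1.599999999]  # gaps <= 1.1; only the first interval is cut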
If the range and precision were feasible to iterate over, we could first merge the intervals and count the overlaps. For example,
[(0, 1), (0.5, 1.5), (0.5, 1.5)] ->
[(0, 0.5, 1), (0.5, 1, 3), (1, 1.5, 2)]
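Such a merged list can be produced by a sweep over the sorted endpoints. A minimal Python sketch (merge_counts is a hypothetical helper name, assuming right-open intervals):

def merge_counts(intervals):
    # +1 event where an interval opens, -1 where it closes (right-open).
    events = sorted([(lo, +1) for lo, hi in intervals] +
                    [(hi, -1) for lo, hi in intervals])
    segments, open_count, prev = [], 0, None
    for x, delta in events:
        # Emit the segment [prev, x) with its constant overlap count.
        if prev is not None and x > prev and open_count > 0:
            segments.append((prev, x, open_count))
        open_count += delta
        prev = x
    return segments

merge_counts([(0, 1), (0.5, 1.5), (0.5, 1.5)])
>>> [(0, 0.5, 1), (0.5, 1, 3), (1, 1.5, 2)]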
Now let f(n, k) represent the optimal solution with k points up to n on the number line. Then:
f(n, k) = min over valid i of ( num_intervals(n) + f(n - i, k - 1) )

num_intervals(n) is known in O(1) from a pointer into the merged interval list. n - i does not range over every precision point up to n; rather, it ranges over every point at most d back that marks a change from one merged interval to the next as we move back from our current pointer in the merged-interval list.
One issue to note is that we need to store the distance between the rightmost and the second-to-rightmost point for any optimal f(n, k). This is to avoid joining with an f(n - i, k - 1) whose second-to-rightmost point is less than d away from our current n, which would make the new middle point, n - i, superfluous and invalidate this solution. (I'm not sure I've thought this issue through enough; perhaps someone could point out something that's amiss.)
How would we know k is high enough? Since the optimal solution may use fewer points than the current k, we assume that the recurrence would prevent us from finding such an instance, based on the idea in the above paragraph:
0.......8
 ——— ———
 ——— ———
   ———
   ———
   ———
x x   x x
y   y   y
d = 4
merged list:
[(1, 3, 2), (3, 4, 5), (4, 5, 3), (5, 6, 5), (6, 8, 2)]
f(4, 2) = (3, 0) // (intersections, previous point)
f(8, 3) = (3, 4)

There are no valid solutions for f(8, 4), since any break point we could consider between interval changes in the merged list lies before the second-to-last point in f(8, 3).
I am modelling a particle in 3D space.
{0} The particle starts at time t0 from a known position P0 with a velocity V0. The velocity is computed using its known previous position of P-1 at t-1.
{1} The particle is targeted to go to P1 at t1 with a known velocity of V1.
{..} The particle moves as fast as it can, without jerks (C1 continuous), bound by a set of constraints that clamp the acceleration along x, y and z independently. The maximum acceleration/deceleration values along x, y and z are known: Xa, Ya and Za. The maximum rates of change of acceleration along x, y and z are Xr, Yr and Zr.
{n} After an unknown number of time steps it reaches Pn at some time (say tn) with a velocity of Vn.
{n+1} It moves to Pn+1 at tn+1.
The problem I have is to compute the minimum time for the transition from P0 to Pn and to generate the intermediate positions and velocity directions thereof. A secondary goal is to accelerate smoothly instead of applying acceleration that results in jerks.
Current Approach:
Find the dimension {x, y or z} that will take the longest to align from start P0 to end Pn. This will be the critical dimension and will determine the total time. This is fairly straightforward, and I can write something to this effect.
Interpolate smoothly, without jitters, from P0 to Pn in all dimensions such that the velocity at Pn is as expected. I am not sure how to approach this.
Any inputs/physics engines that already do this would be useful. It is a commercial project, and I cannot take dependencies on large 3rd-party libraries with restrictive licenses.
Note: Particle at P0 and Pn have little or no acceleration.
If I understand correctly, you have a point (P0, V0), with V0 = P0 - P-1, and a point (Pn, Vn), with Vn = Pn - Pn-1, and you want to find the fewest intermediate points by adjusting the acceleration at each time step.
Let's define the acceleration at ti: Ai = Vi - Vi-1, with abs(Ai) <= mA. Here, since the problem is axis-independent, abs is the member-wise absolute value instead of the norm (or vector magnitude), and mA is the maximum acceleration vector, positive in each dimension. Let's also consider that Pn > P0 (member-wise).
From that, we get Vi = Vi-1 + Ai and so Pi = Pi-1 + Vi-1 + Ai.
If you need to go from some point to another as fast as possible, the obvious thing to do, whatever the initial velocity, is to accelerate as much as possible until you reach the goal. However, since your problem is discrete and you have a terminal velocity Vn, that method will probably overshoot, arriving with a different terminal velocity.
However, you can do the same thing in reverse, starting from the end point. And if you start simultaneously from both points, you will trace two paths that cross each other in each dimension (not necessarily crossing in 3D, but, in each dimension, the relative direction of the two paths changes at some "crossing" point).
Let's take a one-dimensional example. (P0, V0) = (0, -2) and (Pn, Vn) = (35, -1), and mA = 1.
The first path, with Ai = mA, goes like this:
(0, -2) -> (-1, -1) -> (-1, 0) -> (0, 1) -> (2, 2) ->
(5, 3) -> (9, 4) -> (14, 5) -> (20, 6) -> (27, 7) -> ...
The second path, with Ai = -mA but in reverse, goes like this:
(35, -1) <- (36, 0) <- (36, 1) <- (35, 2) <- (33, 3) <-
(30, 4) <- (26, 5) <- (21, 6) <- (15, 7) <- ...
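The two listings can be reproduced with a small Python sketch (my own illustration, not part of the original answer): the forward path applies Ai = mA at every step, and the backward path walks back in time from the end point, undoing a step of Ai = -mA.

def forward_path(p, v, a_max, steps):
    # v[i] = v[i-1] + a_max, then p[i] = p[i-1] + v[i]
    path = [(p, v)]
    for _ in range(steps):
        v += a_max
        p += v
        path.append((p, v))
    return path

def backward_path(p, v, a_max, steps):
    # Inverted relations: p[i-1] = p[i] - v[i], v[i-1] = v[i] + a_max
    path = [(p, v)]
    for _ in range(steps):
        p -= v
        v += a_max
        path.append((p, v))
    return path

forward_path(0, -2, 1, 9)   # (0, -2), (-1, -1), ..., (20, 6), (27, 7)
backward_path(35, -1, 1, 8) # (35, -1), (36, 0), ..., (21, 6), (15, 7)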
You can see the paths cross with the same velocity somewhere between 20 and 21. That gives you the fastest acceleration and deceleration parts of the path you need, but the two parts aren't connected. However, it's easy to connect them by finding the closest points of same velocity; let's call these points Pq and Pr. Here, Pq = (20, 6) and Pr = (21, 6). Since that velocity is calculated between current and previous points, take the point before Pq (Pq-1, or (14, 5) in the example) and the point Pr, and try connecting them.
If Pq >= Pr >= Pq - 2mA, then you can connect them directly by taking Pq-1 unchanged, and Pr with Vr = Pr - Pq-1.
Else, take Pq-2 and Pr-1 (where Vr-1 = Vr - mA, because it's in reverse) and try connecting those by adding intermediate points. Since these points have a velocity difference of mA, you can search only for intermediate points with the same velocity Vs such that Vq-2 <= Vs <= Vr-1.
If you still can't find a solution, then take Pq-3 and Pr-2 and repeat the process with more intermediate points.
In the example I took, Pq < Pr, so we have to try with Pq-2 = (9, 4) and Pr-1 = (26, 5). We can connect those with a sequence of 3 points, for example (9, 4) -> (13, 4) -> (17, 4) -> (21, 4) -> (26, 5).
In any case, this method will give you the smallest number of intermediate points, meaning the fastest path between P0 and Pn.
If you then want to reduce jerk, you can discard the points calculated previously and do an interpolation with the number of points you now know to be minimal.
After playing around with some ideas, I came up with another solution, more accurate and probably faster, if done correctly, than the one in my previous answer. It is, however, quite complicated and requires quite a bit of maths, although not very complex maths. Moreover, this is a work in progress: I am still investigating some areas. Nonetheless, from what I've tried, it already produces very good results.
The problem
Definitions and goal
Throughout this answer, p[n] refers to the position of the nth point, v[n] to its velocity, a[n] to its acceleration, and j[n] to its jerk (the derivative of acceleration). The velocity of the nth point depends only on its position and that of the previous point. Similarly for acceleration and jerk, but with the point's velocity and acceleration, respectively.
We have a start point and an end point, respectively p[0] and p[n], both with associated velocities v[0] and v[n]. The goal is to place n-1 points in between, with an arbitrary n, such that, along the X, Y, and Z axes, the absolute values of acceleration and jerk at any of these points (and at p[n]) are below some limits, respectively aMaxX, aMaxY, and aMaxZ for acceleration, and jMaxX, jMaxY, and jMaxZ for jerk.
What we want to find is the values of p[i] for all i ∈ [1; n-1]. Because p[i] = p[i-1] + v[i], this is the same as finding v[i]. By the same reasoning, with v[i] = v[i-1] + a[i] and a[i] = a[i-1] + j[i], it is also the same as finding a[i] or j[i].
a[0] and a[n+1] are assumed to be zero.
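As a concrete reading of these relations, here is a minimal Python sketch (a hypothetical helper of my own) that rebuilds a[i], v[i] and p[i] from a choice of jerks:

def integrate_jerks(p0, v0, jerks):
    # a[i] = a[i-1] + j[i], v[i] = v[i-1] + a[i], p[i] = p[i-1] + v[i], with a[0] = 0.
    p, v, a = p0, v0, 0
    path = [(p, v, a)]
    for j in jerks:
        a += j
        v += a
        p += v
        path.append((p, v, a))
    return path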
Observations and simplifications
Because the problem's constraints are independent of the dimension, we can solve for each of the three dimensions separately, as long as the number of points obtained in each case is the same. Therefore, I am only going to solve the one-dimensional version of the problem, using aMax and jMax, irrespective of the axis.
*[WIP]* Determine the worst case to solve first, then solve the other ones, knowing the number of points.
The actual positions of the two given points are irrelevant, what matters is the relative distance between them, which we can define as P = p[n] - p[0]. Let's also define the ranges R = [1; n] and R* = [1; n+1].
Because of the discrete nature of the problem, we can obtain the following equations. Note that ∑{i∈R}(x[i]) is the sum of all x[i] for i∈R.
Ⓐ ∑{i∈R}(v[i]) = P
Ⓑ ∑{i∈R}(a[i]) = v[n] - v[0]
Ⓧ ∑{i∈R*}(j[i]) = 0
Ⓧ comes from the assumption that a[0] = a[n+1] = 0.
From Ⓐ and v[i] = v[i-1] + a[i], i∈R, we can deduce:
Ⓒ ∑{i∈R}((n+1-i)*a[i]) = P - n*v[0]
By the same logic, from Ⓑ, Ⓒ, and a[i] = a[i-1] + j[i], i∈R, we can deduce:
Ⓨ ∑{i∈R}((n+1-i)*j[i]) = v[n] - v[0]
Ⓩ ∑{i∈R}(T[n+1-i]*j[i]) = P - n*v[0]
Here, T[n] is the nth triangular number, defined by T[n] = n*(n+1)/2.
The equations Ⓧ, Ⓨ, and Ⓩ are the relevant ones for the next parts.
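These identities are easy to check numerically. A throwaway Python sketch for Ⓐ and Ⓒ, assuming nothing beyond the definitions above:

import random

n = 6
v0 = random.random()
a = [random.random() for _ in range(n)]    # a[1..n]
v = [v0]
for ai in a:
    v.append(v[-1] + ai)                   # v[i] = v[i-1] + a[i]
P = sum(v[1:])                             # Ⓐ: the v[i], i∈R, sum to P
lhs = sum((n + 1 - i) * a[i - 1] for i in range(1, n + 1))
assert abs(lhs - (P - n * v0)) < 1e-9      # Ⓒ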
The approach
In order to minimize n, we can start with a small value of n (1, 2?) and find a solution. Then, if max{i∈R}(abs(a[i])) > aMax or max{i∈R}(abs(j[i])) > jMax, we can increment n and repeat the process.
*[WIP]* Find a lower bound for n to avoid unnecessary calculations from small values of n. Or estimate the correct value of n and pinpoint it by testing solutions.
Finding a solution requires finding the values of j[i] for all i∈R*. I have yet to find an optimal form for j[i], but defining j*[i], r[i] and s[i] such that
j[i] = j*[i] + r[i]v[0] + s[i]v[n]
works quite well.
*[WIP]* Find a better form for j[i]
By doing that, we transform our n-1 unknowns (j[i], i∈R, note that j[n+1] = -∑{i∈R}(j[i])) into 3(n-1) easier to find unknowns. Here are a few things we can deduce right now from Ⓧ, Ⓨ, and Ⓩ.
∑{i∈R*}(r[i]) = 0
∑{i∈R*}(s[i]) = 0
∑{i∈R}((n+1-i)*r[i]) = -1
∑{i∈R}((n+1-i)*s[i]) = 1
∑{i∈R}(T(n+1-i)*r[i]) = -n
∑{i∈R}(T(n+1-i)*s[i]) = 0
As a reminder, here are Ⓧ, Ⓨ, and Ⓩ.
Ⓧ ∑{i∈R*}(j[i]) = 0
Ⓨ ∑{i∈R}((n+1-i)*j[i]) = v[n] - v[0]
Ⓩ ∑{i∈R}(T[n+1-i]*j[i]) = P - n*v[0]
The goal now is to find adequate special cases to help us determine these unknowns.
The special cases
v[0] = v[n] = 0
By playing with values of jerk, I observed that taking all of j[i], i∈R* as part of a parabola yields excellent results for minimizing both jerk and acceleration. Although it isn't the best possible fit, I haven't found better yet.
The intuition behind jerk values coming from a parabola is that, if the position values are to follow a polynomial, then its degree must be at least 5, and 5 suffices. This is easier to understand if you think about the velocity values following a 4th-degree polynomial. Given the constraints that v[0] and v[n] are set, that a[0] = a[n+1] = 0, and that its integral over [0; n] must equal P, this polynomial must have a degree of at least 4. This holds for both the continuous and discrete cases. Finally, it seems that taking the smallest degree leads to a smoother jerk as well as making it easier to calculate.
Here is an example of a continuous case where the position is in purple, the velocity in blue, the acceleration in yellow and the jerk in red.
In case you want to play with this, here is how to define the position curve in terms of n, p[0], p[n], v[0], and v[n] (the other ones are simply derivatives).
a = (-3n(v[n]+v[0]) + 6(p[n]-p[0])) / n^5
b = (n(7v[n]+8v[0]) - 15(p[n]-p[0])) / n^4
c = (-n(4v[n]+6v[0]) + 10(p[n]-p[0])) / n^3
p[x] = ax^5 + bx^4 + cx^3 + v[0]x + p[0]
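For reference, a small Python sketch that evaluates this curve (my own illustration, with the n factor on the velocity term of a as written above; the asserts check the stated boundary conditions):

def position_curve(n, p0, pn, v0, vn):
    a = (-3*n*(vn + v0) + 6*(pn - p0)) / n**5
    b = (n*(7*vn + 8*v0) - 15*(pn - p0)) / n**4
    c = (-n*(4*vn + 6*v0) + 10*(pn - p0)) / n**3
    return lambda x: a*x**5 + b*x**4 + c*x**3 + v0*x + p0

p = position_curve(4, 0, 10, 1, 2)                   # n=4, p[0]=0, p[n]=10, v[0]=1, v[n]=2
assert abs(p(0)) < 1e-9 and abs(p(4) - 10) < 1e-9    # endpoints match
assert abs((p(4) - p(4 - 1e-6)) / 1e-6 - 2) < 1e-3   # slope at n is v[n]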
If v[0] = v[n] = 0, then j[i] = j*[i], i∈R*. That means that the values j*[i] follow a quadratic polynomial. So we want to find α, β, and γ such that Ⓟ holds.
Ⓟ j*[i] = αi^2 + βi + γ, i∈R*
From Ⓧ, Ⓨ, and Ⓩ follow these equations.
α*∑{i∈R*}(i^2) + β*∑{i∈R*}(i) + γ*∑{i∈R*}(1) = 0
α*∑{i∈R}((n+1-i)*i^2) + β*∑{i∈R}((n+1-i)*i) + γ*∑{i∈R}(n+1-i) = 0
α*∑{i∈R}(T(n+1-i)*i^2) + β*∑{i∈R}(T(n+1-i)*i) + γ*∑{i∈R}(T(n+1-i)) = P
Solving this system gives α, β, and γ, which can be used with Ⓟ to calculate j*[i], i∈R*. Note that j*[i] = j*[n+2-i], so only the upper half of the calculations need to be done.
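The system can also be solved symbolically for general n. A sketch with sympy (my own, not the author's code):

import sympy as sp

n, P, i = sp.symbols('n P i')
alpha, beta, gamma = sp.symbols('alpha beta gamma')
T = lambda k: k * (k + 1) / 2

Rstar = (i, 1, n + 1)  # i ∈ R*
R = (i, 1, n)          # i ∈ R
eqs = [
    alpha*sp.summation(i**2, Rstar) + beta*sp.summation(i, Rstar) + gamma*sp.summation(1, Rstar),
    alpha*sp.summation((n+1-i)*i**2, R) + beta*sp.summation((n+1-i)*i, R) + gamma*sp.summation(n+1-i, R),
    alpha*sp.summation(T(n+1-i)*i**2, R) + beta*sp.summation(T(n+1-i)*i, R) + gamma*sp.summation(T(n+1-i), R) - P,
]
sol = sp.solve(eqs, [alpha, beta, gamma])  # α, β, γ as functions of n and P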
v[0] = v[n] = 1/n
If v[0] = v[n] = 1/n, then j[i] = 0, i∈R*. This means that Ⓠ holds.
Ⓠ r[i] + s[i] = -n*j*[i], i∈R*
v[0] = 0, j[i∈L] = J, j[h] = 0, j[i∈U] = -J
L and U are respectively the lower and upper halves of R*, and h is the value in between, if n+1 is odd. In other words:
if n is odd:
L = [1; (n+1)/2]
U = [(n+3)/2; n+1]
if n is even:
L = [1; n/2]
h = n/2+1
U = [n/2+2; n+1]
This special case corresponds to the maximum overall acceleration between p[0] and p[n] while minimizing abs(j[i]), i∈R*. Here, Ⓩ gives us the following equation.
∑{i∈R}(T[n+1-i]*j[i]) = P
∑{i∈L}(T[n+1-i])*j[1] + ∑{i∈U}(T[n+1-i])*j[n+1] = P
j[1] = P / [ ∑{i∈L}(T[n+1-i]) - ∑{i∈U}(T[n+1-i]) ]
This gives j[1], and so every j[i], i∈R*. We can then calculate v[n] using Ⓨ.
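A sketch of this computation in Python, using L, h and U as defined above (my own illustration):

def T(k):
    return k * (k + 1) // 2

def j1(n, P):
    # Split R* = [1; n+1] into lower half L and upper half U; the middle
    # index h is skipped when n is even. T[n+1-i] vanishes at i = n+1 anyway.
    if n % 2 == 1:
        L = range(1, (n + 1) // 2 + 1)
        U = range((n + 3) // 2, n + 2)
    else:
        L = range(1, n // 2 + 1)
        U = range(n // 2 + 2, n + 2)
    denom = sum(T(n + 1 - i) for i in L) - sum(T(n + 1 - i) for i in U)
    return P / denom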
Putting the pieces together
Each special case gives us, for some values of v[0], v[n] and P, a relation of the form
αj*[i] + βr[i] + γs[i] = δ.
By treating three special cases (assuming they are not similar, meaning they do not give the same relation), we get a system of three equations that, once solved, gives the values of j*[i], r[i] and s[i] for all i∈R*.
As a result, we can calculate, for each value of n, the values of j[i] as functions of v[0], v[n] and P. They can be precalculated, which means testing them for any value of n can be very fast. Thereby, we can very quickly find a good estimate for the smallest number of points needed in the trajectory, as well as a good approximation of the best trajectory possible, as long as we have precalculated values up to a sufficiently large value of n.
Answer
I suggest you take the following function:
X(n) = Xstart + Vxstart·n + (-6Xstart - 3Vxstart + 6Xend - 3Vxend + c/2)·n^2 + (8Xstart + 3Vxstart - 8Xend + 5Vxend - c)·n^3 + (-3Xstart - Vxstart + 3Xend - 2Vxend + c/2)·n^4
(for each coordinate X,Y,Z)
Here are some graphs of what this gives; I took c=2 for each sample:
For xstart=1, vstart=1, xend=3, vend=-2, this gives:
X(n)= 1 + n + 16 n^2 -25 n^3 + 10 n^4
For xstart = -4, vstart = -4, xend = 4, vend = 0, this gives:
X(n) = -4 - 4n + 61n^2 - 78n^3 + 29n^4
where c is a number from 0.1 to 5; it is up to you to decide. The higher c is, the faster the function will go to that point (but it might have to turn back if c > 4). (See the graphs below.)
The polynomial comes from the following calculation, where a = x0, b = v0, c = xe, d = ve, and e = the magic constant.
Explanation
Based on Nelfeal's answer, my idea was to try to solve the given problem with polynomials.
We can change the problem by defining a new axis along P[last] - P[0], reducing the problem to dimension 1.
We can think about the problem in continuous mathematics instead of discrete mathematics (e.g. use functions instead of sequences), and then go back to the discrete world, which is just a special case of the continuous one.
We can change the units of time and space so that the time is 1 and the distance is 1, so that the problem simplifies to:
Find a function 𝒇 which satisfies the following :
𝒇(0) = 0 and 𝒇(1) = 1
𝒇'(0) = 0 and 𝒇'(1) = 0
For x∈ℝ, |𝒇''(x)| < c, where c is the max acceleration
We have
P(X) = ∑{i∈ℕ} a_i X^i
P'(X) = ∑{i∈ℕ} (i+1) a_(i+1) X^i
P''(X) = ∑{i∈ℕ} (i+2)(i+1) a_(i+2) X^i
We need :
P(0) = 0
P(1) = 1
P'(0) = 0
P'(1) = 0
-c <= P''(x) <= c
Thus it means :
a0 = 0 (from 1.)
a1 = 0 (from 3.)
P(1) = ∑{i∈ℕ} a_i = 1
P'(1) = ∑{i∈ℕ} (i+1) a_(i+1) = 0
P''(x) = ∑{i∈ℕ} (i+2)(i+1) a_(i+2) x^i ∈ [-c, c]
The third condition is the most complex one, and can be simplified by requiring that P''(1) = c.
We will have c vary to see what changes.
After inverting a 3x3 matrix, we get the following result:
P(x) = (c/2+6) x^2 - (c+8) x^3 + (c/2+3) x^4
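As a quick sanity check of this closed form (a throwaway Python sketch of my own; the numerical derivatives confirm P'(1) = 0 and P''(1) = c):

def P(x, c):
    return (c/2 + 6)*x**2 - (c + 8)*x**3 + (c/2 + 3)*x**4

c, h = 4.0, 1e-5
assert abs(P(0, c)) < 1e-12 and abs(P(1, c) - 1) < 1e-12
assert abs((P(1, c) - P(1 - h, c)) / h) < 1e-3                           # P'(1) ≈ 0
assert abs((P(1, c) - 2*P(1 - h, c) + P(1 - 2*h, c)) / h**2 - c) < 1e-2  # P''(1) ≈ c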
For c=0.15, this gives:
For c=1, this gives:
For c=4, we see a bounce back:
If we take c from 0.1 to 6, we get the following 3D graph:
Note that we have solved this for polynomials of degree 4, but you can do the same for higher degrees (up to 10 if you want) to get more possibilities in your functions.
http://www.spoj.com/problems/SCALE/
I am trying to do it using recursion but getting TLE.
The tags of the problem say BINARY SEARCH.
How can one do it using binary search?
Thanks in advance.
The first thing to notice here is that if you had two weights of each size instead of one, the problem would be quite trivial: we would only need to represent X in its base-3 representation and take the corresponding number of weights. For example, if X=21 then we could take P_3 twice and P_2 once, and put those onto the other scale.
Now let's try to make something similar, using the fact that we can add weights to both scales (including the one where X is placed):
Assume that X <= P_1+P_2+...+P_n; that would mean X <= P_n + (P_n - 1)/2 (easy to understand why). Therefore, X + P_(n-1) + P_(n-2) + ... + P_1 < 2*P_n.
(*) This means that if we add some of the weights from 1 to n-1 to the same scale as X, the number on that scale still does not have a 2 in its n-th rightmost digit (it is either 0 or 1).
From now on, "digit" means a digit of a number in its base-3 representation (but it can temporarily become larger than 2 :P ). Now let's denote the total weight of the first scale (where X is placed) as A=X and of the other scale as B=0; our goal is to make them equal (both A and B will change as we make progress).
Let's iterate through all digits of A from the smallest (rightmost) to the largest (leftmost). If the current digit index is i and the digit:
Equals 0: ignore it and proceed further.
Equals 1: place weight P_i = 3^(i-1) on scale B.
Equals 2: add P_i = 3^(i-1) to scale A. Note that this results in an increase of digit i+1.
Equals 3 (yes, this case is possible, if both the current and previous digit were 2): add 1 to the digit at index i+1 and go further (no weights are added to either scale).
Due to (*), the procedure will run correctly (the last digit of A will be equal to 1), we choose at most one weight of each size and place it correctly, and obviously the numbers A and B will be equal after the procedure is complete.
Now the second case, X > P_1+P_2+...+P_n: obviously we cannot balance, even if we place all the weights on the second scale.
This completes the proof, showing both when balancing is possible and how to place the weights on the two scales to equalize them.
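A minimal Python sketch of this digit procedure (weights P_i = 3^(i-1); my own illustration, not the C++ code linked below):

def place_weights(x, n):
    # Returns (A, B): indices of the weights placed next to X and on the
    # other scale, or None when X cannot be balanced with P_1..P_n.
    A, B = [], []
    for i in range(1, n + 1):
        if x == 0:
            break
        r = x % 3
        if r == 1:
            B.append(i)       # digit 1: put P_i on the other scale
        elif r == 2:
            A.append(i)       # digit 2: put P_i next to X; carries into digit i+1
        x = (x + 1) // 3      # move to the next digit (the carry is absorbed)
    return (A, B) if x == 0 else None

place_weights(21, 4)
>>> ([3], [2, 4])  # 21 + 9 = 30 on one scale, 3 + 27 = 30 on the other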
EDIT:
C++ code which I successfully submitted on SPOJ just now https://ideone.com/tbB7Ve
The solution to this problem is quite trivial. The idea is the same as in Yerken's answer, but expressed in a slightly different way:
Only the first weight has a mass not divisible by 3, so the first weight is the only one that affects the mod-3 balance of the two scales:
If X mod 3 == 0, the first weight must not be used
If X mod 3 == 1, the first weight must be on scale B (the currently empty one)
If X mod 3 == 2, the first weight must be on scale A
Subtract weight(B) from both scales --> the solution doesn't change, and now weight(A) is divisible by 3 while weight(B) == 0
Set X' = weight(A)/3 and divide every weight Pi by 3 --> the solution doesn't change, and now it's the same problem with N' = N-1 and X' = (X+1)/3
pseudo-code:
listA <- empty
listB <- empty
for i = 1 to N {
    if (X == 0) break; // done!
    if (X mod 3 == 1) then push i to listB;
    if (X mod 3 == 2) then push i to listA;
    X = (X + 1)/3; // integer division
}
hasSolution <- (X == 0)
C++ code: http://ideone.com/LXLGmE
P is an n×d matrix, holding n d-dimensional samples. P is several times denser in some areas than in others. I want to select a subset of P in which the distance between any pair of samples is more than d0, and I need it to be spread all over the area. All samples have the same priority and there's no need to optimize anything (e.g. covered area or sum of pairwise distances).
Here is sample code that does so, but it's really slow. I need more efficient code, since I need to call it several times.
%% generating sample data
n_4 = 1000; n_2 = n_4*2; n = n_4*4;
x1=[ randn(n_4, 1)*10+30; randn(n_4, 1)*3 + 60];
y1=[ randn(n_4, 1)*5 + 35; randn(n_4, 1)*20 + 80 ];
x2 = rand(n_2, 1)*(max(x1)-min(x1)) + min(x1);
y2 = rand(n_2, 1)*(max(y1)-min(y1)) + min(y1);
P = [x1,y1;x2, y2];
%% eliminating close ones
tic
d0 = 1.5;
D = pdist2(P, P);D(1:n+1:end) = inf;
E = zeros(n, 1); % eliminated ones
for i = 1:n-1
    if ~E(i)
        CloseOnes = (D(i,:) < d0) & ((1:n) > i) & (~E');
        E(CloseOnes) = 1;
    end
end
P2 = P(~E, :);
toc
%% plotting samples
subplot(121); scatter(P(:, 1), P(:, 2)); axis equal;
subplot(122); scatter(P2(:, 1), P2(:, 2)); axis equal;
Edit: How big should the subset be?
As j_random_hacker pointed out in the comments, one could say that P(1, :) is the fastest answer if we don't define a constraint on the number of selected samples. It neatly exposes the incoherence of the title! But I think the current title better describes the purpose. So let's define a constraint: "Try to select m samples, if possible". Now, with the implicit assumption of m=n, we can get the biggest possible subset. As I mentioned before, a faster method beats one that finds the optimal answer.
Finding the closest points over and over suggests a different data structure, one optimized for spatial searches. I suggest a Delaunay triangulation.
The solution below is "approximate" in the sense that it will likely remove more points than strictly necessary. I batch all the computations and remove, in each iteration, all points that contribute to distances that are too short; in many cases removing one point may also remove an edge that appears later in the same iteration. If this matters, the edge list can be further processed to avoid duplicates, or even to find points to remove that will impact the greatest number of distances.
This is fast.
dt = delaunayTriangulation(P(:,1), P(:,2));
d0 = 1.5;
while 1
    edge = edges(dt); % vertex ids in pairs
    % Look up the actual locations of each point and reorganize
    pwise = reshape(dt.Points(edge.', :), 2, size(edge,1), 2);
    % Compute the length of each edge
    difference = pwise(1,:,:) - pwise(2,:,:);
    edge_lengths = sqrt(difference(1,:,1).^2 + difference(1,:,2).^2);
    % Find edges shorter than the minimum length
    idx = find(edge_lengths < d0);
    if isempty(idx)
        break;
    end
    % Pick the first vertex of each too-short edge for deletion.
    % This could be smarter to avoid over-deleting.
    points_to_delete = unique(edge(idx, 1));
    % Remove them. The triangulation auto-updates.
    dt.Points(points_to_delete, :) = [];
    % Repeat until no edge is too short.
end
P2 = dt.Points;
You don't specify how many points you want to select. This is crucial to the problem.
I don't readily see a way to optimise your method.
Assuming that Euclidean distance is acceptable as a distance measure, the following implementation is much faster when selecting only a small number of points, and faster even when trying to find the subset with 'all' valid points (note that finding the maximum possible number of points is hard).
%%
figure;
subplot(121); scatter(P(:, 1), P(:, 2)); axis equal;
d0 = 1.5;
m_range = linspace(1, 2000, 100);
m_time = NaN(size(m_range));
for m_i = 1:length(m_range)
    m = m_range(m_i)
    a = tic;
    % Test points in random order.
    r = randperm(n);
    r_i = 1;
    S = false(n, 1); % selected ones
    for i = 1:m
        found = false;
        while ~found
            j = r(r_i);
            r_i = r_i + 1;
            if r_i > n
                % We have tried all points. Nothing else can be valid.
                break;
            end
            if sum(S) == 0
                % This is the first point.
                found = true;
            else
                % Get the points already selected
                P_selected = P(S, :);
                % Exclude points >= d0 along either axis - they cannot have
                % a Euclidean distance less than d0.
                P_valid = (abs(P_selected(:, 1) - P(j, 1)) < d0) & (abs(P_selected(:, 2) - P(j, 2)) < d0);
                if sum(P_valid) == 0
                    % There are no points that can be < d0.
                    found = true;
                else
                    % Implement Euclidean distance explicitly rather than
                    % using pdist - this makes a large difference to timing.
                    found = min(sqrt(sum((P_selected(P_valid, :) - repmat(P(j, :), sum(P_valid), 1)) .^ 2, 2))) >= d0;
                end
            end
        end
        if found
            % We found a valid point - select it.
            S(j) = true;
        else
            % Nothing found, so we must have exhausted all points.
            break;
        end
    end
    P2 = P(S, :);
    m_time(m_i) = toc(a);
    subplot(122); scatter(P2(:, 1), P2(:, 2)); axis equal;
    drawnow;
end
%%
figure
plot(m_range, m_time);
hold on;
plot(m_range([1 end]), ones(2, 1) * original_time);
hold off;
where original_time is the time taken by your method. This gives the following timings, where the red line is your method and the blue is mine, with the number of points selected along the x-axis. Note that the line flattens out when 'all' points meeting the criteria have been selected.
As you say in your comment, performance is highly dependent on the value of d0. In fact, as d0 is reduced, the method above shows an even greater improvement in performance (this is for d0=0.1):
Note however that this is also dependent on other factors such as the distribution of your data. This method exploits specific properties of your data set, and reduces the number of expensive calculations by filtering out points where calculating the Euclidean distance is pointless. This works particularly well for selecting fewer points, and it is actually faster for smaller d0 because there are fewer points in the data set that match the criteria (so there are fewer computations of the Euclidean distance required). The optimal solution for a problem like this will usually be specific to the exact data set used.
Also note that in my code above, manually calculating the Euclidean distance is much faster than calling pdist. The flexibility and generality of the Matlab built-ins is often detrimental to performance in simple cases.