I came across this question in a job interview and I am unable to find the correct algorithm for the solution, so I am posting it here:
There is a robot that can move on a coordinate plane in either of two ways:
Given that the robot's current position is (x,y), the robot can move by the sum of x and y in either direction, like so:
(x,y) -> (x+y, y)
(x,y) -> (x, x+y)
Now, given an initial point (x1, y1) and a destination point (x2, y2), you need to write a program to check whether the robot can ever reach the destination in any number of moves.
Note: x1, y1 , x2 , y2 > 0
Explanation:
Suppose the robot's initial point is (2,3) and the destination is (7,5).
The result in this case is yes, as the robot can take this path:
(2,3) -> (2, 2+3) => (2, 5)
(2,5) -> (2+5, 5) => (7,5)
Suppose the robot's initial point is (2,3) and the destination is (4,5).
The result in this case is no, as no matter what path the robot takes it cannot reach (4,5).
A naive brute-force approach
One way would be to recursively explore every possible move until you reach the target.
Something to consider is that the robot can keep moving indefinitely (never reaching the target), so you need a base case so the function terminates. Luckily, the position is always increasing along both the x and y axes, so as soon as either the x-coordinate or the y-coordinate exceeds the target's, you can give up exploring that path.
So something like:
def can_reach_target(pos, target):
    if pos == target:
        return True
    if pos[0] > target[0] or pos[1] > target[1]:
        return False
    return can_reach_target((pos[0], sum(pos)), target) or \
           can_reach_target((sum(pos), pos[1]), target)
And it works:
>>> can_reach_target((2,3),(7,5))
True
>>> can_reach_target((2,3),(4,5))
False
A limitation is that this does not work for negative coordinates - not sure if this is a requirement, just let me know if it is and I will adapt the answer.
Backtracking
On the other hand, if negative co-ordinates are not allowed, then we can also approach this as Dave suggests. This is much more efficient, as the realisation is that there is one and only one way of the robot getting to each coordinate.
The method relies on being able to determine which way we stepped: either increasing the x-coordinate or the y-coordinate. We can determine which coordinate was last changed by selecting the larger of the two. The following proof guarantees that this is the case.
The possibilities for a state change are:
1. (a, b) => (a+b, b), an x-coordinate change, and
2. (a, b) => (a, a+b), a y-coordinate change.
In case (1), the x-coordinate is now the larger of the two, since:
a > 0
a + b > b (adding b to both sides)
Similarly, in case (2), since b > 0, we can deduce that a + b > a, so the y-coordinate is the larger one.
Now we can start from the target and ask: which coordinate led us here? And the answer is simple. If the x-coordinate is greater than the y-coordinate, subtract the y-coordinate from the x-coordinate, otherwise subtract the x-coordinate from the y-coordinate.
That is to say, for a coordinate (x,y): if x > y, then we came from (x-y,y); otherwise from (x,y-x).
The first code can now be adapted to:
def can_reach_target(pos, target):
    if pos == target:
        return True
    if target[0] < pos[0] or target[1] < pos[1]:
        return False
    x, y = target
    return can_reach_target(pos, (x-y, y) if x > y else (x, y-x))
which works as expected:
>>> can_reach_target((2,3),(7,5))
True
>>> can_reach_target((2,3),(4,5))
False
Timings
>>> timeit.timeit('brute_force((2,3),(62,3))',globals=locals(),number=10**5)
3.41243960801512
>>> timeit.timeit('backtracker((2,3),(62,3))',globals=locals(),number=10**5)
1.4046142909792252
>>> timeit.timeit('brute_force((2,3),(602,3))',globals=locals(),number=10**4)
3.518286211998202
>>> timeit.timeit('backtracker((2,3),(602,3))',globals=locals(),number=10**4)
1.4182081500184722
So you can see that the backtracker is roughly two and a half times faster in both cases.
Go backwards. I'm assuming that the starting coordinates are positive. Say you want to know if a starting point of (a,b) is compatible with an end point of (x,y). One step back from (x,y) you were either at (x-y,y) or (x,y-x). If x > y choose the former, otherwise choose the latter.
I agree with Dave that going backwards is an efficient approach. If only positive coordinates are legal, then every coordinate has at most one valid parent. This lets you work backwards without a combinatorial explosion.
Here's a sample implementation:
def get_path(source, destination):
    path = [destination]
    c, d = destination
    while True:
        if (c, d) == source:
            return list(reversed(path))
        if c > d:
            c -= d
        else:
            d -= c
        path.append((c, d))
        if c < source[0] or d < source[1]:
            return None
print(get_path((1,1), (1,1)))
print(get_path((2,3), (7,5)))
print(get_path((2,3), (4,5)))
print(get_path((1,1), (6761, 1966)))
print(get_path((4795, 1966), (6761, 1966)))
Result:
[(1, 1)]
[(2, 3), (2, 5), (7, 5)]
None
[(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (6, 5), (11, 5), (16, 5), (21, 5), (26, 5), (31, 5), (36, 5), (41, 5), (46, 5), (46, 51), (46, 97), (143, 97), (143, 240), (383, 240), (623, 240), (863, 240), (863, 1103), (863, 1966), (2829, 1966), (4795, 1966), (6761, 1966)]
[(4795, 1966), (6761, 1966)]
Appendix: some observations I made along the way that might be useful for finding an O(1) solution:
(a,b) is reachable from (1,1) if and only if a and b are coprime.
If a and b have a common factor, then all children of (a,b) also have that common factor. Equivalently, if there is a path from (a,b) to (c,d), then there is also a path from (n*a, n*b) to (n*c, n*d), for any positive integer n.
If a and b are coprime and (a,b) isn't (1,1), then there are infinitely many coprime coordinates that are unreachable from (a,b). By choosing (a,b) as a starting point, you are effectively limiting yourself to some sub-branch of the tree formed by (1,1). You can never reach any of the sibling branches of (a,b), where infinitely many coordinates reside.
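If you want to check that first observation empirically, here is a small self-contained snippet (the helper reimplements the backwards walk from the answers above; the names are mine):

from math import gcd

def reachable_from_one_one(a, b):
    # Walk backwards from (a, b): the larger coordinate was changed last.
    x, y = a, b
    while (x, y) != (1, 1):
        if x == y:
            return False  # (x, x) with x > 1 has no positive parent
        x, y = (x - y, y) if x > y else (x, y - x)
    return True

# (a, b) is reachable from (1, 1) iff a and b are coprime.
assert all(reachable_from_one_one(a, b) == (gcd(a, b) == 1)
           for a in range(1, 30) for b in range(1, 30))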
A recursive function should work fine for that. You even get the number of possible paths.
def find_if_possible(x, y, x_obj, y_obj, max_depth):
    if max_depth < 0:
        return 0
    elif x == x_obj and y == y_obj:
        return 1
    elif x > x_obj or y > y_obj:
        return 0
    else:
        return (find_if_possible(x + y, y, x_obj, y_obj, max_depth - 1)
                + find_if_possible(x, y + x, x_obj, y_obj, max_depth - 1))
Related
Given a set of intervals on the real line and a parameter d > 0, find a sequence of points with gaps between neighbors less than or equal to d, such that the number of intervals that contain any of the points is minimized.
To prevent trivial solutions, we ask that the first point of the sequence lies before the first interval and the last point lies after the last interval. The intervals can be thought of as right-open.
Does this problem have a name? Maybe even an algorithm and a complexity bound?
Some background:
This is motivated by a question from topological data analysis, but it seems so general that it could be interesting for other topics, e.g. task scheduling (given a factory that has to shut down at least once a year and wants to minimize the number of tasks affected by the maintenance...).
We were thinking of integer programming and minimum cuts, but the d-parameter does not quite fit. We also implemented approximate greedy solutions in O(n^2) and O(n log n) time, but they can run into very bad local optima.
Show me a picture
I draw intervals as lines. The following diagram shows 7 intervals. d is such that you have to cut at least every fourth character. At the bottom of the diagram you see two solutions (marked with x and y). x cuts through the four intervals in the top, whereas y cuts through the three intervals at the bottom. y is optimal.
 ——— ———
 ——— ———
   ———
   ———
   ———
x x   x x
y   y   y
Show me some code:
How should we define fun in the following snippet?
intervals = [(0, 1), (0.5, 1.5), (0.5, 1.5)]
d = 1.1
fun(intervals, d)
>>> [-0.55, 0.45, 1.55] # Or something close to it
In this small example the optimal solution will cut the first interval, but not the second and third. Obviously, the algorithm should work with more complicated examples as well.
A tougher test can be the following: Given a uniform distribution of interval start times on [0, 100] and lengths uniform on [0, d], one can compute the expected number of cuts by a regular grid [0, d, 2d, 3d,..] to be slightly below 0.5*n. And the optimal solution should be better:
import numpy as np

n = 10000
delta = 1
starts = np.random.uniform(low=0., high=99, size=n)
lengths = np.random.uniform(low=0., high=1, size=n)
rand_intervals = np.array([starts, starts + lengths]).T
regular_grid = np.arange(0, 101, 1)
optimal_grid = fun(rand_intervals, delta)

# This computes the number of intervals being cut by one of the points
def cuts(intervals, grid):
    bins = np.digitize(intervals, grid)
    return sum(bins[:, 0] != bins[:, 1])

cuts(rand_intervals, regular_grid)
>>> 4987  # Expected to be slightly below 0.5*n

assert cuts(rand_intervals, optimal_grid) <= cuts(rand_intervals, regular_grid)
You can solve this optimally through dynamic programming by maintaining an array S[k] where S[k] is the best solution (covers the largest amount of space) while having k intervals with a point in it. Then you can repeatedly remove your lowest S[k], extend it in all possible ways (limiting yourself to the relevant endpoints of intervals plus the last point in S[k] + delta), and updating S with those new possible solutions.
When the lowest possible S[k] in your table covers the entire range, you are done.
A Python 3 solution using intervaltree from pip:
from intervaltree import Interval, IntervalTree

def optimal_points(intervals, d, epsilon=1e-9):
    intervals = [Interval(lr[0], lr[1]) for lr in intervals]
    tree = IntervalTree(intervals)
    start = min(iv.begin for iv in intervals)
    stop = max(iv.end for iv in intervals)
    # The best partial solution with k intervals containing a point.
    # We also store the intervals that these points are contained in as a set.
    sols = {0: ([start], set())}
    while True:
        lowest_k = min(sols.keys())
        s, contained = sols.pop(lowest_k)
        # print(lowest_k, s[-1])  # For tracking progress in slow instances.
        if s[-1] >= stop:
            return s
        relevant_intervals = tree[s[-1]:s[-1] + d]
        relevant_points = [iv.begin - epsilon for iv in relevant_intervals]
        relevant_points += [iv.end + epsilon for iv in relevant_intervals]
        extensions = {s[-1] + d} | {p for p in relevant_points
                                    if s[-1] < p < s[-1] + d}
        for ext in sorted(extensions, reverse=True):
            new_s = s + [ext]
            new_contained = set(tree[ext]) | contained
            new_k = len(new_contained)
            if new_k not in sols or new_s[-1] > sols[new_k][0][-1]:
                sols[new_k] = (new_s, new_contained)
If the range and precision are feasible to iterate over, we could first merge and count the intervals. For example,
[(0, 1), (0.5, 1.5), (0.5, 1.5)] ->
[(0, 0.5, 1), (0.5, 1, 3), (1, 1.5, 2)]
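For concreteness, here is one way the merged, counted list above could be computed (a quick unoptimized sketch; the function name is mine):

def merge_count(intervals):
    # Sweep the distinct endpoints and count how many intervals cover
    # each elementary segment between neighboring endpoints.
    points = sorted({p for iv in intervals for p in iv})
    return [(a, b, sum(1 for lo, hi in intervals if lo <= a and b <= hi))
            for a, b in zip(points, points[1:])]

merge_count([(0, 1), (0.5, 1.5), (0.5, 1.5)])
# [(0, 0.5, 1), (0.5, 1, 3), (1, 1.5, 2)]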
Now let f(n, k) represent the optimal solution with k points up to n on the number line. Then:

f(n, k) = min(
    num_intervals(n) + f(n - i, k - 1)
)

num_intervals(n) is known in O(1) from a pointer into the merged interval list. n - i does not range over every precision point up to n. Rather, it ranges over every point not more than d back that marks a change from one merged interval to the next as we move back from our current pointer in the merged-interval list.
One issue to note is that we need to store the distance between the rightmost and previous point for any optimal f(n, k). This is to avoid joining f(n - i, k - 1) where the second to rightmost point would be less than d away from our current n, making the new middle point, n - i, superfluous and invalidating this solution. (I'm not sure I've thought this issue through enough. Perhaps someone could point out something that's amiss.)
How would we know k is high enough? Given that the optimal solution may be lower than the current k, we assume that the recurrence would prevent us from finding an instance based on the idea in the above paragraph:
0.......8
 ——— ———
 ——— ———
   ———
   ———
   ———
x x   x x
y   y   y
d = 4
merged list:
[(1, 3, 2), (3, 4, 5), (4, 5, 3), (5, 6, 5), (6, 8, 2)]
f(4, 2) = (3, 0) // (intersections, previous point)
f(8, 3) = (3, 4)

There are no valid solutions for f(8, 4), since the break point we may consider between interval changes in the merged list is before the second-to-last point in f(8, 3).
Let's say I have two intervals,
[a1, a2] and [b1, b2]
where a1, a2, b1, b2 are all in the range [0, 2π]. Now, given these two intervals, I want to find their overlapping interval. This is quite tricky, since an example of two intervals is
[5, 1] and [0, 6]
which are sketched below (the red areas are the intervals).
Notice that the overlap of these two intervals consists of two intervals:
[0,1] and [5,6]
There are multiple different cases that must be treated, is there any known algorithm that does this?
I do not know of an existing algorithm (which doesn't mean there isn't one), but here's one I've come up with.
As already mentioned by @Michael Kenzel, the numbers don't increase monotonically, which makes everything very complicated.
But we can observe that we can unroll the circle onto the number line.
Each interval then appears infinitely many times with a period of 2π.
Let's first define a normalize operation as following:
normalize([a, b]):
    if (a > b):
        a -= 2π
Using this operation we unroll both our intervals onto a [-2π, 2π] part of the number line.
Example intervals:
[2, 5] -> [2, 5]
[4, π] -> [4 - 2π, π] ≈ [-2.28, π]
Two intervals on a circle can overlap at most 2 times.
(I don't have a proper proof of this, but intuitively: an overlap starts where one interval started and another one has not ended. This can happen only once on a number line and twice in our case.)
By just looking at normalized intervals, we can miss one of the overlaps. In the example above we would detect [2, π] overlap and miss [4, 5]. This happens because we have unrolled the original intervals not onto [0, 2π], but a twice longer part of the number line, [-2π, 2π].
To correct for that, we can, for example, take the part that falls onto the negative line and shift it by 2π to the right, this way having all pieces in the original [0, 2π]. But it is computationally inefficient, as we will, in the worst case, have to test 2 pieces of one interval against 2 pieces of the other interval, a total of 4 operations.
Here is an illustration of such an unlucky example that will require 4 comparisons:
If we want to be a bit more efficient, we will try to do only 2 interval-vs-interval operations. We won't need more as there will be at most 2 overlaps.
As the intervals repeat infinitely on the number line with the period of 2π, we can just take 2 neighboring duplicates of the first interval and compare them against the second one.
To make sure that the second interval will be, so to say, in between those duplicates, we can take the one that starts earlier and add 2π to both of its ends, or subtract 2π from the one that starts later.
There will be not more than two overlaps, which can be then brought back to the [0, 2π] interval by addition/subtraction of 2π.
In our original example it would look like that:
To sum it up:
given [a, b], [c, d]

[A, B] = normalize([a, b])
[C, D] = normalize([c, d])

if (A < C):
    [E, F] = [A + 2π, B + 2π]
else:
    [E, F] = [A - 2π, B - 2π]

I_1 = intersect([A, B], [C, D])
I_2 = intersect([E, F], [C, D])

bring I_1 and I_2 back to the [0, 2π] range
I think I didn't miss any corner cases, but feel free to point to any mistake in my logic.
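To make this concrete, here is a rough Python transcription of the scheme above (the function names are mine, and I use open-interval intersection, so touching endpoints don't count as an overlap; treat it as a sketch):

from math import tau  # tau = 2π

def normalize(iv):
    a, b = iv
    return (a - tau, b) if a > b else (a, b)

def intersect(p, q):
    # Ordinary 1-D intersection; None if the overlap is empty.
    lo, hi = max(p[0], q[0]), min(p[1], q[1])
    return (lo, hi) if lo < hi else None

def to_circle(iv):
    # Bring an interval back into [0, 2π] (possibly as a wrapping one).
    a, b = iv
    if b <= 0:
        return (a + tau, b + tau)
    return (a + tau, b) if a < 0 else (a, b)

def circular_overlap(iv1, iv2):
    A, B = normalize(iv1)
    C, D = normalize(iv2)
    # Duplicate the interval that starts earlier, one period over.
    E, F = (A + tau, B + tau) if A < C else (A - tau, B - tau)
    parts = [intersect((A, B), (C, D)), intersect((E, F), (C, D))]
    return [to_circle(p) for p in parts if p is not None]

print(circular_overlap((5, 1), (0, 6)))  # [(0, 1), (5, 6)]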
The first thing to note is that no new angle is created when you discover the intersection sector of two sectors 'A' and 'B'. Sector 'A' is defined by two limits, Acw and Acc, and sector 'B' is defined by two limits, Bcw and Bcc. Here 'cw' and 'cc' denote 'ClockWise' and 'CounterClockwise'.
The boundaries of the intersection sector will be made from at most two out of these four angles. The problem is entirely concerned with selecting two out of these four angles to be the limits of the intersection sector, let's call them Icw and Icc.
It is important to distinguish between "cw" and "cc" and keep them straight throughout the problem, because any pair of angles actually defines two sectors, right? This is as you have shown in your picture at the top of this answer. The issue of "angle wraparound" will arise naturally as the problem is solved.
Some Helper Functions
OK, so we have our four angles, and we have to select two of the four to be the limits of our intersection sector. In order to do this, we need an operator that determines whether an angle dAngle falls between two limits, let's call them dLimitCW and dLimitCC.
It is in this operator that the issue of "angle wraparound" arises. I did mine by constraining all angles to the range of -π to π. To determine whether dAngle falls between dLimitCW and dLimitCC, we subtract dAngle from dLimitCW and dLimitCC, and then constrain the results to fall within the [-π, π] range by adding or subtracting 2π. This is just like rotating the polar coordinate system by the angle dAngle, so that what was dAngle is now zero; what was dLimitCW is now dLimitCWRot, and what was dLimitCC is now dLimitCCRot.
The code looks like this:
bool AngleLiesBetween(double dAngle, double dLimitCW, double dLimitCC)
{
    double dLimitCWRot, dLimitCCRot;

    // Rotate everything so that dAngle is on zero axis.
    dLimitCWRot = constrainAnglePlusMinusPi(dLimitCW - dAngle);
    dLimitCCRot = constrainAnglePlusMinusPi(dLimitCC - dAngle);

    if (dLimitCWRot > dLimitCCRot)
        return (signbit(dLimitCWRot * dLimitCCRot));
    else
        return (!signbit(dLimitCWRot * dLimitCCRot));
}
where the function constrainAnglePlusMinusPi is
double constrainAnglePlusMinusPi(double x)
{
    x = fmod(x + pi, 2*pi);
    if (x < 0)
        x += 2*pi;
    return x - pi;
}
Once we have our "angle lies between" function, we use it to select which of the four angles that made up the limit angles of the two sectors make up the intersection sector.
The Nitty Gritty
To do this, we must first detect the case in which the two angular ranges do not overlap; we do this by calling our AngleLiesBetween() function four times; if it returns a "false" all four times, there is no overlap and the intersection is undefined:
if (
    (AngleLiesBetween(Acw, Bcw, Bcc) == false) &&
    (AngleLiesBetween(Acc, Bcw, Bcc) == false) &&
    (AngleLiesBetween(Bcw, Acw, Acc) == false) &&
    (AngleLiesBetween(Bcc, Acw, Acc) == false)
)
{
    // Ranges have no overlap, result is undefined.
    *this = CAngleRange(0.0f, 0.0f);
    return;
}
Here I'm returning a CAngleRange object containing (0.0, 0.0) to indicate "no overlap," but you can do it some other way if you like, like by having your "intersection" function return a value of "false."
Once you've handled the "no overlap" case, the rest is easy. You check each of the six remaining combinations one at a time and determine which two limits are the limits of I by their outcomes:
if ((AngleLiesBetween(Acw, Bcw, Bcc) == true) && (AngleLiesBetween(Acc, Bcw, Bcc) == false)) then Icw = Acw and Icc = Bcc;
if ((AngleLiesBetween(Acw, Bcw, Bcc) == false) && (AngleLiesBetween(Acc, Bcw, Bcc) == true)) then Icw = Bcw and Icc = Acc;
if ((AngleLiesBetween(Acw, Bcw, Bcc) == true) && (AngleLiesBetween(Acc, Bcw, Bcc) == true)) then Icw = Acw and Icc = Acc;
if ((AngleLiesBetween(Bcw, Acw, Acc) == true) && (AngleLiesBetween(Bcc, Acw, Acc) == false)) then Icw = Bcw and Icc = Acc;
if ((AngleLiesBetween(Bcw, Acw, Acc) == false) && (AngleLiesBetween(Bcc, Acw, Acc) == true)) then Icw = Acw and Icc = Bcc;
and finally
if ((AngleLiesBetween(Bcw, Acw, Acc) == true) && (AngleLiesBetween(Bcc, Acw, Acc) == true)) then Icw = Bcw and Icc = Bcc.
You don't have to constrain the results to [-π, π] or [0, 2π] because you haven't changed them; each of the result angles is just one of the angles you presented as input to the function in the first place.
Of course you can optimize and streamline the code I've given in various ways that take advantage of the symmetries inherent in the problem, but I think when all is said and done you have to compare eight separate angle combinations for "between-ness" no matter how you optimize things. I like to keep things simple and straightforward in my code, in case I have made an error and have to come back and debug it in a few years when I've forgotten all my clever optimizations and streamlining efforts.
About Angle Wraparound
Notice that the issue of "angle wraparound" got handled in function AngleLiesBetween(); we rotated the coordinate system to put the angle we are checking (which we called dAngle) for "between-ness" at zero degrees. This naturally puts the two angle limits (dLimitCW and dLimitCC) on either side of the polar origin in the case in which dAngle is between those limits. Thus, the wrap-around issue disappears; we've "rotated it out of the way," so to speak.
About the Polar Coordinate System
It may be worth noting that in the polar coordinate system I'm using, CW angles are more positive than CC angles (unless the wrap-around is between them). This is the opposite of the polar coordinates we learn in calculus class, where angles increase in the CC (counterclockwise) direction.
This is because of the quandary that results from the decision — made long, long ago — that computer devices (like display surfaces, originally implemented on cathode ray tubes) would count the vertical direction as increasing downward, instead of upward, as we learn when we study analytic geometry and calculus.
The people who made this decision did so because they wanted to display text on the screen, with the "first" line at the top, the "second" line (line 2) below the "first" line (line 1), and so forth. To make this easier to keep track of this in their early machine code, they had the +y direction go "down," as in "toward the bottom of the screen." This decision has far-reaching and annoying consequences for people who do image geometry in code.
One of the consequences is that by flipping the direction of +y, the "sense of rotation" also flipped from the conventional "angle increases counterclockwise" sense we're used to from high-school math. This issue is mostly invisible at the coding level; it only comes out when you look at your results on the screen.
This is why it is very important to write code to visualize your results on the screen before you trust them, and certainly before you let your customer see them.
As long as you have intervals where the numbers just increase monotonically, it's simple; you just take the max() of the minimums and the min() of the maximums and you're done.

It would seem that the major source of complication here is the fact that you can have intervals that wrap around at 0, i.e., the numbers that are part of the interval are not monotonically increasing. One way around this problem is to simply treat intervals that wrap around not as one interval but as the union of two intervals. For example, think of [5, 1] as [5, 2π] ∪ [0, 1]. Then the problem of finding the intersection of [5, 1] and [0, 6] turns into the problem of finding the intersection of [5, 2π] and [0, 6] as well as the intersection of [0, 1] and [0, 6].

Mathematically, you'd be taking advantage of the distributive law of set intersection, i.e., (A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C). So given two intervals A and B, we would start by splitting each into two intervals: A1 and A2, and B1 and B2, where a wrapping interval contributes one piece that runs up to 2π and one piece that starts at 0. Splitting like this, we can compute our intersections as

(A1 ∪ A2) ∩ (B1 ∪ B2) = (A1 ∩ (B1 ∪ B2)) ∪ (A2 ∩ (B1 ∪ B2)) = (A1 ∩ B1) ∪ (A1 ∩ B2) ∪ (A2 ∩ B1) ∪ (A2 ∩ B2)

i.e., compute the intersection of all combinations of A1, A2, B1, and B2…
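A short sketch of this splitting approach in Python (my own naming; assumes input angles lie in [0, 2π] and uses open-interval intersection):

from math import tau  # tau = 2π

def split(iv):
    # Represent a possibly wrapping interval as one or two ordinary ones.
    a, b = iv
    return [(a, b)] if a <= b else [(a, tau), (0, b)]

def intersect(p, q):
    lo, hi = max(p[0], q[0]), min(p[1], q[1])
    return (lo, hi) if lo < hi else None

def circular_intersection(A, B):
    # (A1 ∪ A2) ∩ (B1 ∪ B2) = union of all pairwise intersections.
    return [r for p in split(A) for q in split(B)
            if (r := intersect(p, q)) is not None]

print(circular_intersection((5, 1), (0, 6)))  # [(5, 6), (0, 1)]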
I am modelling a particle in 3D space.
{0} The particle starts at time t0 from a known position P0 with a velocity V0. The velocity is computed using its known previous position of P-1 at t-1.
{1} The particle is targeted to go to P1 at t1 with a known velocity of V1.
{..} The particle moves as fast as it can, without jerks (C1 continuous) bound by a set of constraints that clamp the acceleration along x, y and z independently. The maximum acceleration/deceleration along x, y and z are known and are Xa, Ya, and Za. The max rate of change of acceleration along x, y and z are defined by Xr, Yr, and Zr.
{n} After an unknown number of time steps it reaches Pn at some time (say tn) with a velocity of Vn.
{n+1} It moves to Pn+1 at tn+1.
The problem I have is to compute the minimum time for the transition from P0 to Pn and to generate the intermediate positions and velocity directions thereof. A secondary goal is to accelerate smoothly instead of applying acceleration that results in jerks.
Current Approach:
find the dimension {x, y or z} that will take the longest to align from start P0 to end Pn. This will be the critical dimension and will determine the total time. This is fairly straightforward and I can write something to this effect.
interpolate smoothly without jitters from P0 to Pn in all dimensions such that the velocity at Pn is as expected. I am not sure how to approach this.
Any inputs/physics engines that already do this will be useful. It is a commercial project and I cannot put dependencies on large 3rd party libraries with restrictive licenses.
Note: Particle at P0 and Pn have little or no acceleration.
If I understand correctly, you have a point (P0, V0), with V0 = P0 - P-1, and a point (Pn, Vn), with Vn = Pn - Pn-1, and you want to find the fewest intermediate points by adjusting the acceleration at each time step.
Let's define the acceleration at ti: Ai = Vi - Vi-1, with abs(Ai) <= mA. Here, since the problem is axis-independent, abs is the member-wise absolute value instead of the norm (or vector magnitude), and mA is the maximum acceleration vector, positive in each dimension. Let's also consider that Pn > P0 (member-wise).
From that, we get Vi = Vi-1 + Ai and so Pi = Pi-1 + Vi-1 + Ai.
If you need to go from some point to another in the fastest way possible, the obvious thing to do, whatever the initial velocity, is to accelerate as much as possible until you reach the goal. However, since your problem is discrete and you have a terminal velocity Vn, using that method will probably overshoot, and with a different terminal velocity.
However, you can do the same thing in reverse, starting from the end point. And if you start simultaneously from both points, you will make two paths crossing each other in each dimension (not necessarily crossing in 3D, but, in each dimension, the relative direction of both paths changes at some "crossing" point).
Let's take a one-dimensional example. (P0, V0) = (0, -2) and (Pn, Vn) = (35, -1), and mA = 1.
The first path, with Ai = mA, goes like this:
(0, -2) -> (-1, -1) -> (-1, 0) -> (0, 1) -> (2, 2) ->
(5, 3) -> (9, 4) -> (14, 5) -> (20, 6) -> (27, 7) -> ...
The second path, with Ai = -mA but in reverse, goes like this:
(35, -1) <- (36, 0) <- (36, 1) <- (35, 2) <- (33, 3) <-
(30, 4) <- (26, 5) <- (21, 6) <- (15, 7) <- ...
You can see the paths cross with the same velocity somewhere between 20 and 21. That gives you the fastest acceleration and deceleration parts of the path you need, but the two parts aren't connected. However, it's easy to connect them by finding the closest points of same velocity; let's call these points Pq and Pr. Here, Pq = (20, 6) and Pr = (21, 6). Since that velocity is calculated between current and previous points, take the point before Pq (Pq-1, or (14, 5) in the example) and the point Pr, and try connecting them.
If Pq >= Pr >= Pq - 2mA, then you can connect them directly by taking Pq-1 unchanged, and Pr with Vr = Pr - Pq-1.
Else, take Pq-2 and Pr-1 (where Vr-1 = Vr - mA, because it's in reverse) and try connecting those by adding intermediate points. Since these points have a velocity difference of mA, you can search only for intermediate points with the same velocity Vs such that Vq-2 <= Vs <= Vr-1.
If you still can't find a solution, then take Pq-3 and Pr-2 and repeat the process with more intermediate points.
In the example I took, Pq < Pr, so we have to try with Pq-2 = (9, 4) and Pr-1 = (26, 5). We can connect those with a sequence of 3 points, for example (9, 4) -> (13, 4) -> (17, 4) -> (21, 4) -> (26, 5).
In any case, this method will give you the smallest amount of intermediate points, meaning the fastest path between P0 and Pn.
If you then want to reduce jerk, then you can forget the points calculated previously and do an interpolation with the number of points you now know to be minimal.
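To illustrate the two-path construction on the 1-D example above, here is a small sketch; it only generates the forward and reverse paths (reproducing the sequences shown earlier), and connecting them is done as described in the prose:

def forward(p, v, mA, steps):
    # Accelerate as hard as allowed from the start point.
    pts = [(p, v)]
    for _ in range(steps):
        v += mA            # A[i] = +mA
        p += v             # p[i] = p[i-1] + v[i]
        pts.append((p, v))
    return pts

def backward(p, v, mA, steps):
    # Walk the end point backwards, undoing steps taken with A[i] = -mA.
    pts = [(p, v)]
    for _ in range(steps):
        p -= v             # p[i-1] = p[i] - v[i]
        v += mA            # v[i-1] = v[i] - A[i] = v[i] + mA
        pts.append((p, v))
    return pts

print(forward(0, -2, 1, 9))    # (0,-2) -> (-1,-1) -> ... -> (27, 7)
print(backward(35, -1, 1, 8))  # (35,-1) <- (36,0) <- ... <- (15, 7)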
After playing around with some ideas, I came up with another solution, more accurate and probably faster, if done correctly, than that of my previous answer. It is however quite complicated and requires quite a bit of maths, although not very complex maths. Moreover, this is a work in progress: I am still investigating some areas. Nonetheless, from what I've tried, it does already produce very good results.
The problem
Definitions and goal
Throughout this answer, p[n] refers to the position of the nth point, v[n] to its velocity, a[n] to its acceleration, and j[n] to its jerk (the derivative of acceleration). The velocity of the nth point depends only on its position and that of the previous point. Similarly for acceleration and jerk, but with the point's velocity and acceleration, respectively.
We have a start point and an end point, respectively p[0] and p[n], both with associated velocities v[0] and v[n]. The goal is to place n-1 points in between, with an arbitrary n, such that, along the X, Y, and Z axes, the absolute values of acceleration and jerk at any of these points (and at p[n]) are below some limits, respectively aMaxX, aMaxY, and aMaxZ for acceleration, and jMaxX, jMaxY, and jMaxZ for jerk.
What we want to find is the values of p[i] for all i ∈ [1; n-1]. Because p[i] = p[i-1] + v[i], this is the same as finding v[i]. By the same reasoning, with v[i] = v[i-1] + a[i] and a[i] = a[i-1] + j[i], it is also the same as finding a[i] or j[i].
a[0] and a[n+1] are assumed to be zero.
Observations and simplifications
Because the problem's constraints are independent of the dimension, we can solve for each of the three dimensions separately, as long as the number of points obtained in each case is the same. Therefore, I am only going to solve the one-dimensional version of the problem, using aMax and jMax, irrespective of the axis.
*[WIP]* Determine the worst case to solve first, then solve the other ones, knowing the number of points.
The actual positions of the two given points are irrelevant, what matters is the relative distance between them, which we can define as P = p[n] - p[0]. Let's also define the ranges R = [1; n] and R* = [1; n+1].
Because of the discrete nature of the problem, we can obtain the following equations. Note that ∑{i∈R}(x[i]) is the sum of all x[i] for i∈R.
Ⓐ ∑{i∈R}(v[i]) = P
Ⓑ ∑{i∈R}(a[i]) = v[n] - v[0]
Ⓧ ∑{i∈R*}(j[i]) = 0
Ⓧ comes from the assumption that a[0] = a[n+1] = 0.
From Ⓐ and v[i] = v[i-1] + a[i], i∈R, we can deduce:
Ⓒ ∑{i∈R}((n+1-i)*a[i]) = P - n*v[0]
By the same logic, from Ⓑ, Ⓒ, and a[i] = a[i-1] + j[i], i∈R, we can deduce:
Ⓨ ∑{i∈R}((n+1-i)*j[i]) = v[n] - v[0]
Ⓩ ∑{i∈R}(T[n+1-i]*j[i]) = P - n*v[0]
Here, T[n] is the nth triangular number, defined by T[n] = n*(n+1)/2.
The equations Ⓧ, Ⓨ, and Ⓩ are the relevant ones for the next parts.
The approach
In order to minimize n, we can start with a small value of n (1, 2?) and find a solution. Then, if max{i∈R}(abs(a[i])) > aMax or max{i∈R}(abs(j[i])) > jMax, we can increment n and repeat the process.
*[WIP]* Find a lower bound for n to avoid unnecessary calculations from small values of n. Or estimate the correct value of n and pinpoint it by testing solutions.
Finding a solution requires finding the values of j[i] for all i∈R*. I have yet to find an optimal form for j[i], but defining j*[i], r[i] and s[i] such that
j[i] = j*[i] + r[i]v[0] + s[i]v[n]
works quite well.
*[WIP]* Find a better form for j[i]
By doing that, we transform our n-1 unknowns (j[i], i∈R, note that j[n+1] = -∑{i∈R}(j[i])) into 3(n-1) easier to find unknowns. Here are a few things we can deduce right now from Ⓧ, Ⓨ, and Ⓩ.
∑{i∈R*}(r[i]) = 0
∑{i∈R*}(s[i]) = 0
∑{i∈R}((n+1-i)*r[i]) = -1
∑{i∈R}((n+1-i)*s[i]) = 1
∑{i∈R}(T(n+1-i)*r[i]) = -n
∑{i∈R}(T(n+1-i)*s[i]) = 0
As a reminder, here are Ⓧ, Ⓨ, and Ⓩ.
Ⓧ ∑{i∈R}(j[i]) + j[n+1] = 0
Ⓨ ∑{i∈R}((n+1-i)*j[i]) = v[n] - v[0]
Ⓩ ∑{i∈R}(T[n+1-i]*j[i]) = P - n*v[0]
The goal now is to find adequate special cases to help us determine these unknowns.
The special cases
v[0] = v[n] = 0
By playing with values of jerk, I observed that taking all of j[i], i∈R* as part of a parabola yields excellent results for minimizing both jerk and acceleration. Although it isn't the best possible fit, I haven't found better yet.
The intuition behind values of jerk coming from a parabola is that, if the values of position are to follow a polynomial, then its degree must be at least 5, and can be exactly 5. This is easier to understand if you think about the values of velocity following a 4th degree polynomial. Given the constraints that v[0] and v[n] are set, that a[0] = a[n+1] = 0, and that its integral over [0; n] must equal P, this polynomial must have degree at least 4. This holds for both the continuous and discrete cases. Finally, it seems that taking the smallest degree leads to a smoother jerk as well as making it easier to calculate.
Here is an example of a continuous case where the position is in purple, the velocity in blue, the acceleration in yellow and the jerk in red.
In case you want to play with this, here is how to define the position curve in terms of n, p[0], p[n], v[0], and v[n] (the other ones are simply derivatives).
a = (-3n(v[n]+v[0]) + 6(p[n]-p[0])) / n^5
b = (n(7v[n]+8v[0]) - 15(p[n]-p[0])) / n^4
c = (-n(4v[n]+6v[0]) + 10(p[n]-p[0])) / n^3
p[x] = ax^5 + bx^4 + cx^3 + v[0]x + p[0]
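If you want to double-check these coefficients, here is a quick symbolic verification (sympy assumed) of the boundary conditions p(0) = p[0], p(n) = p[n], p'(0) = v[0], p'(n) = v[n], and p''(0) = p''(n) = 0:

import sympy as sp

x, n, p0, pn, v0, vn = sp.symbols('x n p0 pn v0 vn', positive=True)
a = (-3*n*(vn + v0) + 6*(pn - p0)) / n**5
b = (n*(7*vn + 8*v0) - 15*(pn - p0)) / n**4
c = (-n*(4*vn + 6*v0) + 10*(pn - p0)) / n**3
p = a*x**5 + b*x**4 + c*x**3 + v0*x + p0

assert sp.simplify(p.subs(x, 0) - p0) == 0
assert sp.simplify(p.subs(x, n) - pn) == 0
assert sp.simplify(sp.diff(p, x).subs(x, 0) - v0) == 0
assert sp.simplify(sp.diff(p, x).subs(x, n) - vn) == 0
assert sp.simplify(sp.diff(p, x, 2).subs(x, 0)) == 0
assert sp.simplify(sp.diff(p, x, 2).subs(x, n)) == 0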
If v[0] = v[n] = 0, then j[i] = j*[i], i∈R*. That means that the values j*[i] follow a quadratic polynomial. So we want to find α, β, and γ such that Ⓟ holds.
Ⓟ j*[i] = αi^2 + βi + γ, i∈R*
From Ⓧ, Ⓨ, and Ⓩ follow these equations.
α*∑{i∈R*}(i^2) + β*∑{i∈R*}(i) + γ*∑{i∈R*}(1) = 0
α*∑{i∈R}((n+1-i)*i^2) + β*∑{i∈R}((n+1-i)*i) + γ*∑{i∈R}(n+1-i) = 0
α*∑{i∈R}(T(n+1-i)*i^2) + β*∑{i∈R}(T(n+1-i)*i) + γ*∑{i∈R}(T(n+1-i)) = P
Solving this system gives α, β, and γ, which can be used with Ⓟ to calculate j*[i], i∈R*. Note that j*[i] = j*[n+2-i], so only the upper half of the calculations need to be done.
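For what it's worth, the 3x3 system above can be solved numerically in a few lines (a sketch, numpy assumed; T is the triangular number from before):

import numpy as np

def parabola_jerk(n, P):
    # Solve for alpha, beta, gamma in j*[i] = alpha*i^2 + beta*i + gamma
    # (the case v[0] = v[n] = 0), using the three equations above.
    # Needs n >= 2; the system is singular for n = 1.
    T = lambda k: k * (k + 1) // 2
    Rstar = range(1, n + 2)   # R* = [1; n+1]
    R = range(1, n + 1)       # R  = [1; n]
    M = np.array([
        [sum(i*i for i in Rstar), sum(Rstar), n + 1],
        [sum((n+1-i)*i*i for i in R), sum((n+1-i)*i for i in R),
         sum(n+1-i for i in R)],
        [sum(T(n+1-i)*i*i for i in R), sum(T(n+1-i)*i for i in R),
         sum(T(n+1-i) for i in R)],
    ], dtype=float)
    alpha, beta, gamma = np.linalg.solve(M, [0.0, 0.0, float(P)])
    return [alpha*i*i + beta*i + gamma for i in Rstar]  # j*[i], i ∈ R*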
v[0] = v[n] = 1/n
If v[0] = v[n] = 1/n, then j[i] = 0, i∈R*. This means that Ⓠ holds.
Ⓠ r[i] + s[i] = -n*j*[i], i∈R*
v[0] = 0, j[i∈L] = J, j[h] = 0, j[i∈U] = -J
L and U are respectively the lower and upper halves of R*, and h is the value in between, if n+1 is odd. In other words:
if n is odd:
L = [1; (n+1)/2]
U = [(n+3)/2; n+1]
if n is even:
L = [1; n/2]
h = n/2+1
U = [n/2+2; n+1]
This special case corresponds to the maximum overall acceleration between p[0] and p[n] while minimizing abs(j[i]), i∈R*. Here, Ⓩ gives us the following equation.
∑{i∈R}(T[n+1-i]*j[i]) = P
∑{i∈L}(T[n+1-i])*j[1] + ∑{i∈U}(T[n+1-i])*j[n+1] = P
j[1] = P / [ ∑{i∈L}(T[n+1-i]) - ∑{i∈U}(T[n+1-i]) ]
This gives j[1], and so every j[i], i∈R*. We can then calculate v[n] using Ⓨ.
Putting the pieces together
Each special case gives us, for some values of v[0], v[n] and P, a relation of the form
αj*[i] + βr[i] + γs[i] = δ.
By treating three special cases (assuming they are not similar, meaning they do not give the same relation), we have a system of three equations that, once solved, gives the values of j*[i], r[i] and s[i] for all i∈R*.
As a result, we can calculate, for each value of n, values of j[i] depending on v[0], v[n] and P. They can be precalculated, which means testing them for any value of n can be very fast. Thereby, we can very quickly find a good estimate for the smallest number of points needed in the trajectory, as well as a good approximation of the best trajectory possible, as long as we have precalculated values up to a sufficiently large value of n.
Answer
I suggest you take the following function:
X(n) = Xstart + Vxstart*n + (-6Xstart - 3Vxstart + 6Xend - 3Vxend + c/2)*n^2 + (8Xstart + 3Vxstart - 8Xend + 5Vxend - c)*n^3 + (-3Xstart - Vxstart + 3Xend - 2Vxend + c/2)*n^4
(for each coordinate X,Y,Z)
Here are some graphs of what this gives; I took c=2 for both samples:
For Xstart=1, Vxstart=1, Xend=3, Vxend=-2, this gives:
X(n) = 1 + n + 16n^2 - 25n^3 + 10n^4
For Xstart=-4, Vxstart=-4, Xend=4, Vxend=0, this gives:
X(n) = -4 - 4n + 61n^2 - 78n^3 + 29n^4
Here c is a number from 0.1 to 5; it is up to you to decide. The higher c is, the faster the function will go to that point (but it might have to turn back if c > 4; see the graphs below).
The polynomial comes from the following calculation (figure omitted), where a = x0, b = v0, c = xe, d = ve, and e is the magic constant.
Explanation
Based on Nelfeal's answer, my idea was to try to solve the given problem with polynomials.
We can change the problem by defining a new axis which points in the direction P[last] - P[0], reducing the problem to dimension 1.
We can think about the problem in continuous mathematics instead of discrete mathematics (e.g. use functions instead of sequences), and go back to the discrete world, which is just a special case of the continuous one.
We can change the units for time and space so that the time is 1 and the distance is 1, so that the problem is simplified to:
Find a function 𝒇 which satisfies the following :
𝒇(0) = 0 and 𝒇(1) = 1
𝒇'(0) = 0 and 𝒇'(1) = 0
For x∈ℝ, |𝒇''(x)| < c, where c is the maximum acceleration
We have
P(X) = ∑{i∈ℕ} A[i] X^i
P'(X) = ∑{i∈ℕ} (i+1) A[i+1] X^i
P''(X) = ∑{i∈ℕ} (i+2)(i+1) A[i+2] X^i
We need:
1. P(0) = 0
2. P(1) = 1
3. P'(0) = 0
4. P'(1) = 0
5. -c <= P''(x) <= c
Thus it means:
A[0] = 0 (from 1.)
A[1] = 0 (from 3.)
P(1) = ∑{i∈ℕ} A[i] = 1
P'(1) = ∑{i∈ℕ} (i+1) A[i+1] = 0
P''(x) = ∑{i∈ℕ} (i+2)(i+1) A[i+2] x^i must stay in [-c, c]
The last condition is the most complex one, and can be simplified by requiring P''(1) = c.
We will have c vary to see what changes.
After inverting a 3x3 matrix, we get the following result:
P(x) = (c/2+6) x^2 - (c+8) x^3 + (c/2+3) x^4
For c=0.15, this gives :
For c=1, this gives:
For c=4, we see a bounce back :
If we take c from 0.1 to 6, we get following 3d graph :
Note that we have solved this for polynomials of degree 4, but you might do the same thing for higher degrees (up to 10 if you want) to get more possibilities for your functions.
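As a sanity check, the degree-4 solution can be verified symbolically (sympy assumed); this also confirms that the constant c is exactly P''(1):

import sympy as sp

x, c = sp.symbols('x c')
P = (c/2 + 6)*x**2 - (c + 8)*x**3 + (c/2 + 3)*x**4

assert P.subs(x, 0) == 0                                  # P(0) = 0
assert sp.simplify(P.subs(x, 1) - 1) == 0                 # P(1) = 1
assert sp.diff(P, x).subs(x, 0) == 0                      # P'(0) = 0
assert sp.simplify(sp.diff(P, x).subs(x, 1)) == 0         # P'(1) = 0
assert sp.simplify(sp.diff(P, x, 2).subs(x, 1) - c) == 0  # P''(1) = c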
I was wondering whether, from a certain point in something like a binary tree, I could get to another certain point.
I should say also, that I don't have a tree structure. I will have just points.
For example, (342,124) -> (23420,1324): the program should tell me whether it is possible to go from (342,124) to (23420,1324).
My coordinates are of the form (depthToNode, node). I just need to know whether I can move from one point to another, where the points are linked in exactly the same way, with the same values, as in the data structure in the image.
Some explanation:
The top node is (0, 0)
Every MOVE increases depthToNode by one; at the same time, the value of the node decreases by 1 when moving to the left or increases by 1 when moving to the right. Every node is connected to a left and a right subnode. Thus the basic MOVE can only be (+1, -1) or (+1, +1).
At first this appeared to me like a problem that can be solved using the Breadth First Search or Depth First Search algorithms.
After having a closer look at the problem, I noticed that there is a clear pattern that can be turned into an equation. I focused on the fact that the left subnode has value Node-1 and the right subnode has value Node+1.
So thinking in terms of points we can have points (a, b) and (c, d):
For (a, b) and (c, d) where c >= a,
you have N = c - a consecutive operations.
There are two types of operations, L = -1 and R = +1.
A solution exists if there are integers l and r such that:
L*l + R*r = d - b, where l + r = N and l >= 0, r >= 0.
So:
L*(N - r) + R*r = d - b
-(N - r) + r = d - b
-N + 2r = d - b
2r = d - b + N
2r = d - b + c - a
So in the end: if d - b + c - a is even and nonnegative, and r = (d - b + c - a)/2 is at most N (so that l >= 0), a path exists.
Let's try it:
(0, 0) -> (3, 1): 1 - 0 + 3 - 0 = 4, which is even and nonnegative with r = 2 <= N = 3 (a path exists).
(2, 0) -> (4, -4): -4 - 0 + 4 - 2 = -2, which is negative (no path exists).
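Putting the whole test into code (a direct transcription of the derivation above; the r <= N bound enforces l >= 0):

def reachable(a, b, c, d):
    # N = c - a moves, each changing the node value by -1 (L) or +1 (R).
    N = c - a
    if N < 0:
        return False
    two_r = d - b + N          # 2r = d - b + c - a
    return two_r % 2 == 0 and 0 <= two_r // 2 <= N

print(reachable(0, 0, 3, 1))   # True:  2r = 4, r = 2
print(reachable(2, 0, 4, -4))  # False: 2r = -2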
Given a set of closed regions [a,b], where a and b are integers, I need to find another set of regions that cover the same numbers but are disjoint.
I suppose it is possible to do naively by iterating through the set several times, but I am looking for a recommendation of a good algorithm for this. Please help.
EDIT:
to clarify, the resulting regions cannot be larger than the original ones, I have to come up with disjoint regions that are contained by the original ones. In other words, I need to split the original regions on the boundaries where they overlap.
example:
3,8
1,4
7,9
11,14
result:
1,3
3,4
4,7
7,8
8,9
11,14
Just sort all endpoints left to right (remembering their type: start or end). Sweep left to right, keeping a counter that starts at 0. Whenever you come across a start, increment the counter; when you come across an end, decrement it (note that the counter is always at least 0).
Keep track of the last two points. If the counter is greater than zero - and the last two points are different (prevent empty ranges) - add the interval between the last two points.
Pseudocode (the covered-segment check must use the counter value from before the current point is processed):

points = all interval endpoints
sort(points)
previous = points[0]
counter = 1
for (int i = 1; i < #points; i++) {
    current = points[i]
    if (counter > 0 and previous != current)
        add (previous, current) to output
    if (current was start point)
        counter++
    else
        counter--
    previous = current
}
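For reference, here is a direct Python rendering of that pseudocode (assuming closed intervals of positive length, given as (start, end) pairs):

def disjoint_cover(intervals):
    events = sorted((p, kind) for s, e in intervals
                    for p, kind in ((s, 'start'), (e, 'end')))
    out = []
    counter = 1               # events[0] is necessarily a start
    previous = events[0][0]
    for current, kind in events[1:]:
        # The segment (previous, current) is covered iff the counter
        # is positive before processing the current endpoint.
        if counter > 0 and previous != current:
            out.append((previous, current))
        counter += 1 if kind == 'start' else -1
        previous = current
    return out

print(disjoint_cover([(3, 8), (1, 4), (7, 9), (11, 14)]))
# [(1, 3), (3, 4), (4, 7), (7, 8), (8, 9), (11, 14)]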
(This is a modification of an answer that I posted earlier today which I deleted after I discovered it had a logic error. I later realized that I could modify Vincent van der Weele's elegant idea of using parenthesis depth to fix the bug)
On Edit: Modified to be able to accept intervals of length 0
Call an interval [a,a] of length 0 essential if a doesn't also appear as an endpoint of any interval of length > 0. For example, in [1,3], [2,2], [3,3], [4,4] the 0-length intervals [2,2] and [4,4] are essential but [3,3] isn't.
Inessential 0-length intervals are redundant and thus need not appear in the final output. When the list of intervals is initially scanned (loading the basic data structures), points corresponding to 0-length intervals are recorded, as are the endpoints of intervals of length > 0. When the scan is completed, two instances of each point corresponding to an essential 0-length interval are added to the list of endpoints, which is then sorted. The resulting data structure is a multiset where the only repetitions correspond to essential 0-length intervals.
For every endpoint in the list, define the pdelta (parentheses delta) of the endpoint as the number of times that point appears as a left endpoint minus the number of times it appears as a right endpoint. Store these in a dictionary keyed by the endpoints.
[a,b] where a,b are the first two elements of the list of endpoints is the first interval in the list of disjoint intervals. Define the parentheses depth of b to be the sum of pdelta[a] and pdelta[b]. We loop through the rest of the endpoints as follows:
In each pass through the loop, look at the parentheses depth of b. If it is not 0 then b is still needed for one more interval: let a = b, let the new b be the next value in the list, adjust the parentheses depth by the pdelta of the new b, and add [a,b] to the list of disjoint intervals. Otherwise (if the parentheses depth of b is 0), let the next [a,b] be the next two points in the list and adjust the parentheses depth accordingly.
Here is a Python implementation:
def disjointify(intervals):
    if len(intervals) == 0: return []
    pdelta = {}
    ends = set()
    disjoints = []
    onePoints = set()  # one-point intervals
    for (a, b) in intervals:
        if a == b:
            onePoints.add(a)
            if not a in pdelta: pdelta[a] = 0
        else:
            ends.add(a)
            ends.add(b)
            pdelta[a] = pdelta.setdefault(a, 0) + 1
            pdelta[b] = pdelta.setdefault(b, 0) - 1
    onePoints.difference_update(ends)
    ends = list(ends)
    for a in onePoints:
        ends.extend([a, a])
    ends.sort()
    a = ends[0]
    b = ends[1]
    pdepth = pdelta[a] + pdelta[b]
    i = 1
    disjoints.append((a, b))
    while i < len(ends) - 1:
        if pdepth != 0:
            a = b
            b = ends[i + 1]
            pdepth += pdelta[b]
            i += 1
        else:
            a = ends[i + 1]
            b = ends[i + 2]
            pdepth += (pdelta[a] + pdelta[b])
            i += 2
        disjoints.append((a, b))
    return disjoints
Sample output which illustrates various edge cases:
>>> example = [(1,1), (1,4), (2,2), (4,4),(5,5), (6,8), (7,9), (10,10)]
>>> disjointify(example)
[(1, 2), (2, 2), (2, 4), (5, 5), (6, 7), (7, 8), (8, 9), (10, 10)]
>>> disjointify([(1,1), (2,2)])
[(1, 1), (2, 2)]
(I am using Python tuples to represent the closed intervals even though it has the minor drawback of looking like the standard mathematical notation for open intervals).
A final remark: referring to the result as a collection of disjoint intervals might not be accurate, since some of these intervals have nonempty, albeit one-point, intersections.