How to find number of steps to transform (a,b) to (x,y) - algorithm

Given two numbers a = 1 and b = 1.
At each step, you can do one of the following:
a += b;
b += a;
If it is possible to transform a into x and b into y, find the minimum number of steps needed.
x and y can be arbitrarily large (more than 10^15).
My approach so far was a recursive backtracking search, which is around O(2^min(x,y)) in complexity (too large). DP won't work either, since the states can exceed 10^15.
Any ideas? Is there some number theory needed to solve this?
P.S. This is not homework.

Given that you reached some (x, y), the only way to get there is by having added the smaller value into what is now the larger value. Say x > y; then the only possible previous state is (x - y, y).
Also note that the number of steps to get to (x, y) is the same as to get to (y, x).
So the solution you are looking for is something like
steps(x, y):
    if x < y: return steps(y, x)
    if y == 1: return x - 1
    if y == 0: throw error  # you can't reach this combination
    return x // y + steps(y, x % y)

In other words, find the depth of a node in the Calkin–Wilf tree. The node exists iff gcd(x, y) = 1. You can modify the gcd algorithm to give the number of operations as a byproduct (sum all of the quotients computed along the way and subtract one).
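For instance, here is a minimal Python sketch of that modified gcd (the name steps_to_reach is just for illustration, not from the answer):

def steps_to_reach(x, y):
    """Minimum number of a += b / b += a moves from (1, 1) to (x, y),
    or None if the pair is unreachable (i.e. gcd(x, y) != 1)."""
    if x < 1 or y < 1:
        return None
    count = 0
    while y:
        count += x // y          # this many moves were made in one direction
        x, y = y, x % y          # one round of the Euclidean algorithm
    return count - 1 if x == 1 else None   # x now holds the gcd

For example, steps_to_reach(5, 2) returns 3, matching the path (1,1) -> (1,2) -> (3,2) -> (5,2).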

Related

Dynamic programming with extremely large inputs

The problem is to find the path with the minimum number of steps required to reach (m, n) from (1, 1) (if it exists), provided that you can only move in two ways:
(x, y) = (x + y, y) or (x, y) = (x, x + y).
I tried to do this with dynamic programming, but m and n can be up to 10^25, so it is not feasible. How can I adapt my solution so that it can work for large inputs? Or is there an alternate method?
Go backwards. Say your goal is (x, y). If x > y, then the last step must have been from (x - y, y); otherwise, the last step must have been from (x, y - x). (If x = y, the location is unreachable, unless it is (1, 1) itself.) Working backwards, it's easy to see there's only a single way to reach any reachable goal location, and that path is always obvious.
With that in mind, you can use a minor variation on the Euclidean algorithm to solve this problem. Each iteration or recursive level represents a number of steps in a given direction, and you can keep track of the number of steps you need along the way.
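As a rough sketch of that backwards walk (not code from the answer; reconstruct_path is a hypothetical name), one could collect the moves in run-length form:

def reconstruct_path(m, n):
    """Work backwards from (m, n) to (1, 1).  Returns the moves in forward
    order as a list of ('a', k) / ('b', k) runs, where 'a' means apply
    (x, y) -> (x + y, y) k times and 'b' means apply (x, y) -> (x, x + y)
    k times, or None if (m, n) is unreachable."""
    runs = []
    while (m, n) != (1, 1):
        if m <= 0 or n <= 0 or m == n:
            return None                      # unreachable (gcd(m, n) != 1)
        if m > n:
            k = m // n if n > 1 else m - 1   # don't step back past (1, 1)
            runs.append(('a', k))
            m -= k * n
        else:
            k = n // m if m > 1 else n - 1
            runs.append(('b', k))
            n -= k * m
    runs.reverse()                           # moves were collected last-to-first
    return runs

The total step count is the sum of the run lengths; for example reconstruct_path(5, 2) gives [('b', 1), ('a', 2)], i.e. 3 steps.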

Find minimum number of iterations to reach a certain sum

I've been trying to solve this problem for weeks, but couldn't arrive at a solution.
You start with two numbers X and Y, both equal to 1. The only valid operations are X = X + Y or Y = Y + X, one at a time. We need to find the minimum number of iterations needed to reach a specific number.
E.g., if the number is 5:
X=1, Y=1; X = X+Y
X=2, Y=1; Y = X+Y
X=2, Y=3; Y = Y+X
X=2, Y=5; stop, answer reached
My take: if the number is odd, let's say 23, decrement it by 1. Now the value is 22. Find the largest number that divides 22, which is 11. Now reach the number as follows:
X=11, Y=1;  Y = Y+X
X=11, Y=12; X = X+Y
X=23, answer reached
But the problem with this approach is that I cannot apply it recursively to reach a specific number: even if I reach a certain point, say X = the required value, the Y value gets misplaced and I can't reuse it to reach another value.
Here is an O(n log n) solution.
The method resembles the greatest common divisor algorithm.
Let f(x, y) denote the minimum number of iterations to reach this state. The state can only be reached from (x - y, y) if x > y, or from (x, y - x) if x < y. We can see that the way to reach state (x, y) is unique, so we can calculate it in O(log n), just like gcd.
The answer is min(f(n, i) | 1 <= i < n), so the overall complexity is O(n log n).
Python code:

def gcd(n, m):
    if m == 0:
        return n
    return gcd(m, n % m)

def calculate(x, y):
    # Number of iterations to reach (x, y) from (1, 1); the -1 compensates
    # for the final quotient counting one step too many.
    if y == 0:
        return -1
    return calculate(y, x % y) + x // y

def solve(n):
    best = n
    for i in range(1, n):
        if gcd(n, i) == 1:
            best = min(best, calculate(n, i))
    print(best)

if __name__ == '__main__':
    solve(5)
If the numbers are not that big (say, below 1000), you can use a breadth-first search.
Consider a directed graph where each vertex is a pair of numbers (X, Y), and from each such vertex there are two edges, to vertices (X+Y, Y) and (X, X+Y). Run a BFS on that graph from (1, 1) until you reach any of the positions you need.
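A small illustrative sketch of that BFS in Python (assuming the target is a single number n, as in the question; min_steps_bfs is a made-up name):

from collections import deque

def min_steps_bfs(n):
    """BFS over the graph described above, starting from (1, 1); returns the
    minimum number of moves until either coordinate equals n.  Only practical
    for small n."""
    if n < 1:
        return None
    if n == 1:
        return 0
    dist = {(1, 1): 0}
    queue = deque([(1, 1)])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + y, y), (x, x + y)):
            if n in nxt:
                return dist[(x, y)] + 1
            # once a coordinate exceeds n it can never shrink, so prune
            if max(nxt) < n and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)
    return None

For example, min_steps_bfs(5) returns 3, matching the example in the question.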

Algorithm to enumerate paths

Say you are standing at point 0 on the real line. At each step, you can either move to the left l places, or to the right r places. You intend to get to the number p. Also, there are some numbers on which you are not allowed to step. You want to count in how many ways you can do this. All numbers mentioned are integers (l and r positive, of course). What would be a good method for counting this?
Note: you can step on p itself during the journey as well, so the answer is infinite in some cases.
It is just like asking "how many integer (x, y) solutions are there with L*x + R*y = P?".
I believe there are a number of articles on this problem.
This is not an algorithmic question but rather a math question. Nevertheless, here is the solution. Let us assume that your numbers l and r are positive integers (neither of them is zero).
A solution exists if, and only if, the equation r * x - l * y = p has nonnegative integer solutions (x, y). The equation expresses the fact that we walked x times to the right and y times to the left, in any order. This is a linear Diophantine equation (related to Bézout's identity), and we know precisely what its solutions look like.
If gcd(r, l) divides p then there exists an integer solution (x0, y0), and every other solution is of the form x = x0 + k * l / gcd(l,r), y = y0 + k * r / gcd(l,r), where k runs through the integers. Clearly, if k is larger than both -x0 * gcd(l,r) / l and -y0 * gcd(l,r) / r then x and y are nonnegative, so we have infinitely many solutions.
If gcd(r, l) does not divide p then there are no solutions, because the left-hand side is always divisible by gcd(l,r) but the right-hand side is not.
In summary, your algorithm for counting the solutions looks like this:
if p % gcd(l, r):
    return 0          # gcd(l, r) does not divide p: no solutions
else:
    return Infinity   # infinitely many solutions
At this point it seems pointless to try to enumerate all the paths, because that would be a rather boring exercise. For each nonnegative solution (x, y) we simply enumerate all possible ways of arranging x moves to the right and y moves to the left. There will be (x+y)!/(x! * y!) such paths (among the x + y steps, pick the x which will be the moves to the right).
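As an illustration only (this ignores the forbidden numbers from the question, and the function names are made up), one could find a particular nonnegative solution with the extended Euclidean algorithm and count the orderings for it:

from math import comb, gcd

def extended_gcd(a, b):
    """Return (g, s, t) with a*s + b*t == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def count_paths_one_solution(l, r, p):
    """Orderings for the smallest nonnegative solution (x, y) of
    r*x - l*y == p, i.e. x right-moves and y left-moves in any order.
    Remember that if one solution exists, infinitely many do."""
    g = gcd(l, r)
    if p % g:
        return 0                               # unreachable target
    _, s, t = extended_gcd(r, l)               # r*s + l*t == g
    x, y = s * (p // g), -t * (p // g)         # one integer solution of r*x - l*y == p
    lg, rg = l // g, r // g
    k = max(-(x // lg), -(y // rg))            # shift along the lattice until x, y >= 0
    x, y = x + k * lg, y + k * rg
    return comb(x + y, x)                      # (x+y)! / (x! * y!)

For example, l = 3, r = 2, p = 5 gives the solution (x, y) = (4, 1), so comb(5, 4) = 5 orderings for that particular solution.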

Computational Geometry set of points algorithm

I have to design an algorithm with running time O(nlogn) for the following problem:
Given a set P of n points, determine a value A > 0 such that the shear transformation (x,y) -> (x+Ay,y) does not change the order (in x direction) of points with unequal x-coordinates.
I am having a lot of difficulty even figuring out where to begin.
Any help with this would be greatly appreciated!
Thank you!
I think y = 0.
When x = 0, A > 0
(x,y) -> (x+Ay,y)
-> (0+(A*0),0) = (0,0)
When x = 1, A > 0
(x,y) -> (x+Ay,y)
-> (1+(A*0),0) = (1,0)
with unequal x-coordinates, (2,0), (3,0), (4,0)...
So, I think that the starting point may be (0,0), i.e. x = 0.
Suppose all x, y coordinates are positive numbers. (Without loss of generality, one can add offsets.) In time O(n log n), sort a list L of the points, primarily in ascending order by x coordinate and secondarily in ascending order by y coordinate. In time O(n), process consecutive point pairs (in L order) as follows. Let p, q be any two consecutive points in L, and let px, qx, py, qy denote their x and y coordinate values. From there you just need to consider several cases and it should be obvious what to do: if px = qx, do nothing. Else, if py <= qy, do nothing. Else (px < qx, py > qy), require that px + A*py < qx + A*qy, i.e. A < (qx - px)/(py - qy).
So: go through L in order and compute A' as the minimum of (qx - px)/(py - qy) over all point pairs where px < qx and py > qy; any A with 0 < A < A' works. Then choose a value of A that's a little less than A', for example A'/2. (Or, if the object of the problem is to find the largest such A, just report the A' value.)
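A possible Python sketch of this (shear_coefficient is a hypothetical name; it returns A'/2, or 1.0 when nothing constrains A):

def shear_coefficient(points):
    """Return an A > 0 such that the shear (x, y) -> (x + A*y, y) preserves
    the x-order of points with unequal x-coordinates."""
    pts = sorted(points)                      # by x, then by y
    bound = float('inf')
    for (px, py), (qx, qy) in zip(pts, pts[1:]):
        if px < qx and py > qy:               # only these pairs constrain A
            bound = min(bound, (qx - px) / (py - qy))
    return 1.0 if bound == float('inf') else bound / 2.0

For example, shear_coefficient([(0, 5), (1, 0)]) returns 0.1 (the bound A' is 0.2).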
Ok, here's a rough stab at a method.
Sort the list of points in x order. (This gives the O(n log n); all the following steps are O(n).)
Generate a new list of dx_i = x_(i+1) - x_i, the differences between consecutive x coordinates. As the x_i are ordered, all of these dx_i >= 0.
Now for some A, the transformed dx_i(A) will be x_(i+1) - x_i + A * (y_(i+1) - y_i). There will be an order change if this is negative or zero (i.e. x_(i+1)(A) <= x_i(A)).
So for each dx_i, find the value of A that would make dx_i(A) zero, namely
A_i = -(x_(i+1) - x_i)/(y_(i+1) - y_i). You now have a list of coefficients that would 'cause' an order swap between a consecutive (in x-order) pair of points. Watch for division by zero, but that's the case where two points have the same y; those points will not change order. Some of the A_i will be negative; discard these, as you want A > 0. (Negative A_i would also induce an order swap, so the A > 0 requirement is a little arbitrary.)
Find the smallest A_i > 0 in the list. Any A with 0 < A < A_i(min) will be a shear that does not change the order of your points. You can also pick A_i(min) itself, as that will bring two points to the same x, but not past each other.

Minimum range of 3 sets

We have three sets S1, S2, S3. I need to find x, y, z such that
x ∈ S1
y ∈ S2
z ∈ S3
Let min denote the minimum value of x, y, z,
and let max denote the maximum value of x, y, z.
The range, denoted max - min, should be the MINIMUM possible value.
Of course, the full brute-force solution described by IVlad is simple and therefore easier and faster to write, but its complexity is O(n^3).
According to your algorithm tag, I would like to post a more complex algorithm that has an O(n^2) worst case and O(n log n) average complexity (almost sure about this, but I'm too lazy to write a proof).
Algorithm description
Consider thinking about some abstract (X, Y, Z) tuple. We want to find a tuple that has a minimal distance between its maximum and minimum elements. What we can say at this point is that the distance is actually determined by our maximum element and minimum element. Therefore, the value of the element between them really doesn't matter, as long as it really lies between the maximum and the minimum.
So, here is the approach. We allocate some additional set (let's call it S) and combine every initial set (X, Y, Z) into it. We also need the ability to look up the initial set of every element in the set we've just created (so, if we point to some element in S, let's say S[10], and ask "Where did this guy come from?", our application should answer something like "He comes from Y").
After that, let's sort our new set S by its keys (this would be O(n log n), or O(n) in certain cases).
Determining the minimal distance
Now the interesting part comes. What we want to do is to compute some artificial value; let's call it the minimal distance and mark it as d[x], where x is some element from S. This value refers to the minimal max - min distance which can be achieved using the elements that are predecessors / successors of the current element in the sequence.
Consider the following example - this is our S set (the first row shows indexes, the second shows values, and the letters X, Y and Z refer to the initial sets):
   0   1   2   3   4   5   6   7
--------------------------------
   1   2   4   5   8  10  11  12
   Y   Z   Y   X   Y   Y   X   Z
Let's say we want to compute the minimal distance for the element with index 4. In fact, that minimal distance corresponds to the best (x, y, z) tuple that can be built using the selected element.
In our case (S[4]), we can say that our (x, y, z) tuple would definitely look like (something, 8, something), because it must contain the element we're computing the distance for (pretty obvious, hehe).
Now, we have to fill the gaps. We know that the elements we're looking for should come from X and Z. And we want those elements to be the best in terms of max - min distance. There is an easy way to select them.
We make a bidirectional run (run left, then run right from the current element), seeking the first element not from Y in each direction. In this case we would find the two nearest elements from X and Z in both directions (4 elements total).
This finding method is what we need: if we select the first element from X while running (left / right, doesn't matter), that element will suit us better than any other element that follows it in terms of distance. This happens because our S set is sorted.
In the case of my example (computing the distance for the element with index 4), we would mark the elements with indexes 6 and 7 as suitable from the right side, and the elements with indexes 1 and 3 from the left side.
Now, we have to test the 4 cases that can happen, and take the case where our distance is minimal. In our particular case we have the following (elements returned by the previous routine):
   Z   X   Y   X   Z
   2   5   8  11  12
We should test every (X, Y, Z) tuple that can be built using these elements, take the tuple with minimal distance and save that distance for our element. In this example, the tuple (11, 8, 12) has the best distance of 4. So, we store d[4] = 4 (4 here is the element index).
Yielding the result
Now, when we know how to find the distance, let's do it for every element in our S set (this operation takes O(n^2) in the worst case and something like O(n log n) on average).
After we have that distance value for every element in our set, just select the element with minimal distance and run our distance counting algorithm (described above) for it once again, but this time save the actual (x, y, z) tuple. It will be the answer.
Pseudocode
Here comes the pseudocode. I tried to make it easy to read, but its implementation would be more complex, because you'll need to code the set lookups ("determine set for element"). Also note that the DETERMINE_TUPLE and DETERMINE_DISTANCE routines are basically the same, except that the first yields the actual tuple.
COMBINE(X, Y, Z) -> S
SORT(S)
FOREACH (v in S)
    DETERMINE_DISTANCE(v, S) -> d[v]
DETERMINE_TUPLE(MIN(d[v]))
P.S.
I'm pretty sure that this method could easily be extended to (-, -, -, ..., -) tuple seeking, still resulting in good algorithmic complexity.
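For illustration, here is a rough Python sketch of the combine-sort-and-scan idea described above (names are made up; it returns the minimal range together with one optimal tuple, in arbitrary order):

from itertools import product

def min_range_tuple(S1, S2, S3):
    # Combine the three sets into one sorted list, remembering the origin
    # of each value (0, 1 or 2).
    S = sorted([(v, 0) for v in S1] + [(v, 1) for v in S2] + [(v, 2) for v in S3])
    best = None  # (distance, (values...))
    for i, (v, src) in enumerate(S):
        # Nearest neighbours from each *other* set, looking left and right.
        nearest = {s: [] for s in range(3) if s != src}
        for step in (-1, 1):
            wanted = set(nearest)
            k = i + step
            while wanted and 0 <= k < len(S):
                val, s = S[k]
                if s in wanted:
                    nearest[s].append(val)
                    wanted.discard(s)
                k += step
        if any(not vals for vals in nearest.values()):
            continue  # one of the other sets is empty
        # Test the (up to four) candidate tuples built around v.
        lists = list(nearest.values())
        for a, b in product(lists[0], lists[1]):
            trio = (v, a, b)
            d = max(trio) - min(trio)
            if best is None or d < best[0]:
                best = (d, trio)
    return best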
min = infinity (a really large number in practice, like 1000000000)
solution = (-, -, -)
for each x ∈ S1
    for each y ∈ S2
        for each z ∈ S3
            t = max(x, y, z) - min(x, y, z)
            if t < min
                min = t
                solution = (x, y, z)
