Find minimum number of iterations to reach a certain sum - algorithm

I have been trying to solve this problem for weeks but couldn't arrive at a solution.
You start with two numbers X and Y, both equal to 1. The only valid operations are X = X+Y and Y = Y+X, one per iteration. We need to find the minimum number of iterations needed to reach a specific number.
E.g., if the number is 5:
X=1, Y=1; X = X+Y
X=2, Y=1; Y = X+Y
X=2, Y=3; Y = Y+X
X=2, Y=5; Stop answer reached
My take: if the number is odd, say 23, decrement it by 1 to get 22. Find the largest proper divisor of 22, which is 11. Build X up to 11 by repeatedly adding 1, then:
X=11; Y=1 ; Y=Y+X
X=11; Y=12; X=X+Y
X=23, answer reached
But the problem with this approach is that I cannot apply it recursively: even if I reach a certain point, say X equals the required value, the Y value ends up wherever it happens to be, and I can't reuse it to reach another value.

Here is an O(n log n) solution.
The method resembles the greatest common divisor algorithm.
Let f(x, y) be the minimum number of iterations needed to reach state (x, y). This state can only have been reached from (x-y, y) if x > y, or from (x, y-x) if x < y. Since the way to reach state (x, y) is therefore unique, we can compute f(x, y) in O(log n), just like gcd.
The answer is min( f(n, i) | 1 <= i < n ), so the total complexity is O(n log n).
Python code:
def gcd(n, m):
    if m == 0:
        return n
    return gcd(m, n % m)

def calculate(x, y):
    # Sum of the quotients in the Euclidean algorithm; the -1 base case
    # corrects for starting from (1, 1) rather than (1, 0).
    if y == 0:
        return -1
    return calculate(y, x % y) + x // y

def solve(n):
    best = n
    for i in range(1, n):
        if gcd(n, i) == 1:
            ans = calculate(n, i)
            if ans < best:
                best = ans
    print(best)

if __name__ == '__main__':
    solve(5)

If the numbers are not that big (say, below 1000), you can use a breadth-first search.
Consider a directed graph where each vertex is a pair of numbers (X, Y), and from each such vertex there are two edges, to (X+Y, Y) and to (X, X+Y). Run a BFS on that graph from (1, 1) until you reach any of the positions you need.
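As an illustration, here is a minimal Python sketch of that BFS (the function name and the pruning cutoff are my own additions, not part of the original answer):
from collections import deque

def min_steps_bfs(target):
    # Breadth-first search over states (x, y), starting from (1, 1).
    # Each state has two successors: (x + y, y) and (x, x + y).
    if target == 1:
        return 0
    start = (1, 1)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (x, y), steps = queue.popleft()
        for nxt in ((x + y, y), (x, x + y)):
            if target in nxt:
                return steps + 1
            # Prune states whose values have already overshot the target.
            if max(nxt) < target and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return -1  # not reached (cannot happen for target >= 2)

print(min_steps_bfs(5))  # 3, e.g. (1,1) -> (2,1) -> (2,3) -> (2,5)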

Related

Graph Data Structure - Find minimum number of moves

You are standing in a 2D grid at (0,0). You need to go to (x,y). In one move you can go from (a,b) to (c,d) if:
c and d are integers
|a-c| + |b-d| = K (K is provided in the input)
Determine whether it is possible to reach the final destination, and if it is, the minimum number of moves needed to get there.
CONSTRAINTS:
-10^5<=x,y<=10^5
1<=K<=10^9
Eg-
INPUT
K=5
X=1, Y=3
OUTPUT
2
[(0,0)->(-3,2)->(1,3)]
In the first move, one can go from (0,0) to (-3,2)
(|0-(-3)| + |0-2| = 3+2 = 5)
and second move,
(-3,2) to (1,3)
(|-3-1| + |2-3| = 4+1 = 5)
Hence 2.
The comments asserted a necessary criterion: if k is even, then |x| + |y| must also be even. You should prove it.
But now suppose that P is a path from (0, 0) to (x, y). Let P_h be the sum of the lengths of the horizontal components of the steps you take, and P_v the sum of the lengths of the vertical components. The following are easy to show:
|x| ≤ P_h
2 divides P_h - |x|
|y| ≤ P_v
2 divides P_v - |y|
k divides P_h + P_v and in fact the number of steps taken is (P_h + P_v)/k.
Combining these facts: if i*k is the first multiple of k which is at least |x| + |y| and for which i*k - |x| - |y| is even, then every valid path uses at least i steps, so any path of exactly i steps is minimal.
To finish, show that you can always construct a path of that length. And then the problem becomes easy.
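As a rough illustration of how that bound is used (my own sketch, not code from the answer; it assumes |x| + |y| >= k, since targets strictly inside the diamond of radius k need the case analysis in the next answer):
def min_moves_lower_bound(x, y, k):
    # Smallest i with i*k >= |x| + |y| and i*k - |x| - |y| even, per the
    # argument above. Exact when |x| + |y| >= k; returns -1 if unreachable.
    d = abs(x) + abs(y)
    if d == 0:
        return 0
    if k % 2 == 0 and d % 2 == 1:
        return -1                  # parity can never match
    i = max(1, -(-d // k))         # ceiling of d / k
    if (i * k - d) % 2 == 1:
        i += 1                     # k is odd here, so one more step fixes parity
    return i

print(min_moves_lower_bound(6, 7, 5))  # 3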
Constant time solution.
All distances here are Manhattan distances.
Everything a distance of k from the origin can be reached in one step. This forms a diamond shape.
Within that diamond, every tile an even distance from the origin can be reached in 2 steps.
If k is odd, then every tile in the diamond an odd distance from the origin can be reached in 3 steps. If k is even then these tiles can't be reached.
Let's call (x, y) the target. The target can be reached unless k is even and |x| + |y| is odd.
If the target is inside the diamond, determine whether it can be reached in 0, 1, 2, or 3 based on the rules above.
If the target is outside the diamond, find the distance between the target and the closest corner (closest of (-k,0), (k,0), (0,-k), (0,k)). Call this d.
If d % k == 0, it takes d/k moves between the corner and the target, plus 1 move between the corner and the origin. Otherwise, after reaching the corner we still have a remaining distance of d % k to cover. If this is even, we can stay along the edge of the diamond, which is one additional move from the origin. If it is odd, then k must also be odd, and we can sit adjacent to the edge of the diamond, which is two moves from the origin.
Here's some Ruby code that implements this.
def get_moves(x, y, k)
  distance = get_dist(0, 0, x, y)
  # return 0 to 3 for reachable distances inside the diamond
  # return -1 for unreachable distances
  return -1 if k % 2 == 0 && distance % 2 == 1
  return 0 if distance == 0
  return 1 if distance == k
  return 2 if distance < k && distance % 2 == 0
  return 3 if distance < k && distance % 2 == 1
  # Find the distance between the target & nearest corner of the diamond
  distance_to_corner = [get_dist(x, y, 0, -k), get_dist(x, y, 0, k), get_dist(x, y, -k, 0), get_dist(x, y, k, 0)].min
  # Find the extra distance since we're going in steps of size k
  extra_distance = (k - distance_to_corner % k) % k
  # If the extra distance is even then we can stay along the
  # edge of the diamond which is 1 move from the origin.
  # Otherwise we're adjacent to the edge of the diamond,
  # putting us 2 moves from the origin.
  if extra_distance % 2 == 0
    return 1 + (distance_to_corner + extra_distance) / k
  else
    return 2 + (distance_to_corner + extra_distance) / k
  end
end

def get_dist(x_1, y_1, x_2, y_2)
  return (x_2 - x_1).abs + (y_2 - y_1).abs
end

Special Pairs till N

Given a number N (1 <= N <= 10^50), find the number of unique pairs (x, y) such that the sum of digits of x plus the sum of digits of y is prime.
x, y <= N
Test case: N = 5
Output: 6
Explanation: the pairs are (1,1), (1,2), (1,4), (2,3), (2,5), (3,4).
Note: (x,y) and (y,x) are equivalent, so if (2,5) is included then (5,2) is not.
This question was asked in a competitive programming contest. I couldn't figure out how to do it. Has anyone got some ideas?
Observation 1:
The primes you need to consider are smaller than 1000.
(Because the digit sum of a number <= 10^50 is at most 50*9 = 450, so the sum of two digit sums is below 1000.)
Observation 2:
The pairs (x, x) that give you a prime are exactly those whose digit sum is 1. (The total is 2 * s(x), an even number, which is prime only when it equals 2, i.e. s(x) = 1; for N = 5 the only such pair is (1, 1).)
Let's say you have a wizard friend who told you all the values of a function f for the given N, where f(x) = the number of numbers between 1 and N whose digit sum equals x.
Now find all primes up to 1000, and for each prime p and each x from 0 to 500, calculate f(x) * f(p - x).
The sum of the values you've calculated counts every ordered pair, so it equals 2 * answer - D, where D is the number of x <= N whose digit sum is 1 (the diagonal pairs from Observation 2). So you only check about 500 * 1000 possibilities.
The only thing left is to calculate function f.
You can do it using dynamic programming.
Let g(x, d, e) = the number of ways to choose the first d digits so that their digit sum equals x. If e = 1, those d digits are exactly the first d digits of N (the tight case); if e = 0, they form a strictly smaller prefix.
x <= 500, d <= 50, e <= 1
You can easily see that you have up to 500*50*2 states.
Let's say you know all the values of g for d - 1 digits and you want to calculate g(x, d, 0).
You take any (d-1)-digit prefix and append a digit y, for each 0 <= y <= 9. Since you want the sum to be x, its previous digit sum must have been x - y. You also want the prefix to stay below N's prefix, so you take g(x - y, d - 1, 0), and if y is smaller than the d-th digit of N, you also add g(x - y, d - 1, 1).
Formula for g(x, d, 1):
You take the tight (d-1)-digit prefix and append the digit y equal to the d-th digit of N. The result is g(x - y, d - 1, 1).
The number of transitions to consider is about 500*50*2*10, which is easily fast enough.
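To make the whole method concrete, here is a hedged Python sketch (all names are mine): it computes f with the digit DP described above, sieves the primes, and applies the pairing formula from Observation 2.
def special_pairs(n_str):
    # f[s] = how many numbers in 1..N have digit sum s, via the tight/free DP.
    digits = [int(c) for c in n_str]
    max_sum = 9 * len(digits)
    # g[e][s]: prefixes built so far; e=1 -> equal to N's prefix, e=0 -> smaller
    g = [[0] * (max_sum + 1) for _ in range(2)]
    g[1][0] = 1
    for d in digits:
        new = [[0] * (max_sum + 1) for _ in range(2)]
        for e in range(2):
            for s in range(max_sum + 1):
                if g[e][s] == 0:
                    continue
                top = d if e == 1 else 9
                for y in range(top + 1):
                    ne = 1 if (e == 1 and y == d) else 0
                    new[ne][s + y] += g[e][s]
        g = new
    f = [g[0][s] + g[1][s] for s in range(max_sum + 1)]
    f[0] -= 1  # drop the number 0, which the DP also counts
    # sieve of primes up to twice the maximum digit sum
    limit = 2 * max_sum
    prime = [True] * (limit + 1)
    prime[0] = prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if prime[p]:
            for q in range(p * p, limit + 1, p):
                prime[q] = False
    ordered = 0
    for p in range(2, limit + 1):
        if prime[p]:
            for s in range(p + 1):
                if s <= max_sum and p - s <= max_sum:
                    ordered += f[s] * f[p - s]
    diagonal = f[1]  # pairs (x, x) with prime total, i.e. digit sum 1
    return (ordered + diagonal) // 2

print(special_pairs("5"))  # 6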

Finding distinct pairs {x, y} that satisfy the equation 1/x + 1/y = 1/n with x, y, and n being whole numbers

The task is to find the number of distinct pairs {x, y} that fit the equation 1/x + 1/y = 1/n, with n being the input given by the user. Different orderings of x and y do not count as a new pair.
For example, the value n = 2 means 1/n = 1/2, and 1/2 can be formed from two pairs {x, y}: {3, 6} and {4, 4}.
The value n = 3 means 1/n = 1/3, and 1/3 can be formed from two pairs {x, y}: {4, 12} and {6, 6}.
The equation 1/x + 1/y = 1/n can be rearranged to y = nx/(x - n); whenever x and the resulting y are whole numbers, they count as a pair {x, y}. Using that rearranged formula, I iterate n times, starting from x = n + 1 and increasing x by 1 each iteration, checking whether nx % (x - n) == 0; if it is, x and y form a new distinct pair.
I found that the iteration can be limited to n steps by computing answers by hand and spotting the repetition pattern. x also starts at n + 1 because otherwise there is a division by zero, or y comes out negative. The modulo check verifies that the resulting y is whole.
Questions:
Is there a mathematical explanation for why the iteration can be limited to n times? I only found this limit by doing the computation manually and noticing the pattern.
Is there another way to find the number of distinct pairs {x, y}, other than my method above of finding the values of the distinct pairs and then counting them? Is there a quick mathematical formula I'm not aware of?
For reference, my code can be seen here: https://gist.github.com/TakeNoteIAmHere/596eaa2ccf5815fe9bbc20172dce7a63
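For illustration, here is a small Python sketch of the brute-force approach described above (my own paraphrase, not the code in the linked gist):
def count_pairs_bruteforce(n):
    # Iterate x from n+1 to 2n; y = n*x/(x-n) is whole exactly when
    # (x - n) divides n*x, and x <= 2n guarantees y >= x, so each hit is a
    # distinct unordered pair {x, y}.
    count = 0
    for x in range(n + 1, 2 * n + 1):
        if (n * x) % (x - n) == 0:
            count += 1
    return count

print(count_pairs_bruteforce(2))  # 2 -> {3, 6} and {4, 4}
print(count_pairs_bruteforce(3))  # 2 -> {4, 12} and {6, 6}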
Assuming that x,y,n > 0 we have
Observation 1: both x and y must be greater than n
Observation 2: since (x,y) and (y,x) do not count as distinct, we can assume that x <= y.
Observation 3: x = y = 2n is always a solution and if x > 2n then y < x (thus no new solution)
This means the possible values for x are from n+1 up to 2n.
A little algebra converts the equation
1/x + 1/y = 1/n
into
(x-n)*(y-n) = n*n
Since we want a solution in integers, we seek integers f, g so that
f*g = n*n
and then the solution for x and y is
x = f+n, y = g+n
I think the easiest way to proceed is to factorise n, ie write
n = (P[1]^k[1]) * .. *(P[m]^k[m])
where the Ps are distinct primes, the ks positive integers and ^ denotes exponentiation.
Then the possibilities for f and g are
f = (P[1]^a[1]) * .. * (P[m]^a[m])
g = (P[1]^b[1]) * .. * (P[m]^b[m])
where the as and bs satisfy, for each i=1..m
0<=a[i]<=2*k[i]
b[i] = 2*k[i] - a[i]
If we just wanted to count the number of solutions, we would just need to count the number of fs, ie the number of distinct sequences a[]. But this is just
Nall = (2*k[1]+1) * ... * (2*k[m]+1)
However we want to count the solution (f,g) and (g,f) as being the same. There is only one case where f = g (because the factorisation into primes is unique, we can only have f=g if the a[] equal the b[]) and so the number we seek is
1 + (Nall-1)/2
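As an illustration, here is a hedged Python sketch of this counting method (the helper names are mine): factorise n by trial division, compute the number of divisors of n*n, and apply the formula above.
def count_pairs_by_factoring(n):
    # Count unordered pairs {x, y} with 1/x + 1/y = 1/n.
    # The answer is (d(n^2) + 1) / 2, where d(n^2) is the product of
    # (2*k[i] + 1) over the factorisation n = prod P[i]^k[i].
    m = n
    divisors_of_n_squared = 1
    p = 2
    while p * p <= m:
        k = 0
        while m % p == 0:
            m //= p
            k += 1
        divisors_of_n_squared *= 2 * k + 1
        p += 1
    if m > 1:                       # one leftover prime factor
        divisors_of_n_squared *= 3
    return (divisors_of_n_squared + 1) // 2

print(count_pairs_by_factoring(2))   # 2
print(count_pairs_by_factoring(3))   # 2
print(count_pairs_by_factoring(12))  # 8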

How to find number of steps to transform (a,b) to (x,y)

Given 2 numbers a=1 and b=1.
At each step, you can do one of the following:
a+=b;
b+=a;
If it's possible to transform a into x and b into y, find the minimum steps needed
x and y can be arbitrarily large (more than 10^15)
My approach so far has just been a recursive backtrack, which is around O(2^min(x,y)) in complexity (far too large). DP won't do either, since there can be more than 10^15 states.
Any idea? Is there any number theory that is needed to solve this?
P.S. This is not homework.
Given that you have reached some (x, y), the only way to get there is if you added the smaller value into what is now the larger value. Say x > y; then the only possible previous state is (x - y, y).
Also note that the number of steps to get to x,y is the same to get to y,x.
So the solution you are looking for is something like
steps(x, y):
    if x < y: return steps(y, x)
    if y == 1: return x - 1
    if y == 0: throw error  # You can't get this combination.
    return x / y + steps(y, x % y)
In other words, find the depth of a node in the Calkin-Wilf tree. The node exists iff gcd(x, y) = 1. You can modify the gcd algorithm to give the number of operations as a byproduct (sum all of the quotients computed along the way and subtract one).
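For illustration, a small Python sketch of that modified gcd (the function name is mine):
def steps_to_reach(x, y):
    # Sum of the quotients in the Euclidean algorithm on (x, y), minus one.
    # Returns None when gcd(x, y) != 1, i.e. (x, y) is unreachable from (1, 1).
    total = 0
    while y > 0:
        total += x // y
        x, y = y, x % y
    if x != 1:          # x now holds gcd(x, y)
        return None
    return total - 1

print(steps_to_reach(2, 5))  # 3, matching the example at the top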

Dijkstra Algorithm on a graph modeling a network

We have a directed graph G = (V, E) for a communication network, with each edge having a probability of not failing r(u, v) (defined as the edge weight), which lies in the interval [0, 1]. The probabilities are independent, so the probability that an entire path from one vertex to another does not fail is the product of the probabilities of its edges.
I need an efficient algorithm to find a most reliable path from one given vertex to another given vertex (i.e., a path from the first vertex to the second that is least likely to fail). I am given that log(r · s) = log r + log s will be helpful.
This is what I have so far -:
DIJKSTRA-VARIANT (G, s, t)
    for v in V:
        val[v] ← ∞
    A ← ∅
    Q ← V to initialize Q with vertices in V.
    val[s] ← 0
    while Q is not ∅ and t is not in A
        do x ← EXTRACT-MIN (Q)
           A ← A ∪ {x}
           for each vertex y ∈ Adj[x]
               do if val[x] + p(x, y) < val[y]:
                      val[y] = val[x] + p(x, y)
s is the source vertex and t is the destination vertex. Of course, I have not exploited the log property, as I am not able to understand how to use it. The relaxation portion of the algorithm at the bottom needs to be modified, and the val array will capture the results. Without the log, it would presumably be storing the highest probability found so far. How should I modify the algorithm to use the log?
Right now, your code has
do if val[x] + p(x, y) < val[y]:
       val[y] = val[x] + p(x, y)
Since the edge weights in this case represent probabilities, you need to multiply them together (rather than adding):
do if val[x] * p(x, y) > val[y]:
       val[y] = val[x] * p(x, y)
I've changed the sign to >, since you want the probability to be as large as possible.
Logs are helpful because (1) log(xy) = log(x) + log(y) (as you said) and sums are easier to compute than products, and (2) log(x) is a monotonic function of x, so log(x) and x have their maximum in the same place. Therefore, you can deal with the logarithm of the probability, instead of the probability itself:
do if log_val[x] + log(p(x, y)) > log_val[y]:
       log_val[y] = log_val[x] + log(p(x, y))
Edited to add (since I don't have enough rep to leave a comment): you'll want to initialize your val array to 0, rather than Infinity, because you're calculating a maximum instead of a minimum. (Since you want the largest probability of not failing.) So, after log transforming, the initial log_val array values should be -Infinity.
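For concreteness, here is a small Python sketch of this idea (the graph representation and function name are my own, not from the question): a max-reliability Dijkstra run as an ordinary min-heap Dijkstra on the costs -log(r).
import heapq
from math import log, exp

def most_reliable_path_prob(graph, s, t):
    # graph: dict mapping u -> list of (v, r), where r in (0, 1] is the
    # probability that edge (u, v) does not fail.
    # Maximizing the product of the r's equals minimizing the sum of the
    # non-negative costs -log(r), so standard min-heap Dijkstra applies.
    best = {s: 0.0}                       # best[v] = smallest -log(probability) so far
    heap = [(0.0, s)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == t:
            return exp(-d)                # convert back to a probability
        for v, r in graph.get(u, []):
            nd = d - log(r)
            if nd < best.get(v, float('inf')):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0                            # t unreachable from s

# Two routes from 'a' to 'c': direct with 0.5, or via 'b' with 0.9 * 0.9 = 0.81.
graph = {'a': [('b', 0.9), ('c', 0.5)], 'b': [('c', 0.9)]}
print(most_reliable_path_prob(graph, 'a', 'c'))  # ~0.81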
In order to calculate probabilities you should multiply (instead of add) in the relaxation phase, and flip the comparison since you are now maximizing, which means changing:
do if val[x] + p(x, y) < val[y]:
       val[y] = val[x] + p(x, y)
to:
do if val[x] * p(x, y) > val[y]:
       val[y] = val[x] * p(x, y)
Using the log is possible because the range is (0, 1]: log(1) = 0, the log tends to -infinity as the probability approaches 0, and the log is monotonic, so for every x, y in (0, 1], x < y implies log(x) < log(y). Since the ordering between probabilities is preserved, this modification still gives the correct answer.
I think you'll be able to take it from here.
I think I may have solved the question partially.
Here is my attempt. Edits and pointers are welcome -:
DIJKSTRA-VARIANT (G, s, t)
    for v in V:
        val[v] ← 0
    A ← ∅
    Q ← V to initialize Q with vertices in V.
    val[s] ← 1
    while Q is not ∅ and t is not in A
        do x ← EXTRACT-MAX (Q)
           A ← A ∪ {x}
           for each vertex y ∈ Adj[x]
               do if log(val[x]) + log(p(x, y)) > log(val[y]):
                      log(val[y]) = log(val[x]) + log(p(x, y))
Since I am to find the highest possible probability values, I believe I should be using >. The following questions remain -:
What should the initial values in the val array be?
Is there anything else I need to add?
EDIT: I have changed the initial val values to 0. However, log is undefined at 0. I am open to a better alternative. Also, I changed the priority queue's method to EXTRACT-MAX since it is the larger probabilities that need to be extracted. This would ideally be implemented on a binary max-heap.
FURTHER EDIT: I have marked tinybike's answer as accepted, since they have posted most of the necessary details that I require. The algorithm should be as I have posted here.
