You are standing in a 2D grid at (0,0). You need to go to (x,y). In one move you can go from (a,b) to (c,d) if:
c and d are integers
|a-c|+|b-d|=K (K is provided in the input)
Determine whether it is possible to reach the final destination. If it is possible, what is the minimum number of moves you can take to reach it?
CONSTRAINTS:
-10^5<=x,y<=10^5
1<=K<=10^9
Example:
INPUT
K=5
X=1, Y=3
OUTPUT
2
[(0,0)->(-3,2)->(1,3)]
In the first move, one can go from (0,0) to (-3,2)
(|0-(-3)| + |0-2| = 3+2 = 5)
and in the second move,
(-3,2) to (1,3)
(|-3-1| + |2-3| = 4+1 = 5)
Hence 2.
The comments asserted a necessary parity criterion relating |x| + |y| and k: each move changes x + y by an amount with the same parity as k, so after m moves x + y has the same parity as m*k; in particular, if k is even then |x| + |y| must also be even. You should prove it.
But now suppose that P is a path from (0, 0) to (x, y). Let's let P_h be the sum of the lengths of the horizontal steps you take. Similarly let P_v be the sum of the lengths of the vertical steps you take. The following are easy to show:
|x| ≤ P_h
2 divides P_h - |x|
|y| ≤ P_v
2 divides P_v - |y|
k divides P_h + P_v and in fact the number of steps taken is (P_h + P_v)/k.
Given these facts, if i*k is the first multiple of k which is at least |x| + |y| and for which i*k - |x| - |y| is even, then any path of total length i*k (that is, any path using i moves) must be minimal.
To finish, show that you can always construct a path of that length. And then the problem becomes easy.
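For small k and small targets, one way to sanity-check this reasoning is a brute-force BFS over grid cells. A minimal sketch in Python (the bounding box limit is a heuristic assumption and may need enlarging for some inputs):

from collections import deque

def min_moves_bfs(x, y, k, limit=None):
    # Brute-force BFS; only practical for small k and small targets.
    if limit is None:
        limit = abs(x) + abs(y) + 2 * k      # heuristic search box
    target = (x, y)
    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    # All displacements (dx, dy) with |dx| + |dy| == k.
    steps = []
    for dx in range(-k, k + 1):
        rest = k - abs(dx)
        steps.append((dx, rest))
        if rest != 0:
            steps.append((dx, -rest))
    while queue:
        cell = queue.popleft()
        if cell == target:
            return dist[cell]
        for dx, dy in steps:
            nxt = (cell[0] + dx, cell[1] + dy)
            if abs(nxt[0]) <= limit and abs(nxt[1]) <= limit and nxt not in dist:
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    return -1   # not reachable within the search box

print(min_moves_bfs(1, 3, 5))   # => 2, matching the example above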
Constant time solution.
All distances here are Manhattan distances.
Everything a distance of k from the origin can be reached in one step. This forms a diamond shape.
Within that diamond, every tile an even distance from the origin can be reached in 2 steps.
If k is odd, then every tile in the diamond an odd distance from the origin can be reached in 3 steps. If k is even then these tiles can't be reached.
Let's call (x, y) the target. The target can be reached unless k is even and |x| + |y| is odd.
If the target is inside the diamond, determine whether it can be reached in 0, 1, 2, or 3 based on the rules above.
If the target is outside the diamond, find the distance between the target and the closest corner (closest of (-k,0), (k,0), (0,-k), (0,k)). Call this d.
If d % k == 0 then it takes d/k moves to get to the corner, plus 1 more to get to the origin. Otherwise, after reaching the corner, we have a remaining distance of d % k to cover. If this is even, we can land on the edge of the diamond, which is one additional move from the origin. If it is odd, then k must also be odd, and we can land adjacent to the edge of the diamond, which is two moves from the origin.
Here's some Ruby code that implements this.
def get_moves(x, y, k)
  distance = get_dist(0, 0, x, y)

  # return 0 to 3 for reachable distances inside the diamond
  # return -1 for unreachable distances
  return -1 if k % 2 == 0 && distance % 2 == 1
  return 0 if distance == 0
  return 1 if distance == k
  return 2 if distance < k && distance % 2 == 0
  return 3 if distance < k && distance % 2 == 1

  # Find the distance between the target & nearest corner of the diamond
  distance_to_corner = [get_dist(x, y, 0, -k), get_dist(x, y, 0, k), get_dist(x, y, -k, 0), get_dist(x, y, k, 0)].min

  # Find the extra distance since we're going in steps of size k
  extra_distance = (k - distance_to_corner % k) % k

  # If the extra distance is even then we can stay along the
  # edge of the diamond which is 1 move from the origin.
  # Otherwise we're adjacent to the edge of the diamond,
  # putting us 2 moves from the origin.
  if extra_distance % 2 == 0
    return 1 + (distance_to_corner + extra_distance) / k
  else
    return 2 + (distance_to_corner + extra_distance) / k
  end
end

def get_dist(x_1, y_1, x_2, y_2)
  return (x_2 - x_1).abs + (y_2 - y_1).abs
end
Related
Given a tree with n vertices, each vertex has a special value C_v. A straight path of length k >= 1 is defined as a sequence of vertices v_1, v_2, ..., v_k such that every two consecutive elements of the sequence are connected by an edge and all vertices v_i are distinct. A straight path need not contain any edges; in other words, for k = 1 a sequence containing a single vertex is also a straight path. There is a function S defined: for a given straight path v_1, v_2, ..., v_k we get S(v_1, v_2, ..., v_k) = C_{v_1} - C_{v_2} + C_{v_3} - C_{v_4} + ...
Calculate the sum of the values of the function S for all straight paths in the tree. Since the result may be very large, give its remainder when divided by 10^9 + 7.
Paths are treated as directed. For example: paths 1 -> 2 -> 4 and 4 -> 2 -> 1 are treated as two different paths and for each one separately the value of the function S should be taken into account in the result.
My implementation is as follows:
def S(path):
    total, negative_one_pow = 0, 1
    for node in path:
        total += (values[node - 1] * negative_one_pow)
        negative_one_pow *= -1
    return total

def search(graph):
    global total
    for node in range(1, n + 1):
        queue = [(node, [node])]
        visited = set()
        while queue:
            current_node, path = queue.pop(0)
            if current_node in visited:
                continue
            visited.add(current_node)
            total += S(path)
            for neighbor in graph[current_node]:
                queue.append((neighbor, [*path, neighbor]))

n = int(input())
values = list(map(int, input().split()))
graph = {i: [] for i in range(1, n + 1)}
total = 0
for i in range(n - 1):
    a, b = map(int, input().split())
    graph[a].append(b)
    graph[b].append(a)

search(graph)
print(total % 1000000007)
The execution of the code takes too long for bigger graphs. Can you suggest ways to speed up the code?
It is a tree. Therefore all straight paths start somewhere, travel up some distance (maybe 0), hit a peak, then turn around and go down some distance (maybe 0).
First calculate, for each node, the sum and count of all odd-length paths rising to it from within its subtree, and the sum and count of all even-length paths rising to it. This can be done through dynamic programming, working from the leaves up.
Armed with that we can calculate for each node the sum of all paths with that node as a peak. (Calculate the sum of all paths that rise to the peak then fall. For each child subtract out the sum of all paths that rose to that child and then fell back to that child.)
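A sketch of that approach in Python (one possible way to organize the DP, not the only one): root the tree at vertex 1, anchor every rising path's alternating sum at its top vertex, and at each peak combine ordered pairs of rising paths, subtracting the pairs that descend back into the same child. It assumes the same input format as the code above.

import sys
from collections import defaultdict

MOD = 10**9 + 7

def solve():
    data = sys.stdin.read().split()
    pos = 0
    n = int(data[pos]); pos += 1
    C = [int(data[pos + i]) for i in range(n)]; pos += n
    graph = defaultdict(list)
    for _ in range(n - 1):
        a, b = int(data[pos]), int(data[pos + 1]); pos += 2
        graph[a].append(b)
        graph[b].append(a)

    # Root the tree at 1; the iterative DFS order lists every parent before
    # its children, so reversed(order) processes the tree bottom-up.
    parent = [0] * (n + 1)
    visited = [False] * (n + 1)
    order, stack = [], [1]
    visited[1] = True
    while stack:
        u = stack.pop()
        order.append(u)
        for v in graph[u]:
            if not visited[v]:
                visited[v] = True
                parent[v] = u
                stack.append(v)
    children = defaultdict(list)
    for u in order[1:]:
        children[parent[u]].append(u)

    # For each node u: count and sum of "rising" paths that start inside u's
    # subtree and end at u, split by parity of their number of vertices.
    # Sums are alternating sums anchored at the top: C_u - C_child + ...
    cntO = [0] * (n + 1); sumO = [0] * (n + 1)
    cntE = [0] * (n + 1); sumE = [0] * (n + 1)

    def pair_sum(cO, sO, cE, sE, cp):
        # Sum of S over all ordered (up, down) pairs of rising paths ending at
        # the peak, using S = +/-(T_up + T_down - C_peak), where the sign is +
        # exactly when the up part has an odd number of vertices.
        cAll, sAll = cO + cE, sO + sE
        return (cAll * (sO - sE) + (cO - cE) * sAll - (cO - cE) * cAll * cp) % MOD

    total = 0
    for u in reversed(order):                       # bottom-up
        cu = C[u - 1]
        cntO[u], cntE[u] = 1, 0                     # the trivial path (u) is odd
        for c in children[u]:
            cntO[u] = (cntO[u] + cntE[c]) % MOD
            cntE[u] = (cntE[u] + cntO[c]) % MOD
        sumO[u] = (cu * cntO[u] - sum(sumE[c] for c in children[u])) % MOD
        sumE[u] = (cu * cntE[u] - sum(sumO[c] for c in children[u])) % MOD

        # Paths whose highest vertex ("peak") is u: all ordered pairs of rising
        # paths ending at u, minus the pairs that go down into the same child.
        total = (total + pair_sum(cntO[u], sumO[u], cntE[u], sumE[u], cu)) % MOD
        for c in children[u]:
            tO, tsO = cntE[c], (cu * cntE[c] - sumE[c]) % MOD   # odd paths through c
            tE, tsE = cntO[c], (cu * cntO[c] - sumO[c]) % MOD   # even paths through c
            total = (total - pair_sum(tO, tsO, tE, tsE, cu)) % MOD

    print(total % MOD)

solve()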
Problem Description:
Let there be an array of 2D pairs ((x_1, y_1), ..., (x_n, y_n)). With a fixed constant y', a pair (i, j) is called half-inverted if i < j, x_i > x_j, and y_i >= y' > y_j. Devise an algorithm that counts the number of half-inverted pairs. You will get full marks if your algorithm is correct and has complexity no more than O(n log n).
My idea is to treat this with a method similar to counting inversions in a normal array, but my problem is: how do we maintain the order during the Merge-and-Count step?
A simple modification of the familiar merge-sort inversion-counting algorithm can be used to solve this problem, so make sure you fully understand that algorithm as a prerequisite.
If we examine the merge step of this algorithm, we have two sorted halves and two pointers, each pointing to an element of one half. Let our left pointer be i and our right pointer be j. Using the traditional definition of an inversion, if i points to a value larger than the value pointed to by j then, because the halves are sorted and all the elements on the left come before those on the right in the real array, we know that every element from i to the end of the left half forms an inversion with the value at j, so we increase our count by mid - i, where mid is the end of the left half.
Switching back to your problem, we are dealing with pairs (x, y). If we keep our x values sorted then, using the approach described above, we can count the number of inversions considering only the x values. Looking at your definition of half-inversions, counting only x_i > x_j would surely over-count: we are missing the additional constraint y_i >= y' > y_j, and the pairs that violate it must be filtered out of our count.
So, looking back at the traditional algorithm, when our i pointer points to a value greater than the value at j, we also need to make sure that the y value at j is less than y'. If that is not true then none of the x's from i to mid can form a half-inversion with j, so we cannot count them. Now assume j's y value is smaller than y'; if we simply counted all the pairs from i to mid, we would still over-count the pairs that have y_i < y'.
One way to fix this is to keep track of the number of y values in the left half from i to mid which are >= y', and add that value to our count. We can keep track of how many y >= y' we have seen in the merge step up to any i, and subtract that from the total number of y's >= y' in the left half. To keep track of that total we can return it from our recursive function (total = left + right) and use only the number that came from the left half when merging. We also need to modify our base case, which is straightforward.
def count_half_inversions(l, y):
    return count_rec(l, 0, len(l), l.copy(), y)[0]

def count_rec(l, begin, end, copy, y):
    if end - begin <= 1:
        # we have only 1 pair
        return (0, 1 if l[begin][1] >= y else 0)
    mid = begin + ((end - begin) // 2)
    left = count_rec(copy, begin, mid, l, y)
    right = count_rec(copy, mid, end, l, y)
    between = merge_count(l, begin, mid, end, copy, left[1], y)
    # return (inversion count, number of pairs (i, j) with j >= y)
    return (left[0] + right[0] + between, left[1] + right[1])

def merge_count(l, begin, mid, end, copy, left_y_count, y):
    result = 0
    i, j = begin, mid
    k = begin
    while i < mid and j < end:
        if copy[i][0] > copy[j][0]:
            if y > copy[j][1]:
                result += left_y_count
            smaller = copy[j]
            j += 1
        else:
            if copy[i][1] >= y:
                left_y_count -= 1
            smaller = copy[i]
            i += 1
        l[k] = smaller
        k += 1
    while i < mid:
        l[k] = copy[i]
        i += 1
        k += 1
    while j < end:
        l[k] = copy[j]
        j += 1
        k += 1
    return result

test_case = [(1,1), (6,4), (6,3), (1,2), (1,2), (3,3), (6,2), (0,1)]
fixed_y = 2
print(count_half_inversions(test_case, fixed_y))
I've been trying to solve this problem for weeks, but couldn't arrive at a solution.
You start with two numbers X and Y, both equal to 1. The only valid operations are replacing X with X+Y or replacing Y with X+Y. We need to find the minimum number of iterations needed to reach a specific number.
e.g., if the number is 5:
X=1, Y=1; X = X+Y
X=2, Y=1; Y = X+Y
X=2, Y=3; Y = Y+X
X=2, Y=5; Stop answer reached
My take: if the number is odd, say 23, decrement it by 1, giving 22. Find the largest proper divisor of 22, which is 11. Reach 11 by repeatedly adding 1s, so that:
X=11; Y=1 ; Y=Y+X
X=11; Y=12; X=X+Y
X=23, answer reached
But the problem with this approach is that I cannot apply it recursively: even if I reach a certain point, say X equals the required value, the Y value ends up somewhere I cannot reuse to reach another value.
Now I can give an O(n log n) solution.
The method is similar to the greatest common divisor (Euclidean) algorithm.
Let f(x, y) denote the minimum number of iterations needed to reach the state (x, y). This state can only have been reached from (x-y, y) if x > y, or from (x, y-x) if x < y. So the way to reach a state (x, y) is unique, and we can calculate f(x, y) in O(log n), just like the gcd.
The answer is min( f(n, i) | 1 <= i < n ), so the complexity is O(n log n).
python code:
def gcd(n, m):
    if m == 0:
        return n
    return gcd(m, n % m)

def calculate(x, y):
    if y == 0:
        return -1
    return calculate(y, x % y) + x // y

def solve(n):
    x = 0
    best = n
    for i in range(1, n):
        if gcd(n, i) == 1:
            ans = calculate(n, i)
            if ans < best:
                best = ans
                x = i
    print(best)

if __name__ == '__main__':
    solve(5)
If the numbers are not that big (say, below 1000), you can use a breadth-first search.
Consider a directed graph where each vertex is a pair of numbers (X, Y), and from each such vertex there are two edges, to vertices (X+Y, Y) and (X, X+Y). Run a BFS on that graph from (1, 1) until you reach any pair containing the number you need.
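A minimal sketch of that search in Python (the pruning of states whose values exceed the target is an extra assumption, safe here because the values only ever grow):

from collections import deque

def min_iterations(target):
    # BFS over states (X, Y), starting from (1, 1).
    if target == 1:
        return 0
    queue = deque([(1, 1, 0)])            # (X, Y, iterations so far)
    seen = {(1, 1)}
    while queue:
        x, y, steps = queue.popleft()
        if x == target or y == target:
            return steps
        for nxt in ((x + y, y), (x, x + y)):
            if max(nxt) <= target and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt[0], nxt[1], steps + 1))
    return -1                             # should not happen for target >= 1

print(min_iterations(5))   # => 3, matching the example above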
I have to write a program that finds all convex quadrilaterals from a given list of 2D points.
I have tried using the vector cross product, but that doesn't seem to lead to a correct solution.
Maybe there is an efficient algorithm for this problem, but I cannot find it.
This is an example case with inputs and outputs:
Input
Number of Points:
6
coordinates of points (x,y):
0 0
0 1
1 0
1 1
2 0
2 1
Output
Number of convex quadrilaterals:
9
A quadrilateral is convex if its diagonals intersect. Conversely, if two line segments intersect, then their four endpoints make a convex quadrilateral.
Every pair of points gives you a line segment, and every point of intersection between two line segments corresponds to a convex quadrilateral.
You can find the points of intersection using either the naïve algorithm that compares all pairs of segments, or the Bentley–Ottmann algorithm. The former takes O(n^4); the latter O((n^2 + q) log n), where q is the number of convex quadrilaterals. In the worst case q = Θ(n^4) (consider n points on a circle), so Bentley–Ottmann is not always faster.
Here's the naïve version in Python:
import numpy as np
from itertools import combinations
def intersection(s1, s2):
"""
Return the intersection point of line segments `s1` and `s2`, or
None if they do not intersect.
"""
p, r = s1[0], s1[1] - s1[0]
q, s = s2[0], s2[1] - s2[0]
rxs = float(np.cross(r, s))
if rxs == 0: return None
t = np.cross(q - p, s) / rxs
u = np.cross(q - p, r) / rxs
if 0 < t < 1 and 0 < u < 1:
return p + t * r
return None
def convex_quadrilaterals(points):
"""
Generate the convex quadrilaterals among `points`.
"""
segments = combinations(points, 2)
for s1, s2 in combinations(segments, 2):
if intersection(s1, s2) != None:
yield s1, s2
And an example run:
>>> points = map(np.array, [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)])
>>> len(list(convex_quadrilaterals(points)))
9
My input is three numbers: a number s, and the beginning b and end e of a range, with 0 <= s, b, e <= 10^1000. The task is to find the minimal Levenshtein distance between s and all numbers in the range [b, e]. It is not necessary to find the number minimizing the distance; the minimal distance is sufficient.
Obviously I have to read the numbers as strings, because standard C++ types will not handle such large numbers. Calculating the Levenshtein distance for every number in the possibly huge range is not feasible.
Any ideas?
[EDIT 10/8/2013: Some cases considered in the DP algorithm actually don't need to be considered after all, though considering them does not lead to incorrectness :)]
In the following I describe an algorithm that takes O(N^2) time, where N is the largest number of digits in any of b, e, or s. Since all these numbers are limited to 1000 digits, this means at most a few million basic operations, which will take milliseconds on any modern CPU.
Suppose s has n digits. In the following, "between" means "inclusive"; I will say "strictly between" if I mean "excluding its endpoints". Indices are 1-based. x[i] means the ith digit of x, so e.g. x[1] is its first digit.
Splitting up the problem
The first thing to do is to break up the problem into a series of subproblems in which b and e have the same number of digits. Suppose e has k >= 0 more digits than b: break up the problem into k+1 subproblems. E.g. if b = 5 and e = 14032, create the following subproblems:
b = 5, e = 9
b = 10, e = 99
b = 100, e = 999
b = 1000, e = 9999
b = 10000, e = 14032
We can solve each of these subproblems, and take the minimum solution.
The easy cases: the middle
The easy cases are the ones in the middle. Whenever e has k >= 1 more digits than b, there will be k-1 subproblems (e.g. 3 above) in which b is a power of 10 and e is the next power of 10, minus 1. Suppose b is 10^m. Notice that choosing any digit between 1 and 9, followed by any m digits between 0 and 9, produces a number x that is in the range b <= x <= e. Furthermore there are no numbers in this range that cannot be produced this way. The minimum Levenshtein distance between s (or in fact any given length-n digit string that doesn't start with a 0) and any number x in the range 10^m <= x <= 10^(m+1)-1 is necessarily abs(m+1-n), since if m+1 >= n it's possible to simply choose the first n digits of x to be the same as those in s, and delete the remainder, and if m+1 < n then choose the first m+1 to be the same as those in s and insert the remainder.
In fact we can deal with all these subproblems in a single constant-time operation: if the smallest "easy" subproblem has b = 10^m and the largest "easy" subproblem has b = 10^u, then the minimum Levenshtein distance between s and any number in any of these ranges is (m+1)-n if n < m+1, n-(u+1) if n > u+1, and 0 otherwise.
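As a tiny illustration of that constant-time step (a sketch following the reasoning above, with n, m and u as defined there):

def easy_cases_min_distance(n, m, u):
    # Minimum Levenshtein distance between a length-n digit string and any
    # number having between m+1 and u+1 digits (all the "easy" ranges at once).
    if n < m + 1:
        return (m + 1) - n     # every such x has more digits than s
    if n > u + 1:
        return n - (u + 1)     # s has more digits than any such x
    return 0                   # some easy range contains numbers of length n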
The hard cases: the end(s)
The hard cases are when b and e are not restricted to have the form b = 10^m and e = 10^(m+1)-1 respectively. Any master problem can generate at most two subproblems like this: either two "ends" (resulting from a master problem in which b and e have different numbers of digits, such as the example at the top) or a single subproblem (i.e. the master problem itself, which didn't need to be subdivided at all because b and e already have the same number of digits). Note that due to the previous splitting of the problem, we can assume that the subproblem's b and e have the same number of digits, which we will call m.
Super-Levenshtein!
What we will do is design a variation of the Levenshtein DP matrix that calculates the minimum Levenshtein distance between a given digit string (s) and any number x in the range b <= x <= e. Despite this added "power", the algorithm will still run in O(n^2) time :)
First, observe that if b and e have the same number of digits and b != e, then it must be the case that they consist of some number q >= 0 of identical digits at the left, followed by a digit that is larger in e than in b. Now consider the following procedure for generating a random digit string x:
1. Set x to the first q digits of b.
2. Append a randomly-chosen digit d between b[q+1] and e[q+1] to x.
3. If d == b[q+1], we "hug" the lower bound:
   For i from q+1 to m:
     If b[i] == 9 then append b[i]. [EDIT 10/8/2013: Actually this can't happen, because we chose q so that e[i] will be larger than b[i], and there is no digit larger than 9!]
     Otherwise, flip a coin:
       Heads: Append b[i].
       Tails: Append a randomly-chosen digit d > b[i], then goto step 6.
   Stop.
4. Else if d == e[q+1], we "hug" the upper bound:
   For i from q+1 to m:
     If e[i] == 0 then append e[i]. [EDIT 10/8/2013: Actually this can't happen, because we chose q so that b[i] will be smaller than e[i], and there is no digit smaller than 0!]
     Otherwise, flip a coin:
       Heads: Append e[i].
       Tails: Append a randomly-chosen digit d < e[i], then goto step 6.
   Stop.
5. Otherwise (if d is strictly between b[q+1] and e[q+1]), drop through to step 6.
6. Keep appending randomly-chosen digits to x until it has m digits.
The basic idea is that after including all the digits that you must include, you can either "hug" the lower bound's digits for as long as you want, or "hug" the upper bound's digits for as long as you want, and as soon as you decide to stop "hugging", you can thereafter choose any digits you want. For suitable random choices, this procedure will generate all and only the numbers x such that b <= x <= e.
In the "usual" Levenshtein distance computation between two strings s and x, of lengths n and m respectively, we have a rectangular grid from (0, 0) to (n, m), and at each grid point (i, j) we record the Levenshtein distance between the prefix s[1..i] and the prefix x[1..j]. The score at (i, j) is calculated from the scores at (i-1, j), (i, j-1) and (i-1, j-1) using bottom-up dynamic programming. To adapt this to treat x as one of a set of possible strings (specifically, a digit string corresponding to a number between b and e) instead of a particular given string, what we need to do is record not one but two scores for each grid point: one for the case where we assume that the digit at position j was chosen to hug the lower bound, and one where we assume it was chosen to hug the upper bound. The 3rd possibility (step 5 above) doesn't actually require space in the DP matrix because we can work out the minimal Levenshtein distance for the entire rest of the input string immediately, very similar to the way we work it out for the "easy" subproblems in the first section.
Super-Levenshtein DP recursion
Call the overall minimal score at grid point (i, j) v(i, j). Let diff(a, b) = 1 if characters a and b are different, and 0 otherwise. Let inrange(a, b..c) be 1 if the character a is in the range b..c, and 0 otherwise. The calculations are:
# The best Lev distance overall between s[1..i] and x[1..j]
v(i, j) = min(hb(i, j), he(i, j))
# The best Lev distance between s[1..i] and x[1..j] obtainable by
# continuing to hug the lower bound
hb(i, j) = min(hb(i-1, j)+1, hb(i, j-1)+1, hb(i-1, j-1)+diff(s[i], b[j]))
# The best Lev distance between s[1..i] and x[1..j] obtainable by
# continuing to hug the upper bound
he(i, j) = min(he(i-1, j)+1, he(i, j-1)+1, he(i-1, j-1)+diff(s[i], e[j]))
At the point in time when v(i, j) is being calculated, we will also calculate the Levenshtein distance resulting from choosing to "stop hugging", i.e. by choosing a digit that is strictly in between b[j] and e[j] (if j == q) or (if j != q) is either above b[j] or below e[j], and thereafter freely choosing digits to make the suffix of x match the suffix of s as closely as possible:
# The best Lev distance possible between the ENTIRE STRINGS s and x, given that
# we choose to stop hugging at the jth digit of x, and have optimally aligned
# the first i digits of s to these j digits
sh(i, j) = if j >= q then shc(i, j) + abs(n-i-m+j)
           else infinity
shc(i, j) = if j == q then
                min(hb(i, j-1)+1, hb(i-1, j-1)+inrange(s[i], (b[j]+1)..(e[j]-1)))
            else
                min(hb(i, j-1)+1, hb(i-1, j-1)+inrange(s[i], (b[j]+1)..9),
                    he(i, j-1)+1, he(i-1, j-1)+inrange(s[i], 0..(e[j]-1)))
The formula for shc(i, j) doesn't need to consider "downward" moves, since such moves don't involve any digit choice for x.
The overall minimal Levenshtein distance is the minimum of v(n, m) and sh(i, j), for all 0 <= i <= n and 0 <= j <= m.
Complexity
Take N to be the largest number of digits in any of s, b or e. The original problem can be split in linear time into at most 1 set of easy problems that collectively takes O(1) time to solve and 2 hard subproblems that each take O(N^2) time to solve using the super-Levenshtein algorithm, so overall the problem can be solved in O(N^2) time, i.e. time proportional to the square of the number of digits.
A first idea to speed up the computation (works if |e-b| is not too large):
Question: how much can the Levenshtein distance change when we compare s with n and then with n+1?
Answer: not too much!
Let's see the dynamic-programming tables for s = 12007 and two consecutive n
n = 12296
0 1 2 3 4 5
1 0 1 2 3 4
2 1 0 1 2 3
3 2 1 1 2 3
4 3 2 2 2 3
5 4 3 3 3 3
and
n = 12297
0 1 2 3 4 5
1 0 1 2 3 4
2 1 0 1 2 3
3 2 1 1 2 3
4 3 2 2 2 3
5 4 3 3 3 2
As you can see, only the last column changes, since n and n+1 have the same digits, except for the last one.
If you have the dynamic-programming table for the edit distance between s = 12007 and n = 12296, you already have almost the whole table for n = 12297; you just need to update the last column!
Obviously if n = 12299 then n+1 = 12300 and you need to update the last 3 columns of the previous table... but this happens just once every 100 iterations.
In general, you have to:
update the last column on every iteration (so, length(s) cells)
update the second-to-last column once every 10 iterations
update the third-to-last column once every 100 iterations
and so on.
so let L = length(s) and D = e - b. First you compute the edit distance between s and b. Then you can find the minimum Levenshtein distance over [b, e] by looping over every integer in the interval. There are D of them, so the execution time is about
L*D + L*D/10 + L*D/100 + ...
Now since
1 + 1/10 + 1/100 + ... = 10/9
we have an algorithm which is O(L*D).
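A sketch of this column-reuse idea in Python (practical only when e - b is small, as noted above; here b and e are taken as ordinary integers and the numbers in the range are handled as digit strings):

def levenshtein_columns(s, t):
    # Standard DP kept column by column: cols[j] is the column for the prefix t[:j].
    cols = [list(range(len(s) + 1))]
    for j, tc in enumerate(t, start=1):
        prev, cur = cols[-1], [j]
        for i, sc in enumerate(s, start=1):
            cur.append(min(prev[i] + 1,                 # delete s[i-1]
                           cur[i - 1] + 1,              # insert tc
                           prev[i - 1] + (sc != tc)))   # match / substitute
        cols.append(cur)
    return cols

def min_distance_over_range(s, b, e):
    # Minimal edit distance between s and any integer in [b, e], recomputing
    # only the columns whose digits change from one number to the next.
    t = str(b)
    cols = levenshtein_columns(s, t)
    best = cols[-1][-1]
    for x in range(b + 1, e + 1):
        nxt = str(x)
        j = next(k for k in range(len(t)) if t[k] != nxt[k])   # first changed digit
        cols = cols[:j + 1]                                    # keep unchanged columns
        for jj in range(j, len(nxt)):
            prev, cur = cols[-1], [jj + 1]
            for i, sc in enumerate(s, start=1):
                cur.append(min(prev[i] + 1, cur[i - 1] + 1,
                               prev[i - 1] + (sc != nxt[jj])))
            cols.append(cur)
        t = nxt
        best = min(best, cols[-1][-1])
    return best

print(min_distance_over_range("12007", 12290, 12300))   # => 2 (e.g. for 12297)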