I'm trying to find a name for my problem, so I don't have to re-invent the wheel when coding an algorithm which solves it...
I have, say, 2,000 binary (row) vectors and I need to pick 500 of them. In the picked sample I do column sums and I want my sample to be as close as possible to a pre-defined distribution of the column sums. I'll be working with 20 to 60 columns.
A tiny example:
Out of the vectors:
110
010
011
110
100
I need to pick 2 to get column sums 2, 1, 0. The solution (exact in this case) would be
110
100
My ideas so far
one could maybe call this a binary multidimensional knapsack, but I did not find any algorithms for that
Linear Programming could help, but I'd need some step-by-step explanation as I have no experience with it
as an exact solution is not always feasible, something like simulated annealing or brute force could work well
a hacky way using constraint solvers comes to mind - first set the constraints tight and gradually loosen them until some solution is found - given that CSP should be much faster than ILP...?
My concrete, practical (if the approximation guarantee works out for you) suggestion would be to apply the maximum entropy method (in Chapter 7 of Boyd and Vandenberghe's book Convex Optimization; you can probably find several implementations with your favorite search engine) to find the maximum entropy probability distribution on row indexes such that (1) no row index is more likely than 1/500 (2) the expected value of the row vector chosen is 1/500th of the predefined distribution. Given this distribution, choose each row independently with probability 500 times its distribution likelihood, which will give you 500 rows on average. If you need exactly 500, repeat until you get exactly 500 (shouldn't take too many tries due to concentration bounds).
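For what it's worth, here is a rough sketch of that idea using scipy's general-purpose optimizer instead of a dedicated maximum entropy solver; the function name, the SLSQP choice, and the sampling step are my assumptions, not part of the book's method:
import numpy as np
from scipy.optimize import minimize

def max_entropy_pick(vectors, target, m=500, seed=0):
    V = np.asarray(vectors, dtype=float)          # shape (n_rows, n_cols)
    n = len(V)
    cap = 1.0 / m                                 # (1) no row index more likely than 1/m

    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)
        return float(np.sum(p * np.log(p)))       # minimize negative entropy

    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        # (2) expected row vector equals 1/m of the predefined column-sum target
        {"type": "eq", "fun": lambda p: V.T @ p - np.asarray(target, dtype=float) / m},
    ]
    res = minimize(neg_entropy, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, cap)] * n, constraints=cons)
    p = np.clip(res.x, 0.0, cap)

    rng = np.random.default_rng(seed)
    return np.flatnonzero(rng.random(n) < m * p)  # ~m rows in expectation
If the draw does not contain exactly 500 rows, redraw as described above; for 2,000 rows a dedicated maximum entropy solver (or the dual formulation from the book) will be considerably faster than this generic sketch.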
Firstly, I will make some assumptions regarding this problem:
Regardless of whether the column sum of the selected solution is over or under the target, it counts the same.
The sums of the first, second, and third columns are equally weighted in the solution (i.e. if one solution's first column sum is off by 1 and another's third column sum is off by 1, the two solutions are equally good).
The closest problem I can think of is the subset sum problem, which itself can be thought of as a special case of the knapsack problem.
However, both of these problems are NP-complete, meaning there is no known polynomial-time algorithm that solves them, even though a solution is easy to verify.
If I were you, the two arguably most practical approaches to this problem would be linear programming and machine learning.
Depending on how many columns you are optimising, linear programming lets you control how finely tuned you want the solution to be, in exchange for running time. You should read up on this, because it is fairly simple and efficient.
With machine learning, you need a lot of training data (sets of vectors together with their solutions). You don't even need to specify what you want; many machine learning algorithms can deduce what to optimise from the data.
Both approaches have pros and cons; decide which one to use based on your circumstances and problem set.
This can definitely be modeled as an (integer!) linear program (many problems can). Once you have the model, you can use a program such as lp_solve to solve it.
We model whether vector i is selected with a variable x_i, which can be 0 or 1.
Then for each column c, we have a constraint:
sum of all (x_i * value of i in column c) = target for column c
Taking your example, in lp_solve this could look like:
min: ;
+x1 +x4 +x5 >= 2;
+x1 +x4 +x5 <= 2;
+x1 +x2 +x3 +x4 <= 1;
+x1 +x2 +x3 +x4 >= 1;
+x3 <= 0;
+x3 >= 0;
bin x1, x2, x3, x4, x5;
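For comparison, here is a minimal sketch of the same kind of integer model in Python, assuming the PuLP library (any MILP solver would do; the function and variable names are mine). Unlike the lp_solve model above, it also adds a "pick exactly M vectors" constraint and lets the column sums deviate from the target, minimizing the total absolute deviation, so it still returns something when an exact match is impossible:
import pulp

def pick_vectors(vectors, target, m):
    n, c = len(vectors), len(target)
    prob = pulp.LpProblem("pick_vectors", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    d = [pulp.LpVariable(f"d{j}", lowBound=0) for j in range(c)]  # per-column deviation
    prob += pulp.lpSum(d)                       # objective: total absolute deviation
    prob += pulp.lpSum(x) == m                  # pick exactly m vectors
    for j in range(c):
        col_sum = pulp.lpSum(vectors[i][j] * x[i] for i in range(n))
        prob += col_sum - target[j] <= d[j]
        prob += target[j] - col_sum <= d[j]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if x[i].value() > 0.5]

vectors = [(1,1,0), (0,1,0), (0,1,1), (1,1,0), (1,0,0)]
print(pick_vectors(vectors, (2, 1, 0), 2))      # e.g. [0, 4], i.e. 110 and 100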
If you are fine with a heuristic-based search approach, here is one.
Go over the list and find the minimum squared sum of the digit wise difference between each bit string and the goal. For example, if we are looking for 2, 1, 0, and we are scoring 0, 1, 1, we would do it in the following way:
Take the digit wise difference:
2, 0, -1
Square the digit wise difference:
4, 0, 1
Sum:
5
As a side note, squaring the difference when scoring is a common method in heuristic search. In your case, it makes sense because bit strings that have a 1 as the first digit are a lot more interesting to us. In your case this simple algorithm would pick first 110, then 100, which is the best solution.
In any case, there are some optimizations that could be made to this; I will post them here if this kind of approach is what you are looking for, but this is the core of the algorithm.
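For what it's worth, here is one possible reading of that heuristic as code (my own sketch, not necessarily the poster's exact procedure): repeatedly pick the vector whose addition to the running column sums leaves the smallest squared distance to the target.
def greedy_pick(vectors, target, m):
    remaining = list(vectors)
    sums = [0] * len(target)
    picked = []
    for _ in range(m):
        # score = squared distance to the target if this vector were added
        best = min(remaining,
                   key=lambda v: sum((t - (s + b)) ** 2
                                     for t, s, b in zip(target, sums, v)))
        remaining.remove(best)
        picked.append(best)
        sums = [s + b for s, b in zip(sums, best)]
    return picked

vectors = [(1,1,0), (0,1,0), (0,1,1), (1,1,0), (1,0,0)]
print(greedy_pick(vectors, (2, 1, 0), 2))   # [(1, 1, 0), (1, 0, 0)]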
You have a given target vector. You want to select M vectors out of N that have the closest sum to the target. Let's say you use the Euclidean distance to measure whether one selection is better than another.
If you want an exact sum, have a look at the k-sum problem, which is a generalization of the 3SUM problem. The problem is harder than the subset sum problem, because you want an exact number of elements to add to a target value. There is a solution in O(N^(M/2) · lg N), but that means more than 2000^250 * 7.6 > 10^826 operations in your case (in the favorable case where vector operations have a cost of 1).
First conclusion: do not try to get an exact result unless your vectors have some characteristics that may reduce the complexity.
Here's a hill climbing approach:
sort the vectors by number of 1's: 111... first, 000... last;
use the polynomial time approximate algorithm for the subset sum;
you have an approximate solution with K elements. Because of the order of elements (the big ones come first), K should be as small as possible:
if K >= M, you take the M first vectors of the solution and that's probably near the best you can do.
if K < M, you can remove the first vector and try to replace it with 2 or more vectors from the rest of the N vectors, using the same technique, until you have M vectors. To summarize: split the big vectors into smaller ones until you reach the correct number of vectors.
Here's a proof of concept with numbers, in Python:
import random

def distance(x, y):
    return abs(x-y)

def show(ls):
    if len(ls) < 10:
        return str(ls)
    else:
        return ", ".join(map(str, ls[:5]+("...",)+ls[-5:]))

def find(is_xs, target):
    # see https://en.wikipedia.org/wiki/Subset_sum_problem#Pseudo-polynomial_time_dynamic_programming_solution
    S = [(0, ())] # we store indices along with values to get the path
    for i, x in is_xs:
        T = [(x + t, js + (i,)) for t, js in S]
        U = sorted(S + T)
        y, ks = U[0]
        S = [(y, ks)]
        for z, ls in U:
            if z == target: # use the euclidean distance here if you want an approximation
                return ls
            if z != y and z < target:
                y, ks = z, ls
                S.append((z, ls))
    ls = S[-1][1] # take the closest element to target
    return ls

N = 2000
M = 500
target = 1000
xs = [random.randint(0, 10) for _ in range(N)]
print("Take {} numbers out of {} to make a sum of {}".format(M, show(tuple(xs)), target))
xs = sorted(xs, reverse=True)
is_xs = list(enumerate(xs))
print("Sorted numbers: {}".format(show(tuple(is_xs))))
ls = find(is_xs, target)
print("FIRST TRY: {} elements ({}) -> {}".format(len(ls), show(ls), sum(x for i, x in is_xs if i in ls)))
splits = 0
while len(ls) < M:
    first_x = xs[ls[0]]
    js_ys = [(i, x) for i, x in is_xs if i not in ls and x != first_x]
    replace = find(js_ys, first_x)
    splits += 1
    if len(replace) < 2 or len(replace) + len(ls) - 1 > M or sum(xs[i] for i in replace) != first_x:
        print("Give up: can't replace {}.\nAdd the lowest elements.".format(ls[:1]))
        ls += tuple([i for i, x in is_xs if i not in ls][len(ls)-M:])
        break
    print("Replace {} (={}) by {} (={})".format(ls[:1], first_x, replace, sum(xs[i] for i in replace)))
    ls = tuple(sorted(ls[1:] + replace)) # use a heap?
    print("{} elements ({}) -> {}".format(len(ls), show(ls), sum(x for i, x in is_xs if i in ls)))
print("AFTER {} splits, {} -> {}".format(splits, show(ls), sum(x for i, x in is_xs if i in ls)))
The result is obviously not guaranteed to be optimal.
Remarks:
Complexity: find has a polynomial time complexity (see the Wikipedia page) and is called at most M^2 times, hence the complexity remains polynomial. In practice, the process is reasonably fast (split calls have a small target).
Vectors: to ensure that you reach the target with the minimum number of elements, you can improve the order of the elements. Your target is (t_1, ..., t_c): if you sort the t_j from max to min, you get the most important columns first. You can then sort the vectors by number of 1s and then by the presence of a 1 in the most important columns. E.g. target = 4 8 6 => 1 1 1 > 0 1 1 > 1 1 0 > 1 0 1 > 0 1 0 > 0 0 1 > 1 0 0 > 0 0 0 (see the small sketch after these remarks).
find (vectors): if the current sum exceeds the target in all the columns, then you cannot get closer to the target (any vector you add to the current sum will take you farther from it): don't add that sum to S (this is the z >= target case for numbers).
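For what it's worth, a possible sort key implementing the "Vectors" remark above (my own illustration, reproducing the target = 4 8 6 example, not part of the original answer) could look like this:
def vector_sort_key(vec, target):
    # columns ordered by decreasing target value = decreasing importance
    cols_by_importance = sorted(range(len(target)), key=lambda j: -target[j])
    # sort by number of 1s, then by 1s in the most important columns (descending)
    return (-sum(vec), tuple(-vec[j] for j in cols_by_importance))

target = (4, 8, 6)
vects = [(0,1,0), (1,1,1), (1,0,0), (0,0,1), (1,1,0), (0,1,1), (1,0,1), (0,0,0)]
print(sorted(vects, key=lambda v: vector_sort_key(v, target)))
# [(1,1,1), (0,1,1), (1,1,0), (1,0,1), (0,1,0), (0,0,1), (1,0,0), (0,0,0)]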
I propose a simple ad hoc algorithm, which, broadly speaking, is a kind of gradient descent algorithm. It seems to work relatively well for input vectors which have a distribution of 1s “similar” to the target sum vector, and probably also for all “nice” input vectors, as defined in a comment of yours. The solution is not exact, but the approximation seems good.
The distance between the sum vector of the output vectors and the target vector is taken to be Euclidean. Minimizing it means minimizing the sum of the square differences of the sum vector and the target vector (the square root is not needed because it is monotonic). The algorithm does not guarantee to yield the sample that minimizes the distance from the target, but anyway makes a serious attempt at doing so, by always moving in some locally optimal direction.
The algorithm can be split into 3 parts.
First of all the first M candidate output vectors out of the N input vectors (e.g., N=2000, M=500) are put in a list, and the remaining vectors are put in another.
Then "approximately optimal" swaps between vectors in the two lists are done, until either the distance would not decrease any more, or a predefined maximum number of iterations is reached. An approximately optimal swap is one where removing the first vector from the list of output vectors causes a maximal decrease or minimal increase of the distance, and then, after the removal of the first vector, adding the second vector to the same list causes a maximal decrease of the distance. The whole swap is avoided if the net result is not a decrease of the distance.
Then, as a last phase, "optimal" swaps are done, again stopping on no decrease in distance or maximum number of iterations reached. Optimal swaps cause a maximal decrease of the distance, without requiring the removal of the first vector to be optimal in itself. To find an optimal swap all vector pairs have to be checked. This phase is much more expensive, being O(M(N-M)), while the previous "approximate" phase is O(M+(N-M))=O(N). Luckily, when entering this phase, most of the work has already been done by the previous phase.
from typing import List, Tuple

def get_sample(vects: List[Tuple[int]], target: Tuple[int], n_out: int,
               max_approx_swaps: int = None, max_optimal_swaps: int = None,
               verbose: bool = False) -> List[Tuple[int]]:
    """
    Get a sample of the input vectors having a sum close to the target vector.
    Closeness is measured in Euclidean metrics. The output is not guaranteed to be
    optimal (minimum square distance from target), but a serious attempt is made.
    The max_* parameters can be used to avoid too long execution times,
    tune them to your needs by setting verbose to True, or leave them None (∞).
    :param vects: the list of vectors (tuples) with the same number of "columns"
    :param target: the target vector, with the same number of "columns"
    :param n_out: the requested sample size
    :param max_approx_swaps: the max number of approximately optimal vector swaps,
                             None means unlimited (default: None)
    :param max_optimal_swaps: the max number of optimal vector swaps,
                              None means unlimited (default: None)
    :param verbose: print some info if True (default: False)
    :return: the sample of n_out vectors having a sum close to the target vector
    """
    def square_distance(v1, v2):
        return sum((e1 - e2) ** 2 for e1, e2 in zip(v1, v2))

    n_vec = len(vects)
    assert n_vec > 0
    assert n_out > 0
    n_rem = n_vec - n_out
    assert n_rem > 0
    output = vects[:n_out]
    remain = vects[n_out:]
    n_col = len(vects[0])
    assert n_col == len(target) > 0
    sumvect = (0,) * n_col
    for outvect in output:
        sumvect = tuple(map(int.__add__, sumvect, outvect))
    sqdist = square_distance(sumvect, target)
    if verbose:
        print(f"sqdist = {sqdist:4} after"
              f" picking the first {n_out} vectors out of {n_vec}")
    if max_approx_swaps is None:
        max_approx_swaps = sqdist
    n_approx_swaps = 0
    while sqdist and n_approx_swaps < max_approx_swaps:
        # find the best vect to subtract (the square distance MAY increase)
        sqdist_0 = None
        index_0 = None
        sumvect_0 = None
        for index in range(n_out):
            tmp_sumvect = tuple(map(int.__sub__, sumvect, output[index]))
            tmp_sqdist = square_distance(tmp_sumvect, target)
            if sqdist_0 is None or sqdist_0 > tmp_sqdist:
                sqdist_0 = tmp_sqdist
                index_0 = index
                sumvect_0 = tmp_sumvect
        # find the best vect to add,
        # but only if there is a net decrease of the square distance
        sqdist_1 = sqdist
        index_1 = None
        sumvect_1 = None
        for index in range(n_rem):
            tmp_sumvect = tuple(map(int.__add__, sumvect_0, remain[index]))
            tmp_sqdist = square_distance(tmp_sumvect, target)
            if sqdist_1 > tmp_sqdist:
                sqdist_1 = tmp_sqdist
                index_1 = index
                sumvect_1 = tmp_sumvect
        if sumvect_1:
            tmp = output[index_0]
            output[index_0] = remain[index_1]
            remain[index_1] = tmp
            sqdist = sqdist_1
            sumvect = sumvect_1
            n_approx_swaps += 1
        else:
            break
    if verbose:
        print(f"sqdist = {sqdist:4} after {n_approx_swaps}"
              f" approximately optimal swap{'s'[n_approx_swaps == 1:]}")
    diffvect = tuple(map(int.__sub__, sumvect, target))
    if max_optimal_swaps is None:
        max_optimal_swaps = sqdist
    n_optimal_swaps = 0
    while sqdist and n_optimal_swaps < max_optimal_swaps:
        # find the best pair to swap,
        # but only if the square distance decreases
        best_sqdist = sqdist
        best_diffvect = diffvect
        best_pair = None
        for i0 in range(n_out):
            tmp_diffvect = tuple(map(int.__sub__, diffvect, output[i0]))
            for i1 in range(n_rem):
                new_diffvect = tuple(map(int.__add__, tmp_diffvect, remain[i1]))
                new_sqdist = sum(d * d for d in new_diffvect)
                if best_sqdist > new_sqdist:
                    best_sqdist = new_sqdist
                    best_diffvect = new_diffvect
                    best_pair = (i0, i1)
        if best_pair:
            tmp = output[best_pair[0]]
            output[best_pair[0]] = remain[best_pair[1]]
            remain[best_pair[1]] = tmp
            sqdist = best_sqdist
            diffvect = best_diffvect
            n_optimal_swaps += 1
        else:
            break
    if verbose:
        print(f"sqdist = {sqdist:4} after {n_optimal_swaps}"
              f" optimal swap{'s'[n_optimal_swaps == 1:]}")
    return output
from random import randrange
C = 30 # number of columns
N = 2000 # total number of vectors
M = 500 # number of output vectors
F = 0.9 # fill factor of the target sum vector
T = int(M * F) # maximum value + 1 that can appear in the target sum vector
A = 10000 # maximum number of approximately optimal swaps, may be None (∞)
B = 10 # maximum number of optimal swaps, may be None (unlimited)
target = tuple(randrange(T) for _ in range(C))
vects = [tuple(int(randrange(M) < t) for t in target) for _ in range(N)]
sample = get_sample(vects, target, M, A, B, True)
Typical output:
sqdist = 2639 after picking the first 500 vectors out of 2000
sqdist = 9 after 27 approximately optimal swaps
sqdist = 1 after 4 optimal swaps
P.S.: As it stands, this algorithm is not limited to binary input vectors; integer vectors would work too. Intuitively, though, I suspect that the quality of the optimization could suffer and that this algorithm is more appropriate for binary vectors.
P.P.S.: Execution times with your kind of data are probably acceptable with standard CPython, but get better (like a couple of seconds, almost a factor of 10) with PyPy. To handle bigger sets of data, the algorithm would have to be translated to C or some other language, which should not be difficult at all.
I have an assignment for university which is due today and I am starting to get nervous. We recently discussed dynamic programming for algorithm optimization, and now we shall implement an algorithm ourselves which uses dynamic programming.
Task
So we have a simple game for which we shall write an algorithm to find the best possible strategy to get the best possible score (assuming both players play optimally).
We have a row of numbers like 4 7 2 3 (note that according to the task description it is not assured that there is always an even count of numbers). The players take turns taking a number from the front or the back. When the last number is picked, each player's numbers are summed up and the two sums are subtracted from each other; the result is the score for player 1. So an optimal order for the above numbers would be
P1: 3 -> p2: 4 -> p1: 7 -> p2: 2
So p1 would have 3, 7 and p2 would have 4, 2 which results in a final score of (3 + 7) - (4 + 2) = 4 for player 1.
In the first task we should simply implement "an easy recursive way of solving this", where I just used a minimax algorithm which seemed to be fine for the automated test. In the second task, however, I am stuck, since we shall now work with dynamic programming techniques. The only hint I found was that a matrix is mentioned in the task itself.
What i know so far
We had an example of a word conversion problem where such a matrix was used. It was called the edit distance of two words, which means how many changes (insertions, deletions, substitutions) of letters it takes to change one word into another. There the two words were arranged as a table or matrix, and for each combination of prefixes the distance was calculated.
Example:
W H A T
| D | I
v v
W A N T
The editing distance would be 2. And you had a table where the editing distance for each pair of prefixes was displayed, like this:
"" W H A T
1 2 3 4
W 1 0 1 2 3
A 2 1 1 2 3
N 3 2 2 2 3
T 4 3 3 3 2
So for example from WHA to WAN would take 2 edits: insert N and delete H; from WH to WAN would also take 2 edits: substitute H->A and insert N; and so on. These values were calculated with an "OPT" function, which I think stands for optimization.
I also learned about bottom-up and top-down recursive schemes, but I'm not quite sure how to apply that to my problem.
What i thought about
As a reminder, I use the numbers 4 7 2 3.
I learned from the above that I should try to create a table where each possible result is displayed (like minimax, just that it will be saved beforehand). I then created a simple table where I tried to include the possible draws which can be made, like this (which I think is my OPT function):
          4   7   2   3
     ------------------
a. 4 |    0  -3   2   1
b. 7 |    3   0   5   4
c. 2 |   -2  -5   0  -1
d. 3 |   -1  -4   1   0
The left column marks player 1's draws, the upper row marks player 2's draws, and each entry stands for numberP1 - numberP2. From this table I can at least read off the above-mentioned optimal strategy 3 -> 4 -> 7 -> 2 (-1 + 5), so I'm sure the table should contain all possible results, but I'm not quite sure how to draw the results from it. I had the idea to start iterating over the rows and pick the one with the highest number in it and mark that as the pick from p1 (but that would be greedy anyway); p2 would then search this row for the lowest number and pick that specific entry, which would then be the turn.
Example:
p1 picks row b. 7 | 3 0 5 4, since 5 is the highest value in the table. P2 now picks the 3 from that row because it is the lowest (the 0 is an invalid draw since it is the same number and you can't pick it twice), so the first turn would be 7 -> 4. But then I noticed that this draw is not possible, since the 7 is not accessible from the start. So for each turn you have only 4 possibilities: the outer numbers of the row and the ones which are directly after/before them, since these would be accessible after drawing. So for the first turn I only have rows a. or d., and from those p1 could pick:
4, which leaves p2 with 7 or 3; or p1 takes 3, which leaves p2 with 4 or 2.
But I don't really know how to draw a conclusion out of that, and I'm really stuck.
So I would really like to know if I'm on the right track with this or if I'm overthinking it. Is this the right way to solve this?
The first thing you should try to write down, when starting a dynamic programming algorithm, is a recurrence relation.
Let's first simplify the problem a little. We will consider that the number of cards is even, and that we want to design an optimal strategy for the first player. Once we have managed to solve this version of the problem, the others (odd number of cards, optimizing the strategy for the second player) follow trivially.
So, first, a recurrence relation. Let X(i, j) be the best possible score that player 1 can expect (when player 2 plays optimally as well), when the cards remaining are from the i^th to the j^th ones. Then, the best score that player 1 can expect when playing the game will be represented by X(1, n).
We have:
X(i, j) = max(Arr[i] + X(i+1, j), X(i, j-1) + Arr[j]) if (j - i) % 2 == 1, meaning that when an even number of cards remains it is player 1's turn, and the best score he can expect is the better of taking the card on the left and taking the card on the right.
In the other case, the other player is playing, so he will try to minimize player 1's score, and the card he takes does not count towards player 1's total:
X(i, j) = min(X(i+1, j), X(i, j-1)) if (j - i) % 2 == 0.
The terminal case follows the same logic: with an even total number of cards, a single remaining card is always taken by player 2, so X(i, i) = 0.
Now the algorithm without dynamic programming; here we simply write the recurrence relation as a recursive algorithm:
function get_value(Arr, i, j) {
    if i == j {
        return 0                      // with an even card count, the last card goes to player 2
    } else if (j - i) % 2 == 1 {      // player 1 to move
        return max(
            Arr[i] + get_value(Arr, i+1, j),
            get_value(Arr, i, j-1) + Arr[j]
        )
    } else {                          // player 2 to move: his card does not count for player 1
        return min(
            get_value(Arr, i+1, j),
            get_value(Arr, i, j-1)
        )
    }
}
The problem with this function is that for some given i, j, there will be many redundant calculations of X(i, j). The essence of dynamic programming is to store intermediate results in order to prevent redundant calculations.
Algo with dynamic programming (X is initialized with +inf everywhere):
function get_value(Arr, X, i, j) {
    if X[i][j] != +inf {
        return X[i][j]
    } else if i == j {
        result = 0
    } else if (j - i) % 2 == 1 {
        result = max(
            Arr[i] + get_value(Arr, X, i+1, j),
            get_value(Arr, X, i, j-1) + Arr[j]
        )
    } else {
        result = min(
            get_value(Arr, X, i+1, j),
            get_value(Arr, X, i, j-1)
        )
    }
    X[i][j] = result
    return result
}
As you can see the only difference with the algorithm above is that we now use a 2D array X to store intermediate results. The consequence on time complexity is huge, since the first algorithm runs in O(2^n), while the second runs in O(n²).
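To make the recurrence concrete, here is a small Python translation of the memoized version (a sketch under the same even-number-of-cards assumption; the function name is mine):
from functools import lru_cache

def best_total_for_player1(arr):
    n = len(arr)

    @lru_cache(maxsize=None)
    def x(i, j):
        if i == j:
            return 0                            # the last card goes to player 2
        if (j - i) % 2 == 1:                    # player 1 to move
            return max(arr[i] + x(i + 1, j), x(i, j - 1) + arr[j])
        return min(x(i + 1, j), x(i, j - 1))    # player 2 to move

    return x(0, n - 1)

print(best_total_for_player1([4, 7, 2, 3]))     # 10, i.e. player 1 collects 3 and 7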
Dynamic programming problems can generally be solved in 2 ways, top down and bottom up.
Bottom up requires building a data structure from the simplest to the most complex case. This is harder to write, but offers the option of throwing away parts of the data that you know you won't need again. Top down requires writing a recursive function, and then memoizing. So bottom up can be more efficient, top down is usually easier to write.
I will show both. The naive approach can be:
def best_game(numbers):
    if 0 == len(numbers):
        return 0
    else:
        score_l = numbers[0] - best_game(numbers[1:])
        score_r = numbers[-1] - best_game(numbers[0:-1])
        return max(score_l, score_r)
But we're passing a lot of redundant data. So let's reorganize it slightly.
def best_game(numbers):
    def _best_game(i, j):
        if j <= i:
            return 0
        else:
            score_l = numbers[i] - _best_game(i+1, j)
            score_r = numbers[j-1] - _best_game(i, j-1)
            return max(score_l, score_r)
    return _best_game(0, len(numbers))
And now we can add a caching layer to memoize it:
def best_game(numbers):
    seen = {}
    def _best_game(i, j):
        if j <= i:
            return 0
        elif (i, j) not in seen:
            score_l = numbers[i] - _best_game(i+1, j)
            score_r = numbers[j-1] - _best_game(i, j-1)
            seen[(i, j)] = max(score_l, score_r)
        return seen[(i, j)]
    return _best_game(0, len(numbers))
This approach will be memory and time O(n^2).
Now bottom up.
def best_game(numbers):
    # We start with scores for each 0 length game
    # before, after, and between every pair of numbers.
    # There are len(numbers)+1 of these, and all scores
    # are 0.
    scores = [0] * (len(numbers) + 1)
    for i in range(len(numbers)):
        # We will compute scores for all games of length i+1.
        new_scores = []
        for j in range(len(numbers) - i):
            score_l = numbers[j] - scores[j+1]
            score_r = numbers[j+i] - scores[j]
            new_scores.append(max(score_l, score_r))
        # And now we replace scores by new_scores.
        scores = new_scores
    return scores[0]
This is again O(n^2) time but only O(n) space, because after I compute the games of length 1 I can throw away the games of length 0; after the games of length 2, I can throw away the games of length 1; and so on.
This grade 11 problem has been bothering me since 2010 and I still can't figure out/find a solution even after university.
Problem Description
There is a very unusual street in your neighbourhood. This street
forms a perfect circle, and the circumference of the circle is
1,000,000. There are H (1 ≤ H ≤ 1000) houses on the street. The
address of each house is the clockwise arc-length from the
northern-most point of the circle. The address of the house at the
northern-most point of the circle is 0. You also have special firehoses
which follow the curve of the street. However, you wish to keep the
length of the longest hose you require to a minimum. Your task is to
place k (1 ≤ k ≤ 1000) fire hydrants on this street so that the maximum
length of hose required to connect a house to a fire hydrant is as
small as possible.
Input Specification
The first line of input will be an integer H, the number of houses. The
next H lines each contain one integer, which is the address of that
particular house, and each house address is at least 0 and less than
1,000,000. On the H + 2nd line is the number k, which is the number of
fire hydrants that can be placed around the circle. Note that a fire
hydrant can be placed at the same position as a house. You may assume
that no two houses are at the same address. Note: at least 40% of the
marks for this question have H ≤ 10.
Output Specification
On one line, output the length of hose required
so that every house can connect to its nearest fire hydrant with that
length of hose.
Sample Input
4
0
67000
68000
77000
2
Output for Sample Input
5000
Link to original question
I can't even come up with a brute-force algorithm, since the placement might not be an integer. For example, if the houses are located at 1 and 2, then the hydrant should be placed at 1.5 and the distance would be 0.5.
Here is quick outline of an answer.
First write a function that figures out whether you can cover all of the houses with a given maximum coverage length per hydrant. (The maximum hose will be half that length.) It starts at a house, covers all of the houses it can with one hydrant's span, jumps to the next uncovered house, and so on, counting how many hydrants it needs; it tries each house as the starting point and keeps the minimum. This will be an O(n^2) function.
Second, create a sorted list of the pairwise distances between houses. (You have to consider going both ways around for a single hydrant; you can only worry about the shorter way if you have 2+ hydrants.) The length covered by some hydrant will be one of those distances. This takes O(n^2 log(n)).
Now do a binary search to find the shortest length that can cover all of the houses. This will require O(log(n)) calls to the O(n^2) function that you wrote in the first step.
The end result is a O(n^2 log(n)) algorithm.
And here is working code for all but the parsing logic.
#! /usr/bin/env python

def _find_hoses_needed(circle_length, hose_span, houses):
    # We assume that houses is sorted.
    answers = []  # We can always get away with one hydrant per house.
    for start in range(len(houses)):
        needed = 1
        last_begin = start
        current_house = start + 1 if start + 1 < len(houses) else 0
        while current_house != start:
            pos_begin = houses[last_begin]
            pos_end = houses[current_house]
            # clockwise distance, wrapping past address 0 when needed
            length = pos_end - pos_begin if pos_begin <= pos_end else circle_length - pos_begin + pos_end
            if hose_span < length:
                # We need a new hose.
                needed = needed + 1
                last_begin = current_house
            current_house = current_house + 1
            if len(houses) <= current_house:
                # We looped around the circle.
                current_house = 0
        answers.append(needed)
    return min(answers)

def find_min_hose_coverage(circle_length, hydrant_count, houses):
    houses = sorted(houses)
    # First we find all of the possible answers.
    is_length = set()
    for i in range(len(houses)):
        for j in range(i, len(houses)):
            is_length.add(houses[j] - houses[i])
            is_length.add(houses[i] - houses[j] + circle_length)
    possible_answers = sorted(is_length)
    # Now we do a binary search.
    lower = 0
    upper = len(possible_answers) - 1
    while lower < upper:
        mid = (lower + upper) // 2  # Integer division: we lose the fraction here.
        if hydrant_count < _find_hoses_needed(circle_length, possible_answers[mid], houses):
            # We need a strictly longer coverage to make it.
            lower = mid + 1
        else:
            # Longer is not needed.
            upper = mid
    return possible_answers[lower]

print(find_min_hose_coverage(1000000, 2, [0, 67000, 68000, 77000]) / 2.0)
I have a set of points (x,y).
I need to return the two points with minimal distance.
I use this:
http://www.cs.ucsb.edu/~suri/cs235/ClosestPair.pdf
but I don't really understand how the algorithm works.
Can you explain more simply how the algorithm works, or suggest another idea?
Thanks!
If the number of points is small, you can use the brute-force approach, i.e.:
for each point, find the closest among the other points, and keep track of the minimum distance and the two indices found so far.
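A sketch of that brute force (fine for a small number of points, O(n^2) comparisons; the function name and the use of math.dist from Python 3.8+ are my choices):
from math import dist

def closest_pair_bruteforce(points):
    best = None
    best_pair = None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if best is None or d < best:
                best, best_pair = d, (points[i], points[j])
    return best_pair

print(closest_pair_bruteforce([(0, 0), (7, 6), (2, 20), (5, 8)]))  # ((7, 6), (5, 8))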
If the number of points is large, I think you may find the answer in this thread:
Shortest distance between points algorithm
The solution to the closest-pair problem with the best time complexity, O(n log n), is the divide-and-conquer approach, as mentioned in the document that you have read.
Divide-and-conquer Approach for Closest-Pair Problem
The easiest way to understand this algorithm is to read an implementation of it in a high-level language like Python (because sometimes understanding an algorithm or pseudo-code can be harder than understanding real code):
# closest pairs by divide and conquer
# David Eppstein, UC Irvine, 7 Mar 2002

from __future__ import generators

def closestpair(L):
    def square(x): return x*x
    def sqdist(p,q): return square(p[0]-q[0])+square(p[1]-q[1])

    # Work around ridiculous Python inability to change variables in outer scopes
    # by storing a list "best", where best[0] = smallest sqdist found so far and
    # best[1] = pair of points giving that value of sqdist. Then best itself is never
    # changed, but its elements best[0] and best[1] can be.
    #
    # We use the pair L[0],L[1] as our initial guess at a small distance.
    best = [sqdist(L[0],L[1]), (L[0],L[1])]

    # check whether pair (p,q) forms a closer pair than one seen already
    def testpair(p,q):
        d = sqdist(p,q)
        if d < best[0]:
            best[0] = d
            best[1] = p,q

    # merge two sorted lists by y-coordinate
    def merge(A,B):
        i = 0
        j = 0
        while i < len(A) or j < len(B):
            if j >= len(B) or (i < len(A) and A[i][1] <= B[j][1]):
                yield A[i]
                i += 1
            else:
                yield B[j]
                j += 1

    # Find closest pair recursively; returns all points sorted by y coordinate
    def recur(L):
        if len(L) < 2:
            return L
        split = len(L)//2
        splitx = L[split][0]
        L = list(merge(recur(L[:split]), recur(L[split:])))

        # Find possible closest pair across split line
        # Note: this is not quite the same as the algorithm described in class, because
        # we use the global minimum distance found so far (best[0]), instead of
        # the best distance found within the recursive calls made by this call to recur().
        E = [p for p in L if abs(p[0]-splitx) < best[0]]
        for i in range(len(E)):
            for j in range(1,8):
                if i+j < len(E):
                    testpair(E[i],E[i+j])
        return L

    L.sort()
    recur(L)
    return best[1]

print(closestpair([(0,0),(7,6),(2,20),(12,5),(16,16),(5,8),
                   (19,7),(14,22),(8,19),(7,29),(10,11),(1,13)]))
# prints: ((7, 6), (5, 8))
Taken from: https://www.ics.uci.edu/~eppstein/161/python/closestpair.py
Detailed explanation:
First we define a square helper and a squared Euclidean distance function, to prevent code repetition.
def square(x): return x*x  # Define square function
def sqdist(p,q): return square(p[0]-q[0])+square(p[1]-q[1])  # Define squared Euclidean distance function
Then we are taking the first two points as our initial best guess:
best = [sqdist(L[0],L[1]), (L[0],L[1])]
This is a function definition for comparing the squared distance of a new pair with our current best pair:
def testpair(p,q):
    d = sqdist(p,q)
    if d < best[0]:
        best[0] = d
        best[1] = p,q
def merge(A,B): is just a helper for the algorithm that merges two previously divided halves back into one list, sorted by y-coordinate.
def recur(L): function definition is the actual body of the algorithm. So I will explain this function definition in more detail:
if len(L) < 2:
    return L
With this part, the algorithm terminates the recursion if there are fewer than two points left in the list.
Split the list in half: split = len(L)//2
Create a recursion (by calling function's itself) for each half: L = list(merge(recur(L[:split]), recur(L[split:])))
Then, lastly, these nested loops test each point near the dividing line (the list E of points whose x-coordinate is close to the split) against the next few points (up to 7) in y order:
for i in range(len(E)):
    for j in range(1,8):
        if i+j < len(E):
            testpair(E[i],E[i+j])
As the result of this, if a better pair is found best pair will be updated.
So they solve the problem in many dimensions using a divide-and-conquer approach. Binary search or divide-and-conquer is mega fast. Basically, if you can split a dataset into two halves, and keep doing that until you find some info you want, you are doing it as fast as humanly and computerly possible most of the time.
For this question, it means that we divide the data set of points into two sets, S1 and S2.
All the points are numerical, right? So we have to pick some number where to divide the dataset.
So we pick some number m and say it is the median.
So let's take a look at an example:
(14, 2)
(11, 2)
(5, 2)
(15, 2)
(0, 2)
What's the closest pair?
Well, they all have the same Y coordinate, so we can look at Xs only... X shortest distance is 14 to 15, a distance of 1.
How can we figure that out using divide-and-conquer?
We look at the greatest value of X and the smallest value of X and we choose the median as a dividing line to make our two sets.
Our median is 7.5 in this example.
We then make 2 sets
S1: (0, 2) and (5, 2)
S2: (11, 2) and (14, 2) and (15, 2)
Median: 7.5
We must keep track of the median for every split, because that is actually a vital piece of knowledge in this algorithm. They don't show it very clearly on the slides, but knowing the median value (where you split a set to make two sets) is essential to solving this question quickly.
We keep track of a value they call delta in the algorithm. Ugh I don't know why most computer scientists absolutely suck at naming variables, you need to have descriptive names when you code so you don't forget what the f000 you coded 10 years ago, so instead of delta let's call this value our-shortest-twig-from-the-median-so-far
Since we have the median value of 7.5 let's go and see what our-shortest-twig-from-the-median-so-far is for Set1 and Set2, respectively:
Set1 : shortest-twig-from-the-median-so-far 2.5 (5 to m where m is 7.5)
Set 2: shortest-twig-from-the-median-so-far 3.5 (looking at 11 to m)
So I think the key take-away from the algorithm is that this shortest-twig-from-the-median-so-far is something that you're trying to improve upon every time you divide a set.
Since S1 in our case has 2 elements only, we are done with the left set, and we have 3 in the right set, so we continue dividing:
S2 = { (11,2) (14,2) (15,2) }
What do you do? You make a new median, call it S2-median
S2-median is halfway between 15 and 11... or 13, right? My math may be fuzzy, but I think that's right so far.
So let's look at the shortest-twig-so-far-for-our-right-side-with-median-thirteen ...
15 to 13 is... 2
11 to 13 is .... 2
14 to 13 is ... 1 (!!!)
So our m value or shortest-twig-from-the-median-so-far is improved (where we updated our median from before because we're in a new chunk or Set...)
Now that we've found it we know that (14, 2) is one of the points that satisfies the shortest pair equation. You can then check exhaustively against the points in this subset (15, 11, 14) to see which one is the closer one.
Clearly, (15,2) and (14,2) are the winning pair in this case.
Does that make sense? You must keep track of the median when you cut the set, and keep a new median every time you cut the set, until you have only 2 elements remaining on each side (or in our case 3).
The magic is in the median or shortest-twig-from-the-median-so-far
Thanks for asking this question, I went in not knowing how this algorithm worked but found the right highlighted bullet point on the slide and rolled with it. Do you get it now? I don't know how to explain the median magic other than binary search is f000ing awesome.
http://www.spoj.com/problems/SCALE/
I am trying to do it using recursion but getting TLE.
The tags of the problem say BINARY SEARCH.
How can one do it using binary search?
Thanks in advance.
The first thing to notice here is that if you had two weights of each size instead of one, then the problem would be quite trivial, as we would only need to represent X in its base-3 representation and take the corresponding number of weights. For example, if X=21 then we could take P_3 twice and P_2 once, and put those onto the other scale.
Now let's try to make something similar using the fact that we can add to both scales (including the one where X is placed):
Assume that X <= P_1+P_2+...+P_n, that would mean that X <= P_n + (P_n-1)/2 (easy to understand why). Therefore, X + P_(n-1) + P_(n-2)+...+P_1 < 2*P_n.
(*) What that means is that if we add some of the weights from 1 to n-1 to the same scale as X, then the number on that scale still does not have a 2 in its n-th rightmost digit (it is either 0 or 1).
From now on, assume that "digit" means a digit of a number in its base-3 representation (but it can temporarily become larger than 2 :P ). Now let's denote the total weight of the first scale (where X is placed) as A = X and of the other scale as B = 0; our goal is to make them equal (both A and B will change as we make progress).
Let's iterate through all digits of A from the least significant to the most significant (leftmost). If the current digit index is i and it:
Equals 0: just ignore it and proceed further.
Equals 1: we place weight P_i = 3^(i-1) on scale B.
Equals 2: we add P_i = 3^(i-1) to scale A. Note that this results in a carry into digit i+1.
Equals 3 (yes, this case is possible if both the current and the previous digit were 2): the digit becomes 0; add 1 to the digit at index i+1 and go further (no weight is placed for this digit).
Due to (*), the procedure will run correctly (the n-th digit of A never becomes 2), we use each weight at most once and place it correctly, and the numbers A and B will obviously be equal after the procedure is complete.
Now the second case: X > P_1+P_2+...+P_n. Obviously we cannot balance the scales even if we place all weights on the second scale.
This completes the proof: it shows when a solution is possible and how to place the weights on the two scales to equalise them.
EDIT:
C++ code which I successfully submitted on SPOJ just now https://ideone.com/tbB7Ve
The solution to this problem is quite trivial. The idea is the same as in Yerken's answer, but expressed in a slightly different way:
Only the first weight has a mass not divisible by 3, so the first weight is the only one that affects the mod-3 balance of the two scales:
If X mod 3 == 0, the first weight must not be used
If X mod 3 == 1, the first weight must be on scale B (the currently empty one)
If X mod 3 == 2, the first weight must be on scale A
Subtract both scales by weight(B) --> solution doesn't change, and now weight(A) is divisible by 3 while weight(B) == 0
Set X' = weight(A)/3 and divide every weight Pi by 3 ==> Solution doesn't change, and now it's the same problem with N' = N-1 and X' = (X+1)/3
pseudo-code:
listA <- empty
listB <- empty
for i = 1 to N {
    if (X == 0) break for loop; // done!
    if (X mod 3 == 1) then push i to listB;
    if (X mod 3 == 2) then push i to listA;
    X = (X + 1)/3; // integer division
}
hasSolution <- (X == 0)
C++ code: http://ideone.com/LXLGmE
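For illustration, here is a direct Python transcription of the pseudo-code above (my sketch, not the linked submission). It returns the 1-based weight indices for each scale, or None when no balance is possible within n weights:
def place_weights(x, n):
    list_a, list_b = [], []           # scale A holds X, scale B starts empty
    for i in range(1, n + 1):
        if x == 0:
            break                     # done!
        if x % 3 == 1:
            list_b.append(i)          # put weight P_i = 3^(i-1) on the empty scale
        elif x % 3 == 2:
            list_a.append(i)          # put weight P_i = 3^(i-1) next to X
        x = (x + 1) // 3              # integer division, as in the pseudo-code
    return (list_a, list_b) if x == 0 else None

print(place_weights(21, 4))  # ([3], [2, 4]): 21 + 9 == 3 + 27, both scales weigh 30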