Shortest Path for n entities - algorithm

I'm trying to solve a problem about minimizing the total distance traveled by a group of n entities that have to go through a set of x points in a given order.
The n entities all start at the same position (1,1). I'm then given x points that sit in a queue and have to be "answered" in the correct order, and I want the total distance traveled to be minimal.
My approach so far was to order the entities by increasing distance to the point that is next in line. Then, going from the closest to the furthest, I'd check whether each entity's distance to the next point in line is bigger than its distance to the point that comes afterwards. The closest entity that did not fulfill this condition went to answer. If all entities were closer to the point that came afterwards, I'd reorder them by increasing distance to that later point and send the one furthest from it to answer. Since this is a test problem I'm doing as practice for a competition, I know what the result should be for my test case, and it seems I'm doing this wrong.
How should I design an algorithm that guarantees the total distance is minimal?

The algorithm you've described sounds like a greedy search algorithm. Greedy algorithms are not guaranteed to find optimal solutions except under specific conditions that don't seem to hold here.
This looks like a candidate for a dynamic programming formulation. Alternatively, you could use heuristic search such as the A* search algorithm; I'd go with the latter if I were in the competition. See the link for a description of how the algorithm works, how to implement it, and how you might apply it to your problem.

Although the points need to be visited in order, which bounds the number of possible arrangements, I couldn't think of anything more efficient than the following formulation. Let f(ns, i) represent the optimal arrangement up to the ith point, where ns is the list of the last visited point for each entity that has been assigned at least one point. Then we have at most two kinds of choices: either start a new entity if we haven't run out, or try the current point as the next visit for each existing entity.
Python recursion:
import math

def d(p1, p2):
    return math.sqrt(math.pow(p1[0] - p2[0], 2) + math.pow(p1[1] - p2[1], 2))

def f(ps, n):
    def g(ns, i):
        if i == len(ps):
            return 0
        # start a new entity if we haven't run out
        best = d((0,0), ps[i]) + g(ns[:] + [i], i + 1) if len(ns) < n else float('inf')
        # try the current point as the next visit for each entity
        for entity_idx, point_idx in enumerate(ns):
            _ns = ns[:]
            _ns[entity_idx] = i
            best = min(best, d(ps[point_idx], ps[i]) + g(_ns, i + 1))
        return best
    return d((0,0), ps[0]) + g([0], 1)
Output:
"""
p5
p3
p1
(0,0) p4 p6
p2
"""
points = [(0,1), (0,-1), (0,2), (2,0), (0,3), (3,0)]
print(f(points, 3))  # 7.0
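Since g is a pure function of (ns, i), the same recursion can be memoized; here is a minimal sketch, reusing d from above, with ns turned into a sorted tuple so it can serve as a cache key (sorting merges states that differ only in which entity holds which point, since all entities are interchangeable):

from functools import lru_cache

def f_memo(ps, n):
    @lru_cache(maxsize=None)
    def g(ns, i):  # ns: sorted tuple of each active entity's last point index
        if i == len(ps):
            return 0
        # start a new entity if we haven't run out
        best = d((0, 0), ps[i]) + g(tuple(sorted(ns + (i,))), i + 1) if len(ns) < n else float('inf')
        # try the current point as the next visit for each entity
        for k, point_idx in enumerate(ns):
            _ns = tuple(sorted(ns[:k] + (i,) + ns[k+1:]))
            best = min(best, d(ps[point_idx], ps[i]) + g(_ns, i + 1))
        return best
    return d((0, 0), ps[0]) + g((0,), 1)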

Understanding subtleties of dynamic programming approaches

I understand that there are mainly two approaches to dynamic programming solutions:
Fixed optimal order of evaluation (let's call it the Foo approach): the Foo approach usually goes from subproblems to bigger problems, using results obtained earlier for subproblems to solve the bigger problems and thus avoiding "revisiting" subproblems. CLRS also seems to call this the "Bottom Up" approach.
Without fixed optimal order of evaluation (let's call it the Non-Foo approach): here evaluation proceeds from problems to their subproblems. It ensures that subproblems are not "re-evaluated" (thus ensuring optimality) by maintaining the results of past evaluations in some data structure, and checking whether the result of the problem at hand already exists in that data structure before starting its evaluation. CLRS seems to call this the "Top Down" approach.
This is what is roughly conveyed as one of the main points by this answer.
I have following doubts:
Q1. Memoization or not?
CLRS uses the terms "top down with memoization" approach and "bottom up" approach. I feel both approaches require memory to cache the results of subproblems. But then why does CLRS use the term "memoization" only for the top down approach and not for the bottom up one? After solving some problems with DP, I feel that top down solutions to all problems require memory to cache the results of all subproblems, while that is not the case for bottom up: for some problems, a bottom up solution does not need to cache the results of all subproblems. Q1. Am I correct with this?
For example consider this problem:
Given cost[i] being the cost of ith step on a staircase, give the minimum cost of reaching the top of the floor if:
you can climb either one or two steps
you can start from the step with index 0, or the step with index 1
The top down approach solution is as follows:
class Solution:
    def minCostAux(self, curStep, cost):
        if curStep == -1:  # base case first: avoids indexing minCosts with -1
            return 0
        if self.minCosts[curStep] > -1:
            return self.minCosts[curStep]
        if curStep == 0:
            self.minCosts[curStep] = cost[0]
        else:
            self.minCosts[curStep] = min(self.minCostAux(curStep-2, cost) + cost[curStep],
                                         self.minCostAux(curStep-1, cost) + cost[curStep])
        return self.minCosts[curStep]

    def minCostClimbingStairs(self, cost) -> int:
        cost.append(0)
        self.minCosts = [-1] * len(cost)
        return self.minCostAux(len(cost)-1, cost)
The bottom up approach solution is as follows:
class Solution:
    def minCostClimbingStairs(self, cost) -> int:
        cost.append(0)
        secondLastMinCost = cost[0]
        lastMinCost = min(cost[0]+cost[1], cost[1])
        minCost = lastMinCost
        for i in range(2, len(cost)):
            minCost = min(lastMinCost, secondLastMinCost) + cost[i]
            secondLastMinCost = lastMinCost
            lastMinCost = minCost
        return minCost
Note that the top down approach caches result of all steps in self.minCosts while bottom up approach caches result of only last two steps in variables lastMinCost and secondLastMinCost.
Q2. Do all problems have solutions by both approaches?
I feel no. I came to this opinion after solving this problem:
Find the probability that the knight will not go out of n x n chessboard after k moves, if the knight was initially kept in the cell at index (row, column).
I feel the only way to solve this problem is to find successive probabilities for an increasing number of steps starting from cell (row, column): the probability that the knight will not go out of the chessboard after step 1, then after step 2, then after step 3, and so on. This is the bottom up approach. We cannot do it top down; for example, we cannot start with the kth step and go to the (k-1)th step, then the (k-2)th step, and so on, because:
We cannot know which cells will be reached at the kth step to start with
We cannot ensure that all paths from the kth step will lead back to the initial knight cell position (row, column).
Even one of the top-voted answers gives a DP solution as follows:
class Solution {
    private int[][] dir = new int[][]{{-2,-1},{-1,-2},{1,-2},{2,-1},{2,1},{1,2},{-1,2},{-2,1}};
    private double[][][] dp;

    public double knightProbability(int N, int K, int r, int c) {
        dp = new double[N][N][K + 1];
        return find(N, K, r, c);
    }

    public double find(int N, int K, int r, int c) {
        if (r < 0 || r > N - 1 || c < 0 || c > N - 1) return 0;
        if (K == 0) return 1;
        if (dp[r][c][K] != 0) return dp[r][c][K];
        double rate = 0;
        for (int i = 0; i < dir.length; i++)
            rate += 0.125 * find(N, K - 1, r + dir[i][0], c + dir[i][1]);
        dp[r][c][K] = rate;
        return rate;
    }
}
I feel this is still a bottom up approach, since it starts with the initial knight cell position (r, c) (and hence goes from the 0th step, i.e. no step, to the Kth step) despite the fact that it counts K downwards to 0. So this is the bottom up approach done recursively, not the top down approach. To be precise, this solution does NOT first find:
probability of the knight not going out of the chessboard after K steps starting at cell (r, c)
and then find:
probability of the knight not going out of the chessboard after K-1 steps starting at cell (r, c)
but finds them in reverse / bottom up order: first for K-1 steps and then for K steps.
Also, I did not find any solution in the top-voted leetcode discussions doing it in a truly top down manner, starting from the Kth step and going down to the 0th step ending at the (row, column) cell, instead of starting with the (row, column) cell.
Similarly, we cannot solve the following problem with the bottom up approach but only with the top down approach:
Find the probability that the knight ends up in the cell at index (row, column) after K steps, starting at any initial cell.
Q2. So am I correct in my understanding that not all problems have solutions by both top down and bottom up approaches? Or am I just overthinking unnecessarily, and both problems above can indeed be solved with both approaches?
PS: I indeed seem to have overthought here: the knightProbability() function above is indeed top down, and I misinterpreted it as explained in detail above 😑. I have kept this explanation for reference, since there are already some answers below, and as a hint of how confusion / misinterpretation might happen, so that I will be more cautious in the future. Sorry if this long explanation caused you some confusion / frustration. Regardless, the main question still holds: does every problem have bottom up and top down solutions?
Q3. Bottom up approach recursively?
I am pondering whether bottom up solutions for all problems can also be implemented recursively. After trying to do so for other problems, I came to the following conclusion:
We can implement bottom up solutions for such problems recursively, only that the recursion won't be meaningful, but kind of hacky.
For example, below is recursive bottom up solution for minimum cost climbing stairs problem mentioned in Q1:
from typing import List

class Solution:
    def minCostAux(self, step_i, cost):
        if self.minCosts[step_i] != -1:
            return self.minCosts[step_i]
        self.minCosts[step_i] = min(self.minCostAux(step_i-1, cost),
                                    self.minCostAux(step_i-2, cost)) + cost[step_i]
        if step_i == len(cost)-1:  # returning from a non-base case gives a sense of
                                   # not-so-meaningful recursion. Also, base cases
                                   # usually appear at the beginning, before the
                                   # recursive call. Or should we call it a
                                   # "ceiling condition"?
            return self.minCosts[step_i]
        return self.minCostAux(step_i+1, cost)

    def minCostClimbingStairs(self, cost: List[int]) -> int:
        cost.append(0)
        self.minCosts = [-1] * len(cost)
        self.minCosts[0] = cost[0]
        self.minCosts[1] = min(cost[0]+cost[1], cost[1])
        return self.minCostAux(2, cost)
Is my quoted understanding correct?
First, context.
Every dynamic programming problem can be solved without dynamic programming using a recursive function. Generally this will take exponential time, but you can always do it. At least in principle. If the problem can't be written that way, then it really isn't a dynamic programming problem.
The idea of dynamic programming is that if I already did a calculation and have a saved result, I can just use that saved result instead of doing the calculation again.
The whole top-down vs bottom-up distinction refers to the naive recursive solution.
In a top-down approach your call stack looks like the naive version except that you make a "memo" of what the recursive result would have given. And then the next time you short-circuit the call and return the memo. This means you can always, always, always solve dynamic programming problems top down. There is always a solution that looks like recursion+memoization. And that solution by definition is top down.
In a bottom up approach you start with what some of the bottom levels would have been and build up from there. Because you know the structure of the data very clearly, frequently you are able to know when you are done with data and can throw it away, saving memory. Occasionally you can filter data on non-obvious conditions that are hard for memoization to duplicate, making bottom up faster as well. For a concrete example of the latter, see Sorting largest amounts to fit total delay.
Start with your summary.
I strongly disagree with your thinking about the distinction in terms of the optimal order of evaluations. I've encountered many cases with top down where optimizing the order of evaluations causes memoization to start hitting sooner, making the code run faster. Conversely, while bottom up certainly picks a convenient order of operations, it is not always the optimal one.
Now to your questions.
Q1: Correct. Bottom up often knows when it is done with data, top down does not. Therefore bottom up gives you the opportunity to delete data when you are done with it. And you gave an example where this happens.
As for why only one is called memoization, it is because memoization is a specific technique for optimizing a function, and you get top down by memoizing recursion. While the data stored in dynamic programming may match up to specific memos in memoization, you aren't using the memoization technique.
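As a generic illustration of the technique (not tied to any problem above), memoizing a recursive function in Python is one decorator away; the cache plays exactly the role of the "memos" described here:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # plain recursion, but each fib(k) is computed once and then reused
    return n if n < 2 else fib(n - 1) + fib(n - 2)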
Q2: I do not know.
I've personally found cases where I was solving a problem over some complex data structure and simply couldn't find a bottom up approach. Maybe I simply wasn't clever enough, but I don't believe that a bottom up approach always exists to be found.
But top down is always possible. Here is how to do it in Python for the example that you gave.
First the naive recursive solution looks like this:
def prob_in_board(n, i, j, k):
    if i < 0 or j < 0 or n <= i or n <= j:
        return 0
    elif k <= 0:
        return 1
    else:
        moves = [
            (i+1, j+2), (i+1, j-2),
            (i-1, j+2), (i-1, j-2),
            (i+2, j+1), (i+2, j-1),
            (i-2, j+1), (i-2, j-1),
        ]
        answer = 0
        for next_i, next_j in moves:
            answer += prob_in_board(n, next_i, next_j, k-1) / len(moves)
        return answer

print(prob_in_board(8, 3, 4, 7))
And now we just memoize.
def prob_in_board_memoized(n, i, j, k, cache=None):
    if cache is None:
        cache = {}
    if i < 0 or j < 0 or n <= i or n <= j:
        return 0
    elif k <= 0:
        return 1
    elif (i, j, k) not in cache:
        moves = [
            (i+1, j+2), (i+1, j-2),
            (i-1, j+2), (i-1, j-2),
            (i+2, j+1), (i+2, j-1),
            (i-2, j+1), (i-2, j-1),
        ]
        answer = 0
        for next_i, next_j in moves:
            answer += prob_in_board_memoized(n, next_i, next_j, k-1, cache) / len(moves)
        cache[(i, j, k)] = answer
    return cache[(i, j, k)]

print(prob_in_board_memoized(8, 3, 4, 7))
This solution is top down. If it seems otherwise to you, then you do not correctly understand what is meant by top-down.
I found your question (does every dynamic programming problem have bottom up and top down solutions?) very interesting. That's why I'm adding another answer to continue the discussion about it.
To answer the question in its generic form, I need to formulate it more precisely with math. First, I need to define precisely what a dynamic programming problem is. Then, I need to define precisely what a bottom up solution is and what a top down solution is.
I will try to put down some definitions, but I think they are not the most generic ones. A really generic definition would need heavier math.
First, define a state space S of dimension d as a subset of Z^d (Z denotes the set of integers).
Let f: S -> R be a function that we are interested in calculating for a given point P of the state space S (R denotes the set of real numbers).
Let t: S -> S^k be a transition function (it maps points of the state space to sets of points of the state space).
Consider the problem of calculating f at a point P in S.
We can consider it a dynamic programming problem if there is a function g: R^k -> R such that f(P) = g(f(t(P)[0]), f(t(P)[1]), ..., f(t(P)[k-1])) (a problem can be solved using only its subproblems) and t defines a directed graph that is not a tree (subproblems have some overlap).
Consider the graph defined by t. We know it has a source (the point P) and some sinks for which we know the value of f (the base cases). We can define a top down solution for the problem as a depth first search through this graph that starts at the source and calculates f for each vertex at its return time (when the depth first search of its whole subgraph is completed) using the transition function. On the other hand, a bottom up solution for the problem can be defined as a multi-source breadth first search through the transposed graph that starts at the sinks and finishes at the source vertex, calculating f at each visited vertex using the previously visited layer.
The problem is: to navigate through the transposed graph, for each point you visit you need to know which points transition to it in the original graph. In math terms, for each point Q in the transition graph, you need to know the set J of points such that for each point Pi in J, t(Pi) contains Q, and there is no other point Pr in the state space outside of J such that t(Pr) contains Q. Notice that a trivial way to obtain this is to scan the whole state space for each point Q.
The conclusion is that a bottom up solution as defined here always exists, but it only pays off if you have a way to navigate through the transposed graph at least as efficiently as through the original graph. This depends essentially on the properties of the transition function.
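To make these definitions concrete, here is a minimal generic top down sketch in Python, with f, t, g, and the base cases passed in as parameters (all names are mine, purely for illustration):

def top_down(P, t, g, base):
    # Compute f(P) where f(s) = g([f(c) for c in t(s)]), with f fixed on the
    # sink states listed in `base`. This is a depth first search with
    # memoization over the implicit graph defined by t.
    memo = dict(base)  # the sinks start out already "solved"
    def f(s):
        if s not in memo:
            memo[s] = g([f(c) for c in t(s)])
        return memo[s]
    return f(P)

The bottom up counterpart would instead walk from the sinks of this graph upward, which is exactly where the predecessor-enumeration problem described above comes in.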
In particular, for the leetcode problem you mentioned, the transition function is the function that, for each point on the chessboard, gives all the points the knight can go to. A very special property of this function is that it is symmetric: if the knight can go from A to B, then it can also go from B to A. So, given a certain point P, you can know which points the knight can go to exactly as efficiently as which points the knight can come from. This is the property that guarantees there exists a bottom up approach as efficient as the top down approach for this problem.
For the leetcode question you mentioned, the top down approach is like the following:
Let P(x, y, k) be the probability that the knight is at square (x, y) at the kth step. Look at all squares the knight could have come from (you can get them in O(1); just look at the board with pen and paper and derive the formulas for the different cases, like knight in the corner, knight on the border, knight in a central region, etc.). Let them be (x1, y1), ..., (xj, yj). For each of these squares, what is the probability that the knight jumps to (x, y)? Considering that it can go off the board, it's always 1/8. So:
P(x, y, k) = (P(x1, y1, k-1) + ... + P(xj, yj, k-1))/8
The base case is k = 0:
P(x, y ,0) = 1 if (x, y) = (x_start, y_start) and P(x, y, 0) = 0 otherwise.
You iterate over all n^2 squares and use the recurrence formula to calculate P(x, y, k). Many times you will need solutions you have already calculated for k-1, so you can benefit a lot from memoization.
In the end, the final solution will be the sum of P(x, y, k) over all squares of the board.
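For concreteness, a table-filling sketch of that recurrence (the function name is mine; it propagates the probability mass forward one step at a time, which is the same as summing over predecessors at step k-1):

def knight_on_board_probability(n, k, start_r, start_c):
    moves = [(-2, -1), (-1, -2), (1, -2), (2, -1),
             (2, 1), (1, 2), (-1, 2), (-2, 1)]
    # P[r][c] = probability that the knight is at (r, c) and still on the board
    P = [[0.0] * n for _ in range(n)]
    P[start_r][start_c] = 1.0
    for _ in range(k):
        nxt = [[0.0] * n for _ in range(n)]
        for r in range(n):
            for c in range(n):
                if P[r][c]:
                    for dr, dc in moves:
                        if 0 <= r + dr < n and 0 <= c + dc < n:
                            nxt[r + dr][c + dc] += P[r][c] / 8
        P = nxt
    return sum(map(sum, P))  # sum of P(x, y, k) over all squares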

Finding algorithm that minimizes a cost function

I have a restroom that I need to place at some point. I want the restroom's placement to minimize the total distance people have to travel to get there.
So I have x apartments, with n_i people living in apartment a_i: the apartments are a_1, a_2, a_3, ..., a_x and the number of people in a_1 is n_1, in a_2 is n_2, etc. No two apartments can be at the same location, and each apartment has a positive number of people.
So I know the distance between an apartment a_1 and the proposed bathroom, placed at a, would be |a_1 - a|.
MY WORKING:
I defined a cost function, C(a) = SUM[from i = 1 to x] (n_i)|a_i - a|. I want to find the location a that minimizes this cost function, given two arrays: one with the locations of the apartments and one with the number of people in each apartment. I want my algorithm to run in O(n) time.
I was thinking of representing this as a graph and using MSTs or Dijkstra's, but that would not meet the O(n) runtime. Clearly there must be something I can do without graphs, but I am unsure what.
My understanding of your problem:
You have a one-dimensional line with points a1, ..., an. Each point has a value n1, ..., nn, and you need to pick a point a that minimizes the cost function
SUM[from i = 1 to x] (n_i)|a_i - a|.
Let's assume our input a1, ..., an is sorted.
Our strategy will be a sweep from left to right, calculating possible a on the way.
Things we will keep track of:
total_n : the total number of people
left_n : the number of people living to the left or at our current position
right_n : the number of people living to the right of our current position
a: our current position
Start at a = a1 and calculate:
C(a1) (directly, from the definition)
left_n = n1
right_n = total_n - left_n
Now we consider what happens to the sum if we move our restroom to the right 1 step. The people on the left get 1 step further away, but the people on the right get 1 step closer.
We can say that C(a+1) = C(a) +left_n -right_n
If the range an-a1 is fairly small, we can use this and just step through the range using this formula to update the sum. Note that when this sum starts increasing we have gone too far and can safely stop.
However, if the apartments are very far apart we cannot step 1 by 1 unit. We need instead to step apartment by apartment. Note
C(a[i]) = C(a[i-1]) + left_n*(a[i]-a[i-1]) - right_n*(a[i]-a[i-1])
If at any point C(a[i]) > C(a[i-1]), we know that the correct position of the restroom is somewhere between a[i-1] and a[i].
We can calculate that position; let's call it x.
The cost at x is C(a[i-1]) + left_n*(x-a[i-1]) - right_n*(x-a[i-1]), and we want to minimize this. Note that everything but x is a known value.
We can simplify to
f(x) = C(a[i-1]) + left_n*x - left_n*a[i-1] - right_n*x + right_n*a[i-1]
Constant terms cannot affect our decision, so we are actually looking to minimize
f(x) = x*(left_n - right_n)
We see that if left_n < right_n we want the restroom at a[i], but if left_n > right_n we want it at a[i-1].
We need to do this calculation at most once per apartment, so the running time is O(n).
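A runnable sketch of this sweep, assuming the apartments are already sorted by position (function and variable names are mine, not from the question):

def best_restroom_position(a, n):
    # a: sorted apartment positions; n: people per apartment.
    # Sweep left to right, updating C incrementally as described above.
    total_n = sum(n)
    cost = sum(ni * (ai - a[0]) for ai, ni in zip(a, n))  # C(a[0]), computed directly
    best_cost, best_pos = cost, a[0]
    left_n = n[0]
    for i in range(1, len(a)):
        step = a[i] - a[i - 1]
        right_n = total_n - left_n
        cost += step * (left_n - right_n)  # C(a[i]) from C(a[i-1])
        if cost < best_cost:
            best_cost, best_pos = cost, a[i]
        left_n += n[i]
    return best_pos, best_cost

This is the weighted median in disguise: the cost stops decreasing exactly when left_n first reaches at least half of total_n.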

What is a greedy algorithm for this problem that is minimally optimal + proof?

The details are a bit cringe, fair warning lol:
I want to set up meters on the floor of my building to catch someone; assume my floor is a number line from 0 to length L. The specific type of meter I am designing has a radius of detection of 4.7 meters in the -x and +x directions (a detection diameter of 9.4 meters). I want to set them up in such a way that if the person I am trying to find steps foot anywhere on the floor, I will know. However, I can't just set up a meter anywhere (it may annoy other residents); therefore, there are only n valid locations where I can set up a meter. Additionally, these meters are expensive and time-consuming to make, so I would like to use as few as possible.
For simplicity, you can assume the meter has 0 width and that each valid location is just a point on the aforementioned number line. What is a greedy algorithm that places as few meters as possible while being able to detect the entire hallway of length L, or, if detecting the entire hallway is not possible, outputs false for the set of n locations I have (and, if it can't detect the whole hallway, still uses as few meters as possible while attempting to do so)?
Edit: some clarification on being able to detect the entire hallway or not
Given:
L (hallway length)
a list of N valid positions to place a meter (p_0 ... p_N-1) of radius 4.7
You can determine in O(N) either a valid and minimal ("good") covering of the whole hallway, or a proof that no such covering exists given the constraints, as follows (pseudo-code):
// total = total length
// start = current starting position, initially 0
// possible = list of possible meter positions
// placed = list of (optimal) meter placements, initially empty
boolean solve(float total, float start, List<Float> possible, List<Float> placed):
    if (total - start <= 0):
        return true; // problem solved with no additional meters - woo!
    else:
        Float next = extractFurthestWithinRange(start, possible, 4.7);
        if (next == null):
            return false; // no way to cover end of hall: report failure
        else:
            placed.add(next); // placement decided
            return solve(total, next + 4.7, possible, placed);
Where extractFurthestWithinRange(float start, List<Float> candidates, float range) returns null if there are no candidates within range of start, or returns the last position p in candidates such that p <= start + range -- and also removes p, and all candidates c such that p >= c.
The key here is that, by always choosing to place a meter at the next position that a) leaves no gaps and b) is furthest from the previously placed position, we are simultaneously creating a valid covering (= no gaps) and an optimal covering (= no valid covering could have used fewer meters, because our gaps are already as wide as possible). At each iteration, we either completely solve the problem or take a greedy bite that reduces it to a (guaranteed) smaller problem.
Note that there can be other optimal coverings with different meter positions, but they will use the exact same number of meters as those returned from this pseudo-code. For example, if you adapt the code to start from the end of the hallway instead of from the start, the covering would still be good, but the gaps could be rearranged. Indeed, if you need the lexicographically minimal optimal covering, you should use the adapted algorithm that places meters starting from the end:
// remaining = length (starts at hallway length)
// possible = positions to place meters at, starting with those closest to the end of the hallway
// placed = positions where meters have been placed
boolean solve(float remaining, List<Float> possible, Queue<Float> placed):
    if (remaining <= 0):
        return true; // problem solved with no additional meters - woo!
    else:
        // extracts points p up to and including p such that p >= remaining - range
        Float next = extractFurthestWithinRange2(remaining, possible, 4.7);
        if (next == null):
            return false; // no way to cover start of hall: report failure
        else:
            placed.add(next); // placement decided
            return solve(next - 4.7, possible, placed);
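For reference, a runnable Python sketch of the first (left-to-right) variant; the names are mine and the pseudo-code above remains the authoritative description:

def cover_hallway(L, positions, radius=4.7):
    # Greedy cover: among the meters whose range reaches the first uncovered
    # point, pick the one extending coverage furthest right. Returns the list
    # of chosen positions, or None if no full covering exists.
    # Assumes `positions` is sorted in ascending order.
    placed = []
    covered = 0.0  # the hallway is covered up to here
    i = 0
    while covered < L:
        best = None
        # scan the candidates whose detection interval reaches `covered`
        while i < len(positions) and positions[i] - radius <= covered:
            best = positions[i]  # sorted input: the last one reaches furthest
            i += 1
        if best is None or best + radius <= covered:
            return None  # a gap at `covered` that no meter can close
        placed.append(best)
        covered = best + radius
    return placed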
To prove that your solution is optimal if it is found, you merely have to prove that it finds the lexicographically last optimal solution.
And you do that by induction on the size of the lexicographically last optimal solution. The case of a zero length floor and no monitor is trivial. Otherwise you demonstrate that you found the first element of the lexicographically last solution. And covering the rest of the line with the remaining elements is your induction step.
Technical note: for this to work, you have to be allowed to place monitoring stations outside of the line.

Find minimum distance between points

I have a set of points (x,y).
I need to return the two points with minimal distance between them.
I am using this:
http://www.cs.ucsb.edu/~suri/cs235/ClosestPair.pdf
but I don't really understand how the algorithm works.
Can you explain more simply how it works, or suggest another idea?
Thanks!
If the number of points is small, you can use the brute force approach i.e:
for each point, find the closest point among the other points and keep the minimum distance found so far together with the current two indices.
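A sketch of that brute force in Python, for reference (O(n^2) over all pairs):

from itertools import combinations

def closest_pair_bruteforce(points):
    best_pair, best_d2 = None, float('inf')
    for p, q in combinations(points, 2):
        # squared distance is enough for comparisons (no sqrt needed)
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if d2 < best_d2:
            best_d2, best_pair = d2, (p, q)
    return best_pair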
If the number of points is large, I think you may find the answer in this thread:
Shortest distance between points algorithm
The solution to the Closest Pair problem with minimal time complexity, O(n log n), is the divide-and-conquer approach, as mentioned in the document you have read.
Divide-and-conquer Approach for Closest-Pair Problem
The easiest way to understand this algorithm is to read an implementation of it in a high-level language (sometimes understanding an algorithm or pseudo-code is harder than understanding real code), like Python:
# closest pairs by divide and conquer
# David Eppstein, UC Irvine, 7 Mar 2002
from __future__ import generators

def closestpair(L):
    def square(x): return x*x
    def sqdist(p,q): return square(p[0]-q[0])+square(p[1]-q[1])

    # Work around ridiculous Python inability to change variables in outer scopes
    # by storing a list "best", where best[0] = smallest sqdist found so far and
    # best[1] = pair of points giving that value of sqdist. Then best itself is never
    # changed, but its elements best[0] and best[1] can be.
    #
    # We use the pair L[0],L[1] as our initial guess at a small distance.
    best = [sqdist(L[0],L[1]), (L[0],L[1])]

    # check whether pair (p,q) forms a closer pair than one seen already
    def testpair(p,q):
        d = sqdist(p,q)
        if d < best[0]:
            best[0] = d
            best[1] = p,q

    # merge two sorted lists by y-coordinate
    def merge(A,B):
        i = 0
        j = 0
        while i < len(A) or j < len(B):
            if j >= len(B) or (i < len(A) and A[i][1] <= B[j][1]):
                yield A[i]
                i += 1
            else:
                yield B[j]
                j += 1

    # Find closest pair recursively; returns all points sorted by y coordinate
    def recur(L):
        if len(L) < 2:
            return L
        split = len(L)//2
        splitx = L[split][0]
        L = list(merge(recur(L[:split]), recur(L[split:])))
        # Find possible closest pair across split line
        # Note: this is not quite the same as the algorithm described in class, because
        # we use the global minimum distance found so far (best[0]), instead of
        # the best distance found within the recursive calls made by this call to recur().
        E = [p for p in L if abs(p[0]-splitx) < best[0]]
        for i in range(len(E)):
            for j in range(1,8):
                if i+j < len(E):
                    testpair(E[i],E[i+j])
        return L

    L.sort()
    recur(L)
    return best[1]

closestpair([(0,0),(7,6),(2,20),(12,5),(16,16),(5,8),
             (19,7),(14,22),(8,19),(7,29),(10,11),(1,13)])
# returns: (7,6),(5,8)
Taken from: https://www.ics.uci.edu/~eppstein/161/python/closestpair.py
Detailed explanation:
First we define a squared Euclidean distance function, to prevent code repetition:
def square(x): return x*x  # square helper
def sqdist(p,q): return square(p[0]-q[0])+square(p[1]-q[1])  # squared Euclidean distance
Then we are taking the first two points as our initial best guess:
best = [sqdist(L[0],L[1]), (L[0],L[1])]
This is a function definition for comparing Euclidean distances of next pair with our current best pair:
def testpair(p,q):
    d = sqdist(p,q)
    if d < best[0]:
        best[0] = d
        best[1] = p,q
def merge(A,B): is just a helper that merges two halves, each already sorted by y-coordinate, back into one sorted list.
def recur(L): function definition is the actual body of the algorithm. So I will explain this function definition in more detail:
if len(L) < 2:
    return L
With this part, the algorithm terminates the recursion when fewer than two elements/points are left in the list of points.
Split the list in half: split = len(L)//2
Create a recursion (by calling function's itself) for each half: L = list(merge(recur(L[:split]), recur(L[split:])))
Then, lastly, these nested loops test each point of E (the points lying within best[0] of the split line, in y order) against its next seven neighbors:
for i in range(len(E)):
    for j in range(1,8):
        if i+j < len(E):
            testpair(E[i],E[i+j])
As a result, if a better pair is found, the best pair is updated.
So they solve the problem in many dimensions using a divide-and-conquer approach. Binary search or divide-and-conquer is mega fast. Basically, if you can split a dataset into two halves, and keep doing that until you find some info you want, you are doing it about as fast as humanly and computerly possible most of the time.
For this question, it means that we divide the data set of points into two sets, S1 and S2.
All the points are numerical, right? So we have to pick some number where to divide the dataset.
So we pick some number m and say it is the median.
So let's take a look at an example:
(14, 2)
(11, 2)
(5, 2)
(15, 2)
(0, 2)
What's the closest pair?
Well, they all have the same Y coordinate, so we can look at the Xs only... the shortest X distance is from 14 to 15, a distance of 1.
How can we figure that out using divide-and-conquer?
We look at the greatest value of X and the smallest value of X and we choose the median as a dividing line to make our two sets.
Our median is 7.5 in this example.
We then make 2 sets
S1: (0, 2) and (5, 2)
S2: (11, 2) and (14, 2) and (15, 2)
Median: 7.5
We must keep track of the median for every split, because that is actually a vital piece of knowledge in this algorithm. They don't show it very clearly on the slides, but knowing the median value (where you split a set to make two sets) is essential to solving this question quickly.
We keep track of a value they call delta in the algorithm. Ugh, I don't know why most computer scientists absolutely suck at naming variables; you need descriptive names when you code so you don't forget what the f000 you coded 10 years ago. So instead of delta, let's call this value our-shortest-twig-from-the-median-so-far.
Since we have the median value of 7.5, let's go and see what our-shortest-twig-from-the-median-so-far is for Set1 and Set2, respectively:
Set1: shortest-twig-from-the-median-so-far = 2.5 (5 to m, where m is 7.5)
Set2: shortest-twig-from-the-median-so-far = 3.5 (looking at 11 to m)
So I think the key take-away from the algorithm is that this shortest-twig-from-the-median-so-far is something that you're trying to improve upon every time you divide a set.
Since S1 in our case has 2 elements only, we are done with the left set, and we have 3 in the right set, so we continue dividing:
S2 = { (11,2) (14,2) (15,2) }
What do you do? You make a new median, call it S2-median
S2-median is halfway between 15 and 11... or 13, right? My math may be fuzzy, but I think that's right so far.
So let's look at the shortest-twig-so-far-for-our-right-side-with-median-thirteen ...
15 to 13 is... 2
11 to 13 is .... 2
14 to 13 is ... 1 (!!!)
So our m value, or shortest-twig-from-the-median-so-far, is improved (we updated our median from before because we're in a new chunk or Set...)
Now that we've found it we know that (14, 2) is one of the points that satisfies the shortest pair equation. You can then check exhaustively against the points in this subset (15, 11, 14) to see which one is the closer one.
Clearly, (15,2) and (14,2) are the winning pair in this case.
Does that make sense? You must keep track of the median when you cut the set, and keep a new median every time you cut the set, until you have only 2 elements remaining on each side (or in our case 3).
The magic is in the median or shortest-twig-from-the-median-so-far
Thanks for asking this question, I went in not knowing how this algorithm worked but found the right highlighted bullet point on the slide and rolled with it. Do you get it now? I don't know how to explain the median magic other than binary search is f000ing awesome.

Algorithm to establish ordering amongst a set of items

I have a set of students (referred to as items in the title for generality). Amongst these students, some have a reputation for being rambunctious. We are told about a set of hate relationships of the form 'i hates j'. 'i hates j' does not imply 'j hates i'. We are supposed to arrange the students in rows (front-most row numbered 1) such that if 'i hates j' then i is put in a strictly lower-numbered row than j (in other words, in some row in front of j's row) so that i can't throw anything at j (turning back is not allowed). What would be an efficient algorithm to find the minimum number of rows needed (rows need not all have the same number of students)?
We will make the following assumptions:
1) If we model this as a directed graph, there are no cycles in the graph. The most basic cycle would be 'i hates j' and 'j hates i' at the same time, which we rule out, because otherwise I think the ordering would become impossible.
2) Every student in the group is hated by at least one other student OR hates at least one other student. Of course, there will be students who are both hated by some and who in turn hate other students. This means that there are no stray students who aren't part of the graph.
Update: I have already thought of constructing a directed graph with i --> j if 'i hates j' and doing a topological sort. However, the general topological sort suits better when I have to line all the students up in a single line. Since there is the variation with rows here, I am trying to figure out how to factor that change into the topological sort so it gives me what I want.
When you answer, please state the complexity of your solution. If anybody is giving code, I don't mind the language; I'd prefer Java, but of course any other language is just as fine.
JFYI, this is not any kind of homework (I am not a student btw :)).
It sounds to me that you need to investigate topological sorting.
This problem is basically the longest-path-in-a-directed-graph problem in another guise. The number of rows is the number of nodes on that path (number of edges + 1).
Assuming the graph is acyclic, the solution is topological sort.
Acyclic is a bit stronger than your assumption 1. Not only are A -> B and B -> A together invalid; so are A -> B, B -> C, C -> A, and any cycle of any length.
HINT: the question asks how many rows are needed, not which student goes in which row. The answer to the question is the length of the longest path.
It's from project management theory (or scheduling theory, I don't know the exact term). There the task is about ordering jobs (a vertex is a job, an arc is a job-order relationship).
Obviously we have a connected oriented graph without cycles. There is an arc from vertex a to vertex b if and only if a hates b. Let's assume there is a source vertex (without incoming arcs) and a destination vertex (without outgoing arcs). If that is not the case, just add imaginary ones. Now we want to find the length of the longest path from source to destination (it will be the number of rows - 1, but mind the imaginary vertices).
We will define a vertex's rank (r[v]) as the number of arcs on the longest path between the source and this vertex v. Obviously we want to know r[destination]. Algorithm for finding ranks:
0) r_0[v] := 0 for all vertices v
repeat
    t) r_t[end(j)] := max( r_{t-1}[end(j)], r_{t-1}[start(j)] + 1 ) for all arcs j
until r_{t+1}[end(j)] = r_t[end(j)] for all arcs j  // i.e. no changes on this iteration
On each step at least one vertex increases its rank, so in this form the complexity is O(n^3).
By the way, this algorithm also gives you student distribution among rows. Just group students by their respective ranks.
Edit: Another code with the same idea. Possibly it is better understandable.
# Python
# V is a list of vertex indices, let it be something like V = range(N)
# source has index 0, destination has index N-1
# E is a list of edges, i.e. tuples of the form (start vertex, end vertex)
R = [0] * len(V)
changes = True
while changes:
    changes = False
    for e in E:
        if R[e[1]] < R[e[0]] + 1:
            changes = True
            R[e[1]] = R[e[0]] + 1
# The answer is derived from the value of R[N-1]
Of course this is the simplest implementation. It can be optimized, and time estimate can be better.
Edit 2: an obvious optimization is to update only vertices adjacent to those that were updated on the previous step, i.e. introduce a queue of vertices whose rank was updated. Also, the edges should be stored as adjacency lists. With this optimization the complexity is O(N^2). Indeed, each vertex may appear in the queue at most rank times, and a vertex's rank never exceeds N, the number of vertices. Therefore the total number of algorithm steps will not exceed O(N^2).
Essentially, the important thing in assumption #1 is that there must not be any cycles in this graph. If there are cycles, you can't solve this problem.
I would start by seating all of the students that do not hate any other students in the back row. Then you can seat the students who hate these students in the next row and etc.
The number of rows is the length of the longest path in the directed graph, plus one. As a limit case, if there is no hate relationship everyone can fit on the same row.
To allocate the rows, put everyone who is not hated by anyone else on row one. These are the "roots" of your graph. Everyone else is put on row N + 1, where N is the length of the longest path from any of the roots to that person (this path has length at least one).
A simple O(N^3) algorithm is the following:
S = set of students
for s in S: s.row = -1      # initialize row field
rownum = 0                  # start from first row below
flag = true                 # when to finish
while (flag):
    rownum = rownum + 1     # proceed to next row
    flag = false
    for s in S:
        if (s.row != -1): continue  # already allocated
        ok = true
        for q in S:
            # Check if there is a student q who will sit
            # on this or a later row and who hates s
            if ((q.row == -1 or q.row == rownum) and s hated by q):
                ok = false; break
        if (ok):            # can put s here
            s.row = rownum
            flag = true
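For comparison, the same allocation can be done in O(V + E) with a Kahn-style topological pass; a sketch in Python (all names are mine):

from collections import deque

def row_numbers(num_students, hates):
    # `hates` is a list of (i, j) pairs meaning i hates j,
    # so i must sit in a strictly lower-numbered row than j.
    adj = [[] for _ in range(num_students)]
    indeg = [0] * num_students
    for i, j in hates:
        adj[i].append(j)
        indeg[j] += 1
    row = [1] * num_students  # the "roots" sit on row 1
    queue = deque(v for v in range(num_students) if indeg[v] == 0)
    processed = 0
    while queue:
        v = queue.popleft()
        processed += 1
        for w in adj[v]:
            row[w] = max(row[w], row[v] + 1)  # longest path from a root
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if processed < num_students:
        raise ValueError("cycle detected: no valid arrangement")
    return row  # the number of rows needed is max(row)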
Simple answer = 1 row.
Put all students in the same row.
Actually that might not solve the question as stated - lesser row, rather than equal row...
Put all students in row 1
For each hate relation, put the hated student in a row behind the hating student
Iterate till you have no activity, or iterate Num(relations) times.
But I'm sure there are better algorithms - look at acyclic graphs.
Construct a relationship graph where 'i hates j' gives a directed edge from i to j, so the end result is a directed graph. It should be a DAG; otherwise there is no solution, as circular hate relationships cannot be resolved.
Now simply do a DFS, and in the post-node callback (once the DFS of all the children is done, just before returning from the DFS call for this node) check the row numbers of all the children and assign this node's row number as the max row of the children + 1. In case there is someone who doesn't hate anyone (a node with an empty adjacency list), simply assign him row 0.
Once all the nodes are processed, reverse the row numbers. This should be easy, as it's just about finding the max and reassigning each row number as max minus the already-assigned row number.
Here is the sample code.
postNodeCb( graph g, int node )
{
    if ( /* No adj list */ )
        row[ node ] = 0;
    else
        row[ node ] = max( row number of all children ) + 1;
}

main()
{
    ...
    for ( int i = 0; i < NUM_VER; i++ )
        if ( !visited[ i ] )
            graphTraverseDfs( g, i );
    ...
}
