Dynamic convex hull trick - algorithm

I was reading about interesting algorithms in my free time and I just discovered the convex hull trick, with which we can compute the maximum of several lines in the plane at a given x coordinate. I found this article:
http://wcipeg.com/wiki/Convex_hull_trick
Here the author says that the dynamic version of this algorithm runs in logarithmic time, but there is no proof. When we insert a line, we test some of its neighbors, but I don't understand how it can be O(log N) when a single insertion can end up testing all N lines. Is the claim correct, or have I missed something?
UPDATE: this question has been answered; the interesting ones are the remaining questions below.
How can we delete?
I mean... if we delete a line, we will probably need the previously removed ones to rebuild the hull, but the algorithm deletes all the unnecessary lines when inserting a new one.
Is there another approach to solve problems like the above (or similar ones, for example handling queries like insert, delete, find the maximum at an x coordinate or over a given range, etc.)?
Thank you in advance!

To answer your first question, "How can insertion be O(log n)?": you can indeed end up checking O(n) neighbors, but note that you only need to check an extra neighbor when you discover that you need to do a delete operation.
The point is that if you are going to insert n new lines, then you can do the delete operation at most n times in total. Therefore the total amount of extra work is at most O(n), in addition to the O(log n) work per line needed to find its position in the sorted data structure.
So the total effort for inserting all n lines is O(n) + O(n log n) = O(n log n), or in other words, amortized O(log n) per line.
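For intuition, here is a minimal Python sketch of the simpler variant where lines arrive in increasing slope order and the envelope is kept on a stack (not the fully dynamic version the article describes; the names below are just illustrative). It shows the same amortized argument: each line is pushed once and popped at most once, so all deletions together cost O(n).

def build_upper_envelope(lines):
    # 'lines' are (slope, intercept) pairs, assumed sorted by strictly
    # increasing slope; the returned stack is the upper envelope used
    # for maximum queries.
    hull = []

    def bad(l1, l2, l3):
        # l2 never achieves the maximum if l1 and l3 already cover it:
        # the intersection of l1 and l3 lies on or above l2.
        (m1, b1), (m2, b2), (m3, b3) = l1, l2, l3
        return (b3 - b1) * (m2 - m1) >= (b2 - b1) * (m3 - m1)

    for line in lines:
        # Pop lines made useless by the new one; each line can be
        # popped at most once over the whole run.
        while len(hull) >= 2 and bad(hull[-2], hull[-1], line):
            hull.pop()
        hull.append(line)
    return hull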

The article claims amortized (not worst-case) O(log N) time per insertion. The amortized bound is easy to prove: each line is removed at most once, and each check is either the last one or leads to the deletion of one line.
The article does not say that this data structure supports deletions at all. I'm not sure whether they can be handled efficiently. There is an obstacle: the time complexity analysis relies on the fact that once we remove a line we will never need it again, which is no longer true when deletions are allowed.

Insertion can be quicker than O(log n): it can be achieved in O(log h), where h is the number of points already on the hull. Insertion in batch or one by one can be done in O(log h) per point. You can read my articles about that:
A Convex Hull Algorithm and its implementation in O(n log h)
Fast and improved 2D Convex Hull algorithm and its implementation in O(n log h)
First and Extremely fast Online 2D Convex Hull Algorithm in O(Log h) per point
About deletion: I'm pretty sure, though it still has to be proven, that it can be achieved in O(log n + log h) = O(log n) per point. A text about it is available at the end of my third article.

Related

Can my algorithm be done any better?

I have been presented with a challenge to make the most efficient algorithm that I can for a task. Right now I have reached a complexity of n * log n, and I was wondering whether it is even possible to do better. The task: there are kids playing a counting-out game. You are given the number n, which is the number of kids, and m, which is how many kids you skip before you execute someone. You need to return a list giving the execution order. I tried to do it like this, using a skip list:
current = m
while table.size > 0:
    executed.add(table[current % table.size])
    table.remove(current % table.size)
    current += m
My questions are: is this correct? Is it n * log n, and can you do it better?
Is this correct?
No.
When you remove an element from the table, table.size decreases, and the current % table.size expression generally ends up pointing at a different element than intended.
For example, 44 % 11 is 0 but 44 % 10 is 4, an element in a totally different place.
Is it n*logn?
No.
If table is just a random-access array, it can take n operations to remove an element.
For example, if m = 1, the program (after fixing the point above) would always remove the first element of the array.
With a naive array implementation, each removal takes table.size operations to shift the array, leading to about n^2 / 2 operations in total.
Now, it would be n log n if table were backed, for example, by a balanced binary search tree indexed by position (implicit keys) with split and merge primitives; a treap is one such structure, and English-language descriptions are easy to find with a quick search.
Such a data structure can be used as an array with O(log n) cost for access, split and merge.
But nothing so far suggests this is the case, and most languages' standard libraries do not provide such a data structure.
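For concreteness, here is a minimal Python sketch of such a structure: a treap keyed by implicit index with split and merge, plus a hypothetical execution_order_treap helper that runs the counting-out in O(n log n) expected time. None of these names come from the question; they are just illustrative.

import random

class TreapNode:
    # Treap node; the tree is ordered by implicit index via subtree sizes.
    def __init__(self, value):
        self.value = value
        self.priority = random.random()
        self.size = 1
        self.left = None
        self.right = None

def size(t):
    return t.size if t else 0

def update(t):
    t.size = 1 + size(t.left) + size(t.right)
    return t

def split(t, k):
    # Split t into (first k elements, the rest), by in-order position.
    if t is None:
        return None, None
    if size(t.left) < k:
        t.right, right = split(t.right, k - size(t.left) - 1)
        return update(t), right
    left, t.left = split(t.left, k)
    return left, update(t)

def merge(a, b):
    # Concatenate two treaps (all elements of a before all elements of b).
    if a is None or b is None:
        return a or b
    if a.priority > b.priority:
        a.right = merge(a.right, b)
        return update(a)
    b.left = merge(a, b.left)
    return update(b)

def pop_at(t, index):
    # Remove and return the element at 'index' in O(log n) expected time.
    left, rest = split(t, index)
    mid, right = split(rest, 1)
    return mid.value, merge(left, right)

def execution_order_treap(n, m):
    # Counting-out game: each step skips m kids and executes the next one.
    root = None
    for kid in range(n):
        root = merge(root, TreapNode(kid))
    order, current = [], 0
    for remaining in range(n, 0, -1):
        current = (current + m) % remaining
        kid, root = pop_at(root, current)
        order.append(kid)
    return order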
Can you do it better?
Correction: partially, yes; fully, maybe.
If we solve the problem backwards, we have the following sub-problem.
Let there be a circle of k kids, and the pointer is currently at kid t.
We know that, just a moment ago, there was a circle of k + 1 kids, but we don't know where, at which kid x, the pointer was.
Then we counted to m, removed the kid, and the pointer ended up at t.
Whom did we just remove, and what is x?
It turns out the "what is x" part can be solved in O(1) (a drawing can be helpful here), so finding the last kid standing is doable in O(n).
As pointed out in the comments, the whole thing is called the Josephus problem, and its variants are studied extensively, e.g., in Concrete Mathematics by Graham, Knuth, and Patashnik.
However, in O(1) per step, this only finds the number of the last standing kid.
It does not automatically give the whole order of counting the kids out.
There certainly are ways to make it O(log(n)) per step, O(n log(n)) in total.
But as for O(1), I don't know at the moment.
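As a quick illustration of the O(n) last-survivor computation, here is the standard Josephus recurrence in Python, assuming each step skips m kids and executes the next one (so the step size is m + 1); the function name is just illustrative:

def last_kid_standing(n, m):
    # Survivor's 0-based index; build the answer from a circle of 1 kid
    # up to n kids, shifting by the step size at each stage.
    pos = 0
    for k in range(2, n + 1):
        pos = (pos + m + 1) % k
    return pos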
The complexity of your algorithm depends on the complexity of the operations
executed.add(..) and table.remove(..).
If both of them have complexity of O(1), your algorithm has complexity of O(n) because the loop terminates after n steps.
While executed.add(..) can easily be implemented in O(1), table.remove(..) needs a bit more thinking.
You can make the whole thing O(n):
Store your persons in a LinkedList and connect the last element with the first. Removing an element costs O(1).
Going to the next person to choose costs O(m), but m is a constant, so that is O(1).
This way the algorithm has complexity O(n * m) = O(n) (for constant m).
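A minimal Python sketch of that circular-list simulation (again assuming each step skips m kids and removes the next one; the names are just illustrative):

class ListNode:
    def __init__(self, value):
        self.value = value
        self.next = None

def execution_order_linked(n, m):
    # Build a circular singly linked list of kids 0 .. n-1.
    nodes = [ListNode(i) for i in range(n)]
    for i in range(n):
        nodes[i].next = nodes[(i + 1) % n]

    order = []
    prev = nodes[-1]             # node just before the current counting position
    for _ in range(n):
        for _ in range(m):       # skip m kids: O(m) = O(1) for constant m
            prev = prev.next
        victim = prev.next       # the kid after the skipped ones is removed
        order.append(victim.value)
        prev.next = victim.next  # unlink in O(1)
    return order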

Algorithmic complexity of group average clustering

I've been reading lately about various hierarchical clustering algorithms such as single-linkage clustering and group average clustering. In general, these algorithms don't tend to scale well. Naive implementations of most hierarchical clustering algorithms are O(N^3), but single-linkage clustering can be implemented in O(N^2) time.
It is also claimed that group-average clustering can be implemented in O(N^2 logN) time. This is what my question is about.
I simply do not see how this is possible.
Explanation after explanation, such as:
http://nlp.stanford.edu/IR-book/html/htmledition/time-complexity-of-hac-1.html
http://nlp.stanford.edu/IR-book/completelink.html#averagesection
https://en.wikipedia.org/wiki/UPGMA#Time_complexity
... are claiming that group average hierarchical clustering can be done in O(N^2 logN) time by using priority queues. But when I read the actual explanation or pseudo-code, it always appears to me that it is nothing better than O(N^3).
Essentially, the algorithm is as follows:
For an input sequence of size N:
    Create an N x N distance matrix             # O(N^2) time
    For each row in the distance matrix:
        Create a priority queue (binary heap) of all distances in the row
Then:
    For i in 0 to N-1:
        Find the min element among all N priority queues   # O(N)
        Let k = the row index of the min element
        Merge the min element with its nearest neighbor
        For each element e in the kth row:
            Update the corresponding value in the distance matrix
            Update the corresponding value in priority_queue[e]
So it's that last step that, to me, would seem to make this an O(N^3) algorithm. There's no way to "update" an arbitrary value in the priority queue without scanning the queue in O(N) time - assuming the priority queue is a binary heap. (A binary heap gives you constant access to the min element and log N insertion/deletion, but you can't find an element by value in better than O(N) time.) And since we'd scan the priority queue for each row element, for each row, we get O(N^3).
The priority queue is sorted by a distance value - but the algorithm in question calls for deleting the element in the priority queue which corresponds to k, the row index in the distance matrix of the min element. Again, there's no way to find this element in the queue without an O(N) scan.
So, I assume I'm probably wrong since everyone else is saying otherwise. Can someone explain how this algorithm is not O(N^3), but in fact O(N^2 log N)?
I think you are saying that the problem is that in order to update an entry in a heap you have to find it, and finding it takes O(N) time. The way to get around this is to maintain an index that gives, for each item i, its location heapPos[i] in the heap. Every time you swap two items to restore the heap invariant, you also modify the two corresponding entries of heapPos to keep the index correct; this is just a constant factor on the work done in the heap.
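A minimal Python sketch of such a position-tracking heap (assuming items are integer ids 0 .. n-1 with a separate priority array; the class and method names are just illustrative):

class IndexedMinHeap:
    # Binary min-heap that remembers each item's position, so an arbitrary
    # item's priority can be changed in O(log n) without scanning.
    def __init__(self, priorities):
        self.prio = list(priorities)
        self.heap = list(range(len(self.prio)))  # heap of item ids
        self.pos = list(range(len(self.prio)))   # pos[item] = index in heap
        for i in reversed(range(len(self.heap) // 2)):
            self._sift_down(i)

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i]] = i   # keep the position index in sync
        self.pos[self.heap[j]] = j

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self.prio[self.heap[i]] >= self.prio[self.heap[parent]]:
                break
            self._swap(i, parent)
            i = parent

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self.prio[self.heap[child]] < self.prio[self.heap[smallest]]:
                    smallest = child
            if smallest == i:
                break
            self._swap(i, smallest)
            i = smallest

    def min_item(self):
        return self.heap[0]

    def update(self, item, new_priority):
        # Change one item's priority; only two heap paths are touched.
        self.prio[item] = new_priority
        self._sift_up(self.pos[item])
        self._sift_down(self.pos[item])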
If you store the positions in the heap (which adds another O(n) memory) you can update the heap without scanning, touching only the changed positions. Such an update is restricted to two paths in the heap (one for the removal, one for the update) and executes in O(log n). Alternatively, you could binary-search by the old priority, which will likely also be O(log n) (but slower; with the stored positions the lookup itself is O(1)).
So IMHO you can indeed implement this in O(n^2 log n). But the implementation will still use a lot of memory, O(n^2), and anything that needs O(n^2) memory does not scale: you usually run out of memory before you run out of time.
Implementing these data structures is quite tricky, and when not done well they can end up slower than a theoretically worse approach. Fibonacci heaps, for example, have nice properties on paper but constant factors too high to pay off.
No, because the distance matrix is symmetric.
If the first entry in row 0 refers to column 5 with a distance of 1, and that is the lowest in the system, then the first entry in row 5 must be the complementary entry referring to column 0, also with a distance of 1.
In fact you only need half of the matrix.

Grouping a set of points into nearest pairs

I need an algorithm for the following problem:
I'm given a set of 2D points P = { (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) } on a plane. I need to group them into pairs in the following manner:
1. Find the two closest points (x_a, y_a) and (x_b, y_b) in P.
2. Add the pair <(x_a, y_a), (x_b, y_b)> to the set of results R.
3. Remove (x_a, y_a) and (x_b, y_b) from P.
4. If P is not empty, go to step one.
5. Return the set of pairs R.
This naive algorithm is O(n^3); using a faster algorithm for nearest-neighbor search, it can be improved to O(n^2 log n). Can it be made any better?
And what if the points are not in Euclidean space?
[Example figure: the resulting groups are circled by red loops.]
Put all of the points into an R-tree (http://en.wikipedia.org/wiki/R-tree) in O(n log(n)) time, then for each point calculate the distance to its nearest neighbor. Put the points with these initial distances into a priority queue. Initialize an empty set of removed points and an empty set of pairs. Then run the following pseudocode:
while priority_queue is not empty:
    (distance, point) = priority_queue.get()
    if point in removed_set:
        continue
    neighbor = rtree.find_nearest_neighbor(point)
    if distance < distance_between(point, neighbor):
        # The previous neighbor was removed, find the next one.
        priority_queue.add((distance_between(point, neighbor), point))
    else:
        # This is the closest pair.
        found_pairs.add((point, neighbor))
        removed_set.add(point)
        removed_set.add(neighbor)
        rtree.remove(point)
        rtree.remove(neighbor)
The slowest part of this is the nearest neighbor searches. An R-tree does not guarantee that those nearest neighbor searches will be O(log(n)). But they tend to be. Furthermore you are not guaranteed that you will do O(1) neighbor searches per point. But typically you will. So average performance should be O(n log(n)). (I might be missing a log factor.)
This problem calls for a dynamic Voronoi diagram I guess.
When the Voronoi diagram of a point set is known, the nearest neighbor pair can be found in linear time.
Then deleting these two points can be done in linear or sublinear time (I didn't find precise info on that).
So globally you can expect an O(N²) solution.
If your distances are arbitrary and you can't embed your points into Euclidean space (and/or the dimension of the space would be really high), then there's basically no way around at least a quadratic-time algorithm, because you don't know what the closest pair is until you have checked all pairs. It is easy to get very close to this bound: sort all pairs by distance and maintain a boolean lookup table of which points have already been taken. Then go through the sorted pairs in order; whenever neither point of a pair is marked as taken, add the pair to your result and mark both of its points as taken. Complexity O(n^2 log n), with O(n^2) extra space.
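Here is a minimal Python sketch of that approach (the Euclidean default is just for illustration; any symmetric distance function could be passed in, and the function name is not from the original answer):

import itertools
import math

def pair_by_sorted_distances(points, dist=None):
    # O(n^2 log n) time and O(n^2) extra space; assumes an even number
    # of points if every point is supposed to end up in a pair.
    if dist is None:
        dist = math.dist  # default: Euclidean distance between tuples

    # Every index pair, sorted by distance.
    pairs = sorted(itertools.combinations(range(len(points)), 2),
                   key=lambda ij: dist(points[ij[0]], points[ij[1]]))

    taken = [False] * len(points)
    result = []
    for i, j in pairs:
        if not taken[i] and not taken[j]:
            result.append((points[i], points[j]))
            taken[i] = taken[j] = True
    return result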
You can find the closest pair with the classic divide-and-conquer algorithm, which runs in O(n log n) time; repeating it n times gives O(n^2 log n), which is no better than what you already have.
Nevertheless, you can exploit the recursive structure of the divide-and-conquer algorithm. Think about it this way: if the pair of points you removed was on the right side of the partition, then nothing changes on the left side, so you only have to redo the O(log n) merge steps bottom-up. The first new merge step merges 2 elements, the second 4, then 8, 16, ..., n/4, n/2, n, so the total number of operations across these merge steps is O(n), and you get the next closest pair in just O(n) time. Repeating this n/2 times, removing the previously found pair each time, gives an O(n^2) total runtime with O(n log n) extra space to keep track of the recursion, which is a little better.
But you can do even better: there is a randomized data structure that lets you update your point set with expected O(log n) query and update time. I'm not very familiar with that particular data structure, but you can find it in this paper. That would make your algorithm O(n log n) expected time. I'm not sure whether there is a deterministic version with similar runtimes, but those tend to be far more cumbersome.

Closest pair of points (linear 1-D case) algorithm

I'm tutoring a student and one of her assignments is to describe an O(nlogn) algorithm for the closest pair of points in the one-dimensional case. But the restriction is she's not allowed to use a divide-and-conquer approach. I understand the two-dimensional case from a question a user posted some years ago. I'll link it in case someone wants to look at it: For 2-D case (plane) - "Closest pair of points" algorithm.
However, for the 1-D case, I can only think of a solution that involves checking every point on the line and comparing it to the closest point to its left and right. But this solution isn't O(n log n), since checking each point takes time proportional to n and the comparisons for each point take time proportional to 2n. I'm not sure where the log(n) would come from without a divide-and-conquer approach.
For some reason, I can't come up with a solution. Any help would be appreciated.
Hint: If the points were ordered from left to right, what would you do, and what would the complexity be? What is the complexity of ordering the points first?
It seems to me that one could:
Sort the locations into order - O(n log n)
Find the differences between adjacent ordered locations - O(n)
Find the smallest difference - O(n)
The smallest difference defines the two closest points.
The overall result would be O(n log n).
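A minimal Python sketch of that approach (assuming at least two input locations; the function name is just illustrative):

def closest_pair_1d(locations):
    # Sort, then scan adjacent differences: O(n log n) + O(n).
    xs = sorted(locations)
    best = (xs[0], xs[1])
    for a, b in zip(xs, xs[1:]):
        if b - a < best[1] - best[0]:
            best = (a, b)
    return best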

Quickhull - all points on convex hull - bad performance

How to avoid bad performance in quickhull algorithm, when all points in input lie on convex hull?
QuickHull's performance comes mainly from being able to throw away a portion of the input with each recursive call (or iteration). This does not happen when all the points lie on a circle, unfortunately. Even in this case, it's still possible to obtain O(n log n) worst-case performance if the split step gives a fairly balanced partition at every recursive call. The ultimate worst case, which results in quadratic runtime, is when the splits are grossly imbalanced (say one side always ends up empty) in each call. Because this pretty much depends on the dataset, there's not much one can do about it.
You might want to try other algorithms instead, such as Andrew's variant of Graham's scan, or MergeHull. Both have guaranteed O(n log n) worst-case time complexity.
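For reference, a minimal Python sketch of Andrew's monotone chain variant, which keeps its O(n log n) bound even when every input point lies on the hull (the function name is just illustrative):

def convex_hull(points):
    # Points are (x, y) tuples; returns the hull in counter-clockwise order.
    points = sorted(set(points))
    if len(points) <= 2:
        return points

    def cross(o, a, b):
        # > 0 for a counter-clockwise turn, < 0 for clockwise, 0 if collinear.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in points:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)

    # Drop the last point of each half; it repeats at the start of the other.
    return lower[:-1] + upper[:-1]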
For a comparison of algorithm implementations, I suggest looking at my article: Fast and improved 2D Convex Hull algorithm and its implementation in O(n log h), which compares the performance of many 2D algorithms, such as:
Monotone chain
Graham scan
Delaunay/Voronoi
Chan
Liu and Chen
Ouellet (mine)
