2D Peak finding in linear time - algorithm

I am reading this O(n) solution of the 2D peak finding problem. The author says that an important detail is to
split by the maximum direction. For square arrays this means that
split directions will be alternating.
Why is this necessary?

This is not necessary. Alternating the direction gives O(N) for arrays of any shape.
Let's count the number of comparisons for an M × N array.
The first iteration gives 3×M comparisons, the second 3×N/2, the third 3×M/4, the fourth 3×N/8, i.e.:
3 * (M + M/4 + M/16 + ...) + 3 * (N/2 + N/8 + N/32 + ...)
We get two geometric series. Since both of these series have common ratio 1/4, we can bound the sum from above:
3 * (4 * M/3 + 2 * N/3)
Since O(const × N) = O(N) and O(M + N) = O(N) (taking N to be the larger dimension), we have an O(N) algorithm.
If we always choose the vertical direction, then the performance of the algorithm is O(N × log M). If M is much larger than N, this algorithm will be faster. For example, if M = 1025 and N = 3, then the number of comparisons in the first algorithm is on the order of a thousand, and in the second algorithm on the order of 30.
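As a quick sanity check (purely illustrative, assuming base-2 logarithms), here is a small Python snippet that evaluates both estimates for the M = 1025, N = 3 example:

import math

M, N = 1025, 3
# Bound for the alternating-direction algorithm: 3*(M + M/4 + ...) + 3*(N/2 + N/8 + ...)
alternating_bound = 3 * (4 * M / 3 + 2 * N / 3)
# Always scanning a line of N cells while halving the M rows: roughly N * log2(M)
fixed_direction = N * math.ceil(math.log2(M))
print(alternating_bound)   # 4106.0 -- on the order of M
print(fixed_direction)     # 33     -- on the order of N * log2(M)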
By splitting the array along the maximum direction, we get a faster algorithm for specific values of M and N. Is this algorithm O(N)? Yes, because even if we compared both a vertical and a horizontal section at each step, we would have 3 × (M + M/2 + M/4 + ...) + 3 × (N + N/2 + N/4 + ...) = 3 × (2M + 2N) comparisons, i.e. O(M + N) = O(N). And in reality we choose only one direction at each step.

Splitting the longer side ensures that the length of the split is at most sqrt(area). He could also have gone through the proof by noticing that each call halves the area and looks at most 3·sqrt(area) cells to do so.
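For reference, here is a minimal runnable sketch (Python; the names are mine, and this is the simpler fixed-direction variant discussed in the first answer, not the linear-time algorithm from the question): it always scans the middle row, takes that row's maximum, and recurses into the half of the rows where a larger neighbour lies, giving roughly N × log2(M) comparisons.

def find_peak(grid):
    # Returns (row, col) of a 2D peak: an entry not smaller than its 4 neighbours.
    top, bottom = 0, len(grid) - 1
    while True:
        mid = (top + bottom) // 2
        # Scan the middle row and take the column of its maximum (N comparisons).
        col = max(range(len(grid[mid])), key=lambda c: grid[mid][c])
        val = grid[mid][col]
        if mid > top and grid[mid - 1][col] > val:
            bottom = mid - 1       # a larger element lies above: keep the upper half
        elif mid < bottom and grid[mid + 1][col] > val:
            top = mid + 1          # a larger element lies below: keep the lower half
        else:
            return mid, col        # maximum of its row, not smaller than its vertical neighbours

grid = [[1, 2, 3],
        [4, 9, 6],
        [7, 8, 5]]
print(find_peak(grid))             # (1, 1): the value 9 is a peak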


Max-heapify with convergent series

I am going through max-heapify and below are my observations:
1. Observe that max-heapify takes O(1) time for nodes that are one level above the leaves, and in general O(L) time for nodes that are L levels above the leaves.
2. There are n/4 nodes at level 1, n/8 nodes at level 2, and so on.
The total amount of work in the for loop:
n/4 (1c) + n/8 (2c) + n/16 (3c) + ... + 1 (c log n)
Set n/4 = 2^k; then the sum becomes
C * 2^k * (1/2^0 + 2/2^1 + 3/2^2 + ... + (k+1)/2^k)
The series in the brackets is a convergent series bounded by a constant.
Algorithm is:
build-max-heap(A):
    for i = n/2 down to 1:
        do max-heapify(A, i)
I understand most of the lecture, but I am confused on some points:
1. Why are we using n/4 (1c), and not n/2? And how do we know that n/4 corresponds to level 1?
2. How does this convergent series lead us to Theta(n) complexity?
1: Consider some (complete, for simplicity) binary tree. It has one root, two nodes below the root, four below those, and so on. Hence the number of nodes in a binary tree with h levels is
1 + 2 + 2^2 + ... + 2^(h-1) = 2^h - 1
Since this is the number of nodes n, roughly half of all nodes (n/2) are leaves. The level above contains half as many nodes as there are leaves, so n/4.
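If you want to convince yourself of these counts, here is a small illustrative check (assuming the usual 1-indexed array representation of a heap, where the children of i are 2i and 2i+1, so i is a leaf exactly when 2i > n):

def nodes_per_level_above_leaves(n):
    counts = {}
    for i in range(1, n + 1):
        level, j = 0, i
        while 2 * j <= n:          # walk down left children until a leaf is reached
            j, level = 2 * j, level + 1
        counts[level] = counts.get(level, 0) + 1
    return counts

print(sorted(nodes_per_level_above_leaves(15).items()))
# [(0, 8), (1, 4), (2, 2), (3, 1)]: about n/2 leaves, n/4 nodes one level above, ...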
2: You have a runtime of
C * 2^k * (1 + 2/2^1 + ... + (k+1)/2^k)
Let's call the term in the brackets S(k) (it depends on k). S(k) converges as k goes to infinity. Furthermore it is increasing, since S(k+1) is S(k) plus a positive term. Hence it always has to be lower than its limit: if it were ever larger, it could not come back down, because it never decreases. Therefore we can say there is a constant A (independent of k) such that S(k) < A for all k.
Hence we can write the runtime as
C * 2^k * (1 + 2/2^1 + ... + (k+1)/2^k) < C * 2^k * A = A * C * n/4 = O(n)
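Numerically (just as an illustration), S(k) indeed increases toward a constant limit; in fact the infinite sum equals 4:

def S(k):
    return sum((i + 1) / 2 ** i for i in range(k + 1))

for k in (1, 5, 10, 20, 40):
    print(k, S(k))    # the values increase and approach 4, so any A >= 4 works as a bound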
This is kind of a long explanation; I had the same difficulty understanding heapify() before, so I'd like to share it. Please let me know if you find any issues. Thanks.
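To connect the pseudocode with the analysis, here is a minimal runnable sketch (Python, 0-indexed; the function names and the step counter are mine) that counts the sift-down steps performed while building the heap, so you can watch the total work grow linearly in n:

import random

def max_heapify(a, i, n, stats):
    # Sift a[i] down within a[0:n] so the subtree rooted at i becomes a max-heap.
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        stats["steps"] += 1
        i = largest

def build_max_heap(a):
    stats = {"steps": 0}
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # nodes n//2 .. n-1 are leaves, so start above them
        max_heapify(a, i, n, stats)
    return stats["steps"]

for n in (1_000, 10_000, 100_000):
    steps = build_max_heap([random.random() for _ in range(n)])
    print(n, steps, steps / n)            # steps / n stays below a small constant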

Can't calculate growth rate of function

Hello, I am trying to solve the question in the image above, but I can't.
Specifically, my question is about C(n) in the image; I got 7 log n + n^(1/3) at the end.
We know that for the left side of the + sign, 7 log n <= n for all n > 7 (witness c = 1, k = 7), and for the right side, n^(1/3) <= n.
From my perspective both sides of the + sign are O(n), and thus the whole C(n) is O(n).
But why is the answer Big-Theta(n^(1/3))?
This is only true if log is the logarithm to base 2 (then log(8) = 3, because 2^3 = 8).
8^(log(n)/9) = (8^log(n))^(1/9) = (n^log(8))^(1/9) = (n^3)^(1/9) = n^(3 * 1/9) = n^(1/3)
n^(1/3) is the same as the 3rd root of n.
It is O(n^(1/3)) and not O(log(n)) because the former term grows faster:
The limit of log(n) / n^(1/3) as n goes to infinity equals 0. If you had to swap the two expressions to get 0, then the other one would be growing faster. E.g. n + log(n) is O(n) because n grows faster; not to be confused with n * log(n), which is O(n log n).
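A quick numerical check of the identity and of the limit above (assuming base-2 logarithms):

import math

for n in (10, 1_000, 1_000_000):
    lhs = 8 ** (math.log2(n) / 9)
    rhs = n ** (1 / 3)
    print(n, lhs, rhs, math.log2(n) / rhs)
    # lhs and rhs agree up to floating-point rounding,
    # and log2(n) / n^(1/3) keeps shrinking toward 0 as n grows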

Complexity of trominoes algorithm

What is, or what should be, the complexity of the (divide and conquer) trominoes algorithm, and why?
I've been given a 2^k × 2^k board, and one of the tiles is randomly removed, making it a deficient board. The task is to fill the board with "trominoes", which are L-shaped figures made of 3 tiles.
Tiling Problem
– Input: An n by n square board, with one of the 1 by 1 squares missing, where n = 2^k for some k ≥ 1.
– Output: A tiling of the board using trominoes, a three-square tile obtained by deleting the upper right 1 by 1 corner from a 2 by 2 square.
– You are allowed to rotate the tromino when tiling the board.
Base Case: A 2 by 2 square can be tiled.
Induction:
– Divide the square into four n/2 by n/2 squares.
– Place a tromino at the “center”, positioned so that it does not overlap the n/2 by n/2 square that contains the missing 1 by 1 square.
– Solve each of the four n/2 by n/2 boards inductively.
This algorithm runs in time O(n^2) = O(4^k). To see why, notice that your algorithm does O(1) work per grid, then makes four subcalls to grids whose width and height are half the original size. If we use n as a parameter denoting the width or height of the grid, we have the following recurrence relation:
T(n) = 4T(n / 2) + O(1)
By the Master Theorem, this solves to O(n^2). Since n = 2^k, we see that n^2 = 4^k, so this is also O(4^k) if you want to use k as your parameter.
We could also let N denote the total number of squares on the board (so N = n^2), in which case the subcalls are to four grids of size N / 4 each. This gives the recurrence
S(N) = 4S(N / 4) + O(1)
This solves to O(N) = O(n^2), confirming the above result.
Hope this helps!
To my understanding, the complexity can be determined as follows. Let T(n) denote the number of steps needed to solve a board of side length n. From the description in the original question above, we have
T(2) = c
where c is a constant and
T(n) = 4*T(n/2) + b
where b is a constant for placing the tromino. Using the master theorem, the runtime bound is
O(n^2)
via case 1.
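As a quick numerical check of this recurrence (with b = c = 1 for simplicity), T(n)/n^2 settles near a constant:

def T(n):
    return 1 if n == 2 else 4 * T(n // 2) + 1

for k in range(1, 8):
    n = 2 ** k
    print(n, T(n), T(n) / n ** 2)   # the ratio approaches 1/3, i.e. T(n) = Theta(n^2)

In fact, with these constants T(n) works out to exactly (n^2 - 1)/3, which is precisely the number of trominoes placed (see the last answer below).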
I'll try to offer a less formal solution, without making use of the Master theorem.
– Place the tromino at the “center”, where the tromino does not overlap the n/2 by n/2 square which was earlier missing out 1 by 1 square.
I'm guessing this is an O(1) operation? In that case, if n is the size of the board (here, its total number of cells):
T(1) = O(1)
T(n) = 4T(n / 4) + O(1) =
= 4(4T(n / 4^2) + O(1)) + O(1) =
= 4^2 T(n / 4^2) + 4*O(1) + O(1) =
= ... =
= 4^k T(n / 4^k) + (4^(k-1) + ... + 4 + 1)*O(1)
But n = 2^k x 2^k = 2^(2k) = (2^2)^k = 4^k, and 1 + 4 + ... + 4^(k-1) = (4^k - 1)/3 = O(n), so the whole algorithm is O(n).
Note that this does not contradict #Codor's answer, because he took n to be the side length of the board, while I took it to be the entire area.
If the middle step is not O(1) but O(n):
T(n) = 4T(n / 4) + O(n) =
= 4(4*T(n / 4^2) + O(n / 4)) + O(n) =
= 4^2T(n / 4^2) + 2*O(n) =
= ... =
= 4^kT(n / 4^k) + k*O(n)
We have:
k*O(n) = O(n log n), because 4^k = n and therefore k = log_4(n)
So the entire algorithm would be O(n log n).
You do O(1) work per tromino placed. Since there are (n^2 - 1)/3 trominoes to place, the algorithm takes O(n^2) time.
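Here is a minimal runnable sketch of the divide-and-conquer tiling described above (Python; the function names and the board representation are mine). It places one tromino at the center, recurses into the four quadrants, and counts placements so the (n^2 - 1)/3 figure can be checked directly:

def tile(board, top, left, size, miss_r, miss_c, counter):
    # Tile the size x size sub-board at (top, left), which is missing cell (miss_r, miss_c).
    if size == 1:
        return
    counter[0] += 1                      # one tromino placed at the "center"
    tid = counter[0]
    half = size // 2
    mid_r, mid_c = top + half, left + half
    for dr, dc in ((0, 0), (0, 1), (1, 0), (1, 1)):
        q_top, q_left = top + dr * half, left + dc * half
        if q_top <= miss_r < q_top + half and q_left <= miss_c < q_left + half:
            hole_r, hole_c = miss_r, miss_c                    # this quadrant already has a hole
        else:
            hole_r, hole_c = mid_r - 1 + dr, mid_c - 1 + dc    # cover its corner at the center
            board[hole_r][hole_c] = tid
        tile(board, q_top, q_left, half, hole_r, hole_c, counter)

k = 3
n = 2 ** k
board = [[0] * n for _ in range(n)]
board[5][2] = -1                         # mark the missing cell
count = [0]
tile(board, 0, 0, n, 5, 2, count)
print(count[0], (n * n - 1) // 3)        # both print 21 for the 8 x 8 board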

Analysis of dictionary

My first question: in the analysis it is mentioned that
n + (n/2) + (n/4) + ... is at most 2n. How do we get the result "at most 2n"?
We have a collection of arrays, where array "i" has size 2^i. Each array is either empty or full, and each is sorted. However, there will be no relationship between the items in different arrays. The question of which arrays are full and which are empty is based on the binary representation of the number of items we are storing.
To perform a lookup, we just do a binary search in each occupied array. In the worst case this takes time O(log(n) + log(n/2) + log(n/4) + ... + 1) = O(log^2 n).
The following are my questions on the above text snippet:
How did the author come up with O(log(n) + log(n/2) + log(n/4) + ... + 1)?
And why is the above sum O(log^2 n)?
Thanks!
n + n/2 + n/4 + n/8 ... = n * (1/1 + 1/2 + 1/4 + 1/8 + ...)
The sum 1/1 + 1/2 + 1/4 + 1/8 + ... is a geometric series that converges to 2, so the result is 2n.
Apparently, the author is talking about a collection of arrays with the sizes n, n/2, n/4, ..., and he is doing a binary search in each of them. A binary search in an array with n elements takes O(log n) time, so the total time required is O(log n + log n/2 + log n/4 + ... + 1). Since there are only about log n occupied arrays and each of these terms is at most log n, the whole sum is O(log^2 n).
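A quick numerical illustration of both sums (purely illustrative):

import math

n = 2 ** 20
sizes = [n >> i for i in range(int(math.log2(n)) + 1)]      # n, n/2, n/4, ..., 1

print(sum(sizes), 2 * n)                                    # 2097151 <= 2097152, i.e. at most 2n
print(sum(math.log2(s) for s in sizes), math.log2(n) ** 2)  # 210.0 vs 400.0, i.e. O(log^2 n)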

Average Runtime of Quickselect

Wikipedia states that the average runtime of the quickselect algorithm (link) is O(n). However, I could not clearly understand how this is so. Could anyone explain to me (via a recurrence relation + master method usage) how the average runtime is O(n)?
Because
we already know which partition our desired element lies in.
We do not need to sort (by partitioning) all the elements; we only operate on the partition we need.
As in quicksort, we partition into halves*, and then into halves of a half, but this time we only need to do the next round of partitioning on the single partition (half) of the two in which the element is expected to lie.
It is like (not very accurately):
n + 1/2 n + 1/4 n + 1/8 n + ..... < 2 n
So it is O(n).
* Half is used for convenience; the actual partition is not exactly 50%.
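Here is a minimal runnable quickselect sketch (Python, random pivot, Lomuto-style partition; the code is mine, not from Wikipedia) that counts comparisons, so the linear growth described above can be observed directly. The constant ends up larger than 2 because the partitions are not exact halves, but comparisons / n stays roughly flat:

import random

def quickselect(a, k):
    # Return the k-th smallest element (0-based) of a, plus the number of comparisons made.
    a = list(a)
    lo, hi, comparisons = 0, len(a) - 1, 0
    while True:
        if lo == hi:
            return a[lo], comparisons
        pivot_idx = random.randint(lo, hi)
        a[pivot_idx], a[hi] = a[hi], a[pivot_idx]
        pivot, store = a[hi], lo
        for i in range(lo, hi):                  # partition the current range around the pivot
            comparisons += 1
            if a[i] < pivot:
                a[i], a[store] = a[store], a[i]
                store += 1
        a[store], a[hi] = a[hi], a[store]
        if k == store:
            return a[store], comparisons
        elif k < store:
            hi = store - 1                       # continue only in the left part
        else:
            lo = store + 1                       # continue only in the right part

for n in (1_000, 10_000, 100_000):
    data = [random.random() for _ in range(n)]
    _, comps = quickselect(data, n // 2)         # select the median
    print(n, comps, comps / n)                   # comps / n stays roughly constant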
To do an average-case analysis of quickselect, one has to consider, for every pair of elements, how likely it is that they are compared during the algorithm, assuming random pivoting. From this we can derive the average number of comparisons. Unfortunately the analysis I will show requires some longer calculations, but it is a clean average-case analysis, as opposed to the current answers.
Let's assume the array from which we want to select the k-th smallest element is a random permutation of [1,...,n]. The pivot elements we choose during the course of the algorithm can also be seen as a given random permutation. During the algorithm we always pick the next feasible pivot from this permutation, so the pivots are chosen uniformly at random: every element has the same probability of occurring as the next feasible element in the random permutation.
There is one simple, yet very important, observation: we compare two elements i and j (with i < j) if and only if one of them is chosen as the first pivot element from the range [min(k,i), max(k,j)]. If another element from this range is chosen first, then the two will never be compared, because we continue searching in a sub-array that no longer contains at least one of the elements i, j.
Because of the above observation and the fact that the pivots are chosen uniformly at random, the probability of a comparison between i and j is:
2/(max(k,j) - min(k,i) + 1)
(Two events out of max(k,j) - min(k,i) + 1 possibilities.)
We split the analysis in three parts:
max(k,j) = k, therefore i < j <= k
min(k,i) = k, therefore k <= i < j
min(k,i) = i and max(k,j) = j, therefore i < k < j
In the third case the less-than-or-equal signs are omitted because those cases are already covered by the first two cases.
Now let's get our hands a little dirty with calculations. We just sum up all the probabilities, as this gives the expected number of comparisons.
Case 1
Case 2
Similar to case 1 so this remains as an exercise. ;)
Case 3
We use H_r for the r-th harmonic number which grows approximately like ln(r).
Conclusion
All three cases need a linear number of expected comparisons. This shows that quickselect indeed has an expected runtime in O(n). Note that - as already mentioned - the worst case is in O(n^2).
Note: The idea of this proof is not mine. I think that's roughly the standard average case analysis of quickselect.
If there are any errors please let me know.
In quickselect, as specified, we apply recursion on only one half of the partition.
Average Case Analysis:
First step: T(n) = cn + T(n/2)
where cn is the time to perform the partition (c is any constant; its value doesn't matter) and T(n/2) is the cost of recursing on one half of the partition. Since it's the average case, we assume that the partition element was the median.
As we keep on recursing, we get the following set of equations:
T(n/2) = cn/2 + T(n/4)
T(n/4) = cn/4 + T(n/8)
...
T(2) = 2c + T(1)
T(1) = c
Summing the equations and cross-cancelling like values produces a linear result.
c(n + n/2 + n/4 + ... + 2 + 1) < c(2n) // sum of a GP
Hence, it's O(n)
I also felt very conflicted at first when I read that the average time complexity of quickselect is O(n) while we break the list in half each time (like binary search or quicksort). It turns out that breaking the search space in half each time doesn't guarantee an O(log n) or O(n log n) runtime. What makes quicksort O(n log n) and quickselect O(n) is that we always need to explore all branches of the recursive tree for quicksort, but only a single branch for quickselect. Let's compare the time complexity recurrence relations of quicksort and quickselect to prove my point.
Quicksort:
T(n) = n + 2T(n/2)
= n + 2(n/2 + 2T(n/4))
= n + 2(n/2) + 4T(n/4)
= n + 2(n/2) + 4(n/4) + ... + n(n/n)
= 2^0(n/2^0) + 2^1(n/2^1) + ... + 2^log2(n)(n/2^log2(n))
= n (log2(n) + 1) (since we are adding n to itself log2(n) + 1 times)
Quickselect:
T(n) = n + T(n/2)
= n + n/2 + T(n/4)
= n + n/2 + n/4 + ... + n/n
= n(1 + 1/2 + 1/4 + ... + 1/2^log2(n))
< n (1/(1-(1/2))) = 2n (by the geometric series)
I hope this convinces you why the average runtime of quickselect is O(n).
