Looking at the following tail recursive quicksort pseudocode
QuickSort(A[1, ..., n], lo, hi)
    Input: An array A of n distinct integers, the lower index and the higher index
    // For the first call lo = 1 and hi = n
    Output: The array A in sorted order
    If lo = hi return
    // The array A is already sorted in this case
    If lo > hi or indices out of the range 1 to n then return
    Else
        Pick an index k in [lo, hi] as the pivot
        // Assume that this can be done using O(1) cells on the stack
        i = Partition(A[lo, ..., hi], k)
        // Use in-place partitioning here, so assume that this can be done
        // using O(1) space on the stack
        If i - lo <= hi - i
            QuickSort(A, lo, i-1) // sort the smaller half first
            QuickSort(A, i+1, hi)
        Else
            QuickSort(A, i+1, hi) // sort the smaller half first
            QuickSort(A, lo, i-1)
Assuming that the pivot is chosen adversarially each time, I worked out that it should have a space complexity of O(log n), though I am not entirely sure that is correct. How would the space complexity be affected if the pivot is instead chosen uniformly at random? I am fairly new to analysing space complexity as opposed to time complexity, so any feedback is appreciated!
Refer to this article covering Tail Recursion.
Within the article it says that the Space Complexity of a Tail Recursive Quick Sort is as follows:
space complexity = input + O(log(n))
A few articles to get a more in depth of understanding can be found below:
Pivoting To Understand QuickSort Pt.1
Pivoting To Understand QuickSort Pt.2
QuickSort Notes from Duke
Carnegie Mellon Randomized Quicksort Lecture Notes
Algorithmic Analysis of QuickSort
QuickSort Using Random Pivoting
The worst case for time is if you divide the array as unevenly as possible, and that time will be O(n^2). If you're not doing tail-recursion, that will also be the worst case for space.
However, if you divide the array unevenly and are doing a tail-recursive sort, the call that sorts the larger half takes no extra space because it just replaces the current call frame. The maximum space is therefore used when you keep making the first (non-tail) recursive call over and over again, each time on a subarray at most half the size of the previous one, for a total of at most log_2(n) call frames.
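To make the mechanism concrete, here is a minimal Python sketch (my own illustration, not the question's pseudocode verbatim; the function names are mine) that recurses only on the smaller part and turns the call on the larger part into a loop, so the stack never holds more than about log2(n) frames even with adversarial pivots:

    import random

    def partition(a, lo, hi, k):
        # Lomuto-style in-place partition around pivot a[k]; O(1) extra space.
        a[k], a[hi] = a[hi], a[k]
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def quicksort(a, lo=0, hi=None):
        # Sorts a[lo..hi] in place. Only the smaller part gets a real recursive
        # call; the larger part is handled by updating lo/hi and looping, which
        # is what the tail call amounts to. Recursion depth is O(log n).
        if hi is None:
            hi = len(a) - 1
        while lo < hi:
            p = partition(a, lo, hi, random.randint(lo, hi))
            if p - lo <= hi - p:
                quicksort(a, lo, p - 1)
                lo = p + 1
            else:
                quicksort(a, p + 1, hi)
                hi = p - 1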
If you switch from worst case to average case with a uniformly chosen pivot, it is O(log(n)) again, but with a better constant. First of all it can't be more than that because the average case cannot exceed the worst case.
The trick is to prove that you can't improve that bound. To demonstrate that, we can prove that the average space f(n) needed to sort an array of size n is at least C log(n+1)/(3 log(2)), where C is the space for a single call.
By inspection this is true for n = 1, 2, ..., 7 because the initial call takes space C and log(n+1)/(3 log(2)) <= 1.
If n is bigger than 7 and the statement is true for all smaller sizes, our pivot will break us into groups of size m and n-m where m <= n-m. With at least even odds we have n <= 4m, and in that case our expected maximum cost during the first recursive call is at least

    C + f(m)
        >= C + f(n/4 rounded up)
        >= C (3 log(2)) / (3 log(2)) + C log(n/4 + 1) / (3 log(2))
        >  C (3 log(2) + log(n+1) - 2 log(2)) / (3 log(2))
        =  C (log(n+1) + log(2)) / (3 log(2))
The rest of the time that doesn't hold, and our expected maximum cost during the tail-recursive call is at least

    f(n-m)
        >= f(n/2 rounded down)
        >= C log(n/2 + 1/2) / (3 log(2))
        =  C (log(n+1) - log(2)) / (3 log(2))
When you average those two, you get the desired lower bound of C log(n+1) / (3 log(2)).
(I may have made a small error, but the idea is right.)
I have a method called binary sum
Algorithm BinarySum(A, i, n):
    Input: An array A and integers i and n
    Output: The sum of the n integers in A starting at index i
    if n = 1 then
        return A[i]
    return BinarySum(A, i, n/2) + BinarySum(A, i + n/2, n/2)
Ignoring the fact that this makes a simple problem complicated, I have been asked to find the Big O. Here is my thought process. For an array of size N I will be making 1 + 2 + 4 + ... + N recursive calls. This is close to half the sum from 1 to N, so I will say it is about N(N + 1)/4. After making this many calls, I now need to add them together. So once again I need to perform N(N + 1)/4 additions. Adding them together we are left with N^2 as the dominant term.
So would the big O of this algorithm be O(N^2)? Or am I doing something wrong? It feels strange to have binary recursion and not have a 2^n or log n term in the final answer.
There are in fact 2^n and log n terms in the final result... sort of.
For each call on a sub-array of length n, two recursive calls are made on the two halves of this array, plus a constant amount of work c (the if-statement, the addition, pushing onto the call stack, etc.). Thus the recurrence relation is given by:

    T(1) = c
    T(n) = 2 T(n/2) + c

At this point we could just use the Master theorem to directly arrive at the final result - O(n). But let's instead derive it by repeated expansion:

    T(n) = 2 T(n/2) + c
         = 4 T(n/4) + 2c + c
         = 8 T(n/8) + 4c + 2c + c
         = ...
         = 2^m T(n/2^m) + c (2^(m-1) + ... + 4 + 2 + 1)
         = 2^m T(n/2^m) + c (2^m - 1)    (*)

The stopping condition n = 1 gives the maximum value of m (ignoring rounding):

    n / 2^m = 1  =>  m = log2(n)  =>  2^m = n

so that

    T(n) = n T(1) + c (n - 1) = c (2n - 1) = O(n)

In step (*) we used the standard formula for geometric series. So as you can see the answer does involve log n and 2^n terms in a sense, but they "cancel" out to give a simple linear term, which is the same as for a simple loop.
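For what it's worth, here is a small Python sketch (my own; the split is adjusted to ceil/floor so it also works when n is not a power of two, unlike the question's plain n/2):

    def binary_sum(a, i, n):
        # Sum of the n elements a[i], ..., a[i + n - 1], using 0-based indices.
        if n == 1:
            return a[i]
        half = (n + 1) // 2   # ceil(n / 2)
        return binary_sum(a, i, half) + binary_sum(a, i + half, n - half)

A quick way to see the O(n) bound without the recurrence: the recursion tree has exactly n leaves (one per element) and therefore n - 1 internal nodes, so there are 2n - 1 calls in total, each doing O(1) work.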
According to Wikipedia, partition-based selection algorithms such as quickselect have runtime of O(n), but I am not convinced by it. Can anyone explain why it is O(n)?
In the normal quicksort, the runtime is O(n log n). Every time we partition the array into two parts (greater than the pivot and less than the pivot), we need to continue the process in both parts, whereas quickselect only needs to process one part. I totally understand these points.
However, if you think about the Binary Search algorithm: after we choose the middle element, we also search only one side. So does that make the algorithm O(1)? No, of course not; Binary Search is still O(log n) instead of O(1). The same goes for searching for an element in a Binary Search Tree: we only search one side, but we still consider it O(log n) rather than O(1).
Can someone explain why, in quickselect, continuing the search on only one side of the pivot is treated as if it contributed only O(1) rather than O(log n)? I would consider the algorithm to be O(n log n): O(n) for the partitioning and O(log n) for the number of times we continue searching.
There are several different selection algorithms, from the much simpler quickselect (expected O(n), worst-case O(n^2)) to the more complex median-of-medians algorithm (Θ(n)). Both of these algorithms work by using a quicksort partitioning step (time O(n)) to rearrange the elements and position one element into its proper position. If that element is at the index in question, we're done and can just return that element. Otherwise, we determine which side to recurse on and recurse there.
Let's now make a very strong assumption - suppose that we're using quickselect (pick the pivot randomly) and on each iteration we manage to guess the exact middle of the array. In that case, our algorithm will work like this: we do a partition step, throw away half of the array, then recursively process one half of the array. This means that on each recursive call we end up doing work proportional to the length of the array at that level, but that length keeps decreasing by a factor of two on each iteration. If we work out the math (ignoring constant factors, etc.) we end up getting the following time:
Work at the first level: n
Work after one recursive call: n / 2
Work after two recursive calls: n / 4
Work after three recursive calls: n / 8
...
This means that the total work done is given by
n + n / 2 + n / 4 + n / 8 + n / 16 + ... = n (1 + 1/2 + 1/4 + 1/8 + ...)
Notice that this last term is n times the sum of 1, 1/2, 1/4, 1/8, etc. If you work out this infinite sum, despite the fact that there are infinitely many terms, the total sum is exactly 2. This means that the total work is
n + n / 2 + n / 4 + n / 8 + n / 16 + ... = n (1 + 1/2 + 1/4 + 1/8 + ...) = 2n
This may seem weird, but the idea is that if we do linear work on each level but keep cutting the array in half, we end up doing only roughly 2n work.
An important detail here is that there are indeed O(log n) different iterations here, but not all of them are doing an equal amount of work. Indeed, each iteration does half as much work as the previous iteration. If we ignore the fact that the work is decreasing, you can conclude that the work is O(n log n), which is correct but not a tight bound. This more precise analysis, which uses the fact that the work done keeps decreasing on each iteration, gives the O(n) runtime.
Of course, this is a very optimistic assumption - we almost never get a 50/50 split! - but using a more powerful version of this analysis, you can say that if you can guarantee any constant factor split, the total work done is only some constant multiple of n. If we pick a totally random element on each iteration (as we do in quickselect), then on expectation we only need to pick two elements before we end up picking some pivot element in the middle 50% of the array, which means that, on expectation, only two rounds of picking a pivot are required before we end up picking something that gives a 25/75 split. This is where the expected runtime of O(n) for quickselect comes from.
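For concreteness, here is a short Python sketch of quickselect with a random pivot (my own illustration, not code from any particular textbook); note that it only ever keeps working on one side of the partition:

    import random

    def quickselect(a, k):
        # Returns the k'th smallest element of a (k is 0-based); expected O(n).
        a = list(a)
        lo, hi = 0, len(a) - 1
        while True:
            if lo == hi:
                return a[lo]
            # Partition a[lo..hi] around a randomly chosen pivot: O(hi - lo) work.
            p = random.randint(lo, hi)
            a[p], a[hi] = a[hi], a[p]
            pivot, i = a[hi], lo
            for j in range(lo, hi):
                if a[j] < pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]
            if k == i:
                return a[i]
            elif k < i:
                hi = i - 1   # keep only the left part
            else:
                lo = i + 1   # keep only the right part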
A formal analysis of the median-of-medians algorithm is much harder because the recurrence is difficult to analyze. Intuitively, the algorithm works by doing a small amount of work to guarantee a good pivot is chosen. However, because there are two different recursive calls made, an analysis like the above won't work correctly. You can either use an advanced result called the Akra-Bazzi theorem, or use the formal definition of big-O to explicitly prove that the runtime is O(n). For a more detailed analysis, check out "Introduction to Algorithms, Third Edition" by Cormen, Leiserson, Rivest, and Stein.
Let me try to explain the difference between selection & binary search.
The binary search algorithm does O(1) work in each step. In total there are log(N) steps, which makes it O(log(N)).
The selection algorithm does O(n) work in each step, but this 'n' keeps halving each time. In total there are log(N) steps.
This makes it N + N/2 + N/4 + ... + 1 (log(N) times) = 2N = O(N)
For binary search it is 1 + 1 + ... (log(N) times) = O(logN)
In Quicksort, the recursion tree is lg(N) levels deep and each of these levels requires O(N) amount of work. So the total running time is O(NlgN).
In Quickselect, the recursion tree is lg(N) levels deep and each level requires only half the work of the level above it. This produces the following:
N * (1/1 + 1/2 + 1/4 + 1/8 + ...)
or
N * Summation(1/2^i)
0 <= i <= lgN
The important thing to note here is that i goes from 0 to lgN, but not from 1 to N and also not to infinity.
The summation evaluates to at most 2. Hence Quickselect = O(2N) = O(N).
Quicksort does not have a big-O of n log n - its worst-case runtime is n^2.
I assume you're asking about Hoare's selection algorithm (or quickselect), not the naive selection algorithm that is O(kn). Like quicksort, quickselect has a worst-case runtime of O(n^2) (if bad pivots are chosen), not O(n). It can run in expected O(n) time because it only processes one side, as you point out.
Because for selection, you're not necessarily sorting. You can simply count how many items there are with any given value. So an O(n) median can be computed by counting how many times each value comes up, and picking the value that has 50% of the items above it and 50% below it. It's essentially one pass through the array, simply incrementing a counter for each element, so it's O(n).
For example, if you have an array "a" of 8 bit numbers, you can do the following:
int histogram[256];
int i, sum;

for (i = 0; i < 256; i++)
{
    histogram[i] = 0;
}

/* One pass over the array: count how many times each 8-bit value occurs. */
for (i = 0; i < numItems; i++)
{
    histogram[a[i]]++;
}

/* Walk up the histogram until we have accumulated half of the items. */
i = -1;
sum = 0;
while (sum < (numItems + 1) / 2)
{
    i++;
    sum += histogram[i];
}
At the end, the variable "i" will contain the 8-bit value of the median. It takes about 1.5 passes: once through the entire array "a" to count the values, and then (on average) about halfway through the 256-entry histogram to find the final value.
I am trying to find an algorithm to solve the following equation:
∑ max(a_i, x) = y
in which the a_i are constants and x is the variable.
I can find an algorithm with O(n log n) time complexity as follows:
First of all, sort the a_i in O(n log n) time and arrange the intervals
(−∞, a_0), (a_0, a_1), …, (a_i, a_{i+1}), …, (a_{n−1}, a_n), (a_n, ∞)
Then, for each interval, assume x belongs to this interval and solve the equation. We get a candidate x̂ and then test whether x̂ belongs to that interval or not. If x̂ belongs to the corresponding interval, we assign x̂ to x and return x. Otherwise, we try the next interval until we get the solution.
The above method is an O(n log n) algorithm due to the sort. With the definition of the equation-solving problem, I expect an algorithm with O(n) time complexity. Is there any reference for this problem?
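For reference, here is a small Python sketch (my own; the function name is made up) of the O(n log n) method described above: sort, then test each interval in one linear pass. It assumes the list is non-empty.

    def solve_by_sorting(a, y):
        # Solve sum(max(a_i, x) for a_i in a) = y; O(n log n) because of the sort.
        a = sorted(a)
        n = len(a)
        suffix = sum(a)                  # sum of a[k:], updated as k grows
        if y < suffix:
            raise ValueError("no solution: y must be at least sum(a)")
        for k in range(1, n + 1):
            suffix -= a[k - 1]           # now suffix = sum of a[k:]
            x = (y - suffix) / k         # candidate assuming exactly k terms equal x
            upper = a[k] if k < n else float("inf")
            if a[k - 1] <= x <= upper:   # does the candidate fall in its interval?
                return x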
First of all, this only has a solution if the sum of all a_i is smaller than y. You should check this first, because the algorithm below depends on this property.
Assume that we have chosen some pivot p from all a_i and want to calculate the x that corresponds to the interval [p, q), where q is the next larger a_i. Writing n for the number of a_i that are smaller than or equal to p, the equation for this interval reads n * x + (sum of all a_i greater than p) = y, which gives:
x = (y - sum of all a_i greater than p) / n
If you move p to the next larger a_i, call it p', the candidate changes to
x' = (n * x + p') / (n + 1)
where n is still the old number of a_i that are smaller than or equal to p; in other words, the new candidate is a weighted average of the old candidate x and the new pivot p'.
Coming back to the first equation, we can observe the following: if x is smaller than p, we should choose a smaller p; if x is greater than the smallest of the a_i that are greater than p, we should choose a larger p. In every other case, we have found the right x.
This can be utilized in a quickselect-like procedure. MvG's comment brought me onto this track; all credit for the quickselect idea goes to him. Here is some pseudocode (a modified version of the one from Wikipedia):
findX(list, y)
    left := 0
    right := length(list) - 1
    sumGreater := 0    // the sum of all a_i greater than the current interval
    numSmaller := 0    // the number of all a_i smaller than the current interval
    minGreater := inf  // the minimum of all a_i greater than the current interval
    loop
        if left = right
            return (y - sumGreater) / (numSmaller + 1)
        pivotIndex := medianOfMedians(list, left, right)
        // the partition function will also sum the elements larger than the pivot,
        // count the elements smaller than the pivot, and find the minimum of the
        // larger elements
        (pivotIndex, partialSumGreater, partialNumSmaller, partialMinGreater)
            := partition(list, left, right, pivotIndex)
        x := (y - sumGreater - partialSumGreater) / (numSmaller + partialNumSmaller + 1)
        if x >= list[pivotIndex] && x < min(partialMinGreater, minGreater)
            return x
        else if x < list[pivotIndex]
            right := pivotIndex - 1
            minGreater := list[pivotIndex]
            sumGreater += partialSumGreater + list[pivotIndex]
        else
            left := pivotIndex + 1
            numSmaller += partialNumSmaller + 1
The key idea is that the partitioning function gathers some additional statistics. This does not change the time complexity of the partitioning function because it requires O(n) additional operations, leaving a total time complexity of O(n) for the partitioning function. The medianOfMedians function is also linear in time. The remaining operations in the loop are constant time. Assuming that the median of medians yields good pivots, the total time of the entire algorithm is approximately O(n + n/2 + n/4 + n/8 ...) = O(n).
Since comments might get deleted, I'm turning my own comments into a coherent answer. Contrary to the original question, I'm using indices 1 through n, avoiding the a0 originally used. So this is consistent one-based indexing using inclusive indices.
Assume for the moment that b_i are the coefficients from your input, but in sorted order, so b_i ≤ b_{i+1}. As you essentially already wrote, if b_i ≤ x ≤ b_{i+1} then the result is i ⋅ x + b_{i+1} + ⋯ + b_n, since the first i terms will use the x and the other terms will use the b_j. Solving for x you get x = (y − b_{i+1} − ⋯ − b_n) / i, and putting that back into your inequality you have i ⋅ b_i ≤ y − b_{i+1} − ⋯ − b_n ≤ i ⋅ b_{i+1}. Concentrating on one of the inequalities, you want the largest i such that
i ⋅ b_i ≤ y − b_{i+1} − ⋯ − b_n (subsequently called "the inequality")
But in order to make this work on unsorted ai, you'd need something similar to the median of medians. That is an algorithm which achieves O(n) guaranteed worst-case behavior for the problem of selecting a median, where the typical quickselect would take O(n²) in the worst case although it usually does quite well in practice.
Actually your problem is not that different from quickselect. You can pick a pivot coefficient, and split the remainder into larger and smaller values. Then you evaluate the inequality for the pivot element. If it is satisfied, you recurse into the list of larger elements, otherwise you recurse into the list of smaller elements, until at some point you have two adjacent elements, one which satisfies the inequality and one which does not.
This is O(n²) in the worst case, since you might need O(n) recursive calls, each of them taking O(n) time to process its input. Just like the O(n²) quickselect itself, this is suboptimal. The median-of-medians algorithm shows that the selection problem can indeed be solved in O(n). So we either need to find a similar solution here, reformulate this problem in terms of finding the median, or write some algorithm which makes use of the median in a reasonable way.
Actually Nico Schertler found a way to achieve that last option: Take the algorithm I outlined above, but choose the pivot element to be the median. That way you can guarantee that each recursive call will process at most half as much input as the previous call. Since the median of medians itself is O(n) this can be done without exceeding the O(n) bound for each recursive call.
So in pseudocode it's like this (using inclusive indices throughout):
# f: Process whole problem with coefficients a_1 through a_n
f(y, a, n) := begin
if y < (sum of a_i for i from 1 through n): # O(n)
throw Error "Cannot satisfy equation" # Or omit check and risk division by zero
return g(a, 1, n, y) # O(n)
end
# g: Recursively process part of the problem, namely a_l through a_r
# Precondition: we know inequality holds for i = l - 1 and fails for i = r + 1
# a: the array as provided to f; will get modified in place
# l: left index (inclusive)
# r: right index (inclusive)
# y: (original y) - (sum of a_j for j from r + 1 through n)
g(a, l, r, y) := begin # process a_l through a_r O(r-l)
if r < l: # inequality holds in r but fails in l O(1)
return y / r # compute x for the case of i = r O(1)
m = median(a, l, r) # computed using median of medians O(r-l)
i = floor((l + r) / 2) # index of median, with same tie breaks O(1)
partition(a, l, r, m) # so a_l…a_(i-1) ≤ a_i=m ≤ a_(i+1)…a_r O(r-l)
rhs = y - (sum of a_j for j from i + 1 to r) # O((r-l)/2)
if i * a_i ≤ rhs: # condition holds, check larger i
return g(a, i + 1, r, y) # recurse in right half of list O((r-l)/2)
else: # condition fails, check smaller i
return g(a, l, i - 1, rhs - m) # recurse in left half of list O((r-l)/2)
end
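For completeness, here is a runnable Python sketch of the same recursion (my own translation, with made-up function names, operating on throwaway lists instead of in-place index ranges). It uses statistics.median_low, which sorts internally, so this particular version is O(n log n); substituting a true linear-time median-of-medians selection is what gives the O(n) bound discussed above.

    from statistics import median_low

    def solve(a, y):
        # Solve sum(max(a_i, x) for a_i in a) = y; assumes a is non-empty.
        if y < sum(a):
            raise ValueError("no solution: y must be at least sum(a)")
        return _solve(list(a), 0, y)

    def _solve(vals, below_count, y_adj):
        # vals:        the still-active coefficients
        # below_count: how many coefficients are already known to lie below them
        # y_adj:       y minus the sum of coefficients known to lie above them
        if not vals:
            return y_adj / below_count
        p = median_low(vals)                          # pivot: an actual element
        below = [v for v in vals if v < p]
        above = [v for v in vals if v > p]
        eq = len(vals) - len(below) - len(above)      # copies of the pivot
        i = below_count + len(below) + eq             # coefficients <= p
        rhs = y_adj - sum(above)
        if i * p <= rhs:                              # "the inequality" holds
            return _solve(above, i, y_adj)            # look for a larger i
        else:
            return _solve(below, below_count, rhs - eq * p)  # look for a smaller i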
It's all in the title. Suppose X is an array of n floats. The empirical CDF is the function (of t):
F_n(t) = (1/n) * sum_{i=1..n} 1{X_i <= t}
This has to be computed for t_1 < t_2 < ... < t_m (i.e. for m different, sorted values of t). My question is: what is the numerical complexity of computing this? I think O(n log(n)) + O(m log(n)) [sort the array, then perform m binary searches, one for each value of t],
but I may be naive. Can anyone confirm?
Edit:
Sorry for the mess. While writing the question, I realized that I was imposing some constraints that are not in the original problem. I respond to Yves's question below.
The Xi are not sorted.
The t_j are sorted and equi-spaced.
m is smaller than n, but not by orders of magnitude: typically m ~ n/4.
The given expression, a sum of N 0/1 terms, is clearly O(N).
UPDATE:
If the Xi are presorted, the function is trivially CDFi = CDF(Xi) = i/N, and the computation is in a way O(0)!
If the Xi are unsorted, you'll need to sort first in O(N.Log(N)), unless the range of the variable allows a faster sorting such as Counting sort.
If you only need to evaluate the CDF at a small number of points, say K of them, then you can consider using the naïve summation, as K.N can beat N.Log(N).
UPDATE: (second change by the OP)
Else, sort the Xi if necessary and sort the tj if necessary. Then a single linear pass will suffice. Total complexity will be one of:
O(n.Log(n) + m.Log(m))
O(n.Log(n) + m)
O(n + m.Log(m))
O(n + m).
If m < Log(n) and the Xi are unsorted, use the naïve formula. Complexity O(m.n).
Possibly there could be better options when m>n.
UPDATE: final specs: Xi unsorted, Tj sorted, m < n.
The solution I would choose is as follows:
1) Sort the Xi.
2) "Merge" the sorted Xi and Tj. This means, progress simultaneously in the X and T lists, keeping two running indexes; make sure to always increment the index that causes the shortest move; use CDF(Tj)=i/n. This is a linear process. (Very close to a merge in mergesort.)
Global complexity is O(n.Log(n)), the merging term O(n) being absorbed in the former.
UPDATE: uniform sampling.
When the Tj values are equi-spaced, say Tj = T0 + D.j, you can use a histogram approach.
Allocate an array of m+1 counters, initially 0. For every Xi, compute a bin index as Floor((Xi - T0) / D). Clamp negative values to 0 and values larger than m to m. Increment that bin. In the end, every bin will tell you how many X values are in range [Tj, Tj+1[.
Compute the prefix sum of the counters. They will now tell you how many X values are smaller than Tj+1, so CDF(Tj+1) = Counter[j]/n.
[Caution, this is an unchecked sketch, can be wrong in details.]
Total computation will take n bin incrementations followed by a prefix sum on m elements, i.e. O(n) operations.
# Input data
X= [0.125, 6, 3.25, 9, 1.4375, 6, 3.125, 7]
n= len(X)
# Sampling points (1 to 6)
T0= 1
DT= 1
m= 6
# Initialize the counters: O(m)
C= [0] * m
# Accumulate the histogram: O(n)
for x in X:
i= max(0, int((x - T0) / DT))
if i < m:
C[i]+= 1
# Compute the prefix sum: O(m)
S= 0
for i in range(m - 1):
C[i + 1]+= C[i]
# Reduce: O(m)
for i in range(m):
C[i]/= float(n)
# Display
print "T=", C
T= [0.25, 0.25, 0.5, 0.5, 0.5, 0.75]
A CDF Fn(t) is always a non-decreasing function in [0..1]. Therefore I assume your notation is saying to count the number of elements Xi <= t and return that count divided by n.
Thus if t is very large, you have n/n = 1. For very small, it's 0/n = 0 as we'd expect.
This is a poor definition of an empirical CDF. See, for example, Law, Averill M., Simulation & Modeling, 4th ed., p. 301 for some more advanced ideas.
The simplest efficient way to compute your function (given that m, the number of Fn(t) values you need, is unknown) is first to sort the inputs Xi. This requires O(n log n) time, but needs to be done only once no matter how many t values you're processing.
Let's call the sorted values Yi. Finding the count of Yi values <= t is the same as finding the index i such that Yi <= t < Yi+1. This can be done by binary search in O(log n) time for a given value of t. Divide by n and you have the Fn(t) value required. Of course you can repeat this m times to get the job done in O(m log n) time.
However you say your special case is m presorted values of t_j. You can find all the i values with a single pass over the Yi and simultaneously over the t_j, in the fashion of the merge operation in mergesort. With this you find all the answers in O(m + n) time.
Putting this together with the sorting cost, you have O(m + n + n log n) = O(m + n log n).
Note this is always faster than using the binary search lookup m times, O(n log n + m log n) = O((m + n) log n).
The only case you'd want to skip the presorting is when m < O(log n). This is because with no presorting, processing all the t_j needs O(mn) time - you must touch all n elements to count the number <= t_j. Consequently, if m < O(log n), then skipping the presort leads to less than O(n log n), i.e. asymptotically faster than the presort method.
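Here is a short Python sketch of the sort-then-merge approach (my own illustration; the function name is made up, and the query points are assumed to be given in ascending order):

    def ecdf_at_points(x, ts):
        # Empirical CDF of x evaluated at the sorted query points ts.
        # Sort once (O(n log n)), then a single O(n + m) merging pass.
        xs = sorted(x)
        n = len(xs)
        result = []
        i = 0                          # number of xs values seen so far that are <= t
        for t in ts:                   # ts must be sorted ascending
            while i < n and xs[i] <= t:
                i += 1
            result.append(i / n)
        return result

For example, ecdf_at_points([3, 1, 2], [0.5, 1.5, 2.5, 3.5]) gives [0.0, 0.333..., 0.666..., 1.0].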
I'm reading Introduction to Algorithms book, second edition, the chapter about Medians and Order statistics. And I have a few questions about randomized and non-randomized selection algorithms.
The problem:
Given an unordered array of integers, find i'th smallest element in the array
a. The Randomized_Select algorithm is simple. But I cannot understand the math that explains its running time. Is it possible to explain it without deep math, in a more intuitive way? To me it seems that it should work in O(n log n), and in the worst case it should be O(n^2), just like quicksort. On average, randomizedPartition returns a point near the middle of the array, the array is divided in two on each call, and the next recursive call processes only half of the array. RandomizedPartition costs (p - r + 1) <= n, so we have O(n log n). In the worst case it would choose the maximum element of the array every time, dividing the array into parts of size (n - 1) and 0 at each step. That's O(n^2).
The next one (the Select algorithm) is harder to understand than the previous one:
b. What is its difference compared to the previous one? Is it faster on average?
c. The algorithm consists of five steps. In the first one we divide the array into n/5 parts, each with 5 elements (except possibly the last one). Then each part is sorted using insertion sort and we select the 3rd element (the median) of each. Because we have sorted these elements, we can be sure that the previous two are <= this pivot element and the last two are >= it. Then we need to select the middle element among these medians. The book states that we recursively call the Select algorithm on these medians. How can we do that? In the Select algorithm we are using insertion sort, and if we swap two medians, would we need to swap all four (or even more, at deeper levels) elements that are "children" of each median? Or do we create a new array that contains only the previously selected medians and search for the median among them? If so, how can we put them back into the original array, given that we changed their order previously?
The other steps are pretty simple and look like in the randomized_partition algorithm.
The randomized select runs in expected O(n). Look at this analysis.
Algorithm:
Randomly choose an element xj
Split the set into a "lower than" set L and a "bigger than" set B
If the size of the "lower than" set is i - 1, we found it: xj is the i'th smallest
If the size is bigger, then look up in L
Otherwise look up in B
The total cost is the sum of :
The cost of splitting the array of size n
The cost of lookup in L or the cost of looking up in B
Edited: I tried to restructure my post
You can notice that:
We always continue in the set with the greater number of elements
The number of elements in this set is n - rank(xj)
1 <= rank(xj) <= n, so 0 <= n - rank(xj) <= n - 1
The randomness of the element xj directly affects the randomness of the number of elements which are greater than xj (and of those which are smaller than xj)
If xj is the element chosen, then you know that the cost is O(n) + cost(n - rank(xj)). Let's call rank(xj) = rj.
To give a good estimate we need to take the expected value of the total cost, which is
T(n) = E(cost) = sum over each possible xj of p(xj) * (O(n) + T(n - rank(xj)))
xj is random. After this it is pure math.
We obtain:

    T(n) = 1/n * ( O(n) + sum over all values of rj for which we continue of (O(n) + T(n - rj)) )

where the first O(n) term is the case rj = i (we stop right after partitioning). Written with explicit bounds:

    T(n) = 1/n * ( O(n) + sum_{1 <= rj <= n, rj != i} (O(n) + T(n - rj)) )

Here you can change variables, vj = n - rj:

    T(n) = 1/n * ( O(n) + sum_{0 <= vj <= n - 1, vj != n - i} (O(n) + T(vj)) )

We pull the O(n) terms out of the sum; there are at most n of them, so together they contribute O(n^2):

    T(n) = 1/n * ( O(n) + O(n^2) + sum_{0 <= vj <= n - 1, vj != n - i} T(vj) )

Dividing the terms outside the sum by n:

    T(n) = O(1) + O(n) + 1/n * sum_{0 <= vj <= n - 1, vj != n - i} T(vj)
Check the link on how this is computed.
For the non-randomized version :
You say yourself:
In avg randomizedPartition returns near middle of the array.
That is exactly why the randomized algorithm works, and that is exactly what is used to construct the deterministic algorithm. Ideally you would pick the pivot deterministically so that it produces a good split, but the best value for a good split (the median) is already the thing we are trying to compute! So at each step we settle for a value which is good enough: at least 3/10 of the array below the pivot and at least 3/10 of the array above. To achieve this, the algorithm splits the original array into groups of 5 at each step, and the group size 5 is again a mathematical choice.
I once created an explanation for this (with diagram) on the Wikipedia page for it... http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm
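To make the structure of the five-step algorithm concrete, here is a compact Python sketch of the deterministic select (my own illustration, not the book's exact pseudocode):

    def select(a, k):
        # Returns the k'th smallest element of a (k is 0-based); worst-case O(n).
        a = list(a)
        if len(a) == 1:
            return a[0]
        # 1. Split into groups of 5 and take each group's median
        #    (sorting 5 elements is O(1) per group).
        groups = [a[i:i + 5] for i in range(0, len(a), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        # 2. Recursively select the median of the medians as the pivot.
        pivot = select(medians, len(medians) // 2)
        # 3. Partition around the pivot and recurse into only one side.
        lower = [v for v in a if v < pivot]
        upper = [v for v in a if v > pivot]
        equal = len(a) - len(lower) - len(upper)
        if k < len(lower):
            return select(lower, k)
        if k < len(lower) + equal:
            return pivot
        return select(upper, k - len(lower) - equal)

(This sketch copies sublists for clarity; the textbook version partitions in place, which is part of what keeps the constant factors reasonable.)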