Paredes and Navarro state that
m + k log m = O(m + k log k)
This gives an immediate "tighter-looking" bound for incremental sorting. That is, if a partial or incremental sorting algorithm is O(m + k log m), then it is automatically O(m + k log k), where the k smallest elements are sorted from a set of size m. Unfortunately, their explanation is rather difficult for me to understand. Why does it hold?
Specifically, they state
Note that m + k log m = O(m + k log k), as they can differ only
if k = o(m^α) for any α > 0, and then m dominates k log m.
This seems to suggest they're talking about k as a function of m along some path, but it's very hard to see how k = o(m^α) plays into things, or where to place the quantifiers in their statement.
There are various ways to define big-O notation for multi-variable functions, which would seem to make the question difficult to approach. Fortunately, it doesn't actually matter exactly which definition you pick, as long as you make the entirely reasonable assumption that m > 0 and k >= 1. That is, in the incremental sorting context, you assume that you need to obtain at least the first element from a set with at least one element.
Theorem
If m and k are real numbers, m > 0, and k >= 1, then m + k log m <= 2(m + k log k).
Proof
Suppose for the sake of contradiction that
m + k log m > 2(m + k log k)
Rearranging terms,
k log m - 2k log k > m
By the power property for logarithms (2 log k = log(k^2)),
k log m - k (log (k^2)) > m
By the quotient property for logarithms,
k (log (m / k^2)) > m
Dividing by k (which is positive),
log (m / k^2) > m/k
Since k >= 1, k^2 >= k, so (since m >= 0) m / k >= m / k^2. Thus
log (m / k^2) > m / k^2
The logarithm of a number can never exceed that number, so we have reached a contradiction.
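As a quick sanity check, here is a small Python snippet that evaluates both sides of the inequality from the theorem on a grid of sample points; the base-2 logarithm and the particular sample values are arbitrary choices made for illustration.

import math

# Sanity check of m + k*log2(m) <= 2*(m + k*log2(k)) for m > 0 and k >= 1.
for m in [0.5, 2.0, 10.0, 100.0, 1e6]:
    for k in [1.0, 2.0, 5.0, 50.0, 1e4]:
        lhs = m + k * math.log2(m)
        rhs = 2 * (m + k * math.log2(k))
        assert lhs <= rhs, (m, k, lhs, rhs)
print("inequality held at every sampled (m, k)")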
Related
For n/2 + 5 log n, I would have thought the lower-order terms of 5 and 2 would be dropped, thus leaving n log n.
Where am I going wrong?
Edit:
Thank you, I believe I can now correct my mistake:
O(n/2 + 5 log n) = O(n/2 + log n) = O(n + log n) = O(n)
n/2 + 5 log n <= 2n, for all n >= 16 (c = 2, n0 = 16)
Let us define the function f as follows for n >= 1:
f(n) = n/2 + 5*log(n)
This function is not O(log n); it grows more quickly than that. To show this, we can show that for any constant c > 0, there is a choice of n0 such that for n > n0, f(n) > c * log(n). For 0 < c <= 5, this is trivial, since f(n) > 5*log(n) by definition. For c > 5, we get
n/2 + 5*log(n) > c*log(n)
<=> n/2 > (c - 5)*log(n)
<=> (1/(2(c - 5))) * n/log(n) > 1
We can now note that the expression on the LHS is monotonically increasing for n >= 3 and find the limit as n grows without bound using l'Hopital:
lim(n->infinity) (1/(2(c - 5))) * n/log(n)
= (1/(2(c - 5))) * lim(n->infinity) n/log(n)
= (1/(2(c - 5))) * lim(n->infinity) 1/(1/n)
= (1/(2(c - 5))) * lim(n->infinity) n
-> infinity
Using l'Hopital we find there is no limit as n grows without bound; the value of the LHS grows without bound as well. Because the LHS is monotonically increasing and grows without bound, there must be an n0 after which the value of the LHS exceeds the value 1, as required.
This all proves that f is not O(log n).
It is true that f is O(n log n). This is not hard to show at all: choose c = 5 + 1/2, and it is obvious that
f(n) = n/2 + 5*log(n) <= n*log(n)/2 + 5*n*log(n) = (5 + 1/2)*n*log(n) for all n >= 2.
However, this is not the best bound we can get for your function. Your function is actually O(n) as well. Choosing the same value for c as before, we need only notice that n > log(n) for all n >= 1, so
f(n) = n/2 + 5*log(n) <= n/2 + 5*n = (5 + 1/2)*n for all n >= 1
So, f is also O(n). We can show that f(n) is Omega(n) which proves it is also Theta(n). That is left as an exercise but is not difficult to do either. Hint: what if you choose c = 1/2?
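To make the growth rates concrete, here is a small Python illustration that prints f(n)/log(n), f(n)/(n*log(n)) and f(n)/n for increasing n; the base-2 logarithm and the sample values of n are arbitrary choices.

import math

def f(n):
    # f(n) = n/2 + 5*log2(n), as defined above (base-2 log assumed)
    return n / 2 + 5 * math.log2(n)

for n in [2, 16, 256, 4096, 2**20]:
    print(n,
          round(f(n) / math.log2(n), 1),        # diverges, so f is not O(log n)
          round(f(n) / (n * math.log2(n)), 4),  # tends to 0, so n log n is not tight
          round(f(n) / n, 3))                   # tends to 1/2, so f is Theta(n)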
It's neither O(log n) nor O(n*log n). It is O(n), because for larger values of n, log(n) is much smaller than n, so the logarithmic term is dropped.
Consider n = 10000: 5*log(10000) ≈ 46 (natural log), which is far less than n/2 = 5000.
I have an array of n random integers
I choose a random integer and partition by the chosen random integer (all integers smaller than the chosen integer will be on the left side, all bigger integers will be on the right side)
What will be the size of my left and right side in the average case, if we assume no duplicates in the array?
I can easily see that there is a 1/n chance that the array is split in half, if we are lucky. Additionally, there is a 1/n chance that the array is split so that the left side has length n/2 - 1 and the right side has length n/2 + 1, and so on.
Could we derive from this observation the "average" case?
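As a quick empirical check, here is a small Python simulation: with a uniformly random pivot and no duplicates, the left side takes each size 0, 1, ..., n-1 with equal probability, so its average size comes out close to (n-1)/2. The value of n and the number of trials are arbitrary choices.

import random

# Monte Carlo estimate of the average left-partition size for a random pivot.
n, trials = 101, 200_000
total_left = 0
for _ in range(trials):
    a = random.sample(range(10**6), n)            # n distinct integers
    pivot = random.choice(a)
    total_left += sum(1 for x in a if x < pivot)  # size of the left side
print("average left size:", total_left / trials, " vs (n-1)/2 =", (n - 1) / 2)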
You can probably find a better explanation (and certainly the proper citations) in a textbook on randomized algorithms, but here's the gist of average-case QuickSort, in two different ways.
First way
Let C(n) be the expected number of comparisons for a random permutation of 1...n. Since the expectation of the sum of the comparisons made by the two recursive calls equals the sum of their expectations, we can write a recurrence that averages over the n equally likely pivot choices:
C(0) = 0
C(n) = n − 1 + (1/n) * sum_{i=0}^{n−1} (C(i) + C(n−1−i))
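For concreteness, here is a short Python computation straight from this recurrence, evaluated exactly with fractions for a few small n (the cutoff N = 6 is an arbitrary choice).

from fractions import Fraction

# Evaluate the recurrence C(n) = n - 1 + (1/n) * sum(C(i) + C(n-1-i)) exactly.
N = 6
C = [Fraction(0)] * (N + 1)
for n in range(1, N + 1):
    C[n] = (n - 1) + sum(C[i] + C[n - 1 - i] for i in range(n)) / n
print(C[1:])   # values 0, 1, 8/3, 29/6, 37/5, 103/10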
Rather than pull the exact solution out of a hat (or peek at the second way), I'll show you how I'd get an asymptotic bound.
First, I'd guess the asymptotic bound. Obviously I'm familiar with QuickSort and my reasoning here is fabricated, but since the best case is O(n log n) by the Master Theorem, that's a reasonable place to start.
Second, I'd guess an actual bound: 100 n log (n + 1). I use a big constant because why not? It doesn't matter for asymptotic notation and can only make my job easier. I use log (n + 1) instead of log n because log n is undefined for n = 0, and 0 log (0 + 1) = 0 covers the base case.
Third, let's try to verify the inductive step. Assuming that C(i) ≤ 100 i log (i + 1) for all i ∈ {0, ..., n−1},
C(n) = n − 1 + (1/n) * sum_{i=0}^{n−1} (C(i) + C(n−1−i))             [by definition]
     = n − 1 + (2/n) * sum_{i=0}^{n−1} C(i)                          [by symmetry]
     ≤ n − 1 + (2/n) * sum_{i=0}^{n−1} 100 i log(i + 1)              [by the inductive hypothesis]
     ≤ n − 1 + (2/n) * integral_{0}^{n} 100 x log(x + 1) dx          [the sum is a lower Darboux sum of the increasing integrand]
     = n − 1 + (2/n) * (50 (n² − 1) log(n + 1) − 25 (n − 2) n)       [WolframAlpha FTW, I forgot how to integrate]
     = n − 1 + 100 (n − 1/n) log(n + 1) − 50 (n − 2)
     = 100 (n − 1/n) log(n + 1) − 49 n + 99.
Well that's irritating. It's almost what we want, but that + 99 messes things up a little bit. We can extend the base cases to n = 1 and n = 2 by inspection and then assume that n ≥ 3 to finish the bound:
C(n) ≤ 100 (n − 1/n) log(n + 1) − 49 n + 99
     ≤ 100 n log(n + 1) − 49 n + 99
     ≤ 100 n log(n + 1).      [since n ≥ 3 implies 49 n ≥ 99]
Once again, no one would publish such a messy derivation. I wanted to show how one could work it out formally without knowing the answer ahead of time.
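A quick numeric check of the resulting bound, computing C(n) from the recurrence with floating point and using the natural log to match the integral above (the cutoff N = 300 is arbitrary):

import math

# Confirm C(n) <= 100*n*log(n + 1) for the values computed from the recurrence.
N = 300
C = [0.0] * (N + 1)
for n in range(1, N + 1):
    C[n] = (n - 1) + sum(C[i] + C[n - 1 - i] for i in range(n)) / n
assert all(C[n] <= 100 * n * math.log(n + 1) for n in range(N + 1))
print("C(n) <= 100 n log(n+1) holds for all n up to", N)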
Second way
How else can we derive how many comparisons QuickSort does in expectation? Another possibility is to exploit the linearity of expectation by summing, over each pair of elements, the probability that those two elements are compared. What is that probability? Identifying elements by rank, a pair {i, j} with i < j is compared if and only if, in the deepest recursive call whose subarray still contains both i and j, either i or j is chosen as the pivot. This happens with probability 2/(j + 1 − i), since the pivot is equally likely to be i, j, or any of the j − i − 1 elements that fall between them. Therefore,
C(n) = sum_{i=1}^{n} sum_{j=i+1}^{n} 2/(j + 1 − i)
     = sum_{i=1}^{n} sum_{d=2}^{n+1−i} 2/d
     = sum_{i=1}^{n} 2 (H(n+1−i) − 1)      [where H(k) is the k-th harmonic number]
     = 2 (sum_{i=1}^{n} H(i) − n)
     = 2 ((n + 1) (H(n+1) − 1) − n).       [WolframAlpha FTW again]
Since H(n) is Θ(log n), this is Θ(n log n), as expected.
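As a cross-check, the closed form above agrees exactly with the recurrence from the first way; here is a short Python verification using exact fractions (the cutoff N = 30 is arbitrary).

from fractions import Fraction

def H(k):
    # k-th harmonic number, computed exactly
    return sum(Fraction(1, j) for j in range(1, k + 1))

N = 30
C = [Fraction(0)] * (N + 1)
for n in range(1, N + 1):
    C[n] = (n - 1) + sum(C[i] + C[n - 1 - i] for i in range(n)) / n
closed = [2 * ((n + 1) * (H(n + 1) - 1) - n) for n in range(N + 1)]
print(C == closed)   # True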
From the definition of Ω notation, this would imply that 2^(n) >= c * 2^(n + k). Taking the lg of both sides and simplifying, I see that n >= lg(c) + n + k. If I pick c = 1, n0 = 1, and k to be some negative constant, then I can see this is true. I am wondering whether this is a correct analysis, and whether, if I pick a positive k, it becomes false. Thanks for your help.
The definition of Ω requires that there exist a constant c > 0 such that 2^n ≥ c * 2^(n+k) for all sufficiently large n.
Clearly c = 2^(-k) (or any smaller positive value) satisfies this condition, since 2^(-k) * 2^(n+k) = 2^n; hence 2^n = Ω(2^(n+k)) for any constant k, positive or negative.
It is well known that Pascal's identity can be used to encode a combination of k elements out of n into a number from 0 to (n \choose k) - 1 (let's call this number a combination index) using a combinatorial number system. Assuming constant time for arithmetic operations, this algorithm takes O(n) time.†
I have an application where k ≪ n and an algorithm in O(n) time is infeasible. Is there an algorithm to bijectively assign a number between 0 and (n \choose k) - 1 to a combination of k elements out of n whose runtime is of order O(k) or similar? The algorithm does not need to compute the same mapping as the combinatorial number system, however, the inverse needs to be computable in a similar time complexity.
† More specifically, the algorithm computing the combination from the combination index runs in O(n) time. Computing the combination index from the combination works in O(k) time if you pre-compute the binomial coefficients.
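For reference, here is a minimal Python sketch of the O(k) direction mentioned in the footnote: ranking a combination, given as strictly increasing values c_1 < c_2 < ... < c_k, in the combinatorial number system. math.comb stands in for the pre-computed table of binomial coefficients.

from math import comb

def rank(combination):
    # Combinatorial number system: index = C(c_1, 1) + C(c_2, 2) + ... + C(c_k, k),
    # where the combination is listed as strictly increasing values c_1 < ... < c_k.
    # With pre-computed binomials this is O(k) arithmetic operations.
    return sum(comb(c, j) for j, c in enumerate(combination, start=1))

print(rank([0, 1, 2]))   # 0, the lexicographically smallest 3-combination
print(rank([2, 3, 4]))   # 9 = C(5, 3) - 1, the largest 3-combination of {0, ..., 4}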
This answer expands on an earlier comment.
For a given combinatorial index N, to find the k-th digit we need to find c_k such that (c_k \choose k) <= N and ((c_k + 1) \choose k) > N.
Set P(i,k) = i!/(i-k)!.
P(i, k) = i * (i-1) * ... * (i-k+1)
Substituting x = i - (k-1)/2:
P(i, k) = (x + (k-1)/2) * (x + (k-1)/2 - 1) * ... * (x - (k-1)/2 + 1) * (x - (k-1)/2)
        = (x^2 - ((k-1)/2)^2) * (x^2 - ((k-1)/2 - 1)^2) * ...
        = x^k - (sum_i ((k-2i-1)/2)^2) * x^(k-2) + O(x^(k-4))
        = x^k - O(x^(k-2))
so
P(i, k) = (i - (k-1)/2)^k - O(i^(k-2))
From the inequality above:
(c_k \choose k) <= N
P(c_k, k) <= N * k!
c_k ~= (N * k!)^(1/k) + (k-1)/2
I am not sure how large the O(c_k^(k-2)) part is; I suppose it does not influence things too much. If it is only of the order of the ratio (c_k + 1)/(c_k - k + 1), then the approximation is very good, because consecutive values differ by exactly that factor:
((c_k+1) \choose k) = (c_k \choose k) * (c_k + 1) / (c_k - k + 1)
I would try an algorithm something like this:
For a given k:
    precalculate 1!, 2!, ..., k!
For a given N:
    for i in (k, ..., 1):
        estimate c_i as (N * i!)^(1/i) + (i-1)/2
        (*) compare P(c_i, i) with N * i!
            if it is smaller, try c_i + 1
            if it is larger, try c_i - 1
        repeat (*) until P(c_i, i) <= N * i! < P(c_i + 1, i)
        N = N - P(c_i, i) / i!   (that is, N = N - (c_i \choose i))
If the approximation is good, the number of correction steps is much smaller than k, so finding one digit is O(k).
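Here is a rough Python sketch of that idea; it is only an illustration under the assumptions above, with math.comb and math.factorial standing in for pre-computed tables, and a floating-point estimate that is then corrected by integer comparisons.

from math import comb, factorial

def unrank(N, k):
    # Recover the digits c_k > ... > c_1 of the combinatorial number system for N,
    # starting from the estimate c_i ~ (N * i!)**(1/i) + (i - 1)/2 and then
    # correcting it by comparing binomial coefficients against the remaining N.
    digits = []
    for i in range(k, 0, -1):
        c = max(i - 1, round((N * factorial(i)) ** (1.0 / i) + (i - 1) / 2))
        while comb(c, i) > N:          # estimate too high: step down
            c -= 1
        while comb(c + 1, i) <= N:     # estimate too low: step up
            c += 1
        digits.append(c)
        N -= comb(c, i)
    return digits                      # [c_k, c_{k-1}, ..., c_1]

# Round-trip check: re-rank each recovered digit list and compare with N.
k, n = 3, 10
for N in range(comb(n, k)):
    digits = unrank(N, k)
    assert sum(comb(c, i) for c, i in zip(digits, range(k, 0, -1))) == N
print("round trip OK for all", comb(n, k), "indices")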
Can I say that:
log n + log (n-1) + log (n-2) + ... + log (n-k) = Theta(k * log n)?
A formal way to write the above:
sum_{i=0}^{k} log(n - i) = Theta(k * log n)?
If the above statement is right, how can I prove it?
If it is wrong, how can I express it (the left side of the equation, of course) as an asymptotic run time function of n and k?
Thanks.
Denote:
LHS = log(n) + log(n-1) + ... + log(n-k)
RHS = k * log n
Note that:
LHS = log(n * (n-1) * ... * (n-k)) = log(a polynomial in n of degree k+1)
so it can be written as:
(k+1) * log(n * (1 + terms that go to 0 in the limit))
If we divide this by RHS:
(k+1) * log(n * (1 + terms that go to 0 in the limit)) / (k * log n)
then in the limit (as n -> infinity) we get:
(k+1)/k = 1 + 1/k
So for constant k both sides grow equally fast, and LHS = Theta(RHS).
Wolfram Alpha seems to agree.
When n is constant, the terms that previously vanished in the limit no longer disappear; instead you get:
((k+1) * some constant) / (k * (some other constant))
which is:
(1 + 1/k) * (another constant number). So again LHS = Theta(RHS).
When proving Θ, you want to prove O and Ω.
The upper bound is proven easily:
log(n(n-1)...(n-k)) ≤ log(n^(k+1)) = (k+1) log n = O(k log n)
For the lower bound, if k ≥ n/2, then the product contains at least n/2 factors greater than n/2:
log(n(n-1)...(n-k)) ≥ (n/2) log(n/2) = Ω(n log n) = Ω(k log n)    [since k ≤ n]
and if k ≤ n/2, all factors are at least n/2:
log(n(n-1)...(n-k)) ≥ log((n/2)^k) = k log(n/2) = Ω(k log n)
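A small numeric check of these two bounds, with a handful of arbitrary (n, k) pairs satisfying k ≤ n/2 and base-2 logarithms assumed:

import math

# Upper bound (k+1)*log2(n) and lower bound k*log2(n/2) for the sum of logs.
for n, k in [(10, 2), (100, 3), (1000, 500), (10**6, 10)]:
    s = sum(math.log2(n - i) for i in range(k + 1))
    upper = (k + 1) * math.log2(n)
    lower = k * math.log2(n / 2)
    assert lower <= s <= upper, (n, k, lower, s, upper)
print("bounds hold at every sampled (n, k)")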