Big O constant c and n for algorithms

I suspect that a particular example in my book is wrong, but am I correct?
Example: 3log n + 2 is O(log n)
Justification: 3log n + 2 <= 5 log n, for n>=2.
I understand how they get c = 5 (they take the coefficients and add them up). But I don't see how, for n = 2 for instance, the left function is smaller than the right one.
If I fill in 2 in n:
3 log 2 + 2 = 2.903 and 5 log 2 = 1.5051.
Only from n = 10 onward is the left function actually smaller than or equal to the right one.
Is my assumption right?

The log in this case is base 2, not base 10.
3log(2) + 2 = 3 + 2 = 5
5log(2) = 5
and it is true that 5 <= 5

To expand a bit on Peter's answer, the base of the logarithm is typically assumed to be base 2 when analyzing run times. It's not necessary to specify the base in the O() notation, since logarithms of different bases differ from each other only by a constant factor. (In this example, log_10(x) / log_2(x) = ln(2)/ln(10) ≈ 0.30103.) This constant factor is not relevant to the asymptotic run time.
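To make this concrete, here is a minimal C sketch (my own illustration, assuming base-2 logs as discussed above) that tabulates both sides of the book's inequality:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Check 3*log2(n) + 2 <= 5*log2(n) for a few values of n >= 2. */
    for (int n = 2; n <= 32; n *= 2) {
        double lhs = 3 * log2((double)n) + 2;
        double rhs = 5 * log2((double)n);
        printf("n=%2d  3*log2(n)+2 = %6.3f  5*log2(n) = %6.3f  holds=%s\n",
               n, lhs, rhs, lhs <= rhs ? "yes" : "no");
    }
    return 0;
}

At n = 2 both sides equal 5, and the gap only widens from there.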

Related

Why is runtime of Fibonacci θ(1.6^N)?

Why is the runtime closer to O(1.6^N) and not O(2^N)? I think it has something to do with the call stack, but it's still a little unclear to me.
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
For starters, remember that big-O notation always provides an upper bound. That is, if a function is O(n), it's also O(n^2), O(n^3), O(2^n), etc. So in that sense, you are not incorrect if you say that the runtime is O(2^n). You just don't have a tight bound.
To see where the tight bound of Θ(φ^n) comes from, it might help to look at how many recursive calls end up getting made when evaluating Fibonacci(n). Notice that
Fibonacci(0) requires one call, the call to Fibonacci(0).
Fibonacci(1) requires one call, the call to Fibonacci(1).
Fibonacci(2) requires three calls: one to Fibonacci(2), and one each to Fibonacci(0) and Fibonacci(1).
Fibonacci(3) requires five calls: one to Fibonacci(3), then the three calls generated by invoking Fibonacci(2) and the one call generated by invoking Fibonacci(1).
Fibonacci(4) requires nine calls: one to Fibonacci(4), then five from the call to Fibonacci(3) and three from the call to Fibonacci(2).
More generally, the pattern seems to be that if you're calling Fibonacci(n) for some n ≥ 2, the number of calls made is one (for the call itself), plus the number of calls needed to evaluate Fibonacci(n-1) and Fibonacci(n-2). If we let L_n denote the number of calls made, this means that we have
L_0 = L_1 = 1
L_{n+2} = 1 + L_n + L_{n+1}.
So now the question is how fast this sequence grows. Evaluating the first few terms gives us
1, 1, 3, 5, 9, 15, 25, 41, ...
which definitely gets bigger and bigger, but it's not clear how much bigger that is.
Something you might notice here is that L_n kinda sorta ish looks like the Fibonacci numbers. That is, it's defined in terms of the sum of the two previous terms, but it has an extra +1 term. So maybe we might want to look at the difference between L_n and F_n, since that might show us how much "faster" the L series grows. You might notice that the first two values of the L series are 1, 1 and the first two values of the Fibonacci series are 0, 1, so we'll shift things over by one term to make things line up a bit more nicely:
L(n) 1 1 3 5 9 15 25 41
F(n+1) 1 1 2 3 5 8 13 21
Diff: 0 0 1 2 4 7 12 20
And wait, hold on a second. What happens if we add one to each term of the difference? That gives us
L(n) 1 1 3 5 9 15 25 41
F(n+1) 1 1 2 3 5 8 13 21
Diff+1 1 1 2 3 5 8 13 21
Whoa! Looks like L_n - F_{n+1} + 1 = F_{n+1}. Rearranging, we see that
L_n = 2F_{n+1} - 1.
Wow! So the actual number of calls made by the Fibonacci recursive function is very closely related to the actual value returned. So we could say that the runtime of the Fibonacci function is Θ(F_{n+1}) and we'd be correct.
But now the question is where φ comes in. There's a lovely mathematical result called Binet's formula that says that F_n = Θ(φ^n). There are many ways to prove this, but they all essentially boil down to the observation that
the Fibonacci numbers seem to grow exponentially quickly;
if they grow exponentially with a base of x, then F_{n+2} = F_n + F_{n+1} can be rewritten as x^2 = x + 1; and
φ is a solution to x^2 = x + 1.
From this, we can see that since the runtime of Fibonacci is Θ(F_{n+1}), the runtime is also Θ(φ^{n+1}) = Θ(φ^n).
The number φ = (1+sqrt(5))/2 is characterized by the two following properties:
1. φ >= 1
2. φ^2 = φ + 1.
Multiplying equation 2 by φ^{n-1}, we get
3. φ^{n+1} = φ^n + φ^{n-1}
Since f(0) = 0, f(1) = 1 and f(n+1) = f(n) + f(n-1), using 1 and 3 it is easy to see by induction on n that f(n) <= φ^n.
Thus f(n) is O(φ^n).
A similar inductive argument shows that
f(n) >= φ^{n-3} = φ^{-3}·φ^n (n >= 1),
and thus f(n) = Θ(φ^n).
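If you want to see the identity L_n = 2F_{n+1} - 1 in action, here is a small C sketch (my own illustration; the global call counter is an addition for demonstration purposes) that counts the actual calls made by the naive recursion:

#include <stdio.h>

static long calls; /* number of invocations of fib() */

int fib(int n) {
    calls++;
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    for (int n = 0; n <= 15; n++) {
        calls = 0;
        int fn = fib(n);
        /* Compute F(n+1) iteratively for the identity check. */
        long a = 0, b = 1;   /* F(0), F(1) */
        for (int i = 0; i < n + 1; i++) { long t = a + b; a = b; b = t; }
        /* a now holds F(n+1) */
        printf("n=%2d  F(n)=%4d  calls=%6ld  2*F(n+1)-1=%6ld\n",
               n, fn, calls, 2 * a - 1);
    }
    return 0;
}

The two rightmost columns agree for every n, matching the derivation above.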

Recurrence relation on Factorial

I was studying recurrence by a slide found at (slide 7 and 8):
http://www.cs.ucf.edu/courses/cop3502h/spring2012/Lectures/Lec8_RecurrenceRelations.pdf
I just can't accept (probably I'm not seeing it right) that the recurrence equation for factorial is:
T(n) = T(n-1)+2
T(1) = 1
when considering the number of operations ("*" and "-") of the function :
int factorial(int n) {
    if (n == 1)
        return 1;
    return n * factorial(n - 1);
}
If we use n = 5 we will get 9 from the formula above, while the real number of subtractions and multiplications is 8.
My teacher also told us that if we analyze only the number of "*" it would be:
T(n) = T(n-1)+1.
Again, if I use n = 5 I get 5, but if you work it out on paper you get 4 multiplications.
I also checked on the forum, but this question is even more confusing:
Recurrence Relation
Can anyone help me understand this? Thanks.
If we use n = 5 we will get 9 from the formula above, while the real number of subtractions and multiplications is 8.
It seems that the slides are counting the number of operations, not just subtractions and multiplications. In particular, the return statement is counted as one operation. (The slides say, "if it’s the base case just one operation to return.")
Thus, the real number of subtractions and multiplications is 8, but the number of operations is 9. If n is 5, then, unrolling the recursion, we get 1 + 2 + 2 + 2 + 2 = 9 operations, which looks right to me.
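Here is a small C sketch (my own illustration) that applies the slides' counting convention: one operation for the base-case return, plus one "-" and one "*" per recursive step:

#include <stdio.h>

static int ops; /* counts "*", "-", and the base-case return */

int factorial(int n) {
    if (n == 1) {
        ops += 1;        /* base case: just one operation to return */
        return 1;
    }
    ops += 2;            /* one "-" and one "*" per recursive step */
    return n * factorial(n - 1);
}

int main(void) {
    ops = 0;
    printf("5! = %d, ops = %d\n", factorial(5), ops);
    return 0;
}

This prints ops = 9, matching T(5) = 2*5 - 1 = 9 from the recurrence.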

Find theta notation of the following while loop

I have the homework question:
Find a theta notation for the number of times the statement x = x + 1 is executed. (10 points).
i = n
while (i >= 1)
{
    for j = 1 to n
    {
        x = x + 1
    }
    i = i/2
}
This is what I have done:
OK, first let's make it easier. We will first find the order of growth of:
while (i >= 1)
{
    x = x + 1
    i = i/2
}
which has order of growth O(log(n)) (log base 2, to be precise).
The other, inner for loop will execute n times per iteration of the while loop, therefore the algorithm should be of order:
O(log(n)*n)
The part where I get confused is that I am supposed to find theta notation, NOT big-O. I know that theta notation is supposed to bound the function on the upper and lower limit. Will the correct answer be Theta(log(n)*n)?
I found an answer at this link, but I don't know how they get to that answer. Why do they claim that the answer is Theta(n)?
You should now prove it is also Omega(nlogn).
I won't show exactly how, since it is homework, but it follows the same principles you used to show O(nlogn). You need to show [informally] that the asymptotic behavior of the function grows at least as fast as nlogn. [For big O you show it grows at most at the rate of nlogn.]
Remember that if a function is both O(nlogn) and Omega(nlogn), it is Theta(nlogn) [and vice versa].
P.S. Your hunch is true; it is easy to show the function is not O(n), and thus it is not Theta(n).
P.S. 2: I think the author of the other answer confused it with a different program:
i = n
while (i >= 1)
{
    for j = 1 to i   // NOTE: i instead of n here!
    {
        x = x + 1
    }
    i = i/2
}
The above program is indeed Theta(n), but it is different from the one you have provided.
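To see the difference between the two programs empirically, here is a rough C sketch (my own illustration) that counts how many times x = x + 1 runs in each:

#include <stdio.h>

int main(void) {
    long n = 1024, countA = 0, countB = 0;

    /* Program A: inner loop always runs n times -> Theta(n log n) */
    for (long i = n; i >= 1; i /= 2)
        for (long j = 1; j <= n; j++)
            countA++;

    /* Program B: inner loop runs i times -> Theta(n) */
    for (long i = n; i >= 1; i /= 2)
        for (long j = 1; j <= i; j++)
            countB++;

    printf("n=%ld  A=%ld (~n log n)  B=%ld (~2n)\n", n, countA, countB);
    return 0;
}

For n = 1024 this prints A = 11264 (which is n*(lg n + 1)) and B = 2047 (which is about 2n).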
Rephrasing your code fragment more formally, so that it can be represented easily using sigma notation:
for (i = n; i >= 1; i = i/2) {
    for (j = 1; j <= n; j++) {
        x = x + 1;   // instruction of cost 'c'
    }
}
We obtain:
Σ_{k=0}^{⌊lg n⌋} Σ_{j=1}^{n} c = c·n·(⌊lg n⌋ + 1) = Θ(n·lg n)
As @amit mentions, I already have the upper bound of the function, and that is big-O: it is O(n*lg n). If I plot a table of that function I get something like:
n    n*lg n
1 0
2 2
3 4.754887502
4 8
5 11.60964047
6 15.509775
7 19.65148445
8 24
9 28.52932501
10 33.21928095
Because that is big-O, the real function will be bounded by those values. In other words, the real values should be less than the values in the table. For example, taking the point n = 9, we know that the answer should be less than or equal to 28.52932501 by looking at the table.
So now we are missing Omega, the other bound. I think that the lower-bound function should be Omega(n), and then we get the table:
n Omega(n)
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
.......
So that will be the other bound. If we again take the point n = 9, that gives us 9. That means that our real function should give us a value greater than or equal to 9; based on our big-O function we also know that it should be less than or equal to 28.52932501.
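As a sanity check on these tables, here is a small C sketch (my own illustration) that prints the actual count next to n*lg n. The count comes out to n*(floor(lg n) + 1), which supports amit's point that the tight lower bound is Omega(n lg n), not just Omega(n):

#include <math.h>
#include <stdio.h>

int main(void) {
    for (long n = 1; n <= 10; n++) {
        long count = 0;
        for (long i = n; i >= 1; i /= 2)    /* outer: floor(lg n)+1 passes */
            for (long j = 1; j <= n; j++)   /* inner: n per pass */
                count++;
        printf("n=%2ld  actual=%3ld  n*lg n=%6.2f\n",
               n, count, n * log2((double)n));
    }
    return 0;
}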

What would cause an algorithm to have O(log n) complexity?

My knowledge of big-O is limited, and when log terms show up in the equation it throws me off even more.
Can someone maybe explain to me in simple terms what a O(log n) algorithm is? Where does the logarithm come from?
This specifically came up when I was trying to solve this midterm practice question:
Let X(1..n) and Y(1..n) contain two lists of integers, each sorted in nondecreasing order. Give an O(log n)-time algorithm to find the median (or the nth smallest integer) of all 2n combined elements. For example, if X = (4, 5, 7, 8, 9) and Y = (3, 5, 8, 9, 10), then 7 is the median of the combined list (3, 4, 5, 5, 7, 8, 8, 9, 9, 10). [Hint: use concepts of binary search]
I have to agree that it's pretty weird the first time you see an O(log n) algorithm... where on earth does that logarithm come from? However, it turns out that there are several different ways that you can get a log term to show up in big-O notation. Here are a few:
Repeatedly dividing by a constant
Take any number n; say, 16. How many times can you divide n by two before you get a number less than or equal to one? For 16, we have that
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
Notice that this ends up taking four steps to complete. Interestingly, we also have that log2 16 = 4. Hmmm... what about 128?
128 / 2 = 64
64 / 2 = 32
32 / 2 = 16
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
This took seven steps, and log2 128 = 7. Is this a coincidence? Nope! There's a good reason for this. Suppose that we divide a number n by 2 a total of i times. Then we get the number n / 2^i. If we want to solve for the value of i where this value is at most 1, we get
n / 2^i ≤ 1
n ≤ 2^i
log2 n ≤ i
In other words, if we pick an integer i such that i ≥ log2 n, then after dividing n in half i times we'll have a value that is at most 1. The smallest i for which this is guaranteed is roughly log2 n, so if we have an algorithm that divides by 2 until the number gets sufficiently small, then we can say that it terminates in O(log n) steps.
An important detail is that it doesn't matter what constant you're dividing n by (as long as it's greater than one); if you divide by the constant k, it will take log_k n steps to reach 1. Thus any algorithm that repeatedly divides the input size by some fraction will need O(log n) iterations to terminate. Those iterations might take a lot of time and so the net runtime needn't be O(log n), but the number of steps will be logarithmic.
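A minimal C sketch of this repeated-halving count (my own illustration):

#include <stdio.h>

/* Count how many halvings it takes to drive n down to 1. */
int main(void) {
    for (int n = 2; n <= 1024; n *= 2) {
        int steps = 0;
        for (int v = n; v > 1; v /= 2)
            steps++;
        printf("n=%5d  halvings=%2d\n", n, steps); /* steps == log2(n) */
    }
    return 0;
}

Each doubling of n adds exactly one halving step.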
So where does this come up? One classic example is binary search, a fast algorithm for searching a sorted array for a value. The algorithm works like this:
If the array is empty, return that the element isn't present in the array.
Otherwise:
Look at the middle element of the array.
If it's equal to the element we're looking for, return success.
If it's greater than the element we're looking for:
Throw away the second half of the array.
Repeat
If it's less than the element we're looking for:
Throw away the first half of the array.
Repeat
For example, to search for 5 in the array
1 3 5 7 9 11 13
We'd first look at the middle element:
1 3 5 7 9 11 13
^
Since 7 > 5, and since the array is sorted, we know for a fact that the number 5 can't be in the back half of the array, so we can just discard it. This leaves
1 3 5
So now we look at the middle element here:
1 3 5
^
Since 3 < 5, we know that 5 can't appear in the first half of the array, so we can throw away the first half of the array to leave
5
Again we look at the middle of this array:
5
^
Since this is exactly the number we're looking for, we can report that 5 is indeed in the array.
So how efficient is this? Well, on each iteration we're throwing away at least half of the remaining array elements. The algorithm stops as soon as the array is empty or we find the value we want. In the worst case, the element isn't there, so we keep halving the size of the array until we run out of elements. How long does this take? Well, since we keep cutting the array in half over and over again, we will be done in at most O(log n) iterations, since we can't cut the array in half more than O(log n) times before we run out of array elements.
Algorithms following the general technique of divide-and-conquer (cutting the problem into pieces, solving those pieces, then putting the problem back together) tend to have logarithmic terms in them for this same reason - you can't keep cutting some object in half more than O(log n) times. You might want to look at merge sort as a great example of this.
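Here is one possible iterative C version of the binary search procedure described above (a sketch; the recursive version behaves the same way):

#include <stdio.h>

/* Iterative binary search; returns the index of target, or -1.
   Each iteration discards half of the remaining range, so it
   runs in O(log n) iterations. */
int binary_search(const int *a, int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == target)
            return mid;
        else if (a[mid] > target)
            hi = mid - 1;               /* throw away the back half */
        else
            lo = mid + 1;               /* throw away the front half */
    }
    return -1;
}

int main(void) {
    int a[] = {1, 3, 5, 7, 9, 11, 13};
    printf("index of 5: %d\n", binary_search(a, 7, 5)); /* prints 2 */
    return 0;
}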
Processing values one digit at a time
How many digits are in the base-10 number n? Well, if there are k digits in the number, then the leading digit sits in the 10^{k-1} place. The largest k-digit number is 999...9 (k nines), and this is equal to 10^k - 1. Consequently, if we know that n has k digits in it, then we know that the value of n is at most 10^k - 1. If we want to solve for k in terms of n, we get
n ≤ 10^k - 1
n + 1 ≤ 10^k
log10 (n + 1) ≤ k
From which we get that k is at least the base-10 logarithm of n; combined with n ≥ 10^{k-1}, this says that k is approximately the base-10 logarithm of n. In other words, the number of digits in n is O(log n).
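A small C sketch of this digit-counting idea (my own illustration): counting digits by repeated division by 10 takes O(log n) steps:

#include <stdio.h>

/* Count base-10 digits by repeated division; the loop runs
   floor(log10(n)) + 1 times, i.e. O(log n) times. */
int digits(long n) {
    int d = 0;
    do { d++; n /= 10; } while (n > 0);
    return d;
}

int main(void) {
    printf("%d %d %d\n", digits(7), digits(1337), digits(245436)); /* 1 4 6 */
    return 0;
}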
For example, let's think about the complexity of adding two large numbers that are too big to fit into a machine word. Suppose that we have those numbers represented in base 10, and we'll call the numbers m and n. One way to add them is through the grade-school method - write the numbers out one digit at a time, then work from the right to the left. For example, to add 1337 and 2065, we'd start by writing the numbers out as
    1 3 3 7
  + 2 0 6 5
  ==============
We add the last digit and carry the 1:
        1
    1 3 3 7
  + 2 0 6 5
  ==============
          2
Then we add the second-to-last ("penultimate") digit and carry the 1:
      1 1
    1 3 3 7
  + 2 0 6 5
  ==============
        0 2
Next, we add the third-to-last ("antepenultimate") digit:
      1 1
    1 3 3 7
  + 2 0 6 5
  ==============
      4 0 2
Finally, we add the fourth-to-last ("preantepenultimate"... I love English) digit:
      1 1
    1 3 3 7
  + 2 0 6 5
  ==============
    3 4 0 2
Now, how much work did we do? We do a total of O(1) work per digit (that is, a constant amount of work), and there are O(max{log n, log m}) total digits that need to be processed. This gives a total of O(max{log n, log m}) complexity, because we need to visit each digit in the two numbers.
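Here is a hedged C sketch of the grade-school method (my own illustration; digits are stored least-significant first, and the function name is made up):

#include <stdio.h>

/* Grade-school addition of two numbers stored as digit arrays,
   least significant digit first. O(1) work per digit, and
   O(max{log n, log m}) digits in total. */
void add(const int *a, int na, const int *b, int nb, int *out, int *nout) {
    int carry = 0, n = na > nb ? na : nb, i;
    for (i = 0; i < n; i++) {
        int s = carry + (i < na ? a[i] : 0) + (i < nb ? b[i] : 0);
        out[i] = s % 10;
        carry = s / 10;
    }
    if (carry) out[i++] = carry;
    *nout = i;
}

int main(void) {
    int a[] = {7, 3, 3, 1};   /* 1337, least significant digit first */
    int b[] = {5, 6, 0, 2};   /* 2065 */
    int out[5], n;
    add(a, 4, b, 4, out, &n);
    for (int i = n - 1; i >= 0; i--) printf("%d", out[i]); /* prints 3402 */
    printf("\n");
    return 0;
}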
Many algorithms get an O(log n) term in them from working one digit at a time in some base. A classic example is radix sort, which sorts integers one digit at a time. There are many flavors of radix sort, but they usually run in time O(n log U), where U is the largest possible integer that's being sorted. The reason for this is that each pass of the sort takes O(n) time, and there are a total of O(log U) iterations required to process each of the O(log U) digits of the largest number being sorted. Many advanced algorithms, such as Gabow's shortest-paths algorithm or the scaling version of the Ford-Fulkerson max-flow algorithm, have a log term in their complexity because they work one digit at a time.
As to your second question about how you solve that problem, you may want to look at this related question which explores a more advanced application. Given the general structure of problems that are described here, you now can have a better sense of how to think about problems when you know there's a log term in the result, so I would advise against looking at the answer until you've given it some thought.
When we talk about big-Oh descriptions, we are usually talking about the time it takes to solve problems of a given size. And usually, for simple problems, that size is just characterized by the number of input elements, and that's usually called n, or N. (Obviously that's not always true-- problems with graphs are often characterized in numbers of vertices, V, and number of edges, E; but for now, we'll talk about lists of objects, with N objects in the lists.)
We say that a problem "is big-Oh of (some function of N)" if and only if:
There exist some constant c and some threshold N_0 such that, for all N > N_0, the runtime of the algorithm is less than c times (some function of N).
In other words, don't think about small problems where the "constant overhead" of setting up the problem matters, think about big problems. And when thinking about big problems, big-Oh of (some function of N) means that the run-time is still always less than some constant times that function. Always.
In short, that function is an upper bound, up to a constant factor.
So, "big-Oh of log(n)" means the same thing that I said above, except "some function of N" is replaced with "log(n)."
So, your problem tells you to think about binary search, so let's think about that. Let's assume you have, say, a list of N elements that are sorted in increasing order. You want to find out if some given number exists in that list. One way to do that which is not a binary search is to just scan each element of the list and see if it's your target number. You might get lucky and find it on the first try. But in the worst case, you'll check N different times. This is not binary search, and it is not big-Oh of log(N) because there's no way to force it into the criteria we sketched out above.
You can pick that arbitrary constant to be c=10, and if your list has N=32 elements, you're fine: 10*log(32) = 50, which is greater than the runtime of 32. But if N=64, 10*log(64) = 60, which is less than the runtime of 64. You can pick c=100, or 1000, or a gazillion, and you'll still be able to find some N that violates that requirement. In other words, there is no N_0.
If we do a binary search, though, we pick the middle element, and make a comparison. Then we throw out half the numbers, and do it again, and again, and so on. If your N=32, you can only do that about 5 times, which is log(32). If your N=64, you can only do this about 6 times, etc. Now you can pick that arbitrary constant c, in such a way that the requirement is always met for large values of N.
With all that background, what O(log(N)) usually means is that you have some way to do a simple thing, which cuts your problem size in half. Just like the binary search is doing above. Once you cut the problem in half, you can cut it in half again, and again, and again. But, critically, what you can't do is some preprocessing step that would take longer than that O(log(N)) time. So for instance, you can't shuffle your two lists into one big list, unless you can find a way to do that in O(log(N)) time, too.
(NOTE: Nearly always, Log(N) means log-base-two, which is what I assume above.)
In the following solution, every line with a recursive call is done on half of the given sizes of the sub-arrays of X and Y; all other lines are done in constant time.
The recurrence is T(2n) = T(2n/2) + c = T(n) + c, which solves to O(lg(2n)) = O(lg n).
You start with MEDIAN(X, 1, n, Y, 1, n).
MEDIAN(X, p, r, Y, i, k)
    if X[r] < Y[i]
        return X[r]
    if Y[k] < X[p]
        return Y[k]
    q = floor((p+r)/2)
    j = floor((i+k)/2)
    if r-p+1 is even
        if X[q+1] > Y[j] and Y[j+1] > X[q]
            if X[q] > Y[j]
                return X[q]
            else
                return Y[j]
        if X[q+1] < Y[j-1]
            return MEDIAN(X, q+1, r, Y, i, j)
        else
            return MEDIAN(X, p, q, Y, j+1, k)
    else
        if X[q] > Y[j] and Y[j+1] > X[q-1]
            return Y[j]
        if Y[j] > X[q] and X[q+1] > Y[j-1]
            return X[q]
        if X[q+1] < Y[j-1]
            return MEDIAN(X, q, r, Y, i, j)
        else
            return MEDIAN(X, p, q, Y, j, k)
The Log term pops up very often in algorithm complexity analysis. Here are some explanations:
1. How do you represent a number?
Let's take the number X = 245436. The notation "245436" has implicit information in it. Making that information explicit:
X = 2 * 10 ^ 5 + 4 * 10 ^ 4 + 5 * 10 ^ 3 + 4 * 10 ^ 2 + 3 * 10 ^ 1 + 6 * 10 ^ 0
This is the decimal expansion of the number. So, the minimum amount of information we need to represent this number is 6 digits. This is no coincidence, as any number less than 10^d can be represented in d digits.
So how many digits are required to represent X? That's equal to the largest exponent of 10 in X, plus 1.
==> 10^d > X
==> log(10^d) > log(X)
==> d * log(10) > log(X)
==> d > log(X)              // and log appears again...
==> d = floor(log(X)) + 1
Also note that this is the most concise way to denote the number in this range. Any reduction will lead to information loss, as a missing digit can be mapped to 10 other numbers. For example: 12* can be mapped to 120, 121, 122, …, 129.
2. How do you search for a number in (0, N - 1)?
Taking N = 10^d, we use our most important observation:
The minimum amount of information to uniquely identify a value in a range between 0 to N - 1 = log(N) digits.
This implies that, when asked to search for a number on the integer line, ranging from 0 to N - 1, we need at least log(N) tries to find it. Why? Any search algorithm will need to choose one digit after another in its search for the number.
The minimum number of digits it needs to choose is log(N). Hence the minimum number of operations taken to search for a number in a space of size N is log(N).
Can you guess the order complexities of binary search, ternary search or deca search? It's O(log(N))!
3. How do you sort a set of numbers?
When asked to sort a set of numbers A into an array B, here's what it looks like:
Permute Elements
Every element in the original array has to be mapped to its corresponding index in the sorted array. So, for the first element, we have n positions. To correctly find the corresponding index in this range from 0 to n - 1, we need... log(n) operations.
The next element needs log(n-1) operations, the next log(n-2) and so on. The total comes to be:
==> log(n) + log(n - 1) + log(n - 2) + … + log(1)
Using log(a) + log(b) = log(a * b),
==> log(n!)
This can be approximated to n*log(n) - n, which is O(n*log(n))! Hence we conclude that no comparison-based sorting algorithm can do better than O(n*log(n)). And some algorithms having this complexity are the popular Merge Sort and Heap Sort!
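If you want to check that approximation numerically, here is a small C sketch (my own illustration; in base 2 the "- n" term of Stirling's approximation becomes "- n*log2(e)"):

#include <math.h>
#include <stdio.h>

/* Compare log2(n!) = sum of log2(i) with n*log2(n) - n*log2(e),
   the Stirling-style approximation behind the n log n bound. */
int main(void) {
    double sum = 0.0;
    for (int i = 2; i <= 1000; i++) {
        sum += log2((double)i);
        if (i % 250 == 0)
            printf("n=%4d  log2(n!)=%9.1f  approx=%9.1f\n",
                   i, sum, i * log2((double)i) - i * log2(exp(1.0)));
    }
    return 0;
}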
These are some of the reasons why we see log(n) pop up so often in the complexity analysis of algorithms. The same can be extended to binary numbers; I made a video on that: Why does log(n) appear so often during algorithm complexity analysis?
Cheers!
We call the time complexity O(log n), when the solution is based on iterations over n, where the work done in each iteration is a fraction of the previous iteration, as the algorithm works towards the solution.
Can't comment yet... necro it is!
Avi Cohen's answer is incorrect, try:
X = 1 3 4 5 8
Y = 2 5 6 7 9
None of the conditions is true, so MEDIAN(X, p, q, Y, j, k) will cut both fives. These are nondecreasing sequences, so not all values are distinct.
Also try this even-length example with distinct values:
X = 1 3 4 7
Y = 2 5 6 8
Now MEDIAN(X, p, q, Y, j+1, k) will cut the four.
Instead I offer this algorithm; call it with MEDIAN(1, n, 1, n):
MEDIAN(startx, endx, starty, endy) {
    if (startx == endx)
        return min(X[startx], Y[starty])
    odd = (startx + endx) % 2     // 0 if even, 1 if odd
    m = (startx + endx - odd) / 2
    n = (starty + endy - odd) / 2
    x = X[m]
    y = Y[n]
    if (x == y)
        // then there are n-2{+1} total elements smaller than or equal to both x and y,
        // so this value is the nth smallest:
        // we have found the median
        return x
    if (x < y)
        // If we remove some numbers smaller than the median
        // and remove the same amount of numbers bigger than the median,
        // the median will not change.
        // We know the elements before x are smaller than the median,
        // and the elements after y are bigger than the median,
        // so we discard these and continue the search:
        return MEDIAN(m, endx, starty, n + 1 - odd)
    else // x > y
        return MEDIAN(startx, m + 1 - odd, n, endy)
}

What does Logn actually mean?

I am just studying for my class in Algorithms and have been looking over QuickSort. I understand the algorithm and how it works, but not how to get the number of comparisons it does, or what logn actually means, at the end of the day.
I understand the basics, to the extent of :
x = log_b(Y) means that
b^x = Y
But what does this mean in terms of algorithm performance? It's the number of comparisons you need to do, I understand that... the whole idea just seems so unintelligible, though. Like, for QuickSort, each level-k invocation involves 2^k invocations, each on a sublist of length n/2^k.
So, summing to find the number of comparisons :
Σ_{k=0}^{log n} 2^k · 2(n/2^k) = 2n(1 + log n)
Why are we summing up to log n? Where did 2n(1 + log n) come from? Sorry for the vagueness of my descriptions; I am just so confused.
If you consider a full, balanced binary tree, then layer by layer you have 1 + 2 + 4 + 8 + ... vertices. If the total number of vertices in the tree is 2^n - 1 then you have 1 + 2 + 4 + 8 + ... + 2^(n-1) vertices, counting layer by layer. Now, let N = 2^n (the size of the tree), then the height of the tree is n, and n = log2(N) (the height of the tree). That's what the log(n) means in these Big O expressions.
below is a sample tree:
      1
    /   \
   2     3
  / \   / \
 4   5 6   7
The number of nodes in the tree is 7, but the height of the tree is log 7 ≈ 3. The log comes in when you have divide-and-conquer methods: in quicksort you divide the list into 2 sublists and continue until you reach very small lists. The division goes log n levels deep (in the average case), because the height of the division tree is log n, and partitioning at each level takes O(n), since at each level you partition about N numbers in total (there may be many sublists at one level, but the total count of elements across them is N). So, as a simple observation: if you have a balanced partition tree, you have log n levels of partitioning, which is the height of the tree.
1. Forget about binary trees for a second. Here's the math: log2 N = k is the same as 2^k = N; that's the definition of log. It could also be the natural log: ln N = k, aka e^k = N, or the decimal log: log10 N = k, i.e. 10^k = N.
2. Now look at a perfect, balanced binary tree:
1
1 + 1
1 + 1 + 1 + 1
(8 ones)
(16 ones)
etc.
How many elements? 1 + 2 + 4 + 8 + ... etc., so a 2-level tree has 2^2 - 1 elements, a 3-level tree has 2^3 - 1, and so on. SO HERE'S THE MAGIC FORMULA: N_TREE_ELEMENTS = 2^(number of levels) - 1, or, using the definition of log: log2(N_TREE_ELEMENTS) ≈ number of levels (we can forget about the -1).
3. Now let's say there's a task to find an element in a balanced binary search tree of N elements with K levels (aka its height). MOST IMPORTANT: by how the tree is constructed, you need no more than 'height' operations to find an element among all N elements, or fewer. And the height equals log2(number of tree elements). So you need log2(N_number_of_tree_elements) operations, or log(N) for short.
To understand what O(log(n)) means, you might want to read up on Big O notation. In short it means that if your data set gets 1024 times bigger, your runtime will only be about 10 times longer (or less) (for base 2).
MergeSort runs in O(n*log(n)), which means it will take about 10,240 times longer. Bubble sort runs in O(n^2), which means it will take 1024^2 = 1,048,576 times longer. So there is really some time to save :)
To understand your sum, you must look at the mergesort algorithm as a tree:
              sort(3,1,2,4)
             /             \
      sort(3,1)         sort(2,4)
      /      \           /      \
  sort(3)  sort(1)  sort(2)  sort(4)
The sum iterates over each level of the tree. k = 0 is the top, k = log(n) is the bottom. The tree will always be of height log2(n) (as it is a balanced binary tree).
To do a little math:
Σ 2^k * 2(n/2^k) =
2 * Σ 2^k * (n/2^k) =
2 * Σ n*2^k/2^k =
2 * Σ n =
2 * n * (1+log(n))   // as there are log(n)+1 terms from 0 to log(n) inclusive
This is of course a lot of work to do, especially if you have more complex algorithms. In those situations you'll be really happy to have the Master Theorem, but for the moment it might just get you more confused. It's very theoretical, so don't worry if you don't understand it right away.
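To verify the sum numerically, here is a small C sketch (my own illustration, using power-of-two n so the algebra is exact):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Evaluate sum_{k=0}^{log2 n} 2^k * 2*(n/2^k) level by level. */
    for (long n = 4; n <= 1024; n *= 4) {
        long lg = (long)log2((double)n);
        long sum = 0;
        for (long k = 0; k <= lg; k++)
            sum += (1L << k) * 2 * (n >> k);  /* 2^k calls, 2*(n/2^k) work each */
        printf("n=%5ld  sum=%7ld  2n(1+log n)=%7ld\n", n, sum, 2 * n * (1 + lg));
    }
    return 0;
}

The two columns agree, since every level of the tree contributes exactly 2n.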
For me, this is a good way to think about issues like this.
