Here is the code of level order traversal, line by line. How come the time complexity is O(n) and not O(n2)?
def levelOrder(root):
    queue = [root]
    while len(queue):
        count = len(queue)
        while count:
            current = queue.pop(0)
            print(current.data, end='\t')
            if current.left:
                queue.append(current.left)
            if current.right:
                queue.append(current.right)
            count -= 1
        print()
I assume that by O(n2) you actually mean O(n^2).
Why should it be O(n^2)? Just because you have two nested loops doesn't mean that the complexity is O(n^2). It all depends on what you are iterating over and what you are doing inside the loop.
If you look at the execution of the code, you'll see every node in the tree is inserted and popped exactly once, and every iteration of the loop is productive (so there are no iterations that don't do anything). Therefore, the number of iterations is bounded by N, the number of nodes in the tree. So the overall complexity is O(N).
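One way to convince yourself is to count the inner-loop iterations directly. Here is a sketch (the node class and sample tree are made up for the demonstration) that performs the same level-by-level traversal and returns how many times the inner loop ran:

```python
from collections import deque

# Minimal node class for the demonstration (any node with
# .data/.left/.right attributes works the same way).
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def level_order_count(root):
    """Level-order traversal that also counts inner-loop iterations."""
    order, iterations = [], 0
    queue = deque([root] if root else [])
    while queue:
        for _ in range(len(queue)):      # one pass per level
            iterations += 1
            current = queue.popleft()
            order.append(current.data)
            if current.left:
                queue.append(current.left)
            if current.right:
                queue.append(current.right)
    return order, iterations

# A 7-node complete tree: the two nested loops run only 7 times in total.
tree = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
```

For the 7-node tree the counter comes out to 7, not 49: each node is dequeued exactly once, no matter how the work is split across the two loops.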
No, this has O(N*L) complexity, where N is the number of nodes and L is the number of levels the tree has. I'll explain why.

Assume the tree has N nodes:

queue = [root]            | O(1)
while len(queue):         | number of levels the tree has: O(L)
    count = len(queue)    | O(1)
    while count:          | roughly the number of nodes left after processing
                          | the left and right subtrees of the current node:
                          | O(left subtree nodes) + O(right subtree nodes)
                          | => O(L+R) => O(N)
        count -= 1        | O(1)

In terms of an upper bound on the algorithm, this wraps up into O(N * L * 1) => O(N*L), where N is the number of nodes and L is the number of levels the tree has.
I'm trying to count the running time of the build-heap step in the heapsort algorithm.
BUILD-HEAP(A)
    heapsize := size(A);
    for i := floor(heapsize/2) downto 1
        do HEAPIFY(A, i);
    end for
END
The basic idea behind why the time is linear is due to the fact that the time complexity of heapify depends on where it is within the heap. It takes O(1) time when the node is a leaf node (which makes up at least half of the nodes) and O(logn) time when it’s at the root.
The O(n) time can be proven by solving the following sum (the image was rendered with HostMath):

    sum for h = 0 to floor(lg n) of ceil(n / 2^(h+1)) * O(h) = O(n)
What I understand here is that O(h) means the worst case of heapify for each node, so height = lg n if the node is the root. For example, to heapify the three nodes 2, 1, 3 it takes log_2 3 ≈ 1.58; the height of the root node 2 is 1. So the call to HEAPIFY is log_2 n = height = O(h).
suppose this is the tree

        4        .. height 2
       / \
      2   6      .. height 1
     / \ / \
    1  3 5  7    .. height 0
A quick look over the above algorithm suggests that the running time is O(nlg(n)), since each call to Heapify costs O(lg(n)) and Build-Heap makes O(n) such calls.
This upper bound, though correct, is not asymptotically tight.
The time complexity for building a binary heap is O(n).
I'm trying to understand: heapsize/2 means the for loop only calls HEAPIFY heapsize/2 times. In the tree above, heapsize = 7 and 7/2 = 3, so the internal nodes are {4, 2, 6}, i.e. n/2 of them.
And every call to HEAPIFY will call HEAPIFY again until it reaches the last leaf below that node.
For example, 2 will call heapify 1 time, 6 will call heapify 1 time, and 4 will call heapify 2 times. So it is the height of the tree, which is lg n. Am I right?
Then the complexity will be O(n/2 * lg n) = O(n lg n).
Which one is right, O(n lg n) or O(n)?
And how can I get O(n)?
I'm reading this as a reference; please correct me if I'm wrong. Thanks!
https://www.growingwiththeweb.com/data-structures/binary-heap/build-heap-proof/
This is the reference I used, and I also read about this in the CLRS book.
https://www.hostmath.com/Show.aspx?Code=ln_2%203
The complexity is O(n), and here is why. Let's assume that the tree has n nodes. Since a heap is a nearly complete binary tree (according to CLRS), the second half of the nodes are all leaves, so there is no need to heapify them. Now for the remaining half: we start from the node at position n/2 and go backwards. In heapifying, a node can only move downwards, so, as you mentioned, it takes at most as many swap operations as the height of the node to complete the heapify for that node.
With n nodes, we have at most log n levels, where level 0 has the root and level 1 has at most 2 nodes and so on:
level 0:            x
                   / \
level 1:          x   x
                 ...
level log n:  x x x x x x x x
So, we have the following:
All nodes at level log n - 1 need at most 1 swap to be heapified (at most n/2 nodes here).
All nodes at level log n - 2 need at most 2 swaps to be heapified (at most n/4 nodes here).
....
All nodes at level 0 need at most log n swaps to be heapified (at most 1 node here, i.e., the root).
So, the sum can be written as follows:
(1 x n/2 + 2 x n/4 + 3 x n/8 + ... + log n x n/2^logn)
Let's factor out n, we get:
n x (1/2 + 2/4 + 3/8 + ... + log n/2^logn)
Now the sum (1/2 + 2/4 + 3/8 + ... + log n/2^logn) is always <= 2 (see Sigma i over 2^i); therefore, the aforementioned sum we're interested in is always <= 2 x n. So, the complexity is O(n).
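To see this bound concretely, here is a sketch of bottom-up BUILD-HEAP (written as a max-heap variant of the pseudocode above, with 0-based indices) that counts swap operations; the count stays below 2n, matching the <= 2 x n sum:

```python
import random

def build_heap(a):
    """Bottom-up BUILD-HEAP for a max-heap; returns the number of swaps."""
    n, swaps = len(a), 0
    for i in range(n // 2 - 1, -1, -1):    # leaves need no heapify
        j = i
        while True:                         # iterative HEAPIFY (sift down)
            largest, left, right = j, 2 * j + 1, 2 * j + 2
            if left < n and a[left] > a[largest]:
                largest = left
            if right < n and a[right] > a[largest]:
                largest = right
            if largest == j:
                break
            a[j], a[largest] = a[largest], a[j]
            swaps += 1
            j = largest
    return swaps

random.seed(0)
data = list(range(1023))                    # a full tree: n = 2^10 - 1
random.shuffle(data)
swap_count = build_heap(data)               # bounded by 2n, per the sum above
```

Each swap moves a node one level down, so the total swap count is the sum over internal nodes of at most their heights, which the analysis above bounds by 2n.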
Each insert at the front of a Python list is O(n), so for the snippet of code below, is the worst-case time complexity O(n + 2k) or O(nk), where k is the number of elements we move during the insert?
def bfs_binary_tree(root):
    queue = [root]
    result = []
    while queue:
        node = queue.pop()
        result.append(node.val)
        if node.left:
            queue.insert(0, node.left)
        if node.right:
            queue.insert(0, node.right)
    return result
I am using a list as a FIFO queue, but inserting an element at the start of the list has O(k) complexity, so I'm trying to figure out the total complexity for n elements passing through the queue.
Since each node ends up in the queue at most once, the outer loop will execute n times (where n is the number of nodes in the tree).
Two inserts are performed during each iteration of the loop and these inserts will require size_of_queue + 1 steps.
So we have n steps and size_of_queue steps as the two variables of interest.
The question is: the size of the queue changes, so what is the overall runtime complexity?
Well, the size of the queue will grow until it is full of leaf nodes, so the number of leaf nodes is an upper bound on the size of the queue; the queue will never be larger than that.
Therefore, we know that the algorithm will never take more than n * leaf nodes steps. This is our upper bound.
So let's find out what the relationship between n and leaf_nodes is.
Note: I am assuming a balanced complete binary tree
The number of nodes at any level of a balanced binary tree with a height of at least 1 (the root node) is: 2^level. The max level of a tree is called its depth.
For example, a tree with a root and two children has 2 levels (0 and 1) and therefore has a depth of 1 and a height of 2.
The total number of nodes in a tree is 2^(depth+1) - 1 (-1 because level 0 only has one node).
n=2^(depth+1)-1
We can also use this relationship to identify the depth of the balanced binary tree, given the total number of nodes:
If n=2^(depth+1) - 1
n + 1 = 2^(depth+1)
log(n+1) = depth+1 = number of levels, including the root. Subtract 1 to get the depth (ie., the max level) (in a balanced tree with 4 levels, level 3 is the max level because root is level 0).
What do we have so far
number_of_nodes = 2^(depth+1) - 1
depth = log(number_of_nodes)
number_of_nodes_at_level_k = 2^k
What we need
A way to derive the number of leaf nodes.
Since the depth == last_level and since the number_of_nodes_at_level_k = 2^k, it follows that the number of nodes at the last level (the leaf nodes) = 2^depth
So: leaf_nodes = 2^depth
Your runtime complexity is n * leaf_nodes = n * 2^depth = n * 2^(log n) = n * n = n^2.
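For completeness: the usual way to avoid the O(n)-per-insert cost entirely is collections.deque, which supports O(1) operations at both ends. Here is a sketch of the same traversal (the Node class is made up here to match the shape assumed by the question's code):

```python
from collections import deque

# Node class assumed to match the one used by the question's code.
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def bfs_binary_tree(root):
    """Same breadth-first traversal, but every queue operation is O(1)."""
    queue = deque([root] if root else [])
    result = []
    while queue:
        node = queue.popleft()        # O(1), unlike list.insert(0, ...)
        result.append(node.val)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return result

sample = Node(1, Node(2, Node(4), Node(5)), Node(3))
```

With a deque the whole traversal is O(n): each of the n nodes is appended and popped exactly once at constant cost.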
If the average case for deletion is lg(n), which makes sense since you have to percDown values to maintain the integrity of a heap, why is it not the same for insertion and percUp-ing the heap? Isn't the number of comparisons made relative to the input (n) and divided by 2?
This is an interesting proposition. I have tried to do a basic computation. Please do take a look and mention if the calculation is buggy. It's basically a mechanical computation with a few assumptions.
Suppose that:
1. there are already k levels in a complete binary tree of 2^k - 1 elements.
2. we add 2^k more elements to make the tree have k+1 levels.
3. the elements uniformly and randomly get situated at a level from [1..k]
(3) indicates that each element in the old tree is essentially replaced by a new element. Hence the number of percolations upwards will be:
k + 2 * (k-1) + 4 * (k - 2) + ... + 2^(k-1) * 1
= k + 2 * k + 4 * k + ... + 2^(k-1) * k - (2 + 2 * 2^2 + 3 * 2^3 + ... + (k - 1) * 2^(k-1))
= k * (1 + 2 + 4 + ... + 2^(k-1)) - (k * 2^k - 2 * (2^k - 1)) ......(a)
= k * (2^k - 1) - k * 2^k + 2 ^(k+1) - 2
= k * 2^k - k - k * 2^k + 2^(k+1) - 2
= 2^(k+1) - (k + 2)
(a) is computed here.
Hence we have 2^k elements that are percolated using 2^(k+1) - (k + 2) steps in total. Therefore the average cost per element is O((2^(k+1) - (k + 2)) / 2^k) = O(2 - (k+2)/2^k), which is O(1).
Hence we can assume constant cost of insertion.
Note: If we assume that an element will get replaced with probability 0.5, we could factor that into the computation above and I think that it will lead to the division becoming close to 1.
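The constant-average claim can also be checked empirically. This sketch (a minimal min-heap percolate-up written for the experiment, not any particular library's implementation) counts upward swaps over many random insertions; the average stays a small constant, far below log2(n):

```python
import random

def percolate_up(heap, value):
    """Insert value into a min-heap; return how many upward swaps occurred."""
    heap.append(value)
    i, swaps = len(heap) - 1, 0
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:   # heap property restored; stop floating
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        i, swaps = parent, swaps + 1
    return swaps

random.seed(1)
heap, total_swaps, n = [], 0, 1 << 14
for _ in range(n):
    total_swaps += percolate_up(heap, random.random())
average_swaps = total_swaps / n   # a small constant, far below log2(n) = 14
```

Intuitively this matches the computation above: most nodes live in the bottom levels, so a random new element rarely needs to float far.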
why is it not the same for insertion and percUping the heap?
When you remove the head (h), you swap it with the last element in the heap, which is a bottom-level element (l) of the deepest subtree. The probability of l ending up at a higher level of the heap is considered low enough to be negligible, because it is already the lowest element of its subtree.
Special cases exist. For example, a heap containing N equal integers will do head extraction in O(1). But that is not the general case.
I was just wondering the exact same thing when starting to learn about binary heaps. I wrote an implementation in C and became puzzled by this phenomenon when I timed the different operations. My intuition, after giving it some thought, is that, just as someone else mentioned, when removing an element, its spot in the heap is replaced by the last element, i.e. an element that belongs at the bottom, and will therefore be guaranteed to have to sink back down to the bottom. In my tests I only tried removal of the top element of the heap, so these removals always led to traversal of the whole height of the heap (log n).
Insertion, on the other hand, puts the new element at the bottom and lets it float up. My thinking is that since most of the elements of the heap are concentrated on the lower levels, it is likely that the new node reaches its correct position with only one or two jumps. Even if the node's value is the average value of the entire heap, it shouldn't typically need to jump all the way up to the vertical middle level of the heap (seeing as the bottom level of a heap containing 2^x elements actually contains one more than half the nodes of the entire heap). Don't know if that makes sense, but it does to me :).
Now, if by removal we are talking about removing any given element and not just the top one, I don't see why the average case shouldn't be O(1) there too, since then we would most likely be removing something near the bottom...
Can someone explain to me in simple English or an easy way to explain it?
Merge sort uses the divide-and-conquer approach to solve the sorting problem. First, it divides the input in half using recursion. After dividing, it sorts the halves and merges them into one sorted output. See the figure.
This means that it is better to sort half of your problem first and then do a simple merge subroutine. So it is important to know the complexity of the merge subroutine and how many times it will be called in the recursion.
The pseudo-code for the merge sort is really simple.
# C = output [length = N]
# A = 1st sorted half [N/2]
# B = 2nd sorted half [N/2]
i = j = 1
for k = 1 to N
    if A[i] < B[j]
        C[k] = A[i]
        i++
    else
        C[k] = B[j]
        j++
It is easy to see that every loop iteration performs 4 operations: k++, i++ or j++, the if comparison, and the assignment C = A|B. So you will have at most 4N + 2 operations, giving O(N) complexity. For the sake of the proof, 4N + 2 will be treated as 6N, since 4N + 2 <= 6N holds for N >= 1.
So assume you have an input with N elements, and assume N is a power of 2. At every level you have twice as many subproblems, each with an input of half the elements of the previous input. This means that at level j = 0, 1, 2, ..., lg N there will be 2^j subproblems, each with an input of length N / 2^j. The number of operations at each level j will be at most
2^j * 6(N / 2^j) = 6N
Observe that no matter the level, you always have at most 6N operations.
Since there are lgN + 1 levels, the complexity will be
O(6N * (lg N + 1)) = O(6N lg N + 6N) = O(N lg N)
References:
Coursera course Algorithms: Design and Analysis, Part 1
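For reference, the pseudocode above can be fleshed out into a complete (if unoptimized) Python merge sort; the merge loop is the same "pick the smaller head" step, plus the leftover-copy that the simplified pseudocode omits:

```python
def merge_sort(a):
    """Top-down merge sort mirroring the pseudocode; returns a new list."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # the O(N) merge subroutine
        if left[i] < right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])    # copy whichever half still has elements
    merged.extend(right[j:])
    return merged
```

The recursion halves the input lg N times and each level does O(N) merge work, which is exactly the 6N-per-level accounting above.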
On a "traditional" merge sort, each pass through the data doubles the size of the sorted subsections. After the first pass, the file will be sorted into sections of length two. After the second pass, length four. Then eight, sixteen, etc. up to the size of the file.
It's necessary to keep doubling the size of the sorted sections until there's one section comprising the whole file. It will take lg(N) doublings of the section size to reach the file size, and each pass of the data will take time proportional to the number of records.
After splitting the array to the stage where you have single elements (call them sublists), at each stage we compare the elements of each sublist with its adjacent sublist. For example (reusing Davi's image):
At Stage 1, each element is compared with its adjacent one, so n/2 comparisons.
At Stage 2, each element of a sublist is compared with its adjacent sublist. Since each sublist is sorted, the maximum number of comparisons made between two sublists is at most the length of a sublist: 2 at Stage 2, 4 at Stage 3, and 8 at Stage 4, since the sublists keep doubling in length. This means the maximum number of comparisons at each stage = (length of sublist) * (number of sublists / 2) => n/2
As you've observed, the total number of stages is log(n) base 2.
So the total complexity is (max number of comparisons at each stage) * (number of stages) = O((n/2) * log(n)) => O(n log(n))
Algorithm merge-sort sorts a sequence S of size n in O(n log n)
time, assuming two elements of S can be compared in O(1) time.
This is because, whether it is the worst case or the average case, merge sort divides the array into two halves at each stage, which gives it the lg(n) factor; the other factor of N comes from the comparisons made at each stage. Combined, this becomes roughly O(n lg n). The lg(n) factor is always present, regardless of average or worst case; the remaining factor of N depends on the number of comparisons made in each case. In the worst case, N comparisons happen at each stage for an input of size N, so it becomes O(n lg n).
Many of the other answers are great, but I didn't see any mention of height and depth related to the "merge-sort tree" examples. Here is another way of approaching the question with a lot of focus on the tree. Here's another image to help explain:
Just a recap: as other answers have pointed out we know that the work of merging two sorted slices of the sequence runs in linear time (the merge helper function that we call from the main sorting function).
Now looking at this tree, where we can think of each descendant of the root (other than the root) as a recursive call to the sorting function, let's try to assess how much time we spend on each node... Since the slicing of the sequence and merging (both together) take linear time, the running time of any node is linear with respect to the length of the sequence at that node.
Here's where tree depth comes in. If n is the total size of the original sequence, the size of the sequence at any node is n/2^i, where i is the depth. This is shown in the image above. Putting this together with the linear amount of work for each slice, we have a running time of O(n/2^i) for every node in the tree. Now we just have to sum that up for all the nodes. One way to do this is to recognize that there are 2^i nodes at each level of depth in the tree. So for any level, we have O(2^i * n/2^i), which is O(n) because we can cancel out the 2^i! If each depth is O(n), we just have to multiply that by the height of this binary tree, which is log n. Answer: O(n log n)
reference: Data Structures and Algorithms in Python
The recursive tree will have depth log(N), and at each level in that tree you will do a combined N work to merge two sorted arrays.
Merging sorted arrays
To merge two sorted arrays A[1,5] and B[3,4] you simply iterate both starting at the beginning, picking the lowest element between the two arrays and incrementing the pointer for that array. You're done when both pointers reach the end of their respective arrays.
[1,5] [3,4] --> []
^ ^
[1,5] [3,4] --> [1]
^ ^
[1,5] [3,4] --> [1,3]
^ ^
[1,5] [3,4] --> [1,3,4]
^ x
[1,5] [3,4] --> [1,3,4,5]
x x
Runtime = O(A + B)
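The walkthrough above corresponds to a short two-pointer merge like this sketch; the tail copy at the end covers the step where one pointer (the x in the diagram) has run off its array:

```python
def merge(a, b):
    """Two-pointer merge of sorted lists a and b in O(len(a) + len(b))."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:          # pick the lower of the two heads
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])             # one of these tails is empty; the other is
    out.extend(b[j:])             # copied once its counterpart is exhausted
    return out
```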
Merge sort illustration
Your recursive call stack will look like this. The work starts at the bottom leaf nodes and bubbles up.
beginning with [1,5,3,4], N = 4, depth k = log(4) = 2
[1,5] [3,4] depth = k-1 (2^1 nodes) * (N/2^1 values to merge per node) == N
[1] [5] [3] [4] depth = k (2^2 nodes) * (N/2^2 values to merge per node) == N
Thus you do N work at each of k levels in the tree, where k = log(N)
N * k = N * log(N)
The MergeSort algorithm takes three steps:
The divide step computes the mid position of the sub-array and takes constant time, O(1).
The conquer step recursively sorts two sub-arrays of approximately n/2 elements each.
The combine step merges a total of n elements at each pass, requiring at most n comparisons, so it takes O(n).
The algorithm requires approximately lg n passes to sort an array of n elements, so the total time complexity is O(n lg n).
1. Let's take an example of 8 elements {1,2,3,4,5,6,7,8}. You first divide it in half, n/2 = 4 ({1,2,3,4} and {5,6,7,8}). These two sections take O(n/2) and O(n/2) time respectively, so the first step takes O(n/2 + n/2) = O(n) time.
2. The next step divides into n/2^2 pieces, i.e. (({1,2} {3,4}) ({5,6} {7,8})), which take O(n/4), O(n/4), O(n/4), O(n/4) respectively, so this step takes O(n/4 + n/4 + n/4 + n/4) = O(n) time in total.
3. Similarly, the next step divides the previous one by 2 again, giving n/2^3 pieces (({1},{2},{3},{4}) ({5},{6},{7},{8})), whose time is O(n/8 + n/8 + n/8 + n/8 + n/8 + n/8 + n/8 + n/8) = O(n).
So every step takes O(n) time. Let the number of steps be a; then merge sort takes O(a*n) time, where a must be log(n), because each step always divides by 2. So, finally, the time complexity of merge sort is O(n log(n)).