How can we find the Θ (theta) bound of this recurrence?
T(n) = 2T(n/2) + n^2 / log n
The Master Theorem cannot be applied to it.
I believe this would be Θ(n^2 / log n).
Consider the recursion tree generated by this recurrence relation. The depth would be log2(n), because that is how many times you can divide n by 2 before it reaches 1. The "width", or the total run time at a given level, is not the same at every level here: level i has 2^i subproblems of size n/2^i, each costing (n/2^i)^2 / log(n/2^i), so the level's total cost is n^2 / (2^i * log(n/2^i)). Near the top of the tree these per-level costs shrink geometrically, and the levels near the bottom contribute only O(n) each, so the root level dominates the sum and the whole tree costs Θ(n^2 / log n).
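If it helps, here is a quick numerical sanity check of that conclusion in Python (my own sketch, not a proof; the base case T(n) = 1 for n <= 2 is an arbitrary choice). The ratio T(n) / (n^2 / log n) should level off at a constant as n grows.

import math
from functools import lru_cache

# Expand the recurrence numerically (base case chosen arbitrarily) and
# compare it with n^2 / log n; the ratio should level off at a constant.
@lru_cache(maxsize=None)
def T(n):
    if n <= 2:
        return 1.0
    return 2 * T(n // 2) + n * n / math.log(n)

for n in [2**10, 2**14, 2**18, 2**22]:
    print(n, T(n) / (n * n / math.log(n)))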
Perhaps a dumb question. In a balanced binary tree where n is the total number of nodes, I understand why the height is equal to log(n). What I don't understand is what people mean when they refer to the height as being O(log(n)). I've only seen Big O used in the context of algorithms, where, if an algorithm runs in O(n), doubling the input doubles the running time. But height isn't an algorithm. How does this apply to the height of a tree? What does it mean for the height to be O(log(n))?
This is because a complete binary tree of n nodes does not have height exactly log(n).
Consider a complete binary tree of height k. Such a tree has 2^k leaf nodes. How many nodes does it have in total? If you look at each level, you will find that it has 1 + 2 + 4 + 8 + ... + 2^k nodes, or 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^k.
After some math, you will find that this series equals 2^(k+1) - 1.
So, if your tree has n nodes, what is its height? If you solve the equation n = 2^(k+1) - 1 with respect to k, you obtain k = log2(n+1) - 1.
This expression is slightly less nice than log2(n), and it is certainly not the same number. However, by the properties of big-O notation,
log2(n+1) - 1 = O(log(n)).
In the source you are reading, the emphasis is on the fact that the height grows as fast as log(n): they belong to the same complexity class. This information can be useful when designing algorithms, since you know that doubling the input will increase the tree height only by a constant. This property gives tree structures immense power. So even if the expression for the exact height of the tree is more complicated (and if your binary tree is not complete, it will look more complicated still), it is of logarithmic complexity with respect to n.
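As a small illustration (my own, not from the original post), the Python snippet below takes complete binary trees of height k = 1..10, computes the node count n = 2^(k+1) - 1, and shows that log2(n+1) - 1 recovers k exactly while log2(n) differs from it only by a constant:

import math

# For a complete binary tree of height k:
#   n = 2^(k+1) - 1 nodes, log2(n+1) - 1 gives back k exactly,
#   and log2(n) is within a constant (< 1) of that exact height.
for k in range(1, 11):
    n = 2 ** (k + 1) - 1
    exact_height = math.log2(n + 1) - 1
    print(k, n, exact_height, math.log2(n))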
To add to Berthur's excellent answer, Big-Oh notation is not specific to analysis of algorithms; it applies to any function. In analysis of algorithms we care about the function T(n) which gives the (typically worst-case) runtime for input size n, and we want to know an upper bound (Big-Oh) on that function's rate of growth. Here, there is a function that gives the true height of a tree with whatever property, and we want an upper bound on that function's rate of growth. We could find upper bounds on arbitrary functions devoid of any context at all, like f(n) = n!^(1/2^n) or whatever.
I think they mean it takes O(log(n)) time to traverse the tree from the root down to a leaf.
I would like to know how to derive the time complexity of the Heapify algorithm for the heap data structure.
I am asking this question in light of the book "Fundamentals of Computer Algorithms" by Ellis Horowitz. I am adding some screenshots of the algorithm as well as the derivation given in the book.
Algorithm:
Derivation for worst case complexity:
I understood the first part and the last part of this calculation, but I cannot figure out how 2^(i-1) * (k-i) changed into i * 2^(k-i-1).
All the derivations I can find on the internet take a different approach by treating the height of the tree differently. I know that approach also leads to the same answer, but I would like to understand this one.
You might need the following information:
2^k - 1 = n, or approximately 2^k = n, where k is the number of levels (the root is at level 1, not 0) and n is the number of nodes.
Also, the worst-case time complexity of the Adjust() function is proportional to the height of the sub-tree it is called on, that is, O(log n), where n is the total number of elements in the sub-tree.
It's a variable substitution.
First, realize that on the leftmost side of the equation, the last term of the sum is zero (because when i = k, k - i = 0). So the range of the first summation can be written as 1 <= i <= k-1. Now substitute i with k-i. As i iterates over the set {1, 2, ..., k-1}, k-i iterates over the set {k-1, ..., 2, 1}; these are the same set, so the substitution is valid.
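If it helps to convince yourself, here is a quick Python check (my own, not from the book) that the two summations agree for small values of k:

# sum_{i=1}^{k} 2^(i-1) * (k-i)   vs.   sum_{i=1}^{k-1} i * 2^(k-i-1)
for k in range(1, 12):
    lhs = sum(2 ** (i - 1) * (k - i) for i in range(1, k + 1))
    rhs = sum(i * 2 ** (k - i - 1) for i in range(1, k))
    print(k, lhs, rhs, lhs == rhs)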
In the Closest Pair algorithm, it is said that presorting the points by x and y coordinates can help decrease the time complexity from O(n log^2 n) to O(n log n), but how can that happen? I think the presort itself also requires O(n log n) time rather than O(n), so the recurrence would still be T(n) = 2T(n/2) + O(n log n).
Can anyone show in detail how the per-call presort can be done in O(n)? Or do I have some misunderstanding about it?
Not sure what you're referring to as "presort", but the algorithm is O(n log(n)), following these steps:
First, sort according to the x coordinate.
Recursively divide into two similarly sized sets, split by some value xm.
a. solve the problem for each of the subsets to the left and to the right of xm.
b. for each of the points on the left, find the closest points in a bounded rectangle containing points to the right (see details in link above); same for the points on the right.
c. return the minimum of the smallest distances found in b.
Step 1 is O(n log(n)). Step 2 is given by T(n) = 2T(n/2) + Θ(n), which also solves to O(n log(n)).
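Here is a rough Python sketch of that divide-and-conquer structure (my own, following the steps above; helper names like closest_pair, brute_force and strip are my choice). For simplicity it re-sorts the strip by y inside each call, which corresponds to the O(n log^2 n) variant; keeping the points y-sorted and merging the two halves in O(n), as in merge sort, is what turns the extra term into Θ(n) and the whole thing into O(n log n).

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def brute_force(pts):
    best = float("inf")
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            best = min(best, dist(pts[i], pts[j]))
    return best

def closest_pair(pts_sorted_by_x):
    n = len(pts_sorted_by_x)
    if n <= 3:
        return brute_force(pts_sorted_by_x)
    mid = n // 2
    xm = pts_sorted_by_x[mid][0]                    # dividing x value
    d_left = closest_pair(pts_sorted_by_x[:mid])    # step 2a
    d_right = closest_pair(pts_sorted_by_x[mid:])
    d = min(d_left, d_right)
    # Step 2b: only points within d of the dividing line can do better.
    strip = sorted((p for p in pts_sorted_by_x if abs(p[0] - xm) < d),
                   key=lambda p: p[1])
    for i in range(len(strip)):
        # Only a constant number of strip neighbours need checking.
        for j in range(i + 1, min(i + 8, len(strip))):
            d = min(d, dist(strip[i], strip[j]))
    return d                                        # step 2c

points = [(2, 3), (12, 30), (40, 50), (5, 1), (12, 10), (3, 4)]
print(closest_pair(sorted(points)))                 # step 1: sort by x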
If you have a square region that holds various numbers, what is the result of each of these recurrence relations?
T(n) = 3T(n/2) + c and T(n) = 2T(n/2) + cn
I know the first is supposed to result in a quad partition and the second a binary partition, but I can't intuitively wrap my head around why this is the case. Why are we making 3 recursive calls in the first case and 2 in the second? Why does the +c or +cn affect what we're doing with the problem?
I think this is what you are looking for
http://leetcode.com/2010/10/searching-2d-sorted-matrix-part-ii.html
If your question is just about the recursion explanation, I recommend reading up on solving recurrences using the recursion-tree method and the Master Method:
http://courses.csail.mit.edu/6.006/spring11/rec/rec08.pdf
This explains the second recurrence and the method. Basically, you will have a recursion tree of height lg n with a cost of cn at each level, which sums to Θ(n lg n).
In the first one, the total running time is of the order of the number of nodes in the recursion tree. The height will still be lg n, but the cost at level h is 3^h * c. Summing this over all levels gives you the complexity, Θ(3^(lg n)) = Θ(n^(lg 3)).
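If it helps, here is a quick numerical check in Python (my own, with c = 1 as an arbitrary constant): the first recurrence should grow like n^(lg 3) ≈ n^1.585 and the second like n lg n, so the printed ratios should settle near constants.

import math
from functools import lru_cache

# T1(n) = 3*T1(n/2) + c      -> expected order n^(log2 3)
# T2(n) = 2*T2(n/2) + c*n    -> expected order n*log2(n)
@lru_cache(maxsize=None)
def T1(n):
    return 1 if n <= 1 else 3 * T1(n // 2) + 1

@lru_cache(maxsize=None)
def T2(n):
    return 1 if n <= 1 else 2 * T2(n // 2) + n

for n in [2**8, 2**12, 2**16, 2**20]:
    print(n, T1(n) / n ** math.log2(3), T2(n) / (n * math.log2(n)))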
Say we have an initially empty BST into which we perform n arbitrary inserts; how would I find the average height of this BST? The expression/pseudocode for the height would be (if I'm not mistaken):
H(T) = 1 + max(H(T.left), H(T.right))
My guess at a recurrence relation for this would be T(n) = 1 + 2*T(n/2), but I'm not sure if this is correct.
Now here's my dilemma, if my recurrence relation is correct, how do I calculate the average complexity for my average height algorithm?
In general, average-case analysis is more complicated, and you can't really use the same big-O techniques you would use in a normal worst-case proof. While your definition of height is correct, translating it into a recurrence will probably be more complicated than that. First off, you probably meant T(n) = 1 + T(n/2) (this would give an O(log n) height, while your version gives O(n)), and then nothing guarantees that the values are split evenly 50-50 between left and right.
If you search a bit you will see that there is plenty of material out there on the average height of BSTs. For example, one of the results I found says that the expected height of a randomly built BST tends to roughly 4.3 * ln(n) as n grows, but it takes a lot of complicated math to get there.
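If you just want an empirical feel for it, here is a small Python simulation (my own sketch, not from that material; Node, insert and height are names I made up): insert n distinct random keys into a plain unbalanced BST and compare the resulting height with ln(n). For these modest values of n the ratio typically sits noticeably below the asymptotic constant, since convergence is slow.

import math, random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # iterative insert into an unbalanced BST
    if root is None:
        return Node(key)
    cur = root
    while True:
        if key < cur.key:
            if cur.left is None:
                cur.left = Node(key)
                return root
            cur = cur.left
        else:
            if cur.right is None:
                cur.right = Node(key)
                return root
            cur = cur.right

def height(root):
    # level-order traversal, counting levels (avoids deep recursion)
    h, level = 0, [root] if root else []
    while level:
        h += 1
        level = [c for node in level for c in (node.left, node.right) if c]
    return h

for n in [1000, 10000, 100000]:
    root = None
    for key in random.sample(range(10 * n), n):
        root = insert(root, key)
    h = height(root)
    print(n, h, round(h / math.log(n), 2))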
T(n) = T(n/2) + c
where c is some constant.
We divide the array into two parts but search only one of them: if the key we are looking for is larger than the middle value, we only search in (mid+1 ... j),
and if it is smaller than the middle value, we only search in (i ... mid).
So at each step we only work with a single sub-array.
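For reference, a minimal iterative binary search in Python matching this description (binary_search is my own sketch): each iteration does constant work and halves the remaining range, which is exactly the T(n) = T(n/2) + c behaviour, i.e. O(log n).

def binary_search(a, key):
    i, j = 0, len(a) - 1
    while i <= j:
        mid = (i + j) // 2
        if a[mid] == key:
            return mid              # found at index mid
        elif key > a[mid]:
            i = mid + 1             # keep only the right part (mid+1 ... j)
        else:
            j = mid - 1             # keep only the left part (i ... mid-1)
    return -1                       # not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3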