Ternary tree time complexity

I have an assignment to explain the time complexity of a ternary search tree, and the information I find on the subject online is a bit contradictory, so I was hoping I could ask here to get a better understanding.
So, with each search in the tree, we move to the left or right child a logarithmic number of times, log3(n), with n being the number of strings in the tree, correct? And no matter what, we would also have to traverse down the middle child L times, where L is the length of the prefix we are searching for.
Does the running time then come out to O(log3(n) + L)? I see many people simply saying that it runs in logarithmic time, but does the linear term not grow faster, and hence dominate?
Hope I'm making sense, thanks for any answers on the subject!

If the tree is balanced, then yes, any search that needs to visit only one child per iteration will run in logarithmic time.
Notice that O(log_3(n)) = O(ln(n) / ln(3)) = O(c * ln(n)) = O(ln(n)),
so the base of the logarithm does not matter. We say logarithmic time, O(log n).
Notice also that a balanced tree has a height of O(log n), where n is the number of nodes. Your L describes a path down the middle children, which is bounded by the height of the tree and is therefore also O(log n), so not linear w.r.t. n.
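For concreteness, here is a minimal sketch of a ternary-search-tree lookup (my own illustration, not from the thread; the node layout and names like lo/eq/hi are assumptions). The left/right moves are the O(log n) part; the middle-child moves happen exactly once per character, which is the O(L) part:

    class TSTNode:
        def __init__(self, ch):
            self.ch = ch          # character stored at this node
            self.lo = None        # subtree for characters < ch
            self.eq = None        # subtree for the next character of the key
            self.hi = None        # subtree for characters > ch
            self.is_end = False   # True if a stored string ends at this node

    def contains(root, key):
        """Look up `key` in a ternary search tree rooted at `root`."""
        if not key:
            return False
        node, i = root, 0
        while node is not None:
            c = key[i]
            if c < node.ch:        # left move: O(log n) such moves in a balanced tree
                node = node.lo
            elif c > node.ch:      # right move: likewise
                node = node.hi
            else:                  # middle move: one per character, O(L) total
                i += 1
                if i == len(key):
                    return node.is_end
                node = node.eq
        return False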
Does this answer your questions?

Related

Complete binary tree time complexity

Suppose someone wants to generate a complete binary tree. The tree has h levels, where h can be any positive integer and is given as input to the algorithm. What complexity does this lie in, and why?
A complete binary tree is a tree where all levels are full of nodes except possibly the last level, so we can define the time complexity in terms of an upper bound.
If we know the height of the tree is h, then the maximum possible number of nodes in the tree is 2^h - 1.
Therefore, the time complexity is O(2^h - 1) = O(2^h).
To sell your algorithm in the market, you need tight upper bounds to prove that your algorithm is better than the others'.
A tighter upper bound for this problem can be given once you know exactly how many nodes are in the tree. Let's say there are N.
Then the time complexity is O(N).
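As a sketch (my own illustration, not from the answer): assuming "generate" means allocating the nodes themselves, building a tree with all h levels full does constant work per node, which makes the O(2^h - 1) = O(N) bound concrete:

    class TreeNode:
        def __init__(self):
            self.left = None
            self.right = None

    def build_full_tree(h):
        """Build a binary tree with all h levels full: 2^h - 1 nodes, O(1) work each."""
        if h == 0:
            return None
        root = TreeNode()                     # one constant-time step per node created
        root.left = build_full_tree(h - 1)
        root.right = build_full_tree(h - 1)
        return root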

Simple algorithmic complexity of two nested loops

I guess it's rather simple, but it seems I'm confusing myself...
What's the complexity of the following?
// let's say that Q has M initial items
while Q not empty
    v <- Q.getFirst
    for each z in v    // here, every v cannot have more than 3 z's
        ...
        O(1) operations here
        ...
        Q.insert(z)
    end
end
How many times this runs depends on whether, at some point, the v's stop having more z's (let's call the total number of items N).
Is the complexity O(M × N^2), or am I wrong? It's like having a tree with M parent nodes, where each node can have at most three children; N is the total number of nodes.
Your algorithmic complexity has an upper bound of O(M * v) minus the parent nodes that are also child nodes, which is much better stated as O(n), where n is the number of nodes in your tree, since you only iterate over the tree once.
You should also consider the runtime of your Q.insert(z) and Q.getFirst() operations, because depending on your data structure they may not be constant time.
Assuming Q.insert() and Q.getFirst() run in O(1), you can call O(M * v) an approximate bound, but since v elements can be repeated, you are better off stating that the runtime is just O(n): O(M * v) overestimates the upper bound, while O(n) is exact for every instance of the tree (n being the number of nodes).
I would say it's much safer to call it O(n), since I don't know the exact implementation of your insert - although with a linked list both insert and getFirst can be O(1) operations. (Most binary tree inserts will be O(log n) if properly implemented - sufficient information was not provided.)
It should not hurt you to play it safe and call your runtime analysis O(n), but depending on who you're pitching it to, that extra variable may seem unnecessary.
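To make the O(n) claim concrete, here is a minimal Python sketch of the loop being analyzed (my own illustration; I'm assuming Q is a FIFO queue, which collections.deque provides with O(1) append and popleft). Each node is enqueued and dequeued exactly once, so the total work is O(n):

    from collections import deque

    class Item:
        def __init__(self, children=None):
            self.children = children or []   # every v has at most 3 z's

    def process(initial_items):
        """The while/for pair touches each node exactly once overall -> O(n)."""
        q = deque(initial_items)             # Q starts with the M initial items
        while q:
            v = q.popleft()                  # Q.getFirst: O(1) on a deque
            for z in v.children:             # at most 3 z's per v
                # ... O(1) operations here ...
                q.append(z)                  # Q.insert: O(1) on a deque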
HTH

Range search complexity of R-tree and R* tree

What is the range search complexity of an R-tree and an R*-tree? I understand the process of range search: similar to a DFS, it visits each node, and if a node's bounding box intersects the target range, the node is included in the result set. More precisely, we also need to consider the branch-and-bound strategy it uses: if a parent node doesn't intersect the target, we don't visit its children. The complexity should then be smaller than O(n), where n is the number of nodes. What I really don't know is how to calculate the number of nodes given the number of leaves (or data points).
Could anybody give me an explanation here? Thank you.
Obviously, the worst case must be at least O(n) if your range is [-∞; ∞] in every dimension. It may then be as bad as O(n log n) because of the tree.
Assuming the answer is a single entry, the average case is probably O(log n) - only a few paths through the tree need to be followed (if there is little enough overlap).
The logarithm here is to the base of your page size, so the height will usually not exceed 5, because you rarely want trees holding more than, say, 1000^5 = 10^15 objects.
For all practical purposes, assume the runtime complexity is simply the answer set size, O(s): selecting 2% of your data takes twice as long as selecting 1%.
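For reference, here is a sketch of the pruned depth-first range search described in the question (my own illustration; the node layout and names are assumptions, not any particular library's API). Only subtrees whose bounding boxes intersect the query are visited, which is where the branch-and-bound saving comes from:

    def intersects(a, b):
        """Axis-aligned overlap test; boxes are (mins, maxs) pairs of coordinate tuples."""
        return all(a_lo <= b_hi and b_lo <= a_hi
                   for a_lo, a_hi, b_lo, b_hi in zip(a[0], a[1], b[0], b[1]))

    class RNode:
        def __init__(self, box, children=None, entries=None):
            self.box = box                   # bounding box of everything below this node
            self.children = children or []   # child nodes (internal nodes)
            self.entries = entries or []     # (box, object) pairs (leaf nodes)

    def range_search(node, query, results):
        """Collect all leaf entries whose boxes intersect `query`, pruning as we go."""
        if not node.children:                        # leaf level: report matches
            for box, obj in node.entries:
                if intersects(box, query):
                    results.append(obj)
        else:
            for child in node.children:              # branch and bound:
                if intersects(child.box, query):     # skip non-intersecting subtrees
                    range_search(child, query, results)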

Worst case running time of constructing a BST?

Could someone explain to me how the worst-case running time of constructing a BST is n^2? I asked my professor, and the only feedback I received was:
"Because the tree is linear in the size of the input. The cost is 1+2+3+4+...+(n-1)."
Can someone explain this in a different way? Her explanation makes me think it's O(n)...
I think the worst case happens when the input is already sorted:
A,B,C,D,E,F,G,H.
That's why you might want to randomly permute the input sequence if applicable.
The worst-case running time is proportional to the square of the input size because the BST is unbalanced. An unbalanced BST can exhibit a degenerate structure: in the worst case, a singly linked list. Constructing this list requires each insertion to march down the full length of the growing list to reach the leaf where the new node is added.
For instance, try running the algorithm on data which is precisely in the reverse order, so that each new node must become the new leftmost node of the tree.
A BST (even a balanced one!) can be constructed in linear time only if the input data is already sorted. Moreover, this is done using a special algorithm which takes advantage of the order; not by performing N insertions.
I'm guessing the 1+2+3+4+...+(n-1) insertion steps are clear (for a reverse-ordered list).
You should get comfortable with the idea that this number of steps is quadratic. Think about running the algorithm twice and counting the steps:
[1+2+3+...+(n-1)] + [1+2+3+...+(n-1)] = [1+2+3+...+(n-1)] + [(n-1)+...+3+2+1] = n + n + ... + n = n(n-1)
since there are n-1 pairs, each summing to n. Therefore, one run takes n(n-1)/2 ≈ 0.5*n^2 steps, which is O(n^2).
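A minimal sketch of the naive construction (my own illustration): with sorted input, every insert walks to the end of the growing right spine, so the i-th insert costs i steps, giving the 1+2+...+(n-1) total above:

    class BSTNode:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        """Standard unbalanced BST insert: walks one root-to-leaf path."""
        if root is None:
            return BSTNode(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    root = None
    for key in ["A", "B", "C", "D", "E", "F", "G", "H"]:   # already sorted: worst case
        root = insert(root, key)                           # i-th insert walks i nodes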

Time complexity, binary (search) tree

Assume I have a complete binary tree up to a certain depth d. What would the time complexity be to traverse (pre-order) this tree?
I am confused because I know that the number of nodes in the tree is 2^d, so the time complexity would therefore be O(2^d), because the tree grows exponentially, right?
But upon researching on the internet, everyone states that traversal is O(n), where n is the number of elements (which would be 2^d in this case), not O(2^d). What am I missing?
Thanks
n is defined as the number of nodes.
2^d is only the number of nodes when every possible node down to that depth is present, i.e.

      o
     / \
    o   o
   / \
  o   o

has only 5 nodes, while 2^d is 8.
A complete binary tree has every level filled except possibly the last row, and the nodes in the last row are filled in from the left. You can find the definition on Wikipedia:
http://en.wikipedia.org/wiki/Binary_tree#Types_of_binary_trees
Even if you can express the time complexity as O(2^d), that's pretty useless, since it's not something you can use to compare against the time complexity of other collections.
Expressing the time complexity as O(n), on the other hand, is very useful. It tells you exactly how the collection behaves when you increase the number of items, without your having to know exactly how the collection is implemented, and it lets you compare it to the time complexity of other collections.
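For reference, a minimal pre-order traversal sketch (assuming nodes with left/right attributes): it does constant work per node, so it runs in O(n) however you choose to write n:

    def preorder(node, visit):
        """Visit the root, then the left subtree, then the right subtree."""
        if node is None:
            return
        visit(node)                  # O(1) work per node -> O(n) overall
        preorder(node.left, visit)
        preorder(node.right, visit)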
