I'm looking at Mattson's Stack Distance Algorithm for computing the hit ratio curve of a cache. The paper states that it runs in O(N*M). I'm trying to figure out whether this is O(N) or O(N^2).
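For reference, here is a minimal Python sketch of the naive, list-based version of the stack distance computation, assuming, as is usual for this algorithm, that N is the length of the reference trace and M is the number of distinct blocks (the helper names are my own, not from the paper):

    from collections import Counter

    def stack_distances(trace):
        # Naive Mattson stack-distance pass: for each reference, find the
        # referenced block's depth in an LRU stack and move it to the top.
        # The search is O(M) per reference, O(N*M) overall, where
        # N = len(trace) and M = number of distinct blocks.
        stack = []          # LRU stack, most recently used at index 0
        hist = Counter()    # histogram of observed stack distances
        for block in trace:
            if block in stack:
                depth = stack.index(block)   # O(M) linear search
                stack.pop(depth)
                hist[depth] += 1             # 0-based stack distance
            else:
                hist[float("inf")] += 1      # cold miss: infinite distance
            stack.insert(0, block)           # move/push to top
        return hist

    def hit_ratio(hist, cache_size):
        # A fully associative LRU cache of the given size hits exactly the
        # references whose stack distance is below cache_size.
        total = sum(hist.values())
        hits = sum(v for d, v in hist.items() if d < cache_size)
        return hits / total

Since M <= N, the honest worst case of O(N*M) is O(N^2), e.g. a trace that keeps touching new blocks; it is only O(N) if the number of distinct blocks is bounded by a constant. Variants that replace the linear scan with a balanced tree bring the per-reference cost down to O(log M).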
Related
Good day! I saw a backtracking subset-generation algorithm at this link:
https://www.geeksforgeeks.org/backtracking-to-find-all-subsets/
which claims that the space complexity of the program is O(n). Yet, from what I've learned, the minimum should be O(2^n), since that is the size of our output. Is the given space complexity correct?
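The O(n) figure is defensible if it refers to auxiliary space: the recursion never holds more than the current partial subset plus one stack frame per element, and the 2^n subsets are printed rather than stored. Here is a minimal Python sketch of that pattern (my own illustration of the usual backtracking scheme, not the exact program from the link):

    def print_subsets(items):
        # Auxiliary space is O(n): the `current` list and the recursion
        # stack are each at most n deep. The 2^n subsets are emitted one
        # at a time, so the output never accumulates in memory.
        current = []

        def backtrack(i):
            if i == len(items):
                print(current)            # emit one subset
                return
            backtrack(i + 1)              # branch 1: exclude items[i]
            current.append(items[i])      # branch 2: include items[i]
            backtrack(i + 1)
            current.pop()                 # undo the choice (backtrack)

        backtrack(0)

    print_subsets([1, 2, 3])

If the program instead appends every subset to a result list, the O(2^n) output storage dominates and your figure is the right one; the two claims differ only in whether the output counts as "space".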
Wikipedia says the following on A*'s complexity:
The time complexity of A* depends on the heuristic. In the worst case,
the number of nodes expanded is exponential in the length of the
solution (the shortest path), but it is polynomial when the search
space is a tree...
And my question is: "Is A*'s time complexity exponential? Or is that not a time complexity, but a memory complexity?"
If it is a memory complexity, what time complexity does A* have?
In the worst case, A*'s time complexity is exponential.
But consider h(n), the estimated remaining distance, and h*(n), the exact remaining distance. If the condition |h(n) - h*(n)| <= O(log h*(n)) holds, that is, if the error of our estimate grows no faster than logarithmically, then A*'s time complexity is polynomial.
Sadly, most of the time the estimation error grows roughly linearly, so in practice faster alternatives are preferred, the price paid being that optimality is no longer guaranteed.
Since each expanded node is stored to avoid visiting the same node multiple times, exponential growth in the number of expanded nodes implies exponential time and space complexity.
Please note that exponential space complexity necessarily implies exponential time complexity (touching that much memory takes at least that many steps), but the converse is not true.
Is A*'s exponential complexity a time complexity, or is it a memory complexity?
That extract from Wikipedia suggests that it is referring to time complexity: what it counts is the number of nodes expanded.
I was wondering if anyone could explain A*'s time complexity.
I am using a heuristic based on Euclidean distance to estimate the remaining cost, and there are no loops in the heuristic function.
So I think that the time complexity of the heuristic is O(1).
Taking this into account, what would A*'s complexity be, and how is that derived?
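For a concrete reference point, here is a minimal A* sketch in Python with a Euclidean heuristic; the `neighbors`/`pos` graph representation is my own illustration, not something from your code:

    import heapq
    import math

    def astar(neighbors, pos, start, goal):
        # neighbors: dict node -> list of (neighbor, edge_cost)
        # pos:       dict node -> (x, y), used only by the heuristic
        def h(n):
            # Euclidean distance: O(1), admissible as long as every
            # edge cost is at least the straight-line distance.
            (x1, y1), (x2, y2) = pos[n], pos[goal]
            return math.hypot(x1 - x2, y1 - y2)

        g = {start: 0.0}                    # best cost found so far
        open_heap = [(h(start), start)]     # priority = g + h
        closed = set()
        while open_heap:
            _, node = heapq.heappop(open_heap)
            if node == goal:
                return g[node]
            if node in closed:
                continue                    # stale heap entry
            closed.add(node)
            for nxt, cost in neighbors.get(node, []):
                new_g = g[node] + cost
                if new_g < g.get(nxt, math.inf):
                    g[nxt] = new_g
                    heapq.heappush(open_heap, (new_g + h(nxt), nxt))
        return None                         # goal unreachable

    # Tiny usage example (hypothetical graph): two routes from A to C.
    nbrs = {"A": [("B", 1.0), ("C", 3.0)], "B": [("C", 1.0)]}
    xy = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
    print(astar(nbrs, xy, "A", "C"))  # -> 2.0, via B

The O(1) heuristic only gives you the cost per node evaluation; what dominates is how many nodes get expanded. On an explicit graph with a consistent heuristic (Euclidean distance is consistent under the admissibility condition above), each node is expanded at most once, so the whole run is O((|V| + |E|) log |V|) with a binary heap, just like Dijkstra. The exponential bounds quoted from Wikipedia are about implicitly generated search spaces, where the number of expanded nodes is measured against the length of the solution.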
You can find your answer here:
Why is the complexity of A* exponential in memory?
The time complexity is like the memory complexity: A* keeps every generated node in memory, so the two grow together.
If we know the lower bound for the time complexity of a problem is Ω(n^2), am I correct in thinking it is not possible to have an algorithm with worst-case time complexity O(n log n)?
If the lower bound for the time complexity of a problem is Ω(n^2), that means every algorithm solving the problem has to take at least C*n^2 time, for some constant C > 0 and all sufficiently large n.
On the other hand, suppose you had an algorithm that takes at most K*n*log n time.
Since n*log n grows strictly more slowly than n^2, for all sufficiently large n we would have K*n*log n < C*n^2, so the algorithm would finish in fewer steps than the proven minimum.
Therefore, it is impossible for such an algorithm to solve this problem. You are correct.
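Spelled out as a limit (plain asymptotics, nothing specific to any particular problem):

    % Suppose the algorithm takes at most K n log n steps, while the
    % lower bound says every algorithm needs at least C n^2 steps.
    \[
      \frac{K\,n\log n}{C\,n^{2}}
        = \frac{K}{C}\cdot\frac{\log n}{n}
        \longrightarrow 0
        \quad (n \to \infty),
    \]
    % so K n log n < C n^2 for all sufficiently large n: the algorithm
    % would beat the proven minimum number of steps, a contradiction.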
If there are 2 algorithms that calculate the same result with different complexities, will the O(log n) one always be faster? If so, please explain. BTW, this is not an assignment question.
No. If one algorithm runs in N/100 steps and the other in (log N)*100 steps, then the second one will be slower for smaller input sizes. Asymptotic complexities describe the behavior of the running time as the input size goes to infinity.
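To see where that crossover lands for these particular constants, here is a quick brute-force check in Python (the cost functions below are just the answer's illustrative N/100 and (log N)*100, not anything fundamental):

    import math

    # Cost models from the answer: a small-constant linear algorithm
    # versus a large-constant logarithmic one.
    def linear_cost(n):
        return n / 100

    def log_cost(n):
        return 100 * math.log2(n)

    n = 2
    while linear_cost(n) <= log_cost(n):
        n += 1
    print(n)  # first input size where the O(log N) algorithm wins

For these constants the printed crossover is roughly n = 174,000; below that, the "worse" O(N) algorithm is the faster program.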
No, it will not always be faster. BUT, as the problem size grows larger and larger, eventually you will always reach a point where the O(log n) algorithm is faster than the O(n) one.
In real-world situations, usually the point where the O(log n) algorithm would overtake the O(n) algorithm would come very quickly. There is a big difference between O(log n) and O(n), just like there is a big difference between O(n) and O(n^2).
If you ever have the chance to read Programming Pearls by Jon Bentley, there is an awesome chapter in there where he pits an O(n) algorithm against an O(n^2) one, doing everything possible to give O(n^2) the advantage. (He codes the O(n^2) algorithm in C on an Alpha, and the O(n) algorithm in interpreted BASIC on an old Z80 or something, running at about 1 MHz.) It is surprising how quickly the O(n) algorithm overtakes the O(n^2) one.
Occasionally, though, you may find a very complex algorithm whose complexity is only slightly better than a simpler one's. In such a case, don't blindly choose the algorithm with the better big-O; you may find that it is only faster on extremely large problems.
For an input of size n, an algorithm of O(n) will perform a number of steps proportional to n, while an algorithm of O(log(n)) will perform roughly log(n) steps.
Since log(n) grows far more slowly than n, the O(log(n)) algorithm will be much faster once n is large enough, though, as the other answers point out, constant factors can still favor the O(n) algorithm on small inputs.