Space complexity of a backtracking algorithm - big-O

Good day! I saw a backtracking subset-generation algorithm at this link:
https://www.geeksforgeeks.org/backtracking-to-find-all-subsets/
which claims that the space complexity of the program is O(n). Yet, from what I've learned, the minimum space complexity should be O(2^n), since that is the size of the output. Is the given space complexity correct?
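For context, here is a minimal Python sketch of the usual subset-backtracking pattern (an assumption; the linked article's exact code may differ). If each subset is emitted as soon as it is built, the recursion stack and the current-subset buffer together need only O(n) auxiliary space; the O(2^n) figure counts the output itself, which an "auxiliary space" claim excludes.

```python
def all_subsets(nums):
    """Print every subset of nums via backtracking.

    Auxiliary space is O(n): the recursion is at most n levels deep and
    `current` never holds more than n elements. The O(2^n) cost is the
    output itself, which is printed rather than stored.
    """
    current = []

    def backtrack(i):
        if i == len(nums):
            print(current)          # emit one subset
            return
        backtrack(i + 1)            # branch 1: exclude nums[i]
        current.append(nums[i])     # branch 2: include nums[i]
        backtrack(i + 1)
        current.pop()               # undo the choice before returning

    backtrack(0)

all_subsets([1, 2, 3])
```

So both numbers can be consistent: O(n) auxiliary space, O(2^n) if you count the space to hold all generated subsets.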

Related

Is O(N*M) O(N) or O(N^2)?

I'm looking at Mattson's Stack Distance Algorithm for finding the hit-ratio curve of a cache. The paper states that it runs in O(N*M). I'm trying to figure out whether this is O(N) or O(N^2).
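A hedged note (assuming, as is typical for Mattson's algorithm, that N counts references in the trace and M counts distinct blocks): a two-parameter bound like O(N*M) is neither O(N) nor O(N^2) by itself; it only collapses to one of them under an extra assumption about how M relates to N:

```latex
O(N \cdot M) =
\begin{cases}
O(N)   & \text{if } M = O(1),\\
O(N^2) & \text{if } M = \Theta(N),\\
\text{neither of the two} & \text{in general.}
\end{cases}
```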

Time Complexity of Algorithms (Big Oh notation)

Hey, just a quick question.
I've just started looking into algorithm analysis and I'm attempting to learn Big-Oh notation.
The algorithm I'm looking at contains a quicksort (of complexity O(n log n)) to sort a dataset, and the algorithm that then operates upon the set has a worst-case running time of n/10, i.e. complexity O(n).
I believe that the overall complexity of the algorithm would just be O(n), because it's of the highest order, so it makes the complexity of the quicksort redundant. However, could someone confirm this or tell me if I'm doing something wrong?
Wrong.
Quicksort has worst-case complexity O(n^2). But even if you use an O(n log n) sorting algorithm, that is still asymptotically more than O(n), so the sort dominates: the overall complexity is O(n log n) (or O(n^2) if quicksort hits its worst case).
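In other words, the costs of sequential phases add, and the largest term absorbs the rest:

```latex
T(n) = \underbrace{O(n \log n)}_{\text{sort}} + \underbrace{O(n)}_{\text{scan}} = O(n \log n)
```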

A* time complexity

Wikipedia says the following on A*'s complexity:
The time complexity of A* depends on the heuristic. In the worst case,
the number of nodes expanded is exponential in the length of the
solution (the shortest path), but it is polynomial when the search
space is a tree...
And my question is: is A*'s time complexity exponential? Or is that figure not a time complexity but a memory complexity?
If it is a memory complexity, what is A*'s time complexity?
In the worst case, A*'s time complexity is exponential.
But consider h(n), the estimated remaining distance, and h*(n), the exact remaining distance.
If the condition |h(n) - h*(n)| <= O(log h*(n)) holds, that is, if the error
of our estimate grows no faster than the logarithm of the true remaining
distance, then A*'s time complexity will be polynomial.
Sadly, most of the time the estimation error grows linearly, so in practice
faster alternatives are preferred, the price being that optimality is no
longer achieved.
Since each expanded node is stored to avoid visiting the same node multiple times, exponential growth in the number of expanded nodes implies exponential time and space complexity.
Please note that exponential space complexity necessarily implies exponential time complexity; the converse is not true.
Is A*'s complexity exponential in time, or is it a memory complexity?
That extract from Wikipedia suggests that it is referring to time complexity.
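To make the "each expanded node is stored" point concrete, here is a minimal A* sketch in Python (a hypothetical interface, not any particular library: neighbors(node) yields (next_node, edge_cost) pairs and h(node) is the heuristic estimate). Every expanded node lands in the closed set, so exponential growth in the number of expansions means exponential memory as well as exponential time.

```python
import heapq
import itertools

def a_star(start, goal, neighbors, h):
    """Minimal A* search. Returns (cost, path), or None if goal is unreachable."""
    tie = itertools.count()  # breaks ties so the heap never compares nodes/paths
    open_heap = [(h(start), next(tie), 0.0, start, [start])]
    closed = set()           # every expanded node is stored here
    while open_heap:
        _, _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)     # memory grows with the number of expansions
        for nxt, cost in neighbors(node):
            if nxt not in closed:
                ng = g + cost
                heapq.heappush(open_heap,
                               (ng + h(nxt), next(tie), ng, nxt, path + [nxt]))
    return None
```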

What is the A* time complexity and how is it derived?

I was wondering if anyone could explain the A* time complexity.
I am using a heuristic that uses Euclidean distance to estimate the remaining cost. There are no loops in the heuristic function.
So I think that the time complexity of the heuristic is O(1).
Taking this into account, what would the A* complexity be and how is that derived?
You can find your answer here:
Why is the complexity of A* exponential in memory?
The time complexity is like the memory complexity: both are exponential in the worst case.
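For what it's worth, a Euclidean-distance heuristic is indeed O(1) per call (a hypothetical 2-D version below), so it does not change A*'s asymptotic behaviour; the exponential term comes from the number of node expansions, not from the heuristic itself.

```python
import math

def euclidean_h(node, goal):
    """O(1) heuristic: straight-line distance between 2-D points."""
    (x1, y1), (x2, y2) = node, goal
    return math.hypot(x2 - x1, y2 - y1)
```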

Is O(log n) always faster than O(n)?

If there are two algorithms that compute the same result with different complexities, will the O(log n) one always be faster? If so, please explain. BTW, this is not an assignment question.
No. If one algorithm runs in N/100 steps and the other in (log N)*100 steps, then the second one will be slower for small input sizes. Asymptotic complexities describe the behavior of the running time as the input size goes to infinity.
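A quick sketch of that crossover, using the hypothetical step counts N/100 and 100*log(N) from the answer above:

```python
import math

def steps_linear(n):   # hypothetical O(n) algorithm: n/100 steps
    return n / 100

def steps_log(n):      # hypothetical O(log n) algorithm: 100*log(n) steps
    return 100 * math.log(n)

for n in (10, 10_000, 1_000_000):
    winner = "log" if steps_log(n) < steps_linear(n) else "linear"
    print(f"n={n}: linear={steps_linear(n):.1f}, log={steps_log(n):.1f} -> {winner}")
# The O(n) version wins until roughly n ~ 1.2e5; after that O(log n) takes over.
```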
No, it will not always be faster. BUT, as the problem size grows larger and larger, eventually you will always reach a point where the O(log n) algorithm is faster than the O(n) one.
In real-world situations, usually the point where the O(log n) algorithm would overtake the O(n) algorithm would come very quickly. There is a big difference between O(log n) and O(n), just like there is a big difference between O(n) and O(n^2).
If you ever have the chance to read Programming Pearls by Jon Bentley, there is an awesome chapter in there where he pits an O(n) algorithm against an O(n^2) one, doing everything possible to give O(n^2) the advantage. (He codes the O(n^2) algorithm in C on an Alpha, and the O(n) algorithm in interpreted BASIC on an old Z80 or something, running at about 1 MHz.) It is surprising how fast the O(n) algorithm overtakes the O(n^2) one.
Occasionally, though, you may find a very complex algorithm which has complexity just slightly better than a simpler one. In such a case, don't blindly choose the algorithm with a better big-O -- you may find that it is only faster on extremely large problems.
For an input of size n, an O(n) algorithm performs a number of steps proportional to n, while an O(log n) algorithm performs roughly log(n) steps.
For large n, log(n) is much smaller than n, so the O(log n) algorithm is asymptotically better and will eventually be much faster.