Space complexity of iterative binary search

I'm trying to understand the space complexity of iterative binary search. Given that space complexity is input size + auxiliary space, shouldn't the space complexity depend on the input size? Why is it always O(1)?
If we compare the space complexity of tree A (height 1) and tree B (height 1,000), I think the two should be different. Could someone please explain why it should be the same regardless of the input size?

Given that space complexity is input size + auxiliary space...
This premise is incorrect. I checked the web, and there are indeed a lot of sites that define space complexity this way and then go on to state sublinear space complexities as if there were no contradiction. It's no wonder that people are confused.
This definition is really just wrong, because it is always correct to interpret a space complexity as referring to auxiliary space only.
If a stated space complexity is sublinear, then it obviously cannot include the input space, so you should interpret it as referring to auxiliary space only.
If a stated space complexity is not sublinear, then it is correct whether it includes the input space or not, and in fact it means exactly the same thing in both cases, so you can't go wrong by assuming that it refers to auxiliary space only.
Including the input space in your definition of space complexity can only shrink the set of complexity statements that your definition applies to, and where it does apply the meaning is unchanged, so that makes it a strictly worse definition.
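
To make this concrete, here is a minimal sketch of iterative binary search in Python (the function name and details are mine, not from the question). However large arr grows, the algorithm only ever allocates the three index variables lo, hi and mid, which is why its auxiliary space, and hence its stated space complexity, is O(1):

    def binary_search(arr, target):
        """Iterative binary search over a sorted list.

        The input arr takes O(n) space, but the algorithm itself only
        allocates three integers -- O(1) auxiliary space -- regardless
        of whether len(arr) is 10 or 10 million.
        """
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2      # the only extra storage: lo, hi, mid
            if arr[mid] == target:
                return mid
            elif arr[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1                     # target not present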

Related

Space Complexity of an algorithm when no extra space is used

Consider an algorithm that uses no extra variables except the given input.
How should the space complexity be represented in big-O notation?
O(1)
It requires a constant amount of additional space, namely zero.
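
As a small illustrative sketch (mine, assuming a Python list as the input): strictly speaking, even a loop index is one extra variable, but a single integer is a constant amount of space, so the complexity is still O(1):

    def negate_in_place(arr):
        """Negate every element using only the input list itself.

        The loop index i is the sole extra variable, and one integer
        is a constant amount of space, so auxiliary space is O(1),
        effectively zero beyond bookkeeping.
        """
        for i in range(len(arr)):
            arr[i] = -arr[i]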

Space Complexity vs. Auxiliary Space Complexity

I'm kind of confused between these two terms. For example, I've read that the auxiliary space of merge sort, heapsort and insertion sort is O(1), whereas the space complexity of merge sort, insertion sort and heapsort is O(n).
So, if someone asks me for the space complexity of merge sort, heapsort or insertion sort, what should I tell them: O(1) or O(n)?
Also, note that in the case of selection sort, I've read that its space complexity is O(1), which is the auxiliary space.
So, is it that for algorithms which use in-place computation, we state the auxiliary space?
Furthermore, I know:
Space complexity = auxiliary space + space taken by the input.
Kindly help, thank you!
When looking at O(n), you need to understand what it means: in the worst case, it will be n. I use http://bigocheatsheet.com/ as a point of reference.
When someone asks about space complexity, they want to know how much will be held in memory at a given point in time. This does not include the base structure; they want to know the amount of additional space the sort needs in order to execute. The difference shows up with structures that need to be held entirely in memory.
In regard to your first question: the data itself takes at most n space, but the additional amount held in memory for the operations is O(1).
The sorts you listed are mostly O(1) because they really just need temporary space to hold things while swaps occur. Data structures themselves require more space, because they have a particular size in memory for whatever manipulations need to occur.
I use the linked website a lot.
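
To make the swap point concrete, here is a minimal sketch (mine, not the answerer's) of selection sort, the sort the question mentions as O(1) auxiliary space; the only extra memory it ever needs is the bookkeeping for one swap at a time:

    def selection_sort(arr):
        """In-place selection sort: O(n^2) time, O(1) auxiliary space."""
        n = len(arr)
        for i in range(n):
            smallest = i
            for j in range(i + 1, n):
                if arr[j] < arr[smallest]:
                    smallest = j
            # one temporary swap is all the extra memory the sort uses,
            # no matter how long arr is
            arr[i], arr[smallest] = arr[smallest], arr[i]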

What is O(1) space complexity?

I am having a hard time understanding what O(1) space complexity is. I understand that it means the space required by the algorithm does not grow with the input or the size of the data on which we are using the algorithm. But what does it exactly mean?
Say we use an algorithm on a linked list 1->2->3->4: to traverse the list and reach "3", we declare a temporary pointer and walk the list until we reach 3. Does this mean we still have O(1) extra space? Or does it mean something completely different? I am sorry if this does not make sense at all; I am a bit confused.
To answer your question: if you have a traversal algorithm which allocates a single pointer to traverse the list, that algorithm is considered to have O(1) space complexity. And even if the traversal algorithm needs not 1 but 1,000 pointers, the space complexity is still considered O(1), because the number of pointers does not depend on the list's size.
However, if for some reason the algorithm needs to allocate N pointers when traversing a list of size N, i.e., 3 pointers for a list of 3 elements, 10 pointers for a list of 10 elements, 1,000 pointers for a list of 1,000 elements and so on, then the algorithm is considered to have O(N) space complexity. This is true even when N is very small, e.g., N = 1.
To summarise the two examples above: O(1) denotes constant space use, where the algorithm allocates the same number of pointers irrespective of the list size. In contrast, O(N) denotes linear space use, where the algorithm's space use grows with the input size.
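
A minimal sketch of the constant-space case, with a hypothetical Node class just big enough to form the list 1->2->3->4 from the question:

    class Node:
        """Hypothetical singly linked list node, for illustration only."""
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    def find(head, target):
        """Walk the list with a single pointer: O(1) auxiliary space."""
        current = head               # the one and only extra pointer
        while current is not None:
            if current.value == target:
                return current
            current = current.next
        return None

    # Building 1 -> 2 -> 3 -> 4 takes O(n) input space, but find() still
    # uses only the single "current" pointer. If we instead recorded one
    # pointer per visited node, the auxiliary space would grow to O(n).
    head = Node(1, Node(2, Node(3, Node(4))))
    node = find(head, 3)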
It is just the amount of memory used by a program: the main memory the algorithm requires to complete its execution, as a function of the input size.
The space complexity S(P) of an algorithm is the total space taken by the algorithm to complete its execution with respect to the input size. It includes both constant space and auxiliary space.
S(P) = Constant space + Auxiliary space
Constant space is the part that is fixed for the algorithm, generally the space used by the input and local variables. Auxiliary space is the extra or temporary space used by the algorithm.
Let's say I create some data structure with a fixed size, and no matter what I do to it, it will always have the same fixed size. Operations performed on this data structure are therefore O(1) in space.
An example: say I have an array of fixed size 100. Any operation I do, whether that is reading from the array or updating an element, is O(1) on the array. The array's size (and thus the amount of memory it uses) is not changing.
Another example: say I have a LinkedList to which I add elements. Every time I add an element, I grow the amount of memory required to hold the elements, so storing N elements takes O(N) space.
Hope this helps!
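
A rough sketch of that contrast, using a plain Python list in place of the answer's LinkedList:

    # Fixed-size structure: reads and writes never change its footprint.
    fixed = [0] * 100
    fixed[42] = 7          # O(1) space: the array stays 100 slots
    x = fixed[42]          # O(1) space: reading allocates nothing

    # Growing structure: total memory scales with the number of elements.
    items = []
    for i in range(1_000_000):
        items.append(i)    # after N appends the list holds N elements: O(N) space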

What does "in-place" mean exactly?

I know there are other questions about the meaning of "in-place" algorithms, but my question is a bit different. I know it means that the algorithm changes the original input data instead of allocating new space for the output. But what I'm not sure about is whether the auxiliary memory counts. Namely, is an algorithm still in-place:
if it allocates some additional memory in order to compute the result?
if it has a non-constant number of recursive calls which take up additional space on the stack?
In-place normally implies sub-linear additional space. This isn't necessarily part of the meaning of the term. It's just that an in-place algorithm that uses linear or greater space is not interesting. If you're going to allocate O(n) space to compute an output in the same space as the input, you could have equally easily produced the output in fresh memory and maintained the same memory bound. The value of computing in-place has been lost.
Wikipedia goes farther and says the amount of extra storage is constant. However, an algorithm (say quicksort, with its recursion stack) that uses log(n) additional space while writing the output over the input is still called in-place in usages I have seen.
I can't think of any in-place algorithm that doesn't need some additional memory. Whether an algorithm is "in-place" is characterized by the following:
in-place: To perform an algorithm on an input of size Θ(f(n)) using o(f(n)) extra space by mutating the input into the output.
Take for example an in-place implementation of the insertion sort algorithm. The input is a list of numbers taking Θ(n) space. It takes Θ(n²) time to run in the worst case, but only O(1) extra space. If you did not do the sort in-place, you would need at least Ω(n) space, because the output has to be a list of n numbers.
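
Here is a minimal sketch of such an in-place insertion sort (mine, not the answerer's): worst-case Θ(n²) time, but the only extra storage is one temporary slot and two indices, i.e. O(1):

    def insertion_sort(arr):
        """Sort arr in place.

        Worst-case Theta(n^2) time, but only O(1) extra space: the input
        is mutated into the output, so no second n-element list is needed.
        """
        for i in range(1, len(arr)):
            key = arr[i]               # one temporary slot
            j = i - 1
            while j >= 0 and arr[j] > key:
                arr[j + 1] = arr[j]    # shift larger elements right
                j -= 1
            arr[j + 1] = key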

General confusion about space complexity

I'm having trouble understanding space complexity. My general question is: how can the space complexity of an algorithm on a tree be smaller than the number of nodes in the tree? Here's a specific example:
If b is the branching factor
d is Depth of shallowest goal node and,
m is Maximum length of any path in the state space
For DFS, the space complexity is supposed to be O(bm). I thought it would always just be the size of the tree. Where is the rest of the tree, and how can we search the entire tree with only O(bm) space?
The space complexity of an algorithm is normally separate from the space taken by the raw data.
Just for example, in searching a tree you might keep a stack of the nodes you descended through to get to some particular leaf. In this case, the tree takes O(N) space, but the search takes (assuming a balanced tree) only O(log N) space over and above what the tree itself occupies.
Because space complexity represents the extra space the algorithm takes besides the input.
Complexity, in general, is defined in terms of Turing machines: the space an algorithm takes is the number of extra tape cells needed for it to run. The input cells are not taken into account, and they can even be reused by the algorithm to reduce extra storage.
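
To make the O(bm) bound concrete, here is a sketch (mine) of depth-first search with an explicit stack; the is_goal and children callbacks are assumptions for illustration, not from the question:

    def dfs(root, is_goal, children):
        """Depth-first search with an explicit stack.

        The tree itself may hold on the order of b**m nodes, but the
        frontier stack holds at most b entries per level along a single
        root-to-leaf path, so the search needs only O(b*m) space on top
        of the input.
        """
        frontier = [root]
        while frontier:
            node = frontier.pop()
            if is_goal(node):
                return node
            # Popping one node and pushing its children leaves at most
            # b unexplored siblings per level on the stack -- the O(bm)
            # bound from the question.
            frontier.extend(children(node))
        return None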
