By time complexity we understand an algorithm's running time as a function of the size of the input (the number of bits needed to represent the instance in memory). Then how do we define space complexity, with regard to this observation? It obviously can't be related to the size of the instance...
Space complexity can be defined in multiple ways, but the usual definition is the following. We assume that the input is stored in read-only memory somewhere, that there is a dedicated write-only memory for storing the result of the operation, and that there is some general "scratch space" memory for doing auxiliary computations. Typically, space complexity is the amount of space needed to store the output plus all the scratch space. For example, binary search has space complexity O(1) because only O(1) storage is needed beyond the read-only input array (assuming that array indices fit into machine words).
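To make that concrete, here's a minimal sketch of iterative binary search (my own illustration): apart from the read-only input array, it only ever holds two indices and a midpoint, so the scratch space is O(1).

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1          # two machine-word indices
    while lo <= hi:
        mid = (lo + hi) // 2        # one more machine word
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```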
Sometimes, the input and output space are combined into a single read-write memory and the input may be modified. In this model, for example, heapsort has space complexity O(1), while mergesort has space complexity O(n) because of the auxiliary storage needed for merging.
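To see where mergesort's O(n) comes from, here's a sketch of the merge step (my own example, not any particular textbook's): the buffer it builds is proportional to the input size.

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list.
    The buffer `out` grows to len(left) + len(right) elements,
    which is where mergesort's O(n) auxiliary space comes from."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```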
Hope this helps!
I'm trying to understand the space complexity of iterative binary search. Given that space complexity is input size + auxiliary space, shouldn't it depend on the input size? Why is it always O(1)?
If we compare the space complexity of tree A (height 1) and tree B (height 1,000), I'd think the two should be different. Could someone please explain to me why it is the same regardless of the input size?
Given that space complexity is input size + auxiliary space...
Yes, but this premise is incorrect. I checked the Web and there seem to be a lot of sites that define space complexity this way, and then go on to mention sublinear space complexities as if there were no contradiction. It's no wonder that people are confused.
This definition is really just wrong, because it is always correct to interpret a space complexity as referring to auxiliary space only.
If a stated space complexity is sublinear, then it obviously cannot include the input space, so you should interpret it as referring to auxiliary space only.
If a stated space complexity is not sublinear, then it is correct whether it includes the input space or not, and in fact it means exactly the same thing in both cases, so you can't go wrong by assuming that it refers to auxiliary space only.
Including the input space in your definition of space complexity can only reduce the set of complexity statements that your definition applies to, and the meaning when it does apply is unchanged, so that makes it strictly less correct as a definition.
I'm kind of confused between these two terms. For example, the auxiliary space of heapsort and insertion sort is O(1) and that of merge sort is O(n), whereas the space complexity of merge sort, insertion sort and heapsort is O(n).
So, if someone asks me what's the space complexity of merge sort, heapsort or insertion sort, what should I tell them: O(1) or O(n)?
Also, note that in the case of selection sort, I've read its space complexity is O(1), which is its auxiliary space.
So, is it that for algorithms which do "in-place computation", we report the auxiliary space?
Furthermore, I know:
Space Complexity = Auxiliary Space + space taken by the input.
Kindly help, thank you!
When looking at O(n), you need to understand what it means: in the worst case, it will be proportional to N. I use http://bigocheatsheet.com/ as a point of reference.
When you are looking at space complexity, what matters is how much will be held in memory at a given point in time. This does not include the input structure itself; what is asked for is the amount of additional space the sort needs in order to execute. The exception is data structures which need to be built entirely in memory alongside the input.
Regarding your first question: the data itself will take up at most N space, but the additional amount held in memory for the operations is O(1).
When you are dealing with sorts like the ones you listed, most of them are only O(1) in auxiliary space because they really just need temporary space to hold elements while swaps occur (merge sort is the exception: it needs O(n) extra space for merging). Data structures themselves require more space because they have a particular footprint in memory for whatever manipulations need to occur.
I use the linked website a lot.
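To make the "temporary space to hold things while swaps occur" point concrete, here's a sketch of selection sort (my own example): the only extra storage is a pair of loop indices and a constant-size swap, regardless of how long the list is.

```python
def selection_sort(a):
    """Sort list a in place. The only extra storage is a couple of
    loop indices and the constant-size swap: O(1) auxiliary space."""
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]  # swap via a constant-size temporary
    return a
```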
Difference between Auxiliary Space and Space Complexity of Heap Sort?
My attempt:
As explained here:
If we want to compare standard sorting algorithms on the basis of space, then Auxiliary Space would be a better criterion than Space Complexity. Merge Sort uses O(n) auxiliary space, Insertion Sort and Heap Sort use O(1) auxiliary space. Space complexity of all these sorting algorithms is O(n) though.
I googled the space complexity of heap sort and found it listed as O(1).
My question is:
Is that explanation correct? What is difference between Auxiliary Space and Space Complexity?
Auxiliary space should be understood as all the memory that is not used to store the original input.
Heap sort's input is an array of unordered elements, and it works by rearranging them in place, meaning that no auxiliary space is used (or only a constant amount, i.e. not depending on the size of the input array); the heap is built inside the input array itself (http://www.algostructure.com/sorting/heapsort.php).
Talking about space complexity, you should take into account both the space used by the input and the auxiliary space, so in this sense heap sort has space complexity O(n) + O(1) = O(n) (n for the input and 1 for the auxiliary space).
To be fair, you could also consider the space used on the stack (a recursive implementation of heap sort uses that space, though it should be only O(log n); see here for more details).
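As a sketch (mine, not taken from the linked page), here's heapsort with an iterative sift-down, which shows why it needs neither an auxiliary array nor, in this form, even the O(log n) recursion stack:

```python
def sift_down(a, start, end):
    """Iteratively restore the max-heap property in a[start..end];
    no recursion, so not even O(log n) stack space is used."""
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                       # pick the larger child
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def heapsort(a):
    """Sort list a in place: the heap lives inside the input array."""
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):  # build the heap in place
        sift_down(a, start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]          # move the max into position
        sift_down(a, 0, end - 1)
```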
By the way, the auxiliary space of merge sort can also be O(1), since there exists a version of merge sort which sorts the array in place (see "How to sort in-place using the merge sort algorithm?").
I am having a hard time understanding what O(1) space complexity is. I understand that it means that the space required by the algorithm does not grow with the input or the size of the data on which we are using the algorithm. But what exactly does it mean?
Say we use an algorithm on a linked list 1->2->3->4: to reach "3" we declare a temporary pointer and traverse the list until we find 3. Does this mean we still have O(1) extra space? Or does it mean something completely different? I am sorry if this does not make sense at all; I am a bit confused.
To answer your question: if you have an algorithm for traversing the list which allocates a single pointer to do so, the traversal algorithm is considered to have O(1) space complexity. Additionally, if that traversal algorithm needed not 1 but 1000 pointers, the space complexity would still be considered O(1).
However, if for some reason the algorithm needs to allocate N pointers when traversing a list of size N, i.e. 3 pointers for a list of 3 elements, 10 pointers for a list of 10 elements, 1000 pointers for a list of 1000 elements and so on, then the algorithm is considered to have a space complexity of O(N). This is true even when N is very small, e.g. N = 1.
To summarise the two examples above: O(1) denotes constant space use, i.e. the algorithm allocates the same number of pointers irrespective of the list size; O(N) denotes linear space use, i.e. the algorithm's space use grows with the input size.
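Here's a sketch of the O(1) case (the Node class and names are mine, just for illustration): a single pointer is reused no matter how long the list is.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def find(head, target):
    """Walk the list with one reused pointer: O(1) auxiliary space
    whether the list has 4 nodes or 4 million."""
    current = head                 # the single extra pointer
    while current is not None:
        if current.value == target:
            return current
        current = current.next
    return None
```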
It is just the amount of memory used by a program: the main memory the algorithm requires to complete its execution, as a function of the input size.
Space Complexity, S(P), of an algorithm is the total space taken by the algorithm to complete its execution with respect to the input size. It includes both constant space and auxiliary space.
S(P) = Constant space + Auxiliary space
Constant space is the part that is fixed for the algorithm, generally equal to the space used by the input and local variables. Auxiliary space is the extra/temporary space used by the algorithm.
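As an illustration of the two parts (my own example): summing a list has only the fixed part, while building a reversed copy adds an O(n) auxiliary part.

```python
def total(a):
    """Only the fixed part: one accumulator, one loop variable,
    whatever len(a) is -- auxiliary space O(1)."""
    s = 0
    for x in a:
        s += x
    return s

def reversed_copy(a):
    """Fixed part plus an auxiliary buffer of len(a) elements --
    auxiliary space O(n)."""
    out = []
    for x in reversed(a):
        out.append(x)
    return out
```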
Let's say I create some data structure with a fixed size, and no matter what I do to the data structure, it will always have the same fixed size. Operations performed on this data structure are therefore O(1) in space.
An example: let's say I have an array of fixed size 100. Any operation I do, whether that is reading from the array or updating an element, is O(1) in space, because the array's size (and thus the amount of memory it uses) is not changing.
Another example: let's say I have a LinkedList to which I keep adding elements. After adding N elements the list takes O(N) space, because I am growing the amount of memory required to hold all of its elements together.
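A quick sketch of the contrast (names are mine): the fixed-size structure's memory never changes, while the growing list's memory tracks N.

```python
fixed = [0] * 100                  # fixed-size structure: always 100 slots

def update(i, value):
    """Reading or updating a slot never changes the structure's
    size, so each operation costs O(1) space."""
    fixed[i] = value

growing = []
for k in range(1000):
    growing.append(k)              # after N appends the list holds N
                                   # elements, so it occupies O(N) space
```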
Hope this helps!
I know there are other questions about the meaning of "in-place" algorithms, but my question is a bit different. I know it means that the algorithm changes the original input data instead of allocating new space for the output. But what I'm not sure about is whether the auxiliary memory counts. Namely:
if an algorithm allocates some additional memory in order to compute the result
if an algorithm has a non-constant number of recursive calls which take up additional space on the stack
In-place normally implies sub-linear additional space. This isn't necessarily part of the meaning of the term. It's just that an in-place algorithm that uses linear or greater space is not interesting. If you're going to allocate O(n) space to compute an output in the same space as the input, you could have equally easily produced the output in fresh memory and maintained the same memory bound. The value of computing in-place has been lost.
Wikipedia goes further and says the amount of extra storage is constant. However, an algorithm (say mergesort) that uses O(log n) additional space to write the output over the input is still called in-place in usages I have seen.
I can't think of any in-place algorithm that doesn't need some additional memory. Whether an algorithm is "in-place" is characterized by the following:
in-place: To perform an algorithm on an input of size Θ(f(n)) using o(f(n)) extra space by mutating the input into the output.
Take for example an in-place implementation of the "Insertion Sort" sorting algorithm. The input is a list of numbers taking Θ(n) space. It takes Θ(n²) time to run in the worst case, but it only takes O(1) space. If you were to not do the sort in-place, you would be required to use at least Ω(n) space, because the output needs to be a list of n numbers.
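Here's a sketch of that in-place insertion sort (my own version): the input list is mutated into the output, and the only extra storage is one saved element plus two indices, i.e. O(1).

```python
def insertion_sort(a):
    """Sort list a in place: Θ(n²) time in the worst case, but the
    only extra storage is `key` and two indices -- O(1) space."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]        # shift larger elements one slot right
            j -= 1
        a[j + 1] = key
```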