I'm kind of confused between these two terms. For example, the auxiliary space of heapsort and insertion sort is O(1) (and merge sort's is O(n)), whereas the space complexity of merge sort, insertion sort and heapsort is O(n).
So, if someone asks me what the space complexity of merge sort, heapsort or insertion sort is, what should I tell them: O(1) or O(n)?
Also, note that in the case of selection sort, I've read its space complexity is O(1), which is its auxiliary space.
So, is it that for algorithms which use in-place computation, we report the auxiliary space instead?
Furthermore, I know that:
Space complexity = auxiliary space + space taken by the input.
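So, as I understand it, for insertion sort that would be O(n) for the input plus O(1) auxiliary, i.e. O(n) + O(1) = O(n) overall, which is exactly why the two figures differ.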
Kindly help, thank you!
When looking at O(n), you need to understand what it means: in the worst case, the usage will be proportional to n. I use http://bigocheatsheet.com/ as a point of reference.
When you are looking at space complexity, people want to know how much will be held in memory at a given point in time. This does not include the base structure (the input itself); they want to know the amount of additional space the sort will need in order to execute. The distinction matters for structures which need to be held entirely in memory.
In regards to your first question: the input will take up at most n space, but the total amount of memory held for the sort's own operations would be O(1).
When you are dealing with sorts like the ones you listed, they are mostly only O(1) auxiliary because they really just need temporary space to hold values while swaps occur (merge sort, which needs an auxiliary array for its merges, is the notable exception). Data structures themselves require more space because they have a particular size in memory for whatever manipulations need to occur.
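To illustrate, here is a minimal selection sort sketch (Python, names mine); the only extra storage beyond the input array is a couple of loop indices, which is where the O(1) figure comes from:

    def selection_sort(a):
        # sorts the list a in place; auxiliary space is O(1) because
        # only the loop indices i, j and smallest are stored
        n = len(a)
        for i in range(n - 1):
            smallest = i
            for j in range(i + 1, n):
                if a[j] < a[smallest]:
                    smallest = j
            # swap the minimum of the unsorted suffix into position
            a[i], a[smallest] = a[smallest], a[i]
        return a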
I use the linked website a lot.
Related
I'm trying to understand the space complexity of iterative binary search. Given that space complexity is input size plus auxiliary space, shouldn't the space complexity depend on the input size? Why is it always O(1)?
If we compare the space complexity of tree A (whose height is 1) and tree B (whose height is 1,000), I think the space complexity should be different. Could someone please explain to me why it should be the same regardless of the input size?
Given the space complexity is input size + auxiliary space...
Yes, but this premise is incorrect. I checked the Web and there seem to be a lot of sites that define space complexity this way, and then go on to mention sublinear space complexities as if there were no contradiction. It's no wonder that people are confused.
This definition is really just wrong, because it is always correct to interpret a space complexity as referring to auxiliary space only.
If a stated space complexity is sublinear, then it obviously cannot include the input space, so you should interpret it as referring to auxiliary space only.
If a stated space complexity is not sublinear, then it is correct whether it includes the input space or not, and in fact it means exactly the same thing in both cases, so you can't go wrong by assuming that it refers to auxiliary space only.
Including the input space in your definition of space complexity can only reduce the set of complexity statements that your definition applies to, and the meaning when it does apply is unchanged, so that makes it strictly less correct as a definition.
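To tie this back to the iterative binary search in the question, here is a minimal sketch (Python, illustrative): no matter how large the input array grows, the algorithm only ever holds a few integers, so its auxiliary space, and hence its usually quoted space complexity, is O(1):

    def binary_search(a, target):
        # the sorted input array a is not counted as auxiliary space;
        # the search itself stores only three integers, i.e. O(1) extra
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == target:
                return mid
            if a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1  # target not present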
I was wondering if someone could explain to me how the space complexity of both these algorithms works. I have done some reading on it, but the sources seem contradictory, if I understand correctly.
For example, I'm interested in how a linked list would affect the space complexity, and this question seems to say it improves it:
Why is mergesort space complexity O(log(n)) with linked lists?
This question, however, says it shouldn't matter: Merge Sort Time and Space Complexity
Now, I'm a bit new to programming and would like to understand the theory a bit better, so layman's terms would be appreciated.
The total space complexity of merge sort is O(n), since you have to store the elements somewhere. Nevertheless, there can indeed be a difference in additional space complexity between an array implementation and a linked-list implementation.
Note that you can implement an iterative version that only requires O(1) additional space. However, if I remember correctly, this version performs horribly.
In the conventional recursive version, you need to account for the stack frames. That alone gives an O(log n) additional space requirement.
In a linked-list implementation, you can perform merges in-place without any auxiliary memory. Hence the O(log n) additional space complexity.
In an array implementation, merges require auxiliary memory (likely an auxiliary array), and the last merge requires the same amount of memory as that used to store the elements in the first place. Hence the O(n) additional space complexity.
Keep in mind that space complexity tells you how the space needs of the algorithm grow as the input size grows. There are details that space complexity ignores: notably, the sizes of a stack frame and of an element are probably different, and a linked list takes up more space than an array because of the links (the references). That last detail is important for small elements, since the additional space requirement of the array implementation may well be less than the space taken by the links of the linked-list implementation.
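As a sketch of why the linked-list merge needs no auxiliary memory (Python, with an illustrative Node class of my own): the merge only relinks existing nodes, so each merge uses O(1) extra space, and the O(log n) of the top-down version comes entirely from the recursion depth:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def merge(a, b):
        # relink the nodes of the two sorted lists a and b in order;
        # apart from one dummy head, no new nodes are allocated
        dummy = tail = Node(None)
        while a and b:
            if a.value <= b.value:
                tail.next, a = a, a.next
            else:
                tail.next, b = b, b.next
            tail = tail.next
        tail.next = a if a else b
        return dummy.next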
Why is merge sort space complexity O(log(n)) with linked lists?
This is only true for top-down merge sort for linked lists, where O(log₂ n) stack space is used due to recursion. For bottom-up merge sort for linked lists, the space complexity is O(1) (constant space). One example of an optimized bottom-up merge sort for a linked list uses a small (26 to 32) array of pointers or references to the first nodes of lists. This is still considered O(1) space complexity. Link to the pseudocode in the wiki article:
https://en.wikipedia.org/wiki/Merge_sort#Bottom-up_implementation_using_lists
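A rough Python sketch of that technique, following the wiki pseudocode and reusing the Node class and merge() helper from the sketch above (details mine): slot i holds a sorted sublist of 2^i nodes, so 32 slots cover about four billion elements and the fixed-size array still counts as O(1) auxiliary space.

    def bottom_up_merge_sort(head):
        slots = [None] * 32  # fixed size, hence O(1) auxiliary space
        node = head
        while node is not None:
            nxt, node.next = node.next, None  # detach a one-node list
            carry, i = node, 0
            # merge the carry upward, like binary addition with carries
            while i < 32 and slots[i] is not None:
                carry = merge(slots[i], carry)
                slots[i] = None
                i += 1
            slots[min(i, 31)] = carry  # overflow collapses into slot 31
            node = nxt
        # merge whatever is left in the slots into one sorted list
        result = None
        for sublist in slots:
            if sublist is not None:
                result = merge(sublist, result) if result else sublist
        return result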
Difference between Auxiliary Space and Space Complexity of Heap Sort?
My attempt:
As explained here:
If we want to compare standard sorting algorithms on the basis of space, then auxiliary space would be a better criterion than space complexity. Merge sort uses O(n) auxiliary space; insertion sort and heap sort use O(1) auxiliary space. The space complexity of all these sorting algorithms is O(n), though.
When I googled the space complexity of heap sort, I found it listed as O(1).
My question is:
Is that explanation correct? What is the difference between auxiliary space and space complexity?
Auxiliary space should be understood as all the memory that is not used to store the original input.
Heap sort's input is an array of unordered elements, and it works by rearranging them in place, meaning that no auxiliary space is used (or only a constant amount, i.e. an amount not depending on the size of the input array): the heap itself is built inside the input array (http://www.algostructure.com/sorting/heapsort.php).
Talking about space complexity, you should also take into account the space used by the input in addition to the auxiliary space, so in this sense heap sort has a space complexity of O(n) + O(1) (n for the input and 1 for the auxiliary space).
To be fair, you could also consider the space used on the stack (a recursive implementation of heap sort uses that space, though it should only be O(log n); see here for more details).
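To make the O(1) auxiliary figure concrete, here is a minimal iterative heapsort sketch (Python, names mine); since it avoids recursion there is no hidden stack usage, and the heap lives entirely inside the input array:

    def heapsort(a):
        n = len(a)

        def sift_down(root, end):
            # iterative sift-down: only a few index variables, O(1) extra
            while 2 * root + 1 < end:
                child = 2 * root + 1
                if child + 1 < end and a[child] < a[child + 1]:
                    child += 1  # pick the larger child
                if a[root] >= a[child]:
                    return
                a[root], a[child] = a[child], a[root]
                root = child

        # build a max-heap in place
        for start in range(n // 2 - 1, -1, -1):
            sift_down(start, n)
        # repeatedly move the maximum to the end and shrink the heap
        for end in range(n - 1, 0, -1):
            a[0], a[end] = a[end], a[0]
            sift_down(0, end)
        return a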
By the way, the auxiliary space of merge sort can also be O(1), since there exists a version of merge sort which sorts the array in place (How to sort in-place using the merge sort algorithm?).
I know there are other questions about the meaning of an "in-place" algorithm, but my question is a bit different. I know it means that the algorithm changes the original input data instead of allocating new space for the output. But what I'm not sure about is whether the auxiliary memory counts. Namely:
if an algorithm allocates some additional memory in order to compute the result
if an algorithm has a non-constant number of recursive calls which take up additional space on the stack
In-place normally implies sub-linear additional space. This isn't necessarily part of the meaning of the term. It's just that an in-place algorithm that uses linear or greater space is not interesting. If you're going to allocate O(n) space to compute an output in the same space as the input, you could have equally easily produced the output in fresh memory and maintained the same memory bound. The value of computing in-place has been lost.
Wikipedia goes further and says the amount of extra storage must be constant. However, an algorithm (say, mergesort) that uses O(log n) additional space to write the output over the input is still called in-place in usages I have seen.
I can't think of any in-place algorithm that doesn't need some additional memory. Whether an algorithm is "in-place" is characterized by the following:
in-place: To perform an algorithm on an input of size Θ(f(n)) using o(f(n)) extra space by mutating the input into the output.
Take for example an in-place implementation of the insertion sort algorithm. The input is a list of numbers taking Θ(n) space. It takes Θ(n²) time to run in the worst case, but only O(1) extra space. If you did not do the sort in place, you would be required to use at least Ω(n) additional space, because the output needs to be a list of n numbers.
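A sketch of that in-place insertion sort (Python, illustrative), with the space usage noted in the comments:

    def insertion_sort(a):
        # mutates the input into the output; besides the input's Θ(n)
        # cells, only i, j and key are stored, i.e. O(1) extra space
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            # shift larger elements one position to the right
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a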
I saw in a wiki and some other texts that the space complexity of bubble sort, insertion sort, selection sort, etc. is O(1) auxiliary. Are they referring to the constant number of memory cells required for the variables used in the program?
Yes, they are referring to the fact that most of these sorts are in-place sorts, so they have constant memory use. If a sort were not in place, it would require at least O(n) extra memory.
If an algorithm works without any additional space or memory, it's called "in situ":
http://en.wikipedia.org/wiki/In_situ
An algorithm is said to be an in situ algorithm, or in-place algorithm, if the extra amount of memory required to execute the algorithm is O(1)