I think I am starting to understand at least the theory behind big O notation, i.e. it is a way of measuring the rate at which an algorithm's running time grows as its input grows. In other words, big O quantifies an algorithm's efficiency. But the implementation of it is something else.
For example, in the best-case scenario, push and pop operations will be O(1) because the number of steps it takes to add to or remove from the stack is fixed. Regardless of the value, the process will be the same.
I'm trying to envision how a sequence of events such as pushes and pops can degrade performance from O(1) to O(n^2). If I have an array of capacity n/2, n push and pop operations, and a dynamic array that doubles or halves its capacity when full or half full, how is it possible that the order in which these operations occur can affect the speed at which a program completes? Since push and pop work on the top element of the stack, I'm having trouble seeing how efficiency goes from a constant to O(n^2).
Thanks in advance.
You're assuming that the dynamic array does its resize operations quite intelligently. If this is not the case, however, you might end up with O(n^2) runtime: suppose the array does not double its size when full but is simply resized to size+1. Also, suppose it starts with size 1. You'd insert the first element in O(1). When inserting the second element, the array would need to be resized to size 2, requiring it to copy the previous value. When inserting element k, it would currently have size k-1, and need to be resized to size k, resulting in k-1 elements that need to be copied, and so on.
Thus, for inserting n elements, you'd end up copying the array n-1 times: O(n) resizes. The copy operations are also linearly dependent on n, since the more elements have been inserted, the more need to be copied: O(n) copies per resize. This results in O(n*n) = O(n^2) as the runtime complexity.
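To make this concrete, here is a minimal Java sketch (a hypothetical class, just for illustration) of a push that grows the backing array by exactly one slot when it is full. Inserting n elements this way copies 0 + 1 + ... + (n-1) = n(n-1)/2 elements in total, which is O(n^2); growing by doubling instead keeps the total copying linear in n.

    // Hypothetical grow-by-one stack: every push into a full array copies the
    // whole array, so n pushes starting from capacity 1 perform
    // 0 + 1 + ... + (n-1) = n(n-1)/2 copies in total -- O(n^2).
    class GrowByOneStack {
        private int[] data = new int[1];
        private int size = 0;

        void push(int value) {
            if (size == data.length) {
                int[] bigger = new int[data.length + 1];    // size+1 instead of 2*size
                System.arraycopy(data, 0, bigger, 0, size); // copies `size` elements
                data = bigger;
            }
            data[size++] = value;
        }
    }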
If I implement a stack as (say) a linked list, then pushes and pops will always be constant time (i.e. O(1)).
I would not choose a dynamic array implementation for a stack unless runtime wasn't an issue for me, I happened to have a dynamic array ready-built and available to use, and I didn't have a more efficient stack implementation handy. However, if I did use an array that resized up or down when it became full or half-empty respectively, its runtime would be O(1) per operation while the number of pushes and pops is low enough not to trigger a resize, and O(n) for an operation that does trigger one (hence overall O(n)).
I can't think of a case where a dynamic array used as a stack could deliver performance as bad as O(n^2) unless there was a bug in its implementation.
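For reference, here is a minimal Java sketch (a hypothetical class) of the linked-list approach: push and pop only ever touch the head node, so both are O(1) no matter how many elements are stored.

    import java.util.NoSuchElementException;

    // Singly linked stack: push and pop work only at the head, so both are O(1).
    class LinkedStack<T> {
        private static final class Node<T> {
            final T value;
            final Node<T> next;
            Node(T value, Node<T> next) { this.value = value; this.next = next; }
        }

        private Node<T> head;

        void push(T value) {
            head = new Node<>(value, head);
        }

        T pop() {
            if (head == null) throw new NoSuchElementException("empty stack");
            T value = head.value;
            head = head.next;
            return value;
        }
    }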
I was reading the javadocs on HashSet when I came across the interesting statement:
This class offers constant time performance for the basic operations (add, remove, contains and size)
This confuses me greatly, as I don't understand how one could possibly get constant time, O(1), performance for a comparison operation. Here are my thoughts:
If this is true, then no matter how much data I'm dumping into my HashSet, I will be able to access any element in constant time. That is, if I put 1 element in my HashSet, it will take the same amount of time to find it as if I had a googolplex of elements.
However, this wouldn't be possible if I had a constant number of buckets, or a consistent hash function, since for any fixed number of buckets, the number of elements in that bucket will grow linearly (albeit slowly, if the number is big enough) with the number of elements in the set.
Then, the only way for this to work is to have a hash function that changes every time you insert an element (or every few insertions). A simple hash function that never has any collisions would satisfy this need. One toy example for strings could be: take the ASCII values of the string's characters and concatenate them together (because adding them could result in a collision).
However, this hash function, and any other hash function of this sort will likely fail for large enough strings or numbers etc. The number of buckets that you can form is immediately limited by the amount of stack/heap space you have, etc. Thus, skipping locations in memory can't be allowed indefinitely, so you'll eventually have to fill in the gaps.
But if at some point there's a recalculation of the hash function, this can only be as fast as finding a polynomial which passes through N points, or O(nlogn).
Thus arrives my confusion. While I will believe that the HashSet can access elements in O(n/B) time, where B is the number of buckets it has decided to use, I don't see how a HashSet could possibly perform add or get functions in O(1) time.
Note: This post and this post both don't address the concerns I listed.
The number of buckets is dynamic, and is approximately 2n, where n is the number of elements in the set.
Note that HashSet gives amortized and average time performance of O(1), not worst case. This means we can suffer an O(n) operation from time to time.
So, when the bins are too packed up, we just create a new, bigger array, and copy the elements to it.
This costs n operations, and is done when the number of elements in the set exceeds 2n/2 = n, so the average cost of this operation is bounded by n/n = 1, which is a constant.
Additionally, the number of collisions the HashSet encounters is also constant on average.
Assume you are adding an element x. The probability of the bucket h(x) already holding one element is ~n/2n = 1/2. The probability of it holding two elements is ~(n/2n)^2 = 1/4 (for large values of n), and so on.
This gives you an average running time of 1 + 1/2 + 1/4 + 1/8 + .... Since this sum converges to 2, it means this operation takes constant time on average.
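A rough Java sketch of the behaviour described above (a simplified chained table, not the real java.util.HashSet internals): once the element count reaches the bucket count, the table is doubled and every element is rehashed, so the occasional O(n) rehash is paid for by the n cheap adds that preceded it.

    import java.util.LinkedList;
    import java.util.List;

    // Simplified chained hash set: when size reaches the bucket count, double
    // the table and rehash everything. The O(n) rehash happens only after ~n
    // cheap adds, so the amortized cost per add stays O(1).
    class SimpleHashSet<T> {
        private List<T>[] buckets = newTable(16);
        private int size = 0;

        boolean add(T e) {
            if (contains(e)) return false;
            if (size >= buckets.length) resize(2 * buckets.length); // keep ~2n buckets
            buckets[indexFor(e, buckets.length)].add(e);
            size++;
            return true;
        }

        boolean contains(T e) {
            return buckets[indexFor(e, buckets.length)].contains(e);
        }

        private void resize(int newCapacity) {
            List<T>[] old = buckets;
            buckets = newTable(newCapacity);
            for (List<T> bucket : old)
                for (T e : bucket) buckets[indexFor(e, newCapacity)].add(e); // O(n) rehash
        }

        private int indexFor(Object e, int capacity) {
            return (e.hashCode() & 0x7fffffff) % capacity;
        }

        @SuppressWarnings("unchecked")
        private static <T> List<T>[] newTable(int n) {
            List<T>[] table = new List[n];
            for (int i = 0; i < n; i++) table[i] = new LinkedList<>();
            return table;
        }
    }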
What I know about hashed structures is that to keep O(1) complexity for insertion and removal, you need a good hash function to avoid collisions, and the structure should not be full (if the structure is full, you will have collisions).
Normally hashed structures define a kind of fill limit, for example 70%.
When the number of objects makes the structure exceed this limit, you should extend its size to stay below the limit and guarantee performance. Generally you double the size of the structure when reaching the limit, so that the structure's size grows faster than the number of elements, reducing the number of resize/maintenance operations to perform.
This is a kind of maintenance operation that consists of rehashing all the elements contained in the structure to redistribute them in the resized structure. This certainly has a cost, whose complexity is O(n) with n the number of elements stored in the structure, but this cost is not attributed to the single add call that happens to trigger the maintenance operation.
I think this is what disturbs you.
I also learned that the hash function generally depends on the size of the structure, which it takes as a parameter (there was something like: the maximum number of elements before reaching the limit should be a prime number relative to the structure size, to reduce the probability of collisions, or something like that), meaning that you don't change the hash function itself, you just change one of its parameters.
To answer your comment: there is no guarantee that, if buckets 0 or 1 were filled, new elements will go into buckets 3 and 4 when you resize to 4. Perhaps resizing to 4 makes elements A and B end up in buckets 0 and 3.
For sure, all of the above is theoretical, and in real life you don't have infinite memory, you can have collisions, and maintenance has a cost, etc. That's why you need to have an idea of the number of objects you will store, and make a trade-off with the available memory to choose an initial size for the hashed structure that will limit the need to perform maintenance operations and allow you to stay within O(1) performance.
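In java.util.HashSet specifically, that fill limit is the load factor (0.75 by default), and both the initial capacity and the load factor can be passed to the constructor. If you know roughly how many elements you will store, pre-sizing avoids the rehashing maintenance entirely, for example:

    import java.util.HashSet;
    import java.util.Set;

    class PreSizedSetExample {
        public static void main(String[] args) {
            // Choose the capacity so that expected / capacity stays below the
            // load factor: no rehash is needed while the set is being filled.
            int expected = 1_000_000;
            Set<Integer> set = new HashSet<>((int) (expected / 0.75f) + 1, 0.75f);
            for (int i = 0; i < expected; i++) set.add(i);
        }
    }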
Has anyone ever heard of this heap repair technique: SloppyHeapSort? It uses a "Sloppy" sift-down approach. Basically, it takes the element to be repaired, moves it to the bottom of the heap (without comparing it to its children) by replacing it with its larger child until it hits the bottom. Then, sift-up is called until it reaches its correct location. This makes just over lg n comparisons (in a heap of size n).
However, this cannot be used for heap construction, only for heap repair. Why is this? I don't understand why it wouldn't work if you were trying to build a heap.
The algorithm, if deployed properly, could certainly be used as part of the heap construction algorithm. It is slightly complicated by the fact that during heap construction, the root of the subheap being repaired is not the beginning of the array, which affects the implementation of sift-up (it needs to stop when the current element of the array is reached, rather than continuing to the top of the heap).
It should be noted that the algorithm has the same asymptotic performance as the standard heap-repair algorithm; however, it probably involves fewer comparisons. In part, this is because the standard heap-repair algorithm is called after swapping the root of the heap (the largest element) for the last element in the heap array.
The last element is not necessarily the smallest element in the heap, but it is certainly likely to be close to the bottom. After the swap, the standard algorithm will move the swapped element down up to log2N times, with each step requiring two comparisons; because the element is likely to belong near the bottom of the heap, most of the time the maximum number of comparisons will be performed. But occasionally, only two or four comparisons might be performed.
The "sloppy" algorithm instead starts by moving the "hole" from the top of the heap to somewhere near the bottom (log2N comparisons) and then moving the last element up until it finds its home, which will usually take only a few comparisons (but could, in the worst case, take nearly log2N comparisons).
Now, in the case of heapify, heap repair is performed not with the last element in the subheap, but rather with a previously unseen element taken from the original vector. This actually doesn't change the average performance analysis much, because if you start heap repair with a random element, instead of an element likely to be small, the expected number of sift-down operations is still close to the maximum. (Half of the heap is in the last level, so the probability of needing the maximum number of sift-downs for a random element is one-half.)
While the sloppy algorithm (probably) improves the number of element comparisons, it increases the number of element moves. The classic algorithm performs at most log2N swaps, while the sloppy algorithm performs at least log2N swaps, plus the additional ones during sift-up. (In both cases, the swaps can be improved to moves by not inserting the new element until its actual position is known, halving the number of memory stores.)
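Here is a minimal Java sketch of the sloppy repair described above, for a max-heap stored in an array (hypothetical method name; value is the element that replaced the root). The hole is first walked to the bottom by promoting larger children, then the element is sifted up, stopping at the subheap root so the routine can also be used during construction.

    // "Sloppy" repair of the max-heap rooted at `root` within heap[0..size):
    // phase 1 walks the hole down, always promoting the larger child (no
    // comparisons against `value`); phase 2 sifts `value` up from the bottom,
    // stopping at `root`. The element is stored only once its final position
    // is known, which halves the number of memory stores.
    static void sloppyRepair(int[] heap, int size, int root, int value) {
        int hole = root;
        while (2 * hole + 1 < size) {                 // phase 1: hole down
            int child = 2 * hole + 1;
            if (child + 1 < size && heap[child + 1] > heap[child]) child++;
            heap[hole] = heap[child];
            hole = child;
        }
        while (hole > root && heap[(hole - 1) / 2] < value) {  // phase 2: sift up
            heap[hole] = heap[(hole - 1) / 2];
            hole = (hole - 1) / 2;
        }
        heap[hole] = value;
    }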
As a postscript, I wasn't able to find any reference to your "sloppy" algorithm. On the whole, when asking about a proposed algorithm it is generally better to include a link.
There is a linear time algorithm to construct a heap. I believe what the author meant is that using this approach to build a heap is not efficient, and better algorithms exist. Of course you can build a heap by adding the elements one by one using the described strategy; you simply can do better.
I've been trying to figure this out all day. Some other threads address this, but I really don't understand the answers. There are also many answers that contradict one another.
I understand that an algorithm will never take longer than its upper bound and never be faster than its lower bound. However, I didn't know an upper bound existed for best-case time and a lower bound existed for worst-case time. This question really threw me for a loop. I can't wrap my head around this... a given running time can have a different upper and lower bound?
For example, if someone asked: "Show that the worst-case running time of some algorithm on a heap of size n is Big Omega(lg(n))". How do you possibly get a lower bound, any bound for that matter, when given a run time?
So, in summation, an algorithm's worst-case upper bound can be different from its worst-case lower bound? How can this be? Once given the case, don't bounds become irrelevant? I'm trying to independently study algorithms and I really need to wrap my head around this first.
The meat of my accepted answer to that question is a function whose running time oscillates between n^2 and n^3 depending on whether n is odd. The point that I was trying to make is that sometimes bounds of the form O(n^k) and Omega(n^k) aren't sufficiently descriptive, even though the worst case running time is a perfectly well defined function (which, like all functions, is its own best lower and upper bound). This happens with more natural functions like n log n, which is Omega(n^k) but not O(n^k) for k ≤ 1, and O(n^k) but not Omega(n^k) for k > 1 (and hence not Theta(n^k) regardless of how we choose a constant k).
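As a toy illustration of such an oscillating running time (a hypothetical routine, just to make the point concrete): a function that does about n^2 work for even n and about n^3 work for odd n has a perfectly well defined worst-case running time, is O(n^3) and Omega(n^2), yet is not Theta(n^k) for any fixed k.

    // Hypothetical routine whose running time oscillates with the parity of n:
    // ~n^2 iterations when n is even, ~n^3 when n is odd.
    static long oscillating(int n) {
        long steps = (n % 2 == 0) ? (long) n * n : (long) n * n * n;
        long sum = 0;
        for (long i = 0; i < steps; i++) sum += i;  // one unit of work per iteration
        return sum;
    }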
Suppose you write a program like this to find the smallest prime factor of an integer:
    static long lpf(long n) {
        for (long i = 2; i <= n; i++)
            if (n % i == 0) return i;
        return n; // never reached for n >= 2
    }
If you run the function on the number 10^11 + 3, it will take 10^11 + 2 steps. If you run it on the number 10^11 + 4 it will take just one step. So the function's best-case time is O(1) steps and its worst-case time is O(n) steps.
Big O notation describes efficiency in terms of the number of iterations performed at runtime, generally as a function of the size of the input data set.
The notation is written in its simplest form, ignoring constant multiples and lower-order additive terms but keeping the dominant term. If you have an operation that is O(1), it executes in constant time no matter the input data.
However if you have something such as O(N) or O(log(N)), they will execute at different rates depending on input data.
The high and low bounds describe the largest and smallest number of iterations, respectively, that an algorithm can take.
Example: for O(N), the high bound corresponds to the largest input data and the low bound to the smallest.
Extra sources:
Big O Cheat Sheet and MIT Lecture Notes
UPDATE:
Looking at the Stack Overflow question mentioned above, that algorithm is broken into three parts with three possible types of runtime, depending on the data. Really, this is three different algorithms designed to handle different data values. An algorithm is generally classified with just one notation of efficiency, namely the one that holds for ALL possible values of N.
In the case of O(N^2), larger inputs will take quadratically longer, while smaller ones will finish quickly. The algorithm determines how quickly a data set will be processed, yet bounds are given depending on the range of data the algorithm is designed to handle.
I will try to explain it using the quicksort algorithm.
In quicksort you have an array and choose an element as the pivot. The next step is to partition the input array into two arrays: the first one containing elements < pivot and the second one elements > pivot.
Now assume you apply quicksort to an already sorted list, and the pivot element is always the last element of the array. The result of the partition will be an array of size n-1 and an array of size 1 (the pivot element). This results in a runtime of O(n*n). Now assume that the pivot element always splits the array into two equally sized arrays. In every step the array size is cut in half. This results in O(n log n). I hope this example makes it a bit clearer for you.
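A minimal Java sketch of that scheme (Lomuto-style partition with the last element as pivot; hypothetical method name). On an already sorted array every call peels off only the pivot, which is exactly what produces the O(n*n) behaviour:

    // Quicksort with the last element as pivot. A sorted input gives the most
    // unbalanced partition possible (sizes n-1 and 0), hence O(n^2); an ideal
    // pivot that splits evenly gives O(n log n).
    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {               // partition around a[hi]
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;        // put pivot in its final place
        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }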
Another well-known sorting algorithm is mergesort. Mergesort always has a runtime of O(n log n). In mergesort you cut the array down until only one element is left, and then climb up the call stack, merging the one-element arrays, after that the arrays of size two, and so on.
Let's say you implement a set using an array. To insert an element you simply put it in the next available bucket. If there is no available bucket, you increase the capacity of the array by a value m.
For the insert algorithm, "there is not enough space" is the worst case.
    insert(S, e)
        if size(S) >= capacity(S)
            reserve(S, size(S) + m)
        put(S, e)
Assume we never delete elements. By keeping track of the last available position, put, size, and capacity are Θ(1) in time and space.
What about reserve? If it is implemented like realloc in C, then in the best case you just allocate new memory at the end of the existing memory (best case for reserve), and otherwise you have to move all existing elements as well (worst case for reserve).
The worst-case lower bound for insert is the best case of reserve(), which is linear in m if we don't nitpick: insert in the worst case is Ω(m) in space and time.

The worst-case upper bound for insert is the worst case of reserve(), which is linear in m+n: insert in the worst case is O(m+n) in space and time.
I have read that a cache oblivious stack can be implemented using a doubling array.
Can someone please explain how the analysis makes each push and pop have a 1/B amortized I/O complexity?
A stack supports the following operations:
Push
Pop
While these two operations can be performed using a singly-linked list with O(1) push and O(1) pop, it suffers from caching problems, since the stored elements are dispersed through memory. For this approach, we push to the front of the list, and pop from the front of the list.
We can use a dynamic array as our data structure, and push and pop to the end of the array. (We will keep track of the last filled position in the array as our index, and modify it as we push and pop elements).
Popping will be O(1) since we don't need to resize the array.
If there is free space at the end of the array, pushing will be O(1).
The problem is when we try to push an element but there is no space for it. In this case we create a new array which is twice as large (2n), copy each of the n elements over, and then push the new element.
Suppose we have an array whose capacity is already n, but which starts empty.
If I push n+1 elements onto the array, then the first n elements take O(1)*n = O(n) time.
The +1 element takes O(n) time, since it must build a new copy of the array.
So pushing n+1 elements into the array is O(2n), but we can get rid of the constant and just say it is O(n), or linear in the number of elements.
So while pushing a single element may take longer than a constant operation, pushing a large number of elements takes a linear amount of work.
The dynamic array is cache-friendly since all elements are as close to each other as possible, so multiple elements should share the same cache lines.
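A minimal Java sketch of that doubling-array stack (a hypothetical class): push and pop work at the index of the last filled position, the array is doubled only when it is completely full, and because the elements sit in one contiguous block, consecutive operations tend to fall into the same cache line.

    import java.util.NoSuchElementException;

    // Doubling-array stack: amortized O(1) push and pop, with all elements
    // stored contiguously so that roughly B consecutive operations touch the
    // same cache line of B elements.
    class ArrayStack {
        private int[] data = new int[8];
        private int top = 0;                       // next free slot

        void push(int value) {
            if (top == data.length) {              // full: rare O(n) doubling
                int[] bigger = new int[2 * data.length];
                System.arraycopy(data, 0, bigger, 0, top);
                data = bigger;
            }
            data[top++] = value;
        }

        int pop() {
            if (top == 0) throw new NoSuchElementException("empty stack");
            return data[--top];
        }
    }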
I would think standard stacks are cache-oblivious. You fault on only 1/B of the accesses because any sequence of pushes and pops must touch adjacent addresses, so you can hit a new cache line only once every B operations. (Note: the argument requires at least 2 cache lines to prevent thrashing.)
Inserting an element into a heap involves appending it to the end of the array and then propagating it upwards until it's in the "right spot" and satisfies the heap property, an operation which is O(log n).
However, in C, for instance, calling realloc in order to resize the array for the new element can (and likely will) result in having to copy the entirety of the array to another location in memory, which is O(n) in the best and worst case, right?
Are heaps in C (or any language, for that matter) usually done with a fixed, pre-allocated size, or is the copy operation inconsequential enough to make a dynamically sized heap a viable choice (e.g, a binary heap to keep a quickly searchable list of items)?
A typical scheme is to double the size when you run out of room. This doubling--and the copying that goes with it--does indeed take O(n) time.
However, notice that you don't have to perform this doubling very often. If you average the total cost of all the doublings over all the other operations performed on the heap, then the per-operation cost is indeed inconsequential. (This kind of averaging is known as amortized analysis.)
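For example, a dynamically sized binary max-heap might look like the following Java sketch (a hypothetical class): the occasional O(n) doubling is amortized away across the inserts, and each insert then does its usual O(log n) sift-up.

    import java.util.Arrays;

    // Binary max-heap on a growable array: insert appends at the end (doubling
    // when full, an O(n) step whose amortized cost is O(1) per insert) and then
    // sifts the new element up in O(log n).
    class DynamicMaxHeap {
        private int[] a = new int[16];
        private int size = 0;

        void insert(int value) {
            if (size == a.length) a = Arrays.copyOf(a, 2 * a.length); // rare O(n) copy
            a[size] = value;
            int i = size++;
            while (i > 0 && a[(i - 1) / 2] < a[i]) {   // sift up while parent is smaller
                int t = a[i]; a[i] = a[(i - 1) / 2]; a[(i - 1) / 2] = t;
                i = (i - 1) / 2;
            }
        }
    }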