Fastest data structure for inserting/sorting

I need a data structure that can insert elements and sort itself as quickly as possible. I will be inserting a lot more than sorting. Deleting is not much of a concern and neither is space. My specific implementation will additionally store nodes in an array, so lookup will be O(1), i.e. you don't have to worry about it.

If you're inserting a lot more than sorting, then it may be best to use an unsorted list/vector and quicksort it when you need it sorted. This keeps inserts very fast. The one drawback¹ is that sorting is a comparatively lengthy operation, since it's not amortized over the many inserts. If you depend on relatively constant time, this can be bad.
¹ Come to think of it, there's a second drawback. If you underestimate your sort frequency, this could quickly end up being overall slower than a tree or a sorted list. If you sort after every insert, for instance, then the insert+quicksort cycle would be a bad idea.
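A minimal sketch of this append-then-sort approach in C++ (the class name and interface are illustrative, not from the original answer):

#include <algorithm>
#include <vector>

// Sketch: appends are amortized O(1); the O(N log N) sort cost is paid
// only when a sorted view is actually requested.
template <typename T>
class LazySortedVector {
public:
    void insert(const T& value) {
        data_.push_back(value);   // no ordering maintained on insert
        dirty_ = true;
    }
    const std::vector<T>& sorted() {
        if (dirty_) {
            std::sort(data_.begin(), data_.end());
            dirty_ = false;
        }
        return data_;
    }
private:
    std::vector<T> data_;
    bool dirty_ = false;
};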

Just use one of the self-balancing binary search trees, such as a red-black tree.

Use any of the balanced binary trees, like AVL trees. They give O(lg N) time complexity for both of the operations you are looking for.
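For what it's worth, in C++ you rarely have to write such a tree yourself: std::set and std::multiset are typically implemented as red-black trees, so a sketch of this approach is just:

#include <iostream>
#include <set>

int main() {
    std::multiset<int> tree;   // typically a red-black tree; keeps duplicates
    for (int v : {5, 1, 4, 1, 3})
        tree.insert(v);        // O(log N) per insert
    for (int v : tree)         // in-order iteration is already sorted
        std::cout << v << ' '; // prints: 1 1 3 4 5
}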

If you don't need random access into the array, you could use a Heap.
Worst and average time complexity:
O(log N) insertion
O(1) read largest value
O(log N) to remove the largest value
It can be reconfigured to give the smallest value instead of the largest. By repeatedly removing the largest/smallest value you get a sorted list in O(N log N).
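A sketch of this with std::priority_queue (using std::greater to make it a min-heap, so the smallest value comes out first):

#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;
    for (int v : {5, 1, 4, 1, 3})
        heap.push(v);                    // O(log N) insertion
    while (!heap.empty()) {
        std::cout << heap.top() << ' ';  // O(1) read of the smallest value
        heap.pop();                      // O(log N) removal
    }                                    // prints: 1 1 3 4 5
}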

If you can do a lot of inserts before each sort then obviously you should just append the items and sort no sooner than you need to. My favorite is merge sort. It is O(N log N), is well behaved, and has a minimum of storage manipulation (new, malloc, tree balancing, etc.).
HOWEVER, if the values in the collection are integers and reasonably dense, you can use an O(N) sort, where you just use each value as an index into a big-enough array, and set a boolean TRUE at that index. Then you just scan the whole array and collect the indices that are TRUE.
You say you're storing items in an array where lookup is O(1). Unless you're using a hash table, that suggests your items may be dense integers, so I'm not sure if you even have a problem.
Regardless, memory allocating/deleting is expensive, and you should avoid it by pre-allocating or pooling if you can.
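A minimal sketch of the flag-array sort described above, assuming the values are distinct, dense, non-negative integers below a known bound (with plain booleans, duplicates would be lost):

#include <vector>

// O(N + K) "sort" for distinct integers in the range [0, K).
std::vector<int> flagSort(const std::vector<int>& values, int bound) {
    std::vector<bool> seen(bound, false);
    for (int v : values)
        seen[v] = true;                  // set a flag at each value's index
    std::vector<int> sorted;
    for (int i = 0; i < bound; ++i)      // scan indices in increasing order
        if (seen[i])
            sorted.push_back(i);
    return sorted;
}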

I had good experience with that kind of task using a skip list.
At least in my case it was about 5 times faster compared to adding everything to a list first and then running a sort over it at the end.

Related

List with amortized constant or logarithmic insertion: how fast can possibly be lookup?

Everybody knows (or should know) that it is impossible to design a list data structure that supports both O(1) insertion in the middle and O(1) lookup.
For instance, linked lists support O(1) insertion but O(N) lookup, while arrays support O(1) lookup but O(N) insertion (possibly amortized O(1) for insertion at the beginning, the end, or both).
However, suppose you are willing to trade O(1) insertion for:
Amortized O(1) insertion
O(log(N)) insertion
Then what is the theoretical bound for lookup in each of these cases? Do you know existing data structures? What about memory complexity?
Tree-based data structures, like a rope or finger tree, can often provide logarithmic insertion time at arbitrary positions. The tradeoff is in access time, which tends to also be logarithmic except in special cases, like the ends of a finger tree.
Dynamic arrays can provide amortized constant insertion at the ends, but insertion in the middle requires copying part of the array, and is O(N) in time, as you mention.
It's probably possible to implement a data structure which supports amortized constant middle insertion. If adding to either end, treat as a dynamic array. If inserting in the middle, keep the old array and add a new array "above" it which contains the new "middle" of the list, using the old array for data which is left or right of the middle. Access time would be logarithmic after your first middle insertion, and keeping track of what data was in which layer would quickly get complicated.
This might be the 'tiered' dynamic array mentioned in the Wikipedia article; I haven't researched it further.
I suspect the reason no one really uses a data structure like that is that inserting in the middle is rarely the case you most need to optimize for, and logarithmic insertion (using trees) is good enough for most real-world cases.
These are still open problems, but the best bounds that I am aware of are from Arne Andersson's Sublogarithmic searching without multiplications, which has insertions, deletions, and lookups in O(sqrt(lg n)). However, this comes at a cost of 2^k additional space, where k is the number of bits in the integers being stored in the data structure, hence the reason that we're still using balanced binary trees instead of Andersson's data structure. A variant of the data structure allows O(1) lookups, but then the additional space increases to n·2^k, where n is the number of elements in the data structure. A randomized variant doesn't use any additional space, but then the sqrt(lg n) insertion/deletion/lookup times become expected times instead of worst-case times.

Complexity of maintaining a sorted list vs inserting all values then sorting

Would the time and space complexity to maintain a list of numbers in sorted order (i.e. start with the first one and insert it; when the second one comes along, insert it in sorted order; and so on) be the same as inserting them as they appear and then sorting after all insertions have been made?
How do I make this decision? Can you demonstrate in terms of time and space complexity for 'n' elements?
I was thinking in terms of a phonebook: what is the difference between storing it in a set and presenting sorted data to the user each time he inserts a record, versus storing the phonebook records in sorted order in a TreeSet? What would it be for n elements?
Every time you insert into a sorted list and maintain its sortedness, it is O(log n) comparisons to find where to place it but O(n) movements to place it. Since we insert n elements this is O(n^2). But, I think that if you use a data structure that is designed for inserting sorted data into (such as a binary tree) and then do a pass at the end to turn it into a list/array, it is only O(n log n). On the other hand, using such a more complex data structure will use about O(n) additional space, whereas all other approaches can be done in place and use no additional space.
Every time you insert into an unsorted list it is O(1). Sorting it all at the end is O(n log n). This means overall it is O(n log n).
However, if you are not going to make lists of many elements (1000 or less) it probably doesn't matter what big-O it is, and you should either focus on what runs faster for small data sets, or not worry at all if it is not a performance issue.
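A rough way to compare the two approaches yourself; this is a sketch, not a rigorous benchmark, and the element count is arbitrary:

#include <algorithm>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <vector>

int main() {
    const int n = 50000;
    std::vector<int> input(n);
    for (int& v : input) v = std::rand();

    // Approach 1: keep the vector sorted after every insertion
    // (O(log n) search but O(n) shifting per insert).
    auto t0 = std::chrono::steady_clock::now();
    std::vector<int> maintained;
    for (int v : input)
        maintained.insert(
            std::upper_bound(maintained.begin(), maintained.end(), v), v);

    // Approach 2: append everything, then sort once at the end.
    auto t1 = std::chrono::steady_clock::now();
    std::vector<int> bulk(input);
    std::sort(bulk.begin(), bulk.end());
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::cout << "maintain sorted: " << ms(t1 - t0).count() << " ms\n"
              << "append + sort:   " << ms(t2 - t1).count() << " ms\n";
}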
It depends on what data structure you are inserting them into. If you are asking about inserting into an array, the answer is no. It takes O(n) space and time to store the n elements, and then O(n log n) to sort them, so O(n log n) total; whereas inserting each element into its sorted position in an array may require you to move Ω(n) elements, so it takes Θ(n^2). The same problem will be true with most "sequential" data structures. Sorry.
On the other hand, some priority queues such as lazy leftist heaps, Fibonacci heaps, and Brodal queues have O(1) insert. Meanwhile, a finger tree gives O(log n) insert and O(log n) access (amortized O(1) at the ends). Finger trees are as good as a linked list for what a linked list is good for, and as good as balanced binary search trees for what search trees are good for; they are kind of amazing.
There are going to be application-specific trade-offs to algorithm selection. The reasons one might use an insertion sort rather than some kind of offline sorting algorithm are enumerated on the Insertion Sort wikipedia page.
The determining factor here is less likely to be asymptotic complexity and more likely to be what you know about your data (e.g., is it likely to be already sorted?)
I'd go further, but I'm not convinced that this isn't a homework question asked verbatim.
Option 1
Insert at correct position in sorted order.
Time taken to find the position for the (i+1)-th element: O(log i)
Time taken to insert the (i+1)-th element and maintain order: O(i)
Space complexity: O(N)
Total comparisons: log 1 + log 2 + ... + log(N-1) = O(N log N); counting the O(i) element moves per insertion as well, the worst case on an array is O(N^2).
Understand that this is just the asymptotic complexity; the actual running time can be very different.
Option 2:
Insert element: O(1)
Sort elements: O(N log N)
Depending on the sort you employ the space complexity varies, though you can use something like quicksort, which doesn't need much space anyway.
In conclusion, though the comparison complexities are the same, the bounds are weak and mathematically you can come up with better ones. Also note that the worst case may never be encountered in practical situations; you will probably see only average cases all the time. If performance is such a vital issue in your application, you should test both sets of code on random samples. Do tell me which one works faster after your tests. My guess is option 1.

sorting algorithm suitable for a sorted list

I have a sorted list at hand. Now I add a new element to the end of the list. Which sorting algorithm is suitable for such a scenario?
Quicksort has a worst-case time complexity of O(n^2) when the list is already sorted. Does this mean the time complexity, if quicksort is used in the above case, will be close to O(n^2)?
If you are adding just one element, find the position where it should be inserted and put it there. For an array, you can do binary search for O(logN) time and insert in O(N). For a linked list, you'll have to do a linear search which will take O(N) time but then insertion is O(1).
As for your question on quicksort: if you choose the first value as your pivot, then yes, it will be O(N^2) in your case. Choose a random pivot and your case will still be O(N log N) on average. However, the method I suggest above is both easier to implement and faster in your specific case.
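For the array case, a sketch of the find-then-insert step using the standard library (the function name is illustrative):

#include <algorithm>
#include <vector>

// Insert a single value into an already-sorted vector.
// The binary search is O(log N); shifting elements makes the insert O(N).
void insertSorted(std::vector<int>& sorted, int value) {
    auto pos = std::upper_bound(sorted.begin(), sorted.end(), value);
    sorted.insert(pos, value);
}

// Example: with {1,2,3,4,7,8,9}, insertSorted(v, 5) yields 1 2 3 4 5 7 8 9.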
It depends on the implementation of the underlying list.
It seems to me that insertion sort will fit your needs except the case when the list is implemented as an array list. In this case too many moves will be required.
Rather than appending to the end of the list, you should do an insert operation.
That is, when adding 5 to [1,2,3,4,7,8,9] you'd want to "insert" it by putting it where it belongs in the sorted list, instead of putting it at the end and then re-sorting the whole list.
You can quickly find the position to insert the item by using a binary search.
This is basically how insertion sort works, except it operates on the entire list. This method will have better performance than even the best sorting algorithm, for a single item. It may also be faster than appending at the end of the list, depending on your implementation.
I'm assuming you're using an array, since you talk about quicksort, so just adding an element would involve finding the place to insert it (O(log n)) and then actually inserting it (O(n)) for a total cost of O(n). Just appending it to the end and then resorting the entire list is definitely the wrong way to go.
However, if this is to be a frequent operation (i.e. if you have to keep adding elements while maintaining the sorted property) you'll incur an O(n^2) cost of adding another n elements to the list. If you change your representation to a balanced binary tree, that drops to O(n log n) for another n inserts, but finding an element by index will become O(n). If you never need to do this, but just iterate over the elements in order, the tree is definitely the way to go.
Of possible interest is the indexable skiplist which, for a slight storage cost, has O(log n) inserts, deletes, searches and lookups-by-index. Give it a look, it might be just what you're looking for here.
What exactly do you mean by "list" ? Do you mean specifically a linked list, or just some linear (sequential) data structure like an array?
If it's linked list, you'll need a linear search for the correct position. The insertion itself can be done in constant time.
If it's something like an array, you can add to the end and sort, as you mentioned. A sorted collection is only bad for quicksort if the quicksort is really badly implemented. If you select your pivot with the typical median-of-3 algorithm, a sorted list will give optimal performance.
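For illustration, median-of-3 pivot selection amounts to this (a sketch; production quicksorts combine it with further safeguards):

#include <algorithm>

// Pick the median of the first, middle, and last elements as the pivot.
// On an already-sorted range the middle element is the exact median,
// which keeps quicksort's partitions balanced instead of degenerate.
int medianOfThree(int a, int b, int c) {
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}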

Why Binary Search Trees?

I was reading about binary search trees and was wondering: why do we need BSTs at all? All the things, as far as I know, can also be achieved using simple sorted arrays. E.g., in order to build a BST holding n elements, we need n·O(log n) time, i.e. O(n log n), and lookup time is O(log n). But this can also be achieved using an array: we can have a sorted array (requires O(n log n) time), and lookup in it is also O(log n), i.e. the binary search algorithm. Then why do we need another data structure at all? Are there any other uses/applications of BSTs which make them so special?
--Ravi
Arrays are great if you're talking about write-once, read-many-times types of interaction. It's when you get down to inserting, swapping, and deletion that BSTs really start to shine compared to an array. Since they're node-based, rather than based on a contiguous chunk of memory, moving an element either into or out of the collection is cheap while still maintaining the sorted nature of the collection.
Think of it as you would the difference in insertion between linked lists versus arrays. This is an oversimplification but it highlights an aspect of the advantage I've noted above.
Imagine you have an array with a million elements.
You want to insert an element at location 5.
So you insert at the end of the array and then sort.
Let's say the array is full; that's O(n log n). Taking log base 10 for round numbers, that's 1,000,000 * 6 = 6,000,000 operations.
Imagine you have a balanced tree.
That's O(log n), plus a bit for balancing = 6 + a bit, call it 10 operations.
So, you've just spent 6,000,000 ops sorting your array. You then want to find that element. What do you do? binary search - O(log n) - which is exactly the same as what you're going to do when you search in the tree!
Now imagine you want to allocate -another- element.
Your array is full! What do you do? Re-allocate the array with n extra elements and memcpy the lot? Do you really want to memcpy 4 MB?
In a tree, you just add another element...
How about sorted insertion time?
In graphics programming, if you have extended objects (i.e. ones that represent an interval in each dimension, not just a point), you can add them to the smallest level of a binary tree (typically an octree) where they fit entirely.
And if you don't pre-calculate the tree/sorted list, the O(n) random insertion time of a list can be prohibitively slow. Insertion into a tree, on the other hand, is only O(log n).

Is it faster to sort a list after inserting items or adding them to a sorted list

If I have a sorted list (sorted with, say, quicksort) and a lot of values to add, is it better to suspend sorting and add them to the end, then sort, or to use a binary chop to place the items correctly while adding them? Does it make a difference if the items are random or already more or less in order?
If you add enough items that you're effectively building the list from scratch, you should be able to get better performance by sorting the list afterwards.
If items are mostly in order, you can tweak both incremental update and regular sorting to take advantage of that, but frankly, it usually isn't worth the trouble. (You also need to be careful of things like making sure some unexpected ordering can't make your algorithm take much longer, q.v. naive quicksort)
Both incremental update and regular list sort are O(N log N), but you can get a better constant factor sorting everything afterward (I'm assuming here that you've got some auxiliary data structure so your incremental update can access list items faster than O(N)...). Generally speaking, sorting all at once has a lot more design freedom than maintaining the ordering incrementally, since incremental update has to maintain a complete order at all times, but an all-at-once bulk sort does not.
If nothing else, remember that there are lots of highly-optimized bulk sorts available.
Usually it's far better to use a heap. In short, it splits the cost of maintaining order between the pusher and the picker. Both operations are O(log n), instead of the O(n log n) of most other solutions.
If you're adding in bunches, you can use a merge sort. Sort the list of items to be added, then copy from both lists, comparing items to determine which one gets copied next. You could even copy in place if you resize your destination array and work from the end backwards.
The efficiency of this solution is O(n+m) + O(m log m) where n is the size of the original list, and m is the number of items being inserted.
Edit: Since this answer isn't getting any love, I thought I'd flesh it out with some C++ sample code. I assume that the sorted list is kept in a linked list rather than an array. This changes the algorithm to look more like an insertion than a merge, but the principle is the same.
#include <algorithm>
#include <list>
#include <vector>

// Note that itemstoadd is modified (sorted) as a side effect of this function.
template<typename T>
void AddToSortedList(std::list<T>& sortedlist, std::vector<T>& itemstoadd)
{
    std::sort(itemstoadd.begin(), itemstoadd.end());
    typename std::list<T>::iterator listposition = sortedlist.begin();
    typename std::vector<T>::iterator nextnewitem = itemstoadd.begin();
    while (nextnewitem != itemstoadd.end())
    {
        // Insert before the first existing element that is not smaller,
        // or at the end of the list once we run out of existing elements.
        if (listposition == sortedlist.end() || *nextnewitem < *listposition)
            sortedlist.insert(listposition, *nextnewitem++);
        else
            ++listposition;
    }
}
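Usage sketch (assuming the function above):

std::list<int> sorted{1, 3, 5, 7};
std::vector<int> incoming{6, 2, 4};
AddToSortedList(sorted, incoming);   // sorted now holds 1 2 3 4 5 6 7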
I'd say, let's test it! :)
I tried with quicksort, but sorting an almost sorted array with quicksort is... well, not really a good idea. I tried a modified one, cutting off at 7 elements and using insertion sort below that. Still, horrible performance. I switched to merge sort. It might need quite a lot of memory for sorting (it's not in-place), but the performance is much better on sorted arrays and almost identical on random ones (the initial sort took almost the same time for both; quicksort was only slightly faster).
This already shows one thing: the answer to your question depends strongly on the sorting algorithm you use. If it has poor performance on almost-sorted lists, inserting at the right position will be much faster than adding at the end and then re-sorting; and merge sort might be no option for you, as it might need far too much external memory if the list is huge. BTW, I used a custom merge sort implementation that only uses half the external storage of the naive implementation (which needs as much external storage as the array itself).
If merge sort is no option and quicksort is no option for sure, the best alternative is probably heap sort.
My results are: adding the new elements simply at the end and then re-sorting the array was several orders of magnitude faster than inserting them in the right position. However, my initial array had 10 million elements (sorted) and I was adding another million (unsorted). So if you add 10 elements to an array of 10 million, inserting them correctly is much faster than re-sorting everything. So the answer to your question also depends on how big the initial (sorted) array is and how many new elements you want to add to it.
In principle, it's faster to create a tree than to sort a list. The tree inserts are O(log n) each, leading to O(n log n) overall; sorting is also O(n log n).
That's why Java has TreeMap and TreeSet, in addition to the ArrayList and LinkedList implementations of List.
A TreeSet keeps things in comparison order; the ordering is defined by the Comparable interface.
A LinkedList keeps things in the insertion order.
An ArrayList uses more memory but is faster for some operations.
A TreeMap, similarly, removes the need to sort by a key. The map is built in key order during the inserts and maintained in sorted order at all times.
However, for some reason, the Java implementation of TreeSet is quite a bit slower than using an ArrayList and a sort.
[It's hard to speculate as to why it would be dramatically slower, but it is. It should be slightly faster by one pass through the data. This kind of thing is often the cost of memory management trumping the algorithmic analysis.]
It's about the same. Inserting an item into a sorted list is O(log N), and doing this for every element in the list, N (thus building the list), would be O(N log N), which is the speed of quicksort (or merge sort, which is closer to this approach).
If you instead inserted them onto the front it would be O(1), but doing a quicksort after, it would still be O(N log N).
I would go with the first approach, because it has the potential to be slightly faster. If the initial size of your list, N, is much greater than the number of elements to insert, X, then the insert approach is O(X log N). Sorting after inserting at the head of the list is O(N log N). If N = 0 (i.e. your list is initially empty), the two are the same speed.
Inserting an item into a sorted list takes O(n) time, not O(log n) time. You have to find the place to put it, taking O(log n) time, but then you have to shift over all the elements past it, taking O(n) time. So inserting while maintaining sortedness is O(n^2), whereas inserting them all and then sorting is O(n log n).
Depending on your sort implementation, you can get even better than O(n log n) if the number of inserts is much smaller than the list size. But if that is the case, it doesn't matter either way.
So do the insert all and sort solution if the number of inserts is large, otherwise it probably won't matter.
If the list is a) already sorted, and b) dynamic in nature, then inserting into a sorted list should always be faster (find the right place (O(n)) and insert (O(1))).
However, if the list is static, then a shuffle of the remainder of the list has to occur (O(n) to find the right place and O(n) to slide things down).
Either way, inserting into a sorted list (or something like a Binary Search Tree) should be faster.
O(n) + O(n) should always be faster than O(n log n).
At a high level, it's a pretty simple problem, because you can think of sorting as just iterated searching. When you want to insert an element into an ordered array, list, or tree, you have to search for the point at which to insert it. Then you put it in, at hopefully low cost. So you could think of a sort algorithm as taking a bunch of things and, one by one, searching for the proper position and inserting them. Thus, an insertion sort (O(n^2)) is an iterated linear search (O(n)). Tree, heap, merge, radix, and quick sort (O(n log n)) can be thought of as iterated binary search (O(log n)). It is possible to have an O(n) sort, if the underlying search is O(1), as in an ordered hash table. (An example of this is sorting 52 cards by flinging them into 52 bins.)
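The card example is counting sort in miniature. A sketch for small integer keys (assuming keys in [0, numBins); unlike the flag-array version earlier, counts preserve duplicates):

#include <vector>

// Counting sort: O(N + K), where K is the number of possible key values.
std::vector<int> countingSort(const std::vector<int>& keys, int numBins) {
    std::vector<int> count(numBins, 0);
    for (int k : keys)
        ++count[k];                        // "fling each card into its bin"
    std::vector<int> sorted;
    for (int bin = 0; bin < numBins; ++bin)
        sorted.insert(sorted.end(), count[bin], bin);  // collect bins in order
    return sorted;
}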
So the answer to your question is, inserting things one at a time, versus saving them up and then sorting them should not make much difference, in a big-O sense. You could of course have constant factors to deal with, and those might be significant.
Of course, if n is small, like 10, the whole discussion is silly.
You should add them first and then use a radix sort; this should be optimal.
http://en.wikipedia.org/wiki/Radix_sort#Efficiency
(If the list you're talking about is like C#'s List<T>.) Adding a few values at the right positions in a sorted list with many values is going to require fewer operations; but if the number of values being added becomes large, it will require more.
I would suggest using not a list but a more suitable data structure in your case, like a binary tree, for example: a sorted data structure with minimal insertion time.
If this is .NET and the items are integers, it's quicker to add them to a SortedDictionary (or, on .NET 4.0 or above, a SortedSet if you don't mind losing duplicates). This keeps the items sorted as you go. I think strings would work the same way. The beauty is you get O(log n) insertion with sorted iteration for free this way.
Inserting an item into a sorted list is O(log n), while sorting a list is O(n log n),
which would suggest that it's always better to sort first and then insert.
But remember, big O only concerns how the speed scales with the number of items; it might be that for your application an insert in the middle is expensive (e.g. if it's a vector), and so appending and sorting afterwards might be better.
