Sorting algorithm with quadratic and linear runtime

Is it possible to make a sorting algorithm whose worst-case running time is quadratic (n^2), but whose running time is linear (n) in most cases (that is, on more than half of all inputs of size n)?
I was thinking about Radix Sort and just making the worst case worse, but I do not know if it is possible.

Yes, Bucket Sort does this.
https://www.geeksforgeeks.org/bucket-sort-2/?ref=lbp
You can read about the algorithm in the link.
In the worst case, we could have a bucket which contains all n values of the array. Since insertion sort has worst-case running time O(n^2), so does Bucket Sort. We can avoid this by using merge sort to sort each bucket instead, which has worst-case running time O(n lg n).
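To make that concrete, here is a minimal Python sketch of the idea (my own illustration, not taken from the linked article), assuming the keys are floats roughly uniform in [0, 1): with one bucket per element the expected work is linear, but if everything lands in a single bucket the insertion sort inside degrades to O(n^2).

    import random

    def bucket_sort(values, num_buckets=None):
        """Bucket sort for floats assumed to lie in [0, 1).

        With uniformly distributed input the expected cost is O(n);
        if all values land in one bucket, the per-bucket insertion
        sort degrades to O(n^2).
        """
        n = len(values)
        if n == 0:
            return []
        if num_buckets is None:
            num_buckets = n

        buckets = [[] for _ in range(num_buckets)]
        for v in values:
            # Map a value in [0, 1) to a bucket index.
            buckets[min(int(v * num_buckets), num_buckets - 1)].append(v)

        result = []
        for bucket in buckets:
            # Insertion sort each (hopefully tiny) bucket.
            for i in range(1, len(bucket)):
                key = bucket[i]
                j = i - 1
                while j >= 0 and bucket[j] > key:
                    bucket[j + 1] = bucket[j]
                    j -= 1
                bucket[j + 1] = key
            result.extend(bucket)
        return result

    if __name__ == "__main__":
        data = [random.random() for _ in range(1000)]
        assert bucket_sort(data) == sorted(data)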

Yes, it is possible.
The analysis of Bucket Sort does reveal such behaviour (with a reasonable number of buckets).

Best runtime for n-1 comparisons?

If an algorithm must make n-1 comparisons to find a certain element, can we then assume that the best possible runtime of the algorithm is O(n)?
I know that the lower bound for sorting algorithms is n log n, but since we only return the one element that was found, I figured it would be possible to do better in terms of run time?
Thanks!
To find a certain element in an unsorted list you need O(n).
But if you sort the array (which takes O(n log n) in general) you can find a certain element in O(log n).
So if you need to look up elements in the same list many times, it is most likely worth sorting the list first so that each lookup becomes much more efficient.
If your array is unsorted and you search for some element in it, then in the worst case a linear search makes n-1 comparisons and the time complexity is O(n).
But if you want to reduce the time complexity, first sort your array and then use binary search, which takes O(log n) in the worst case.
So binary search is more efficient than linear search.
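A small sketch of that trade-off in Python, using the standard bisect module (the helper names linear_search and binary_search are mine):

    from bisect import bisect_left

    def linear_search(items, target):
        """O(n): scan until the target is found."""
        for i, item in enumerate(items):
            if item == target:
                return i
        return -1

    def binary_search(sorted_items, target):
        """O(log n): requires the list to be sorted first (O(n log n))."""
        i = bisect_left(sorted_items, target)
        if i < len(sorted_items) and sorted_items[i] == target:
            return i
        return -1

    if __name__ == "__main__":
        data = [42, 7, 19, 3, 25]
        print(linear_search(data, 19))          # index in the unsorted list
        print(binary_search(sorted(data), 19))  # index in the sorted copy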
For unsorted elements, worst case is when you have to go over all the elements, i.e., O(N). If you need many look-ups then you have several pre-processing alternatives that speed up all future accesses.
Option 1: put the elements in a standard hash table. Creating the hash table costs O(N) on average, and afterwards each lookup costs O(1) on average. This assumes that a reasonable hash function can be created for this type of element.
Most languages/libraries implement bucket-based hash-tables, which in pathological cases can put all elements in one bucket, costing O(N) per lookup.
Option 2: there are other hash-table implementations that don't suffer from pathological O(N) cases. The Robin Hood hashing (Wikipedia) (more at Programming.Guide) guarantees O(log N) lookup in the worst case, with average of O(1).
Option 3: another option is to sort elements in O(N log N) once, and then use binary-search to lookup in O(log N). Usually this is slower than Robin Hood hashing (Option 2).
Option 4: If the values are simple integers with limited range, with max-min around N, then it is possible to put the values in an array (list), such that array[value-min] will contain a count of how many times the value appears in the input. It costs O(N) to construct, and O(1) to lookup. Better, the constants for both preprocessing and lookup are significantly lower than in any other method.
Note: I didn't mention the O(N) counting-sort as an alternative to the general case of O(N log N) sorting (option 3), since if max(value)-min(value) is small enough for counting-sort, then option 4 is relevant and is simpler and faster.
If applicable, choose option 4; otherwise, if you are willing to invest time and code, choose option 2. If 4 isn't applicable and 2 is not worth the effort in your case, then choose option 1 if you don't mind the pathological worst case (never choose option 1 when an adversary may want to harm you in a DoS attack).
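For illustration, a minimal sketch of option 4 in Python (the helper names build_count_index and count_of are mine, and the values are assumed to be small integers with a limited range):

    def build_count_index(values):
        """Option 4 sketch: O(N) to build an array of counts so that
        membership/count lookups are O(1) afterwards."""
        lo, hi = min(values), max(values)
        counts = [0] * (hi - lo + 1)
        for v in values:
            counts[v - lo] += 1
        return counts, lo

    def count_of(counts, lo, value):
        """O(1) lookup: how many times `value` appears in the original input."""
        i = value - lo
        if 0 <= i < len(counts):
            return counts[i]
        return 0

    if __name__ == "__main__":
        counts, lo = build_count_index([5, 3, 5, 9, 7])
        print(count_of(counts, lo, 5))  # 2
        print(count_of(counts, lo, 4))  # 0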
Your question has nothing to do with sorting, let alone linear search.
If you claim that n-1 comparisons are mandated, then your problem certainly has complexity Ω(n). But with that information alone, you can't guarantee O(n), because it is not said that these n-1 comparisons are sufficient, nor that the algorithm does not perform extra operations, for instance to decide which comparisons to perform. It could turn out that your algorithm is O(n³) with no chance to do better, but we can't tell.
Best case complexity: Ω(n).
Worst case complexity: unknown.

How to generate worst case data for Graham Scan

I know that the worst-case running time of Graham scan is O(n log n), but I am not sure how to generate the worst-case data. From what I understood, this occurs at the step where the points are being sorted, so does that mean I should generate worst-case data for the sorting algorithm I used?
Any help would be appreciated.
Yes, as Matt notes, you need to generate a worst case for the sorting algorithm, since the rest of the algorithm runs in worst-case linear time. This sorting algorithm should be a comparison sort; otherwise, the lower bound may not be valid.
Unfortunately, without knowing the sorting algorithm, it's difficult to point to specific inputs that trigger the worst case. Some sorts, such as quicksort and mergesort, are best-case Θ(n log n). Others, like Timsort and smoothsort, have linear-time best cases. Moreover, given any linear-time procedure that takes a length (in unary) and returns a permutation, there's a sorting algorithm that runs in linear time on those specific permutations, by checking whether the input is permuted that way and then falling back to mergesort if necessary.
The best I can do for an unspecified algorithm is to suggest that you choose a uniform random permutation, since every correct comparison sort averages Ω(n log n) time on this input distribution.
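For example, one way to act on that suggestion (my own sketch; random_circle_points is a made-up helper, not from any library) is to place the points on a circle in uniformly random angular order, so the angle-sorting step sees a uniform random permutation and no point can be discarded before the sort:

    import math
    import random

    def random_circle_points(n, radius=1.0, seed=None):
        """n points on a circle, in uniformly random angular order.

        All points lie on the hull, so Graham scan cannot drop any of
        them early, and its sorting step sees a random permutation.
        """
        rng = random.Random(seed)
        angles = [2.0 * math.pi * i / n for i in range(n)]
        rng.shuffle(angles)
        return [(radius * math.cos(a), radius * math.sin(a)) for a in angles]

    if __name__ == "__main__":
        pts = random_circle_points(10, seed=42)
        print(pts[:3])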

Time Complexity to Sort a K Sorted Array Using Quicksort Algorithm

Problem:
I have to analyze the time complexity to sort (using Quick sort) a list of integer values which are almost sorted.
What I have done?
I have read SO Q1, SO Q2, SO Q3 and this one.
However, I have not found anything which explicitly mentions the time complexity of sorting a k-sorted array using Quick sort.
Since the time complexity of Quick sort depends on the strategy for choosing the pivot, and there is a chance of hitting the worst case because the data is almost sorted, to avoid the worst case I have used the median of three values (first, middle, last) as the pivot, as referred here.
What do I think?
Since in the average case the time complexity of Quick sort is O(n log(n)), and as mentioned here, "For any non trivial value of n, a divide and conquer algorithm will need many O(n) passes, even if the array be almost completely sorted",
I think the time complexity of sorting a k-sorted array using Quick sort is O(n log(n)), if the worst case does not occur.
My Question:
Am I right that the time complexity of sorting a k-sorted array using Quick sort is O(n log(n)), provided I try to avoid the worst case by selecting a proper pivot and the worst case does not occur?
When you talk about the time complexity of Quick Sort, it is O(n^2), because the worst case is assumed by default. Even if you use another strategy to choose the pivot, like randomized Quick Sort, your time complexity is still O(n^2) by default; the expected time complexity, however, is O(n log(n)), since the occurrence of the worst case is highly unlikely. So if you can somehow prove that the worst case is guaranteed not to happen, then you can say the time complexity is less than O(n^2); otherwise, by default, the worst case is considered, no matter how unlikely.
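For reference, a rough Python sketch of the median-of-three strategy described in the question (the function name is mine); it behaves well on sorted and almost-sorted inputs, but it does not remove the O(n^2) worst case for adversarial inputs:

    def quicksort_median_of_three(a, lo=0, hi=None):
        """Quicksort sketch with a median-of-three pivot (first, middle, last)."""
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        mid = (lo + hi) // 2
        # Order a[lo], a[mid], a[hi]; the median ends up at a[mid].
        if a[mid] < a[lo]:
            a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]:
            a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]:
            a[mid], a[hi] = a[hi], a[mid]
        pivot = a[mid]
        # Lomuto-style partition around the pivot value.
        a[mid], a[hi] = a[hi], a[mid]       # park the pivot at the end
        store = lo
        for i in range(lo, hi):
            if a[i] < pivot:
                a[i], a[store] = a[store], a[i]
                store += 1
        a[store], a[hi] = a[hi], a[store]   # put the pivot in its final place
        quicksort_median_of_three(a, lo, store - 1)
        quicksort_median_of_three(a, store + 1, hi)

    if __name__ == "__main__":
        data = [5, 1, 4, 2, 8, 0, 2]
        quicksort_median_of_three(data)
        assert data == sorted(data)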

What is the appropriate data structure for insertion sort?

I revisited the insertion sort algorithm and noticed something funny.
One obviously shouldn't use an array with this sort, as upon insertion one would have to shift all subsequent elements, giving O(n^2 log(n)). However, a linked list is also not good here, since we would preferably find the right placement using binary search, which isn't possible for a simple linked list (so we end up with O(n^2)).
Which makes me wonder: what is a data structure on which this sorting algorithm provides its premise of O(n log(n)) complexity?
Where did you get the premise of O(n log n)? Wikipedia disagrees, as does my own experience. The steps of insertion sort include components that are O(n) for each of the n elements.
Also, I believe that your claim of O(n^2 log n) is incorrect. The binary search is log n, and the ensuing "move sideways" is n, but these two steps happen in succession, not nested, so the cost per insertion is n + log n, not a product. The result is the expected O(n^2).
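A small Python sketch of that point, using the standard bisect module (the function name is mine): binary search saves comparisons, but the shift still costs O(i) per insertion, so the total stays O(n^2).

    from bisect import bisect_right

    def binary_insertion_sort(a):
        """Insertion sort on an array, using binary search to find each
        insertion point: O(log i) to search, but still O(i) to shift."""
        for i in range(1, len(a)):
            key = a[i]
            pos = bisect_right(a, key, 0, i)   # O(log i) search
            a[pos + 1:i + 1] = a[pos:i]        # O(i) shift to make room
            a[pos] = key
        return a

    if __name__ == "__main__":
        print(binary_insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]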
If you use a gapped array and a binary search to figure out where to insert things, then with high probability your sort will be O(n log(n)). See https://en.wikipedia.org/wiki/Library_sort for details.
However this is not as efficient as a wide variety of other sorts that are widely implemented. So this knowledge is only of theoretical interest.
Insertion sort is defined over an array or list; if you use some other data structure, then it is another algorithm.
Of course, if you use a BST, insertion and search are O(log(n)) and your overall complexity is O(n log(n)) on average (recall that it is O(n^2) in the worst case), but then it is no longer an insertion sort but a tree sort. If you use an AVL tree, then you get the O(n log(n)) worst-case complexity.
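To illustrate, a minimal tree sort sketch in Python with an unbalanced BST (the names Node and tree_sort are mine); the lack of balancing is exactly why the worst case stays O(n^2):

    class Node:
        """Minimal unbalanced BST node for the tree sort sketch."""
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def _insert(node, key):
        if node is None:
            return Node(key)
        if key < node.key:
            node.left = _insert(node.left, key)
        else:
            node.right = _insert(node.right, key)
        return node

    def _inorder(node, out):
        if node is not None:
            _inorder(node.left, out)
            out.append(node.key)
            _inorder(node.right, out)

    def tree_sort(values):
        """O(n log n) on average, O(n^2) in the worst case (unbalanced tree)."""
        root = None
        for v in values:
            root = _insert(root, v)
        out = []
        _inorder(root, out)
        return out

    if __name__ == "__main__":
        print(tree_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]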
In insertion sort the best-case scenario is when the sequence is already sorted, which takes linear time, while the worst case takes O(n^2) time. I do not know how you got the logarithmic part in the complexity.

On the efficiency of tries and radix sort

Radix sort's time complexity is O(kn) where n is the number of keys to be sorted and k is the key length. Similarly, the time complexity for the insert, delete, and lookup operations in a trie is O(k). However, assuming all elements are distinct, isn't k>=log(n)? If so, that would mean Radix sort's asymptotic time complexity is O(nlogn), equal to that of quicksort, and trie operations have a time complexity of O(logn), equal to that of a balanced binary search tree. Of course, the constant factors may differ significantly, but the asymptotic time complexities won't. Is this true, and if so, do radix sort and tries have other advantages over other algorithms and data structures?
Edit:
Quicksort and its competitors perform O(nlogn) comparisons; in the worst case each comparison takes O(k) time (the keys differ only at the last digit checked). Therefore, those algorithms take O(knlogn) time. By that same logic, balanced binary search tree operations take O(klogn) time.
Big O notation is not used that way. Even if k >= log n for radix sorting, O(kn) means that your processing time will double if n doubles, and so on; this is how you should read big-O notation.
One advantage of radix sort is that its worst case is O(kn) (quicksort's is O(n^2)), so radix sort is somewhat more resistant to malicious input than quicksort. It can also be really fast in terms of real performance if you use bitwise operations, a power of 2 as the base, and an in-place MSD radix sort with insertion sort for smaller arrays.
The same argument is valid for tries: they are resistant to malicious input in the sense that insertion/search is O(k) in the worst case. Hash tables perform insertion/search in O(1) on average, but with O(k) hashing, and with O(N) insertion/search in the worst case. Also, tries can store strings more efficiently.
Check Algorithmic Complexity Attacks
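A minimal trie sketch in Python (my own illustration, not from any particular library) that makes the O(k) worst-case insert/search explicit:

    class TrieNode:
        """Trie node: insert/search cost O(k) for a key of length k,
        independent of how many keys are stored."""
        def __init__(self):
            self.children = {}
            self.is_end = False

    class Trie:
        def __init__(self):
            self.root = TrieNode()

        def insert(self, key):
            node = self.root
            for ch in key:            # one step per character: O(k)
                node = node.children.setdefault(ch, TrieNode())
            node.is_end = True

        def search(self, key):
            node = self.root
            for ch in key:            # again O(k), even in the worst case
                node = node.children.get(ch)
                if node is None:
                    return False
            return node.is_end

    if __name__ == "__main__":
        t = Trie()
        t.insert("radix")
        print(t.search("radix"), t.search("rad"))  # True False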
The asymptotic time complexity of Radix sort is O(N log N) under your assumption that k >= log N, which is also the time complexity of Quick sort. The advantage of Radix sort is that its best, average and worst-case performance is the same, whereas the worst-case performance of Quick sort is O(N^2). But it takes twice the space required by Quick sort. So, if space complexity is not a problem, then Radix sort is a better option.
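To give a concrete feel for the O(kN) bound, here is a simple LSD radix sort sketch for non-negative integers (a bucket-based version of my own, not the in-place MSD variant mentioned in the other answer):

    def radix_sort(values, base=256):
        """LSD radix sort sketch for non-negative integers.

        Runs in O(k * n), where k is the number of base-`base` digits in
        the largest key; with base a power of two, digit extraction is
        just shifts and masks.
        """
        if not values:
            return []
        result = list(values)
        shift = 0
        mask = base - 1                       # base is assumed to be a power of two
        while (max(values) >> shift) > 0:
            buckets = [[] for _ in range(base)]
            for v in result:
                buckets[(v >> shift) & mask].append(v)
            result = [v for bucket in buckets for v in bucket]
            shift += base.bit_length() - 1    # move to the next digit
        return result

    if __name__ == "__main__":
        data = [170, 45, 75, 90, 802, 24, 2, 66]
        assert radix_sort(data) == sorted(data)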
