I'm beginning to learn Data Structure and Algorithms with UCSD's MOOC.
For the second problem, they ask us to implement an algorithm to find the two highest values in an array.
As an additional problem, they add the following exercise:
Exercise Break. Find two largest elements in an array in 1.5n comparisons.
I don't know exactly what 1.5n comparisons means. I've searched on Google but couldn't find an explanation of counting comparisons in algorithms.
Is there a site with some examples of comparisons?
It's talking about the cost of the algorithm measured in comparisons between array elements.
You have to give an algorithm that makes at most about 1.5n (i.e. 3n/2) comparisons in the worst case.
Just as an example, the bubble sort algorithm makes O(n²) comparisons in the worst case.
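To make that concrete, here is a rough sketch of one pair-based approach (an illustration, not the course's official solution): handle the elements two at a time, spending at most 3 comparisons per pair, which gives roughly 3n/2 comparisons in the worst case.

    def two_largest(a):
        # Return (largest, second largest) of a using roughly 1.5n comparisons.
        assert len(a) >= 2
        # Initialise from the first pair (1 comparison).
        if a[0] >= a[1]:
            max1, max2 = a[0], a[1]
        else:
            max1, max2 = a[1], a[0]
        i = 2
        while i + 1 < len(a):
            x, y = a[i], a[i + 1]
            hi, lo = (x, y) if x >= y else (y, x)   # comparison 1: order the pair
            if hi > max1:                           # comparison 2
                # The old maximum drops down and competes with lo for second place.
                max2 = max1 if max1 >= lo else lo   # comparison 3
                max1 = hi
            elif hi > max2:                         # comparison 3 (other branch)
                max2 = hi
            i += 2
        if i < len(a):                              # odd leftover element: at most 2 more
            x = a[i]
            if x > max1:
                max1, max2 = x, max1
            elif x > max2:
                max2 = x
        return max1, max2

Counting them up: 1 comparison for the first pair plus 3 for each of the remaining (n-2)/2 pairs gives 1.5n - 2 comparisons for even n, compared with up to 2n - 3 for the straightforward "track the top two while scanning" approach.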
I have never come across any utility or code that uses bubble sort other than tutorials and the classroom. Is there any specific case where one might want to use it in an application?
Thanks in advance.
It's good for very small data sets. See this link for a number of answers.
What is a bubble sort good for?
Two answers stand out:
workmad3 wrote:
I came across a great use for it in an optimisation anecdote recently. A program needed a set of sprites sorted in depth order each frame. The sprites' order wouldn't change much between frames, so as an optimisation they were bubble sorted with a single pass each frame. This was done in both directions (top to bottom and bottom to top). So the sprites were always almost sorted with a very efficient O(N) algorithm.
Tetha wrote:
we recently used bubblesort in an optimality proof for an algorithm. We had to transform an arbitrary optimal solution represented by a sequence of objects into a solution that was found by our algorithm. Because our algorithm was just "Sort by this criteria", we had to prove that we can sort an optimal solution without making it worse. In this case, bubble sort was a very good algorithm to use, because it has the nice invariant of just swapping two elements that are next to each other and are in the wrong order. Using more complicated algorithms there would have melted brains, I think.
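For anyone curious what the sprite trick looks like in code, here is a minimal sketch (the names and the depth key are illustrative, not from the original program): one forward bubble pass per frame keeps an almost-sorted list almost sorted at O(n) cost per call.

    def bubble_pass(items, key):
        # One forward bubble pass: swap adjacent items that are out of order.
        for i in range(len(items) - 1):
            if key(items[i]) > key(items[i + 1]):
                items[i], items[i + 1] = items[i + 1], items[i]

    # Sprites sorted by depth, only slightly perturbed since the last frame.
    sprites = [{"name": "a", "depth": 1}, {"name": "c", "depth": 5}, {"name": "b", "depth": 3}]
    bubble_pass(sprites, key=lambda s: s["depth"])
    # (The anecdote above also ran a second pass in the opposite direction each frame.)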
This site already has some questions on this topic but I am confused after reading some of the answers.
https://cs.stackexchange.com/questions/20/evaluating-the-average-time-complexity-of-a-given-bubblesort-algorithm
In the above link, the answer by "Joe" says that the average number of swaps in bubble sort is the same as the average number of inversions, which is n(n-1)/4.
However, Insertion sort vs Bubble Sort Algorithms says that in bubble sort the average number of swaps is n²/2, while in insertion sort it is n²/4, and that this is the reason insertion sort is better than bubble sort.
Which one is correct? Can someone please help me?
Your first link counts the expected number of inversions (i.e. swaps) assuming a uniform distribution.
Your second link is counting iterations, i.e. inspections (comparisons) of elements, not swaps.
Both are correct.
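If it helps, a quick empirical check (a sketch using a textbook bubble sort, not code from either linked answer) shows the average swap count landing on n(n-1)/4 while the comparison count stays near n²/2:

    import random

    def bubble_sort_counts(a):
        # Textbook bubble sort that counts comparisons and swaps.
        a = a[:]
        comparisons = swaps = 0
        for end in range(len(a) - 1, 0, -1):
            for i in range(end):
                comparisons += 1
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swaps += 1
        return comparisons, swaps

    n, trials = 100, 200
    results = [bubble_sort_counts(random.sample(range(10_000), n)) for _ in range(trials)]
    avg_comparisons = sum(c for c, _ in results) / trials
    avg_swaps = sum(s for _, s in results) / trials
    print(avg_comparisons, n * n / 2)        # comparisons: close to n^2/2
    print(avg_swaps, n * (n - 1) / 4)        # swaps: close to n(n-1)/4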
This was an interview question and I am wondering if my analysis was correct:
A 'magic select' function basically returns the mth smallest value in an array of size n. The task was to sort those m smallest elements in ascending order using an efficient algorithm. My analysis was to first use the 'magic select' function to get the mth smallest value. I then used a partition function with that value as the pivot, so that all smaller elements end up on its left. After that point, I felt that a bucket sort should accomplish the task of sorting that left part efficiently.
I was just wondering if this was the best way to sort the m smallest elements. I see the possibility of quicksort being used here too. However, I thought that avoiding a comparison-based sort could lead to O(n) overall. Could radix sort or heap sort (O(n log n)) be used for this too? If I didn't do it in the best possible way, what would be the best way to accomplish this? An array was the data structure I was allowed to use.
Many thanks!
I'm pretty sure you can't do any better than a standard algorithm for selecting the m smallest elements of an array in sorted order. The time complexity of your 'magic select' function is O(n), which is the same time complexity you'd get from a standard selection algorithm like median-of-medians or quickselect.
Consequently, your approaches seem very reasonable. I doubt you can do any better asymptotically.
Hope this helps!
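For what it's worth, here is a short sketch of the approach the asker describes, in Python. The magic_select argument stands in for the interview's hypothetical O(n) function that returns the mth smallest value (its signature here is my assumption), and an ordinary comparison sort is used for the final O(m log m) step:

    def sort_m_smallest(arr, m, magic_select):
        # Overall cost: O(n) to pull out the m smallest values, O(m log m) to sort them.
        pivot = magic_select(arr, m)                 # hypothetical O(n) 'magic select'
        smaller = [x for x in arr if x < pivot]      # strictly smaller than the pivot
        equal = [x for x in arr if x == pivot]       # copies of the pivot (handles duplicates)
        front = (smaller + equal)[:m]                # exactly the m smallest values
        front.sort()                                 # comparison sort on just m items
        return front

    # Demo with a stand-in for magic_select (sorting defeats the purpose, but shows the shape):
    print(sort_m_smallest([9, 3, 7, 1, 8, 2], 3, lambda a, k: sorted(a)[k - 1]))   # [1, 2, 3]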
This is an interview question that I recently found on Internet:
If you are going to implement a function which takes an integer array as input and returns the maximum, would you use bubble sort or merge sort to implement this function? What if the array size is less than 1000? What if it is greater than 1000?
This is how I think about it:
First, it is really weird to use sorting to implement this function. You can just go through the array once and find the maximum.
Second, if I have to make a choice between the two, then bubble sort is better: you don't have to run the whole bubble sort procedure, only the first pass. That is better than merge sort in both time and space.
Are there any mistakes in my answer? Did I miss anything?
It's a trick question. If you just want the maximum, (or indeed, the kth value for any k, which includes finding the median), there's a perfectly good O(n) algorithm. Sorting is a waste of time. That's what they want to hear.
As you say, the algorithm for the maximum is really trivial. To ace a question like this, you should have the quick-select algorithm ready, and also be able to suggest a heap data structure in case you need to mutate the list of values and still be able to produce the maximum rapidly.
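As a rough illustration of that advice in Python (the standard heapq module supplies the heap; the specific numbers and variable names are mine, not the interviewer's):

    import heapq

    nums = [7, 2, 9, 4, 1]

    # A single maximum query: one linear scan, O(n).
    maximum = max(nums)

    # kth largest (k = 3 here): heapq.nlargest is a heap-based selection, O(n log k);
    # a hand-rolled quick-select would give expected O(n).
    third_largest = heapq.nlargest(3, nums)[-1]

    # If the list keeps changing and you repeatedly need the current maximum,
    # keep a heap: O(log n) per insert, O(1) to peek. heapq is a min-heap,
    # so store negated values to simulate a max-heap.
    heap = [-x for x in nums]
    heapq.heapify(heap)            # O(n) to build
    heapq.heappush(heap, -11)      # add a new value
    current_max = -heap[0]         # peek at the maximum in O(1)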
I just googled the algorithms. Bubble sort wins in both situations because of its biggest benefit here: you only have to run through the array once. Merge sort can't take any shortcuts when all you want is the largest number: it always splits the list at the middle, recursively sorts the left half and the right half, and then merges them, so every element takes part in the full O(n log n) amount of comparison work whether or not you care about where it ends up. Bubble sort, cut down to a single pass, would dominate.
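A minimal sketch of that "single pass" idea (illustrative only; in practice a plain linear scan or max() is clearer):

    def max_via_one_bubble_pass(a):
        # One forward bubble pass carries the maximum to the last position,
        # using exactly n - 1 comparisons, the same count as a plain linear scan.
        a = a[:]                      # don't mutate the caller's list
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
        return a[-1]

    print(max_via_one_bubble_pass([3, 9, 4, 7, 1]))   # 9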
Firstly, I agree with everything you have said, but perhaps the question is asking whether you know the time complexities of the algorithms and how the input size is a big factor in which will be fastest.
Bubble sort is O(n²) and merge sort is O(n log n). So on a small set the difference won't matter much, but on a lot of data bubble sort will be much slower.
Barring the maximum part, bubble sort is slower asymptotically, but it has a big advantage for small n in that it doesn't require merging or creating new arrays. In some implementations, this might make it faster in wall-clock time.
Only one pass is needed: even in the worst case, to find the maximum you just have to traverse the whole array once, so bubble sort would be better.
Merge sort is straightforward for a computer to carry out and it takes less time to sort than bubble sort: its best case and worst case are both O(n log₂ n), while bubble sort's best case is O(n) and its worst case is O(n²).
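To make the size argument concrete, here is a rough timing sketch. Note the assumptions: Python's built-in sorted() (Timsort) stands in for an O(n log n) sort such as merge sort, max() is the plain O(n) scan recommended above, and the exact numbers will vary by machine.

    import random
    import timeit

    def bubble_sort(a):
        # Textbook O(n^2) bubble sort, used only to show how badly it scales.
        a = a[:]
        for end in range(len(a) - 1, 0, -1):
            for i in range(end):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a

    for n in (100, 2000):                     # "small" vs "large" input
        data = [random.random() for _ in range(n)]
        t_bubble = timeit.timeit(lambda: bubble_sort(data), number=1)
        t_sorted = timeit.timeit(lambda: sorted(data), number=1)
        t_scan = timeit.timeit(lambda: max(data), number=1)
        print(n, t_bubble, t_sorted, t_scan)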