Sorting and stability

I would like to know why merge sort is stable and quick sort is not.
I know that a sort is stable if it always preserves the relative order of equal elements.
Shouldn't merge sort still have to do tie-breaking? Will it still be stable if it doesn't break ties?
I understand that quick sort will be unstable if it doesn't do tie-breaking.
Can you give me some examples? Thank you.

Looks like Stack Overflow already has this covered in a different thread:
Quick Sort Vs Merge Sort
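To make the stability point concrete, here is a minimal merge sort sketch in Java (the class and record names are mine, not from any linked thread). Merge sort needs no explicit tie-breaking: stability falls out of a single choice in the merge step, namely taking from the left half when keys are equal.

```java
import java.util.Arrays;

// Minimal top-down merge sort on (key, label) pairs, sorting by key only.
// Stability comes "for free" from one choice in merge(): on a tie, take
// from the LEFT half first. No explicit tie-breaking on labels is needed.
public class StableMergeSort {
    static String[][] sort(String[][] a) { // each element: {key, label}
        if (a.length <= 1) return a;
        int mid = a.length / 2;
        String[][] left = sort(Arrays.copyOfRange(a, 0, mid));
        String[][] right = sort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);
    }

    static String[][] merge(String[][] l, String[][] r) {
        String[][] out = new String[l.length + r.length][];
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length) {
            // "<=" (not "<") is what makes the sort stable: equal keys
            // from the left half are emitted before those from the right.
            if (l[i][0].compareTo(r[j][0]) <= 0) out[k++] = l[i++];
            else out[k++] = r[j++];
        }
        while (i < l.length) out[k++] = l[i++];
        while (j < r.length) out[k++] = r[j++];
        return out;
    }

    public static void main(String[] args) {
        // Two records share the key "b"; a stable sort keeps b1 before b2.
        String[][] data = { {"b", "b1"}, {"a", "a1"}, {"b", "b2"} };
        for (String[] rec : sort(data))
            System.out.println(rec[0] + " " + rec[1]);
        // prints: a a1 / b b1 / b b2 -- b1 still precedes b2
    }
}
```

If you flipped `<=` to `<`, equal keys from the right half would jump ahead of those from the left, and the sort would no longer be stable. Quicksort has no analogous single spot to fix, because partitioning moves equal elements across each other.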

Related

What do you think about our professor's pseudo-code for selection sort?

After I read about selection sort and tried to write the code for it in Java (Why doesn't my selection sort algorithm do what it's supposed to (java)?), I looked at our script and got confused. The pseudo-code for selection sort seems wrong, or rather incomplete, to me. It looks more like bubble-sort pseudo-code than selection-sort pseudo-code.
I mean, where is the important part of the code where you look for the smallest value of the array, put it at the beginning, and repeat that process?
I hope this is the correct section to ask in; if not, please tell me where I can ask it (and I would delete this question immediately). It's very important for me to understand what has been written there and to hear your opinion about it.
(I'm especially curious, and a bit afraid, about how I should write pseudo-code for selection sort in an exam, if asked.)
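For reference, here is a sketch of the textbook selection sort the question describes: each pass finds the smallest element of the unsorted remainder and swaps it to the front. (This is one standard formulation; your professor's pseudo-code may legitimately differ in details.)

```java
import java.util.Arrays;

// Textbook selection sort: on pass i, find the smallest element in
// a[i..n-1] and swap it into position i. After pass i, a[0..i] is final.
public class SelectionSort {
    static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[min]) min = j;   // remember smallest so far
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // one swap per pass
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 1, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3, 4, 5]
    }
}
```

The distinguishing feature versus bubble sort is exactly the part the question asks about: the inner loop only *searches* for the minimum and performs a single swap per pass, rather than swapping adjacent out-of-order pairs as it goes.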

SJF Algorithm (SRTFS) - How do I sort the processes?

OK, I have the table above, and I need to calculate the average waiting time using the preemptive SJF algorithm. But to do that, you must first 'sort' these processes, and I don't think I properly understand how to do that.
If I knew how to sort them, I'd have no trouble calculating the average waiting time at all.
Here's what I came up with, but I think it's probably wrong.
My Probably Wrong Solution:
Sorting requires an ordering/comparison operation such as >=. It's surprising that you are using an algorithm with its own acronym (and what is SRTFS?) before learning how to sort. However, once you know what it is you want to sort, you can use one of the java.util.Arrays.sort() methods.
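A sketch of that Arrays.sort() suggestion follows. The Process class and its values are invented for illustration, since the question's actual table isn't shown, and this covers only the ordering step (shortest burst first); preemptive SJF/SRTF additionally re-evaluates remaining time whenever a process arrives.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of sorting processes with java.util.Arrays.sort(). The Process
// record and its fields are hypothetical -- the question's table is not
// shown -- but the idea is the same: hand sort() the comparison that
// defines your scheduling order.
public class SjfSort {
    record Process(String name, int arrivalTime, int burstTime) {}

    public static void main(String[] args) {
        Process[] ready = {
            new Process("P1", 0, 7),
            new Process("P2", 2, 4),
            new Process("P3", 4, 1),
        };
        // SJF order among processes that have all arrived:
        // shortest burst first, arrival time as the tie-breaker.
        Arrays.sort(ready, Comparator.comparingInt(Process::burstTime)
                                     .thenComparingInt(Process::arrivalTime));
        for (Process p : ready) System.out.println(p.name());
        // prints: P3, P2, P1
    }
}
```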

What is the meaning of "stable" and "unstable" for various sorting algorithms? [duplicate]

This question already has answers here:
What is stability in sorting algorithms and why is it important?
(10 answers)
Closed 8 years ago.
Can someone explain what "stable" and "unstable" mean in relation to various sorting algorithms? How can one determine whether an algorithm is stable or not, and what applications do unstable sorting algorithms usually have (given that they are unstable)?
If a sorting algorithm is said to be "unstable", this means that for any items that rank the same, the order of the tied members is not guaranteed to stay the same with successive sorts of that collection. For a 'stable' sort, the tied entries will always end up in the same order when sorted.
For an example of applications, the quick sort algorithm is not stable. This would work fine for something like sorting actions by priority (if two actions are of equal priority, you would not be likely to care about which elements of a tie are executed first).
A stable sorting algorithm, on the other hand, is good for things like a leaderboard for an online game. If you were to use an unstable sort, sorting by points (for instance), then a user viewing the sorted results on a webpage could experience different results on page refreshes and operations like paging through results would not function correctly.
A stable sort retains the order of identical items. Any sort can be made stable by appending the row index to the key. Unstable sorts, such as heap sort and quick sort, do not have this property inherently, but they are used because they tend to be faster and easier to code than stable sorts. As far as I know, there are no other reasons to use unstable sorts.
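The "append the row index to the key" trick can be sketched as follows (a minimal illustration; the class name is mine). Each item is decorated with its original position, and the comparison falls back to that position on ties, so ties can never be reordered, whatever sort runs underneath.

```java
import java.util.Arrays;

public class Stabilize {
    // Decorate each key with its original position, then compare
    // (key, index) pairs. Any sort -- stable or not -- now reproduces the
    // stable order, because the index breaks every tie deterministically.
    static int[][] stableOrder(int[] keys) {
        int[][] decorated = new int[keys.length][];
        for (int i = 0; i < keys.length; i++)
            decorated[i] = new int[]{keys[i], i}; // {key, originalIndex}
        Arrays.sort(decorated, (a, b) ->
            a[0] != b[0] ? Integer.compare(a[0], b[0])
                         : Integer.compare(a[1], b[1])); // index never ties
        return decorated;
    }

    public static void main(String[] args) {
        for (int[] row : stableOrder(new int[]{3, 1, 3, 2}))
            System.out.println("key=" + row[0] + " from index " + row[1]);
        // the two 3s come out in original order: index 0 before index 2
    }
}
```

The cost is the extra index per element and a slightly longer comparison, which is why libraries don't do this by default.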

Bubblesort over other sorting algorithms?

Why would you choose bubble sort over other sorting algorithms?
You wouldn't.
Owen Astrachan of Duke University once wrote a research paper tracing the history of bubble sort (Bubble Sort: An Archaeological Algorithmic Analysis) and quotes CS legend Don Knuth as saying
In short, the bubble sort seems to have nothing to recommend it, except a catchy name
and the fact that it leads to some interesting theoretical problems.
The paper concludes with
In this paper we have investigated the origins of bubble sort and its enduring popularity despite warnings against its use by many experts. We confirm the warnings by analyzing its complexity both in coding and runtime.
Bubble sort is slower than the other O(n²) sorts; it's about four times as slow as insertion sort and twice as slow as selection sort. It does have good best-case behavior (if you include a check for no swaps), but so does insertion sort: just one pass over an already-sorted array.
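That "check for no swaps" can be sketched like this (a minimal version; names are mine): if a full pass swaps nothing, the array is already sorted and the loop exits, making the best case a single O(n) pass.

```java
import java.util.Arrays;

// Bubble sort with the no-swaps check: if a full pass swaps nothing, the
// array is sorted and we stop. On already-sorted input this is one O(n)
// pass -- the good best-case behavior referred to above.
public class BubbleSort {
    static void sort(int[] a) {
        for (int end = a.length - 1; end > 0; end--) {
            boolean swapped = false;
            for (int i = 0; i < end; i++) {
                if (a[i] > a[i + 1]) {
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                    swapped = true;
                }
            }
            if (!swapped) return; // early exit: pass made no swaps
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 2, 5, 1, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3, 4, 5]
    }
}
```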
Bubble Sort is impractically slow on almost all real data sets. Any good implementation of quicksort, heapsort, or mergesort is likely to outperform it by a wide margin. Recursive sorts that use a simpler sorting algorithm for small-enough base-cases use Insertion Sort, not Bubble Sort.
Also, the President of the United States says you shouldn't use it.
Related: Why bubble sort is not efficient? has some more details.
There's one circumstance in which bubble sort is optimal, but it's one that can only really occur with ancient hardware (basically, something like a drum memory with two heads, where you can only read through the data in order, and only work with two data items that are directly next to each other on the drum).
Other than that, it's utterly useless, IMO. Even the excuse of getting something up and running quickly is nonsense, at least in my opinion. A selection sort or insertion sort is easier to write and/or understand.
You would implement bubble sort if you needed to create a web page showing an animation of bubble sort in action.
When all of the following conditions are true
Implementing speed is way more important than execution speed (probability <1%)
Bubble sort is the only sorting algorithm you remember from university class (probability 99%)
You have no sorting library at hand (probability <1%)
You don't have access to Google (probability <1%)
That would be less than a 0.000099% chance that you need to implement bubble sort, which is less than one in a million.
If your data is on a tape that is fast to read forward, slow to seek backward, and fast to rewind (or is a loop so it doesn't need rewinding), then bubblesort will perform quite well.
I suspect a trick question. No one would choose bubble sort over other sorting algorithms in the general case. The only time it really makes any sense is when you're virtually certain that the input is (nearly) sorted already.
Bubble sort is easy to implement. While the 'standard' implementation has poor performance, there is a very simple optimization which makes it a strong contender compared to many other simple algorithms. Google 'comb sort', and see the magic of a few well-placed lines. Quicksort still outperforms this, but is less obvious to implement and needs a language that supports recursive implementations.
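For reference, here is a sketch of that comb sort optimization (a standard formulation; the class name is mine). It is bubble sort with a gap between compared elements that shrinks by a factor of about 1.3 each pass, which moves small values near the end ("turtles") forward quickly.

```java
import java.util.Arrays;

// Comb sort: bubble sort with a shrinking gap. Large gaps move far-away
// out-of-place values early; once the gap reaches 1 it behaves as bubble
// sort with an early-exit swap check.
public class CombSort {
    static void sort(int[] a) {
        int gap = a.length;
        boolean swapped = true;
        while (gap > 1 || swapped) {
            gap = Math.max(1, (int) (gap / 1.3)); // common shrink factor
            swapped = false;
            for (int i = 0; i + gap < a.length; i++) {
                if (a[i] > a[i + gap]) {
                    int t = a[i]; a[i] = a[i + gap]; a[i + gap] = t;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {9, 1, 8, 2, 7, 3, 6, 4, 5};
        sort(a);
        System.out.println(Arrays.toString(a));
    }
}
```

It really is only "a few well-placed lines" on top of bubble sort: the gap variable and the shrink step.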
I can think of a few reasons for bubble sort:
It's a basic elementary sort. They're great for beginner programmers learning the if, for, and while statements.
I can picture a programmer with some free time experimenting with how all the sorts work. What better place to start than bubble sort (yes, this does demean its rank, but who doesn't think 'bubble sort' when someone says 'sorting algorithms')?
It's very easy to remember and work with.
When I was starting on linked lists, bubble sort helped me understand how all the nodes worked well with each other.
Now I'm feeling like a lame commercial advertising about bubble sort so I'll be quiet now.
I suppose you would choose bubble sort if you needed a sorting algorithm which was guaranteed to be stable and had a very small memory footprint. Basically, if memory is really scarce in the system (and performance isn't a concern) then it would work, and would be easily understood by anybody supporting the code. It also helps if you know ahead of time that the values are mostly sorted already.
Even in that case, insertion sort would probably be better.
And if it's a trick question, next time suggest Bogosort as an alternative. After all, if they're looking for bad sorting, that's the way to go.
It's useful for "Baby's First Sort" types of exercises in school because it's easy to explain how it works and it's easy to implement. Once you've written it, and maybe run it once, delete it and never think of it again.
You might use Bubblesort if you just wanted to try something quickly. If, for instance, you are in a new environment and you are playing around with a new idea, you can quickly throw in a bubble sort in very little time. It might take you much longer to remember and write a different sort and debug it and you still might not get it right. If your experiment works out and you need to use the code for something real, then you can spend the time to get it right.
No sense putting a lot of effort into the sort algorithm if you are just prototyping.
When demonstrating with a concrete example how not to implement a sort routine.
Because your other sorting algorithm is Monkey Sort? ;)
Seriously though, bubble sort is mainly a sorting algorithm for educational reasons and has no practical value.
When the array is already "almost" sorted, or you have a few additions to an already-sorted list, you can use bubble sort to re-sort it. Bubble sort usually works for small data sets.

Which sorting method is most suitable for parallel processing?

I am now looking at my old school assignment and want to find the solution of a question.
Which sorting method is most suitable for parallel processing?
Bubble sort
Quick sort
Merge sort
Selection sort
I guess quick sort (or merge sort?) is the answer.
Am I correct?
Like merge sort, quicksort can also be easily parallelized due to its divide-and-conquer nature. Individual in-place partition operations are difficult to parallelize, but once divided, different sections of the list can be sorted in parallel.
One advantage of parallel quicksort over other parallel sort algorithms is that no synchronization is required. A new thread is started as soon as a sublist is available for it to work on and it does not communicate with other threads. When all threads complete, the sort is done.
http://en.wikipedia.org/wiki/Quicksort
It depends completely on the method of parallelization. For multithreaded general computing, a merge sort provides pretty reliable load balancing and memory-localization properties. For a large sorting network in hardware, a form of Batcher, bitonic, or Shell sort is actually best if you want good O(log² n) performance.
I think merge sort.
You can divide the dataset and run parallel operations on the parts.
I think merge sort would be the best answer here, because the basic idea behind merge sort is to divide the problem into independent sub-problems, solve them, and merge the results.
That's what we actually do in parallel processing too: divide the whole problem into small units, compute them in parallel, and then join the results.
Thanks
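The divide-sort-merge idea can be sketched with Java's fork/join framework (a minimal version, with my own names and a small, arbitrary cutoff; note that java.util.Arrays.parallelSort already implements essentially this):

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Parallel merge sort sketch: sort the two halves as parallel fork/join
// tasks, then merge them sequentially through a scratch buffer.
public class ParallelMergeSort extends RecursiveAction {
    private final int[] a, buf;
    private final int lo, hi; // sorts the range a[lo..hi)

    ParallelMergeSort(int[] a, int[] buf, int lo, int hi) {
        this.a = a; this.buf = buf; this.lo = lo; this.hi = hi;
    }

    @Override protected void compute() {
        if (hi - lo <= 16) { Arrays.sort(a, lo, hi); return; } // small cutoff
        int mid = (lo + hi) >>> 1;
        invokeAll(new ParallelMergeSort(a, buf, lo, mid),   // both halves may
                  new ParallelMergeSort(a, buf, mid, hi));  // run in parallel
        // Sequential merge of the two sorted halves via the buffer.
        System.arraycopy(a, lo, buf, lo, hi - lo);
        int i = lo, j = mid, k = lo;
        while (i < mid && j < hi)
            a[k++] = buf[i] <= buf[j] ? buf[i++] : buf[j++];
        while (i < mid) a[k++] = buf[i++];
        while (j < hi)  a[k++] = buf[j++];
    }

    static void sort(int[] a) {
        ForkJoinPool.commonPool()
            .invoke(new ParallelMergeSort(a, new int[a.length], 0, a.length));
    }
}
```

The two recursive tasks share no state until both complete, which is exactly the "divide, compute in parallel, then join" pattern described above; only the final merge of each pair of halves is sequential.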
Just a couple of random remarks:
Many discussions of how easy it is to parallelize quicksort ignore the pivot selection. If you traverse the array to find it, you've introduced a linear time sequential component.
Quicksort is not easy to implement at all in distributed memory. There is a discussion in the Kumar book.
Yeah, I know, one should not use bubble sort. But "odd-even transposition sort", which is more or less equivalent, is actually a pretty good parallel programming exercise. In particular for distributed memory parallelism. It is the easiest example of a sorting network, which is very doable in MPI and such.
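For the curious, odd-even transposition sort looks like this written sequentially (a standard formulation; the class name is mine). All compare-exchanges within one phase touch disjoint adjacent pairs, which is what makes each phase trivially parallel, e.g. one element or block per MPI rank.

```java
import java.util.Arrays;

// Odd-even transposition sort. Each phase compare-exchanges disjoint
// adjacent pairs -- even phases: (0,1),(2,3),...; odd phases: (1,2),(3,4),...
// Because the pairs are disjoint, a whole phase can run in parallel;
// n phases are enough to sort n elements.
public class OddEvenSort {
    static void sort(int[] a) {
        for (int phase = 0; phase < a.length; phase++) {
            int start = phase % 2;
            for (int i = start; i + 1 < a.length; i += 2) {
                if (a[i] > a[i + 1]) {
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 3, 2, 1};
        sort(a);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3, 4]
    }
}
```

The fixed, data-independent pattern of comparisons is what makes it a sorting network, and why it ports so directly to MPI-style distributed memory.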
It is merge sort, since the work is done on two sub-arrays that are sorted independently and then merged at the end; the sub-arrays can be sorted in parallel.
