SJF Algorithm (SRTFS) - How Do I Sort the Processes?

OK, I have the table above, and I need to calculate the average waiting time using the preemptive SJF algorithm. To do that, I apparently must first 'sort' these processes, which I don't think I properly understand how to do.
If I knew how to sort them, I'd have no trouble calculating the average waiting time at all.
Here's what I came up with, but I think it's probably wrong.
My Probably Wrong Solution:

Sorting requires an ordering/comparison operation like >=. It's shocking that you are using an algorithm with its own acronym (and what is SRTFS??) before learning how to sort. However, once you know what it is you want to sort, you can use one of the java.util.Arrays.sort() methods.
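If it helps, here is a minimal sketch of SRTF (preemptive SJF) in Ruby rather than Java; the question's table isn't shown, so the Process fields and the helper name are illustrative. The "sorting" is really just re-selecting, at every time unit, the arrived process with the shortest remaining time.

Process = Struct.new(:pid, :arrival, :burst, :remaining)

def srtf_waiting_times(procs)
  procs.each { |p| p.remaining = p.burst }
  finish = {}
  time = 0
  until procs.all? { |p| p.remaining.zero? }
    ready = procs.select { |p| p.arrival <= time && p.remaining > 0 }
    if ready.empty?
      time += 1   # CPU idle until the next arrival
      next
    end
    current = ready.min_by(&:remaining)   # the "sort": pick shortest remaining time
    current.remaining -= 1
    time += 1
    finish[current.pid] = time if current.remaining.zero?
  end
  # waiting time = turnaround - burst = (finish - arrival) - burst
  procs.map { |p| finish[p.pid] - p.arrival - p.burst }
end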

Related

What are some examples of Sorting methods in Web development?

I am a TA for an algorithms class and we are doing a unit on sorting, and I wanted to add a discussion of quicksort. There are many good theoretical discussions of sorting methods on the web showing which one is better in which circumstances...
What are some real-life instances of quicksort I can give to my students, especially in the field of web development?
Does Django use quicksort?
Does React?
Does Leaflet use any kind of sort?
In fact, I don't really care about quicksort particularly. Any sorting method will do if I can point to a specific library that uses it. Thanks.
Why are my students learning sort? Why am I teaching this? I can think of academic or theoretical reasons... basically that we are constantly ordering things, either in their own right or as part of another algorithm. How about for my students, who may never have to write their own sort function?
I'll answer the question "why do we learn how to write a sort function?" Why do we learn to write anything that's already given to us by a library? Hashes, lists, queues, trees... why learn to write any of them?
The most important reason is to appreciate their performance consequences and to know when to use which one. For example, Ruby Arrays supply a lot of built-in functionality. They're so well done and easy to use that it's easy to forget you're working with a list and write yourself a pile of molasses.
Look at this loop that finds a thing in a list and replaces it.
things.each { |thing|
  idx = thing.index(marker)
  thing[idx] = stuff
}
With no understanding of the underlying algorithms that seems perfectly sensible.
For each list in the list of things.
Find the item to replace.
Insert a new item in its place.
Two steps per thing. What could be simpler? And when they run it with a small amount of test data, it's fine. When they put it into production with a real amount of data, running thousands of times per second, it's dog slow. Why? Without an appreciation for what all those methods are doing under the hood, they cannot know.
things.each { |thing|          # O(things)
  idx = thing.index(marker)    # O(thing)
  thing[idx] = stuff           # O(1)
}
Those deceptively simple-looking Array methods are their own hidden loops. In the worst case each one must scan the whole list. Loops inside loops multiply the work; it's O(n*m). How slow? If things is 1000 items long, and each thing has 1000 items in it, that's... 1000 * 1000 or 1,000,000 operations!
And this isn't nearly the worst trouble students can get into; pile on a few more hidden loops and it compounds further. I actually find it hard to come up with an example, I'm so ingrained against it.
But that only becomes apparent after you throw a ton of data at it. While you're writing it, how can you know?
How can they make it faster? Without understanding the other options available and their performance characteristics, like hashes and sets and trees, they cannot know. An experienced programmer would make one immediate change to the data structure and change things to a list of sets.
things.each { |thing|    # O(things)
  thing.delete(marker)   # O(1)
  thing.add(stuff)       # O(1)
}
This is much faster. Deleting and adding with an unordered set is O(1), so it's effectively free no matter how large thing gets. Now if things is 1000 items long, and each thing has 1000 items in it, that's 1000 operations. By using a more appropriate data structure I just sped up that loop by a factor of 1000. Really, what I did was change it from O(n*m) to O(n).
Another good example is learning how to write a solid comparison function for multi-level data. Why is the Schwartzian transform fast? You can't appreciate that without understanding how sorting works.
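As a rough illustration, using file size as a stand-in for an expensive sort key:

# Naive: the expensive key function runs inside the comparator,
# O(n log n) times.
files.sort { |a, b| File.size(a) <=> File.size(b) }

# Schwartzian transform: compute each key once (n times), sort the
# [key, file] pairs, then strip the keys back off.
files.map { |f| [File.size(f), f] }
     .sort_by { |size, _| size }
     .map { |_, f| f }

Ruby's sort_by does this decoration internally, which is why it beats sort with an expensive block.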
You could simply be told these things (sorting is O(n log n), finding something in a list is O(n), and so on), but having to do it yourself gives you a visceral appreciation for what's going on under the hood. It makes you appreciate all the work a modern language does for you.
That said, there's little point in writing six different sort algorithms, or four different trees, or five different hash conflict resolution functions. Write one of each to appreciate them, then just learn about the rest so you know they exist and when to use them. 98% of the time the exact algorithm doesn't matter, but sometimes it's good to know that a merge sort might work better than a quick sort.
Because honestly, you're never going to write your own sort function. Or tree. Or hash. Or queue. And if you do, you probably shouldn't be. Unless you intend to be the 1% that writes the underlying libraries (like I do), if you're just going to write web apps and business logic, you don't need a full blown Computer Science education. Spend that time learning Software Engineering instead: testing, requirements, estimation, readability, communications, etc...
So when a student asks "why are we learning this stuff when it's all built into the language now?" (echoes of "why do I have to learn math when I have a calculator?"), have them write their naive loop with their fancy methods. Shove big data at it and watch it slow to a crawl. Then write an efficient loop with a good selection of data structures and algorithms and show how it screams through the data. That's their answer.
NOTE: This is the original answer before the question was understood.
Most modern languages use quicksort as their default sort, but usually modified to avoid the O(n^2) worst case. Here's the BSD man page on their implementation of qsort_r(). Ruby uses qsort_r.
The qsort() and qsort_r() functions are an implementation of C.A.R. Hoare's "quicksort" algorithm, a variant of partition-exchange sorting; in particular, see D.E. Knuth's Algorithm Q. Quicksort takes O(N lg N) average time. This implementation uses median selection to avoid its O(N**2) worst-case behavior.
PHP also uses quicksort, though I don't know which particular implementation.
Perl uses its own implementation of quicksort by default. But you can also request a merge sort via the sort pragma.
In Perl versions 5.6 and earlier the quicksort algorithm was used to implement "sort()", but in Perl 5.8 a mergesort algorithm was also made available, mainly to guarantee worst case O(N log N) behaviour: the worst case of quicksort is O(N**2). In Perl 5.8 and later, quicksort defends against quadratic behaviour by shuffling large arrays before sorting.
Python since 2.3 uses Timsort and is guaranteed to be stable. Any software written in Python (Django) is likely to also use the default Timsort.
JavaScript, really the ECMAScript specification, does not say what sorting algorithm to use for Array.prototype.sort. It only says that it's not guaranteed to be stable. This means the particular sorting algorithm is left to the JavaScript implementation. Like Python, any JavaScript frameworks such as React or Leaflet are likely to use the built-in sort.
Visual Basic for Applications (VBA) comes with NO sorting algorithm. You have to write your own. This is a bizarre oversight for any language, but particularly one that's designed for business use and spreadsheets.
Almost any table is sorted. Most web apps are backed by a SQL database, and the actual sorting is performed inside that database. For example, the SQL query SELECT id, date, total FROM orders ORDER BY date DESC. This kind of sorting uses already-sorted database indexes, which are mostly implemented using B-trees (or data structures inspired by B-trees). But if data needs to be sorted on the fly, then I think quicksort is usually used.
Sorting, merging of sorted files, and binary search in sorted files are often used in big data processing, analytics, ad dispatching, full-text search... Even Google results are sorted :)
Sometimes you don't need a full sort, but a partial sort or a min-heap. For example, in Dijkstra's algorithm for finding the shortest path, which is used (or can be used, or I would use it :) ) for example in route planning (Google Maps).
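As a tiny Ruby illustration of a partial sort (nodes and distance are made-up names; Enumerable#min_by accepts a count):

# The 5 closest nodes by tentative distance, without sorting the
# whole collection.
nearest = nodes.min_by(5) { |n| n.distance }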
As pointed out by Schwern, the sorting is almost always provided by the programming language or its implementation engine, and libraries / frameworks just use that algorithm, with a custom comparison function when they need to sort complex objects.
Now if your objective is to have a real-life example in the web context, you could turn it around and use the lack of a sorting method in SVG, and make an exercise out of it. Unlike other DOM elements, an SVG container paints its children in the order they are appended, irrespective of any "z-index" equivalent. So to implement "z-index" functionality, you have to re-order the nodes yourself.
And to avoid just using a custom comparison function and relying on array.sort, you could add extra constraints, like stability, typically to preserve the current order of nodes with the same "z-index".
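For instance, here is a sketch of the classic trick for making any sort stable: decorate each node with its current position and use it as a tie-breaker (node and z_index are illustrative names).

sorted = nodes.each_with_index
              .sort_by { |node, i| [node.z_index, i] }
              .map(&:first)   # drop the decoration, keep the nodes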
Since you mention Leaflet, one of the frustrations with the pre-1.0 versions (e.g. 0.7.7) was that all vector shapes were appended into the same single SVG container, without any provided sorting functionality except for bringToFront / bringToBack.

Is there a way to make StoogeSort more curve-like?

I'm currently studying analysis of algorithms and their respective runtimes, and I came across a sorting algorithm called Stooge sort, and the weird way it behaves really caught my attention. I'm trying to determine the runtime using a program created by a professor of mine, but the number of points I have is very small, because the runtime starts to grow very quickly and I can't let my computer execute a program for an entire day.
My question is: is there a way to make the algorithm behave more like a curve without changing its complexity? So far I've calculated 5 points that would be useful (these points are the first real number after the Stooge sort "ladder" graph changes, referring to the size of the array getting sorted), but that's not as much as I need.
I'm using the algorithm provided on the Wikipedia page for Stooge sort.
Five points is too little data to say it doesn't behave like a curve.
In fact, you can find a pretty accurate curve fit for your data (source: http://mycurvefit.com/index.html?action=openshare&id=7b237893-c52c-49db-bcf6-e29ccf391b7c).
But, again, there is very little data to conclude anything.
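For reference, here is a direct Ruby transcription of the Wikipedia pseudocode the question refers to. Its runtime is O(n^(log 3 / log 1.5)), roughly O(n^2.71), which is why the timings blow up so quickly:

def stooge_sort(a, i = 0, j = a.length - 1)
  a[i], a[j] = a[j], a[i] if a[i] > a[j]   # swap the ends if out of order
  if j - i + 1 > 2
    t = (j - i + 1) / 3
    stooge_sort(a, i, j - t)   # sort the first 2/3
    stooge_sort(a, i + t, j)   # sort the last 2/3
    stooge_sort(a, i, j - t)   # sort the first 2/3 again
  end
  a
end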

How to speed up 'sort' function in Matlab?

I was using the built-in sort function of Matlab:
[temp, Idx] = sort(M,2);
I would like to get the sorted index of each row of M, which is a matrix of size > 50k.
I searched hard but did not find anything. It would be greatly appreciated if you have any comments!
To get a sense of how much room for improvement you have, I would suggest writing a test program in C using qsort, or in C++ using std::sort, and carefully timing it on 7000 inputs of size 7000 (or whatever setup you have in MATLAB).
I'm going to give you my estimate: MATLAB's sort probably runs (on properly vectorized code, like yours) about as fast as C++, and you're just seeing the effect of running an algorithm that takes O(n^2 log n) overall. MATLAB's marketing material has reported that its sort function is faster than C's qsort, but take that with a grain of salt.
The best way to speed up that sort is to get a faster computer. It will speed everything else up too. :)
The fact is, you can rarely speed up a single call to something like a sort. MATLAB is already doing that in an efficient manner, using optimized code internally. (Reread the carlosdc answer.) The things you can sometimes get a boost on are tools that are written in MATLAB itself.
So, what can you do? Short of buying that new computer, you can look at your overall code. A single sort of that size is never that big of a problem; the reason for doing that sort over and over again is. Think carefully about the code, about whether you can change the flow or avoid a many-times-repeated sort. A change of algorithm is often a FAR bigger source of improvement than the wee bit you would ever get even if you could improve that sort.
Comparison-based sorting is fundamentally O(n log n).
As long as you have a reasonably efficient implementation, this is unlikely to change much.
That said, as Andrew Janke's comment suggests, multi-threading can improve things dramatically.
GPU programming can be a way to get massive speedups. If you have R2010b or later, you may be able to use accelerated versions of built-in functions like sort from MathWorks.
Otherwise, write a mex wrapper around the CUDA Thrust library which includes a sort.
You could write your own sort function in C/C++ as MEX. MATLAB documentation has examples for it.
There exist many sorting algorithms that are better than others in edge cases, for example on almost-sorted data, or regarding stability (which does not matter in MATLAB because all its types are value types).
Is your data numeric or strings? For strings there are probably special algorithms for ASCII sort; sometimes natural sort is preferable.

Bubblesort over other sorting algorithms?

Why would you choose bubble sort over other sorting algorithms?
You wouldn't.
Owen Astrachan of Duke University once wrote a research paper tracing the history of bubble sort (Bubble Sort: An Archaeological Algorithmic Analysis) and quotes CS legend Don Knuth as saying
In short, the bubble sort seems to have nothing to recommend it, except a catchy name
and the fact that it leads to some interesting theoretical problems.
The paper concludes with
In this paper we have investigated the origins of bubble sort and its enduring popularity despite warnings against its use by many experts. We confirm the warnings by analyzing its complexity both in coding and runtime.
Bubble sort is slower than the other O(n^2) sorts; it's about four times as slow as insertion sort and twice as slow as selection sort. It does have good best-case behavior (if you include a check for no swaps), but so does Insertion Sort: just one pass over an already-sorted array.
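That "check for no swaps" version looks like this; a minimal Ruby sketch:

def bubble_sort(a)
  loop do
    swapped = false
    (0...(a.length - 1)).each do |i|
      if a[i] > a[i + 1]
        a[i], a[i + 1] = a[i + 1], a[i]
        swapped = true
      end
    end
    break unless swapped   # a full pass with no swaps: sorted, O(n) best case
  end
  a
end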
Bubble Sort is impractically slow on almost all real data sets. Any good implementation of quicksort, heapsort, or mergesort is likely to outperform it by a wide margin. Recursive sorts that use a simpler sorting algorithm for small-enough base-cases use Insertion Sort, not Bubble Sort.
Also, the President of the United States says you shouldn't use it: asked at Google how he'd sort a million integers, Barack Obama famously answered that "the bubble sort would be the wrong way to go."
Related: Why bubble sort is not efficient? has some more details.
There's one circumstance in which bubble sort is optimal, but it's one that can only really occur with ancient hardware (basically, something like a drum memory with two heads, where you can only read through the data in order, and only work with two data items that are directly next to each other on the drum).
Other than that, it's utterly useless, IMO. Even the excuse of getting something up and running quickly is nonsense: a selection sort or insertion sort is easier to write and/or understand.
You would implement bubble sort if you needed to create a web page showing an animation of bubble sort in action.
When all of the following conditions are true:
Implementing speed is way more important than execution speed (probability <1%)
Bubble sort is the only sorting algorithm you remember from university class (probability 99%)
You have no sorting library at hand (probability <1%)
You don't have access to Google (probability <1%)
That would be less than a 0.000099% chance that you need to implement bubble sort; that is less than one in a million.
If your data is on a tape that is fast to read forward, slow to seek backward, and fast to rewind (or is a loop so it doesn't need rewinding), then bubblesort will perform quite well.
I suspect a trick question. No one would choose bubble sort over other sorting algorithms in the general case. The only time it really makes any sense is when you're virtually certain that the input is (nearly) sorted already.
Bubble sort is easy to implement. While the 'standard' implementation has poor performance, there is a very simple optimization which makes it a strong contender compared to many other simple algorithms. Google 'combsort', and see the magic of a few well-placed lines. Quicksort still outperforms it, but is less obvious to implement and needs a language that supports recursive implementations.
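Here's a sketch of that optimization, comb sort, which compares elements a shrinking gap apart instead of only immediate neighbors:

def comb_sort(a)
  gap = a.length
  swapped = true
  while gap > 1 || swapped
    gap = [(gap / 1.3).floor, 1].max   # shrink the gap each pass
    swapped = false
    (0...(a.length - gap)).each do |i|
      if a[i] > a[i + gap]
        a[i], a[i + gap] = a[i + gap], a[i]
        swapped = true
      end
    end
  end
  a
end

The large early gaps quickly move the "turtles" (small values stuck near the end of the list) that make plain bubble sort so slow.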
I can think of a few reasons for bubble sort:
It's a basic elementary sort. They're great for beginner programmers learning the if, for, and while statements.
I can picture some free time for a programmer to experiment with how all the sorts work. What better way to start at the top than with bubble sort (yes, this does demean its rank, but who doesn't think 'bubble sort' if someone says 'sorting algorithms')?
Very easy to remember and work with, compared to other algorithms.
When I was starting on linked lists, bubble sort helped me understand how all the nodes worked well with each other.
Now I'm feeling like a lame commercial advertising about bubble sort so I'll be quiet now.
I suppose you would choose bubble sort if you needed a sorting algorithm which was guaranteed to be stable and had a very small memory footprint. Basically, if memory is really scarce in the system (and performance isn't a concern) then it would work, and would be easily understood by anybody supporting the code. It also helps if you know ahead of time that the values are mostly sorted already.
Even in that case, insertion sort would probably be better.
And if it's a trick question, next time suggest Bogosort as an alternative. After all, if they're looking for bad sorting, that's the way to go.
It's useful for "Baby's First Sort" types of exercises in school because it's easy to explain how it works and it's easy to implement. Once you've written it, and maybe run it once, delete it and never think of it again.
You might use Bubblesort if you just wanted to try something quickly. If, for instance, you are in a new environment and you are playing around with a new idea, you can quickly throw in a bubble sort in very little time. It might take you much longer to remember and write a different sort and debug it and you still might not get it right. If your experiment works out and you need to use the code for something real, then you can spend the time to get it right.
No sense putting a lot of effort into the sort algorithm if you are just prototyping.
When demonstrating with a concrete example how not to implement a sort routine.
Because your other sorting algorithm is Monkey Sort? ;)
Seriously though, bubble sort is mainly a sorting algorithm for educational reasons and has no practical value.
When the array is already "almost" sorted, or you have a few additions to an already-sorted list, you can use bubble sort to re-sort it. Bubble sort usually works for small data sets.

Is there any reason to implement my own sorting algorithm?

Sorting has been studied for decades, so surely the sorting algorithms provided by any programming platform (Java, .NET, etc.) must be good by now, right? Is there any reason to override something like System.Collections.SortedList?
There are absolutely times where your intimate understanding of your data can result in much, much more efficient sorting algorithms than any general-purpose algorithm available. I shared an example of such a situation in another post on SO, but I'll share it here just to provide a case in point:
Back in the days of COBOL, FORTRAN, etc., a developer working for a phone company had to take a relatively large chunk of data that consisted of active phone numbers (I believe it was in the New York City area), and sort that list. The original implementation used a heap sort (these were 7-digit phone numbers, and a lot of disk swapping was taking place during the sort, so heap sort made sense).
Eventually, the developer stumbled on a different approach: since one, and only one, of each phone number could exist in his data set, he didn't have to store the actual phone numbers themselves in memory. Instead, he treated the entire 7-digit phone number space as a very long bit array (at 8 phone numbers per byte, 10 million phone numbers requires just over a megabyte to capture the entire space). He then did a single pass through his source data and set the bit for each phone number he found to 1. He then did a final pass through the bit array looking for high bits, and output the sorted list of phone numbers.
This new algorithm was much, much faster (at least 1000x faster) than the heap sort algorithm, and consumed about the same amount of memory.
I would say that, in this case, it absolutely made sense for the developer to develop his own sorting algorithm.
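A sketch of that bit-vector idea in Ruby, sized for the 7-digit space described above:

SPACE = 10_000_000   # all 7-digit phone numbers

def bitmap_sort(numbers)
  bits = Array.new(SPACE / 8, 0)                            # ~1.25 MB of flags
  numbers.each { |n| bits[n >> 3] |= 1 << (n & 7) }         # one pass: mark each number
  out = []
  SPACE.times { |n| out << n if bits[n >> 3][n & 7] == 1 }  # emit marked numbers in order
  out
end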
If your application is all about sorting, and you really know your problem space, then it's quite possible for you to come up with an application specific algorithm that beats any general purpose algorithm.
However, if sorting is an ancillary part of your application, or you are just implementing a general-purpose algorithm, chances are very, very good that some extremely smart university types have already provided an algorithm that is better than anything you will be able to come up with. Quick Sort is really hard to beat if you can hold things in memory, and heap sort is quite effective for ordering massive data sets (although I personally prefer to use B+Tree type implementations for the heap because they are tuned to disk paging performance).
Generally no.
However, you know your data better than the people who wrote those sorting algorithms. Perhaps you could come up with an algorithm that is better than a generic algorithm for your specific set of data.
Implementing your own sorting algorithm is akin to optimization, and as Sir Charles Antony Richard Hoare said, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
Certain libraries (such as Java's very own Collections.sort) implement a sort based on criteria that may or may not apply to you. For example, Collections.sort uses a merge sort for its O(n log n) efficiency as well as the fact that it's a stable sort: if two different elements have the same value, the first element in the original collection stays in front. That is good for multi-pass sorting on different criteria (first sort by date, then by name, and the collection stays sorted by name, then date). However, if you want slightly better constants or have a special data set, it might make more sense to implement your own quicksort or radix sort tailored exactly to what you want to do.
That said, all operations are fast on sufficiently small n.
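A small Ruby illustration of the multi-pass idea (record fields are made up). Ruby's sort_by isn't guaranteed stable, so instead of two stable passes you express both keys at once:

# Equivalent to "stable-sort by date, then stable-sort by name":
# ties on name keep date order.
records.sort_by { |r| [r.name, r.date] }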
Short answer: no, except for academic interest.
You might want to multi-thread the sorting implementation.
You might need better performance characteristics than quicksort's O(n log n); think bucket sort, for example.
You might need a stable sort while the default algorithm uses quicksort. Especially for user interfaces you'll want the sorting order to be consistent.
More efficient algorithms might be available for the data structures you're using.
You might need an iterative implementation of the default sorting algorithm because of stack overflows (e.g. you're sorting large data sets).
Ad infinitum.
A few months ago the Coding Horror blog reported on some platform with an atrociously bad sorting algorithm. If you have to use that platform then you sure do want to implement your own instead.
The problem of general purpose sorting has been researched to hell and back, so worrying about that outside of academic interest is pointless. However, most sorting isn't done on generalized input, and often you can use properties of the data to increase the speed of your sorting.
A common example is the counting sort. It is proven that for general purpose comparison sorting, O(n lg n) is the best that we can ever hope to do.
However, suppose that we know the values to be sorted lie in a fixed range, say [a, b]. If we create an array of size b - a + 1 (defaulting everything to zero), we can linearly scan the input, using this array to store the count of each element, resulting in a linear-time sort (linear in the size of the input plus the range of the data). That breaks the n lg n bound, but only because we are exploiting a special property of our data.
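A sketch of counting sort in Ruby, under that assumption that every value lies in [lo, hi]:

def counting_sort(values, lo, hi)
  counts = Array.new(hi - lo + 1, 0)
  values.each { |v| counts[v - lo] += 1 }                       # one linear pass to count
  out = []
  counts.each_with_index { |c, i| c.times { out << i + lo } }   # emit values in order
  out
end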
So yes, it is useful to write your own sorting algorithms. Pay attention to what you are sorting, and you will sometimes be able to come up with remarkable improvements.
If you have experience implementing sorting algorithms and understand the way data characteristics influence their performance, then you would already know the answer to your question. In other words, you would already know things like: QuickSort has pedestrian performance against an almost-sorted list. :-) And that if you have your data in certain structures, some sorts of sorting are (almost) free. Etc.
Otherwise, no.
