I'm a mostly self-taught programmer in my freshman year of college, working toward a BS in CompSci. Last year I would do some of the homework for the AP CompSci kids, and when they got to sorting algorithms I understood what they did, but my question was: what is a case where one is actually used? I know this may seem like a horrible or ridiculous question, but other than a few cases I can think of, I don't understand when one would use a sorting algorithm. I understand that they are essential to know and that they are foundational algorithms. But in the day-to-day, when are they used?
A sorting algorithm is an algorithm that arranges a list of elements in a certain order. You use such an algorithm whenever you need the elements in some order.
For example:
Sorting strings in lexicographical order. This makes several computations easier (searching, insertion, deletion), provided an appropriate data structure is used.
Sorting integers as a preprocessing step for other algorithms. Suppose you have many database queries that each look up an integer; you will want to apply binary search, and for binary search to be applicable the input must be sorted (see the sketch after this list).
In many computational geometry algorithms (like convex hull), sorting the coordinates is the first step.
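For instance, here is a minimal sketch of the second point, the "sort once, answer many lookups with binary search" pattern (the sample values and the contains helper are made up purely for illustration):

```python
import bisect

# Hypothetical scenario: many membership queries against the same collection.
# One O(n log n) sort up front turns every later lookup into an O(log n)
# binary search instead of an O(n) linear scan.
values = [42, 7, 19, 3, 88, 56, 23]
values.sort()

def contains(sorted_list, x):
    """Binary search on an already-sorted list via the bisect module."""
    i = bisect.bisect_left(sorted_list, x)
    return i < len(sorted_list) and sorted_list[i] == x

print(contains(values, 19))   # True
print(contains(values, 20))   # False
```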
So, basically, if you want some ordering, you resort to sorting algorithms!
So far in my learning of algorithms, I have assumed that asymptotic bounds are directly related to patterns in control structures.
So if an algorithm has n^2 time complexity, I was thinking that this automatically means it must use nested loops. But I see that this is not always correct (and similarly for other time complexities, not just quadratic).
How should I approach the relationship between time complexity and control structure?
Thank you
Rice's theorem is a significant obstacle to making general statements about analyzing running time.
In practice there's a repertoire of techniques that get applied. A lot of algorithms have a nested loop structure that's easy to analyze. When the bounds of one of those loops are data dependent, you might need to do an amortized analysis. Divide and conquer algorithms can often be analyzed with the Master Theorem or Akra–Bazzi.
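As a hedged sketch of that data-dependent case (the function and its input are invented for illustration): the loops below are nested, yet the total running time is O(n), because the inner index only ever moves forward, which is exactly the kind of argument an amortized analysis makes precise.

```python
def longest_run_of_ones(bits):
    """Length of the longest run of 1s.

    The while-loop is nested inside the for-loop, yet the total work is O(n):
    the inner loop only advances j, and j never moves backwards, so the two
    loops together perform at most 2n index movements in total.
    """
    n = len(bits)
    best = 0
    j = 0
    for i in range(n):
        if j < i:
            j = i
        while j < n and bits[j] == 1:
            j += 1
        if bits[i] == 1:
            best = max(best, j - i)
    return best

print(longest_run_of_ones([1, 1, 0, 1, 1, 1, 0]))  # 3
```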
In some cases, though, the running time analysis can be very subtle. Take union-find, for example: getting the inverse Ackermann running time bound requires pages of proof. And then for things like the Collatz conjecture we have no idea how to even get a finite bound.
Do all algorithms that use the divide-and-conquer approach use recursive functions, or not necessarily?
Binary search is an application of the D&C paradigm. (As follows: split the range into two halves and continue into the half that may contain the key.)
It can be implemented either recursively or non-recursively.
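For concreteness, here is a minimal sketch of both versions (the function names are made up); because only one half is ever kept, the recursion translates directly into a loop:

```python
def binary_search_recursive(a, key, lo=0, hi=None):
    """Recursive binary search; returns an index of key in sorted list a, or -1."""
    if hi is None:
        hi = len(a)
    if lo >= hi:
        return -1
    mid = (lo + hi) // 2
    if a[mid] == key:
        return mid
    if a[mid] < key:
        return binary_search_recursive(a, key, mid + 1, hi)   # keep right half
    return binary_search_recursive(a, key, lo, mid)           # keep left half

def binary_search_iterative(a, key):
    """The same divide step written as a loop: no recursion needed,
    because only one half is ever kept at each step."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    return -1

data = [2, 3, 5, 8, 13, 21]
print(binary_search_recursive(data, 13))  # 4
print(binary_search_iterative(data, 13))  # 4
```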
Recursion is handy when you need to keep both "halves" of a split and queue them for later processing. A common special case is tail recursion, where only one of the halves needs further processing, so the recursive call is the last thing the function does and can be rewritten as a loop. In binary search, you simply drop one of the halves.
In a very broad sense, D&C is the father of all algorithms when stated as "break the problem into easier subproblems of the same kind". This definition also encompasses iterative solutions, often implemented without recursion.
In the book Introduction to Algorithms (Cormen), exercise 1.2-2 asks the following question about comparing implementations of insertion sort and merge sort. For inputs of size n, insertion sort runs in 8n^2 steps while merge sort runs in 64n lg n steps; for which values of n does insertion sort beat merge sort?
Although I am interested in the answer, I am more interested in how to find the answer step by step (so that I can repeat the process to compare any two given algorithms if at all possible).
At first glance, this problem seems similar to finding the break-even point in business calculus, a class I took more than 5 years ago, but I am not sure, so any help would be appreciated.
Thank you
P.S. If my tags are incorrect, this question is incorrectly categorized, or some other convention is being abused here, please keep the chastising to a minimum, as this is my first time posting a question.
Since you want to find when insertion sort beats merge sort, set up the inequality (lg means log base 2):
8n^2 <= 64 n lg n
n^2 <= 8 n lg n
n <= 8 lg n
Solving n - 8 lg n = 0 numerically gives n ≈ 43.56.
So for 2 <= n <= 43, insertion sort beats merge sort. (At n = 1, lg n = 0, so merge sort's step count is 0 and it wins trivially; the interesting range starts at n = 2.)
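If you prefer a step-by-step check over solving the equation analytically, a small script (hypothetical helper names, purely illustrative) compares the two step counts directly and finds the same crossover:

```python
import math

def insertion_steps(n):
    return 8 * n * n                   # 8n^2 steps, as given in the exercise

def merge_steps(n):
    return 64 * n * math.log2(n)       # 64 n lg n steps (lg = log base 2)

# Walk n upward and report the first n where merge sort pulls ahead.
for n in range(2, 100):
    if insertion_steps(n) > merge_steps(n):
        print("merge sort first wins at n =", n)   # prints n = 44
        break
```

The same comparison works for any two cost functions, which is the repeatable process you asked about.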
In usual circumstances, sorting arrays of a few thousand simple items like integers or floats is fast enough that the small differences between implementations just don't matter.
But what if you need to sort N modest-sized arrays that have been generated by some similar process, or that simply have some relatedness?
I am leaving the specifics of the mysterious array generator and the relationships between the generated arrays intentionally vague. It is up to any applicable algorithm to specify as large a domain as possible on which it works and where it is most useful.
EDIT: Let's narrow this by letting the arrays be independent samples. There exists an unchanging probability distribution over the arrays that will be generated. Implicitly, then, there is a stable probability distribution of elements in the arrays, but it is conditional: the elements within an array might not be independent. It seems like it would be extremely hard to make use of relationships between elements within the arrays, but I could be wrong. We can narrow further, if needed, by letting the elements in the arrays be independent. In that case the problem is to effectively learn and use the probability distribution of elements in the arrays.
Here is a paper on a self-improving sorting algorithm. I am pretty strong with algorithms and machine learning, but this paper is definitely not an easy read for me.
The abstract says this:
We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an arbitrary, unknown input distribution. We give such self-improving algorithms for sorting and clustering. The highlights of this work: a sorting algorithm with optimal expected limiting running time ...
In all cases, the algorithm begins with a learning phase during which it adjusts itself to the input distribution (typically in a logarithmic number of rounds), followed by a stationary regime in which the algorithm settles to its optimized incarnation.
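The following is emphatically not the paper's algorithm, just a toy sketch (all names invented) of the learning-phase idea under the narrowed assumptions in the question: sample a few arrays up front, learn approximate quantiles of the element distribution, then use them as bucket boundaries so later arrays can be scattered into small, nearly range-partitioned buckets.

```python
import bisect
import random

class LearnedBucketSorter:
    """Toy illustration of a distribution-adaptive sort (NOT the paper's algorithm).

    Learning phase: pool the elements of a few sample arrays and keep every
    k-th order statistic as a bucket boundary.  Stationary regime: scatter
    each new array into those buckets, sort the small buckets, concatenate.
    """

    def __init__(self, sample_arrays, n_buckets=32):
        pooled = sorted(x for arr in sample_arrays for x in arr)
        step = max(1, len(pooled) // n_buckets)
        self.boundaries = pooled[step::step]          # approximate quantiles

    def sort(self, arr):
        buckets = [[] for _ in range(len(self.boundaries) + 1)]
        for x in arr:
            buckets[bisect.bisect_left(self.boundaries, x)].append(x)
        result = []
        for b in buckets:
            result.extend(sorted(b))   # buckets stay small if the learned quantiles fit
        return result

# Usage: every array is an independent sample from the same unknown distribution.
random.seed(0)

def make_array():
    return [random.gauss(0.0, 1.0) for _ in range(1000)]

sorter = LearnedBucketSorter([make_array() for _ in range(10)])
a = make_array()
assert sorter.sort(a) == sorted(a)
```

The actual paper reaches the "optimal expected limiting running time" mentioned in the abstract with much more careful machinery; this sketch only conveys the learn-then-stationary shape.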
I always have problems evaluating the complexity of a problem. I usually try to find an O(n) solution, but sometimes O(n log n) or even O(n^2) is the best possible one.
One "rule of thumb" I know is that if you have a sorted array and you need to find something, it can probably be done in O(log n). I also know that sorting can't be done quicker than O(n log n). Are there any similar rules an inexperienced programmer can follow? Recurring problems whose complexity you know?
The most troublesome for me is O(n^2), especially when I'm under pressure in an exam and waste time trying to find a better solution.
I hope this isn't too broad or opinion-based a question.
Thanks!
Non-comparison-based sorting can run in O(n) time; the O(n log n) lower bound applies only to comparison sorts. E.g.: radix sort, which is linear when the number of digits is bounded.
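As a concrete illustration, here is a minimal LSD radix sort sketch for non-negative integers (a standard textbook variant, not tuned for production); with a bounded number of digits it runs in O(n):

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers.

    Runs a stable counting sort per digit, so the total time is
    O(d * (n + base)), which is O(n) when the digit count d is bounded.
    """
    if not nums:
        return []
    out = list(nums)
    exp = 1
    max_val = max(out)
    while max_val // exp > 0:
        counts = [0] * base
        for x in out:
            counts[(x // exp) % base] += 1
        for i in range(1, base):                 # prefix sums -> final positions
            counts[i] += counts[i - 1]
        placed = [0] * len(out)
        for x in reversed(out):                  # reverse pass keeps the sort stable
            d = (x // exp) % base
            counts[d] -= 1
            placed[counts[d]] = x
        out = placed
        exp *= base
    return out

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```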
This seems like a good read: http://bigocheatsheet.com/ It contains a list of common algorithms and their space and time complexities. Hope this helps.