Can anyone provide a complete list of metrics for rating an algorithm?
For example, my list starts with:
elegance
readability
computational efficiency
space efficiency
correctness
This list is in no particular order, and I suspect it isn't nearly complete. Can anyone provide a more complete list?
An exhaustive list may be difficult to put in a concise answer, since some important qualities will only apply to a subset of algorithms, like "levels of security offered by an encryption system for particular key sizes".
In any case, I'm interested to see more additions people might have. Here are a few:
optimality (mathematically proven to be the best possible)
accuracy / precision (for heuristics)
bounds on best-, worst-, and average-case behaviour
pathological cases (asymptotic blow-up on adversarially chosen bad data, or encryption systems which do poorly for particular "weak" keys); see the timing sketch after this list
safety margin (encryption systems are breakable given enough time and resources)
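The bounds and pathological-case points can be checked empirically as well as analytically. Below is a minimal Python sketch (the name `naive_quicksort` and the input sizes are illustrative, not from any particular library) that times a deliberately naive first-element-pivot quicksort on random versus already-sorted input, the classic adversarial case:

```python
import random
import timeit

def naive_quicksort(a):
    """First-element-pivot quicksort: O(n log n) on average,
    O(n^2) on already-sorted input (the pathological case)."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    return (naive_quicksort([x for x in rest if x < pivot])
            + [pivot]
            + naive_quicksort([x for x in rest if x >= pivot]))

n = 500  # kept small so the worst case stays within Python's recursion limit
random_input = random.sample(range(n), n)
sorted_input = list(range(n))

t_avg = timeit.timeit(lambda: naive_quicksort(random_input), number=20)
t_bad = timeit.timeit(lambda: naive_quicksort(sorted_input), number=20)
print(f"random input: {t_avg:.4f}s   sorted (pathological): {t_bad:.4f}s")
```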
Quicksort outperforms heapsort in practice. Mergesort is the only stable one of the three (in plain-vanilla implementations). So either quicksort or mergesort gets used, depending on the situation at hand (in-place in-memory sorting, external sorting, etc.).
So is there ever a case where the heap data structure is indeed used for sorting? No matter how much I 'Google' or try to come up with applications, almost always one chooses merge/quick-sort over heapsort. I've never encountered a case where heap sort is actually used in my professional life either. What would actually be a good use-case for heapsort in practice (if at all), out of curiosity?
Some benefits off the top of my head (I will amend this list after I do some more research):
Almost-sorted sets are better served by smoothsort, an adaptive heapsort variant; plain heapsort does not take advantage of existing order.
Space-conscious environments often prefer the O(1) space complexity of heapsort. Think embedded systems.
Huge data sets benefit from heapsort's guaranteed O(n log n) running time, as opposed to quicksort's merely expected good running time (its worst case is O(n^2)). Think medical, space, life-support, etc. See the sketch below this list.
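To make the space and worst-case claims concrete, here is a minimal in-place heapsort sketch in Python (illustrative only, not production code): it uses O(1) auxiliary space and is O(n log n) in the worst case, with no pathological inputs.

```python
def heapsort(a):
    """Sort list a in place: O(n log n) worst case, O(1) auxiliary space."""
    n = len(a)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree rooted at `root`,
        # considering only indices <= end.
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1  # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    # Build a max-heap bottom-up.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)

    # Repeatedly swap the max to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)

data = [5, 2, 9, 1, 7]
heapsort(data)
print(data)  # [1, 2, 5, 7, 9]
```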
I have used a random-mutation hill climbing algorithm as part of a project I am working on, but was wondering whether it would be better to use simulated annealing to reduce the chance of getting stuck in local optima.
The question I have is: which one tends to be faster, in your experience? Obviously there is a huge wealth of applications for both algorithms; this is more of a generalised pondering, if you like.
Thank you.
There's no way to tell in advance (unless your project is a 100% match to a well-studied academic problem like a pure TSP, and even then ...). It depends on your project's constraints and size, and on whether you implement the algorithms correctly.
So, to be sure, you have to implement both algorithms (and many others, like Tabu Search, ...) and use a Benchmarker like this one to compare them.
That being said, I'd put my money on Simulated Annealing over Random Mutation Hill Climbing any day :)
Note: Simulated Annealing is a short but tricky algorithm: I only got it right in my 3rd implementation, and I've seen plenty of wrong implementations (that still output a pretty OK solution) in blogs, etc. It's easier to just reuse existing optimization libraries.
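For reference, here is a minimal sketch of the core loop, assuming a generic minimization problem; the function names and the geometric cooling schedule are illustrative choices, not a canonical implementation. The part that is most often implemented wrong is the acceptance criterion: worse moves must sometimes be accepted, with probability exp(-delta / t).

```python
import math
import random

def simulated_annealing(initial, neighbor, energy,
                        t_start=1.0, t_end=1e-3, alpha=0.995):
    """Minimize `energy` starting from `initial`.

    neighbor -- function returning a random neighboring solution
    energy   -- function returning the cost to minimize
    """
    current, current_e = initial, energy(initial)
    best, best_e = current, current_e
    t = t_start
    while t > t_end:
        candidate = neighbor(current)
        candidate_e = energy(candidate)
        delta = candidate_e - current_e
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_e = candidate, candidate_e
            if current_e < best_e:
                best, best_e = current, current_e
        t *= alpha  # geometric cooling schedule
    return best, best_e

# Toy usage: minimize f(x) = x^2 over the integers with +/-1 moves.
best, e = simulated_annealing(
    initial=40,
    neighbor=lambda x: x + random.choice((-1, 1)),
    energy=lambda x: x * x)
print(best, e)
```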
I wonder if there is a place that provides many algorithms.
I want to know the details of some process scheduling algorithms.
For example, if I want some information about networking, I check the RFC documents. I want to know whether there is something like the RFCs in the field of OS algorithms.
Furthermore, is there a place where I can read lots of algorithms in many fields? In my view, reading algorithms from many fields can help me a lot; someday, maybe I can combine two algorithms to solve one particular problem.
Thanks.
How about this: the List of Algorithms. You can also study Donald Knuth's The Art of Computer Programming, Vols. 1-4.
Wikipedia has lots of them. I don't think there is any organization that provides algorithms specifically for operating systems, though.
Wikipedia holds a lot of algorithms; use the "See also" section there.
Christoph Koutschan has set up an interesting survey that tries to identify the most important algorithms "in the world". Since one of the criteria is that "the algorithm has to be widely used", I thought that extending the survey to the huge group of users at Stack Overflow would be a natural thing to do.
So, what do you think? Which algorithms deserve a place in the Algorithm Hall of Fame?
I somewhat like this algorithm:
1. Write code.
2. Test code. If buggy, go to step 3. If not, go to step 4.
3. Rewrite code, then go back to step 2.
4. Get somebody else to test your code. If they discover any bugs, return to step 3; otherwise go to step 5.
5. Congratulations, your code has no obvious bugs! Now you wait for a user to stumble upon a hidden one, in which case you return to step 3 once again, unless you're lucky and are no longer providing support for the code in question.
I'd say binary search, since it's usually the first algorithm people learn. And the RSA encryption algorithm is pretty important.
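Since binary search is also famously easy to get wrong (off-by-one bounds, and in fixed-width-integer languages the classic `(lo + hi) / 2` overflow), here is a minimal Python sketch for reference:

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        # Python ints don't overflow; in C/Java use lo + (hi - lo) / 2.
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 8, 13], 8))  # 3
```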
Hashing, since it's the basis for so much in security, data structures, etc. Hashing algorithms have generated a lot of Ph.D. dissertations.
What is the best fuzzy matching algorithm (fuzzy logic, n-gram, Levenshtein, Soundex, ...) to process more than 100,000 records in the least time?
I suggest you read the articles by Navarro mentioned in the References section of the Wikipedia article titled Approximate string matching. Making your decision based on actual research is always better than on suggestions by random strangers, especially if performance on a known set of records is important to you.
It massively depends on your data; certain record types can be matched better than others. For example, a postcode has a defined format, so it can be compared in a different way from free-form strings. People can be matched on initials and date of birth, or other combinations, etc.
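For reference, here is a minimal Levenshtein distance sketch in Python (two-row dynamic programming, O(len(s) * len(t)) time per pair). Note that for 100,000+ records you would normally not compare every pair directly: you would index first (n-grams, BK-trees, phonetic keys like Soundex, etc.) and run the edit distance only on candidate matches.

```python
def levenshtein(s, t):
    """Edit distance between strings s and t."""
    prev = list(range(len(t) + 1))  # row for the empty prefix of s
    for i, cs in enumerate(s, 1):
        curr = [i]  # distance from s[:i] to the empty string
        for j, ct in enumerate(t, 1):
            cost = 0 if cs == ct else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```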