As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
I was recently reading an article which mentioned:
For God's sake, don't try sorting a linked list during the interview.
Is there any reason why the author wrote this? The reason is not immediately clear. I am aware that merge sort works on linked lists in O(n log n) time, so what's wrong with that? Am I missing something obvious?
EDIT:
Any reason why this question was voted to close? I'm honestly curious and merely looking for some answers or interesting points.
I have no way of knowing why the author of the blog wrote what he did. If I had to guess, I'd say what was really meant was something along the lines of:
Don't assume that efficiently sorting a linked list would be as easy as sorting a data structure that provides random access to its elements. If you do end up relying on being able to sort a linked list, be prepared to explain what a suitable algorithm might be, and to discuss its complexity.
I think you'll find that, although it's possible to sort a linked list using merge sort, the code to do so efficiently is somewhat involved. It's not something you'd want to develop while standing at the white board in the middle of an interview.
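To give a feel for what "somewhat involved" means in practice, here is a rough, illustrative sketch of a top-down merge sort over a singly linked list in Python; the minimal Node class is a made-up stand-in, not anything from the article:

    # Hypothetical minimal singly linked list node.
    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def merge_sort(head):
        # A list of zero or one nodes is already sorted.
        if head is None or head.next is None:
            return head
        # Split the list in half using slow/fast pointers.
        slow, fast = head, head.next
        while fast and fast.next:
            slow = slow.next
            fast = fast.next.next
        mid = slow.next
        slow.next = None
        # Recursively sort each half, then merge them.
        return merge(merge_sort(head), merge_sort(mid))

    def merge(a, b):
        # Splice two sorted lists together behind a dummy head node.
        dummy = Node(None)
        tail = dummy
        while a and b:
            if a.value <= b.value:
                tail.next, a = a, a.next
            else:
                tail.next, b = b, b.next
            tail = tail.next
        tail.next = a or b
        return dummy.next

The splitting step alone (no random access, so you walk the list with two pointers) is the kind of detail that is easy to get wrong at a whiteboard.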
The operation of getting/setting elements at specific indices is used by most sorting algorithms and needs to be fast for those algorithms to be fast. It is normally O(1) for, say, an array-backed list, but for a linked list it is O(n), and that makes index-based sorting terribly inefficient. Perhaps this captures the reasoning behind your quote.
Closed 10 years ago.
I see people biased towards the DP approach over the greedy approach because it can solve optimization problems. Which one do you think is preferable? I need to collect arguments in favor of the preferable technique to argue with my mates. LOL. OK, DP is used to solve problems that have optimal substructure and to which the principle of optimality applies. But is that enough for DP to be better than the greedy approach?
Your question is meaningless without knowing what problem you are trying to solve.
Dynamic Programming is a tool. It is useful for solving a certain class of problems.
Greedy algorithms are another tool. They are useful in other situations.
It's like asking "Which is better - a hammer or a saw"?
The answer will be very different depending on what you are trying to do.
Let's take the coin change example. If you take the greedy approach you might not get the correct result in many cases, but if you take the DP approach you will always get the right result. In fact, for an arbitrary coin system, DP is essentially the only way to guarantee the optimal answer.
To answer your question: forget about optimality; with a greedy approach you might not even get a correct solution in the first place for certain kinds of problems.
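To make that concrete, here is an illustrative sketch in Python; the coin system {1, 3, 4} and the amount 6 are made-up values chosen so that greedy misses the optimum while DP finds it:

    # Minimum number of coins needed to make `amount`.

    def greedy_coins(coins, amount):
        # Always grab as many of the largest remaining coin as possible.
        count = 0
        for c in sorted(coins, reverse=True):
            count += amount // c
            amount %= c
        return count if amount == 0 else None

    def dp_coins(coins, amount):
        # best[a] = fewest coins needed to make amount a.
        INF = float("inf")
        best = [0] + [INF] * amount
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and best[a - c] + 1 < best[a]:
                    best[a] = best[a - c] + 1
        return best[amount] if best[amount] != INF else None

    print(greedy_coins([1, 3, 4], 6))  # 3 coins (4 + 1 + 1)
    print(dp_coins([1, 3, 4], 6))      # 2 coins (3 + 3)

Greedy happens to be optimal for some coin systems (such as standard currency denominations), but DP is correct for all of them.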
Closed 10 years ago.
I wonder if there is a place that provides many algorithms.
I want to know the details of some process scheduling algorithms.
For example, if I want some information about networking, I check the RFC documents. I want to know if there is something like the RFCs in the field of OS algorithms.
Furthermore, is there a place where I can read lots of algorithms in many fields? In my view, reading algorithms from many fields can help me a lot; someday, maybe I can combine two algorithms to solve one particular problem.
Thanks.
How about this: List of Algorithms. You can also study Donald Knuth's The Art of Computer Programming, Vols. 1-4.
Wikipedia has lots of them. I don't think there is any organization that provides algorithms specifically for operating systems.
Wikipedia holds a lot of algorithms.
Use the "See also" sections there.
Closed 11 years ago.
Please suggest some good materials, books, or links that explain how to find the complexity of algorithms implemented with linked lists. My question may sound silly to some of you, but please reply. Please help.
Introduction to Algorithms is the canonical textbook.
It is possible to find a complete PDF of this book online. I'm not going to provide a link, though, because I'm not sure if these copies are legal.
If all you're after is a quick reference, then Wikipedia is the best place to start. For instance, see the table at http://en.wikipedia.org/wiki/Linked_list#Tradeoffs.
Gautam, I have a feeling that you are trying to skip the 'complexity analysis' portion and jump straight to linked-list complexity analysis.
Just so you know, if you truly want to understand it, you have to understand two parts:
1. How to compute the complexity of an algorithm.
2. The flow of the algorithm in question.
Linked-list complexity analysis is not something you can understand without understanding (1).
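To illustrate what (1) looks like for a linked list, here is a tiny made-up example in Python; the Node class and the get_at helper are hypothetical, just for counting the work:

    # Hypothetical minimal singly linked list node.
    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def get_at(head, index):
        # Counting the work: the loop body runs `index` times, so the
        # worst case grows linearly with the list length, i.e. O(n).
        # The equivalent arr[index] on an array is a single step, O(1).
        node = head
        for _ in range(index):
            node = node.next
        return node.value

Once you can do this kind of counting for each operation, the flow of the algorithm (point 2) tells you how often each operation runs, and the overall complexity follows.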
That said, if you just want ready answers (trust me, that wouldn't help you in interviews), you can refer to the book "Data Structures, Algorithms, and Applications in C++" (http://www.mhhe.com/engcs/compsci/sahni/).
Highly recommended for beginners.
Immensely boring, and I doubt you will read it again. :P
You will find answers to your specific questions there.
After you are done with that book, I would suggest going on to Data Structures Using C and C++ by Langsam/Tenenbaum.
Closed 9 years ago.
Asked in a recent interview:
What data structure would you use to implement spell correction in a document. The goal is to find if a given word typed by the user is in the dictionary or not (no need to correct it).
What is the complexity?
I would use a "Radix," or "Patricia," tree to index the dictionary. See here, including an example of its use to index dictionary words: https://secure.wikimedia.org/wikipedia/en/wiki/Radix_tree. There is a useful discussion at that link of its complexity.
If I'm understanding the question correctly, you are given a dictionary (or a list of "correct" words) and are asked to say whether an input word is in the dictionary. So you're looking for data structures with very fast lookup times. I would go with a hash table.
I would use a DAWG (Directed Acyclic Word Graph) which is basically a compressed Trie.
These are commonly used in algorithms for Scrabble and other words games, like Boggle.
I've done this before. The TWL06 Scrabble dictionary with 170,000 words fits in a 700 KB structure both on disk and in RAM.
The Levenshtein distance tells you how many edits you need to change one string into another; by finding the candidate with the fewest edits you are able to suggest correct words (also see the Damerau-Levenshtein distance).
To increase performance, you should not calculate the distance against your whole dictionary; constrain the candidates with some heuristic, for instance to words that start with the same first letter.
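A minimal sketch of the standard Levenshtein recurrence in Python (the candidate-filtering heuristic mentioned above is not shown):

    def levenshtein(a, b):
        # prev[j] holds the distance between the processed prefix of `a`
        # and the first j characters of `b`.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    print(levenshtein("kitten", "sitting"))  # 3

This runs in O(len(a) * len(b)) per candidate word, which is why the filtering heuristic matters.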
Bloom filter. False positives are possible, but false negatives are not. As you know the dictionary in advance, you can eliminate the false positives by using a perfect hash for your input (the dictionary). Or you can use the Bloom filter as an auxiliary data structure in front of your actual dictionary data structure.
Edit: Of course, lookup complexity is O(1) for a Bloom filter (with a fixed number of hash functions).
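Here is a minimal, illustrative Bloom filter sketch in Python; the bit-array size, the number of hash functions, and the double-hashing scheme are arbitrary choices for demonstration, not a tuned implementation:

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1 << 20, num_hashes=7):
            self.size = size_bits
            self.k = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, word):
            # Derive k bit positions from one digest via double hashing.
            digest = hashlib.sha256(word.encode("utf-8")).digest()
            h1 = int.from_bytes(digest[:8], "big")
            h2 = int.from_bytes(digest[8:16], "big")
            return [(h1 + i * h2) % self.size for i in range(self.k)]

        def add(self, word):
            for p in self._positions(word):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, word):
            # No false negatives; false positives are possible.
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(word))

    bf = BloomFilter()
    bf.add("hello")
    print(bf.might_contain("hello"))  # True
    print(bf.might_contain("helo"))   # almost certainly False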
Closed 12 years ago.
Christoph Koutschan has set up an interesting survey that tries to identify the most important algorithms "in the world". Since one of the criteria is that "the algorithm has to be widely used", I thought that extending the survey to the huge group of users at Stack Overflow would be a natural thing to do.
So, what do you think? Which algorithms deserve a place in the Algorithm Hall of Fame?
I somewhat like this algorithm:
1. Write code.
2. Test code. If buggy, go to step 3. If not, go to step 4.
3. Rewrite code, then go back to step 2.
4. Get somebody else to test your code. If they discover any bugs, return to step 3, otherwise go to step 5.
5. Congratulations, your code has no obvious bugs! Now you wait for a user to stumble upon a hidden one, in which case you return to step 3 once again unless you're lucky and are no longer providing support for the code in question.
I'd say binary search since it's usually the first algorithm people learn. And the RSA encryption algorithms are pretty important.
Hashing, since it's the basis for so much in security, data structures, etc. Hashing algorithms have generated a lot of Ph.D. dissertations.