Unrolled Linked Lists and Skip Lists - data-structures

Does anyone have a good resource to refer to for Unrolled Linked Lists and Skip Lists?
I just came across the two and can't get the hang of them. I am referring to Data Structures and Algorithms Made Easy by Narasimha Karumanchi. Although it is a good book, I do not understand the two kinds of lists properly from it.
So if someone could explain the two, and their advantages, with the help of a realistic use case, it would be really nice.
Thanks in advance :)

I think these two websites are good explanatory sources:
Open Data Structures and GeeksforGeeks.
http://opendatastructures.org/ods-python/4_2_SkiplistSSet_Efficient_.html
http://www.geeksforgeeks.org/unrolled-linked-list-set-1-introduction/
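To make the unrolled linked list concrete, here is a minimal Python sketch (not taken from either resource; names like `NODE_CAPACITY` are made up for the example). The idea is that each node stores a small array of elements rather than a single value, so a traversal touches far fewer nodes and is more cache-friendly than a classic linked list. The skip list is explained step by step in the Open Data Structures chapter linked above, so I won't repeat it here.

```python
NODE_CAPACITY = 4   # small fixed block size, chosen arbitrarily for this sketch

class _Node:
    def __init__(self):
        self.items = []          # holds at most NODE_CAPACITY elements
        self.next = None

class UnrolledLinkedList:
    def __init__(self):
        self.head = _Node()

    def append(self, value):
        # Walk to the last node.
        node = self.head
        while node.next is not None:
            node = node.next
        if len(node.items) < NODE_CAPACITY:
            node.items.append(value)
        else:
            # Node is full: split it, moving half the items to a new node,
            # then append to the new node.
            new_node = _Node()
            half = NODE_CAPACITY // 2
            new_node.items = node.items[half:]
            node.items = node.items[:half]
            node.next = new_node
            new_node.items.append(value)

    def __iter__(self):
        node = self.head
        while node is not None:
            yield from node.items
            node = node.next

# Usage: order is preserved, but values are packed into small blocks.
ull = UnrolledLinkedList()
for i in range(10):
    ull.append(i)
print(list(ull))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

A real implementation would also handle insertion and deletion in the middle (merging half-empty nodes), which is where the structure pays off most compared to a plain linked list.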

Related

Understanding selection, iteration, arrays, counting, etc. I know the concepts behind these terms, but don't know how to use them

I am a student taking the computer science IGCSE course at my school. I am confident with the theory paper and with most parts of the programming paper, except for the parts where I am required to write pseudocode for a given program. I have searched for tutorials and kept looking into the questions I have. I am having trouble using the terms stated above in actual pseudocode, and I don't get their overall structure. I figured that Stack Overflow would be a great website to get different answers from people. I am eager to learn pseudocode and wish to understand the very basics of its structure, if anyone is willing to explain it.
Thanks for your time,
really appreciate it!

Do I need to remember the hard-code for different sorts?

I am currently a freshman in college doing some self-study on the different sorting algorithms.
My study source does provide the code, and I have practised with it (coding the sorts based on the concepts behind them). At the moment, I can write selection sort with just a little bit of trouble.
Am I required to memorize the code? I know the differences between the sorts and the concepts behind them. Do I need to memorize the pseudocode as well? Will interviewers ever ask you to produce the code on the spot?
There is no need to memorize the exact code syntax, but it is important to understand the logic behind the sorting algorithms (i.e. be able to explain them using pseudocode).
I've been asked in interviews how to do some basic sorting algorithms like a bubble sort, but nothing very complex. I was not required to write the exact code in any particular language, just to show that I know the logic and can explain how it works.
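For what it's worth, "knowing the logic" in an interview usually just means being able to sketch something like the following (Python here purely as an example; pseudocode on a whiteboard is usually fine):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until a full pass makes no swaps."""
    a = list(items)                  # work on a copy
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):   # the last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:              # already sorted, stop early
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```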
Hello and welcome to Stack Overflow. I'm answering your post below, even though it will probably be closed as too broad or primarily opinion-based, because this Q&A site is oriented towards coding questions. You might want to ask this on another site, like Programmers Stack Exchange.
Do I need to memorize the pseudocode as well?
Not really: every decent language has a standard library that offers state-of-the-art implementations. What you really need to remember is the complexity and mechanism of each sorting algorithm, so that you can choose the best fit for your dataset when you need it.
And when you really do need to dig into the pseudocode again, there are books (like The Art of Computer Programming by Donald Knuth), Wikipedia, and many other resources online.
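For example, in Python "reaching for the standard library" usually just looks like this (the data is made up for the example):

```python
# Python's built-in sorted() (Timsort in CPython) is the state-of-the-art
# implementation the standard library already gives you.
people = [("alice", 31), ("bob", 25), ("carol", 28)]
by_age = sorted(people, key=lambda person: person[1])   # sort by the age field
print(by_age)   # [('bob', 25), ('carol', 28), ('alice', 31)]
```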
Will interviewers ever ask you to produce the code on the spot?
Yes, they will. It has happened to me at least five times. Most of the time they'll understand that you might not remember the full pseudocode on the spot, but they expect you to know the mechanism and complexity and to be able to rederive the algorithm.
That said, in interviews you are usually compared with other people taking the same interview, and between two candidates who pass, they'll choose the one who did best on the tests. So you might lose a job opportunity because someone else remembered those algorithms better.

Material and Information to improve algorithmic knowledge

Lately I have been trying to improve my algorithmic skills, and at this point I find myself out of good material for solving grid problems based on DFS and BFS. I managed to do http://www.spoj.pl/problems/POUR1/ with brute-force logic, but I recently googled around and found that the problem can be solved with BFS. However, I can't figure out exactly how to go about it. Can someone please point me to some text to read, or give some kind of explanation of the above-mentioned problem, so I can add this technique to my skill set? It would also be very kind if you could help me with these techniques for problems like http://www.codechef.com/problems/MMANT/. Please help, I am really stuck on this kind of problem and can't move on. Finally, it would be great if you could provide a list of good questions on binary indexed trees and segment trees, and some more examples of their usage.
Thanks for the help!! :)
One resource I've found useful is The Algorithmist:
The Algorithmist is a resource dedicated to anything algorithms, from the practical realm to the theoretical realm. There are also links and explanations for problem sets.
The Algorithm Design Manual by Steven Skiena is also extremely useful, especially the second part.
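As for POUR1 itself, the BFS idea is to treat each pair of water levels as a node in a graph and each operation as an edge, so the first time you reach a goal state you have used the minimum number of steps. A rough sketch, assuming the classic two-vessel formulation (you may fill a vessel, empty it, or pour one into the other, and you stop as soon as either vessel holds the target amount):

```python
from collections import deque

def min_pour_steps(a, b, target):
    """BFS over (level_a, level_b) states; each fill/empty/pour is one edge.
    Returns the minimum number of steps, or -1 if the target is unreachable."""
    if target > max(a, b):
        return -1
    start = (0, 0)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == target or y == target:
            return dist[(x, y)]
        pour_ab = min(x, b - y)          # how much can move from A to B
        pour_ba = min(y, a - x)          # how much can move from B to A
        neighbours = [
            (a, y), (x, b),              # fill A, fill B
            (0, y), (x, 0),              # empty A, empty B
            (x - pour_ab, y + pour_ab),  # pour A -> B
            (x + pour_ba, y - pour_ba),  # pour B -> A
        ]
        for state in neighbours:
            if state not in dist:
                dist[state] = dist[(x, y)] + 1
                queue.append(state)
    return -1

print(min_pour_steps(3, 5, 4))   # 6 steps in the classic 3/5-litre puzzle
```

The actual SPOJ problem has its own input format and edge cases (for instance, the target is unreachable unless it is a multiple of gcd(a, b)), so treat this only as an illustration of the state-space BFS idea, not a ready-made solution.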

Learning to Program

I've been told that the best way to learn a programming language is to implement some data structures in it. I am currently learning Ruby, and I would really love to code some data structures like tries, AVL trees, etc. Are there any sites out there which outline how to go about doing this and can suggest exercises and optimizations along the way?
Any help would be greatly appreciated. Thanks.
You could also start with Ruby Code Kata. These are seemingly real-world problems, almost always with an algorithmic problem lying underneath.
There are forums there for discussing each kata, which closes the feedback loop for learning.
Here's a free online book on creating data structures with Ruby:
http://www.brpreiss.com/books/opus8/
I would recommend learning the basics first, since they lay the foundation. Start simple with things like linked lists, binary search trees, stacks etc.
You can also look at TopCoder Tutorials.
PuzzleNode.com helped me.
There are 15 problems. You can finish them in a day or two, longer if you plan on test-driving the solutions. I like to think of each problem as being larger than a kata but smaller than trying to implement a tic-tac-toe game in Ruby. You'll be exposed to parsing in Ruby, data structures, and possibly gems, depending on your implementation. They're also fun; good luck!

Learning efficient algorithms

Up until now I've mostly concentrated on how to design code properly and make it as readable and maintainable as possible. So I have always chosen to learn about the higher-level details of programming, such as class interactions, API design, etc.
Algorithms I never found particularly interesting. As a result, even though I can come up with a good design for my programs, and even if I can come up with a solution to a given problem, it is rarely the most efficient one.
Is there a particular way of thinking about problems that helps you come up with as efficient a solution as possible, or is it simply a matter of practice and/or memorization?
Also, what online resources can you recommend that teach you various efficient algorithms for different problems?
Data dominates. If you design your program around the right abstract data types (ADTs), you often get a clean design, the algorithms follow quite naturally, and when performance is lacking you should be able to "plug in" more efficient ones.
A strong background in maths and logic helps here, as it allows you to visualize your program at a high level as the interaction between functions, sets, graphs, sequences, etc. You then decide whether the sets need to be ordered (balanced BST, O(lg n) operations) or not (hash tables, O(1) operations), which operations need to be supported on sequences (vector-like or list-like), and so on.
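A tiny Python illustration of that ordered-versus-hashed decision (my own example, not from the answer above):

```python
import bisect

# Hash-based set: O(1) average membership, but no useful ordering.
seen = {41, 7, 23}
print(15 in seen)                              # False

# Sorted list + binary search: O(lg n) membership, and you also get
# order-based queries (e.g. "smallest element >= x") almost for free.
ordered = sorted(seen)                         # [7, 23, 41]
i = bisect.bisect_left(ordered, 15)
print(i < len(ordered) and ordered[i] == 15)   # membership: False
print(ordered[i])                              # successor of 15: 23
```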
If you want to learn some algorithms, get a good book such as Cormen et al. and try to implement the main data structures yourself (a small heap sketch follows this list):
binary search trees
generic binary search trees (that work on more than just int or strings)
hash tables
priority queues/heaps
dynamic arrays
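As a rough example of what such an exercise might look like, here is a minimal array-backed binary min-heap (the "priority queues/heaps" item above). In real code you would just use Python's heapq module; this is purely a learning sketch:

```python
class MinHeap:
    """A minimal array-backed binary min-heap, written as a learning exercise."""

    def __init__(self):
        self._a = []

    def push(self, value):
        self._a.append(value)
        i = len(self._a) - 1
        # Sift up: swap with the parent while the heap property is violated.
        while i > 0:
            parent = (i - 1) // 2
            if self._a[i] < self._a[parent]:
                self._a[i], self._a[parent] = self._a[parent], self._a[i]
                i = parent
            else:
                break

    def pop(self):
        """Remove and return the smallest element (assumes the heap is non-empty)."""
        a = self._a
        a[0], a[-1] = a[-1], a[0]
        smallest = a.pop()
        # Sift down: swap with the smaller child until the heap property holds again.
        i = 0
        while True:
            left, right = 2 * i + 1, 2 * i + 2
            child = i
            if left < len(a) and a[left] < a[child]:
                child = left
            if right < len(a) and a[right] < a[child]:
                child = right
            if child == i:
                break
            a[i], a[child] = a[child], a[i]
            i = child
        return smallest

h = MinHeap()
for x in [5, 1, 9, 3]:
    h.push(x)
print([h.pop() for _ in range(4)])   # [1, 3, 5, 9]
```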
Introduction to Algorithms is a great book to get you thinking about the efficiency of different algorithms and data structures.
The authors of the book also teach an algorithms course at MIT; you can find most of the lectures online.
I would say that in coming up with good algorithms (which is actually part of good design IMHO), you have to develop a way of thinking. This is best done by studying algorithm design. By study I don't mean just knowing all the common algorithms covered in a textbook, but actually understanding how and why they work, and being able to apply the basic idea contained in them to actual problems you are trying to solve.
I would suggest reading a good book on algorithms (my favourite is CLRS). For an online resource I would recommend the series of articles in the TopCoder Algorithm Tutorials.
I do not understand why you would mention practice and memorization in the same breath. Memorization won't help you at all (you probably already know this), but practice is essential. If you cannot apply what you have learned, it's not really learning. You can practice at various online programming contest/puzzle sites like SPOJ, Project Euler and PythonChallenge.
Recommendations:
First of all, I recommend the book Introduction to Algorithms, Second Edition, by Cormen et al. It's a great book that has most (if not all) of the algorithms you will need. (Some of the more important topics are sorting algorithms, shortest paths, dynamic programming, and many data structures like BSTs, hash maps and heaps.)
Another great way to learn algorithms is http://ace.delos.com/usacogate, which is great practice after the beginning.
As for your question: you will just get used to writing good, fast-running code. After a little practice you simply won't want to write inefficient code any more.
While I think #larsmans is correct inasmuch as understanding logic and maths is a fast way to understanding how to choose useful ADTs for solving a given problem, studying existing solutions may be more instructive for those of us who struggle with those topics. In particular, reviewing the code of established open-source software (OSS) that solves a problem similar to the one you're interested in.
A particularly good way to approach this kind of study is to review the unit tests of such a project. Apache Lucene, for example, has a source control repository containing numerous examples. While the tests don't reveal the underlying algorithms, they help you trace to the particular functionality that solves a certain problem. That leads to an opportunity to study its innards, i.e. an interesting algorithm. In Lucene's case, inverted indices come to mind.
While this does not guarantee that the algorithm you discover is the best, it's likely one that has received a lot of scrutiny and probably comes from a project with an active mailing list that may answer your questions. So it's a good way to find a solution that is probably better than what most of us would come up with on our own.
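To make the inverted-index example concrete, here is a toy sketch of the underlying idea. This is in no way Lucene's implementation; it only shows the basic term-to-documents mapping such indices are built around (the documents are made up):

```python
from collections import defaultdict

# Toy inverted index: maps each term to the set of document ids containing it.
docs = {
    1: "skip lists are probabilistic",
    2: "unrolled linked lists pack elements into blocks",
    3: "linked lists are simple",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

# Query: documents containing both "linked" and "lists".
print(index["linked"] & index["lists"])   # {2, 3}
```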

Resources