Implementing data structures/algorithms in languages that already support them

Does it make sense to implement your own version of data structures and algorithms in your language of choice even if they are already supported, knowing that care has been taken to tune them for the best possible performance?

Sometimes, yes. You might need to optimise the data structure for your specific case, or give it some specific extra functionality.
A Java example is Apache Lucene (a mature, widely used information retrieval library). Although the Map<K,V> interface and its implementations already exist, a Map<Integer,Integer> is not good enough for performance reasons, since it boxes every int into an Integer; a more optimized IntToIntMap was developed for this purpose instead.
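To make the boxing point concrete, here is a minimal sketch of how such a primitive-keyed map can work; the class and method names are made up for this example, and Lucene's real implementation differs in detail. The idea is open addressing over two flat int arrays, so no Integer objects are ever allocated:

```java
// Sketch of an int-to-int hash map with open addressing and linear probing.
// Hypothetical names; Lucene's real IntToIntMap differs in detail.
public class IntToIntMap {
    private static final int EMPTY = Integer.MIN_VALUE; // sentinel: that key is unsupported
    private int[] keys;
    private int[] values;
    private int size;

    public IntToIntMap(int capacity) {
        // Size the table to roughly twice the requested capacity, rounded to a
        // power of two so we can mask instead of mod. A real implementation
        // would also grow on demand as the table fills.
        int cap = Integer.highestOneBit(Math.max(capacity, 16) * 2);
        keys = new int[cap];
        values = new int[cap];
        java.util.Arrays.fill(keys, EMPTY);
    }

    private int slot(int key) {
        int h = key * 0x9E3779B9;          // cheap integer mix
        return (h ^ (h >>> 16)) & (keys.length - 1);
    }

    public void put(int key, int value) {
        int i = slot(key);
        while (keys[i] != EMPTY && keys[i] != key) {
            i = (i + 1) & (keys.length - 1); // linear probing
        }
        if (keys[i] == EMPTY) size++;
        keys[i] = key;
        values[i] = value;
    }

    public int get(int key, int missingValue) {
        int i = slot(key);
        while (keys[i] != EMPTY) {
            if (keys[i] == key) return values[i];
            i = (i + 1) & (keys.length - 1);
        }
        return missingValue;
    }

    public int size() { return size; }
}
```

Besides avoiding allocation, keeping keys and values in two flat arrays is far friendlier to the CPU cache than a web of HashMap.Entry objects.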

The question contains a false assumption, that there's such a thing as "best possible performance".
If the already-existing code had been tuned for best possible performance with your particular usage patterns, then it would be impossible for you to improve on it with respect to performance, and attempting to do so would be futile.
However, it wasn't tuned for best possible performance with your particular usage. Assuming it was tuned at all, it was designed to have good all-around performance on average, taken across a lot of possible usage patterns, some of which are irrelevant to you.
So, it is possible in principle that by implementing the code yourself, you can apply some tweak that helps you and (if the implementers considered that tweak at all) presumably hinders some other user somewhere else. But that's OK, they don't have to use your code. Maybe you like cuckoo hashing and they like linear probing.
Reasons that the implementers might not have considered the tweak include: they're less smart than you (rare, but it happens); the tweak hadn't been invented when they wrote the code and they aren't following the state of the art for that structure / algorithm; they have better things to do with their time and you don't. In those cases perhaps they'd accept a patch from you once you're finished.
There are also reasons other than performance that you might want a data structure very similar to one that your language supports, but with some particular behavior added or removed. If you can't implement that on top of the existing structure then you might well do it from scratch. Obviously it's a significant cost to do so, up front and in future support, but if it's worth it then you do it.
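As a small Java illustration of adding behavior on top of an existing structure rather than starting from scratch: java.util.LinkedHashMap exposes a removeEldestEntry hook, so a size-bounded LRU cache takes only a few lines (the LruCache name and its capacity are made up for this sketch):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded LRU cache built on top of LinkedHashMap
// instead of being written from scratch.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the oldest entry once over the bound
    }
}
```

Only when the existing structure offers no such extension point does building from scratch become attractive.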

It may make sense when you are using a compiled language (like C or assembly). When using an interpreted language you will probably suffer a performance loss, because the native structures are implemented in already-compiled code, while your replacement has to be interpreted.
You will probably do it only when the native structure or algorithm lacks something you need.

Related

When would it be a good idea to implement data structures rather than using built-in ones?

What is the purpose of creating your own linked list, or another data structure like a map, queue, or hash function, in some programming language, instead of using the built-in ones? Why should I create it myself? Thank you.
Good question! There are several reasons why you might want to do this.
For starters, not all programming languages ship with all the nice data structures that you might want to use. For example, C doesn't have built-in libraries for any data structures (though it does have bsearch and qsort for arrays), so if you want to use a linked list, hash table, etc. in C you need to either build it yourself or use a custom third-party library.
Other languages (say, JavaScript) have built-in support for some but not all types of data structures. There's no native JavaScript support for linked lists or binary search trees, for example. And I'm not aware of any mainstream programming language that has a built-in library for tries, though please let me know if that's not the case!
The above examples indicate places where a lack of support, period, for some data structure would require you to write your own. But there are other reasons why you might want to implement your own custom data structures.
A big one is efficiency. Put yourself in the position of someone who has to implement a dynamic array, hash table, and binary search tree for a particular programming language. You can't possibly know what workflows people are going to subject your data structures to. Are they going to do a ton of inserts and deletes, or are they mostly going to be querying things? For example, if you're writing a binary search tree type where insertions and deletions are common, you probably would want to look at something like a red/black tree, but if insertions and deletions are rare then an AVL tree would work a lot better. But you can't know this up front, because you have to write one implementation that stands the test of time and works pretty well for all applications. That might counsel you to pick a "reasonable" choice that works well in many applications, but isn't aggressively performance-tuned for your specific application. Coding up a custom data structure, therefore, might let you take advantage of the particular structure of the problem you're solving.
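As a toy illustration of exploiting problem structure: if your workload happens to know the exact number of elements up front, a stripped-down list can skip the capacity checks and resizing that a general-purpose dynamic array must pay for on every insertion (FixedList is an invented name, and this is a sketch, not a drop-in ArrayList replacement):

```java
// A deliberately minimal list for the case where the caller knows the
// exact capacity in advance: no growth checks, no resizing, ever.
public class FixedList<T> {
    private final Object[] items;
    private int size;

    public FixedList(int capacity) {
        items = new Object[capacity];
    }

    public void add(T item) {
        items[size++] = item; // no capacity check: the caller guaranteed the bound
    }

    @SuppressWarnings("unchecked")
    public T get(int index) {
        return (T) items[index];
    }

    public int size() { return size; }
}
```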
In some cases, the language specification makes it impossible or difficult to use fast implementations of data structures in the standard library. For example, C++ requires its associative containers to allow deletions and insertions of elements without breaking any iterators into them. This makes it significantly more challenging and inefficient to implement those containers with types like B-trees, which might actually perform a bit better than regular binary search trees due to the effects of caches. Similarly, the implementation of the unordered containers has an interface that assumes chained hashing, which isn't necessarily how you'd want to implement a hash table. That's why, for example, there are Google's alternatives to the standard containers, which are optimized to use custom data structures that don't easily fit into the language framework.
Another reason why libraries might not provide the fastest containers would be challenges in providing a simple interface. For example, cuckoo hashing is a somewhat recent hashing scheme that has excellent performance in practice and guarantees worst-case efficient lookups. But to make cuckoo hashing work, you need the ability to select multiple hash functions for a given data type. Most programming languages have a concept that each data type has "a" hash function (std::hash<T>, Object.hashCode, __hash__, etc.), which isn't compatible with this idea. The languages could in principle require users to write families of hash functions with the idea that there would be many different hashes to pick from per object, but that complicates the logistics of writing your own custom types. Leaving it up to the programmer to write families of hash functions for types that need it then lets the language stay simple.
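To make the interface problem concrete, a cuckoo table needs a family of independent hash functions per key type rather than a single hashCode. Here is a hedged sketch of what such an interface might look like in Java; HashFamily and IntHashFamily are invented names, not part of any standard library:

```java
// Hypothetical interface a cuckoo hash table would need: several
// independent hash functions per key type, not just one hashCode().
interface HashFamily<K> {
    int numFunctions();                  // how many hash functions are available
    int hash(int whichFunction, K key);  // evaluate the chosen function
}

// Example family for Integer keys: mix the key with a different seed
// per function so the functions behave independently.
class IntHashFamily implements HashFamily<Integer> {
    private static final int[] SEEDS = {0x9E3779B9, 0x85EBCA6B, 0xC2B2AE35};

    public int numFunctions() { return SEEDS.length; }

    public int hash(int whichFunction, Integer key) {
        int h = key * SEEDS[whichFunction];
        return h ^ (h >>> 16);
    }
}
```

Requiring every user-defined type to ship such a family is exactly the logistical complication described above, which is why most languages settle for a single hash function per type.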
And finally, there's just plain innovation in the space. New data structures get invented all the time, and languages are often slow to grow and change. There's been a bunch of research into new faster binary search trees recently (check out WAVL trees as an example) or new hashing strategies (cuckoo hashing and the "Swiss Table" that Google developed), and language designers and implementers aren't always able to keep pace with them.
So, overall, the answer is a mix of "because you can't assume your favorite data structure will be there" and "because you might be able to get better performance rolling your own implementations."
There's one last reason I can think of, and that's "to learn how the language and the data structure work." Sometimes it's worthwhile building out custom data types just to sharpen your skills, and you'll often find some really clever techniques in data structures when you do!
All of this being said, I wouldn't recommend defaulting to coding your own version of a data structure every time you need one. Library versions are usually a pretty safe bet unless you're looking for extra performance or you're missing some features that you need. But hopefully this gives you a better sense as to why you may want to consider setting aside the default, well-tested tools and building out your own.
Hope this helps!

Modern data structures

I just realized all the data structures I regularly use are really old and really simple. Linked lists, hash tables, trees, and even the more complex variants such as VLists or RBTrees are all pretty old inventions.
Most of them were conceived for a serial, single CPU world and require adapting to work in parallel environments.
What kind of newer, better data structures do we have? Why are they not widely used?
I understand using a plain old linked list if you have to implement it and prefer the simplicity, but having huge STLs and piles of third party libraries like Guava or Boost, why am I still placing locks around hashes?
Don't we have potentially standard, hard-proven modern data structures that can actually replace the trusty old-timers?
There is nothing wrong with the old ones. A good way to keep flexibility is to separate concerns. Normal (old-style) data structures are concerned with how data is stored. Locking is a completely different concern, which should not be part of the data structure.
Locking is a potentially expensive operation, so if you can, you should lock multiple structures at once to optimize your code. That is, lock critical sections, not data structures. If you add locking directly to your data structures, you lose the possibility to optimize this way. It also introduces implicit synchronisation points that you may not want and cannot control.
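A small Java sketch of the difference: with one external lock you can update two plain, unsynchronized structures atomically as a unit, something internal per-structure locking could never give you (the AccountIndex class is invented for this example):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// One lock around a critical section that touches two plain
// (unsynchronized) maps. If each map carried its own internal lock,
// this pair of updates could never be made atomic as a unit.
public class AccountIndex {
    private final ReentrantLock lock = new ReentrantLock();
    private final Map<String, Long> idByName = new HashMap<>();
    private final Map<Long, String> nameById = new HashMap<>();

    public void register(String name, long id) {
        lock.lock();
        try {
            idByName.put(name, id);   // both updates happen under
            nameById.put(id, name);   // the same lock: one atomic step
        } finally {
            lock.unlock();
        }
    }
}
```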
This does not answer a different aspect of your question: why do we need locking at all? The answer is that sometimes there is just no way around it. You either need a lock somewhere, rely completely on atomic operations, or disallow mutation altogether.
Method one is not ideal, as I have pointed out above, because you lose potential for optimization and you get implicit synchronisation points.
Using only atomic operations in your data structures (i.e. non-locking structures) is still an open research question, and it might not always be possible. I know of some non-locking structures, e.g. queues and lists, but I have never heard of a non-locking tree. Non-locking structures also tend to become much more complicated and slower, so we still need better structures for thread-local data, and can only add these to our data structure zoo.
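As a concrete example of a structure built only from atomic operations, here is a minimal Treiber stack in Java; the only synchronization is a compare-and-set on the head reference, so no thread ever blocks on a lock:

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber stack: the classic lock-free stack. Threads retry their
// compare-and-set until it succeeds; no thread ever holds a lock.
public class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> oldHead;
        Node<T> newHead;
        do {
            oldHead = head.get();
            newHead = new Node<>(value, oldHead);
        } while (!head.compareAndSet(oldHead, newHead)); // retry on contention
    }

    public T pop() {
        Node<T> oldHead;
        do {
            oldHead = head.get();
            if (oldHead == null) return null; // empty stack
        } while (!head.compareAndSet(oldHead, oldHead.next));
        return oldHead.value;
    }
}
```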
Not having mutable data structures at all is, in my opinion, the best of these options. Mutability is often more of a hassle than it is worth. However, this is a concept from functional programming and only makes sense in such an environment. Functional programming is regarded as an esoteric concept by most programmers, and most languages actually used in production work mainly use non-functional concepts (this does not mean functional programming actually is more complicated or more error prone; it just reflects the current state of training among developers). In my opinion, functional programming will become more widespread once people start to notice that it solves their threading problems automatically. Several other languages are already borrowing from functional languages, so this is probably where we will find the next evolution of data structures.
If you want lock-free data structures, study persistent data structures. These are mostly popular in the functional programming world, but they are applicable in other domains as well. Most persistent data structures are variants of plain lists, trees, etc., but newer ones such as hash tries have surfaced in recent years.
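A minimal persistent list in Java shows the idea: "adding" an element allocates one new node and shares the entire old list, so every earlier version stays valid and can be read from any thread without locks (PersistentList is a name invented for this sketch):

```java
// Minimal persistent (immutable) singly linked list: prepend() never
// mutates anything, so old versions stay valid and can be shared
// across threads without any locking.
public final class PersistentList<T> {
    private static final PersistentList<Object> EMPTY = new PersistentList<>(null, null);

    private final T head;                  // undefined for the empty list
    private final PersistentList<T> tail;  // null only for the empty list

    private PersistentList(T head, PersistentList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    @SuppressWarnings("unchecked")
    public static <T> PersistentList<T> empty() {
        return (PersistentList<T>) EMPTY;
    }

    public PersistentList<T> prepend(T value) {
        return new PersistentList<>(value, this); // shares this entire list
    }

    public boolean isEmpty() { return tail == null; }
    public T head() { return head; }
    public PersistentList<T> tail() { return tail; }
}
```

For example, after b = a.prepend(2), the list a remains fully usable; the two versions share all of a's nodes.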

Most suitable language for computationally and memory-expensive algorithms

Let's say you have to implement a tool to efficiently solve an NP-hard problem, with a possibly unavoidable explosion of memory usage (the output size is in some cases exponential in the input size), and you are particularly concerned about the runtime performance of this tool. The source code also has to be readable and understandable once the underlying theory is known, and this requirement is as important as the efficiency of the tool itself.
I personally think that three languages could satisfy these requirements: C++, Scala, and Java.
They all provide the right abstractions over data types, making it possible to compare different structures or to apply the same algorithms (which is also important) to different data types.
C++ has the advantage of being statically compiled and optimized; with function inlining (if the data structures and algorithms are designed carefully) and other optimisation techniques, it's possible to achieve performance close to that of pure C while maintaining fairly good readability.
If you also put a lot of care in data representation you can optimise the cache performance, which can gain orders of magnitude in speed when the cache miss rate is low.
Java, instead, is JIT-compiled, which allows optimisations to be applied at runtime; for this category of algorithms, which may behave differently between runs, that could be a plus. I fear that this approach could suffer from the garbage collector; however, with this kind of algorithm it's common to allocate memory continuously, Java heap allocation is notoriously faster than C/C++'s, and if you implement your own memory manager inside the language you could even achieve good efficiency.
On the other hand, this approach cannot inline method invocations (which induces a huge performance penalty) and doesn't give you control over cache performance. Among the pros, there's a better and cleaner syntax than C++'s.
My concerns about Scala are more or less the same as for Java, plus the fact that I can't control how the language is optimised unless I have deep knowledge of the compiler and the standard library. But then, I get a very clean syntax :)
What's your take on the subject? Have you had to deal with this already? Would you implement an algorithm with such properties and requirements in any of these languages or would you suggest something else? How would you compare them?
Usually I’d say “C++” in a heartbeat. The secret being that C++ simply produces less (memory) garbage that needs managing.
On the other hand, your observation that
however in the case of this algorithm it's common to continuously allocate memory
is a hint that Java / Scala may actually be more suited. But then you could use a small object heap in C++ as well. Boost has one that uses the standard allocator interface, if memory serves.
Another advantage of C++ is obviously the use of abstraction without penalty through templates – i.e. that you can easily create generic algorithmic components that can interact without incurring a runtime overhead due to abstraction. In fact, you noted that
it's possible to achieve a performance close to that of pure C while maintaining a fairly good readability
– this is looking at things the wrong way: Templates allow C++ to achieve performance superior to that of C while still maintaining high abstraction.
D might be worth a look, seeing as how it tries to be a better C++.
From a superficial glance, it has better source code readability than C++ does, so that's one of your points covered.
It also has memory management, which makes playing with algorithms a bit easier.
And it has templates.
Here is a Stack Overflow discussion comparing the performance of C++ and D.
The languages you noticed were my first guesses as well.
Each language has a different take on how to handle specific issues like compilation, memory management and source code, but in theory, any of them should be fitting to your problem.
It is impossible to tell which is best, and there is likely no major difference if you are familiar enough with all of them to work around their respective quirks.
And obviously, if you actually find the need to optimize (I'm not sure if that's a given), that's possible in each language. Lower level languages obviously offer more options, but are also (far) more complex to actually improve.
A single note about C++ vs Java: This is really a holy war, and if you've followed the recent development you'll probably have your own opinion. I, for one, think Java offers enough good aspects to make up for its flaws, usually.
And a final note on C++ vs. C: according to my knowledge, the difference usually amounts to a sufficiently low percentage to be ignored. If it doesn't make a difference for the source code, it's fine to go with C; if C++ could make for easier-to-read source code, go with C++. In any case, the choice is kind of negligible.
In the end, remember that money spent on a few hours of programming/optimizing this could as well go into slightly superior hardware to make up for missed tiny details.
It all boils down to: Any of your options is fine as long as you do it right (domain knowledge).
I would use a language which makes it very easy to work on the algorithm. Getting the algorithm right can easily outweigh any advantage from fine-tuning the wrong algorithm. Don't be scared to play around in a language normally thought of as slow in execution speed if that language makes it easier to express algorithmic ideas. It is usually much easier to transcribe the right algorithm into another language than it is to eke out the last dregs of speed from the wrong algorithm in the fastest-executing language.
So do it in a language you are comfortable with and which is expressive. You might surprise yourself and find that what is produced is fast enough!

Reimplementing data structures in the real world

The topic of today's algorithms class was reimplementing data structures, specifically ArrayList in Java. The fact that you can customize a structure in various ways definitely got me interested, particularly variations of the add() and iterator.remove() methods.
But is reimplementing and customizing a data structure something that is of more interest to the academics vs the real-world programmers? Has anyone reimplemented their own version of a data structure in a commercial application/program, and why did you pick that route over your particular language's implementation?
Knowing how data structures are implemented, and how they can be implemented, is definitely of interest to everyone, not just academics. While you will most likely not reimplement a data structure if the language already provides an implementation with suitable functions and performance characteristics, it is very possible that you will have to create your own data structure by composing other data structures... or you may need to implement a data structure with slightly different behavior than a well-known one. In that case, you certainly will need to know how the original data structure is implemented. Alternatively, you may end up needing a data structure that does not exist, or one that provides similar behavior to an existing data structure but is used in a way that requires it to be optimized for a different set of operations. Again, such a situation would require you to know how to implement (and alter) the data structure, so yes, it is of interest.
Edit
I am not advocating that you reimplement existing data structures! Don't do that. What I'm saying is that the knowledge has practical application. For example, you may need to create a bidirectional map data structure (which you can implement by composing two unidirectional maps), or you may need a stack that keeps track of a variety of statistics (such as min, max, and mean) by using an existing stack data structure with an element type that contains the value as well as these statistics. These are some trivial examples of things that you might need to implement in the real world.
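For instance, the statistics-tracking stack can be built exactly as described, on top of the standard ArrayDeque, by storing with each value the running statistic up to that point; MinStack is a made-up name, and this sketch tracks only the minimum for brevity:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A stack that reports its minimum in O(1) by storing, with each value,
// the minimum of everything at or below it. Built on the standard
// ArrayDeque rather than written from scratch.
public class MinStack {
    private static final class Entry {
        final int value;
        final int minSoFar;
        Entry(int value, int minSoFar) { this.value = value; this.minSoFar = minSoFar; }
    }

    private final Deque<Entry> stack = new ArrayDeque<>();

    public void push(int value) {
        int min = stack.isEmpty() ? value : Math.min(value, stack.peek().minSoFar);
        stack.push(new Entry(value, min));
    }

    public int pop() {
        return stack.pop().value; // assumes a non-empty stack
    }

    public int min() {
        return stack.peek().minSoFar; // O(1); assumes a non-empty stack
    }
}
```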
I have re-implemented some of a language's built-in data structures, functions, and classes on a number of occasions. As an embedded developer, the main reason I would do that is for speed or efficiency. The standard libraries and types were designed to be useful in a variety of situations, but there are many instances where I can create a more specialized version that is custom-tailored to take advantage of the features and limitations of my current platform. If the language doesn't provide a way to open up and modify existing classes (like you can in Ruby, for instance), then re-implementing the class/function/structure can be the only way to go.
For example, one system I worked on used a MIPS CPU that was speedy when working with 32-bit numbers but slower when working with smaller ones. I rewrote several data structures and functions to use 32-bit integers instead of 16-bit integers, and also specified that the fields be aligned to 32-bit boundaries. The result was a noticeable speed boost in a section of code that had been bottlenecking other parts of the software.
That being said, it was not a trivial process. I ended up having to modify every function that used that structure and I ended up having to re-write several standard library functions as well. In this particular instance, the benefits outweighed the work. In the general case, however, it's usually not worth the trouble. There's a big potential for hard-to-debug problems, and it's almost always more work than it looks like. Unless you have specific requirements or restrictions that the existing structures/classes don't meet, I would recommend against re-implementing them.
As Michael mentions, it is indeed useful to know how to re-implement structures even if you never do so. You may find a problem in the future that can be solved by applying the principles and techniques used in existing data structures.

Justification for using non-portable code

How does one justify design tradeoffs between optimised code, clarity of implementation, efficiency, and portability?
A relevant example for the purpose of this question could be large file handling, where a "large file" is "quite a few GB" for a problem that would be simplified using random-access methods.
Approaches for reading and modifying this file could be:
Use streams anyway, and seek to the desired place. This is portable and will work on practically all OSes, but it is potentially slow, and the code is not clear.
Map the relevant portion of the file as a large block, e.g. mmap a 50 MB chunk of the file for each chunk processed. This works on many OSes, depending on the subtleties of implementing mmap on that system (see the sketch after this list).
Just mmap the entire file. This requires a 64-bit OS and is the most efficient and clearest way to implement this, but it does not work on 32-bit OSes.
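Since the rest of this page leans on Java examples, here is a sketch of the chunked-mapping option in that language; FileChannel.map is the JDK analogue of mmap, and a single MappedByteBuffer is capped at 2 GB, which forces chunking anyway. The 50 MB chunk size and the ChunkedReader name are arbitrary choices for this sketch:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the chunked-mapping approach: map a large file one
// fixed-size window at a time instead of all at once.
public class ChunkedReader {
    private static final long CHUNK = 50L * 1024 * 1024; // 50 MB, arbitrary

    public static void process(Path file) throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            long size = channel.size();
            for (long pos = 0; pos < size; pos += CHUNK) {
                long len = Math.min(CHUNK, size - pos);
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, pos, len);
                while (buf.hasRemaining()) {
                    byte b = buf.get(); // process each byte of the chunk here
                }
                // The mapping is released when the buffer is garbage collected.
            }
        }
    }
}
```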
Not sure what you're asking, but part of the design process is to analyze requirements for portability and performance (amongst other factors).
If you know you'll never need to port the code, and you need absolutely the best performance, then you adjust your implementation accordingly. There's no point being portable just for its own sake.
Note also that if you want both performance and portability, there's nothing stopping you from providing an implementation for each platform. Of course this will increase your cost, so really, it's up to you to prioritize your needs.
Without constraints, this question cannot be answered rationally.
You're asking "what is the best color" without telling us whether you're painting a house or a car or a picture.
Constraints would include at least
Language of choice
Target platforms (a multi-CPU industrial-grade server, or an iPhone?)
Optimizing for speed vs. memory
Cost (who's funding this and is there a delivery constraint?)
No piece of software could have "ultimate" portability.
An example of this sort of problem being handled using a variety of methods but with a tight constraint both on the specific input/output required and the measurement of "best" would be the WideFinder project.
Basically, you need to think before coding. Every project is unique, and an analysis of its needs can help decide what is essential for it. What makes the best solution for a given project depends on a few things...
First of all, will this project need to be, or eventually become, multiplatform? Depending on the answer, choosing the right programming language should be easier. Then again, you could also use more than one language in your project, and this is completely normal. Portability does not necessarily mean less performance; all it implies is harder work to achieve your goals, because you will need quality code.
Also, every programming language has its own philosophy: learn what they are. One thing is for sure, certain problems come back over and over. This is why knowing the different design patterns can sometimes make a difference; some languages also have their own idioms, which can be very relevant when choosing a language.
Another thing that needs some thought is the different approaches you can take for your project. Multithreading, sockets, client/server systems, and many other technologies are all there for you to use; choosing the right technology can help make a project better.
Knowing the needs, and the different solutions available today, is what will help you decide when the time comes to choose between the different tradeoffs.
It really depends on the drivers for the project. If you are doing in-house enterprise development, then do the simplest thing that could work on your target hardware, and modify for performance requirements as needed.
If you know you need to support different hardware platforms on day 1, then you'll clearly need to choose a portable implementation, or use multiple approaches.
Portability for portability's sake has been a marketing spiel for Java since its inception, and is a fact of life for C by convention; I believe most people who abide by it "grew up" with Java or C and will say as much.
However, true, absolute portability is achievable only for trivial applications, or at most those of medium complexity; anything highly complex will need specialized tweaks.
