Tree performance and use comparison - performance

While studying for a test I was wondering if I could collect all the information into one place. On the test we will need to be able to describe the performance of specific trees (AVL, Red-Black, AA, Splay, B-trees) and also when it would be most practical to use them in a real-world scenario. I know all of the running times, most being O(log n), but I am having a difficult time coming up with real-world examples of when to use each. Any thoughts?
An example would be: a B-tree, because if you only want to access chunks of memory instead of individual items on each operation, it suits slower sources such as an HDD or CD and reduces the overall time needed to pull the data from the source.

Related

Estimating strong scaling efficiency when single-node run is not possible

I have implemented an OpenMP/MPI hybrid parallel algorithm, and would like to measure its strong-scaling parallel efficiency. For this, I would have to calculate speed-up S=t(1)/t(N), and then the efficiency E=S/N.
Background: Having done some analysis, I was able to show that the peak efficiency of the algorithm could be expected at a problem size, at which the single node of my benchmark cluster cannot house the data required.
Possible solutions: I can either:
calculate speed-up using the smallest node count at which the data can be housed, e.g. at 4 nodes => S=t(4)/t(N), or,
calculate the theoretical single-node time-to-solution t(1) by extrapolation, and then use that value as the reference.
Questions:
Which approach is better and why?
If I use the first approach, can I, strictly speaking, even refer to it as strong-scaling parallel efficiency, seeing as it doesn't conform to the definition provided above?
Bonus question: When we measure t(1), should we run the algorithm with simulated communication calls (i.e. by calling mpirun -n 1 ./my_benchmark_program), or should we rather call a version of the program which performs no communication at all (i.e. ./my_openmp_only_benchmark_program)?
I hope this post is clear, please ask for clarification if it isn't. Any help will be greatly appreciated. Thanks in advance.
There are various problems with the classical definition of speedup if you are using MPI. The single processor case involves no communication, while the two-processor one does, so there is overhead in the t(2) case and it will always be less than twice as fast. This is even worse if you have a multicore/multinode setup, where up to 16 (or so) processes will run on a single node, so t(17) will suddenly be much slower because it starts involving a second node.
This means you cannot simply apply the textbook formulas. You need to explain how you are doing your scalability study. For instance: one process per node until the number of processes equals the number of nodes, then start putting multiple processes on each node, et cetera.
The fact that the single-process case does not fit in memory is then a minor hiccup: you start with a base case of multiple processes, and document that fact, plus your reasoning for the base case that you actually used.
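To make the arithmetic concrete, here is a minimal sketch (my own, not from the answer above) of speedup and efficiency computed relative to a multi-node base case; the 4-node base and the timing numbers are purely hypothetical.

```cpp
#include <cstdio>

// Strong-scaling metrics relative to a multi-node base case p0 (here 4 nodes,
// because a single node cannot hold the problem).
// S(n) = t(p0) / t(n);  E(n) = S(n) / (n / p0), so E == 1.0 is ideal scaling.
double rel_speedup(double t_p0, double t_n) { return t_p0 / t_n; }
double rel_efficiency(double t_p0, int p0, double t_n, int n) {
    return rel_speedup(t_p0, t_n) * p0 / n;
}

int main() {
    const int    p0  = 4;      // smallest node count that fits the data
    const double t4  = 120.0;  // hypothetical wall time on 4 nodes, in seconds
    const double t16 = 35.0;   // hypothetical wall time on 16 nodes
    std::printf("S(16) relative to 4 nodes = %.2f\n", rel_speedup(t4, t16));
    std::printf("E(16) relative to 4 nodes = %.2f\n",
                rel_efficiency(t4, p0, t16, 16));
}
```

With these made-up numbers, S = 120/35 ≈ 3.43 relative to the 4-node run, and E ≈ 0.86 for the scale-up from 4 to 16 nodes.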

T-Tree or B-Tree

The T-tree algorithm is described in this paper.
The T*-tree is an improvement on the T-tree that better supports query operations, including range queries, while keeping all the other good features of the T-tree.
It is described in the paper "T*-tree: A Main Memory Database Index Structure for Real-Time Applications".
According to this research paper, the T-tree is faster than the B-tree/B+tree when the dataset fits in memory.
I implemented the T-tree/T*-tree as described in these papers and compared their performance with a B-tree/B+tree, but the B-tree/B+tree performed better in all test cases (insertion, deletion, searching).
I have read that the T-tree is an efficient index structure for in-memory databases and that it is used by Oracle TimesTen, but my results did not show that.
If anyone knows the reason or has any comment on this, I would be glad to hear it.
T-Trees are not a fundamental data structure in the same sense that AVL trees or B-trees are. They are just a hacked version of balanced binary trees and as such there may or may not be niche applications where they offer decent performance.
In this day and age they are bound to suffer horribly because of their poor locality, both in the sense of expected block/page transfer counts and in the sense of cache locality. The latter is evident since in all node accesses of a search except for the very last one, only the boundary values will be checked against the search key - all the rest is paged in or cached for nought.
Compare this to the excellent access locality of B-trees in general and B+trees in particular (not to mention cache-oblivious and cache-conscious versions that were designed explicitly with memory performance characteristics in mind).
Similar problems exist with the rebalancing. In the B-tree world many variations - starting with the B+tree and the B-link tree - have been developed and perfected in order to achieve the desired amortised performance characteristics, including aspects like concurrency (locking/latching) or the absence thereof. So most of the time you can simply go out and find a B-tree variation that fits your performance profile - or use the simple classic B+tree and be certain of decent results.
T-trees are more complicated than comparable B-trees and it seems that they have nothing to offer in the way of performance in general, given that the times of commodity hardware with a single-level memory 'hierarchy' have been gone for decades. Not only is the hard disk the new memory, the converse is also true and main memory is the new hard disk now. I.e. even without NUMA the cost of bringing data from main memory into the cache hierarchy is so high that it pays to minimise page transfers - which is precisely what B-trees and their variations do and the T-tree doesn't. Closer to the processor core it's the number of cache line accesses/transfers that matters but the picture remains the same.
In fact, if you take the idea of binary search - which is provably optimal - and think about ways of arranging the search keys in a manner that plays well with memory hierarchies (caches) then you invariably end up with something that looks uncannily like a B-tree...
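As a concrete illustration of that last point (my own sketch, not something from the answer), the Eytzinger (BFS) layout stores the keys of a binary search contiguously in level order, so the keys visited early in every search sit close together; packing several keys per node instead of one takes the same idea the rest of the way to an implicit B-tree.

```cpp
#include <cstddef>
#include <vector>

// Binary search over keys stored in Eytzinger (BFS) order: root at index 1,
// children of k at 2k and 2k+1.  The hot, early-visited keys are packed
// together, which plays far better with caches than a pointer-based
// balanced binary tree.
struct EytzingerIndex {
    std::vector<int> eyt;  // eyt[0] is unused

    explicit EytzingerIndex(const std::vector<int>& sorted)
        : eyt(sorted.size() + 1) {
        std::size_t i = 0;
        fill(sorted, i, 1);
    }

    // Smallest key >= x, or -1 (sentinel) if no such key exists.
    int lower_bound(int x) const {
        std::size_t k = 1;
        int best = -1;
        while (k < eyt.size()) {
            if (eyt[k] < x) {
                k = 2 * k + 1;        // go right
            } else {
                best = eyt[k];        // candidate answer, keep going left
                k = 2 * k;
            }
        }
        return best;
    }

private:
    // An in-order walk of the implicit tree lays the sorted keys out in
    // BFS order.
    void fill(const std::vector<int>& sorted, std::size_t& i, std::size_t k) {
        if (k < eyt.size()) {
            fill(sorted, i, 2 * k);
            eyt[k] = sorted[i++];
            fill(sorted, i, 2 * k + 1);
        }
    }
};
```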
If you program for performance then you'll find that winners are almost always located somewhere in the triangle between sorted arrays, B-trees and hashing. Even balanced binary trees are only competitive if their comparatively poor performance takes the back seat in the face of other considerations and key counts are fairly small, i.e. not more than a couple million.

How to Compare the Running Times of Two Data Structures' Operations

I want to compare the performance of two search trees of integers (an AVL tree vs a red-black tree). How should I design/engineer the tests to accomplish this? For instance, let's consider the insert operation: what steps should I follow in order to state that, on average, this operation is faster in the red-black case? Should I time inserting just one element (assuming the trees are pre-populated) or should I time a sequence of insertions? Also, what considerations should I take into account to measure CPU time accurately?
Thanks in advance.
This is a really broad question, and as such, I don't think you should be hoping for anybody to get on here and give you the one final correct answer regarding how to measure performance. That being said...
First, you should develop a suite of tests. Two popular techniques exist for doing this: monitor a real-world sequence of operations done by an application (so, find some open source application that uses either an AVL or RB tree, and add some code to print out the sequence of operations it performs) or create such a stream of operations analytically (or synthetically) to target any number of cases (the average usage, particular kinds of abnormal or otherwise unusual usage, random usage, etc.). The more of these kinds of traces you get to test, the better.
Once you have your set of traces to test, you need to develop a driver to do the evaluation. The driver should be simple, the same for both AVL and RB trees (I think that in this case, this shouldn't be a problem; both present the same interface to users, differing only in terms of internal implementation details). The driver should be able to reproduce the usage recorded in your trace sets efficiently and cause the traced operations to be carried out on your data structures. One thing I like to do is to include a third "dummy" candidate that does nothing; this way, I can see how much of an influence the processing of traces is exerting on overall performance.
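A rough sketch of what such a driver might look like is below; the trace format, the Op/TraceEntry names and the StdMapTree stand-in are all made up for illustration, with std::map merely playing the role of one of the candidate trees.

```cpp
#include <cstdio>
#include <map>
#include <vector>

// One recorded operation from a trace (a made-up format: operation + key).
enum class Op { Insert, Erase, Find };
struct TraceEntry { Op op; int key; };

// Any candidate only has to expose the same three calls; the "dummy"
// candidate does nothing, which shows how much time the trace processing
// itself costs.
template <typename Tree>
void replay(Tree& tree, const std::vector<TraceEntry>& trace) {
    for (const auto& e : trace) {
        switch (e.op) {
            case Op::Insert: tree.insert(e.key); break;
            case Op::Erase:  tree.erase(e.key);  break;
            case Op::Find:   tree.find(e.key);   break;
        }
    }
}

struct DummyTree {                 // baseline: measures driver overhead only
    void insert(int) {}
    void erase(int) {}
    void find(int) {}
};

struct StdMapTree {                // stand-in for your AVL or red-black tree
    std::map<int, int> m;
    void insert(int k) { m.emplace(k, 0); }
    void erase(int k)  { m.erase(k); }
    void find(int k)   { volatile bool hit = m.count(k) > 0; (void)hit; }
};

int main() {
    std::vector<TraceEntry> trace = {
        {Op::Insert, 5}, {Op::Insert, 3}, {Op::Find, 5}, {Op::Erase, 3}};
    DummyTree dummy;  replay(dummy, trace);   // trace-processing cost only
    StdMapTree rb;    replay(rb, trace);      // the real work
    std::printf("replayed %zu operations\n", trace.size());
}
```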
Each trace should be executed many, many times. You can formalize this somewhat (to reduce statistical uncertainty to within known bounds), but a rule of thumb is that the order of your error will shrink according to 1/sqrt(n), where n is the number of trials. In other words, by running each trace 10,000 times instead of 100 times, you will get errors in the average that are 10x smaller. Record all values; things to look for are the mean, median, mode(s), etc. For each run, try to keep the system conditions the same; no other programs running, etc. To help eliminate spurious results due to external factors changing, you can cull the bottom and top 10% of outliers...
Now, simply compare the data sets. Perhaps what you care most about is the average time the trace takes? Perhaps the worst? Maybe what you really care about is consistency; is the standard deviation big or small? You should have enough data to compare the results for a given trace executed on both test structures; and for different traces, it might make more sense to look at different figures (for instance, if you created a synthetic benchmark that should be the worst case for RB trees, you might ask how badly RB and AVL trees did, whereas you might not care about this for another trace representing the best case for AVL trees, etc.)
Timing on the CPU can be a challenge in its own right. You'll need to ensure that the resolution of your timer is sufficient for measuring your events. The clock() and gettimeofday() functions - and others - are popular choices for recording the time of events. If your traces finish too quickly, you can record the aggregate time over several trials (so that if your timer supports microsecond resolution and your traces finish in 10 microseconds, you can measure 100 executions of the trace instead of 1 and get an aggregate time around a millisecond, which your timer can resolve far more accurately).
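Here is one possible shape for the aggregate-timing idea, as a hedged sketch using std::chrono rather than clock()/gettimeofday(); mean_time_ms is a made-up helper name.

```cpp
#include <chrono>

// Aggregate timing: when a single run is too short for the timer's
// resolution, time a batch of repetitions and divide.  steady_clock is
// monotonic, so clock adjustments during the run cannot skew the result.
template <typename RunOnce>
double mean_time_ms(RunOnce run_once, int repetitions) {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    for (int r = 0; r < repetitions; ++r)
        run_once();                        // e.g. replay one trace on a fresh tree
    const auto stop = clock::now();
    const std::chrono::duration<double, std::milli> elapsed = stop - start;
    return elapsed.count() / repetitions;  // mean time per run, in milliseconds
}
```

Combined with the replay sketch above, you would call something like mean_time_ms([&] { StdMapTree t; replay(t, trace); }, 1000).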
Another potential pitfall is providing the same execution environment each time. In between trace runs, at the very least, you might consider techniques for ensuring that you start with a clean cache. Either that, or don't time the first execution, or understand that this result might be culled when you eliminate outliers. It might be safer to just reset the cache (by manipulating every element of some large array, for instance in between executions of traces), since code A might benefit from having some of the values in cache while code B might suffer.
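A minimal sketch of the cache-reset trick described above (the 64 MiB buffer size is a guess; pick something comfortably larger than your last-level cache):

```cpp
#include <cstddef>
#include <vector>

// Touch a buffer much larger than the last-level cache between timed runs so
// that neither candidate starts with warm data.
long scrub_cache() {
    static std::vector<unsigned char> junk(64 * 1024 * 1024, 1);
    long sum = 0;
    for (std::size_t i = 0; i < junk.size(); i += 64)  // one touch per cache line
        sum += ++junk[i];
    return sum;  // use the return value so the loop isn't optimised away
}
```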
These are a few of the things you might consider when doing your own performance evaluation. Other tools - like PAPI and other profilers, for instance - can measure certain events - cache hits/misses, instructions, etc. - and this information can allow for much richer comparisons than simple comparisons of wall-clock run time.
Measuring CPU time accurately can be very tricky depending on your particular programming language, implementation, etc. For example, with Java's JIT compilation, the results can be extremely different depending on how much you've run the code before now!
Can you give more detail about your situation?

Why is program running time not a measure?

I have learned that a program is measured by its complexity - I mean by Big O notation.
Why don't we measure it by its absolute running time?
thanks :)
You use the complexity of an algorithm instead of absolute running times to reason about algorithms, because the absolute running time of a program does not only depend on the algorithm used and the size of the input. It also depends on the machine it's running on, various implementation details, and what other programs are currently using system resources. Even if you run the same application twice with the same input on the same machine, you won't get exactly the same time.
Consequently when given a program you can't just make a statement like "this program will take 20*n seconds when run with an input of size n" because the program's running time depends on a lot more factors than the input size. You can however make a statement like "this program's running time is in O(n)", so that's a lot more useful.
Absolute running time is not an indicator of how the algorithm grows with different input sets. It's possible for an O(n log n) algorithm to be far slower than an O(n^2) algorithm for all practical datasets.
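As a toy illustration of that claim (the constant factors 1000 and 1 are invented purely for the example), compare two cost models:

```cpp
#include <cmath>
#include <cstdio>

// Toy cost models: an O(n log n) algorithm with a big constant factor versus
// an O(n^2) algorithm with a small one.
int main() {
    for (long n : {100L, 1000L, 10000L, 100000L}) {
        const double t_nlogn = 1000.0 * n * std::log2(static_cast<double>(n));
        const double t_n2    = 1.0 * n * n;
        std::printf("n=%6ld   1000*n*log2(n)=%14.0f   n^2=%14.0f\n",
                    n, t_nlogn, t_n2);
    }
}
```

With these constants the quadratic model stays cheaper up to somewhere between n = 10^4 and n = 10^5, which can easily cover every dataset a program will ever see.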
Running time does not measure complexity, it only measures performance, or the time required to perform the task. An MP3 player will run for the length of time required to play the song. The elapsed CPU time may be more useful in this case.
One measure of complexity is how it scales to larger inputs. This is useful for planning the required hardware. All things being equal, something that scales relatively linearly is preferable to something that scales poorly. Things are rarely equal.
The other measure of complexity is how simple the code is. The code complexity is usually higher for programs with relatively linear performance complexity. Complex code can be costly to maintain, and changes are more likely to introduce errors.
All three (or four) measures are useful, but none of them is very useful by itself. Together they can be quite useful.
The question could use a little more context.
In programming a real program, we are likely to measure the program's running time. There are multiple potential issues with this though
1. What hardware is the program running on? Comparing two programs running on different hardware really doesn't give a meaningful comparison.
2. What other software is running? If anything else is running, it's going to steal CPU cycles (or whatever other resource your program depends on).
3. What is the input? As already said, for a small set a solution might look very fast, but scalability goes out the door. Also, some inputs are easier than others. If you hand me a dictionary and ask me to sort it, I'll hand it right back and say "done" - it's already sorted. Giving me a set of 50 cards (much smaller than a dictionary) in random order will take me a lot longer.
4. What are the starting conditions? If your program runs for the first time, chances are that loading it off the hard disk will take up the largest chunk of time on modern systems. Comparing two implementations with small inputs will likely have their differences masked by this.
Big O notation covers a lot of these issues.
1. Hardware doesn't matter, as everything is expressed in terms of the number of constant-time O(1) operations.
2. Big O talks about the algorithm free of other algorithms around it.
3. Big O talks about how the input will change the running time, not how long one input takes. It tells you the worst the algorithm will perform, not how it performs on an average or easy input.
4. Again, Big O handles algorithms, not programs running in a physical system.

What can be parameters other than time and space while analyzing certain algorithms?

I was interested to know about parameters other than space and time when analysing the effectiveness of an algorithm. For example, we can focus on the effectiveness of the trapdoor function while developing encryption algorithms. What other things can you think of?
First and foremost there's correctness. Make sure your algorithm always works, no matter what the input. Even for input that the algorithm is not designed to handle, you should print an error message, not crash the entire application. If you use greedy algorithms, make sure they truly work in every case, not just a few cases you tried by hand.
Then there's practical efficiency. An O(N^2) algorithm can be a lot faster than an O(N) algorithm in practice. Do actual tests and don't rely on theoretical results too much.
Then there's ease of implementation. You usually don't need the best intro sort implementation to sort an array of 100 integers once, so don't bother.
Look for worst cases in your algorithms and if possible, try to avoid them. If you have a generally fast algorithm but with a very bad worst case, consider detecting that worst case and solving it using another algorithm that is generally slower but better for that single case.
Consider space and time tradeoffs. If you can afford the memory in order to get better speeds, there's probably no reason not to do it, especially if you really need the speed. If you can't afford the memory but can afford to be slower, do that.
If you can, use existing libraries. Don't roll your own multiprecision library if you can use GMP for example. For C++, stuff like boost and even the STL containers and algorithms have been worked on for years by an army of people and are most likely better than you can do alone.
Stability (sorting) - Does the algorithm maintain the relative order of equal elements? (See the sketch after this list.)
Numeric Stability - Is the algorithm prone to error when very large or small real numbers are used?
Correctness - Does the algorithm always give the correct answer? If not, what is the margin of error?
Generality - Does the algorithm work in many situations (e.g. with many different data types)?
Compactness - Is the program for the algorithm concise?
Parallelizability - How well does performance scale when the number of concurrent threads of execution is increased?
Cache Awareness - Is the algorithm designed to maximize use of the computer's cache?
Cache Obliviousness - Is the algorithm tuned for particular cache sizes / cache-line sizes, or does it perform well regardless of the parameters of the cache?
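To make the first item (sorting stability) concrete, here is a small sketch of my own, using std::sort versus std::stable_sort:

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Stability demo: sort (key, tag) pairs by key only.  std::stable_sort
// guarantees that equal keys keep their original relative order;
// std::sort makes no such promise.
int main() {
    const std::vector<std::pair<int, char>> input = {
        {2, 'a'}, {1, 'b'}, {2, 'c'}, {1, 'd'}, {2, 'e'}};
    const auto by_key = [](const std::pair<int, char>& x,
                           const std::pair<int, char>& y) {
        return x.first < y.first;
    };

    auto stable = input;
    std::stable_sort(stable.begin(), stable.end(), by_key);
    // Guaranteed result: (1,b) (1,d) (2,a) (2,c) (2,e)

    auto unstable = input;
    std::sort(unstable.begin(), unstable.end(), by_key);
    // Keys are sorted, but ties may come out in any order.

    for (const auto& p : stable) std::printf("(%d,%c) ", p.first, p.second);
    std::printf("\n");
}
```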
Complexity. Two algorithms being the same in all other respects, the one that's much simpler is going to be a much better candidate for future customization and use.
Ease of parallelization. Depending on your use case, it might not make any difference or, on the other hand, make the algorithm useless because it can't use 10000 cores.
Stability - some algorithms may "blow up" with certain test conditions, e.g. take an inordinately long time to execute, or use an inordinately large amount of memory, or perhaps not even terminate.
For algorithms that perform floating point operations, the accumulation of round-off error is often a consideration.
Power consumption, for embedded algorithms (think smartcards).
One important parameter that is frequently measured in the analysis of algorithms is the number of cache hits and cache misses. While this is a very implementation- and architecture-dependent issue, it is possible to generalise somewhat. One particularly interesting property of an algorithm is being cache-oblivious, which means that the algorithm will use the cache optimally on multiple machines with different cache sizes and structures without modification.
Time and space are the big ones, and they seem so plain and definitive, yet they should often be qualified (1). The fact that the OP uses the word "parameter" rather than, say, "criteria" or "properties" is somewhat indicative of this (as if a big-O value on time and on space were sufficient to frame the underlying algorithm).
Other criteria include:
domain of applicability
complexity
mathematical tractability
definitiveness of outcome
ease of tuning (may be tied to the "complexity" and "tractability" mentioned above)
ability of running the algorithm in a parallel fashion
(1) "qualified": As hinted in other answers, a -technically- O(n^2) algorithm may be found to be faster than say an O(n) algorithm, in 90% of the cases (which, btw, may turn out to be 100% of the practical cases)
Worst case and best case are also interesting, especially when linked to some conditions on the input. If your input data has certain properties, an algorithm that takes advantage of those properties may perform better than another algorithm which performs the same task but does not use them.
For example, many sorting algorithms perform very efficiently when the input is partially ordered in a specific way, which minimizes the number of operations the algorithm has to execute.
(If your input is mostly sorted, an insertion sort will fit nicely, while you would never use that algorithm otherwise.)
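A minimal insertion sort sketch illustrates the point: its cost is proportional to the number of inversions, so a mostly sorted input is handled in near-linear time.

```cpp
#include <cstddef>
#include <vector>

// Plain insertion sort.  Its running time is O(n + I), where I is the number
// of inversions in the input, so a mostly sorted array (few inversions) is
// handled in close to linear time even though the worst case is O(n^2).
void insertion_sort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        const int x = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > x) {   // each shift removes one inversion
            a[j] = a[j - 1];
            --j;
        }
        a[j] = x;
    }
}
```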
If we're talking about algorithms in general, then (in the real world) you might have to think about CPU/filesystem(read/write operations)/bandwidth usage.
True, they are way down the list of things you need to worry about these days, but given a massive enough volume of data and cheap enough infrastructure you might have to tweak your code to ease up on one or the other.
What you are interested in aren't parameters; rather, they are intrinsic properties of an algorithm.
Anyway, another property you might be interested in, and analyse an algorithm for, concerns heuristics (or rather, approximation algorithms), i.e. algorithms which don’t find an exact solution but rather one that is (hopefully) good enough.
You can analyze how far a solution is from the theoretical optimal solution in the worst case. For example, an existing algorithm (I forget which one) approximates the optimal travelling salesman tour to within a factor of two, i.e. in the worst case it's twice as long as the optimal tour.
Another metric concerns randomized algorithms, where randomization is used to prevent unwanted worst-case behaviour. One example is randomized quicksort: quicksort has a worst-case running time of O(n^2), which we want to avoid. By shuffling the array beforehand we can avoid the worst case (i.e. an already sorted array) with very high probability. Just how high this probability is can be important to know; this is another intrinsic property of the algorithm that can be analyzed using stochastic methods.
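A hedged sketch of that shuffle-first idea (the deliberately naive first-element pivot is what would make sorted input the worst case, and the shuffle is what protects against it):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Shuffle-first randomised quicksort.  The naive first-element pivot hits
// its O(n^2) worst case on already-sorted input, but the initial shuffle
// makes that outcome extremely unlikely for any input.
void quicksort(std::vector<int>& a, long lo, long hi) {
    if (lo >= hi) return;
    const int pivot = a[lo];                 // naive pivot choice
    long i = lo, j = hi;
    while (i <= j) {                         // Hoare-style partition
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) std::swap(a[i++], a[j--]);
    }
    quicksort(a, lo, j);
    quicksort(a, i, hi);
}

void randomized_sort(std::vector<int>& a) {
    std::mt19937 rng(std::random_device{}());
    std::shuffle(a.begin(), a.end(), rng);   // defuse adversarial orderings
    if (!a.empty()) quicksort(a, 0, static_cast<long>(a.size()) - 1);
}
```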
For numeric algorithms, there's also the property of continuity: that is, whether the output changes only slightly when you change the input slightly. See also Continuity analysis of programs on Lambda The Ultimate for a discussion and a link to an academic paper.
For lazy languages, there's also strictness: f is called strict if f _|_ = _|_ (where _|_ denotes the bottom (in the sense of domain theory), a computation that can't produce a result due to non-termination, errors etc.), otherwise it is non-strict. For example, the function \x -> 5 is non-strict, because (\x -> 5) _|_ = 5, whereas \x -> x + 1 is strict.
Another property is determinism: whether the result of the algorithm (or its other properties, such as running time or space consumption) depends solely on its input.
All these things in the other answers about the quality of various algorithms are important and should be considered.
But time and space are two things that vary at some rate compared to the size of the input (n). So what else can vary according to n?
There are several that are related to I/O. For example, the number of writes to a disk is an important one, which may not be directly shown by space and time estimates alone. This becomes particularly important with flash memory, where the number of writes to the same memory location is the significant metric in some algorithms.
Another I/O metric would be "chattiness". A networking protocol might send shorter messages more often, adding up to the same space and time as another networking protocol, but some aspect of the system (perhaps billing?) might make minimizing either the size or the number of the messages desirable.
And that brings us to Cost, which is a very important algorithmic consideration sometimes. The cost of an algorithm may be affected by both space and time in different amounts (consider the separate costing of server storage space and gigabits of data transfer), but the cost is the thing that you wish to minimize overall, so it may have its own big-O estimations.
