Implementation of Chazelle's triangulation algorithm

There is an algorithm for triangulating a polygon in linear time due to Chazelle (1991), but, AFAIK, there aren't any standard implementations of his algorithm in general mathematical software libraries.
Does anyone know of such an implementation?

See this answer to the question Powerful algorithms too complex to implement:
According to Skiena (author of The Algorithm Design Manual), "[the] algorithm is quite hopeless to implement."
I've looked for an implementation before, but couldn't find one. I think it's safe to assume no one has implemented it due to its complexity, and I think it also has quite a large constant factor, so it wouldn't do well against O(n lg n) algorithms that have smaller constant factors.

This is claimed to be an implementation of Chazelle's Algorithm for Triangulating a Simple Polygon in Linear Time (mainly C++ and C).

Related

How efficient is efficient when it comes to Polynomial Time Algorithms?

I hope this is the right place for this question.
Polynomial time algorithms! How do polynomial time algorithms (PTAs) actually relate to the processing power, memory size (RAM) and storage of computers?
We consider PTAs to be efficient. We know that even for a PTA, the time complexity increases with the input size n. Take primality testing, for example: there already exists a PTA that determines whether a number is prime. But what happens if I want to check a number as big as this one: https://justpaste.it/3fnj2? Is the PTA for primality checking still considered efficient? Is there a computer that can determine whether such a big number is prime?
Whether yes or no (maybe no, I don't know), how does the concept of polynomial time algorithms actually apply in the real world? Is there some computing bound or something for so-called polynomial time algorithms?
I've tried Google searches on this, but all I find are mathematical Big-O related explanations. I can't find articles that actually relate the concept of PTAs to computing power. I would appreciate some explanation or links to some resources.
There are a few things to explain.
Regarding polynomial time as efficient is just an arbitrary agreement. Mathematicians have, in effect, defined a set Efficient_Algorithms = {algorithms whose running time is bounded by some polynomial P}. That is only a mathematical definition. Mathematicians don't see your actual hardware and don't care about it; they work with abstract concepts. Yes, in this sense scientists consider O(n^100) efficient.
But you cannot compare statements from theoretical computer science one-to-one with computer programs running on hardware. Scientists work with formulas and theorems, while computer programs are executed on electric circuits.
The Big-O notation does not help you compare implementations of an algorithm. Big-O compares algorithms, not implementations of them. This can be illustrated as follows. Suppose you have a prime-checking algorithm with a high polynomial complexity. You implement it and see that it does not perform well for practical use cases. So you use a profiler, which tells you where the bottleneck is. You find out that 98% of the computation time is spent in matrix multiplications. So you develop a processor that does exactly such calculations extremely fast. Or you buy the most modern graphics card for this purpose. Or you wait 150 years for a new hardware generation. Or you manage to run most of these multiplications in parallel. Imagine you somehow reduced the time for matrix multiplications by 95%. With this wonderful hardware you run your algorithm, and suddenly it performs well. So your algorithm is actually efficient; it was only your hardware that was not powerful enough. This is not just a thought experiment: such dramatic improvements in computing power happen quite often.
Most algorithms that have polynomial complexity have it because the problems they solve are themselves of polynomial complexity. Consider matrix multiplication: if you do it on paper (the schoolbook way), it is O(n^3). It is in the nature of this problem that it has polynomial complexity. In practice and daily life, (I think) most problems for which you have a polynomial algorithm are actually polynomial problems. If you have a polynomial problem, then a polynomial algorithm is efficient.
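To make the cubic bound concrete, here is a minimal sketch of the schoolbook method in C++ (my own illustration, not part of the original answer); the three nested loops are exactly where the O(n^3) comes from:

    #include <cstddef>
    #include <vector>

    // Schoolbook multiplication of two n x n matrices.
    // Three nested loops over n elements each: roughly n*n*n scalar
    // multiplications, which is where the O(n^3) bound comes from.
    std::vector<std::vector<double>>
    multiply(const std::vector<std::vector<double>>& a,
             const std::vector<std::vector<double>>& b) {
        const std::size_t n = a.size();
        std::vector<std::vector<double>> c(n, std::vector<double>(n, 0.0));
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                for (std::size_t k = 0; k < n; ++k)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }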
Why do we talk about polynomial algorithms, and why do we consider them efficient? As already said, this is quite arbitrary. But as motivation, the following may be helpful. When talking about "polynomial algorithms", we can say there are two types of them.
The algorithms whose complexity is even lower than polynomial (e.g. linear or logarithmic). I think we can agree these are efficient.
The algorithms that are genuinely polynomial and not lower than polynomial. As illustrated above, in practice these algorithms are often polynomial because they solve problems that are themselves of polynomial nature and therefore require polynomial complexity. If you see it this way, then of course we can say these algorithms are efficient.
In practice, if you have a linear problem you will normally recognise it as a linear problem, and you would normally not apply an algorithm with a worse complexity to it. This is just practical experience. If you, for example, search for an element in a list, you would not expect more comparisons than the number of elements in the list. If in such a case you apply an algorithm with complexity O(n^2), then of course this polynomial algorithm is not efficient. But as said, such mistakes are usually so obvious that they don't happen.
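To make the "obvious mistake" concrete, here is a small, admittedly contrived sketch in C++: a linear search doing the expected O(n) work next to a needlessly quadratic way of answering the same membership question:

    #include <cstddef>
    #include <vector>

    // Expected approach: one pass over the list, at most n comparisons -- O(n).
    bool contains_linear(const std::vector<int>& xs, int target) {
        for (int x : xs)
            if (x == target) return true;
        return false;
    }

    // Contrived, needlessly quadratic approach to the same question: for every
    // element, rescan the whole list to "confirm" the match -- O(n^2) comparisons
    // in the worst case. Still polynomial, but clearly not efficient here.
    bool contains_quadratic(const std::vector<int>& xs, int target) {
        for (std::size_t i = 0; i < xs.size(); ++i)
            for (std::size_t j = 0; j < xs.size(); ++j)
                if (xs[i] == target && xs[j] == xs[i]) return true;
        return false;
    }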
So this is my final answer to your question: in practice, software developers have a good feeling for linear complexity, and good developers also have a feeling for logarithmic complexity. In consequence, you don't have to worry about complexity theory too much. If you have a polynomial algorithm, you normally have a pretty good feel for whether the underlying problem itself is linear or not; if it is not, then your algorithm is efficient. If you have an exponential algorithm, it may not be obvious what is going on, but in practice you will see the computation time, run some experiments, or get complaints from users. Exponential complexity is normally hard to miss.

Why does using heuristics in an algorithm take away asymptotic optimality?

I was reading about some geometric routing algorithms; it says that employing heuristics in a version of the main algorithm may improve performance, but takes away asymptotic optimality.
Why is that the case? Should we prefer asymptotic optimality over better performance? Are there prototypical cases where one should prefer asymptotic optimality? Are there any benchmarks known?
I think you are asking about optimization problems where heuristics run fast but might not achieve the optimal solution, whereas exact algorithms can run much slower in the worst case although they always return the optimal solution. If so, here's some info. In general, the decision to use a heuristic algorithm depends on how well it approximates the optimal solution "in practice", whether that typical solution quality is good enough for you, and whether you think your particular problem instances fall into the category of problems encountered in practice. If you are interested, look up approximation algorithms for NP-complete problems. For some problems the score of the solution found by a heuristic is within a constant multiplier (1 + epsilon) of the score of the optimal solution, and you can choose epsilon; however, the running time typically increases as epsilon decreases.
My guess is that they are talking about use of (non-admissible) heuristics for approximation algorithms. For instance, the traveling salesman problem is NP-complete, yet there are heuristic approximation methods that are much faster than known algorithms for NP-complete problems but are only guaranteed to get within a few percent of optimal.
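As a rough sketch of what such a heuristic can look like (my own illustration, not taken from the answers above), here is the classic nearest-neighbour rule for TSP in C++: it runs in O(n^2), but unlike an exact solver it gives up any strong guarantee on how close the tour is to optimal:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point { double x, y; };

    double dist(const Point& a, const Point& b) {
        return std::hypot(a.x - b.x, a.y - b.y);
    }

    // Nearest-neighbour TSP heuristic: always travel to the closest unvisited
    // city. O(n^2) time, but the tour it returns may be noticeably longer than
    // the optimal one -- the price paid for giving up optimality.
    std::vector<std::size_t> nearest_neighbour_tour(const std::vector<Point>& cities) {
        const std::size_t n = cities.size();
        std::vector<bool> visited(n, false);
        std::vector<std::size_t> tour;
        if (n == 0) return tour;

        std::size_t current = 0;      // start arbitrarily at city 0
        visited[current] = true;
        tour.push_back(current);

        for (std::size_t step = 1; step < n; ++step) {
            std::size_t best = n;
            double best_d = 0.0;
            for (std::size_t j = 0; j < n; ++j) {
                if (visited[j]) continue;
                double d = dist(cities[current], cities[j]);
                if (best == n || d < best_d) { best = j; best_d = d; }
            }
            visited[best] = true;
            tour.push_back(best);
            current = best;
        }
        return tour;
    }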

When is the "Fast Integer Multiplication Using Modular Arithmetic" (2008) algorithm faster than Schönhage-Strassen algorithm?

From Wikipedia:
"Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi[11] gave a similar algorithm using modular arithmetic in 2008 achieving the same running time. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs."
I would be very interested in the size of such impractically large integers.
Maybe someone has implemented both algorithms and could run some benchmarks?
Thanks
Fürer's algorithm and its modular equivalent (DSKS) are very deep research topics and, for now, remain of purely academic interest. Nobody actually knows how big the cross-over point is. And in all likelihood it doesn't matter, because that cross-over point is likely to be well beyond 64-bit computing limits.
I've implemented Schönhage-Strassen before and I understand how Fürer's algorithm works. So I'm quite familiar with both of them. I can say it's very possible that the cross-over point between Schönhage-Strassen and Fürer's algorithm is so high that a computer capable of holding the parameters will be larger than the size of the observable universe.
That's the problem when you have complexities that differ by less than a logarithm. It takes exponentially large input sizes to compensate even for small differences in the Big-O constant.
In this case, Fürer's algorithm is known to have a very very very large Big-O constant.
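To get a feel for why "less than a logarithm" means astronomical crossover points, here is a toy back-of-the-envelope calculation in C++. The constants cSS, cF and K are pure assumptions (nobody knows the real ones); the only point is that the Fürer-style per-digit factor 2^(K·log* n) sits in the thousands while the Schönhage–Strassen factor lg lg n crawls upward, so even absurdly long operands leave Schönhage–Strassen ahead:

    #include <cmath>
    #include <cstdio>

    // Iterated logarithm (base 2): how many times lg must be applied to x
    // before the result drops to 1 or below.
    int log_star(double x) {
        int k = 0;
        while (x > 1.0) { x = std::log2(x); ++k; }
        return k;
    }

    int main() {
        // Assumed constants -- pure guesses for illustration only.
        const double cSS = 1.0;  // constant in  cSS * n * lg n * lg lg n
        const double cF  = 1.0;  // constant in  cF  * n * lg n * 2^(K * log* n)
        const double K   = 2.0;  // assumed multiplier hidden in Fürer's exponent

        // n = number of bits per operand; even n = 1e18 bits (~125 petabytes
        // per operand) is far beyond what any real machine can hold.
        for (double n = 1e6; n <= 1e18; n *= 1e3) {
            double ss = cSS * std::log2(std::log2(n));          // lg lg n
            double fu = cF  * std::pow(2.0, K * log_star(n));    // 2^(K log* n)
            std::printf("n = %.0e bits:  SS factor %.2f   Fuerer-style factor %.1f\n",
                        n, ss, fu);
        }
        // Under these toy constants, lg lg n would have to exceed ~1000 before
        // the Fürer-style bound wins, i.e. operands of more than 2^(2^1000)
        // bits -- hence "larger than the observable universe".
        return 0;
    }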

How to test an algorithm for perfect optimization?

Is there any way to test an algorithm for perfect optimization?
There is no easy way to prove that any given algorithm is asymptotically optimal.
A proof of optimality, if one ever comes, sometimes follows years or even decades after the algorithm was first written. A classic example is the Union-Find/disjoint-set data structure.
Disjoint-set forests are a data structure where each set is represented by a tree data structure, in which each node holds a reference to its parent node. They were first described by Bernard A. Galler and Michael J. Fischer in 1964, although their precise analysis took years.
[...] These two techniques complement each other; applied together, the amortized time per operation is only O(α(n)), where α(n) is the inverse of the function f(n) = A(n,n), and A is the extremely quickly-growing Ackermann function.
[...] In fact, this is asymptotically optimal: Fredman and Saks showed in 1989 that Ω(α(n)) words must be accessed by any disjoint-set data structure per operation on average.
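For reference, here is a minimal sketch of such a disjoint-set forest in C++, using the two techniques the quote refers to (union by rank and path compression):

    #include <cstddef>
    #include <numeric>
    #include <utility>
    #include <vector>

    // Disjoint-set forest with union by rank and path compression.
    // Each operation runs in O(alpha(n)) amortized time, which Fredman and
    // Saks showed to be asymptotically optimal for this problem.
    class DisjointSets {
    public:
        explicit DisjointSets(std::size_t n) : parent(n), rank_(n, 0) {
            std::iota(parent.begin(), parent.end(), std::size_t{0}); // each element starts as its own root
        }

        // Find the representative of x's set, flattening the path as we go.
        std::size_t find(std::size_t x) {
            if (parent[x] != x)
                parent[x] = find(parent[x]); // path compression
            return parent[x];
        }

        // Merge the sets containing a and b; attach the shallower tree under
        // the deeper one (union by rank) to keep the trees flat.
        void unite(std::size_t a, std::size_t b) {
            std::size_t ra = find(a), rb = find(b);
            if (ra == rb) return;
            if (rank_[ra] < rank_[rb]) std::swap(ra, rb);
            parent[rb] = ra;
            if (rank_[ra] == rank_[rb]) ++rank_[ra];
        }

    private:
        std::vector<std::size_t> parent;
        std::vector<std::size_t> rank_;
    };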
For some algorithms optimality can be proven after very careful analysis, but generally speaking, there's no easy way to tell if an algorithm is optimal once it's written. In fact, it's not always easy to prove if the algorithm is even correct.
See also
Wikipedia/Matrix multiplication
The naive algorithm is O(N^3), Strassen's is roughly O(N^2.807), Coppersmith-Winograd is O(N^2.376), and we still don't know what is optimal.
Wikipedia/Asymptotically optimal
it is an open problem whether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is an O(nα(n)) algorithm for finding minimum spanning trees. Whether this algorithm is asymptotically optimal is unknown, and would be likely to be hailed as a significant result if it were resolved either way.
Practical considerations
Note that sometimes asymptotically "worse" algorithms are better in practice due to many factors (e.g. ease of implementation, actually better performance for the given input parameter range, etc).
A typical example is quicksort with a simple pivot selection that may exhibit quadratic worst-case performance, but is still favored in many scenarios over a more complicated variant and/or other asymptotically optimal sorting algorithms.
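For instance, a minimal quicksort with the simplest possible pivot rule (always take the last element of the range) is short and fast on typical data, yet an already-sorted input drives it straight into its quadratic worst case; a sketch:

    #include <utility>
    #include <vector>

    // Quicksort with a deliberately simple pivot rule: always use the last
    // element of the range. Clean and fast on random data, but an already
    // sorted (or reverse-sorted) input makes every partition maximally
    // unbalanced, giving the O(n^2) worst case mentioned above.
    // Call as: quicksort(v, 0, static_cast<int>(v.size()) - 1);
    void quicksort(std::vector<int>& a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; ++j)          // Lomuto partition
            if (a[j] < pivot) std::swap(a[i++], a[j]);
        std::swap(a[i], a[hi]);                // pivot into its final position
        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }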
For those of us mortals who merely want to know whether an algorithm reasonably works as expected and is faster than the others, there is an easy step called 'benchmarking': pick the best contenders in the area and compare them with your algorithm. If your algorithm wins, then it better matches your needs (the ones defined by your benchmarks).
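A bare-bones sketch of such a benchmark in C++, timing two contenders on the same input (the two library sorts below are just stand-ins for whatever algorithms you are actually comparing):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Time a single call of `f` on a private copy of the input, in milliseconds.
    template <typename F>
    double time_ms(F f, std::vector<int> data) {
        auto t0 = std::chrono::steady_clock::now();
        f(data);
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<int> input(1000000);
        for (int& x : input) x = static_cast<int>(rng());

        // Contender A: the library sort. Contender B: the stable sort.
        // Swap in your own algorithm, repeat the runs, and vary the input
        // sizes and shapes that actually matter for your use case.
        double a = time_ms([](std::vector<int>& v) { std::sort(v.begin(), v.end()); }, input);
        double b = time_ms([](std::vector<int>& v) { std::stable_sort(v.begin(), v.end()); }, input);
        std::printf("std::sort:        %.2f ms\nstd::stable_sort: %.2f ms\n", a, b);
        return 0;
    }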

Is there a master list of the Big-O notation for everything?

Is there a master list of the Big-O notation for everything? Data structures, algorithms, operations performed on each, average-case, worst-case, etc.
Dictionary of Algorithms and Data Structures is a fairly comprehensive list, and includes complexity (Big-O) in the algorithms' descriptions. If you need more information, it'll be in one of the linked references, and there's always Wikipedia as a fallback.
The Cormen book is more about teaching you how to prove what Big-O would be for a given algorithm, rather than rote memorization of algorithm to its Big-O performance. The former is far more valuable than the latter, and requires an investment on your part.
Try "Introduction to Algorithms" by Cormen, Leisersen, and Rivest. If its not in there its probably not worth knowing.
In C++, the STL standard is defined in terms of the Big-O characteristics of the algorithms as well as their space requirements. That way you can switch between competing implementations of the STL and still know that your program has roughly the same runtime characteristics.
Particularly good STL implementations can even special-case lists of particular types to do better than the standard requires.
It makes it easy to pick the correct iterator or container type for a particular problem, because you can weigh space consumption against speed.
Of course, Big-O is only a guideline, since all constants are removed. If an algorithm runs in k*O(n), it would be classified as O(n), but if k is sufficiently large it could in practice be worse than an O(n^2) algorithm for some values of n.
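As a small sketch of that kind of trade-off: the standard guarantees that lookup in a std::set is O(log n) while std::find on an unsorted std::vector is O(n), so the choice between them can be made from the documented complexities alone, weighed against the set's extra space and per-node overhead:

    #include <algorithm>
    #include <cstdio>
    #include <set>
    #include <vector>

    int main() {
        std::vector<int> vec;   // contiguous, cheap on space, O(n) membership test
        std::set<int> tree;     // node-based, more space, O(log n) membership test

        for (int i = 0; i < 1000; ++i) {
            vec.push_back(i);
            tree.insert(i);
        }

        // Both answer the same question; the standard's complexity guarantees
        // tell you in advance how each will scale, whatever STL you link against.
        bool in_vec  = std::find(vec.begin(), vec.end(), 999) != vec.end(); // linear scan
        bool in_tree = tree.find(999) != tree.end();                        // balanced-tree lookup
        std::printf("vector: %s, set: %s\n", in_vec ? "found" : "missing",
                    in_tree ? "found" : "missing");
        return 0;
    }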
Introduction to Algorithms, Second Edition, aka CLRS (Cormen, Leiserson, Rivest, Stein), is the closest thing I can think of.
If that fails, then try The Art of Computer Programming, by Knuth. If it's not in those, you probably need to do some real research.
To anyone who is coming to this question from Google:
http://bigocheatsheet.com/

Resources