Practical implementations of constrained conforming Delaunay triangulations

I need to create a mesh of vertices to use for pathfinding, given an existing outline. I think that a constrained conforming Delaunay triangulation algorithm would be best suited for my use case; however, I don't know how to implement such an algorithm.
What are possible practical (not theoretical) implementations of CCDT? Or at least, what should I research in order to come up with my own implementation?
I am using C#, but an example in any language would be helpful.

I assume that you are searching for an implementation of constrained Delaunay triangulation (CDT) in 2D, with a conforming algorithm.
You definitely do not want to implement a CDT yourself. Making it robust is difficult and requires dedicated exact number types for the degenerate cases.
There exist several open-source implementations of CDT in 2D (both with conforming algorithms). I can cite Triangle, implemented in C by Jonathan Shewchuk, and the CGAL 2D triangulations, implemented in generic C++ (with templates) by the CGAL project. For CGAL, the conforming algorithm is in the 2D mesh generator chapter: see Building Conforming Triangulations. In full disclosure, I am the author of the 2D conforming algorithm in CGAL.

Since you're working in C#, the Triangle library cited in some of the other answers may be a good solution for you if you can use unmanaged code. I've used it, and it is excellent. Although Java is not your language of interest, I've got a Java implementation at https://github.com/gwlucastrig/Tinfour which might serve as an example of a more object-oriented API. There are also some write-ups on the ideas and applications of the constrained conforming Delaunay triangulation which might help you figure out how to apply the CCDT to your particular problem. You can find these at https://github.com/gwlucastrig/Tinfour/wiki/About-the-Constrained-Delaunay-Triangulation and https://github.com/gwlucastrig/Tinfour/wiki/Tutorial-Using-Polygon-Based-Constraints
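If you need to stay in managed code, there is also Triangle.NET, an unofficial C# port of Triangle. Here is a rough sketch of the workflow; the type and option names below are taken from Triangle.NET and may differ between versions, so verify them against whatever you install rather than treating this as drop-in code.

```csharp
// Sketch only: assumes Triangle.NET, an unofficial C# port of Triangle.
// Verify type and option names against the version you install.
using TriangleNet.Geometry;
using TriangleNet.Meshing;

var polygon = new Polygon();

// The outline of the walkable area; adding it as a contour turns its
// edges into constraint segments that the triangulation must respect.
var outline = new Vertex[]
{
    new Vertex(0, 0), new Vertex(10, 0),
    new Vertex(10, 10), new Vertex(0, 10)
};
polygon.Add(new Contour(outline), hole: false);

// ConformingDelaunay asks the mesher to insert Steiner points so the
// result is truly Delaunay, not merely constrained-Delaunay; the
// quality option refines skinny triangles, which helps pathfinding.
var mesh = polygon.Triangulate(
    new ConstraintOptions { ConformingDelaunay = true },
    new QualityOptions { MinimumAngle = 25.0 });

// Build a pathfinding graph from the triangles and their neighbors,
// e.g. one node per triangle, edges between adjacent triangles.
foreach (var triangle in mesh.Triangles)
{
    // triangle.GetVertex(0), GetVertex(1), GetVertex(2) are the corners.
}
```

From the resulting triangles you would then build your pathfinding graph, for example one node per triangle centroid with edges between neighbors, and run A* over it.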

Related

Independent algorithm implementation irrespective of what it is targeting

I am wondering whether there are existing architectures or guidelines for defining algorithms and problems in a way that lets any algorithm be applied to any problem (N-to-N) on demand. I have developed many algorithms (brute force, methods specialized from papers) for solving problems, and I found I was doing repetitive work, since the algorithm and the problem are interwoven in each implementation to define the solution. Maybe my approach is not the best, but it gets me what I want.
I am looking for an architecture reference or guidelines for implementing algorithms on a general basis while defining problems separately, so that an already implemented algorithm can be reused to solve a new problem. This way I could write algorithms like an API and use them whenever a problem needs to be solved.
Any references would be great. I am not concerned about the programming platform, since I can adapt to it in no time.
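For concreteness, the kind of separation meant here could be sketched as a pair of C# interfaces; the names are illustrative only, not a reference to any existing framework.

```csharp
// Illustration of the desired separation: the algorithm only sees an
// abstract problem interface, so new problems plug into old solvers.
using System.Collections.Generic;

interface IProblem<TState>
{
    TState InitialState { get; }
    bool IsGoal(TState state);
    IEnumerable<TState> Neighbors(TState state);
}

interface ISolver
{
    // Any solver (brute force, A*, ...) can run against any problem.
    TState Solve<TState>(IProblem<TState> problem);
}
```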

Searching for Genetic Programming framework/library

I am looking for a framework or library that enables working with genetic programming (Koza style) not only with mathematical functions, but also with loops, variable or constant assignment, object creation, or function calls. I am not sure whether such a branch of genetic algorithms exists, or what it is called.
I did my best searching for information, but the internet is thin on this specific topic.
HeuristicLab has a powerful implementation of Genetic Programming. It includes problems such as Symbolic Regression, Symbolic Classification, Time Series, Santa Fe Ant Trail, and there is a tutorial to implement custom problems such as the Lawn Mower (which is similar to the Santa Fe Ant Trail). HeuristicLab is implemented in C# and runs on Windows. It's released under GPL and can be freely downloaded.
The implementation of GP is very flexible and extensible, but also performance-optimized, using online calculations to avoid array allocations and memory overhead. We include several benchmark problem instances for symbolic regression and classification. More algorithms are also available, such as random forests, neural networks, k-NN, and SVM (if you're doing regression or classification).
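If it helps to see what "GP beyond mathematical functions" means concretely, here is a toy sketch in C# (my own illustration, not HeuristicLab's API) of a Koza-style program tree whose primitives include assignment and a bounded loop; the loop bound is the usual guard that keeps randomly evolved programs from running forever.

```csharp
// Toy illustration only -- not HeuristicLab's API. A Koza-style program
// tree whose primitives go beyond math: assignment and a bounded loop.
using System;
using System.Collections.Generic;

abstract class Node
{
    public abstract double Eval(Dictionary<string, double> env);
}

class Const : Node
{
    public double Value;
    public override double Eval(Dictionary<string, double> env) => Value;
}

class Var : Node
{
    public string Name;
    public override double Eval(Dictionary<string, double> env) =>
        env.TryGetValue(Name, out var v) ? v : 0.0;
}

class Add : Node
{
    public Node Left, Right;
    public override double Eval(Dictionary<string, double> env) =>
        Left.Eval(env) + Right.Eval(env);
}

// Side effect: store a value under a name so later subtrees can read it.
class Assign : Node
{
    public string Name;
    public Node Expr;
    public override double Eval(Dictionary<string, double> env) =>
        env[Name] = Expr.Eval(env);
}

// Bounded loop: the cap keeps arbitrary evolved programs terminating.
class Repeat : Node
{
    public int Times;
    public Node Body;
    public override double Eval(Dictionary<string, double> env)
    {
        double last = 0.0;
        for (int i = 0; i < Math.Min(Times, 100); i++)
            last = Body.Eval(env);
        return last;
    }
}
```

Evolution then mutates and crosses over subtrees of such programs exactly as it would with pure arithmetic trees.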

Parallel iterative algorithms for solving linear systems of equations

Does anyone know of a library or ready-made source code for parallel implementations of fast iterative methods (BiCGSTAB, CG, etc.) for solving linear systems of equations, for example using MPI or OpenMP?
PETSc is a good example (both serial and MPI, with a large library of linear and nonlinear solvers either included or provided as interfaces to external libraries). Trilinos is another example, but it's a much broader project and not as nicely integrated as PETSc. Aztec has a number of solvers, as does Hypre, which is hybrid (MPI+OpenMP).
These are all MPI-based at least in part; I don't know of many OpenMP-enabled ones, although Google suggests Lis, which I'm not familiar with.
Chapter 7 of Parallel Programming for Multicore and Cluster Systems contains algorithms for systems of linear equations, with source code (MPI).
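If you mainly want to see the shape of such a method before adopting a library, here is a minimal sketch of unpreconditioned conjugate gradient, written in C# for illustration, with only the matrix-vector product parallelized via Parallel.For. Real codes add preconditioning and distribute the matrix itself, as PETSc does over MPI.

```csharp
// Minimal sketch: unpreconditioned conjugate gradient for a symmetric
// positive-definite dense matrix A. Only the mat-vec is parallelized.
using System;
using System.Threading.Tasks;

static class Cg
{
    static double[] MatVec(double[,] A, double[] x)
    {
        int n = x.Length;
        var y = new double[n];
        Parallel.For(0, n, i =>
        {
            double s = 0.0;
            for (int j = 0; j < n; j++) s += A[i, j] * x[j];
            y[i] = s;
        });
        return y;
    }

    static double Dot(double[] a, double[] b)
    {
        double s = 0.0;
        for (int i = 0; i < a.Length; i++) s += a[i] * b[i];
        return s;
    }

    public static double[] Solve(double[,] A, double[] b, double tol = 1e-10)
    {
        int n = b.Length;
        var x = new double[n];
        var r = (double[])b.Clone();   // residual r = b - A*x, with x = 0
        var p = (double[])r.Clone();   // search direction
        double rsOld = Dot(r, r);

        for (int k = 0; k < n && Math.Sqrt(rsOld) > tol; k++)
        {
            var Ap = MatVec(A, p);
            double alpha = rsOld / Dot(p, Ap);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            double rsNew = Dot(r, r);
            for (int i = 0; i < n; i++) p[i] = r[i] + (rsNew / rsOld) * p[i];
            rsOld = rsNew;
        }
        return x;
    }
}
```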

Algorithm for genogram

I am developing a Ruby program that should be able to draw a genogram on a web page.
I am therefore looking for an algorithm for drawing a genogram or a similar tree structure.
I would prefer an algorithm in Ruby, but other languages will do, as would references explaining the principles behind such an algorithm.
A recursive algorithm in C++ has been published here, but it is not documented in a way that allows me to use it.
Any help on how to implement a genogram would be much appreciated.
AFAIK, the canonical work on rendering trees is "Drawing Dynamic Trees" by Sven Moen. You should be able to find the paper or an implementation of his polyline algorithm with a bit of googling.
You could also have a look at Graphviz as that can handle trees as well as arbitrary graphs.
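To convey the principle those algorithms build on, here is a naive layout sketch (an illustration only, not Moen's polyline algorithm): give the leaves consecutive horizontal slots, center each parent over its children, and use depth for the vertical coordinate.

```csharp
// Naive tidy-tree layout sketch: leaves get consecutive x slots,
// parents are centered over their children, depth gives y.
// An illustration of the principle, not Moen's algorithm.
using System.Collections.Generic;
using System.Linq;

class TreeNode
{
    public List<TreeNode> Children = new List<TreeNode>();
    public double X, Y;
}

static class Layout
{
    static double nextLeafX;

    public static void Arrange(TreeNode root)
    {
        nextLeafX = 0;
        Place(root, depth: 0);
    }

    static void Place(TreeNode node, int depth)
    {
        node.Y = depth; // scale by row height when rendering
        if (node.Children.Count == 0)
        {
            node.X = nextLeafX++;
            return;
        }
        foreach (var child in node.Children)
            Place(child, depth + 1);
        node.X = node.Children.Average(c => c.X); // center over children
    }
}
```

A real genogram complicates this, because a child has two parents and the structure stops being a strict tree; that is where the published algorithms and general tools like Graphviz earn their keep.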

Real world implementations of "classical algorithms"

I wonder how many of you have implemented one of computer science's "classical algorithms", like Dijkstra's algorithm, or data structures (e.g. binary search trees) in a real-world, non-academic project?
Is there a benefit to our dayjobs in knowing these algorithms and data structures when there are tons of libraries, frameworks and APIs which give you the same functionality?
The library doesn't know what your problem domain is and won't be able to choose the correct algorithm for the job. That is why I think it is important to know about them: then YOU can make the correct choice of algorithm to solve YOUR problem.
Knowing, or at least being able to understand, these algorithms is important; they are the tools of your trade. It does not mean you have to be able to implement A* in an hour from memory. But you should be able to figure out the advantages of a red-black tree as opposed to a normal unbalanced tree, so you can decide whether you need it or not. You need to be able to judge the fitness of an algorithm for solving your problem.
This might sound too schoolmasterish, but these "classical algorithms" were not invented to give college students exam questions; they were invented to solve problems or improve on existing solutions. Just as the array, the linked list, and the stack are building blocks for writing a program, so are some of these. And just as in math you move from addition and subtraction to integration and differentiation, these are advanced techniques that will help you solve the problems that are out there.
They might not be directly applicable to your problems or work situation, but in the long run, knowing of them will help you as a professional software engineer.
To answer your question, I did an implementation of A* recently for a game.
Is there a benefit to understanding your tools, rather than simply knowing that they exist?
Yes, of course there is. Taking a trivial example, don't you think there's a benefit to knowing the difference between List (or your language's equivalent dynamic array implementation) and LinkedList (or your language's equivalent)? It's pretty important to know that one has constant random access time, while the other is linear. And one requires N copies if you insert a value in the middle of the sequence, while the other can do it in constant time.
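A concrete C# illustration of that difference (a sketch; the sizes are arbitrary):

```csharp
// List<T> stores elements contiguously: inserting in the middle shifts
// everything after the insertion point (O(n)). LinkedList<T> relinks two
// nodes (O(1)) -- but only once you already hold a node reference.
using System.Collections.Generic;
using System.Linq;

var list = new List<int>(Enumerable.Range(0, 1_000_000));
var linked = new LinkedList<int>(list);

list.Insert(list.Count / 2, 42);        // shifts ~500,000 elements

var mid = linked.First;
for (int i = 0; i < linked.Count / 2; i++)
    mid = mid.Next;                     // finding the node is O(n)...
linked.AddAfter(mid, 42);               // ...but the insertion is O(1)
```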
Don't you think there's an advantage to understanding that the same sorting algorithm isn't always optimal? That for almost-sorted data, quicksort sucks, for example? Naively just calling Sort() and hoping for the best can become ridiculously expensive if you don't understand what's happening under the hood.
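As a concrete instance of the almost-sorted case: plain insertion sort, which looks naive on paper, runs in near-linear time on such data because the inner loop hardly ever iterates. A quick sketch:

```csharp
// Insertion sort is O(n^2) in the worst case, but close to O(n) on
// nearly-sorted input: each element only moves a few slots.
static void InsertionSort(int[] a)
{
    for (int i = 1; i < a.Length; i++)
    {
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key)   // shift larger elements right
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}
```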
Of course there are a lot of algorithms you probably won't need, but even so, just understanding how they work may make it easier for yourself to come up with efficient algorithms to solve other, unrelated, problems.
Well, someone has to write the libraries. While working at a mapping software company, I implemented Dijkstra's algorithm, as well as binary search trees, B-trees, n-ary trees, BK-trees, and hidden Markov models.
Besides, if all you want is a single 'well known' algorithm, and you also want the freedom to specialise it and optimise it if it becomes critical to performance, including a whole library seems like a poor choice.
We use a home-grown implementation of a pseudorandom number generator from Knuth's Seminumerical Algorithms as an aid in some statistical processing.
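For flavor, here is a minimal C# sketch of the subtractive lagged-Fibonacci generator Knuth describes. The LCG seeding below is a simplified stand-in for Knuth's more careful initialization, so treat this as an illustration, not the home-grown implementation mentioned above.

```csharp
// Sketch of Knuth's subtractive lagged-Fibonacci generator:
// X(n) = (X(n-55) - X(n-24)) mod M. The simple LCG seeding below is a
// stand-in for Knuth's more careful initialization routine.
class SubtractiveRandom
{
    private const int M = int.MaxValue;
    private readonly int[] buf = new int[55];
    private int p1 = 0;   // points at X(n-55), the oldest entry
    private int p2 = 31;  // points at X(n-24): 55 - 24 = 31 slots ahead

    public SubtractiveRandom(int seed)
    {
        uint x = (uint)seed;
        for (int k = 0; k < 55; k++)
        {
            x = x * 1664525u + 1013904223u;   // Numerical Recipes LCG
            buf[k] = (int)(x % M);
        }
    }

    public int Next()
    {
        int v = buf[p1] - buf[p2];
        if (v < 0) v += M;
        buf[p1] = v;                // the new X(n) replaces the oldest
        p1 = (p1 + 1) % 55;
        p2 = (p2 + 1) % 55;
        return v;
    }
}
```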
In my previous workplace, an EDA company, we implemented versions of Prim's and Dijkstra's algorithms, disjoint-set data structures, A* search, and more. All of these had real-world significance. I believe this depends on the problem domain: some domains are more algorithm-intensive, some less so.
Having said that, there is a fine line to walk. I see no business reason for re-implementing the STL or Java generics; in many cases, a standard library is better than reinventing the wheel. But the closer you are to your core application, the more likely it is that you will need to implement a textbook algorithm or data structure.
If you never work with performance-critical code, consider yourself lucky. However, I consider this scenario unrealistic. Performance problems could occur anywhere. And then it's necessary to know how to fix that problem. Obviously, merely knowing a few algorithm names isn't enough here – unless you want to implement them all and try them out one after the other.
No, knowing (at least some of) the inner workings of different algorithms is important for gauging their strengths and weaknesses and for analyzing how they would handle your situation.
Obviously, if there's a library already implementing exactly what you need, you're incredibly lucky. But let's face it, even if there is such a library, using it is often not completely straightforward (at the very least, interfaces and data representation often have to be adapted) so it's still good to know what to expect.
A* for a Pac-Man clone. It took me weeks to really get it, but to this day I consider it a thing of beauty.
I've had to implement some of the classical algorithms from numerical analysis. It was easier to write my own than to connect to an existing library. Also, I've had to write variations on classical algorithms because the textbook case didn't fit my application.
For classical data structures, I nearly always use the standard libraries, such as the STL for C++. The one time recently when I thought the STL didn't have the structure I needed (a heap), I rolled my own, only to have someone point out almost immediately that I didn't need to do that.
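For anyone curious what rolling your own entails, the textbook array-backed binary min-heap is short; a sketch in C# (in C++ you would instead reach for std::priority_queue or the make_heap/push_heap algorithms, which is presumably what was pointed out):

```csharp
// Textbook array-backed binary min-heap -- the structure behind
// std::priority_queue / push_heap in the STL.
using System;
using System.Collections.Generic;

class MinHeap<T> where T : IComparable<T>
{
    private readonly List<T> items = new List<T>();

    public int Count => items.Count;

    public void Push(T item)
    {
        items.Add(item);
        int i = items.Count - 1;
        while (i > 0)                              // sift up
        {
            int parent = (i - 1) / 2;
            if (items[parent].CompareTo(items[i]) <= 0) break;
            (items[parent], items[i]) = (items[i], items[parent]);
            i = parent;
        }
    }

    public T Pop()
    {
        if (items.Count == 0)
            throw new InvalidOperationException("Heap is empty.");
        T top = items[0];
        items[0] = items[items.Count - 1];
        items.RemoveAt(items.Count - 1);
        int i = 0;
        while (true)                               // sift down
        {
            int l = 2 * i + 1, r = l + 1, min = i;
            if (l < items.Count && items[l].CompareTo(items[min]) < 0) min = l;
            if (r < items.Count && items[r].CompareTo(items[min]) < 0) min = r;
            if (min == i) break;
            (items[i], items[min]) = (items[min], items[i]);
            i = min;
        }
        return top;
    }
}
```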
Classical algorithms I have used in actual work:

- A topological sort (sketched below, after this list)
- A red-black tree (although I will confess that I only had to implement insertions for that application, and it only got used in a prototype). This got used to implement an 'ordered dict' type structure in Python.
- A priority queue
- State machines of various sorts
- Probably one or two others I can't remember
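As promised above, a minimal sketch of the topological sort (Kahn's algorithm), since it is probably the most frequently reinvented item on that list:

```csharp
// Kahn's algorithm: repeatedly emit a node with no remaining incoming
// edges. Every node must appear as a key in 'edges', even if its
// adjacency list is empty; leftover nodes at the end indicate a cycle.
using System;
using System.Collections.Generic;
using System.Linq;

static class Topo
{
    public static List<T> Sort<T>(Dictionary<T, List<T>> edges)
    {
        var inDegree = edges.Keys.ToDictionary(k => k, k => 0);
        foreach (var targets in edges.Values)
            foreach (var t in targets)
                inDegree[t]++;

        var ready = new Queue<T>(
            inDegree.Where(kv => kv.Value == 0).Select(kv => kv.Key));
        var order = new List<T>();

        while (ready.Count > 0)
        {
            var n = ready.Dequeue();
            order.Add(n);
            foreach (var t in edges[n])
                if (--inDegree[t] == 0)
                    ready.Enqueue(t);
        }

        if (order.Count != inDegree.Count)
            throw new InvalidOperationException("Graph contains a cycle.");
        return order;
    }
}
```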
As to the second part of the question:
An understanding of how the algorithms work, their complexity and semantics gets used on a fairly regular basis. They also inform the design of systems. Occasionally one has to do things involving parsing or protocol handling, or some computation that's slightly clever. Having a working knowledge of what the algorithms do, how they work, how expensive they are and where one might find them lying around in library code goes a long way to knowing how to avoid reinventing the wheel poorly.
I use the Levenshtein distance algorithm to help implement a 'Did you mean [suggested word]?' feature in our website search.
It works quite well when combined with our 'tagging' system, which allows us to associate extra words (other than those in the title/description/etc.) with items in the database.
It's not perfect by any means, but it's way better than most corporate site searches, if I do say so myself. ;)
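The algorithm itself is a small dynamic program; a sketch of the classic two-row formulation in C#:

```csharp
// Levenshtein distance with two rolling rows: O(len(a)*len(b)) time,
// O(len(b)) space. A small distance drives the "did you mean" suggestion.
static int Levenshtein(string a, string b)
{
    var prev = new int[b.Length + 1];
    var curr = new int[b.Length + 1];
    for (int j = 0; j <= b.Length; j++) prev[j] = j;

    for (int i = 1; i <= a.Length; i++)
    {
        curr[0] = i;
        for (int j = 1; j <= b.Length; j++)
        {
            int cost = a[i - 1] == b[j - 1] ? 0 : 1;
            curr[j] = Math.Min(Math.Min(
                prev[j] + 1,         // deletion
                curr[j - 1] + 1),    // insertion
                prev[j - 1] + cost); // substitution
        }
        (prev, curr) = (curr, prev);
    }
    return prev[b.Length];
}
```

Suggesting a word is then a nearest-neighbor search over the vocabulary by edit distance, which is where structures like the BK-trees mentioned in another answer come in.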
Classical algorithms are usually associated with something glamorous, like games, or Web search, or scientific computation. However, I had to use some of the classical algorithms for a mere enterprise application.
I was building a metadata migration tool, and I had to use topological sort for dependency resolution, various forms of graph traversal for queries on the metadata, and a modified variation of Tarjan's union-find data structure to partition forest-like structured metadata into trees.
That was a really satisfying experience. Most of those algorithms had been implemented before, but their implementations lacked something I needed for my task. That's why it's important to understand their internals.
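For reference, the textbook (unmodified) union-find structure is only a dozen lines with path compression and union by rank; a sketch:

```csharp
// Textbook disjoint-set (union-find) with path compression and
// union by rank; near-constant amortized time per operation.
class DisjointSet
{
    private readonly int[] parent, rank;

    public DisjointSet(int n)
    {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
    }

    public int Find(int x)
    {
        if (parent[x] != x)
            parent[x] = Find(parent[x]);   // path compression
        return parent[x];
    }

    public void Union(int a, int b)
    {
        int ra = Find(a), rb = Find(b);
        if (ra == rb) return;
        if (rank[ra] < rank[rb]) (ra, rb) = (rb, ra);
        parent[rb] = ra;                   // attach shorter tree under taller
        if (rank[ra] == rank[rb]) rank[ra]++;
    }
}
```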
