Fast Tetrahedralization Algorithm - computational-geometry

I have a concave volume and would like to break it into tetrahedra as quickly as possible. I am not too concerned about the quality of the output and I don't need a Delaunay tetrahedralization. (Specifically, I'm not looking for TetGen or similar projects, which generate high-quality Delaunay tetrahedralizations for engineering applications. I want fast.)
What is a fast, if not so high quality, algorithm I could use?

Related

What are some popular distance measuring techniques between images?

In machine learning, many techniques require defining a metric between data points. I want to know what some popular metrics are when the data points are images.
An obvious way of measuring the distance between two images is to sum the squares of the pixel-wise errors, but this is sensitive to simple transformations like translation. For example, shifting the whole image by a single pixel can result in a large distance.
What are some other distance measures that are more robust to translations, rotations, etc.?
Wasserstein distance (earth mover's distance) and Kullback-Leibler divergence are the two that I have come across while studying the literature on Generative Adversarial Networks (GANs).
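To make the baseline mentioned in the question concrete, here is a minimal numpy sketch (the function name is mine) of the sum-of-squared-errors distance and its sensitivity to a one-pixel shift:

    import numpy as np

    def ssd(a, b):
        # sum of squared pixel differences between two equal-size images
        return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    shifted = np.roll(img, 1, axis=1)   # translate the image by one pixel

    print(ssd(img, img))       # 0.0
    print(ssd(img, shifted))   # large, even though the content is identical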

Akl-Toussaint throw-away heuristic for convex hull in 3D

I am wondering whether there are any algorithms that use the Akl-Toussaint throw-away heuristic to compute the convex hull in 3D (not just as a simple pre-processing step, but as the algorithmic principle or building block). And if so, what would their expected time complexity be?
Also, I am interested in experimental comparisons of such algorithms with the more traditional algorithms in 3D (e.g., Clarkson-Shor).
I would appreciate it very much if you could point me to papers or web pages that shed some light on my questions. (Or answer them directly :-) )
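To make the heuristic itself concrete, here is a minimal 3D sketch of the throw-away step (using only the six axis-aligned extreme directions; the function name is mine), which discards interior points before a standard hull algorithm is run on the survivors:

    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    def akl_toussaint_filter(points):
        # the extreme points along the coordinate axes span a small polytope;
        # anything strictly inside it cannot be a vertex of the convex hull
        idx = set()
        for axis in range(3):
            idx.add(int(np.argmin(points[:, axis])))
            idx.add(int(np.argmax(points[:, axis])))
        extremes = points[sorted(idx)]
        if len(extremes) < 4:                # degenerate input: filter nothing
            return points
        tri = Delaunay(extremes)             # tetrahedralize the small polytope
        keep = tri.find_simplex(points) < 0  # keep points outside it
        keep[sorted(idx)] = True             # always keep the extremes
        return points[keep]

    rng = np.random.default_rng(0)
    pts = rng.standard_normal((100_000, 3))
    survivors = akl_toussaint_filter(pts)
    hull = ConvexHull(survivors)   # equals the hull of the full point set
    print(len(pts), "->", len(survivors), "points,", len(hull.vertices), "hull vertices")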

Surface Reconstruction with Cocone algorithms

I have a 3D point cloud that I obtained by tracing out the outline of a shape with sensors attached to my fingertips. The resulting data has non-uniform density, with large gaps between some of the points.
What are some good surface reconstruction algorithms to use on this kind of data, which is recorded by hand and therefore has varying density?
I have been trying to use Tamal Dey's Cocone, Robust Cocone, and Tight Cocone surface reconstruction algorithms to reconstruct the shape, but I am having difficulty, I believe because my data is much less uniform than the example point sets provided with the algorithms. I have read Dey's papers on each reconstruction algorithm, since they have parameters that can be adjusted, but I have been unable to find settings that make any of the Cocone algorithms work with my data.
Does anyone understand the user settings in these algorithms?
What would be the best settings for very non-uniform data points? I can provide the 3D point data of the shape upon request.

Demons algorithm for image registration (for dummies)

I was trying to make an application in Java with OpenCV that compares the difference between 2 images. After trying various approaches, I came across the demons algorithm.
As far as I can tell, it captures the difference between images as some transformation at each location, but I couldn't understand it because the references I found were too complex for me.
Even if the demons algorithm does not do what I need, I'm interested in learning it.
Can anyone explain simply what happens in the demons algorithm, and how to write simple code to apply it to 2 images?
I can give you an overview of general algorithms for deformable image registration; demons is one of them.
There are 3 components in such an algorithm: a similarity metric, a transformation model, and an optimization algorithm.
A similarity metric is used to compute pixel-based or patch-based similarity between the two images. Common choices are SSD and normalized cross-correlation for mono-modal images, while information-theoretic measures like mutual information are used for multi-modal image registration.
In deformable registration, a regular grid is generally superimposed over the image, and the grid is deformed by solving an optimization problem formulated so that the sum of the similarity metric and a smoothness penalty on the transformation is minimized. Once the grid nodes have been displaced, the final transformation at the pixel level is computed by B-spline interpolation of the grid, so that the transformation is smooth and continuous.
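As a rough illustration of that last interpolation step (the grid size and displacements below are made up), a coarse displacement grid can be upsampled to pixel resolution with cubic splines; in scipy, order=3 in zoom means cubic B-spline interpolation:

    import numpy as np
    from scipy.ndimage import zoom

    # hypothetical coarse control grid: 8x8 nodes, 2 displacement components
    rng = np.random.default_rng(0)
    grid = rng.normal(0.0, 2.0, size=(8, 8, 2))

    H, W = 256, 256
    # interpolate each displacement component to pixel resolution;
    # the result is the dense, smooth deformation field applied to the image
    dense = np.stack(
        [zoom(grid[:, :, k], (H / grid.shape[0], W / grid.shape[1]), order=3)
         for k in range(2)], axis=-1)
    print(dense.shape)  # (256, 256, 2)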
There are 2 general approaches to solving the optimization problem: some people use discrete optimization and solve it as an MRF optimization problem, while others use gradient descent; I think demons uses gradient descent.
In the case of MRF-based approaches, the unary cost is the cost of deforming each node in the grid, computed as the similarity between patches; the pairwise cost, which imposes smoothness on the grid, is generally a Potts or truncated quadratic potential ensuring that neighboring grid nodes have almost the same displacement. Once you have the unary and pairwise costs, you feed them to an MRF optimization algorithm and get the displacements at the grid level, then use B-spline interpolation to compute the pixel-level displacements. This process is repeated in a coarse-to-fine fashion over several scales, and the algorithm is also run several times at each scale (reducing the allowed displacement at each node every time).
Gradient-descent-based methods formulate an energy function from the similarity metric and the grid transformation over the image, compute the gradient of that energy, and minimize it by iterative gradient descent; however, these approaches can get stuck in a local minimum and are quite slow.
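Since demons belongs to this gradient-style family, here is a minimal single-scale sketch of a classic Thirion-style demons update, assuming 2D grayscale numpy arrays (the function and parameter names are mine):

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_step(fixed, moving, disp, sigma=1.5, eps=1e-6):
        # one demons iteration; disp has shape (2, H, W) and holds the
        # row/col displacements applied to `moving`
        H, W = fixed.shape
        rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
        warped = map_coordinates(moving, [rows + disp[0], cols + disp[1]], order=1)
        diff = warped - fixed                 # pointwise intensity mismatch
        gr, gc = np.gradient(fixed)           # gradient of the fixed image
        denom = gr ** 2 + gc ** 2 + diff ** 2 + eps
        # demons force: move along the fixed-image gradient, scaled by mismatch
        disp[0] -= diff * gr / denom
        disp[1] -= diff * gc / denom
        # Gaussian smoothing of the field acts as the smoothness regularizer
        disp[0] = gaussian_filter(disp[0], sigma)
        disp[1] = gaussian_filter(disp[1], sigma)
        return disp, warped

In practice you would iterate this step until the mismatch stops decreasing, and run it coarse-to-fine over an image pyramid, just like the grid-based methods above.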
Some popular tools are DROP and Elastix; ITK also provides registration components.
If you want to know more about algorithms for deformable image registration, I recommend taking a look at FAIR (there is a guide book). FAIR is a toolbox for Matlab, so you will have examples to help you understand the theory.
http://www.cas.mcmaster.ca/~modersit/FAIR/
Then, if you want to see a demons example specifically, here is another toolbox:
http://www.mathworks.es/matlabcentral/fileexchange/21451-multimodality-non-rigid-demon-algorithm-image-registration

Greedy algorithm for active contours - shrinking

I am studying and implementing the greedy algorithm for active contours described in the paper by Donna Williams and Mubarak Shah, "A Fast Algorithm for Active Contours and Curvature Estimation".
One of its advantages over the original implementation (by Kass et al.) is supposed to be a uniform distribution of points along the contour curve: in every iteration, each point tries to move so that its distance to the previous point is as close to the average as possible.
The contour is expected to be drawn around an object in an image and then to shrink around it until it is "attached" to the object's edges.
But the problem is that the contour won't shrink. It evolves so that the points become equally spaced along the contour, but it cannot shrink around the image object, because the distances between points would drop below the average and the algorithm would move them back.
Do you have any thoughts on this? What am I missing? Other implementations of active contours do shrink, but they have other drawbacks, and the greedy algorithm is supposed to be better and more stable.
Researchers rarely emphasize the disadvantages of their new solutions. Don't trust the paper too much if you haven't heard from other sources that the algorithm works.
I would only implement an algorithm if it is well accepted in the literature (or if I had invented it myself ;-) ).
Companies need a robust solution that works; a researcher must publish something new, which may be less usable in practice and sometimes only works well on specific test sets.
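For reference, here is a minimal numpy sketch of one greedy pass as described in the question (the paper additionally normalizes each energy term over the search neighborhood, which is omitted here; the names and default weights are mine):

    import numpy as np

    def greedy_step(pts, image_energy, alpha=1.0, beta=1.0, gamma=1.2, r=1):
        # one pass over a closed contour; pts: (N, 2) integer (row, col)
        # points, image_energy: 2D array where low values mark edges
        n = len(pts)
        seg = np.roll(pts, -1, axis=0) - pts
        avg = np.mean(np.linalg.norm(seg, axis=1))   # average point spacing
        new_pts = pts.copy()
        hi = np.array(image_energy.shape) - 1
        for i in range(n):
            prev_pt, next_pt = new_pts[i - 1], pts[(i + 1) % n]
            best, best_e = pts[i], np.inf
            for dr in range(-r, r + 1):              # search the neighborhood
                for dc in range(-r, r + 1):
                    cand = np.clip(pts[i] + (dr, dc), 0, hi)
                    e_cont = (np.linalg.norm(cand - prev_pt) - avg) ** 2
                    e_curv = np.sum((prev_pt - 2 * cand + next_pt) ** 2)
                    e_img = image_energy[tuple(cand)]
                    e = alpha * e_cont + beta * e_curv + gamma * e_img
                    if e < best_e:
                        best_e, best = e, cand
            new_pts[i] = best
        return new_pts

As the question observes, the continuity term only equalizes the spacing; any shrinking has to come from the image term pulling points toward low-energy (edge) pixels.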
