I have a 3D point cloud that I obtained by tracing the outline of a shape with sensors attached to my fingertips. The resulting data has non-uniform density, with large gaps between some of the points.
What are some good surface reconstruction algorithms for this kind of hand-recorded data with varying density?
I have been attempting to reconstruct the shape with Tamal Dey's Cocone, Robust Cocone, and Tight Cocone surface reconstruction algorithms, but I am having difficulty, because I believe my data is much less uniform than the example point sets provided with them. I have read Dey's papers on each algorithm, since they expose tunable parameters, but I have been unable to find settings that make any of the Cocone algorithms work with my data.
Does anyone understand the user settings in these algorithms?
What would be the best settings for very non-uniform data points? I can provide the 3D point data of the shape upon request.
In machine learning, many techniques require defining a metric between data points. I want to know what some popular metrics are when the data points are images.
An obvious way of measuring the distance between two images is to sum the squares of the pixelwise errors. But this is sensitive to simple transformations such as translation: even shifting the whole image by one pixel can produce a large distance.
What are some other distance measures that are more robust to translations, rotations, etc.?
Wasserstein distance (earth mover's distance) and Kullback–Leibler divergence are the two that I have come across while reading the literature on Generative Adversarial Networks (GANs).
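To make the translation sensitivity concrete, here is a minimal sketch (Python with NumPy/SciPy; the random test image is made up) comparing pixelwise SSD with a 1-D Wasserstein distance computed on the intensity values alone. Note this treats the image as a bag of intensities and throws away all spatial information; real earth-mover comparisons for images are usually computed over spatial distributions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=1, axis=1)  # same image, translated by one pixel

# Pixelwise sum of squared differences: large despite identical content.
ssd = np.sum((img - shifted) ** 2)

# Wasserstein distance between intensity distributions: ignores pixel
# positions, so a pure (wrapped) translation scores zero.
w = wasserstein_distance(img.ravel(), shifted.ravel())

print(f"SSD: {ssd:.3f}, Wasserstein on intensities: {w:.6f}")
```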
I'm looking for an algorithm for detecting collisions between non-axis-aligned objects in 2D (e.g. rotated rectangles).
How do I go about it? All of the articles I found online handle axis-aligned objects, which is not what I need.
So far I have only handled collisions between circles, by measuring the distance between their centers, but this case is much more difficult.
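For reference, here is a minimal sketch of the circle test described above (Python; the names and values are illustrative). A rotated rectangle has no single such distance threshold, which is why it needs a different kind of test:

```python
import math

def circles_collide(c1, r1, c2, r2):
    # Two circles collide when the distance between their centers
    # is at most the sum of their radii.
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    # Compare squared distances to avoid the square root.
    return dx * dx + dy * dy <= (r1 + r2) ** 2

print(circles_collide((0.0, 0.0), 1.0, (1.5, 0.0), 1.0))  # True: 1.5 <= 2.0
```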
I was trying to build an application that compares two images in Java with OpenCV. After trying various approaches, I came across the Demons algorithm.
It seems to capture the difference between the images as a transformation at each location, but I couldn't understand it, since the references I found were too complex for me.
Even if the Demons algorithm does not do what I need, I'm interested in learning it.
Can anyone explain simply what happens in the Demons algorithm, and how to write simple code to apply it to two images?
I can give you an overview of general algorithms for deformable image registration; Demons is one of them.
Such an algorithm has three components: a similarity metric, a transformation model, and an optimization algorithm.
A similarity metric measures how alike two pixels or patches are. Common choices are SSD (sum of squared differences) and normalized cross-correlation for mono-modal images, while information-theoretic measures such as mutual information are used for multi-modal registration.
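A minimal sketch of these two mono-modal metrics (Python/NumPy; the patch arrays are whatever you extract around each grid node):

```python
import numpy as np

def ssd(a, b):
    # Sum of squared differences: 0 for identical patches, grows with mismatch.
    return np.sum((a - b) ** 2)

def ncc(a, b):
    # Normalized cross-correlation: 1 for identical patches up to affine
    # intensity changes (a -> s*a + t), which SSD is not invariant to.
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))
```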
In deformable registration, a regular grid is typically superimposed over the image, and the grid is deformed by solving an optimization problem formulated so that the similarity metric plus a smoothness penalty on the transformation is minimized. Once the grid displacements are known, the final transformation at the pixel level is computed by B-spline interpolation of the grid, so that the transformation is smooth and continuous.
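Here is a rough sketch of that last step, assuming SciPy: a coarse grid of displacements is upsampled to a per-pixel field with cubic B-spline interpolation (ndimage.zoom with order=3) and then used to warp the image. Sizes and displacements are made up for illustration.

```python
import numpy as np
from scipy import ndimage

H, W, GH, GW = 128, 128, 8, 8
rng = np.random.default_rng(1)
image = rng.random((H, W))
grid_dx = rng.normal(0, 1.5, (GH, GW))   # x-displacement at each grid node
grid_dy = rng.normal(0, 1.5, (GH, GW))   # y-displacement at each grid node

# ndimage.zoom with order=3 performs cubic B-spline interpolation,
# producing a smooth, continuous pixel-level field as described above.
dx = ndimage.zoom(grid_dx, (H / GH, W / GW), order=3)
dy = ndimage.zoom(grid_dy, (H / GH, W / GW), order=3)

# Warp the image by sampling it at the displaced coordinates.
yy, xx = np.mgrid[0:H, 0:W].astype(float)
warped = ndimage.map_coordinates(image, [yy + dy, xx + dx],
                                 order=1, mode='nearest')
```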
There are two general approaches to solving the optimization problem: some people use discrete optimization and solve it as an MRF problem, while others use gradient descent. I think Demons uses gradient descent.
In MRF-based approaches, the unary cost is the cost of deforming each grid node, computed as the dissimilarity between patches. The pairwise cost, which enforces smoothness of the grid, is generally a Potts or truncated-quadratic potential ensuring that neighboring grid nodes have almost the same displacement. Once you have the unary and pairwise costs, you feed them to an MRF optimization algorithm and obtain the displacements at the grid level; then you use B-spline interpolation to compute the pixel-level displacement. This process is repeated coarse-to-fine over several scales, and the algorithm is also run several times at each scale (reducing the displacement at each node every time).
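As an illustration of the unary/pairwise structure, here is an exact solver for the special case of a 1-D chain of grid nodes (dynamic programming). Real registration grids are 2-D and need graph-cut or message-passing solvers, but the costs look the same:

```python
import numpy as np

def chain_mrf(unary, pairwise):
    """unary: (n_nodes, n_labels) patch dissimilarity per displacement label;
    pairwise: (n_labels, n_labels) smoothness cost between neighbors."""
    n, L = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + pairwise      # cost of every label transition
        back[i] = np.argmin(total, axis=0)
        cost = np.min(total, axis=0) + unary[i]
    labels = np.empty(n, dtype=int)
    labels[-1] = np.argmin(cost)
    for i in range(n - 1, 0, -1):             # backtrack the optimal labels
        labels[i - 1] = back[i, labels[i]]
    return labels

# Truncated-quadratic pairwise potential over displacement labels, as above.
d = np.arange(5)
pairwise = np.minimum((d[:, None] - d[None, :]) ** 2, 4)
```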
Gradient-descent-based methods formulate an energy function from the similarity metric and the grid transformation over the image, compute the gradient of that energy, and minimize it by iterative gradient descent. However, these approaches can get stuck in a local minimum and are quite slow.
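For a flavor of the gradient-based family, here is a sketch of one iteration of a Demons-style update using the classic Thirion force, with Gaussian smoothing of the field acting as the regularizer. This is a simplification; practical implementations iterate this inside a coarse-to-fine pyramid.

```python
import numpy as np
from scipy import ndimage

def demons_step(fixed, moving, eps=1e-8):
    # Classic demons force: u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)
    diff = moving - fixed
    gy, gx = np.gradient(fixed)               # spatial gradient of fixed image
    denom = gx**2 + gy**2 + diff**2 + eps     # stabilized denominator
    ux = diff * gx / denom                    # per-pixel displacement update
    uy = diff * gy / denom
    # Gaussian smoothing of the displacement field regularizes the result.
    return ndimage.gaussian_filter(ux, 1.0), ndimage.gaussian_filter(uy, 1.0)
```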
Some popular tools are DROP and Elastix; ITK also provides registration components.
If you want to know more about algorithms for deformable image registration, I recommend taking a look at FAIR (and its guide book). FAIR is a MATLAB toolbox, so you will have examples to help you understand the theory.
http://www.cas.mcmaster.ca/~modersit/FAIR/
If you want to see a Demons example specifically, here is another toolbox:
http://www.mathworks.es/matlabcentral/fileexchange/21451-multimodality-non-rigid-demon-algorithm-image-registration
Motivating example: I am trying to implement a parallel, land-only infection simulation over a map of the UK.
I sample points uniformly over the land area and update each point's infection status at each time step based on the previous status of its neighbouring points (an SIR model). The country is irregularly shaped, so a Cartesian-coordinate decomposition does not load-balance well. What more efficient decomposition methods should I consider as standard?
Many thanks.
An excellent article (Seal & Aluru, 2001) outlines methods including:
Orthogonal Recursive Bisection
Space Filling Curves
Octrees and Compressed Octrees
and a further paper (Aluru & Sevilgen) focuses on space-filling curves; a small partition sketch along those lines follows below.
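As a concrete flavor of the space-filling-curve approach, here is a hedged sketch (Python; function names are illustrative) that sorts 2-D points by Z-order (Morton) code and cuts the sorted list into equal chunks, so each worker gets the same number of points while preserving spatial locality:

```python
import numpy as np

def morton_key(ix, iy, bits=16):
    key = 0
    for b in range(bits):                     # interleave the bits of ix and iy
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def partition(points, n_workers, bits=16):
    # Quantize coordinates to integers, compute Morton codes, sort, and split.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    q = ((points - mins) / (maxs - mins) * ((1 << bits) - 1)).astype(np.int64)
    keys = np.array([morton_key(x, y, bits) for x, y in q])
    order = np.argsort(keys)
    return np.array_split(order, n_workers)   # index chunks, one per worker
```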
Delaunay meshes are another standard decomposition for irregular domains.
You should consider how such meshes are load balanced; here's a sample article.
I have a set of 3D points that approximate a surface. Each point, however, is subject to some error. Furthermore, the set contains many more points than are actually needed to represent the underlying surface.
What I am looking for is an algorithm to create a new (much smaller) set of points representing a simplified, smoother version of the surface (pardon me for not having a better definition than "simplified, smoother"). The underlying surface is not a mathematical one, so I'm not hoping to fit the data to some mathematical function.
Instead of dealing with it as a point cloud, I would recommend triangulating a mesh using Delaunay triangulation: http://en.wikipedia.org/wiki/Delaunay_triangulation
Then decimate the mesh. You can research decimation algorithms, but you can get pretty good quick-and-dirty results with an algorithm that simply merges adjacent triangles whose normals are similar.
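A rough sketch of that quick-and-dirty idea (Python/SciPy; it assumes the cloud is height-field-like, so a 2-D Delaunay triangulation of (x, y) yields a valid surface mesh), flagging adjacent near-coplanar triangle pairs as merge candidates:

```python
import numpy as np
from scipy.spatial import Delaunay

def flat_pairs(points, cos_thresh=0.99):
    tri = Delaunay(points[:, :2])             # triangulate in the xy-plane
    p = points[tri.simplices]                 # (n_tri, 3, 3) triangle vertices
    n = np.cross(p[:, 1] - p[:, 0], p[:, 2] - p[:, 0])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    pairs = []
    for t, nbrs in enumerate(tri.neighbors):  # -1 marks a missing neighbor
        for nb in nbrs:
            if nb > t and np.dot(n[t], n[nb]) > cos_thresh:
                pairs.append((t, nb))         # near-coplanar: merge candidates
    return pairs
```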
I think you are looking for 'Level of detail' algorithms.
A simple one to implement is to break your volume (surface) into some number of sub-volumes. From the points in each sub-volume, choose a representative point (such as the one closest to the center, the one closest to the average, or the average itself). Use these points to redraw your surface.
You can tweak the number of sub-volumes to increase/decrease detail on the fly.
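A minimal sketch of the sub-volume idea, using a uniform voxel grid as the sub-volumes and the per-cell average as the representative (Python/NumPy; the cell size is the knob you'd tweak):

```python
import numpy as np

def voxel_simplify(points, cell=0.1):
    keys = np.floor(points / cell).astype(np.int64)  # sub-volume index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    out = np.zeros((inverse.max() + 1, 3))
    counts = np.bincount(inverse).astype(float)
    for d in range(3):                               # average per sub-volume
        out[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return out
```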
I'd approach this by looking for vertices (points) that contribute little to the curvature of the surface. Find all the edges emerging from each vertex and take the dot products of pairs of their unit vectors. The points representing very shallow "hills" will subtend huge angles (near 180 degrees) and so have dot products near −1.
Those vertices would then be candidates for removal, since the vertices around them approximately form a plane.
Or something like that.
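A small sketch of that flatness test (Python/NumPy; the vertex adjacency is assumed to come from a mesh of the points):

```python
import numpy as np

def is_flat(vertex, neighbors, thresh=-0.99):
    e = neighbors - vertex
    e /= np.linalg.norm(e, axis=1, keepdims=True)  # unit edge vectors
    dots = e @ e.T                                  # all pairwise dot products
    # Some pair nearly opposite (angle near 180 degrees) => shallow vertex.
    return bool((dots < thresh).any())
```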
Google for Hugues Hoppe and his "surface reconstruction" work.
Surface reconstruction finds a mesh that fits the point cloud; however, this yields lots of triangles. You can then apply a mesh reduction technique to lower the polygon count while minimizing error. As an example, you can look at OpenMesh's decimation methods.
OpenMesh
Hugues Hoppe
There exist several different techniques for point-based surface model simplification, including:
clustering (a small sketch follows the reference below);
particle simulation;
iterative simplification.
See the survey:
M. Pauly, M. Gross, and L. P. Kobbelt. Efficient simplification of point-sampled surfaces. In Proceedings of the conference on Visualization '02, pages 163–170, Washington, DC, 2002. IEEE.
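As a hedged sketch of the first strategy, one can cluster the cloud with k-means and keep one centroid per cluster; k sets the size of the simplified set (Python/SciPy, illustrative only):

```python
import numpy as np
from scipy.cluster.vq import kmeans

def cluster_simplify(points, k=500):
    # Each returned centroid stands in for all the points of its cluster.
    centroids, _ = kmeans(points.astype(float), k, seed=0)
    return centroids
```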
Unless you parametrise your surface in some way, I'm not sure how you can decide which points carry similar information (and can thus be thrown away).
I guess you could choose a bunch of points at random to get rid of, but that doesn't sound like what you want to do.
Maybe points near each other (for some definition of 'near') can be considered to contain similar information, and so each such group could be reduced to a single representative.
Could you give some more details?
It's simpler to simplify a point cloud without the constraints of mesh triangles and indices.
Smoothing and simplification are different tasks, though. To simplify the cloud, you should first remove noise artefacts: build a profile of the kind of noise you have, including its frequency and directional characteristics, and apply a reduction matched to that profile. Good normal vectors are helpful for that.
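Good normals can be estimated directly from the cloud; here is a common sketch using local PCA over k nearest neighbors (Python/SciPy; k is a tunable assumption), where the smallest-eigenvalue direction of the local covariance approximates the surface normal:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        q = points[nbrs] - points[nbrs].mean(axis=0)
        # Smallest right singular vector = smallest-eigenvalue direction.
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```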
Here is a document describing five or six simplification methods using Delaunay, Voronoi, and k-nearest-neighbour techniques:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.9640&rep=rep1&type=pdf
A later version from 2008:
http://www.wseas.us/e-library/transactions/research/2008/30-705.pdf
Here is a recent C++ implementation:
https://github.com/tudelft3d/masbcpp/blob/master/src/simplify.cpp