I'm building a segmentation algorithm. I'm segmenting pieces of paper in a book that have been slightly crumpled. Imagine taking a piece of paper, crumpling it into a ball, and then trying to straighten it back out.
The piece of paper is actually a 3D object (it has depth: small, but still there), but I want to segment a 2D plane running through the geometric center of the 3D object. Is this a center-of-mass problem?
I have a 3D matrix of binary values -- 1 being on the piece of paper, and 0 not on the piece of paper.
What kind of algorithm can I run to find the 2D plane?
You may want a 3D least-squares plane fit. This will minimize the separation between your plane and the voxel points. See here for math and code: http://www.ilikebigbits.com/blog/2015/3/2/plane-from-points
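For a binary voxel volume like yours, the least-squares plane is essentially a PCA of the voxel coordinates: the plane passes through their centroid (the center-of-mass idea in the question) and its normal is the direction of least variance. A minimal NumPy sketch, assuming `volume` is your 3D 0/1 array (this is the same SVD construction the linked post describes):

```python
import numpy as np

def fit_plane(volume):
    """Least-squares plane through the 1-voxels of a 3D binary array.

    Returns (centroid, normal): the plane passes through the centroid
    of the voxel coordinates, and `normal` is the direction of least
    variance (the last right-singular vector of the centered points).
    """
    pts = np.argwhere(volume == 1).astype(float)  # N x 3 voxel coords
    centroid = pts.mean(axis=0)
    # SVD of the centered points: rows of vt are the principal
    # directions; the last one (least variance) is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]
    return centroid, normal
```

This minimizes the orthogonal distances from the voxels to the plane. If the paper is strongly crumpled rather than slightly, a single plane may of course fit poorly.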
Related
Is there any simple algorithm, like a Voronoi diagram, that divides a rectangular plane into triangles using a number of pre-defined points?
To be honest, I have to write a very simple fragment shader like this.
Theoretically, this Voronoi shader could be 'upgraded' by Delaunay triangulation, but I want to find a more elegant solution.
The first thing that comes to my mind is to create n random points (with a specific seed) filling a cylinder volume. The triangle vertices will be the intersections of the lines between those points with a plane going through the axis of the cylinder. The animation would simply be done by rotating the plane ...
I see it something like this:
So the neighboring points should be interconnected with each other, forming tetrahedra that fill the volume of the cylinder. So create a uniform tetrahedral grid and add random noise to the point positions (with a specific seed).
This whole task is very similar to rendering a cross-section of a 4D mesh; see:
4D rendering techniques
As the 4D simplex is also a tetrahedron, the only difference is that here you are in 3D, cutting the tetrahedra with a plane.
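For the cutting step, here is a minimal sketch (Python/NumPy; a GLSL version would do the same per tetrahedron) of a tetrahedron-plane cross-section. The names `tetra_plane_section`, `verts`, `p0`, `n` are all illustrative, not from the shadertoy example:

```python
import numpy as np
from itertools import combinations

def tetra_plane_section(verts, p0, n):
    """Cross-section of a tetrahedron with a plane.

    verts: 4x3 array of tetrahedron vertices; the plane passes through
    p0 with normal n. Returns the (unordered) intersection points of
    the plane with the tetrahedron's edges: 3 points -> triangle,
    4 points -> quad (order them before rendering).
    """
    d = (verts - p0) @ n                       # signed vertex distances
    section = []
    for i, j in combinations(range(4), 2):     # the 6 edges
        if d[i] * d[j] < 0:                    # edge crosses the plane
            t = d[i] / (d[i] - d[j])
            section.append(verts[i] + t * (verts[j] - verts[i]))
    return np.array(section)
```

(Vertices lying exactly on the plane are ignored here; a robust version would handle that case explicitly.)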
You can reverse-engineer this example, shadertoy.com/view/MdfBzl, like I did. Thanks to mattz.
I want to plot a unit sphere around a 3D object (.obj format file) with MATLAB, in order to guarantee a normalization of all the objects in my DB and thereby achieve scale invariance. I found in the state of the art that there are some algorithms implemented in C++, like Gärtner's algorithm, which is the fastest (https://www.inf.ethz.ch/personal/gaertner/miniball.html), Fischer's algorithm, which is more efficient in high dimensions (https://github.com/hbf/miniball), Welzl's algorithm, or Megiddo's algorithm.
My question is: does the sphere function in MATLAB do the same thing, so that all we have to do is change the center of the sphere, or is there a specific MATLAB implementation of one of the algorithms above?
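As far as I know, MATLAB's sphere only generates the coordinates of a unit sphere for plotting; it does not compute a minimal enclosing sphere. For illustration, here is a small sketch (in Python/NumPy rather than MATLAB) of Ritter's approximate bounding sphere: not the exact miniball of the Gärtner/Fischer codes linked above, but enough to show the normalization step the question is after:

```python
import numpy as np

def ritter_bounding_sphere(pts):
    """Ritter's approximate bounding sphere (not the exact miniball,
    but typically within a few percent of it). pts: N x 3 array."""
    # Pick a point, find the farthest point from it, then the farthest
    # point from that one: a rough diameter to start from.
    x = pts[0]
    y = pts[np.argmax(np.linalg.norm(pts - x, axis=1))]
    z = pts[np.argmax(np.linalg.norm(pts - y, axis=1))]
    center = (y + z) / 2
    radius = np.linalg.norm(z - center)
    # Grow the sphere to enclose any point still outside it.
    for p in pts:
        d = np.linalg.norm(p - center)
        if d > radius:
            radius = (radius + d) / 2
            center = center + (1 - radius / d) * (p - center)
    return center, radius

# Scale invariance: center the vertices and divide by the radius, e.g.
#   verts = (verts - center) / radius
```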
My problem involves matching a set of 2d points to a set of 3d points, with known correspondences between the two. Basically I have points on an image, and I need the optimal translation and rotation to fit the points to a known 3d point cloud. The Kabsch algorithm is originally meant for finding the best fit of one set of 3d points to another point cloud, and there are implementations out there for 2d to 2d, but not something I can use. I do know it's possible, but just don't know how to go about it. I searched for code out there and came up empty. I'm programming in matlab at the moment, but any language would do.
Thank you.
Edit: The goal is getting a rotation and translation of the 3d point cloud to best match the 2d points when it is projected onto an image plane.
I should also mention that the 3d to 2d projection is done using a weak perspective.
So basically, you have a "plane" or a "line" of points, as if the third dimension were 0. You could treat them like this and use the typical Kabsch algorithm of squared-distance minimization, couldn't you?
EDIT: maybe it's nonsense, but what about projecting the 3d body to 2d coordinates and doing a 2d comparison? It is computationally expensive, since it involves exploring all the angles of the 3d object plus the projection, but it's easier to lose one dimension by applying a projection than to add a new dimension to a 2d point.
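For reference, here is a minimal sketch of the plain Kabsch step this answer refers to, assuming the 2d points have been padded with a zero z-column as suggested. Note it recovers only rotation and translation; the weak-perspective scale mentioned in the question would have to be estimated separately:

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R and translation t minimizing ||R p + t - q||^2
    over corresponding rows of P and Q (both N x 3 arrays).

    Returns (R, t) such that Q ~= P @ R.T + t.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - cP @ R.T
    return R, t
```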
I am learning camera matrix stuff. I already know that I can get the homography of the camera (a 3*3 matrix) by using four points lying in a plane in object space. I want to know whether we can get the homography with four points that are not in a plane. If yes, how can I get the matrix? What formulas should I look at?
I have also confused homography with another concept: I only need to know three points if I want to convert points from one coordinate system to another. So why do we need four points to compute a homography?
A homography maps:
1. points on one plane to points on another plane;
2. projections of points in 3D (not necessarily lying on the same plane) under a pure camera rotation or zoom.
The latter can easily be verified if you look at the rays that connect the points while the sensor plane rotates: green shows the two sensor positions and black is the 3D object.
Since a homography is between projections and not between objects in 3D, you don't care what these projections represent. But this can be confusing, I agree. For example, you can point your camera at a 3D scene (that is not flat!), then rotate your camera, and the two resulting pictures of the scene will be related by a homography. This, by the way, is the foundation of image panoramas.
The three point correspondences you mentioned may be related to a transformation called affine (which arises at large zooms, when perspective effects disappear) or to finding a rigid rotation and translation in 3D space. Both require 3 point correspondences, but the former needs only 2D points while the latter needs 3D points. The rigid case has 6 DOF (3 for rotation and 3 for translation); the affine case also has 6 DOF, and each 2D correspondence provides 2 constraints, hence 6/2 = 3 correspondences. A homography has 8 DOF, so there should be 8/2 = 4 correspondences.
Below is a little diagram that explains the difference between an affine and a homography transformation when the original square tilts forward. In the affine case the perspective effect is negligible, that is, the far side has the same length as the near one. In the case of a homography, the far side is shorter.
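To make the 8-DOF counting concrete, here is a minimal sketch of the standard DLT (direct linear transform) estimate of a homography from four or more 2D correspondences; each correspondence contributes two rows to a homogeneous linear system:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography H mapping src -> dst.

    src, dst: N x 2 arrays, N >= 4. Each correspondence (x, y) -> (u, v)
    contributes two equations to A h = 0; h is the null vector of A,
    found as the last right-singular vector.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the free overall scale
```

(For numerical stability with real data you would normalize the point coordinates first, as in Hartley's normalized DLT; this sketch skips that.)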
If you only have 4 points - and they're not on the same plane - then computing a homography will not work.
If you have loads of points, and 4 of them do lie on a plane but some don't, there are filters you can use to try to remove the ones not lying on the plane. The filters implemented by OpenCV are called RANSAC and LMedS.
Also, as Hammer says in a comment under your question, the 4th point is there to pin down the perspective.
A homography is a 3x3 matrix with 8 independent unknowns, which means it requires 8 equations to solve; each point correspondence provides 2 of them. So, in order to calculate a homography we need at least 4 points.
In a homography we assume that Z = 0 in the world scene, so the projected image is treated as 2D. In a very well-known paper, ORB-SLAM, the authors formulated a model-selection approach that depends on the motion parallax in the scene.
A homography is the relation between two planes, and the number of degrees of freedom of a homography transform is 8; hence you need a minimum of 4 corresponding points.
4 points give you 4 pairs of (x, y), i.e. 8 equations, so you can solve for the 8 unknowns. A homography is a homogeneous transform, defined only up to scale, which is why the (3,3) value of the homography matrix is conventionally normalized to 1.
So, for your first question, whether you can calculate a homography with 3 points on the plane and a 4th not on the plane: it's not possible. You would need the projection of that point onto the plane, and then you could calculate the homography.
For your 2nd question, about how to calculate the homography matrix, you can look at the implementation of findHomography() in OpenCV.
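A short usage sketch of findHomography() through the Python bindings, with made-up point data standing in for your own correspondences:

```python
import cv2
import numpy as np

# Hypothetical corresponding points (N x 2, N >= 4), e.g. from feature
# matching; replace with your own data.
src = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])
dst = np.float32([[10, 12], [95, 18], [100, 105], [5, 98]])

# With exactly 4 points this is an exact fit (method 0 = least squares).
H, mask = cv2.findHomography(src, dst, 0)
print(H)   # 3x3 matrix, normalized so H[2, 2] == 1

# With more, possibly noisy correspondences, let RANSAC or LMedS
# reject the outliers, as mentioned in the answers above:
# H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```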
I have a 3D Cartesian cube. For each point in this cube there is a corresponding density value. When the density changes suddenly it means that there is a cavity. Now to find the cavity I calculate the gradient at each point in the cube. This gives me a point cloud on the surface of the cavity. I would now like to mesh the surface of the cavity given the point cloud.
Unfortunately I don't have any experience with surface reconstruction and was wondering if someone can recommend a suitable algorithm which will produce a closed surface of the cavity?
The cube is quite big, so the point cloud of a cavity's surface can easily be 500,000 points or more. I have read this post: robust algorithm for surface reconstruction from 3D point cloud?, which I found useful. However, the problem I am facing seems simpler, given that:
The coordinates of the points are always integer
The point distribution is even
The distance from one point to its closest neighbour is either 1, sqrt(2) or sqrt(3)
You probably want the marching cubes algorithm.
The Marching Cubes algorithm will do exactly what you want. For a working implementation (using Three.js for rendering the graphics), check out:
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, I think the best write-up is this page:
http://paulbourke.net/geometry/polygonise/
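If you happen to work in Python, here is a minimal sketch using scikit-image's implementation. Since you already have the density field, you can run marching cubes on it directly, at an assumed iso-level taken from your gradient analysis, instead of going through the extracted point cloud:

```python
import numpy as np
from skimage import measure

# density: your 3D array of density values; a random volume stands in
# here as a placeholder. iso is the (assumed) density level separating
# cavity from material.
density = np.random.rand(64, 64, 64)
iso = 0.5

# verts: V x 3 vertex positions, faces: F x 3 indices into verts --
# together a triangle mesh of the iso-surface. The surface is closed
# as long as the cavity does not touch the volume boundary.
verts, faces, normals, values = measure.marching_cubes(density, level=iso)
```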