Tessellation in 3D - algorithm

I have a set of points in 3D space.
The image below is an example:
I would like to turn these points into a surface. I only know the X, Y and Z values of the points.
For example, check out the image below, which shows a mesh of a human face generated from points in 3D space.
I have googled a lot, but all I found were images and high-level explanations; nobody explained it from a practical angle with a concrete example.
Is there a good algorithm that would help me solve this problem?
Thanks!

You want to do a Delaunay triangulation. See an example application here: http://www.geometrylab.de/VoroGlide/.
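If your points form a roughly single-valued surface over a plane (as a frontal face scan does), one practical route is to triangulate the projected 2D coordinates and lift the triangles back to 3D. A minimal sketch using SciPy, assuming a hypothetical points.xyz file with one "x y z" row per point:

```python
# A minimal sketch, assuming the points form a roughly single-valued surface
# over the XY plane (e.g. a frontal face scan). For general point clouds you
# would need a true surface-reconstruction method (Poisson, ball pivoting, ...).
import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt("points.xyz")      # hypothetical file: one "x y z" row per point
tri = Delaunay(points[:, :2])          # 2D Delaunay triangulation on (x, y) only

# Each row of tri.simplices indexes three points; together with the original
# 3D coordinates this gives a triangle mesh of the surface.
vertices = points                      # (N, 3) vertex positions
faces = tri.simplices                  # (M, 3) vertex indices per triangle
print(f"{len(vertices)} vertices, {len(faces)} triangles")
```

From there you can feed vertices and faces to any mesh viewer or exporter. For point clouds that are not height-field-like, look at full surface-reconstruction methods instead.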


Is image stitching with a fundamental matrix, instead of a homography, possible?

I would like to ask a question I already asked on the OpenCV board but did not get an answer to: http://answers.opencv.org/question/189206/questions-about-the-fundamental-matrix-and-homographies/.
After learning about the fundamental matrix I have the following question, which I could not answer by googling. The fundamental matrix is a more general case of the homography, as it is independent of the scene's structure. So I was wondering if it could be used for image stitching instead of a homography. But all the papers I found only use homographies, so I reread the material about the properties of the fundamental matrix and now I am wondering:
Is it not possible to use the fundamental matrix for stitching because of its rank deficiency and the fact that it only relates points in Image 1 to lines (epipolar lines) in Image 2?
Another question I have regarding homographies: all the papers I read about image stitching use homographies for rotational panoramas. What if I want to create a panorama based only on translation between images? Can I use a homography as well? The answers provided by a Google search vary quite a lot.
Kind regards and thanks for your help!
Conundraah
About using the fundamental matrix for stitching:
It really depends on how you want to stitch the images together.
The problem is that even if you estimate the fundamental matrix, the actual warping when you stitch the images together still needs a homography to transform one image into the other's frame. So there is little point in using the fundamental matrix, unless you work out how to handle the varying depths within the same image (the fundamental matrix only relates a point to an epipolar line, not to a unique point).
In the case of panorama images, the assumption is that the scene is far enough away to be treated as planar, so the translation between the camera centers is negligible compared to the scene depth. If that is not the case, the translation has to be taken into account.
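For reference, the homography-based pipeline this answer assumes looks roughly like the sketch below (OpenCV; the image paths and the output canvas width are placeholders, not values from the question):

```python
# A minimal homography-stitching sketch with OpenCV. The image paths and the
# output canvas width are placeholders, not values from the question.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")
img2 = cv2.imread("right.jpg")
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Detect and match local features.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(gray1, None)
kp2, des2 = orb.detectAndCompute(gray2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# The warp itself is a homography (estimated robustly with RANSAC);
# the fundamental matrix never enters this step.
H, inliers = cv2.findHomography(pts2, pts1, cv2.RANSAC, ransacReprojThreshold=3.0)

# Warp the second image into the first image's frame and overlay the first.
canvas = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
canvas[:img1.shape[0], :img1.shape[1]] = img1
cv2.imwrite("panorama.jpg", canvas)
```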

PMVS definition of "n-adjacent"

I am currently reading over Yasutaka Furukawa et al.'s Paper "Accurate, Dense, and Robust Multi-View Stereopsis" (PDF available here), where they describe an MVS-algorithm for reconstructing a 3D point-cloud from images.
I do understand the concepts and the main steps, but there is one detail that I am struggling with. This may be because I am not a native English speaker, so maybe a small hint would be enough.
On page 4 of the linked source, in chapter 3.2 "Expansion", there is the definition of "n-adjacent" patches:
|(c(p)−c(p'))·n(p)|+|(c(p)−c(p'))·n(p')| < 2ρ_2
My question is about ρ_2, which is described as follows:
[...] ρ_2 is determined automatically as the distance at the depth of the
midpoint of c(p) and c(p') corresponding to an image displacement of β1 pixels
in R(p).
I do not understand what "distance" means in this context, and I do not understand the stated correspondence to the image displacement.
I know that this is a very specific question, but since this paper is fairly popular I hoped that there is somebody who can help me.
Alright, I think I get it now.
It just means that ρ_2 is the distance you have to move within a plane located at the same depth from the camera as the midpoint of c(p) and c(p'), such that this movement produces a displacement of β1 pixels in the reference image R(p).
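In other words, under a pinhole model a displacement of β1 pixels at depth z corresponds to roughly z * β1 / f in the scene, with f the focal length in pixels. A small sketch of that reading (my interpretation, not code from the paper; all numbers are made up):

```python
# A sketch of this reading of rho_2 (my interpretation, not code from the
# paper). Under a pinhole camera with focal length f in pixels, a displacement
# of beta1 pixels in the image corresponds to a distance of roughly
# depth * beta1 / f in a fronto-parallel plane at that depth.
import numpy as np

def rho2(camera_center, c_p, c_p_prime, focal_length_px, beta1):
    midpoint = 0.5 * (np.asarray(c_p) + np.asarray(c_p_prime))
    depth = np.linalg.norm(midpoint - np.asarray(camera_center))
    return depth * beta1 / focal_length_px

# Example with made-up numbers: patch centers about 1.5 units from the camera,
# f = 1000 px, beta1 = 2 px of allowed image displacement.
print(rho2([0, 0, 0], [0.10, 0.00, 1.5], [0.12, 0.01, 1.5],
           focal_length_px=1000.0, beta1=2.0))
```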

Is it possible to measure the depth of an image (JPEG/PNG)?

I am wondering whether there is a way to get the depth of an image. Surely some portions could be extruded so that we get a 3D version of a 2D image. Are there any sources that would help with this?
FYI: I would like to get a point cloud from a 2D image.
Thank you in advance.
Full reconstruction from a single 2D image is not possible. As mentioned by others, there is plenty of literature to refer to. Multiple View Geometry by Hartley and Zisserman is a good start. An example tutorial to start with: Reconstruction. You can also refer to the computer vision toolbox in MATLAB or to OpenCV.
It is possible to draw some 3D information from a 2D image.
There are methods to extract depth information from the blur of a pixel: when a picture is captured, you focus on one plane and the rest of the scene blurs, so the amount of blur carries depth information.
See the Make3D project to start with.
Thank you.
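To get a feel for the blur cue mentioned above, a very crude per-pixel sharpness map can be computed from a local Laplacian response. This is only an illustration of the idea, not a real depth-from-defocus or Make3D-style method; the file name is a placeholder:

```python
# A very rough sketch of the blur cue mentioned above: local Laplacian energy
# as a per-pixel sharpness map (sharper regions tend to be in focus). This is
# only an illustration, not a real depth-from-defocus implementation.
import cv2
import numpy as np

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
lap = cv2.Laplacian(img.astype(np.float64), cv2.CV_64F, ksize=3)

# Smooth the squared response to get a local "sharpness" estimate per pixel.
sharpness = cv2.GaussianBlur(lap ** 2, (0, 0), sigmaX=5)

# Normalise to [0, 255] for visualisation; brighter = more in focus.
vis = cv2.normalize(sharpness, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("sharpness_map.png", vis)
```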

Image Warp Filter - Algorithm and Rasterization

I'd like to implement a filter that allows resampling of an image by moving a number of control points that mark edges and tangent directions. The goal is to be able to freely transform an image as in Photoshop when you use "Free Transform" and choose the warp mode "Custom". The image is fitted into some kind of spline patch (if that is a valid name) that can be manipulated.
I understand how simple splines (paths) work, but how do you connect them to form a patch?
And how can you sample such a patch to render the morphed image? For each pixel in the target I'd need to know which pixel in the source image corresponds to it. I don't even know where to start searching...
Any helpful info (keywords, links, papers, reference implementations) is greatly appreciated!
This document will give you a good insight into warping: http://www.gson.org/thesis/warping-thesis.pdf
It also covers filtering out high frequencies, which makes the implementation a lot more complicated but gives a better result.
An easy way to accomplish what you want would be to loop through every pixel in your final image, plug the coordinates into your splines and retrieve the corresponding pixel in your original image. This pixel might have coordinates 0.4/1.2, so you could bilinearly interpolate between 0/1, 1/1, 0/2 and 1/2.
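A minimal sketch of that backward-mapping loop, assuming a hypothetical warp(x, y) function built from your spline patch that maps a target pixel to a (possibly fractional) source position:

```python
# A minimal sketch of the backward-mapping idea described above, assuming a
# hypothetical warp(x, y) -> (src_x, src_y) function built from your splines.
import numpy as np

def bilinear_sample(src, x, y):
    """Bilinearly interpolate src (an H x W array) at the fractional position (x, y)."""
    h, w = src.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * src[y0, x0] + fx * src[y0, x1]
    bottom = (1 - fx) * src[y1, x0] + fx * src[y1, x1]
    return (1 - fy) * top + fy * bottom

def render_warp(src, warp, out_shape):
    """For every target pixel, ask the warp where it comes from in the source."""
    out = np.zeros(out_shape, dtype=src.dtype)
    for ty in range(out_shape[0]):
        for tx in range(out_shape[1]):
            sx, sy = warp(tx, ty)            # spline-patch mapping (assumed given)
            if 0 <= sx < src.shape[1] and 0 <= sy < src.shape[0]:
                out[ty, tx] = bilinear_sample(src, sx, sy)
    return out
```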
As for splines: there are many resources and solutions online for the 1D case. As for 2D it gets a bit trickier to find helpful resources.
A simple example for the 1D case: http://www-users.cselabs.umn.edu/classes/Spring-2009/csci2031/quad_spline.pdf
Here's a great guide for the 2D case: http://en.wikipedia.org/wiki/Bicubic_interpolation
Based on this you could derive your own spline scheme for the 2D case: define a bivariate polynomial (in x and y) and set up your constraints to solve for its coefficients.
Just keep in mind that the borders of neighbouring spline patches have to be consistent (both in value and derivative) to avoid ugly jumps.
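As a rough illustration of that idea, the sketch below fits a bicubic bivariate polynomial to the displacements at a handful of made-up control points via least squares; in practice you would do this per patch and add the value/derivative constraints along the borders:

```python
# A minimal sketch of the "bivariate polynomial + constraints" idea above:
# fit f(x, y) = sum a_ij x^i y^j to the known displacements at the control
# points via least squares. The control-point data here is made up.
import numpy as np

def design_matrix(x, y, degree=3):
    # One column per monomial x^i * y^j with i, j in 0..degree.
    return np.column_stack([x**i * y**j
                            for i in range(degree + 1)
                            for j in range(degree + 1)])

# Hypothetical control points (x, y) and their target x-displacements.
ctrl_xy = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.],
                    [0.5, 0.5], [0.25, 0.75], [0.75, 0.25], [0.5, 0.0]])
dx = np.array([0.0, 0.1, -0.05, 0.2, 0.05, 0.0, 0.1, 0.02])

A = design_matrix(ctrl_xy[:, 0], ctrl_xy[:, 1])
coeffs, *_ = np.linalg.lstsq(A, dx, rcond=None)

# Evaluate the fitted displacement anywhere inside the patch:
print(design_matrix(np.array([0.3]), np.array([0.6])) @ coeffs)
```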
Good luck!

Dense pixelwise reverse projection

I saw a question on reverse projecting four 2D points to derive the corners of a rectangle in 3D space. I have a somewhat more general version of the same problem:
Given either a focal length (which can be converted to arcseconds per pixel) or the intrinsic camera matrix (a 3x3 matrix that defines the properties of the pinhole camera model being used - it's directly related to focal length), compute the camera ray that goes through each pixel.
I'd like to take a series of frames, derive the candidate light rays from each frame, and use some sort of iterative solving approach to derive the camera pose of each frame (given a sufficiently large sample, of course)... All of that is really just a massively parallel implementation of a generalized Hough algorithm... it's getting the candidate rays in the first place that I'm having trouble with...
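For what it's worth, the per-pixel ray computation itself is just a back-projection through the inverse intrinsics: the direction of the ray through pixel (u, v) in camera coordinates is K^-1 * (u, v, 1)^T. A short sketch with made-up intrinsics:

```python
# A short sketch of the per-pixel ray computation, assuming a standard 3x3
# pinhole intrinsic matrix K (the values below are made up for illustration).
import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # fx,  0, cx
              [  0.0, 800.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

def pixel_rays(width, height):
    # Homogeneous pixel coordinates (u, v, 1) for every pixel.
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project through K^-1 to get ray directions in camera coordinates.
    rays = (K_inv @ pix).T
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

rays = pixel_rays(640, 480)              # one unit direction per pixel
print(rays.shape)                        # (307200, 3)
```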
A friend of mine found the source code from a university for the camera matching in PhotoSynth. I'd Google around for it, if I were you.
That's a good suggestion... and I will definitely look into it (PhotoSynth kind of re-sparked my interest in this subject, but I've been working on it for months for RoboChamps). However, that's a sparse implementation: it looks for "good" features (points in the image that should be easily identifiable in other views of the same scene). While I certainly plan to score each match based on how good the matched feature is, I want the full dense algorithm to derive every pixel... or should I say voxel, lol?
After a little poking around, isn't it the extrinsic matrix that tells you where the camera actually is in 3-space?
I worked at a company that did a lot of this, but I always used the tools that the algorithm guys wrote. :)
