I have two separate point clouds (type: sensor_msgs/PointCloud2) from two different sensors, a 3D stereo camera and a 2D LiDAR. How can I fuse these two point clouds, given that the stereo point cloud is 3D with a fixed length while the 2D LiDAR point cloud has a variable length?
If someone has worked on this, please help; it will be highly appreciated.
Thanks
I studied this in my research.
First, you have to calibrate the two sensors to find their extrinsic transformation. There are a few open-source packages you can play with, which I have listed below.
Second, fuse the data. The simple way is to apply the calibration transform and let tf handle it (see the sketch after the package list below). The complicated way is to deploy pipelines such as depth-image-to-LiDAR alignment and depth-map variance estimation and fusion. You can take the easy route, for example landmark-based EKF estimation, or you can follow Ji Zhang's (CMU) visual-LiDAR-inertial fusion work for direct alignment of 3D features to LiDAR. The choice is yours.
(1)
http://wiki.ros.org/velo2cam_calibration
Guindel, C., Beltrán, J., Martín, D. and García, F. (2017). Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups. IEEE International Conference on Intelligent Transportation Systems (ITSC), 674–679.
Pros: pretty accurate and an easy-to-use package. Cons: you have to manufacture a rigid calibration board with cut-outs.
(2) https://github.com/ankitdhall/lidar_camera_calibration
LiDAR-Camera Calibration using 3D-3D Point correspondences, arXiv 2017
Pros: easy to use, easy to make the hardware. Cons: may not be very accurate.
There were a couple of others I listed in my thesis; I'll go back, check, and update here if I remember.
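As a rough illustration of the "apply the calibration transform via tf and concatenate" approach, here is a minimal ROS (Python) sketch. The topic and frame names are placeholders, and it assumes the extrinsic calibration from one of the packages above is already being published on tf:

```python
#!/usr/bin/env python
# Hypothetical sketch: re-express the LiDAR cloud in the stereo camera frame via tf2
# and publish both clouds merged into a single PointCloud2. Topic/frame names are
# placeholders for your own setup.
import rospy
import tf2_ros
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

class CloudFuser(object):
    def __init__(self):
        self.tf_buffer = tf2_ros.Buffer()
        self.listener = tf2_ros.TransformListener(self.tf_buffer)
        self.stereo_cloud = None
        rospy.Subscriber("/stereo/points2", PointCloud2, self.stereo_cb)
        rospy.Subscriber("/lidar/points2", PointCloud2, self.lidar_cb)
        self.pub = rospy.Publisher("/fused_points", PointCloud2, queue_size=1)

    def stereo_cb(self, msg):
        self.stereo_cloud = msg  # fixed-length cloud from the stereo camera

    def lidar_cb(self, lidar_msg):
        if self.stereo_cloud is None:
            return
        # Look up the extrinsic calibration on tf and transform the (variable-length)
        # LiDAR cloud into the stereo camera frame.
        transform = self.tf_buffer.lookup_transform(
            self.stereo_cloud.header.frame_id, lidar_msg.header.frame_id,
            rospy.Time(0), rospy.Duration(0.1))
        lidar_in_cam = do_transform_cloud(lidar_msg, transform)

        # Concatenate XYZ points from both clouds; point counts may differ per message.
        points = list(point_cloud2.read_points(
            self.stereo_cloud, field_names=("x", "y", "z"), skip_nans=True))
        points += list(point_cloud2.read_points(
            lidar_in_cam, field_names=("x", "y", "z"), skip_nans=True))
        self.pub.publish(point_cloud2.create_cloud_xyz32(self.stereo_cloud.header, points))

if __name__ == "__main__":
    rospy.init_node("cloud_fuser")
    CloudFuser()
    rospy.spin()
```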
I'm going to match sketch faces (drawing photos) to color photos. For my research I want to find out what the challenges are in matching sketch drawings to color faces. So far I have found:
resolution (pixel) differences
texture differences
distance differences
and color differences (not much effect)
I want to know (in technical terms) what the other challenges are, and what OpenCV and JavaCV methods and algorithms are available to overcome those challenges.
Here are some examples of the sketches and the photos that are known to match them:
This problem is called multi-modal face recognition. There has been a lot of interest in comparing a high-quality mugshot (modality 1) to low-quality surveillance images (modality 2); other pairings are frontal images to profiles, or photos to sketches, which is what the OP is interested in. Partial Least Squares (PLS) and Tied Factor Analysis (TFA) have been used for this purpose.
The key technical step is computing two linear projections, one from each modality, into a common space where two points being close means the individual is the same (a toy sketch of this idea follows the reference list below). Here are some papers on this approach:
Abhishek Sharma, David W. Jacobs. Bypassing Synthesis: PLS for Face Recognition with Pose, Low-Resolution and Sketch. CVPR 2011.
S. J. D. Prince, J. H. Elder, J. Warrell, F. M. Felisberti. Tied Factor Analysis for Face Recognition across Large Pose Differences. IEEE Trans. Pattern Analysis and Machine Intelligence, 30(6), 970–984, 2008. Elder is a specialist in this area and has a variety of papers on the topic.
B. Klare, Z. Li, A. K. Jain. Matching Forensic Sketches to Mugshot Photos. IEEE Trans. Pattern Analysis and Machine Intelligence, 29 Sept. 2010.
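To make the "common latent space" idea concrete, here is a toy sketch that uses scikit-learn's PLSCanonical as a stand-in for the PLS formulations in the papers above. The feature files, dimensions, and number of components are placeholders, and the feature extraction step is assumed to have happened elsewhere:

```python
# Toy sketch: project paired sketch/photo features into a shared latent space with
# PLS, then match by cosine similarity. Data files and sizes are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

# X_sketch[i] and X_photo[i] are feature vectors of the SAME person (paired training data).
X_sketch = np.load("sketch_features.npy")   # shape (n_pairs, d_sketch), placeholder
X_photo = np.load("photo_features.npy")     # shape (n_pairs, d_photo), placeholder

pls = PLSCanonical(n_components=32)
pls.fit(X_sketch, X_photo)

# Project both modalities into the shared space and L2-normalise.
sketch_latent, photo_latent = pls.transform(X_sketch, X_photo)
sketch_latent /= np.linalg.norm(sketch_latent, axis=1, keepdims=True)
photo_latent /= np.linalg.norm(photo_latent, axis=1, keepdims=True)

# Match one probe sketch against all gallery photos by nearest neighbour.
probe = 0
best_match = int(np.argmax(photo_latent @ sketch_latent[probe]))
print("probe sketch", probe, "matched to gallery photo", best_match)
```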
As you can see, this is an active research area. As for using OpenCV to overcome the difficulties, let me give you an analogy: you need to build a house (match sketches to photos) and you're asking how having a Stanley hammer (OpenCV) will help. Sure, it will probably help, but you'll also need a lot of other resources: wood, time/money, pipes, cable, etc.
I think that James Elder's old work on the completeness of the edge map (using reconstruction by solving the Laplace equation) is quite relevant here. See the results at the end of this paper: http://elderlab.yorku.ca/~elder/publications/journals/ElderIJCV99.pdf
You could give Eigenfaces a try; though I have never tested them with sketches, I think they could at least be a good starting point for your research.
See the Wikipedia article: http://en.wikipedia.org/wiki/Eigenface and the OpenCV tutorial: http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html (which covers more than just Eigenfaces!)
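If you want to try this quickly, here is a minimal sketch using OpenCV's face module (which requires the contrib build, e.g. opencv-contrib-python); the file paths, image size, and number of components are placeholders for your own data:

```python
# Minimal Eigenfaces sketch: train on photos, predict on a sketch. Expect the
# cross-modality gap to hurt accuracy, which is exactly what PLS/TFA-style
# methods try to close.
import cv2
import numpy as np

def load_gray(path, size=(100, 100)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.resize(img, size)

# Gallery: one or more photos per person, labelled by integer identity (placeholder paths).
photos = [load_gray("photos/person0_a.png"),
          load_gray("photos/person0_b.png"),
          load_gray("photos/person1_a.png")]
labels = np.array([0, 0, 1])

model = cv2.face.EigenFaceRecognizer_create(20)  # keep 20 eigenfaces
model.train(photos, labels)

# Probe: a sketch of an unknown person.
sketch = load_gray("sketches/query.png")
label, distance = model.predict(sketch)
print("predicted identity:", label, "distance:", distance)
```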
OpenCV can be used for the feature extraction and machine learning required for this task. I suggest you start with the papers in the answers above, try some basic features, and prototype a classifier with OpenCV.
You might also want to detect and match feature points on the faces. If you use this approach, you will have to build the feature point detectors on your own (training the Viola-Jones detector in OpenCV with your own data is one option).
I am working on a research project to construct the 360 degree 3D view of a room using a single rotating kinect placed in the center.
My current approach is to capture a 3D point cloud from the Kinect after every 2 to 5 degrees of rotation and register successive clouds using the Iterative Closest Point (ICP) algorithm.
Note that we need to build the view in real time as the Kinect rotates, so we need to capture a point cloud after each small rotation of the Kinect.
However, the ICP algorithm is computationally expensive.
I am looking for a better solution to the above problem. Any help/ pointers in this direction will be appreciated.
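For concreteness, here is a minimal sketch (using Open3D as a stand-in, not my actual code) of one registration step where the known rotation angle of the Kinect seeds ICP, so that only a few refinement iterations are needed per frame:

```python
# Hedged sketch: seed ICP with the commanded rotation (assumed about the Z axis)
# so it only refines a small residual. Open3D is an assumption here; any ICP
# implementation with an initial-guess parameter would work the same way.
import numpy as np
import open3d as o3d

def register_step(prev_cloud, new_cloud, angle_deg):
    # Initial guess from the known rotation of the Kinect (e.g. 2-5 degrees).
    theta = np.deg2rad(angle_deg)
    init = np.eye(4)
    init[:3, :3] = o3d.geometry.get_rotation_matrix_from_axis_angle(
        np.array([0.0, 0.0, theta]))

    # Point-to-plane ICP needs normals; a few iterations suffice when the guess is close.
    for cloud in (prev_cloud, new_cloud):
        cloud.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        new_cloud, prev_cloud, max_correspondence_distance=0.05, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
        criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=10))
    return result.transformation  # pose of new_cloud relative to prev_cloud
```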
I'm not sure how familiar you are with the intersection of machine learning and computer vision, but recently a much harder problem has been solved with advances in machine learning: generating 3D models of large areas from an unstructured collection of images. For example, "Building Rome in a Day": see this video, as it may just blow your mind.
With your mind suitably blown, you may want to check out this video on the machine learning techniques that allowed the computation to be carried out efficiently.
You may want to follow up with Noah Snavely's PhD thesis and check out the algorithms he used, along with the other work that was incorporated to build this system. The problem of reconstructing a single room from one rotating viewpoint should be a much easier inference problem. Then again, you may just want to check out the implementation in their software :)
Is it possible to make a 3D representation of an object by capturing many different angles using a webcam? If so, how is it done, and how is the image processing performed?
My plan is to make a 3D representation of a person using a webcam; from the 3D representation I will then be able to determine the person's vital statistics.
As Bart said (but did not post as an actual answer) this is entirely possible.
The research topic you are interested in is often called multi view stereo or something similar.
The basic idea revolves around using point correspondences between two (or more) images and then finding the best matching camera positions. When the positions are found, you can use stereo algorithms to back-project the image points into a 3D coordinate system and form a point cloud.
You can then process that point cloud further to get the measurements you are looking for.
If you are completely new to the subject you have some fascinating reading to look forward to!
Bart proposed Multiple view geometry by Hartley and Zisserman, which is a very nice book indeed.
As Bart and Kigurai pointed out, this process has been studied under the title of "stereo" or "multi-view stereo" techniques. To be able to get a 3D model from a set of pictures, you need to do the following:
a) You need to know the "internal" parameters of a camera. This includes the focal length of the camera, the principal point of the image and account for radial distortion in the image.
b) You also need to know the position and orientation of each camera with respect to each other or a "world" co-ordinate system. This is called the "pose" of the camera.
There are algorithms to perform (a) and (b) which are described in Hartley and Zisserman's "Multiple View Geometry" book. Alternatively, you can use Noah Snavely's "Bundler" http://phototour.cs.washington.edu/bundler/ software to also do the same thing in a very robust manner.
Once you have the camera parameters, you essentially know how a 3D point (X,Y,Z) in the world maps to an image coordinate (u,v) on the photo, and how to map an image coordinate back to the world. You can create a dense point cloud by searching, for each pixel in one photo, for its match in a photo taken from a different viewpoint. This is in general a two-dimensional search, but you can simplify it to a one-dimensional search by "rectification": you take the two photos and transform them so that their rows correspond to the same line in the world (a simplified statement). Now you only have to search along image rows.
An algorithm for this can be also found in Hartley and Zisserman.
Finally, you need to do the matching based on some measure. There is a lot of literature out there on "stereo matching". Another word used is "disparity estimation". This is basically searching for the match of pixel (u,v) on one photo to its match (u, v') on the other photo. Once you have the match, the difference between them can be used to map back to a 3D point.
You can use Yasutaka Furukawa's "CMVS" or "PMVS2" software to do this. Or, if you want to experiment by yourself, OpenCV is an open-source computer vision toolbox that covers many of the sub-tasks required for this.
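To make the steps above concrete, here is a rough OpenCV sketch of the rectification, disparity estimation, and reprojection stages. The calibration values (K, distortion, R, T) are placeholders standing in for the output of a real calibration (e.g. cv2.stereoCalibrate or Bundler), and the file names are hypothetical:

```python
# Rough two-camera pipeline: rectify -> match along rows -> reproject to 3D.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
h, w = left.shape

# (a)+(b): intrinsics and relative pose, normally obtained by calibration (placeholders here).
K1 = K2 = np.array([[700.0, 0, w / 2], [0, 700.0, h / 2], [0, 0, 1]])
D1 = D2 = np.zeros(5)
R, T = np.eye(3), np.array([[0.1], [0.0], [0.0]])  # 10 cm baseline along x

# Rectification: warp both images so that epipolar lines become image rows.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
left_r = cv2.remap(left, m1x, m1y, cv2.INTER_LINEAR)
right_r = cv2.remap(right, m2x, m2y, cv2.INTER_LINEAR)

# Disparity estimation along rows, then reprojection to a dense 3D point cloud.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left_r, right_r).astype(np.float32) / 16.0
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # (h, w, 3) array of XYZ
```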
This can be done with two webcams, in the same way your eyes work. It is called stereoscopic vision.
Have a look at this:
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
An affordable alternative to get 3D data would be the Kinect camera system.
Maybe not the answer you are hoping for, but Microsoft's Kinect does exactly that; there are some open-source drivers out there that allow you to connect it to your Windows/Linux box.
Is it possible to construct a 3D model of a still object if various images along with depth data are gathered from various angles? What I was thinking of is a sort of circular conveyor belt on which a Kinect is placed, while the real object to be reconstructed sits in the middle. The conveyor belt then rotates around the object in a circle and many images are captured (perhaps 10 images per second), which would allow the Kinect to capture an image from every angle, including the depth data; theoretically this seems possible. The model would also have to be recreated with textures.
What I would like to know is whether there are any similar projects or software already available (any links would be appreciated),
whether this is possible within perhaps 6 months,
and how I would proceed to do this (for example, any similar algorithms you could point me to).
Thanks,
MilindaD
It is definitely possible; there are a lot of working 3D scanners out there that use more or less the same principle of stereoscopy.
You probably know this, but just to give context: the idea is to capture the same point in two images and use triangulation to compute its 3D coordinates in your scene. Although this is conceptually simple, the big issue is finding the correspondences between the points in your two images, and this is where you need good software to extract and match similar points.
There is an open-source project called Meshlab for 3D vision, which includes 3D reconstruction algorithms. I don't know the details of the algorithms, but the software is definitely a good entry point if you want to play with 3D.
I used to know some other ones; I will try to find them and add them here:
Insight3d
Check out https://bitbucket.org/tobin/kinect-point-cloud-demo/overview, which is a code sample for the Kinect for Windows SDK that does exactly this. Currently it takes the bitmaps captured by the depth sensor and iterates through the byte array to create a point cloud in PLY format that can be read by MeshLab. The next stage for us is to apply/refine a Delaunay triangulation algorithm to form a mesh instead of points, to which a texture can be applied. A third stage would then be a mesh-merging step to combine multiple captures from the Kinect into a full 3D object mesh.
This is based on some work I did in June using the Kinect for 3D printing capture.
The .NET code in this repository will, however, get you started with what you want to achieve.
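The linked sample is .NET, but the depth-image-to-PLY step itself is small. Here is a hedged Python re-sketch of the same idea; the focal length and principal point are rough, uncalibrated Kinect v1 values, and the input array is a placeholder:

```python
# Back-project a Kinect depth frame (millimetres) to 3D and write an ASCII PLY
# that MeshLab can open. Intrinsics below are approximate, not calibrated.
import numpy as np

def depth_to_ply(depth_mm, path, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    h, w = depth_mm.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0          # metres
    valid = z > 0                                     # drop missing depth readings
    x = (us - cx) * z / fx                            # pinhole back-projection
    y = (vs - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)

    with open(path, "w") as f:                        # minimal ASCII PLY header
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\nend_header\n")
        np.savetxt(f, points, fmt="%.4f")

# Example (placeholder input): depth_to_ply(np.load("kinect_frame.npy"), "frame.ply")
```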
Autodesk has a piece of software that does what you are asking for; it is called "Photofly" and is currently in their Labs section. Using a series of images taken from multiple angles, the 3D geometry is created and then photo-mapped with your images to create the scene.
If you are more interested in the theoretical part of this problem (i.e. if you want to know how it works), here is a document from Microsoft Research about a moving depth camera and 3D reconstruction.
Try out VisualSfM (http://ccwu.me/vsfm/) by Changchang Wu (http://ccwu.me/)
It takes multiple images from different angles of the scene and outputs a 3D point cloud.
The algorithm is called "Structure from Motion".
Brief idea of the algorithm: it involves extracting feature points in each image, finding correspondences between them across images, building feature tracks, and estimating the camera matrices and thereby the 3D coordinates of the feature points. A rough two-view sketch of these steps is given below.
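As a rough two-view illustration of those steps (not VisualSfM's own code), here is an OpenCV sketch: detect features, match them, recover the relative pose from the essential matrix, and triangulate. The camera matrix and image files are placeholders, and a real SfM system would add many views and bundle adjustment on top:

```python
# Minimal two-view structure-from-motion sketch with OpenCV.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000.0, 0, img1.shape[1] / 2],   # placeholder intrinsics
              [0, 1000.0, img1.shape[0] / 2],
              [0, 0, 1]])

# 1. Extract feature points in each image and match them.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Estimate the relative camera pose from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate matched points into sparse 3D structure.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print("triangulated", len(pts3d), "points")
```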
I am trying to figure out what algorithms there are to do surface reconstruction from 3D range data. At a first glance, it seems that the Ball pivoting algorithm (BPA) and Poisson surface reconstruction are the more established methods?
What are the established, more robust algorithms in the field, other than BPA and Poisson surface reconstruction?
Recommended research publications?
Is there available source code?
I have been facing this dilemma for some months now and have done exhaustive research.
Algorithms
Mainly there are two categories of algorithms: computational geometry, and implicit surfaces.
Computational Geometry
These algorithms fit the mesh directly to the existing points.
Probably the most famous algorithm in this group is Power Crust, because it is theoretically well-established: it guarantees a watertight mesh.
Ball Pivoting is patented by IBM. Also, it is not suitable for point clouds with varying point density.
Implicit functions
One fits an implicit function to the point cloud, then uses a marching-cubes-like algorithm to extract the zero set of the function as a mesh.
Methods in this category differ mainly in the implicit function used.
Poisson, Hoppe's, and MPU are the most famous algorithms in this category. If you are new to the topic, I recommend reading Hoppe's thesis; it is very clear.
Algorithms in this category can usually be implemented to process huge inputs very efficiently, and their quality-vs-speed trade-off can be tuned. They are not disturbed by noise, varying point density, or holes. A disadvantage is that they require consistently oriented surface normals at the input points.
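As a hedged illustration of that normals-then-implicit-function pipeline (Open3D is my choice of library here; it is not one of the libraries discussed in this thread):

```python
# Estimate oriented normals, fit the Poisson implicit function, and extract its
# zero level set as a triangle mesh. Input/output file names are placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

# Poisson reconstruction needs consistently oriented normals at the input points.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=20)

# Fit the implicit function and extract the surface.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```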
Implementations
You will find a small number of free implementations. Which ones you can use depends on whether you are going to integrate them into free software (in which case a GPL license is acceptable) or into commercial software (in which case you need a more liberal license). The latter is very rare.
One is in VTK. I suspect it is difficult to integrate (no documentation is available for free), it has a strange, over-complicated architecture, and it is not designed for high-performance applications. It also has some limitations on the allowed input point clouds.
Take a look at this Poisson implementation, and please share your experience with it afterwards.
Also:
Here are a few high-performance algorithms, with surface reconstruction among them.
CGAL is a famous 3D library, but it is free only for free projects.
MeshLab is a famous application released under the GPL.
Also (Added August 2013):
The PCL library has a module dedicated to surface reconstruction and is in active development (it is also part of Google's Summer of Code). The surface module contains a number of different algorithms for reconstruction. PCL also has the ability to estimate surface normals, in case you do not have them provided with your point data; this functionality can be found in the features module. PCL is released under the terms of the BSD license and is open-source software, free for commercial and research use.
If you want to experiment directly with various surface reconstruction algorithms, you should try MeshLab, the mesh-processing system; it is open source and contains implementations of many of the previously cited surface reconstruction algorithms, such as:
Poisson surface reconstruction
a couple of MLS-based approaches
a Ball Pivoting implementation
a variant of the Curless volume-based approach
Delaunay-based techniques (alpha shapes and Voronoi filtering)
tools for computing normals from scattered point sets
and many other tools for comparing/measuring/cleaning/simplifying the resulting meshes.
The sources are covered by the GPL, so you cannot use them in a commercial closed-source project, but it is very important to get a feel for the properties of the various surface reconstruction algorithms (how sensitive they are to noise, their speed, their robustness to outliers, how well they preserve fine details, etc.) before starting to implement one of them.
You might start by looking at some recent work in the field, for example "Fast low-memory streaming MLS reconstruction of point-sampled surfaces" by Gianmauro Cuccuru, Enrico Gobbetti, Fabio Marton, Renato Pajarola, and Ruggero Pintus. Its citations can take you through the literature pretty quickly.
While not a mesh representation, an ex-colleague recommended this link to source code for a thin plate spline method: Link
Has anyone tried it?
Not sure if it's exactly right for your case (it seems odd that you omitted it), but marching cubes is commonly mentioned in cases like these; a minimal example follows.
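For a self-contained taste of it, here is a small scikit-image sketch; the sphere signed-distance function is just an illustrative stand-in for an implicit function fitted to real range data:

```python
# Sample a signed distance function on a grid and extract its zero level set
# with marching cubes.
import numpy as np
from skimage import measure

grid = np.linspace(-1.5, 1.5, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0   # signed distance to a unit sphere

verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print("mesh:", len(verts), "vertices,", len(faces), "triangles")
```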
Since I had this problem too, I developed and implemented my own point cloud crust algorithm. The sources, as well as the documentation, can be found on GitHub: https://github.com/meixxi/PointCloudCrust. The algorithm is implemented in Java.
Maybe this can help you. There is also a short Python script on the page which illustrates how to use the library. Have fun!
Here on GitHub is an open-source mesh processing library in C++ by Dr. Hugues Hoppe, in which the surface reconstruction program Recon is a good option for your problem.
There is a 3D Delaunay tool by Geometric Tools. This tool uses DirectX and OpenGL. Unfortunately, you may need to buy a book to see actual example code for the library, but you can still read the code and figure it out.
MATLAB has also introduced a surface reconstruction tool using Delaunay triangulation, the delaunayTriangulation class.
You might be interested in Alpha Shapes.