Path planning on a 3D point cloud

I have a 3D point cloud of a location, on which I am trying to design a path planner for a mobile robot. Can anyone guide me towards the right approach for solving this problem? From the point cloud, I have the coordinates of the obstacles on the map (their x, y, z positions). I am trying to solve this as a stand-alone, general-purpose planner for a mobile robot, without using ROS.
My current stumbling block is theoretical: since the point cloud consists of just x, y, z points, how is a path planning algorithm like A* run on this kind of data, where you cannot define a grid with each cell as a node the way you can in the 2D case?
I would greatly appreciate any guidance on how to move forward.
Thank you in advance!

I am facing a similar problem. The first step should be to calculate the normal vectors of the point cloud; the path can then be computed based on those normals.
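A minimal sketch of that first step in Python with the open3d library (the file name, search radius, and slope threshold are all assumptions); the idea would be that near-vertical normals mark ground the robot can traverse:

    import numpy as np
    import open3d as o3d

    points = np.loadtxt("cloud.xyz")                  # hypothetical input file
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)

    # Estimate a normal for each point from its local neighborhood.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    normals = np.asarray(pcd.normals)

    # Points whose normals are nearly vertical are candidate traversable ground
    # (the 30-degree slope limit is an arbitrary assumption).
    up = np.array([0.0, 0.0, 1.0])
    traversable = np.abs(normals @ up) > np.cos(np.radians(30))
    ground = points[traversable]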

Related

Match 3D point cloud to CAD model

I have a point cloud of an object, obtained with a laser scanner, and a CAD surface model of that object.
How can I match the point cloud to the surface, to obtain the translation and rotation between cloud and model?
I suppose I could sample the surface and try the Iterative Closest Point (ICP) algorithm to match the resulting sampled point cloud to the scanner point cloud.
Would that actually work?
And are there better algorithms for this task?
In the new OpenCV, I have implemented a surface matching module to match a 3D model to a 3D scene. No initial pose is required and the detection process is fully automatic. The module also includes an ICP refinement.
To get an idea, please check out this video (though it was not generated by the OpenCV implementation):
https://www.youtube.com/watch?v=uFnqLFznuZU
The full source code is here and the documentation is here.
You mentioned that you need to sample your CAD model. This is correct, and we have published a sampling algorithm suited for point pair feature matching, such as the matching implemented in OpenCV:
Birdal, Tolga, and Slobodan Ilic. A point sampling algorithm for 3D matching of irregular geometries. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017.
http://campar.in.tum.de/pub/tbirdal2017iros/tbirdal2017iros.pdf
Yes, ICP can be applied to this problem, as you suggest, by sampling the surface. It works best if all faces are visible in your laser scan; otherwise you may have to remove invisible faces from your model (depending on how many of them there are).
One way of automatically preparing a model by getting rid of some of the hidden faces is to calculate the concave hull, which can be used to discard hidden faces (for example, faces that are not close to the hull). Depending on how involved the model is, this may or may not be necessary.
ICP works well when given a good initial guess, because it ignores points that are not close under the current guess. If ICP does not come up with a good alignment, you can run it with multiple random restarts and keep the best alignment.
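As an illustration, a random-restart loop might look like the following sketch with open3d (the file names, restart count, and correspondence distance are assumptions):

    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("source.pcd")    # hypothetical files
    target = o3d.io.read_point_cloud("target.pcd")

    best = None
    for _ in range(20):                               # number of restarts (assumption)
        # Random initial guess: a random rotation about a random axis.
        angle_axis = np.random.uniform(-np.pi, np.pi, size=3)
        init = np.eye(4)
        init[:3, :3] = o3d.geometry.get_rotation_matrix_from_axis_angle(angle_axis)
        result = o3d.pipelines.registration.registration_icp(
            source, target, 0.05, init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if best is None or result.fitness > best.fitness:
            best = result                             # keep the best-scoring alignment

    print(best.transformation)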
A more involved solution is local feature matching: sample the clouds, compute an invariant descriptor such as SHOT or FPFH, find the best matches, reject inconsistent ones, use the survivors to compute a good initial alignment, and then refine with ICP. You may not need this step, depending on how robust and fast the random-restart ICP turns out to be.
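For reference, that coarse-to-fine pipeline could be sketched with open3d's FPFH features and RANSAC matcher (the voxel size and all thresholds are assumptions to tune per data set):

    import open3d as o3d

    def coarse_to_fine(source, target, voxel=0.05):
        def preprocess(pcd):
            down = pcd.voxel_down_sample(voxel)
            down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
            fpfh = o3d.pipelines.registration.compute_fpfh_feature(
                down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
            return down, fpfh

        src, src_fpfh = preprocess(source)
        tgt, tgt_fpfh = preprocess(target)

        # Coarse alignment from descriptor matches, with consistency checks
        # that reject matches violating pairwise edge lengths or distances.
        coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            src, tgt, src_fpfh, tgt_fpfh, True, 1.5 * voxel,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
            [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
             o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
            o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

        # Refine the coarse estimate with ICP on the full-resolution clouds.
        return o3d.pipelines.registration.registration_icp(
            source, target, voxel, coarse.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())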
There's an open-source library for point cloud algorithms that implements registration against other point clouds. Maybe you can try some of its methods to see if any fit.
As a starter, if it doesn't have anything specific for fitting against a polygon mesh, you can treat the mesh vertices as another point cloud and fit your scanned cloud against that. This is something it definitely supports.
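For example, with open3d the mesh vertices can be wrapped directly as a point cloud and registered with ICP (the file names and correspondence distance are assumptions):

    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("model.ply")     # hypothetical CAD model
    scan = o3d.io.read_point_cloud("scan.pcd")        # hypothetical scanner cloud

    # Treat the mesh vertices as a point cloud and fit the scan against it.
    model_pcd = o3d.geometry.PointCloud(mesh.vertices)
    result = o3d.pipelines.registration.registration_icp(
        scan, model_pcd, 0.02,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print(result.transformation)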

Connected components in organized 3D point cloud data

Hi!
I have organized point clouds from a Kinect sensor. Let's say I have an organized point cloud of a sofa with a table in front of it. What I would like to get are two clouds: sofa and table.
I am searching for an algorithm to get the connected components.
Does anyone have some pseudocode or papers? Or maybe some code (Matlab)?
My idea at the moment: I could use the 2D image structure to get the neighboring pixels of a point.
Next, I could check the Euclidean distance to the neighboring pixels; if the distance is below a threshold, the pixel belongs to the same cluster (a sketch of this idea follows below).
Thanks
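For what it's worth, that flood-fill idea might look like this in Python (the cloud is assumed to be an H x W x 3 array with NaNs for invalid depth, and the 2 cm threshold is an assumption):

    import numpy as np
    from collections import deque

    def connected_components(cloud, thresh=0.02):
        h, w, _ = cloud.shape
        labels = -np.ones((h, w), dtype=int)          # -1 means unlabeled
        current = 0
        for sy in range(h):
            for sx in range(w):
                if labels[sy, sx] != -1 or np.isnan(cloud[sy, sx]).any():
                    continue
                # Breadth-first flood fill over 4-connected pixels whose
                # 3D Euclidean distance stays below the threshold.
                queue = deque([(sy, sx)])
                labels[sy, sx] = current
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1 \
                                and not np.isnan(cloud[ny, nx]).any() \
                                and np.linalg.norm(cloud[y, x] - cloud[ny, nx]) < thresh:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
                current += 1
        return labels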
As @Amro pointed out, DBSCAN is the algorithm you should study. It is a clustering method based on "density-connected" components.
Also note the GDBSCAN variant (Generalized DBSCAN): you are not restricted to primitive distances such as the Euclidean distance, but can make your "neighborhood" definition as complex as you like.
Matlab is probably not the best choice. For DBSCAN to be really fast, you need support for index acceleration. Recent scikit-learn (0.14, to be precise) just got basic index acceleration for DBSCAN, and ELKI has had it for years. ELKI also seems more flexible with respect to GDBSCAN and to index structures that are easy to extend with custom distance functions; sklearn probably only accelerates a few built-in distances.
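A minimal scikit-learn sketch (eps and min_samples are assumptions that depend on the sensor's resolution):

    import numpy as np
    from sklearn.cluster import DBSCAN

    points = np.load("cloud.npy")            # hypothetical N x 3 array of points
    labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(points)

    # Label -1 marks noise; every other label is one density-connected component.
    for k in sorted(set(labels) - {-1}):
        print("cluster", k, "has", np.count_nonzero(labels == k), "points")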
You can use the connected-component segmentation plugin in the CloudCompare software, via "Tools > Segmentation > Label Connected Component".

360 degree 3D view of a room using a single rotating Kinect

I am working on a research project to construct a 360 degree 3D view of a room using a single rotating Kinect placed in the center.
My current approach is to capture 3D point clouds after every 2 to 5 degrees of rotation and register them using the Iterative Closest Point (ICP) algorithm.
Note that we need to build the view in real time as the Kinect rotates, so we have to capture a point cloud after each small rotation of the Kinect.
However, the ICP algorithm is computationally expensive.
I am looking for a better solution to the above problem. Any help/pointers in this direction will be appreciated.
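For reference, the incremental approach described above might be sketched with open3d as follows, seeding ICP with the commanded rotation so it starts close to the answer (the rotation axis, correspondence distance, and bookkeeping are assumptions):

    import numpy as np
    import open3d as o3d

    def add_scan(model, new_scan, yaw_deg):
        # Initial guess from the commanded rotation (assumed to be about the
        # vertical z axis through the sensor origin). A good seed lets ICP
        # converge in far fewer iterations.
        init = np.eye(4)
        init[:3, :3] = o3d.geometry.get_rotation_matrix_from_axis_angle(
            np.array([0.0, 0.0, np.radians(yaw_deg)]))
        result = o3d.pipelines.registration.registration_icp(
            new_scan, model, 0.05, init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        # Merge the aligned scan into the running model of the room.
        return model + new_scan.transform(result.transformation)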
I'm not sure how familiar you are with the intersection of machine learning and computer vision, but recently a much harder problem has been solved thanks to advances in machine learning: generating 3D models of large areas from an unstructured collection of images. For an example of "building Rome in a day", see this video; it may just blow your mind.
With your mind suitably blown, you may want to check out the machine learning techniques that allowed this computation to take place efficiently in this video.
You may want to follow up with Noah Snavely's PhD thesis and check out the algorithms he used, along with the other work that was incorporated to build this system. The problem of reconstructing a single room from one rotating viewpoint should be a much easier inference problem. Then again, you may just want to check out the implementation in their software :)

Finding a cross in an image

I have a set of binary images in which I need to find a cross (examples attached). I use findContours to extract borders from the binary image, but I can't figure out how to determine whether a given shape (border) is a cross or not. Maybe OpenCV has some built-in methods that could help solve this problem. I thought about solving it with machine learning, but I think there is a simpler way. Thanks!
Viola-Jones object detection could be a good start. Though the main usage of the algorithm (AFAIK) is face detection, it was actually designed for detecting any object, such as your cross.
The algorithm is machine-learning based (so you will need a set of images classified as "crosses" and a set classified as "not crosses"), and you will need to identify the significant "features" (patterns) that help the algorithm recognize crosses.
The algorithm is implemented in OpenCV as cvHaarDetectObjects().
From the original image, let's say you've extracted a set of polygons that could potentially be your cross. Assuming the whole cross is visible, to the extent that every edge has a distinguishable length, you could try the following (a partial sketch of the first test appears after the note below).
Reject all polygons that do not have exactly the 12 vertices required to form your cross.
Re-order the vertices so that the shortest edge comes first.
Create a best-fit perspective transformation that maps your vertices onto a cross of uniform size.
Examine the residuals produced when this transformation projects your polygon onto the uniform cross, where the residual for a given point is the distance between the projected point and the corresponding uniform point.
If all the residuals are within your defined tolerance, you've found a cross.
Note that this works primarily because of the simplicity of the geometric shape you're searching for. Your contours will also need noise removed for this to work; e.g., each line within the cross needs to be reduced to a single simple line.
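A partial OpenCV sketch of the vertex-count test (the image name and simplification tolerance are assumptions; the perspective fit and residual check are left out):

    import cv2

    binary = cv2.imread("cross.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    candidates = []
    for c in contours:
        eps = 0.02 * cv2.arcLength(c, True)    # simplification tolerance (assumption)
        poly = cv2.approxPolyDP(c, eps, True)  # removes noise along each edge
        if len(poly) == 12:                    # a cross outline has exactly 12 corners
            candidates.append(poly)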
Depending on your requirements, you could try a local feature detector like SIFT or SURF. Check out OpenSURF, which is an interesting implementation of the latter.
After some days of struggle, I came to the conclusion that the only robust way here is to use SVM + HOG. That's all.
You could erode each blob and analyze how its pixel count goes down. No matter the rotation or scaling of the crosses, the count should always go down at the same ratio, except when you're closing in on the remaining center. Likewise, once the blob is small enough, you should expect it to sit at the center of the original blob. You won't need any machine learning algorithm or training data for this.
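A rough sketch of that erosion loop with OpenCV (the file name and kernel size are assumptions):

    import cv2
    import numpy as np

    blob = cv2.imread("blob.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary blob
    kernel = np.ones((3, 3), np.uint8)

    counts = []
    while cv2.countNonZero(blob) > 0:
        counts.append(cv2.countNonZero(blob))
        blob = cv2.erode(blob, kernel)                    # peel one layer of pixels

    # Compare the decay profile against that of a reference cross.
    ratios = [b / a for a, b in zip(counts, counts[1:])]
    print(ratios)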

Algorithm to find 'hot spots' in a database of GPS coordinates

I've got a set of data that I'm about to put into a database: a list of GPS points.
I want to iterate over this database and build a table of 'hot spots': areas of a given size (either square or circular, I don't need to be exact) that contain a high number of points.
Can anyone recommend existing algorithms that might help me with this?
Thanks in advance!
r3mo
K-means clustering would be a good starting point for identifying hot spots; see the Wikipedia entry.
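A tiny scikit-learn sketch (the file name and k=10 are assumptions; note that K-means needs the number of clusters up front, which may not suit hot-spot detection):

    import numpy as np
    from sklearn.cluster import KMeans

    points = np.loadtxt("gps_points.csv", delimiter=",")   # hypothetical N x 2 lat/lon
    km = KMeans(n_clusters=10, n_init=10).fit(points)
    print(km.cluster_centers_)                             # candidate hot-spot centers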
How about creating a raster with a given cell size and setting each raster value to the number of points falling within that pixel (a density plot)? It's a basic approach with some limitations (where you place the grid and the pixel size will affect the outcome), but if that's all you need... This can be done easily in R using the spatstat package; check out this PDF tutorial on spatstat for examples.
Unless another variable is attached to your points, this isn't really hotspot detection, just a determination of point density...
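An equivalent of that density raster in Python with NumPy (the answer itself uses R's spatstat; the cell size and hot-spot count threshold here are assumptions):

    import numpy as np

    lats, lons = np.loadtxt("gps_points.csv", delimiter=",", unpack=True)  # hypothetical
    cell = 0.01                                   # cell size in degrees (assumption)
    lat_edges = np.arange(lats.min(), lats.max() + cell, cell)
    lon_edges = np.arange(lons.min(), lons.max() + cell, cell)
    density, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])

    # Cells whose counts exceed a threshold are candidate hot spots.
    for i, j in np.argwhere(density > 50):        # threshold is an assumption
        print("hot spot near", lat_edges[i], lon_edges[j])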
