How to determine cluster rotation around z-axis?

I have a pcl::PointCloud<pcl::PointXYZ> cloud that needs to be clustered.
I've used pcl::EuclideanClusterExtraction to extract the clusters, and I've used pcl::computeCentroid(*(cluster), centroid) to get the centroid x, y, z values.
Is there a function like computeCentroid in PCL that can compute the rotation around the z-axis? If it is not implemented, how can I calculate it manually from the cluster points? PS: I know that a manual calculation may slow down the overall performance, since I am not doing any optimizations.
Thanks.
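For what it's worth, one common way to get a yaw angle for a cluster is to take its dominant principal axis and measure its angle in the XY plane. Below is a minimal sketch using pcl::PCA, assuming the cluster is elongated enough that the major axis is meaningful (note the result is ambiguous by 180 degrees, since an eigenvector's sign is arbitrary):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/pca.h>
#include <cmath>

// Estimate the cluster's rotation about the z-axis (yaw) from the dominant
// principal axis of its points. Returns the angle in radians.
float estimateClusterYaw(const pcl::PointCloud<pcl::PointXYZ>::Ptr &cluster)
{
  pcl::PCA<pcl::PointXYZ> pca;
  pca.setInputCloud(cluster);

  // Eigenvectors are sorted by decreasing eigenvalue; column 0 is the
  // direction of largest spread of the cluster.
  Eigen::Vector3f majorAxis = pca.getEigenVectors().col(0);

  // Project the major axis onto the XY plane and take its angle to the x-axis.
  // Note: ambiguous by 180 degrees because the eigenvector sign is arbitrary.
  return std::atan2(majorAxis.y(), majorAxis.x());
}
```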

Related

Fast 3D mesh generation from pointcloud

I would like to build a simple mesh from a set of points as fast as possible. Hypothetically, my point cloud could have a fairly low number of points (somewhere between 1,000 and 50,000).
I've read about 3D Delaunay triangulation and some other methods, but most of the time I don't find speed reported in papers, and other times I see huge computation times on the order of minutes.
An interesting algorithm I've found is this one: https://doc.cgal.org/latest/Poisson_surface_reconstruction_3/index.html
My main concern is that it is meant for reconstructing 2D surfaces in 3D space, while my point cloud contains points that would lie in the interior of the final volume.
Could you suggest some algorithms that could be useful in my scenario, and also give a rough estimate of the computation times? Is it possible to do such a task in less than 5 seconds?
Note that I'm not trying to reconstruct human faces, sculptures, or anything like that. The meshes I'm trying to reconstruct are always pretty polyhedral.
Thanks for your attention.
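Since the shapes are described as roughly polyhedral and interior points are expected, one cheap option to experiment with is a convex hull, which discards interior points and typically runs well under a second for 50,000 points. A minimal sketch using PCL's ConvexHull (an assumption on my part, not something from the question; it only helps if the target shapes are actually convex):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/surface/convex_hull.h>
#include <pcl/PolygonMesh.h>

// Build a convex hull mesh from an unorganized cloud.
// Interior points are discarded automatically; only valid for convex shapes.
pcl::PolygonMesh buildConvexHullMesh(const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud)
{
  pcl::ConvexHull<pcl::PointXYZ> hull;
  hull.setInputCloud(cloud);
  hull.setDimension(3);  // full 3D hull, not a planar 2D hull

  pcl::PolygonMesh mesh;
  hull.reconstruct(mesh);
  return mesh;
}
```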

Distance to horizon with terrain elevation data

Looking for an algorithm to compute the actual distance from a latitude/longitude/elevation to the visible horizon, taking into account the actual surrounding terrain and the curvature of the earth. Assume you have enough terrain data for the surrounding several hundred miles from any of the open elevation datasets. The problem can be simplified to an approximation by checking a few cardinal directions. Ideally I'd like to be able to compute the real solution as well.
Disclosure: I'm the developer and maintainer of the below mentioned software package.
I'm not sure if you're still looking for a solution, as this question is already a bit older. However, one solution for your problem would be to apply the open-source package HORAYZON (https://github.com/ChristianSteger/HORAYZON). It's based on the high-performance ray-tracing library Intel Embree (https://www.embree.org) and is thus very fast, and it considers Earth's curvature. With this package, you can compute the horizon angle and the distance to the horizon line for one or multiple arbitrary locations on a Digital Elevation Model (DEM), and set various options like the number of cardinal sampling directions, the maximum search distance for the horizon, etc. However, I'm not sure what you mean by "real solution". Do you mean the "perfect" solution, i.e. considering elevation information from all DEM cells without doing a discrete sampling along the azimuth angle? Unfortunately, this cannot be done with the above-mentioned package (but one could theoretically implement it).
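To illustrate the discrete-sampling idea (independent of HORAYZON, whose API is not shown here), below is a rough sketch that walks outward along one azimuth over a caller-supplied elevation sampler, accounts for the curvature drop with the usual d^2 / (2R) approximation, and returns the distance of the terrain point that forms the horizon. The elevation sampler is a hypothetical stand-in for a real DEM lookup:

```cpp
#include <cmath>
#include <functional>

// Distance to the visible horizon along a single azimuth.
// 'elevationAt' is a hypothetical DEM sampler: terrain elevation in metres at
// a ground distance of d metres from the observer along that azimuth.
double horizonDistance(const std::function<double(double)> &elevationAt,
                       double observerElevation,  // observer's eye elevation (m)
                       double maxDistance,        // how far to search (m)
                       double step)               // sampling step (m)
{
  const double earthRadius = 6371000.0;  // mean Earth radius (m)
  double bestAngle = -M_PI / 2.0;        // steepest apparent elevation angle so far
  double bestDistance = 0.0;

  for (double d = step; d <= maxDistance; d += step) {
    // Drop below the observer's horizontal plane due to curvature: ~ d^2 / (2R).
    double curvatureDrop = (d * d) / (2.0 * earthRadius);
    double relativeHeight = elevationAt(d) - observerElevation - curvatureDrop;
    double angle = std::atan2(relativeHeight, d);
    if (angle > bestAngle) {
      bestAngle = angle;
      bestDistance = d;  // this terrain point currently forms the horizon
    }
  }
  return bestDistance;
}
```

Repeating this over a set of azimuths gives the horizon line; atmospheric refraction is ignored here.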

Edge Detection Point Cloud

I am working on an application that filters a point cloud from a laser distance-measuring device. It's a small array, only 3x176x132, and I'm trying to find parts inside a bin and pick the topmost one. So far I have played around with filtering the data into a form that can be processed by more traditional vision algorithms. I have been using the Sobel operator on the distance data and normalizing it, and this is what I came up with.
The same filter applied to the PMD amplitude.
My problem is that I feel I am not getting enough out of the distance data. When I probe the actual height values, I see a drop of the thickness of a part around its edges, but this is not reflected in the results. I think it has to do with the fact that the largest distance changes in the image are 800 mm while a part is only 10 mm thick, but I'm sure there must be a better way to filter this.
Any suggestions?
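One thing that directly targets the 800 mm vs. 10 mm issue is to clamp the gradient magnitude to the scale of a part edge before normalizing, so small steps are not washed out by the full depth range. A sketch assuming OpenCV and that the depth channel is available as a single-channel CV_32F image in millimetres (both assumptions, not stated in the question):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Edge map from a depth image, tuned so that small height steps survive.
// 'depthMm' is assumed to be a single-channel CV_32F image in millimetres.
cv::Mat depthEdges(const cv::Mat &depthMm, double clipMm = 20.0)
{
  cv::Mat gx, gy, magnitude;
  cv::Sobel(depthMm, gx, CV_32F, 1, 0, 3);
  cv::Sobel(depthMm, gy, CV_32F, 0, 1, 3);
  cv::magnitude(gx, gy, magnitude);

  // Clamp the gradient magnitude at roughly the expected part-edge height,
  // so the scaling is not dominated by the 800 mm bin-to-background range.
  cv::Mat clipped;
  cv::threshold(magnitude, clipped, clipMm, 0.0, cv::THRESH_TRUNC);

  // Scale the clamped range to 8-bit for display or further processing.
  cv::Mat edges8u;
  clipped.convertTo(edges8u, CV_8U, 255.0 / clipMm);
  return edges8u;
}
```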

Photon Mapping - Several issues

So I'm trying to implement a photon-mapping algorithm to simulate global illumination in my ray-tracing program. However, I'm running into a few issues that are making it hard to complete the implementation.
My program already successfully traces photons throughout the scene, stores them in a balanced kd-tree, and can gather the k nearest photons around any given point p, so most of the work is complete. The issue arises mostly when it comes time for the radiance estimate.
First, I can't seem to get the indirect illumination bright enough to make any noticeable difference in my scene. If my light source emits 100,000 photons, then the power of each stored photon (and for 100,000 emitted photons my program stores roughly 500,000 photons) must be scaled down by a factor of 100,000, which makes them very dim. I thought I could mitigate this when dividing by the area of the encapsulating circle (pi*rad^2) if I use a search radius much smaller than 1, but decreasing the search radius that much leaves me with very few photons for the estimate, while with a large radius I get enough photons but lose the extra "power boost" of the small radius and might end up including incorrect photons. So I don't know what to do.
Additionally, my other problem is that if I artificially scale up the photon power to increase the indirect lighting contribution, the resulting illumination is splotchy, uneven, and ugly, and doesn't look realistic at all. I know this problem is vague, but I don't know why it looks this way, since I'm fairly sure I'm doing the radiance estimate and BRDF calculations correctly.
Without knowing the exact calculations you are doing throughout your rendering system, no one will be able to tell you why your indirect illumination is so weak. It could be that your materials are dark enough that there just isn't much indirect light. It could be that you are missing a factor of pi in your indirect illumination calculation somewhere, or you could be missing a divide-by-pi in your direct illumination calculation, so the indirect is dim in comparison.
As for splotchiness, that's what photon mapping looks like without a ton of photons. Try 100 million photons (or at least 10 million) instead and see if the issue persists.
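For reference, the usual density-estimate form of the radiance estimate divides each photon's power by the number of *emitted* photons (once, at emission time) and divides the gathered sum by the disc area pi*r^2. A minimal sketch of that gather step for a Lambertian surface; the Vec3 and Photon types here are hypothetical stand-ins for whatever your program uses:

```cpp
#include <vector>
#include <cmath>

// Hypothetical stand-in types for illustration only.
struct Vec3   { float x, y, z; };
struct Photon { Vec3 power; };  // power already divided by the emitted-photon count

// Density-estimate form of the photon-map radiance estimate:
//   L(x, w) ~= sum_p f_r(x, w_p, w) * power_p / (pi * r^2)
// For a Lambertian surface the BRDF is f_r = albedo / pi.
Vec3 estimateRadiance(const std::vector<Photon> &nearestPhotons,
                      float searchRadius,
                      const Vec3 &albedo)
{
  const float invPi   = 1.0f / static_cast<float>(M_PI);
  const float invArea = 1.0f / (static_cast<float>(M_PI) * searchRadius * searchRadius);

  Vec3 L{0.0f, 0.0f, 0.0f};
  for (const Photon &p : nearestPhotons) {
    // Lambertian BRDF applied per colour channel.
    L.x += albedo.x * invPi * p.power.x;
    L.y += albedo.y * invPi * p.power.y;
    L.z += albedo.z * invPi * p.power.z;
  }
  // Divide by the area of the gather disc.
  L.x *= invArea;  L.y *= invArea;  L.z *= invArea;
  return L;
}
```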

Connected components in organized 3d point cloud data

Hi!
I have organized point clouds from a Kinect sensor. Let's say I have an organized point cloud of a sofa with a table in front of it. What I would like to get are two clouds: sofa and table.
I am searching for an algorithm to get the connected components.
Does anyone have some pseudocode or papers? Or maybe some code (Matlab)?
My idea at the moment: I could use the 2D information to get the neighboring pixels of a point.
Next, I could check the Euclidean distance to the neighboring pixels. If the distance is below a threshold, the pixel belongs to the same cluster. ...
Thanks
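That region-growing idea maps directly to a flood fill over the organized grid. A minimal sketch (not any particular library's implementation) that labels each point with a component id, joining 4-connected neighbours whose Euclidean distance is below a threshold:

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <cmath>
#include <queue>
#include <vector>

// Flood-fill labelling of an organized cloud: 4-connected neighbours in the
// image grid are joined when their 3D Euclidean distance is below 'threshold'.
// Returns one component label per point; -1 means an invalid (NaN) point.
std::vector<int> labelComponents(const pcl::PointCloud<pcl::PointXYZ> &cloud,
                                 float threshold)
{
  const int w = static_cast<int>(cloud.width);
  const int h = static_cast<int>(cloud.height);
  std::vector<int> labels(static_cast<std::size_t>(w) * h, -1);
  int nextLabel = 0;

  for (int start = 0; start < w * h; ++start) {
    if (labels[start] != -1 || !std::isfinite(cloud.points[start].z))
      continue;

    labels[start] = nextLabel;
    std::queue<int> frontier;
    frontier.push(start);

    while (!frontier.empty()) {
      const int idx = frontier.front();
      frontier.pop();
      const int u = idx % w;
      const int v = idx / w;

      // 4-connected pixel neighbourhood in the organized grid.
      const int du[4] = {-1, 1, 0, 0};
      const int dv[4] = { 0, 0, -1, 1};
      for (int k = 0; k < 4; ++k) {
        const int nu = u + du[k];
        const int nv = v + dv[k];
        if (nu < 0 || nu >= w || nv < 0 || nv >= h)
          continue;
        const int nidx = nv * w + nu;
        if (labels[nidx] != -1 || !std::isfinite(cloud.points[nidx].z))
          continue;

        const float dx = cloud.points[idx].x - cloud.points[nidx].x;
        const float dy = cloud.points[idx].y - cloud.points[nidx].y;
        const float dz = cloud.points[idx].z - cloud.points[nidx].z;
        if (dx * dx + dy * dy + dz * dz < threshold * threshold) {
          labels[nidx] = nextLabel;
          frontier.push(nidx);
        }
      }
    }
    ++nextLabel;
  }
  return labels;
}
```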
As @Amro pointed out, DBSCAN is the algorithm you should study. It is a clustering method based on "density-connected" components.
Also note the GDBSCAN variant (Generalized DBSCAN). You are not restricted to primitive distances such as the Euclidean distance; you can make your "neighborhood" definition as complex as you'd like.
Matlab is probably not the best choice. For DBSCAN to be really fast, you need support for index acceleration. Recent scikit-learn (0.14, to be precise) just got basic index acceleration for DBSCAN, and ELKI has had it for years. ELKI seems to be more flexible with respect to having GDBSCAN and having index structures that are easy to extend with custom distance functions; sklearn probably only accelerates a few built-in distances.
You can use the connected-component segmentation plugin from "Tools>Segmentation>label connected component" in the CloudCompare software.
