Open3D constructed mesh is black

I'm trying to create a mesh of a simple environment (a playpen, in ROS Noetic and Gazebo). I used 10 PCD files (recorded with an HDL-32E lidar) to create the mesh environment with the following steps:
1- Remove radius outliers (nb_points=10, radius=0.8) from the PCD files and save them as PLY files
2- Register the PLY files using point-to-plane ICP and pose graph optimization
3- Combine the PLY files. The combined cloud looks good (see combined_plys.png).
4- Reconstruct the mesh using Poisson reconstruction (depth=14). The resulting mesh shows only a black rectangle (see front.png). The flipped side shows something like a playpen environment, but it looks bad (see flipped.png). The reconstruction also prints the warning "Extract bad average roots: 21".
I did some reading and observed that normals play a critical role in mesh reconstruction. I created the normals in CloudCompare and then set their orientation with orient_normals_to_align_with_direction. The registered and combined cloud now has normals, apparently aligned (see normals_front and normals_back). Consequently, there is some improvement in the flipped mesh, but the front side is still a black rectangle. Any help/hint is much appreciated.
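For reference, steps 1 and 4 roughly correspond to the Open3D calls below. The file names and the normal-estimation radius are placeholders, and orient_normals_consistent_tangent_plane is shown as one possible orientation step rather than the orient_normals_to_align_with_direction call mentioned above (a consistent tangent-plane orientation is often more robust for enclosed scenes than aligning every normal to a single direction):

import open3d as o3d

# Step 1: load a scan and remove radius outliers
pcd = o3d.io.read_point_cloud("scan_00.pcd")
pcd, _ = pcd.remove_radius_outlier(nb_points=10, radius=0.8)

# Estimate and consistently orient normals before Poisson reconstruction
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Step 4: Poisson reconstruction (depth=14 as above is possible but heavy;
# lower depths are much faster for testing)
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)
o3d.io.write_triangle_mesh("mesh.ply", mesh)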
(Attached images: Combined_cloud, front_mesh, flipped_mesh, normals_front, normals_back, flipped_mesh_with_normals)
Could you guys suggest how to fix this issue? Thanks in advance

I saw that your question has been answered in another forum. That solution is a bit involved and I didn't go through it; I'm just sharing how I solved this with a small Open3D change.
I also ran into the black reconstruction problem.
From my trials I found that it is the vertex normals of the mesh that have to be computed, not (only) the normals of the original point cloud. Here's what I do:
# mesh is the TriangleMesh returned by the Poisson reconstruction
# Calculate the vertex normals of the mesh
mesh.compute_vertex_normals()
# Paint it gray. Not necessary, but the reflection of lighting is hardly perceivable on black surfaces.
mesh.paint_uniform_color([0.5, 0.5, 0.5])

Related

Creating a fence diagram in Mayavi or Matplotlib

Working with Matplotlib I have produced some resistivity cross sections of the soil, obtaining pictures like this:
Now I would like to display all those sections in 3D so as to visualise better the spatial distribution of resistivity in the field (i.e. a so-called fence diagram). I would also like to plot the 2D map of the site where those measurements were carried out at the base of my plot (say on the XY plane).
As far as I have seen this is not feasible (or at least not convenient) with Matplotlib in 3D hence I decided to switch to Mayavi.
My questions are:
Is it feasible to import georeferenced rasters and then place them on the correct (vertical) planes (not necessarily parallel to the Cartesian ones) with Mayavi? Does imshow() serve this purpose?
Or is it better to recreate the contours in Mayavi at the proper locations? If so, I did not find a function to create contours from unstructured data (the input images were created with tricontour/tricontourf in Matplotlib). I do not think interpolating over a structured grid in SciPy would do, given the non-convex domain.
Ok, answering my own question:
# x, y, z, triangles come from the (masked) Matplotlib triangulation; rho holds the resistivity values
mesh = mlab.pipeline.triangular_mesh_source(x, y, z, triangles, scalars=rho)
surf = mlab.pipeline.surface(mesh)
seems to do the job.
To be consistent with the previous work, the triangulation, duly masked, can be directly imported from Matplotlib.
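For completeness, a minimal end-to-end sketch of that workflow, assuming one section is stored as columns x, y, z, resistivity in a text file (the file name, the choice of (x, z) as the triangulation plane, and the masking criterion are all placeholders):

import numpy as np
from matplotlib.tri import Triangulation
from mayavi import mlab

# Hypothetical data for one cross section: coordinates and a resistivity value per point
x, y, z, rho = np.loadtxt("section_01.txt", unpack=True)

# Triangulate in a 2D parameterisation of the section plane (here x/z), optionally mask
# unwanted triangles, then reuse the same triangulation in Mayavi
tri = Triangulation(x, z)
# tri.set_mask(my_mask)          # placeholder masking criterion
triangles = tri.get_masked_triangles()

mesh = mlab.pipeline.triangular_mesh_source(x, y, z, triangles, scalars=rho)
surf = mlab.pipeline.surface(mesh)
mlab.show()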

Understanding of NurbsSurface

I want to create a NURBS surface in OpenGL. I use a 40x48 grid of control points. In addition, I create indices to determine the order of the vertices.
In this way I created my surface out of triangles.
Just to avoid misunderstandings: I have
float[] vertices=x1,y1,z1,x2,y2,z2,x3,y3,z3....... and
float[] indices= 1,6,2,7,3,8....
Now I don't want to draw triangles; I would like to interpolate the surface points. I thought about NURBS or B-splines.
The crux is:
in order to apply the NURBS algorithms I have to interpolate patch by patch. In my understanding one patch is defined by, for example, points 1,6,2,7 or 2,7,3,8 (please see the picture).
First of all I created the vertices and indices in order to use a vertex shader.
But actually it would be enough to draw it the old (fixed-function) way. In this case I would define vertices and indices as follows:
float[] vertices= v1,v2,v3... with v=x,y,z
and
float[] indices= 1,6,2,7,3,8....
In OpenGL there is a ready-to-use NURBS facility, gluNewNurbsRenderer, so I can render a single patch easily.
Unfortunately, I fail at the point of how to stitch the patches together. I found an explanation (the Teapot example) but (maybe I have become obsessed by this) I can't transfer that solution to my case. Can you help?
You have a set of control points from which you want to draw a surface. There are two ways you can go about this:
1- Calculate the vertices from the control points yourself and pass them down the graphics pipeline with GL_TRIANGLES as the topology (remember that the graphics hardware needs triangulated data in order to draw). This is the approach described in the Teapot example link you provided. Follow this link, which shows how to evaluate vertices from control points:
http://www.glprogramming.com/red/chapter12.html
2- Prepare patches from your control points and use tessellation shaders to triangulate and stitch them. For this you submit each set of control points as a patch using the GL_PATCHES primitive and pass it to the tessellation control shader, where you specify the outer and inner tessellation levels you want. Based on those levels, your patch is tessellated by a fixed-function stage known as the primitive generator, and the generated vertices are then passed to the tessellation evaluation shader, where you compute their final positions on the surface.
I would suggest you keep your VBO and IBO as you have them, with the control points, and use the GL_PATCHES primitive when drawing, and follow a tutorial on how to use tessellation shaders to draw NURBS surfaces.
Note: the second method is kind of tricky and you will have to read a lot of research papers.
If you don't want to go with the modern pipeline, I suggest option 1.
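If you go with option 1, the per-vertex evaluation is just a small matrix product per patch. Below is a rough sketch in Python/NumPy of evaluating a grid of bicubic patches with plain uniform cubic B-splines (not full rational NURBS); the grid values and the sampling density are placeholders, and the same math ports directly to C:

import numpy as np

# Uniform cubic B-spline basis matrix
M = (1.0 / 6.0) * np.array([[-1,  3, -3, 1],
                            [ 3, -6,  3, 0],
                            [-3,  0,  3, 0],
                            [ 1,  4,  1, 0]], dtype=float)

def eval_patch(ctrl, u, v):
    """Evaluate one bicubic patch. ctrl has shape (4, 4, 3): a 4x4 window of control points."""
    U = np.array([u**3, u**2, u, 1.0])
    V = np.array([v**3, v**2, v, 1.0])
    # S(u, v) = U * M * G * M^T * V^T, applied per coordinate
    return np.array([U @ M @ ctrl[:, :, k] @ M.T @ V for k in range(3)])

# Hypothetical 40x48 grid of control points (x, y, z)
grid = np.random.rand(40, 48, 3)

samples = 8  # tessellation density per patch (placeholder)
verts = []
# Slide a 4x4 window over the grid; each window is one patch
for i in range(grid.shape[0] - 3):
    for j in range(grid.shape[1] - 3):
        for u in np.linspace(0.0, 1.0, samples):
            for v in np.linspace(0.0, 1.0, samples):
                verts.append(eval_patch(grid[i:i+4, j:j+4], u, v))
verts = np.array(verts)  # triangulate these evaluated points and send them as GL_TRIANGLES

Because neighbouring 4x4 windows share three rows and columns of control points, adjacent patches join with C2 continuity automatically, which is the stitching behaviour you are asking about.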

Surface Reconstruction given point cloud and surface normals

I have a .xyz file that has irregularly spaced points and gives, for each point, the position and the surface normal (i.e. XYZIJK). Are there algorithms out there that can reconstruct the surface while factoring in the IJK (normal) vectors? Most algorithms I have found assume that the surface normals aren't known.
This would ultimately be used to plot surface error data (from the nominal surface) using python 3.x, and I'm sure I will have many more follow on questions once I find a good reconstruction algorithm.
The state of the art right now is Poisson Surface Reconstruction and its screened variant. Code for both is available, e.g. under http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version8.0/. It is also implemented in MeshLab if you want to take a quick look.
If you want to take a look at other methods, check out this STAR (state-of-the-art report). Page three has a table of a couple of approaches and their inputs.
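If you end up doing this in Python, here is a minimal sketch with Open3D (whose Poisson reconstruction is based on the screened variant); the column layout of the .xyz file is assumed to be whitespace-separated x y z i j k:

import numpy as np
import open3d as o3d

# Assumed column layout: x y z i j k
data = np.loadtxt("surface.xyz")
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(data[:, 0:3])
pcd.normals = o3d.utility.Vector3dVector(data[:, 3:6])  # use the given IJK normals directly

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("surface_mesh.ply", mesh)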

I need help drawing sunrays, glimmers, bursts, sparkles, etc in C

I am in the process of learning how to create a lens flare application. I've got most of the basic components figured out and now I'm moving on to the more complicated ones such as the glimmers / glints / spikeball as seen here: http://wiki.nuaj.net/images/e/e1/OpticalFlaresLensObjects.png
Or these: http://ak3.picdn.net/shutterstock/videos/1996229/preview/stock-footage-blue-flare-rotate.jpg
Some have suggested creating particles that emanate outwards from the center while fading out and either growing or shrinking, but I've tried this and there are just too many nested loops, which makes performance awful.
Someone else suggested drawing a circular gradient from white at the center to black at the radius, and then lightening and darkening areas algorithmically to produce rays.
Does anyone have any ideas? I'm really stuck on this one.
I am using a limited compiler that is similar to C but I don't have any access to antialiasing, predefined shapes, etc. Everything has to be hand-coded.
Any help would be greatly appreciated!
I would create large circle selections, then use a radial gradient. Each side of the gradient is white, but one side has 100% alpha and the other 0%. Once you have used the gradient tool to draw that gradient inside the circle, deselect it and use the transform tool to skew or, in a sense, smash it. Then duplicate it several times and rotate each copy, creating a spiral or circle, holding Ctrl to constrain when needed. Once those layers are in the rotation or design that you want, group them in a folder; then you can further affect them all at once with another transform or skew. When you use these really small, they look like little stars. You can also vary each one to make them distinct, for example making each one lower in opacity than the last, etc.
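A procedural version of the same radial-gradient idea needs only one pass over the pixels: brightness is a radial falloff multiplied by an angular modulation, which already gives a ray/spikeball look. A rough sketch in Python/NumPy for clarity (all constants are placeholders); the per-pixel math maps directly onto a plain C double loop with no antialiasing or predefined shapes required:

import numpy as np

W = H = 256
cx, cy = W / 2.0, H / 2.0
n_rays = 24          # number of spikes (placeholder)
phase = 0.0          # optional rotation of the spikes

y, x = np.mgrid[0:H, 0:W]
dx, dy = x - cx, y - cy
r = np.sqrt(dx * dx + dy * dy) / (W / 2.0)        # normalised radius, 0 at the centre
theta = np.arctan2(dy, dx)                        # angle of each pixel

radial = np.clip(1.0 - r, 0.0, 1.0) ** 2.0              # bright centre fading outwards
angular = 0.5 + 0.5 * np.cos(n_rays * theta + phase)    # alternating bright/dark spokes
spikes = radial * (0.3 + 0.7 * angular ** 4)            # raise to a power to sharpen the spokes
core = np.clip(1.0 - r * 4.0, 0.0, 1.0) ** 2.0          # small hot core in the middle

intensity = np.clip(core + spikes, 0.0, 1.0)   # final 0..1 brightness to write per pixel

Varying n_rays, the exponents and the phase, and adding a second rotated copy, gives the glints and bursts in the linked images without any particle loop.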
I found a few examples of how to do lens-flare 'via code'. Ideally you'd want to do this as a post-process - meaning after you're done with your regular render, you process the image further.
Fragment shaders are apt for this step. The easiest version I found is this one. The basic idea is to
Identify the really bright spots in your image and potentially downsample it.
Shoot rays from each fragment to the center of the image and sample some pixels along the way.
Accumulate the samples and apply further processing - chromatic distortion etc. - on the result.
And you get a whole range of options to play with.
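As a rough illustration of those three steps outside a shader, here is a sketch in Python/NumPy standing in for the fragment shader; the threshold, sample count and falloff are placeholders:

import numpy as np

def flare_streaks(img, n_samples=8, threshold=0.8, falloff=0.7):
    """img: float array (H, W) with values in 0..1. Returns an additive flare layer."""
    h, w = img.shape
    bright = np.where(img > threshold, img, 0.0)      # 1. keep only the bright spots
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(img)
    for s in range(1, n_samples + 1):
        t = s / float(n_samples)
        # 2. step each pixel towards the image centre and sample the bright-pass there
        sy = np.clip((ys + (cy - ys) * t).astype(int), 0, h - 1)
        sx = np.clip((xs + (cx - xs) * t).astype(int), 0, w - 1)
        # 3. accumulate with a decaying weight (chromatic offsets etc. would go here)
        out += bright[sy, sx] * (falloff ** s)
    return np.clip(out, 0.0, 1.0)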
Another more common alternative seems to be
Have a set of basic images (circles, hexes) and render them as a bunch of bright objects along the path from the camera to the light(s).
Composite this image on top of the regular render of your scene.
The problem is in determining when to turn the lens flare on, since it depends on whether a light is visible or occluded from the camera. GPU Gems comes to the rescue with better options.
A more serious, physically based implementation is described in this paper. It is a real-time method for producing lens flares, but you need hardware that supports both vertex and geometry shaders.

3d model construction using multiple images from multiple points (kinect)

Is it possible to construct a 3D model of a still object if images along with depth data are gathered from various angles? What I was thinking of is a sort of circular conveyor belt on which a Kinect would be placed, while the real object to be reconstructed sits in the middle. The conveyor belt then rotates around the object and lots of images are captured (perhaps 10 images per second), which would allow the Kinect to capture an image from every angle, including the depth data. Theoretically this should be possible. The model would also have to be recreated with its textures.
What I would like to know is:
whether there are any similar projects/software already available (any links would be appreciated);
whether this is possible within perhaps 6 months;
and how I would proceed to do this, e.g. any similar algorithm you could point me to.
Thanks,
MilindaD
It is definitely possible; there are a lot of working 3D scanners out there based on more or less the same principle of stereoscopy.
You probably know this, but just to give context: the idea is to get two images of the same point from different viewpoints and to use triangulation to compute its 3D coordinates in your scene. Although this part is quite easy, the big issue is finding the correspondences between the points in your two images, and this is where you need good software to extract and match similar points.
There is an open-source project for 3D vision called MeshLab, which includes 3D reconstruction* algorithms. I don't know the details of the algorithms, but the software is definitely a good entry point if you want to play with 3D.
I used to know some other ones, I will try to find them and add them here:
Insight3d
(*Wiki page has no content, redirects to login for editing)
Check out https://bitbucket.org/tobin/kinect-point-cloud-demo/overview which is a code sample for the Kinect for Windows SDK that does exactly this. Currently it takes the bitmaps captured by the depth sensor and iterates through the byte array to create a point cloud in PLY format that can be read by MeshLab. The next stage for us is to apply/refine a Delaunay triangulation algorithm to form a mesh instead of points, to which a texture can be applied. A third stage would then be a mesh-merging step to combine multiple captures from the Kinect into a full 3D object mesh.
This is based on some work I did in June using the Kinect for 3D-printing capture.
The .NET code in this source code repository will, however, get you started with what you want to achieve.
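For reference, the depth-to-point-cloud step is essentially a per-pixel back-projection through the camera intrinsics. A minimal sketch in Python (the intrinsic values below are rough defaults for the original Kinect depth camera and should be treated as assumptions; write_ply emits the kind of minimal ASCII PLY that MeshLab can read):

import numpy as np

# Approximate intrinsics for the Kinect depth camera (assumption; calibrate for real use)
fx = fy = 580.0
cx, cy = 320.0, 240.0

def depth_to_points(depth_mm):
    """depth_mm: (480, 640) array of depth values in millimetres, 0 = no reading."""
    h, w = depth_mm.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth_mm.astype(float) / 1000.0          # metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                    # drop pixels with no depth reading

def write_ply(path, pts):
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(pts))
        f.write("property float x\nproperty float y\nproperty float z\nend_header\n")
        for p in pts:
            f.write("%f %f %f\n" % (p[0], p[1], p[2]))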
Autodesk has a piece of software that will do what you are asking for; it is called "Photofly" and is currently in the Labs section. Using a series of images taken from multiple angles, the 3D geometry is created and then photo-mapped with your images to recreate the scene.
If you are more interested in the theoretical part of this problem (i.e. if you want to know how it works), here is a document from Microsoft Research about a moving depth camera and 3D reconstruction.
Try out VisualSfM (http://ccwu.me/vsfm/) by Changchang Wu (http://ccwu.me/)
It takes multiple images from different angles of the scene and outputs a 3D point cloud.
The algorithm is called "Structure from Motion".
Brief idea of the algorithm: it involves extracting feature points in each image, finding correspondences between them across images, building feature tracks, and estimating the camera matrices and thereby the 3D coordinates of the feature points.
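A stripped-down two-view version of that pipeline can be put together with OpenCV (K below is a placeholder intrinsic matrix; a full SfM system such as VisualSfM additionally builds tracks over many views and runs bundle adjustment):

import numpy as np
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # placeholder intrinsics

# 1. Extract feature points and match them across the two images
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. Estimate the relative camera pose from the essential matrix
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the matched points into a 3D point cloud
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T   # homogeneous -> Euclidean coordinates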
