Convert 2D planes to 3D model - algorithm

We have multiple 2D planar images of an object scanned from a fan-beam perspective. An example is in Fig 5 below. We have multiple grainy dotted planes covering the whole object.
The issue with these images is that they cannot be mapped directly into 3D because of the fan-beam deformation.
Are there correction algorithms/methods that can be recommended so that these planes can be mapped correctly into 3D and the object reconstructed properly?

Depending on how you store your data, there might be various approaches. Guessing that you store the data as points ("grainy dotted planes"), you can interpolate between corresponding points in consecutive planes and thereby get a scan of the entire object. This requires the points to be in the same frame, so you might have to apply some kind of transformation, i.e. estimate the pose of each plane with respect to a global frame.
Another approach might be a least-squares fit of each plane, which can then be used to stitch the object together. You might also find helpful approaches in the literature on scanning 3D objects with 2D methods. Hope this helps.
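A minimal sketch of that per-plane transformation, assuming each plane stores its fan-beam samples as (beam angle, range) pairs and that the pose of every plane in a global frame is known or has been estimated; all names here are illustrative, not an existing API:

```python
# Sketch: undo the fan-beam deformation inside each plane, then lift the plane
# into a common 3D frame. Assumes known per-plane pose (R, t); illustrative only.
import numpy as np

def fan_to_plane_xy(beam_angle_rad, rng, source_offset=0.0):
    """Convert (angle, range) samples measured from the beam source into
    Cartesian (x, y) coordinates within that plane."""
    x = rng * np.sin(beam_angle_rad)
    y = rng * np.cos(beam_angle_rad) - source_offset  # optional shift of the origin away from the source
    return np.column_stack([x, y])

def plane_to_world(xy, rotation_3x3, translation_3):
    """Lift the per-plane 2D points into the global 3D frame using the known
    pose (R, t) of that plane, with z = 0 inside the plane."""
    pts_plane = np.column_stack([xy, np.zeros(len(xy))])   # (N, 3)
    return pts_plane @ rotation_3x3.T + translation_3       # (N, 3)

# Stacking the transformed points of all consecutive planes gives one 3D point
# cloud, which can then be interpolated between planes or meshed.
```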

Related

Creating Heatmap Over 3D Model From Vector 3 Point Data

I am attempting to render a flat, dynamically created heatmap on top of a 3D model that is loaded from an OBJ (or STL).
I am currently loading and rendering an OBJ with Three.js. I have vector3 points that I am currently drawing as simple red cubes (image below). These data points have all been raycast onto my OBJ's mesh and lie on its surface. The vector3 points are loaded from an external data source and will change depending on what data is being viewed/collected.
I would like to render my vector3 point data into a heatmap on the surface of my OBJ. Here are some examples illustrating the type of visual effects I am trying to achieve:
I feel like vertex coloring is the way to achieve this, but my issue is that my OBJ model does not have enough tessellation for it. As you can see, many red dots fall on each face. I am struggling to find a way to draw colors onto my object's mesh exactly where my red point data is. I was assuming I would need to convert my random vector3 points into a mesh, but cannot find a method to do so.
I've looked at the possibility of generating a texture, but 1) I do not have a UV map for my OBJs and do not see a way to programmatically generate them and 2) I am a bit lost on how I would correlate vector3 point data to UV points.
I've looked at using shaders, but my vector3 point data appears to be too large for using a shader (could be hundreds of thousands of points). I also feel it is not the right approach to render the heatmap every frame and would rather only render it once on load.
I've looked into isosurfaces with point clouds and the marching cubes algorithm, but I didn't think this was the right direction, since my data is only somewhat like a point cloud and I am unsure how I would keep the result smooth along the surface of my OBJ mesh.
Although I would prefer to keep everything in JavaScript for viewing in the browser, I am open to doing server side processing in any language/program with REST so long as it can be automated without human intervention, and pushed back to the browser for rendering.
Any suggestions or guidance is appreciated.
I'm only guessing, but it seems like first you need UV coordinates that map every triangle to a texture. Rather than doing this by hand, I'd suggest using a modeling package. Most modeling packages have some way of automatically and uniformly mapping every triangle to a texture; Blender, for example, has automatic UV unwrapping tools.
Next, put the heatmap into the texture by computing which triangles are affected by each dot (your raycasting), looking up their texture coordinates, projecting that dot into texture space, and then writing the colors into that part of the texture. I'm only guessing, but you probably need to handle not just the exact hit point: heat that lands near the edge of a triangle needs to bleed over into the adjacent triangle, and that adjacent triangle might be using a completely different part of the texture.
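A minimal sketch of the "project the dot into texture space" step, written in Python/NumPy just to show the math (it ports directly to JavaScript); it assumes each hit point already knows which triangle it lies on and that the mesh has per-vertex UVs from the automatic unwrap. Function names are illustrative, and the cross-seam bleeding mentioned above is not handled here:

```python
# Sketch: splat one surface point into the heatmap texture via barycentric
# interpolation of the triangle's UVs. Illustrative names; no real API assumed.
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 3D point p inside triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def splat_point(heat, p, tri_xyz, tri_uv, radius_px=8, strength=1.0):
    """Map one surface point into UV space and add a Gaussian blob to the texture."""
    u0, u1, u2 = barycentric(p, *tri_xyz)
    uv = u0 * tri_uv[0] + u1 * tri_uv[1] + u2 * tri_uv[2]   # interpolated UV in [0, 1]^2
    h, w = heat.shape
    cx, cy = uv[0] * (w - 1), (1.0 - uv[1]) * (h - 1)       # texture pixel centre (v flipped)
    ys, xs = np.ogrid[:h, :w]
    heat += strength * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * radius_px ** 2))

# After splatting every point, normalise `heat`, map it through a colour ramp,
# and upload it once as the diffuse texture of the OBJ material.
```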

How to convert a picture into different views by the test position using ray tracing

I want to train a path-loss model. I have a map image, and I want to convert this map into different views depending on the test location (x, y).
I need a conversion algorithm to produce many different map views based on the test location. I can show an example of this (sorry, it is hard to describe):
At the top left is the map with 4 columns; at the bottom right is the converted new map.
I want to use a "light source" (at location A) to project onto the buildings in the map; some of the light will be blocked, and we get the shadow at the test location.
The shadows, determined by the AP location and the test location, then describe the environment in this area.
If you have any ideas for solving this, please let me know.
Thanks in advance
Cheng Hong
After some discussion and googling, I found out that I should use some ray-tracing technique on a 2D map.
In my setup I have two points, location A and location P, on a map.
Now I want to use ray tracing to combine the map and the two locations into a new map view.
In this new map view, point A is at the center, and shadows are added where the buildings (the black columns) in the original map block the light. This new map is then a kind of presentation or descriptor of the map plus the two location points. That is what I want to do.
You need to add more specs, e.g. is the map a raster image or vector data? This has nothing to do with conversion (hence the retag); you just want to render your 2D map as a 3D scene, or as a 2D slice of it (a single horizontal line). That can be done really easily.
raster map
Google Wolfenstein-style ray casting rendering techniques like the one below (see also the sketch at the end of this answer):
Algorithm for 2D Raytracer
vector map
construct a mesh from your map and render it with any 3D gfx API like OpenGL. To get started with this approach you need to grasp this:
Understanding 4x4 homogenous transform matrices
see also the sub-links in there ...
To implement the lighting conditions you can use any kind of shading; the easiest is normal shading. For more info see:
Normal shading (this may clarify a thing or two, for beginners)
Normal/Bump mapping (see the fragment shader and search for the dot product)
mirrored light (a slightly more complex lighting scheme)
simple complete GL+VAO/VBO+GLSL+shaders example in C++
Curved Frosted Glass Shader? (for sub-surface scattering)
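For the raster-map case, here is a minimal sketch of the ray-casting idea in Python (chosen just to show the algorithm): march rays outward from the light/AP location A over the occupancy grid and mark every cell behind the first building hit as shadow. The map encoding (1 = building, 0 = free) and the angular/step resolution are assumptions.

```python
# Sketch: 2D grid ray casting from point A, producing a lit/shadowed map.
import numpy as np

def shadow_map(grid, ax, ay, n_rays=720, step=0.5):
    """grid: 2D array, 1 where a building blocks light, 0 where it is free.
    Returns an array of the same shape: 1 = lit from (ax, ay), 0 = shadowed."""
    h, w = grid.shape
    lit = np.zeros_like(grid)
    lit[ay, ax] = 1
    for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(ang), np.sin(ang)
        x, y = ax + 0.5, ay + 0.5
        while 0 <= x < w and 0 <= y < h:
            cx, cy = int(x), int(y)
            lit[cy, cx] = 1
            if grid[cy, cx] == 1:     # ray blocked: everything further along stays shadowed
                break
            x += dx * step
            y += dy * step
    return lit

# The "new map view" for a test point P can then be read off `lit` (or rendered
# re-centred on P), since it encodes which parts of the map A can reach.
```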

What is the fastest shadowing algorithm (CPU only)?

Suppose I have a 3D model:
The model is given in the form of vertices, faces (all triangles) and normal vectors. The model may have holes and/or transparent parts.
For an arbitrarily placed light source at infinity, I have to determine:
[required] which triangles are (partially) shadowed by other triangles
Then, for the partially shadowed triangles:
[bonus] what fraction of the area of the triangle is shadowed
[superbonus] come up with a new mesh that describe the shape of the shadows exactly
My final application has to run on headless machines, that is, they have no GPU. Therefore, all the standard things from OpenGL, OpenCL, etc. might not be the best choice.
What is the most efficient algorithm to determine these things, considering this limitation?
Do you have a single mesh or more meshes?
Meaning: is the shadow projected onto a single 'ground' surface, or onto more surfaces such as room walls or even nearby objects? Depending on this, the solutions are very different.
for flat ground/wall surfaces
a projected render onto this surface is usually the best way.
The camera direction is opposite to the light direction and the screen is the surface being rendered to. The surface is usually not perpendicular to the light, so you need a projection to compensate... You need one render pass for each target surface, so it is not suitable if the shadow is projected onto a nearby mesh (it works just for ground/walls).
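A minimal sketch of that projection for a flat receiver, assuming a directional light (light at infinity) travelling along `light_dir` and a receiving plane n·x + d = 0; each caster vertex is slid along the light direction until it hits the plane, and the projected triangles form the shadow on that surface:

```python
# Sketch: project caster vertices onto a ground/wall plane along the light
# direction. Degenerate when the light is parallel to the plane (n·l = 0).
import numpy as np

def project_onto_plane(vertices, light_dir, plane_n, plane_d):
    """vertices: (N, 3) caster vertices; light_dir: direction the light travels;
    plane n.x + d = 0 is the ground/wall receiving the shadow."""
    l = np.asarray(light_dir, dtype=float)
    n = np.asarray(plane_n, dtype=float)
    t = -(vertices @ n + plane_d) / (l @ n)   # per-vertex distance along l to the plane
    return vertices + t[:, None] * l           # (N, 3) points lying on the plane

# Re-using the original triangle indices with these projected vertices gives the
# flattened shadow mesh; one such pass is needed per receiving surface.
```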
for more complicated scenes
You need a more advanced approach. There are quite a number of them, each with its advantages and disadvantages. I would use a voxel map, but if you are limited by space then some stencil/vector approach will be better. Of course, all of these techniques are quite expensive, and without a GPU I would not even try to implement them.
This is what a voxel map looks like:
If you want just self-shadowing, then the voxel map can cover only some bounding box around your mesh; in that case you do not incorporate the whole mesh volume, but instead just project each voxel along the light direction (ignoring the first voxel...) to avoid shadowing the lit surface.
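Since the question rules out a GPU, a brute-force CPU sketch of the [required] part may also be useful: flag a triangle as (at least partially) shadowed if a ray from its centroid towards the light at infinity hits any other triangle. This is a judgment call on my part rather than the voxel approach above; it is O(n^2), sampling several points per triangle instead of just the centroid would move it towards the [bonus] area fraction, and a BVH would be needed for large meshes.

```python
# Sketch: per-triangle occlusion test against a directional light using the
# Moller-Trumbore ray/triangle intersection. Brute force, CPU only.
import numpy as np

def ray_hits_triangle(orig, d, a, b, c, eps=1e-9):
    """True if the ray orig + t*d (t > eps) intersects triangle (a, b, c)."""
    e1, e2 = b - a, c - a
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:
        return False
    inv = 1.0 / det
    s = orig - a
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return (e2 @ q) * inv > eps

def shadowed_triangles(vertices, faces, light_dir):
    """Boolean per face: True if its centroid is occluded towards the light.
    light_dir points from the light (at infinity) towards the scene."""
    to_light = -np.asarray(light_dir, dtype=float)
    tris = vertices[faces]                       # (F, 3, 3)
    centroids = tris.mean(axis=1)
    flags = np.zeros(len(faces), dtype=bool)
    for i, c0 in enumerate(centroids):
        for j, (a, b, c) in enumerate(tris):
            if i != j and ray_hits_triangle(c0, to_light, a, b, c):
                flags[i] = True
                break
    return flags
```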

In GLGE, is it possible to specify the face of a mesh that a texture should be mapped to? (WebGL as well)

I'm trying to make an environment map in the form of a cube that has images mapped onto particular faces to give the illusion of being in the area (sort of like Google's Street View).
I'm trying to do it in GLGE; however, with my limited experience, I only know how to map one texture to a whole mesh (which is what I'm doing at the moment). If I were to create 6 different textures, would it be possible for me to specify the faces that those textures should be applied to?
You could generate the six faces of the cube as separate objects and use a different texture for each. An alternative is to set different texture coordinates for the different faces of the cube.
If you want ready-to-run code, three.js has a couple of skybox examples. E.g. http://mrdoob.github.com/three.js/examples/webgl_panorama_equirectangular.html
You should look at "UV Mapping". Check this example. Roughly, UVs describe how the polygons are mapped (in x,y) on the texture.
Sounds like you want a cube map texture — it takes six separate images, and you lookup in it with a direction vector rather than (u,v) coordinates. They are the usual way to do environments. Cube map textures are available in WebGL.
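For intuition, here is a minimal sketch (in plain Python, just to show the math rather than GLGE/WebGL API calls) of what a cube-map lookup does with a direction vector: pick the face of the dominant axis, then derive (u, v) on that face. Real cube maps additionally apply per-face sign/axis conventions, which are simplified here.

```python
# Sketch: direction vector -> cube-map face and (u, v). Simplified conventions.
import numpy as np

FACES = ["+x", "-x", "+y", "-y", "+z", "-z"]

def cubemap_lookup(direction):
    d = np.asarray(direction, dtype=float)
    ax = np.abs(d)
    major = int(np.argmax(ax))                     # 0 = x, 1 = y, 2 = z
    face = 2 * major + (0 if d[major] >= 0 else 1)
    m = ax[major]
    others = [i for i in range(3) if i != major]   # the two remaining components
    u = 0.5 * (d[others[0]] / m + 1.0)             # mapped into [0, 1]
    v = 0.5 * (d[others[1]] / m + 1.0)
    return FACES[face], u, v

# cubemap_lookup([0.2, -0.9, 0.1]) -> ('-y', u, v): the six skybox images are
# simply the textures bound to these six faces.
```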

How to generate one texture from N textures?

Let's say I have N pictures of an object, taken from N known positions. I also have the 3D geometry of the object, and I know all the characteristics of both the camera and the lens.
I want to generate a unique giant picture from the N pictures I have, so that it can be mapped/projected onto the object surface.
Does anybody know where to start? Articles, references, books?
Not sure if it helps you directly, but these guys have some amazing demos of some related techniques: http://grail.cs.washington.edu/projects/videoenhancement/videoEnhancement.htm.
Generate texture-mapping coords for your geometry
Generate a big blank texture
For each pixel
Figure out the point on the geometry it maps to
Figure out the pixel in each image that projects onto this point
Colour the pixel with a weighted blend of all these pixels, weighted by how much the surface normal is facing the corresponding camera and ignoring those images where there's another piece of geometry between the point and the camera
Apply your completed texture to the geometry
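A minimal sketch of the per-texel blend described in the steps above, assuming you already know each texel's 3D point and normal, can project that point into each calibrated image, and can test visibility (for example with the shadow-mapping trick mentioned below). The helpers `sample` and `visible` are placeholders for those pieces, not an existing API.

```python
# Sketch: weighted blend of N calibrated images for one texel of the big texture.
import numpy as np

def blend_texel(point, normal, cameras, images, sample, visible):
    """cameras: list of dicts with at least 'position'.
    sample(image, camera, point) -> RGB colour of the point's projection.
    visible(camera, point) -> False if other geometry occludes the point."""
    colour = np.zeros(3)
    total = 0.0
    for cam, img in zip(cameras, images):
        if not visible(cam, point):
            continue                                            # skip occluded views
        view = cam["position"] - point
        w = max(0.0, normal @ (view / np.linalg.norm(view)))    # weight by n . v (facing)
        colour += w * sample(img, cam, point)
        total += w
    return colour / total if total > 0 else colour              # unseen texels stay black
```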
Google up "shadow mapping", as the same problem is solved during that process (images of the scene as seen from some known points are projected onto the 3D geometry in the scene). The problem is well-understood and there is plenty of code.
I'd suspect that this can be done using some variation of projection maps mixed with image reconstruction.
Have a look at cubemapping. It may be useful. You may want to project another convex shape to the cube and use the resulting texture as a conventional cubemap texture.

Resources