Fit a shape to points - algorithm

I am trying to fit known well-defined shapes (eg boxes, cylinders; with configurable positions, rotations and dimensions) to a set of points with normals generated from sampling a 3D mesh. My current method is to define a custom fit function for each shape and pass it to a third-party optimisation function:
fitness = get_fitness(shape_parameters, points)
best_parameters = external.optimise(get_fitness, initial_parameters, points)
(for reference, I'm currently using Python 3 and scipy.optimize.minimize with bounds, but language is irrelevant).
This fitness function for a rectangle would look something like
def get_fitness(parameters, points):
    side_fitnesses = []
    for side in [top, right, bottom, left, back, front]:
        dists = get_side_distances(parameters, points, side)
        ndevs = get_side_normal_deviations(parameters, points, side)
        side_fitnesses.append(combine_dists_and_ndevs(dists, ndevs))
    fitnesses = choose_best_side_for_each_point(side_fitnesses)
    return mean(fitnesses)
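For a concrete illustration of the same pattern, here is a minimal runnable sketch (a toy example of mine, not the real shape model) that fits a 2D circle to noisy samples using scipy.optimize.minimize with bounds; the real box/cylinder fitness functions would slot into the same call:

import numpy as np
from scipy.optimize import minimize

def circle_fitness(parameters, points):
    # parameters = (cx, cy, r); fitness = mean squared distance of each
    # sample point from the circle boundary
    cx, cy, r = parameters
    dists = np.linalg.norm(points - np.array([cx, cy]), axis=1)
    return np.mean((dists - r) ** 2)

# toy data: noisy samples of a circle centred at (2, 1) with radius 1.5
angles = np.linspace(0.0, 2.0 * np.pi, 200)
points = np.c_[2.0 + 1.5 * np.cos(angles), 1.0 + 1.5 * np.sin(angles)]
points += np.random.normal(scale=0.02, size=points.shape)

result = minimize(circle_fitness, x0=[0.0, 0.0, 1.0], args=(points,),
                  bounds=[(-10, 10), (-10, 10), (0.1, 10)])
best_parameters = result.x  # expected to be close to [2.0, 1.0, 1.5]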
However, this means I have to determine outliers (with or without caching), and I can only fit one shape at a time.
For example (in 2D), for these points (with normals), I'd like the following result:
Notice that there are multiple shapes returned, and outliers are ignored. In general, there can be many, one or zero shapes in the input data. Post-processing can remove invalid (eg too small) results.
Note: my real problem is in 3D. I have segments of a 3D mesh representation of real-world objects, which means I have more information than just points/normals (such as face areas and connectivity), which isn't shown in the example above.
Further reading:
Not well-defined shape-fitting
Highly technical n-dimensional fitting
Primitive shape-fitting thesis
Unanswered SO question on something similar
PS: I'm not sure if StackOverflow is the best StackExchange site for this question

Well, so you will have to handle meshes with volume then. That changes things a lot...
1. Segment objects
by selecting all faces enclosing each object's interior... So it's similar to this:
Finding holes in 2d point sets?
So simply find a point inside a not-yet-used mesh... and "fill" the volume until you hit all the faces it is composed of. Select those faces as belonging to the new object and mark them as used... Beware: touching objects may lead to faces being used twice or more...
You can also do this with vector math: just test whether a line from some inside point to a face hits any other face... if not, you have found a surface face... similar to Hit test.
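As a rough sketch of that flood-fill idea (my own illustration: it assumes a face-adjacency map has already been built from shared edges, and it ignores the touching-objects caveat above):

from collections import deque

def segment_by_connectivity(faces, adjacency):
    # faces:     iterable of face indices
    # adjacency: dict mapping a face index to the indices of faces sharing
    #            an edge with it (assumed precomputed from the mesh)
    used = set()
    objects = []
    for seed in faces:
        if seed in used:
            continue
        component = []
        queue = deque([seed])
        used.add(seed)
        while queue:
            face = queue.popleft()
            component.append(face)
            for neighbour in adjacency[face]:
                if neighbour not in used:
                    used.add(neighbour)
                    queue.append(neighbour)
        objects.append(component)  # one connected set of faces = one object
    return objects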
2. Process object (optional)
You can further segment the object mesh into the "planar" objects it is composed of by grouping faces belonging to the same plane... or inside an enclosing edge/contour... then detect what they are:
triangle
rectangle
polygon
disc
From the count and type of these faces you can detect basic objects (a rough counting sketch follows this list), like:
cone = 1 disc + 1 curved surface with a singular edge point above the disc center
box/cube = 6 rectangles/squares
cylinder = 2 discs + 1 curved surface with its center axis going through the disc centers
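Purely as an illustration of that kind of counting rule (the counts and names are made up; real meshes would still need the geometric checks above):

def classify_from_face_types(n_discs, n_rects, n_curved):
    # very rough primitive guess from counts of detected sub-surfaces
    if n_discs == 2 and n_curved == 1:
        return "cylinder"
    if n_discs == 1 and n_curved == 1:
        return "cone"
    if n_rects == 6 and n_curved == 0:
        return "box"
    return "unknown"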
3. Compute basic geometric properties of individual objects (optional)
like BBOX or OBB, surface area, volume, geometric center, mass center, ...
Now just decide what type of object it is. For example, the ratio between surface area and volume can hint at a sphere or ellipsoid; if the OBB sides match, it hints at a box; if the geometric and mass centers coincide, it hints at a symmetrical object...
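A minimal sketch of such property computations, assuming a closed, consistently oriented triangle mesh given as NumPy arrays (the array layout is my assumption):

import numpy as np

def mesh_properties(vertices, faces):
    # vertices: (N, 3) float array, faces: (M, 3) int array of triangle indices
    tris = vertices[faces]                          # (M, 3, 3)
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # signed volume via the divergence theorem; valid for closed oriented meshes
    volume = abs(np.einsum('ij,ij->i', a, np.cross(b, c)).sum()) / 6.0
    bbox_min, bbox_max = vertices.min(axis=0), vertices.max(axis=0)
    centroid = tris.mean(axis=(0, 1))               # rough geometric centre
    return bbox_min, bbox_max, area, volume, centroid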
4. Pass the mesh to the fitting function for the candidate object type
So based on steps #2 and #3 you have an idea which object could be which shape, so just confirm it with your fitting function...
To ease up this process you can use the properties from #3; for an example of this, see:
ellipse matching
so you can come up with similar techniques for basic 3D shapes...

Related

Camera Geometry: Algorithm for "object area correction"

A project I've been working on for the past few months is calculating the top area of an object captured with a 3D depth camera from a top view.
Workflow of my project:
Capture an image of a group of objects (RGB and depth data) from the top view
Instance segmentation on the RGB image
Calculate the real area of the segmented mask with the depth data
Some problems in the project:
All given objects have different shapes
The side of the object, not just the top, becomes visible as the object moves toward the edge of the image.
Because of this, the mask area to be segmented gradually increases.
As a result, the actual area of an object located at the edge of the image is calculated to be larger than that of an object located in the center.
In the example image, object 1 is located in the middle of the field of view, so only the top of the object is visible, but object 2 is located at the edge of the field of view, so part of the top is lost and the side is visible.
Because of this, the mask area to be segmented is larger for objects located at the periphery than for objects located in the center.
I only want to find the area of the top of an object.
Example of what I want:
Is there a way to geometrically correct the area of an object located at the edge of the image?
I tried to calibrate by multiplying the calculated area by a specific factor based on the angle between vector 1 (connecting the center point of the camera lens to the center point of the floor) and vector 2 (connecting the center point of the lens to the center of gravity of the target object).
However, I gave up because I couldn't logically explain how much correction was needed.
What I would do is convert your RGB and depth image to a 3D mesh (a surface with bumps) using your camera settings (FOV, focal length), something like this:
Align already captured rgb and depth images
and then project it onto the ground plane (perpendicular to the camera view direction in the middle of the screen). To obtain the ground plane, simply take three 3D positions on the ground, p0, p1, p2 (forming a triangle), and use the cross product to compute the ground normal:
n = normalize(cross(p1-p0,p2-p1))
Now your plane is defined by p0 and n, so just convert each 3D coordinate by moving it along the normal by its signed distance to the ground plane; if I see it right, something like this:
p' = p - n * dot(p-p0,n)
That should eliminate the problem with visible sides at the edges of the FOV. However, you should also take into account that when a side is visible, some part of the top is hidden. To remedy that, you might find the axis of symmetry, use just the half of the top that is not partially hidden, and multiply the measured half area by 2...
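For reference, a small sketch of that projection step (my own Python/NumPy illustration; the 3D points are assumed to have been reconstructed from the depth image already):

import numpy as np

def project_to_ground(points, p0, p1, p2):
    # points: (N, 3) array of 3D positions from the depth map
    # p0, p1, p2: three non-collinear 3D points sampled on the ground
    points, p0, p1, p2 = [np.asarray(x, dtype=float) for x in (points, p0, p1, p2)]
    n = np.cross(p1 - p0, p2 - p1)
    n /= np.linalg.norm(n)                 # unit ground normal
    d = (points - p0) @ n                  # signed distance to the plane
    return points - d[:, None] * n         # p' = p - n * dot(p - p0, n)

The projected points can then be rasterized back into a mask (or passed to a polygon-area routine) to measure the corrected top area.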
Accurate computation is virtually hopeless, because you don't see all sides.
Assuming your depth information is available as a range image, you can consider the points inside the segmentation mask of a single chicken, estimate the vertical direction at that point, rotate and project the points to obtain the silhouette.
But as a part of the surface is occluded, you may have to reconstruct it using symmetry.
There is no way to do this accurately for arbitrary objects, since there can be parts of the object that contribute to the "top area", but which the camera cannot see. Since the camera cannot see these parts, you can't tell how big they are.
Since all your objects are known to be chickens, though, you could get a pretty accurate estimate like this:
Use Principal Component Analysis to determine the orientation of each chicken (see the sketch after this list).
Using many objects in many images, find a best-fit polynomial that estimates apparent chicken size by distance from the image center, and orientation relative to the distance vector.
For any given chicken, then, you can divide its apparent size by the estimated average apparent size for its distance and orientation, to get a normalized chicken size measurement.
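A sketch of the PCA orientation step, assuming the instance mask is available as a boolean image (the polynomial size correction would be fitted separately on top of this):

import numpy as np

def mask_orientation(mask):
    # mask: 2D boolean array (the instance-segmentation mask of one chicken)
    # returns the angle (radians) of the principal axis in image coordinates
    ys, xs = np.nonzero(mask)
    coords = np.column_stack([xs, ys]).astype(float)
    coords -= coords.mean(axis=0)
    cov = np.cov(coords, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # direction of largest variance
    return np.arctan2(major[1], major[0])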

Possible to prioritise drawing of objects in Three.js?

I am working on a CAD type system using threejs. I have thin objects next to other objects (think thin 2mm metal sheeting fixed to posts on a building measured in metres). When I am zoomed in it all looks fine. The objects do not intersect at all. As I zoom out the objects get smaller and I end up with cases where the post object 'glimmers' (sort of shows through) the metal sheet object as I rotate it around.
I understand it's the small numbers I am working with that are causing this effect. However, is there a way to set a priority such that one object (the metal sheeting) is more important than another object (post) so it doesn't get that sort of effect?
To answer the question from the title, it is possible to prioritize drawing order with:
myMesh.renderOrder = 5
myOtherMesh.renderOrder = 7
It is then possible to apply different depth effects, turn off the depth test, etc.
Another way is to group objects with layers. Set the appropriate layer mask on the camera and then render (multiple times):
myMesh.layers.set(5)
camera.layers.set(1)
renderer.render(scene,camera)
camera.layers.set(5)
renderer.render(scene,camera)
This is called z-fighting: two fragments are so close in the given depth space that their z-values fall within the margin of error, so their apparent depth order can get inverted.
The easiest way to resolve this is to reduce the range covered by your depth buffer. This is controlled by the near and far properties on your camera. You'll need to play with the values to determine what works best for your scenario. If you can minimize the distance between the planes, you'll have better luck avoiding z-fighting.
For example, if (as a loose estimate) the bounding sphere of your entire model has a diameter of 100, then the distance between near and far need only be 100. However, their values are set as the distance into camera space. So as you zoom out, and your camera moves further away, you should adjust the values to maintain the minimum distance between them. If your camera is at z = 100, then set near = 50 and far = 150. When you pull your camera back to z = 250, then update near = 200 and far = 300.
Another option is to use the WebGLRenderer.logarithmicDepthBuffer option. (example)
Edit: There is one other cause: the faces of the shapes are actually co-planar. If two triangles are occupying the same space, then you're all but guaranteeing z-fighting.
The simple solution is to move one of the components such that the faces are no longer co-planar. You could also potentially apply a polygonOffset to the sheet metal material, but your use-case doesn't sound like that is appropriate.

Find which Object3Ds the camera can see in Three.js - raycast from the camera to each object

I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sat on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated using my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good. However, the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously):
gridPointsVisible = gridPoints.geometry.vertices.slice(0);
startPoint = camera.position.clone();
// create a ray from the camera position to each point in the grid
for (var point in gridPoints.geometry.vertices) {
    direction = gridPoints.geometry.vertices[point].clone();
    vector.subVectors(direction, startPoint);
    ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // the ray hits the model before the grid point, so the point is occluded
        gridPointsVisible.splice(gridPointsVisible.indexOf(gridPoints.geometry.vertices[point]), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better of way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
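To make the colour-to-point mapping concrete, here is a sketch of the read-back step, shown in Python/NumPy for brevity (in the browser the same arithmetic runs over the array filled by renderer.readRenderTargetPixels); the id = r + 256*g + 65536*b encoding and the white background are assumptions of this sketch:

import numpy as np

def visible_point_ids(rgba_buffer):
    # rgba_buffer: flat uint8 array of length width*height*4 read back from
    # the off-screen render target; each point was drawn with a flat colour
    # encoding its index, and the background cleared to white (0xFFFFFF)
    px = np.asarray(rgba_buffer, dtype=np.uint8).reshape(-1, 4).astype(np.int64)
    ids = px[:, 0] + px[:, 1] * 256 + px[:, 2] * 65536
    return np.unique(ids[ids != 0xFFFFFF])   # indices of visible points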

Creating Heatmap Over 3D Model From Vector 3 Point Data

I am attempting to render a flat, dynamically created heatmap on top of a 3D model that is loaded from an OBJ (or STL).
I am currently loading and rendering an OBJ with Three.js. I have Vector3 points that I am currently drawing as simple red cubes (image below). These data points are all raycast onto my OBJ's mesh and lie on its surface. The Vector3 points are loaded from an external data source and will change depending on what data is being viewed/collected.
I would like to render my vector3 point data into a heatmap on the surface of my OBJ. Here are some examples illustrating the type of visual effects I am trying to achieve:
I feel like vertex coloring is the method of achieving this, but my issue is that my OBJ model does not have enough tessellation to do this. As you can see many red dots fall on each face. I am struggling to find a way to draw over my object's mesh with colors exactly where my red point data is. I was assuming I would need to convert my random vector3 points into a mesh, but cannot find a method to do so.
I've looked at the possibility of generating a texture, but 1) I do not have a UV map for my OBJs and do not see a way to programmatically generate them and 2) I am a bit lost on how I would correlate vector3 point data to UV points.
I've looked at using shaders, but my vector3 point data appears to be too large for using a shader (could be hundreds of thousands of points). I also feel it is not the right approach to render the heatmap every frame and would rather only render it once on load.
I've looked into isosurfaces with point clouds and the marching cubes algorithm, but I didn't think this was the right direction, since my data is only somewhat like a point cloud and I am unsure how I would keep the result smooth along the surface of my OBJ mesh.
Although I would prefer to keep everything in JavaScript for viewing in the browser, I am open to doing server side processing in any language/program with REST so long as it can be automated without human intervention, and pushed back to the browser for rendering.
Any suggestions or guidance is appreciated.
I'm only guessing, but it seems like first you need UV coordinates that map every triangle to a texture. Rather than do this by hand, I'd suggest using a modeling package; most modeling packages have some way of automatically and uniformly mapping every triangle to a texture, for example in Blender.
Next, put the heatmap in the texture by computing which triangles are affected by each dot (your raycasting), looking up their texture coordinates, projecting each dot into texture space, and then writing the colors into that part of the texture. I'm only guessing, but you probably need to consider not just the exact points but adjacent triangles as well, since heat info that lands near the edge of a triangle needs to bleed over into the adjacent triangle, and that adjacent triangle might be using a completely different part of the texture.
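As a rough sketch of that splatting step (my own illustration: it assumes a UV map already exists, takes the hit's interpolated UV as input, and uses a Gaussian falloff so heat bleeds across texel, and hence triangle, boundaries):

import numpy as np

def splat_heat(texture, uv, radius_px=5, strength=1.0):
    # texture: (H, W) float array used as the heat accumulation buffer
    # uv:      (u, v) texture coordinate of the raycast hit, each in [0, 1]
    #          (e.g. from barycentric interpolation of the hit triangle's UVs)
    h, w = texture.shape
    cx, cy = uv[0] * (w - 1), (1.0 - uv[1]) * (h - 1)   # flip v to image row
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
    texture += strength * np.exp(-dist2 / (2.0 * radius_px ** 2))

The accumulated buffer can then be mapped through a colour ramp and uploaded as the model's texture once, rather than recomputed every frame.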

What is the fastest shadowing algorithm (CPU only)?

Suppose I have a 3D model:
The model is given in the form of vertices, faces (all triangles) and normal vectors. The model may have holes and/or transparent parts.
For an arbitrarily placed light source at infinity, I have to determine:
[required] which triangles are (partially) shadowed by other triangles
Then, for the partially shadowed triangles:
[bonus] what fraction of the area of the triangle is shadowed
[superbonus] come up with a new mesh that describe the shape of the shadows exactly
My final application has to run on headless machines, that is, they have no GPU. Therefore, all the standard things from OpenGL, OpenCL, etc. might not be the best choice.
What is the most efficient algorithm to determine these things, considering this limitation?
Do you have a single mesh or more meshes?
Meaning: is the shadow projected onto a single 'ground' surface, or onto more surfaces like room walls or even nearby objects? Depending on this, the solutions are very different.
For flat ground/wall surfaces
the best way is usually a projected render onto this surface:
the camera direction is opposite to the light direction and the 'screen' is the render-to surface. The surface is usually not perpendicular to the light, so you need to use a projection to compensate... You need one render pass for each target surface, so it is not suitable if the shadow is projected onto nearby meshes (it is just for ground/walls).
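A small sketch of that projection for a light at infinity (names are mine; p0 and n define the receiving ground/wall plane):

import numpy as np

def project_along_light(points, light_dir, p0, n):
    # points:    (N, 3) mesh vertices
    # light_dir: direction the light travels (directional light)
    # p0, n:     a point on the receiving plane and its unit normal
    # shadow of p is p + t*light_dir, where t solves dot(p + t*light_dir - p0, n) = 0
    points, light_dir, p0, n = [np.asarray(x, dtype=float) for x in (points, light_dir, p0, n)]
    denom = light_dir @ n
    if abs(denom) < 1e-9:
        raise ValueError("light is parallel to the receiving plane")
    t = -((points - p0) @ n) / denom
    return points + t[:, None] * light_dir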
For more complicated scenes
you need to use a more advanced approach. There are quite a number of them and each has its advantages and disadvantages. I would use a voxel map, but if you are limited by space then some stencil/vector approach will be better. Of course all of these techniques are quite expensive, and without a GPU I would not even try to implement them.
This is how a voxel map looks:
If you want just self-shadowing, then the voxel map can cover just some bounding box around your mesh. In that case you do not incorporate the whole mesh volume; instead, just march each surface voxel along the light direction (ignoring the first voxel...) to avoid shadowing the lit surface itself.
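Separately from the voxel approach, here is a brute-force CPU sketch for the [required] and [bonus] items: sample each triangle and cast rays toward the light with a Möller-Trumbore test. It is O(triangles^2 * samples), so a BVH or regular grid would be needed for real meshes, and the sample count trades accuracy of the shadowed-area fraction against run time:

import numpy as np

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection (counts only hits with t > eps)
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return False
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return (e2 @ q) * inv > eps

def shadowed_fractions(triangles, light_dir, samples=16, rng=None):
    # triangles: (M, 3, 3) array of triangle vertices
    # light_dir: direction the light travels; occlusion rays go the other way
    # returns, per triangle, the estimated fraction of its area in shadow
    rng = np.random.default_rng() if rng is None else rng
    triangles = np.asarray(triangles, dtype=float)
    to_light = -np.asarray(light_dir, dtype=float)
    fractions = np.zeros(len(triangles))
    for i, (a, b, c) in enumerate(triangles):
        # uniform random barycentric samples on triangle i
        r1, r2 = rng.random(samples), rng.random(samples)
        flip = r1 + r2 > 1.0
        r1[flip], r2[flip] = 1.0 - r1[flip], 1.0 - r2[flip]
        pts = a + r1[:, None] * (b - a) + r2[:, None] * (c - a)
        hits = sum(
            any(ray_hits_triangle(p, to_light, triangles[j])
                for j in range(len(triangles)) if j != i)
            for p in pts)
        fractions[i] = hits / samples      # > 0 means (partially) shadowed
    return fractions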
