Camera Geometry: Algorithm for "object area correction"

A project I've been working on for the past few months is calculating the top area of an object captured with a 3D depth camera from a top view.
Workflow of my project:
Capture an image of a group of objects (RGB and depth data) from the top view
Run instance segmentation on the RGB image
Calculate the real area of the segmented mask from the depth data (a sketch of this step follows below)
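For that last step, here is a minimal sketch of one common way to turn a depth image plus a segmentation mask into a physical area, assuming a pinhole model with focal lengths fx, fy in pixels (the function and variable names are my own, not from the project):

import numpy as np

def mask_area_from_depth(depth_m, mask, fx, fy):
    # Each pixel at depth Z covers roughly (Z/fx) x (Z/fy) metres on a plane
    # facing the camera, so the mask area is the sum of those footprints.
    z = depth_m[mask > 0]
    return float(np.sum((z / fx) * (z / fy)))

# hypothetical usage: depth in metres, boolean mask from the instance segmentation
# area_m2 = mask_area_from_depth(depth, mask, fx=615.0, fy=615.0)

Note that this per-pixel footprint ignores surface slant, which is exactly the effect the question below is about.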
Some problems with the project:
All the given objects have different shapes.
As an object moves toward the edge of the image, its side, not just its top, starts to become visible.
Because of this, the segmented mask area gradually increases.
As a result, the computed area of an object located near the edge of the image is larger than that of an object located in the center.
In the example image, object 1 is located in the middle of the field of view, so only its top is visible, but object 2 is located at the edge of the field of view, so part of its top is hidden and its side is visible.
Because of this, the segmented mask area is larger for objects on the periphery than for objects in the center.
I only want to find the area of the top of an object.
Example of what I want (image):
Is there a way to geometrically correct the area of an object located near the edge of the image?
I tried to correct the computed area by multiplying it by a factor derived from the angle between Vector 1 (from the center of the camera lens to the center of the floor) and Vector 2 (from the center of the lens to the center of gravity of the target object).
However, I gave up because I couldn't logically justify how much correction was needed.
fig 3:

What I would do is convert your RGB and depth images into a 3D mesh (a surface with bumps) using your camera settings (FOV, focal length), something like this:
Align already captured rgb and depth images
and then project it onto the ground plane (perpendicular to the camera view direction in the middle of the screen). To obtain the ground plane, simply take three 3D positions on the ground, p0, p1, p2 (forming a triangle), and use the cross product to compute the ground normal:
n = normalize(cross(p1-p0,p2-p1))
Now your plane is defined by p0 and n, so just convert each 3D coordinate by moving it along the normal by its signed distance to the plane (the orthogonal projection onto the plane), which, if I see it right, looks like this:
p' = p - n * dot(p - p0, n)
That should eliminate the problem with visible sides at the edges of the FOV. However, you should also take into account that when a side becomes visible, part of the top is hidden as well. To remedy that, you might find the axis of symmetry, use only the half of the top that is not partially hidden, and multiply that measured half area by 2 ...
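A minimal numpy sketch of that projection step, assuming the depth pixels have already been back-projected to 3D points and that p0, p1, p2 are three samples taken on the floor (the names and numbers are illustrative, not from the question):

import numpy as np

def ground_plane(p0, p1, p2):
    # plane through p0 with unit normal from the cross product of two floor edges
    n = np.cross(p1 - p0, p2 - p1)
    return p0, n / np.linalg.norm(n)

def project_to_plane(points, p0, n):
    # orthogonal projection of Nx3 points onto the plane (p0, n):
    # move each point along the normal by its signed distance to the plane
    d = (points - p0) @ n
    return points - np.outer(d, n)

# illustrative usage
p0, n = ground_plane(np.array([0.0, 0.0, 1.5]),
                     np.array([1.0, 0.0, 1.5]),
                     np.array([0.0, 1.0, 1.4]))
top_points = np.array([[0.2, 0.3, 1.1],
                       [0.8, 0.9, 1.2]])
flattened = project_to_plane(top_points, p0, n)
# the 2D area of `flattened` (e.g. via a convex hull in plane coordinates)
# is then measured parallel to the ground rather than to the image plane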

Accurate computation is virtually hopeless, because you don't see all sides.
Assuming your depth information is available as a range image, you can take the points inside the segmentation mask of a single chicken, estimate the vertical direction there, then rotate and project the points to obtain the top silhouette.
But as a part of the surface is occluded, you may have to reconstruct it using symmetry.
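A rough sketch of that rotate-and-project idea, assuming you already have the 3D points of one chicken and an estimate of the local vertical; the convex hull is only a stand-in for a proper silhouette area (a concave outline would need something like alpha shapes):

import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.transform import Rotation

def top_silhouette_area(points, up):
    # rotate so the estimated vertical maps onto +z, then drop z
    # (rotation about the vertical axis is arbitrary here, which doesn't affect the area)
    up = up / np.linalg.norm(up)
    R, _ = Rotation.align_vectors([[0.0, 0.0, 1.0]], [up])
    flat = R.apply(points)[:, :2]
    # for 2D input, ConvexHull.volume is the enclosed area
    return ConvexHull(flat).volume

# illustrative usage with made-up points
pts = np.random.rand(200, 3)
print(top_silhouette_area(pts, up=np.array([0.1, 0.0, 0.9])))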

There is no way to do this accurately for arbitrary objects, since there can be parts of the object that contribute to the "top area", but which the camera cannot see. Since the camera cannot see these parts, you can't tell how big they are.
Since all your objects are known to be chickens, though, you could get a pretty accurate estimate like this:
Use Principal Component Analysis to determine the orientation of each chicken.
Using many objects in many images, find a best-fit polynomial that estimates apparent chicken size by distance from the image center, and orientation relative to the distance vector.
For any given chicken, then, you can divide its apparent size by the estimated average apparent size for its distance and orientation, to get a normalized chicken size measurement.
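A hedged sketch of that normalization idea (the function names and the choice of a degree-2 polynomial are mine, purely illustrative):

import numpy as np

def orientation_2d(mask_xy):
    # principal axis of the mask pixel coordinates (PCA via SVD)
    centered = mask_xy - mask_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def fit_size_model(dists, rel_angles, areas, deg=2):
    # least-squares polynomial in (distance from image centre,
    # orientation angle relative to that direction) -> apparent area
    cols = [dists**i * rel_angles**j
            for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, areas, rcond=None)
    return coeffs

def predicted_area(coeffs, dist, rel_angle, deg=2):
    cols = [dist**i * rel_angle**j
            for i in range(deg + 1) for j in range(deg + 1 - i)]
    return float(np.dot(coeffs, cols))

# normalized size = measured_area / predicted_area(coeffs, dist, rel_angle)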

Related

Align Pointclouds to a coordinate system

I'm currently working on 3D-Reconstruction from images to measure the object size.
I have created a point cloud and now want to measure its size (in the scale of the point cloud; I know the real size, because it is the reference object).
But that's where I'm stuck. I previously thought about finding the outermost points (it's a box) and just calculating the distances between them. That works, of course, but it doesn't really give the width, length, and height of the object, because the point cloud is rotated and translated.
Plotted point clouds with coordinate axes
As you can see in the image, the point clouds are rotated and translated, and my question is:
How can I calculate the rotation and translation needed to align them with the coordinate axes?
If something was unclear, hit me up, and I will try to elaborate better.
Edit:
So I have now calculated the plane of one side of my object, which is:
0.96x - 0.03y + 0.28z - 4.11 = 0
I don't really care whether it's aligned with the XY-plane or the YZ-plane, but it has to be aligned with one of them.
Now I'm struggling with the calculation that takes me from the object's plane to the world plane.
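One way to do that last step, sketched with scipy; the plane normal comes from the edit above, and cloud_points is a made-up stand-in for the measured point cloud:

import numpy as np
from scipy.spatial.transform import Rotation

# normal of the fitted side plane 0.96x - 0.03y + 0.28z - 4.11 = 0
n = np.array([0.96, -0.03, 0.28])
n = n / np.linalg.norm(n)

# rotation that maps the measured normal onto +x, so the fitted side
# becomes parallel to the YZ-plane (use [0, 0, 1] instead for the XY-plane)
R, _ = Rotation.align_vectors([[1.0, 0.0, 0.0]], [n])

cloud_points = np.random.rand(500, 3)   # placeholder for your Nx3 point cloud
aligned = R.apply(cloud_points)
aligned -= aligned.min(axis=0)          # translate so the box starts at the origin
extent = aligned.max(axis=0)            # width / length / height along the axes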

Fit a shape to points

I am trying to fit known, well-defined shapes (e.g. boxes and cylinders, with configurable positions, rotations and dimensions) to a set of points with normals generated by sampling a 3D mesh. My current method is to define a custom fit function for each shape and pass it to a third-party optimisation function:
fitness = get_fitness(shape_parameters, points)
best_parameters = external.optimise(get_fitness, initial_parameters, points)
(for reference, I'm currently using Python 3 and scipy.optimize.minimize with bounds, but language is irrelevant).
This fitness function for a rectangle would look something like
def get_fitness(parameters, points):
    side_fitnesses = []
    for side in [top, right, bottom, left, back, front]:
        dists = get_side_distances(parameters, points, side)
        ndevs = get_side_normal_deviations(parameters, points, side)
        side_fitnesses.append(combine_dists_and_ndevs(dists, ndevs))
    fitnesses = choose_best_side_for_each_point(side_fitnesses)
    return mean(fitnesses)
However this means I have to determine outliers (with/without caching), and I can only fit one shape at a time.
For example (in 2D), for these points (with normals), I'd like the following result:
Notice that there are multiple shapes returned, and outliers are ignored. In general, there can be many, one or zero shapes in the input data. Post-processing can remove invalid (eg too small) results.
Note: my real problem is in 3D. I have segments of a 3D mesh representation of real-world objects, which means I have more information than just points/normals (such as face areas and connectivity) which are in the example above.
Further reading:
Not well-defined shape-fitting
Highly technical n-dimensional fitting
Primitive shape-fitting thesis
Unanswered SO question on something similar
PS: I'm not sure if StackOverflow is the best StackExchange site for this question
Well, so you will have to handle meshes with volume then. That changes things a lot ...
1. Segment the objects
by selecting all the faces enclosing their interior ... so it's similar to this:
Finding holes in 2d point sets?
Simply find a point inside a yet-unused part of the mesh ... and "fill" the volume until you hit all the faces it is composed of. Select those faces as belonging to the new object and mark them as used ... beware that touching objects may lead to faces being used twice or more ...
You can also do this with vector math: just test whether a line from some inside point to a face hits any other face ... if not, you have found a surface face ... similar to Hit test
2. Process each object (optional)
You can further segment the object mesh into the "planar" sub-surfaces it is composed of by grouping faces belonging to the same plane ... or inside an enclosing edge/contour ... then detect what they are:
triangle
rectangle
polygon
disc
From the count and type of faces you can detect basic objects like:
cone = 1 disc + 1 curved surface with a singular edge point above the disc center
box/cube = 6 rectangles/squares
cylinder = 2 discs + 1 curved surface with its center axis going through the disc centers
3. Compute basic geometric properties of the individual objects (optional)
such as BBOX or OBB, surface area, volume, geometric center, center of mass, ... (see the sketch after these steps)
Now just decide what type of object it is. For example, the ratio between surface area and volume can hint at a sphere or ellipsoid; if the OBB sides match, it hints at a box; if the geometric center and center of mass coincide, it hints at a symmetrical object ...
4. Pass the mesh to a fitting function for the candidate object type
Based on steps #2 and #3 you have an idea which object could be which shape, so just confirm it with your fitting function ...
To ease up this process you can use the properties from #3; for an example of this, see something similar:
ellipse matching
You can come up with similar techniques for basic 3D shapes ...
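For step #3, a small sketch of how the surface area and volume of a closed, consistently oriented triangle mesh can be computed (the volume via signed tetrahedra against the origin); the ratio check in the final comment is just one example heuristic:

import numpy as np

def mesh_surface_and_volume(vertices, faces):
    # vertices: Nx3 array, faces: Mx3 array of vertex indices into `vertices`
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # signed tetrahedron volumes summed over all faces, valid for a closed mesh
    volume = abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0)
    return area, volume

# isoperimetric-style hint: for a sphere, area**3 / volume**2 == 36*pi (about 113.1);
# the further above that a closed mesh is, the less sphere-like it is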

How to fit a rectangle into a space (get relative camera position) given its dimensions and shape?

Say we have a webcam image of a road. We have picked the 4 connected 2D lines of a rectangle, and we know there are 90° angles between them. We have set their real-life dimensions. How can one get the 3D camera position relative to that rectangle from such data?
Having 4 pairs of corresponding coordinates (the real-world ones and their positions in the photo image coordinate system), one can calculate the perspective transformation matrix, for example with the OpenCV function getPerspectiveTransform.
Then apply decomposeProjectionMatrix.
If you are using other means/libraries for image acquisition/processing, they might contain something similar.
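As a related route to the same camera pose, here is a minimal OpenCV sketch using cv2.solvePnP, which goes directly from the four 2D-3D correspondences to the pose; the rectangle size, pixel coordinates and intrinsics below are made-up numbers:

import numpy as np
import cv2

# corners of a 2 m x 1 m rectangle in its own plane (z = 0), in metres
object_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0]], dtype=np.float64)
# the same corners as picked in the photo, in pixels (illustrative values)
image_pts = np.array([[320, 410], [780, 400], [820, 610], [290, 640]], dtype=np.float64)
# camera intrinsics; focal length and principal point are assumptions here
K = np.array([[800, 0, 640], [0, 800, 360], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)  # None = no distortion
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec   # camera centre in the rectangle's coordinate frame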

Find which object3Ds the camera can see in Three.js - raycast from the camera to each object

I have a grid of points (object3Ds using THREE.Points) in my Three.js scene, with a model sat on top of the grid, as seen below. In the code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated by my orbit controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good, the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
gridPointsVisible = gridPoints.geometry.vertices.slice(0);
startPoint = camera.position.clone();
// create a ray from the camera position towards each point in the grid
for (var point in gridPoints.geometry.vertices) {
    var direction = gridPoints.geometry.vertices[point].clone();
    var vector = new THREE.Vector3();
    vector.subVectors(direction, startPoint);
    var ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // Array.pop() ignores its argument; splice out the occluded point instead
        gridPointsVisible.splice(gridPointsVisible.indexOf(gridPoints.geometry.vertices[point]), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better of way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.

Correct Translation for artificial horizon

I would like to draw an artificial horizon. The center of the view would represent perfectly horizontal view with roll rotating the horizontal line and pitch moving it up or down.
The question is: what is the correct calculation to translate the horizon line up or down (pitch) given the pitch angle.
My guess is that this probably depends on the FOV angle assumed for the camera, so this angle would need to be a factor in the algorithm sought. Ideally I would figure out this angle for the iPhone/iPad camera, so that the artificial horizon lines up with the actual horizon if you hold the device in front of you and look towards the horizon.
Until now I've been guesstimating the offset, but I would like to have the exact formula.
Try horizon_offset/(screen_height/2)=tan(pitch)/tan(vertical_FOV/2).
Look at the picture, and the formula derives itself. (Image source: zwibbler.com)
Update: I had two angles mixed up. One is the FOV angle of the camera; the other is the viewing angle of the screen. These are two different things. The latter depends on the viewing distance. You probably have to estimate this distance and adjust the magnification and/or focal distance so that objects visible on the screen have the same angular size as the same objects seen with the naked eye. (With my particular phone, you would need to magnify the image by an additional factor of about 3 after the 5x zoom, if the user stretches out their arm with the phone all the way forward.) Then the two angles are the same, and the formula works.
If you want to introduce magnification (i.e. objects on the screen have different sizes from their real-life counterparts), multiply the horizon offset by the magnification factor.
Update 2: When taking the viewing distance into account, the screen size cancels out, and the offset simply becomes viewing_distance * tan(pitch_angle) (with unit magnification).
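A small sketch of the original formula (the FOV value below is an assumption, not a measured iPhone value):

import math

def horizon_offset_px(pitch_deg, screen_height_px, vertical_fov_deg):
    # offset / (screen_height/2) = tan(pitch) / tan(vertical_FOV/2)
    pitch = math.radians(pitch_deg)
    half_fov = math.radians(vertical_fov_deg) / 2.0
    return (screen_height_px / 2.0) * math.tan(pitch) / math.tan(half_fov)

# e.g. 10 degrees of pitch on a 1136 px tall screen with an assumed 48 degree vertical FOV
print(horizon_offset_px(10, 1136, 48))   # roughly 225 px from the screen centre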
