Distance to Object with Webcam C920HD, or use OpenCV calibrate.py?

I am trying to determine the distance of an object from my camera, as well as the object's height. Is this possible with the data below, or do I need to use OpenCV calibrate.py to gather more information? I am confused because the Logitech C920HD has a 3 MP sensor and scales to 15 MP via software.
I have the following info:
Resolution: 1920x1080 px
Focal length: 3.67 mm
Pixel size: 3.98 µm
Sensor size: 1/2.88 inch
Object real height: 180 mm
Object image height: 370 px
I checked this formula:
distance (mm) = 3.67 mm * 180 mm * 1080 px / (511 px * (1/2.88) inch * 25.4 mm/inch)
This gives me 15.8 cm, although it should be about 60 cm.
What am I doing wrong?
Thanks for help!

Your formula looks correct. However, for it to hold over the entire image plane, you should correct lens distortion first, e.g., following the answer to
Camera calibration, reverse projection of pixel to direction
Along the way, OpenCV's lens calibration module will estimate your true focal length.
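For reference, here is a minimal OpenCV calibration sketch in Python (a rough outline only; the 9x6 chessboard pattern size and the calib_*.png file pattern are placeholders):

import glob
import cv2
import numpy as np

pattern_size = (9, 6)            # inner-corner count of the chessboard (placeholder)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for fname in glob.glob('calib_*.png'):          # placeholder file pattern
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics (fx, fy, cx, cy) and distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print('focal length (px):', K[0, 0], K[1, 1])
print('distortion coefficients:', dist.ravel())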
Filling in the formula gives
Distance = 3.67 mm * 180 mm * 1080/511 / sensor_height_mm = 1396 mm^2 / sensor_height_mm
This leaves sensor_height_mm unknown. Given your camera's 16:9 format:
w^2 + h^2 = D^2
(16x)^2+(9x)^2 = D^2
<=>
x = sqrt( D^2/337 )
<=>
h = 9x = 9*sqrt( D^2/337 )
Remember the rule of 16:
https://photo.stackexchange.com/questions/24952/why-is-a-1-sensor-actually-13-2-%C3%97-8-8mm/24954
Most importantly, a 1/2.88" sensor has an image circle diameter of 16/2.88 mm rather than 25.4/2.88 mm. Funnily enough, the true image circle diameter is metric. Thus the sensor diagonal is
D = 16 mm/ 2.88 = 5.556 mm
and
sensor_height_mm = h = 2.72 mm
giving
Distance = 513 mm
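For reference, a short Python version of the computation above (a sketch only, using the numbers from the question and this answer):

import math

f_mm            = 3.67         # focal length
real_height_mm  = 180.0        # real object height
image_height_px = 1080         # image height in pixels
object_px       = 511          # object height in pixels
D_mm            = 16.0 / 2.88  # sensor diagonal via the "rule of 16"

sensor_height_mm = 9 * math.sqrt(D_mm ** 2 / 337)     # ~2.72 mm for a 16:9 sensor
distance_mm = (f_mm * real_height_mm * image_height_px
               / (object_px * sensor_height_mm))      # ~513 mm
print(round(distance_mm), 'mm')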
Note that this distance is measured with respect to the lens's first principal point, not the sensor position or the front element of the lens.
As you correct the barrel distortion, the reading should get more accurate; the distortion is quite strong for this camera (I have a similar one).
Hope this helps.

Related

How to find the pixel location of a GPS point in an orthoimage with known orientation and GPS location

I have a problem where I need to determine whether a given latitude/longitude GPS point is in a given orthoimage (approx. 1 hectare area) with known real-world orientation and GPS location (corresponding to the center of the image).
That is, given a GPS-point P, I need to determine:
Is point P located in the orthoimage, and if yes,
What is the pixel location of point P in the orthoimage.
My question is summarized in the following image:
As you can see in the image, I know the GPS coordinates of the image (center) and where North is located with respect to the image. Also, I know how many centimeters on the ground each pixel corresponds to.
My question is: What would be an efficient and smart way to achieve the goals in my problem?
One approach I had in mind was to solve a linear mapping between the GPS and pixel points and then use this mapping to answer both questions 1-2. Even though the earth has curvature, so the GPS coordinates are (I'd say) more like a parabolic function of the pixel coordinates, the distances here are very small (one image covers roughly 1 hectare), so I could assume without significant loss of accuracy that the GPS coordinates change locally linearly with respect to the pixel coordinates.
What do you think? Thank you.
Update:
The orthophotos have been taken with a Phantom 4 Pro drone with gimbal camera system.
I thought about one possibility myself, not perfect but it's a start:
The following information is given:
a rectangular orthoimage Img, the yaw of the image (that is, how many degrees the image is rotated away from north), and pix_size, the pixel size on the ground (centimeters/pixel).
The problem is: Given an arbitrary GPS-point p = (lat, long), determine the pixel location of p in Img.
Denote c = (latc, longc) and cp = (x,y) as the GPS- and pixel-coordinates of the center point of Img.
Determine how much we must move along the North-South and West-East axes to get from c to p. Let lat_delta = latc - lat and long_delta = longc - long. If lat_delta < 0, p is further north than c; otherwise p is further south than c. The same goes analogously for long_delta.
if lat_delta < 0:
    pN = [latc + abs(lat_delta), longc]
else:
    pN = [latc - abs(lat_delta), longc]

if long_delta < 0:
    pE = [latc, longc + abs(long_delta)]
else:
    pE = [latc, longc - abs(long_delta)]
Now the points c, p, pN and pE form a "spherical" right triangle (I think I can safely assume it to be planar, because the orthophoto covers at most about 1 hectare), so the Pythagorean theorem applies well enough for my purposes.
Next, I calculate the ground distances dN = Haversine(c, pN) and dE = Haversine(c, pE), which tell me how far I must move on the ground along the North-South and West-East axes to get from c to p.
Now I will apply a rotation matrix R(-Yaw) to vectors n = [0,1] and e = [1,0], which represent the upwards and right vectors in my pixel coordinate system. So I get nr = R(-Yaw)*n and er = R(-Yaw)*e where nr is a unit pixel vector pointing towards North in the image and er is similarly a unit pixel vector pointing towards East in the image.
Next, I calculate the ratios mN = dN/pix_size and mE = dE/pix_size (the factors also need to take into account the +- direction). Now I calculate the pixel location of p by:
pp = cp + mN*nr + mE*er,
where I can now easily check if the pixel values pp are within the bounds of the image Img.
Of course, this method does not work in the general large-area case and would need to be refined for that purpose.
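A rough Python/NumPy sketch of the procedure described above (the function names are mine; pix_size is taken in metres per pixel here, and the yaw/axis sign conventions are assumptions that may need flipping for your data):

import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    # ground distance in metres between two latitude/longitude points
    R = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(np.radians(lon2 - lon1) / 2) ** 2)
    return 2 * R * np.arcsin(np.sqrt(a))

def gps_to_pixel(lat, lon, latc, lonc, cp, yaw_deg, pix_size, img_w, img_h):
    # signed ground offsets from the image centre along north and east (metres)
    dN = haversine_m(latc, lonc, lat, lonc) * np.sign(lat - latc)
    dE = haversine_m(latc, lonc, latc, lon) * np.sign(lon - lonc)
    # unit pixel vectors pointing north and east in the image
    # (pixel y axis points down; for yaw = 0 the top of the image faces north)
    yaw = np.radians(yaw_deg)
    north = np.array([np.sin(yaw), -np.cos(yaw)])
    east = np.array([np.cos(yaw), np.sin(yaw)])
    # pixel location of the query point and an inside-the-image test
    pp = np.asarray(cp, dtype=float) + (dN / pix_size) * north + (dE / pix_size) * east
    inside = bool(0 <= pp[0] < img_w and 0 <= pp[1] < img_h)
    return pp, inside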

Calculate the angles of a pixel to a camera plane in a depth-image

I have a z-image from a ToF camera (Kinect V2). I do not have the pixel size, but I know that the depth image has a resolution of 512x424. I also know that it has a FOV of 70.6x60 degrees.
I asked how to get the pixel size here before. In MATLAB, the code looks like the following.
The brighter the pixel, the closer the object.
close all
clear all
%Load image
depth = imread('depth_0_30_0_0.5.png');
frame_width = 512;
frame_height = 424;
horizontal_scaling = tan((70.6 / 2) * (pi/180));
vertical_scaling = tan((60 / 2) * (pi/180));
%pixel size
with_size = horizontal_scaling * 2 .* (double(depth)/frame_width);
height_size = vertical_scaling * 2 .* (double(depth)/frame_height);
The image itself is a cube rotated by 30 degrees.
What I want to do now is calculate the horizontal angle of a pixel to the camera-plane and the vertical angle to the camera plane.
I tried to do this with triangulation: I calculate the z-distance from one pixel to the next, first in the horizontal direction and then in the vertical direction, using a convolution:
%get the horizontal errors
dx = abs(conv2(depth,[1 -1],'same'));
%get the vertical errors
dy = abs(conv2(depth,[1 -1]','same'));
After this, I calculate the angles via atan, like this:
horizontal_angle = rad2deg(atan(with_size ./ dx));
vertical_angle = rad2deg(atan(height_size ./ dy));
horizontal_angle(isnan(horizontal_angle)) = 0;
vertical_angle(isnan(vertical_angle)) = 0;
Which gives back promising results, like these:
However, using a slightly more complex image, which is turned by 60° and 30°,
gives back what looks like the same angle image for the horizontal and vertical angles.
After subtracting the two angle images from each other, I do get a difference image, which shows that there is a difference between the two.
So, I have the following questions: How can I verify this concept? Is the math correct and the test case just poorly chosen? Are the horizontal and vertical angles in the two images simply too close to each other? Are there any errors in the calculation?
While my previous code may look good, it had a flaw. I tested it with smaller images (5x5, 3x3 and so on) and saw that there is an offset introduced by the difference image (dx, dy) produced by the convolution. It is simply not possible to map the difference image (which holds the difference between two pixels) back onto the pixels themselves, since the difference image is smaller than the original one.
For a quick fix, I switched to a centered difference. So I changed the filter mask to:
%get the horizontal differences
dx = abs(conv2(depth,[1 0 -1],'valid'));
%get the vertical differences
dy = abs(conv2(depth,[1 0 -1]','valid'));
And changed the angle function to:
%get the angles by the tangent
horizontal_angle = rad2deg(atan(with_size(2:end-1,2:end-1)...
./ dx(2:end-1,:)))
vertical_angle = rad2deg(atan(height_size(2:end-1,2:end-1)...
./ dy(:,2:end-1)))
Also, I used a padding function to bring the angle maps back to the same size as the original image.
horizontal_angle = padarray(horizontal_angle,[1 1],0);
vertical_angle = padarray(vertical_angle,[1 1],0);
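For what it's worth, here is a rough NumPy sketch of the same idea (per-pixel metric size from the FOV, centered depth differences, then atan); the function name is mine, and the border handling differs slightly from the 'valid' convolution used above:

import numpy as np

def surface_angles(depth, fov_h_deg=70.6, fov_v_deg=60.0):
    # Approximate horizontal/vertical angles (degrees) of the surface at each pixel.
    depth = depth.astype(np.float64)
    h, w = depth.shape

    # metric size of one pixel at that pixel's depth, derived from the field of view
    px_w = 2 * np.tan(np.radians(fov_h_deg / 2)) * depth / w
    px_h = 2 * np.tan(np.radians(fov_v_deg / 2)) * depth / h

    # centered depth differences, equivalent to conv2 with [1 0 -1]
    dz_dx = np.abs(np.gradient(depth, axis=1)) * 2
    dz_dy = np.abs(np.gradient(depth, axis=0)) * 2

    with np.errstate(divide='ignore', invalid='ignore'):
        horizontal = np.degrees(np.arctan(px_w / dz_dx))
        vertical = np.degrees(np.arctan(px_h / dz_dy))
    return np.nan_to_num(horizontal), np.nan_to_num(vertical)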

Google Tango: Aligning Depth and Color Frames

I would like to align a (synchronous) depth/color frame pair, using the Google Tango tablet, such that, assuming that both frames have the same resolution, each pixel in the depth frame corresponds to the same pixel in the color frame, i.e., I would like to achieve a retinotopic mapping. How can this be achieved using the latest C API (Hilbert Release Version 1.6)? Any help on this will be greatly appreciated.
Generating simple, crude UV coordinates to map Tango point cloud points back onto the source image (texture coordinates) - see the comments above for more details; we've messed this thread up but good :-( The language is C# and the classes are .NET. FieldOfView(true) returns the horizontal field of view, FieldOfView(false) the vertical one.
public PointF PictureUV(Vector3D imagePlaneLocation)
{
    // u is a function of x where y is 0
    double u = Math.Atan2(imagePlaneLocation.X, imagePlaneLocation.Z);
    u += (FieldOfView(true) / 2.0);
    u = u / FieldOfView(true);

    double v = Math.Atan2(imagePlaneLocation.Y, imagePlaneLocation.Z);
    v += (FieldOfView() / 2.0);
    v = v / FieldOfView();

    return new PointF((float)u, (float)(1.0 - v));
}
Mark, thanks for your quick response. Probably my question was a bit imprecise. You are of course damn right in saying that a retinotopic mapping between a 2D and a 3D image cannot be established. Shame on me. Nonetheless,
what I need is a mapping in which all depth samples (x_n,y_n,d_n), 1<=n<=N, N being the number of depth values, correspond to the same pixels (x_n,y_n) in the (synchronized) color frame. It is well taken that the depth sensor cannot provide depth information for troublesome areas in the visual field.
One of your conditions is not possible - there is no guarantee that Tango will hand you a point cloud measurement of something in the visual field if it has trouble seeing it. Also, there isn't a 1:1 correspondence between pixels and the depth frame, as the depth info is 3D.
I have not tried this, but we can probably do the following:
for each (X, Y, Z) from the point cloud:
    u_pixel = -(X/Z) * Fx, v_pixel = -(Y/Z) * Fy
    x = (u - cx) / Fx, y = (v - cy) / Fy
For distortion correction (k1, k2, k3 come from the distortion[] part of TangoIntrinsics, r = sqrt(x^2 + y^2)):
    x_corrected = x * (1 + k1 * r^2 + k2 * r^4 + k3 * r^6)
    y_corrected = y * (1 + k1 * r^2 + k2 * r^4 + k3 * r^6)
Then we can convert the normalized x_corrected, y_corrected to x_raster, y_raster by using the reverse of the above formula (x_raster = x_corrected * Fx + cx, y_raster = y_corrected * Fy + cy).
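Not Tango-specific code, just a small Python sketch of that pipeline under the usual assumptions (pinhole projection plus three-term radial distortion; fx, fy, cx, cy and k1..k3 would come from the camera intrinsics, and sign conventions may differ for your coordinate frame):

import numpy as np

def project_points(points_xyz, fx, fy, cx, cy, k1, k2, k3):
    # Project 3D points in the camera frame to raster (pixel) coordinates,
    # applying three-term radial distortion to the normalized coordinates.
    X, Y, Z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    x, y = X / Z, Y / Z                        # normalized image coordinates
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    u = (x * scale) * fx + cx                  # back to raster coordinates
    v = (y * scale) * fy + cy
    return np.stack([u, v], axis=1)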

Recognizing edges based on points and normals

I have a bit of a problem categorizing points based on relative normals.
What I would like to do is use the information I got below to fit a simplified polygon to the points, with a bias towards 90 degree angles to an extent.
I have rough (although not very accurate) normal lines for each point, but I'm not sure how to separate the data based on closeness of the points and closeness of the normals. I plan to do a linear regression after chunking the points for each face, as the normal lines sometimes do not fit well with the actual faces (although they are close to each other within each face).
Example:
(image: http://a.imageshack.us/img842/8439/ptnormals.png)
Ideally, I would like to be able to fit a rectangle around this data. However, the polygon does not need to be convex, nor does it have to be aligned with the axis.
Any hints as to how to achieve something like this would be awesome.
Thanks in advance
I am not sure if this is what you are looking for, but here's my attempt at solving the problem as I understood it:
I am using the angles of the normal vectors to find points belonging to each side of the rectangle (left, right, up, down), then simply fit a line to each.
%# create random data (replace those with your actual data)
num = randi([10 20]);
pT = zeros(num,2);
pT(:,1) = rand(num,1);
pT(:,2) = ones(num,1) + 0.01*randn(num,1);
aT = 90 + 10*randn(num,1);
num = randi([10 20]);
pB = zeros(num,2);
pB(:,1) = rand(num,1);
pB(:,2) = zeros(num,1) + 0.01*randn(num,1);
aB = 270 + 10*randn(num,1);
num = randi([10 20]);
pR = zeros(num,2);
pR(:,1) = ones(num,1) + 0.01*randn(num,1);
pR(:,2) = rand(num,1);
aR = 0 + 10*randn(num,1);
num = randi([10 20]);
pL = zeros(num,2);
pL(:,1) = zeros(num,1) + 0.01*randn(num,1);
pL(:,2) = rand(num,1);
aL = 180 + 10*randn(num,1);
pts = [pT;pR;pB;pL]; %# x/y coords
angle = mod([aT;aR;aB;aL],360); %# angle in degrees [0,360]
%# plot points and normals
plot(pts(:,1), pts(:,2), 'o'), hold on
theta = angle * pi / 180;
quiver(pts(:,1), pts(:,2), cos(theta), sin(theta), 0.4, 'Color','g')
hold off
%# divide points based on angle
[~,bin] = histc(angle,[0 45 135 225 315 360]);
bin(bin==5) = 1; %# combine last and first bin
%# fit line to each segment
hold on
for i=1:4
%# indices of points in this segment
idx = ( bin == i );
%# x/y or y/x
if i==2||i==4, xx=1; yy=2; else xx=2; yy=1; end
%# fit line
coeff = polyfit(pts(idx,xx), pts(idx,yy), 1);
fit(:,1) = 0:0.05:1;
fit(:,2) = polyval(coeff, fit(:,1));
%# plot fitted line
plot(fit(:,xx), fit(:,yy), 'Color','r', 'LineWidth',2)
end
hold off
I'd try the following
Cluster the points based on proximity and similar angle. I'd use single-linkage hierarchical clustering (LINKAGE in Matlab), since you don't know a priori how many edges there will be. Single linkage favors linear structures, which is exactly what you're looking for. As the distance criterion between two points you can use the euclidean distance between the point coordinates multiplied by a function of the angle difference that increases very steeply as soon as the angle differs by more than, say, 20 or 30 degrees (a rough sketch of this step follows after this list).
Do (robust) linear regression on the data. Using the normals may or may not help. My guess is that they won't help too much. For simplicity, you may want to disregard the normals initially.
Find the intersections between the lines.
If you have to, you can always try and improve the fit, for example by constraining opposite lines to be parallel.
If that fails, you could try and implement the approach in THIS PAPER, which allows fitting multiple straight lines at once.
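A rough SciPy sketch of that clustering step (single-linkage on a combined position/angle distance; the function name, the quartic penalty and the default thresholds are my own arbitrary choices):

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_edge_points(pts, angles_deg, angle_tol=30.0, dist_cutoff=0.2):
    # Group points into edge candidates by proximity and similar normal angle.
    d_pos = pdist(pts)                                 # pairwise euclidean distances
    # pairwise angle differences, wrapped to [0, 180] degrees
    diff = np.abs(angles_deg[:, None] - angles_deg[None, :]) % 360.0
    diff = np.minimum(diff, 360.0 - diff)
    d_ang = squareform(diff, checks=False)             # condensed form, like d_pos
    # penalty grows steeply once the normals differ by more than angle_tol
    penalty = 1.0 + (d_ang / angle_tol) ** 4
    Z = linkage(d_pos * penalty, method='single')
    return fcluster(Z, t=dist_cutoff, criterion='distance')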
You could get the mean value for the X and Y coordinates for each side and then just make lines based on that.

Calculating the Bounding Rectangle at an Angle of a Polygon

I have the need to determine the bounding rectangle for a polygon at an arbitrary angle. This picture illustrates what I need to do:
(image: http://kevlar.net/RotatedBoundingRectangle.png)
The pink rectangle is what I need to determine at various angles for simple 2d polygons.
Any solutions are much appreciated!
Edit:
Thanks for the answers, I got it working once I got the center points correct. You guys are awesome!
To get a bounding box with a certain angle, rotate the polygon the other way round by that angle. Then you can use the min/max x/y coordinates to get a simple bounding box and rotate that by the angle to get your final result.
From your comment it seems you have problems with getting the center point of the polygon. The center of the polygon can be taken as the average of its vertex coordinates. So for points P1, ..., PN, calculate:
xsum = p1.x + ... + pn.x;
ysum = p1.y + ... + pn.y;
xcenter = xsum / n;
ycenter = ysum / n;
To make this complete, I also add some formulas for the rotation involved. To rotate a point (x,y) around a center point (cx, cy), do the following:
// Translate center to (0,0)
xt = x - cx;
yt = y - cy;
// Rotate by angle alpha (make sure to convert alpha to radians if needed)
xr = xt * cos(alpha) - yt * sin(alpha);
yr = xt * sin(alpha) + yt * cos(alpha);
// Translate back to (cx, cy)
result.x = xr + cx;
result.y = yr + cy;
To get the smallest rectangle you should get the right angle. This can be accomplished with an algorithm used in collision detection: oriented bounding boxes.
The basic steps:
Get all vertex coordinates
Build a covariance matrix
Find its eigenvectors
Project all the vertices onto the eigenvector axes
Find the max and min along each eigenvector axis.
For more information just google OBB "collision detection".
PS: If you just project all vertices onto the coordinate axes and take the maximum and minimum, you get an AABB (axis-aligned bounding box). It's easier and requires less computational effort, but doesn't guarantee a minimum-area box.
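A short NumPy sketch of that covariance/eigenvector recipe for the 2D case (as noted above, this yields an oriented box but not necessarily the minimum-area one):

import numpy as np

def oriented_bounding_box(pts):
    # Oriented bounding box of 2D points via the eigenvectors of their covariance.
    pts = np.asarray(pts, dtype=float)
    center = pts.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov((pts - center).T))   # columns are the box axes
    proj = (pts - center) @ eigvecs                          # vertices in the eigenbasis
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                        [hi[0], hi[1]], [lo[0], hi[1]]])
    return corners @ eigvecs.T + center                      # back to the original frame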
I'm interpreting your question to mean "For a given 2D polygon, how do you calculate the position of a bounding rectangle for which the angle of orientation is predetermined?"
And I would do it by rotating the polygon against the angle of orientation, then use a simple search for its maximum and minimum points in the two cardinal directions using whatever search algorithm is appropriate for the structure the points of the polygon are stored in. (Simply put, you need to find the highest and lowest X values, and highest and lowest Y values.)
Then the minima and maxima define your rectangle.
You can do the same thing without rotating the polygon first, but your search for minimum and maximum points has to be more sophisticated.
To get a rectangle with minimal area enclosing a polygon, you can use a rotating calipers algorithm.
The key insight is that (unlike in your sample image, so I assume you don't actually require minimal area?), any such minimal rectangle is collinear with at least one edge of (the convex hull of) the polygon.
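If minimal area is actually required, here is a brute-force sketch of that insight (try the orientation of every convex-hull edge and keep the smallest axis-aligned box in that rotated frame; scipy.spatial.ConvexHull is used for the hull, and the function name is mine):

import numpy as np
from scipy.spatial import ConvexHull

def min_area_rect(pts):
    # Minimum-area bounding rectangle: returns (area, angle in radians, 4 corners).
    hull = np.asarray(pts, dtype=float)[ConvexHull(pts).vertices]
    edges = np.diff(np.vstack([hull, hull[:1]]), axis=0)
    best = None
    for ex, ey in edges:
        theta = np.arctan2(ey, ex)
        c, s = np.cos(-theta), np.sin(-theta)
        R = np.array([[c, -s], [s, c]])          # rotate so this edge is horizontal
        rot = hull @ R.T
        lo, hi = rot.min(axis=0), rot.max(axis=0)
        area = np.prod(hi - lo)
        if best is None or area < best[0]:
            corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                                [hi[0], hi[1]], [lo[0], hi[1]]]) @ R
            best = (area, theta, corners)
    return best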
Here is a Python implementation of the answer by @schnaader.
Given a point set with coordinates x and y and the angle (in degrees) at which the rectangle should bound those points, the function returns a point set with the four corners (and a repetition of the first corner).
import numpy as np

def BoundingRectangleAnglePoints(x, y, alphadeg):
    # convert the angle to radians
    alpha = np.radians(alphadeg)
    # calculate the center
    cx = np.mean(x)
    cy = np.mean(y)
    # translate the center to (0,0)
    xt = x - cx
    yt = y - cy
    # rotate by angle alpha
    xr = xt * np.cos(alpha) - yt * np.sin(alpha)
    yr = xt * np.sin(alpha) + yt * np.cos(alpha)
    # find the min and max in rotated space
    minx_r = np.min(xr)
    miny_r = np.min(yr)
    maxx_r = np.max(xr)
    maxy_r = np.max(yr)
    # set up the corner points of the bounding rectangle (closed loop)
    xbound_r = np.asarray([minx_r, minx_r, maxx_r, maxx_r, minx_r])
    ybound_r = np.asarray([miny_r, maxy_r, maxy_r, miny_r, miny_r])
    # rotate back and translate back to (cx, cy)
    xbound = (xbound_r * np.cos(-alpha) - ybound_r * np.sin(-alpha)) + cx
    ybound = (xbound_r * np.sin(-alpha) + ybound_r * np.cos(-alpha)) + cy
    return xbound, ybound
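Example usage, with numpy imported as np as above (the sample coordinates are arbitrary):

x = np.array([0.0, 2.0, 3.0, 1.0])
y = np.array([0.0, 0.5, 2.0, 2.5])
xbound, ybound = BoundingRectangleAnglePoints(x, y, 30)   # bounding box rotated by 30 degrees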
