Why does the visual angle of a pixel equal 0.0213°?

It's been said that visual angle = object size / object distance.
Calculation
One pixel on a device with a pixel density of 96 dpi, viewed from a distance of an arm's length. For a nominal arm's length of 28 inches, the visual angle is supposedly about 0.0213 degrees, but:
(1 / 96) / 28 ≈ 0.00037 ≠ 0.0213
Update
Text Explanation
It is recommended that the reference pixel be the visual angle of one pixel on a device with a pixel density of 96dpi and a distance from the reader of an arm's length. For a nominal arm's length of 28 inches, the visual angle is therefore about 0.0213 degrees.
Confusion 1
Shouldn't it be 0.0213 radians?
Graphical Explanation
Confusion 2
Shouldn't it be like that, even though the described sector should look much narrower?

The formula visual angle = object size / object distance is the small-angle approximation, and it gives the angle in radians, not in degrees. Your 0.00037 is in radians; multiplying by 180/π gives about 0.0213 degrees, which is exactly the figure quoted in the spec.
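To check the arithmetic, a minimal sketch in Python (the 96 dpi and 28 inch figures are the ones from the quoted text):
import math
pixel_size = 1 / 96                      # one pixel at 96 dpi, in inches
distance = 28                            # nominal arm's length, in inches
angle_rad = pixel_size / distance        # small-angle approximation gives radians
angle_deg = math.degrees(angle_rad)      # convert radians to degrees
print(angle_rad, angle_deg)              # ~0.000372  ~0.0213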

Related

How to find pixel per meter [closed]

I have a static camera focused on a covered area. The total area covered by the camera is:
length 78.7 cm
width 102.1 cm
height 118.5 cm
My image size is 800 x 480.
Inside the total covered area I have another box whose dimensions are:
length 22.6 cm
width 25.6 cm
height 24 cm
I want to find out how many pixels I have per meter. I am using the formula m/pixels * 0.39, but it is not giving the exact answer.
Many manufacturers use pixels per meter as a metric of video surveillance image quality. For instance, you need around 130 ppm to have enough detail to accurately recognize facial detail and identify license plates.
To calculate pixel density (pixels per meter) you need the number of horizontal pixels of the image or video source and the width in meters of the scene you are looking at.
Therefore,
ppm = ImageWidth (in pixels) / Field of view (in meters)
The easiest way to calculate ppm for a specific scene is to point the camera where you want to calculate ppm, then divide the number of pixels of your image by the width in meters of your camera's field of view at that specific point. Note that this measure will not be constant across the vertical axis of your camera: each line of your image will have a different pixel density.
If you calibrate the camera you could do these calculations theoretically, since you would know the width of the field of view in advance, but it is a little bit more complicated.
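For example, a minimal sketch in Python using the numbers from the question (it assumes the 102.1 cm width is what the 800 horizontal pixels cover, which the question doesn't actually say):
image_width_px = 800                 # horizontal resolution from the question
field_of_view_m = 1.021              # 102.1 cm, assumed to span the full image width
ppm = image_width_px / field_of_view_m
print(ppm)                           # ~783 pixels per meter at that distance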
Your static camera will have a specific fixed lens size determining the focal length (f-number). This means there is a specific ideal focus point along the depth of view. This 2D plane of focus forms a parabolic shape proportional to the shape and size of the lens, and because lens size and FOV are inversely proportional to one another, the larger your lens size, the smaller the FOV. As someone already mentioned, focus diminishes outwards in all directions from the center point of your focal area (think Doppler effect), which is why we tend to find blurry pixels in the corners of images/video for objects outside the foreground/background of the focus plane.
This is a beautiful problem to solve using math.
By calculating the length of the hypotenuse vs. the height of a right-angle triangle, you know the difference in focal distance between your focal point and the outermost pixel displayed from your image sensor, and because you should know the resolution of your camera, you can calculate the loss of pixel density from the center in all directions.
I feel like there's probably a clean-looking formula to calculate this using that sly ol' dog Pi, but I couldn't be bothered.
You might find this useful as step 1:
https://www.omnicalculator.com/math/right-triangle-side-angle
With the above calculator:
hypotenuse = c = distance from the camera to the outermost pixel captured
height of the triangle = b = distance from the camera to the focal point (the center of your photo)
half the FOV angle = α (you would do this for both the vertical and horizontal FOV; for example, a 2MP camera gives you 1920 x 1080 points of reference)
Every lens size has a specific vertical and horizontal FOV angle.
This will allow you to calculate which parts of objects in the scene are within perfect focus and thus retain the densest pixels.
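For example, a rough sketch of that triangle calculation in Python (the focal distance and half-FOV angle below are made-up example values, not numbers from the question):
import math
b = 1.185                            # distance to the focal point, metres (example value)
half_fov_deg = 30                    # half of the horizontal FOV angle (example value)
alpha = math.radians(half_fov_deg)
c = b / math.cos(alpha)              # hypotenuse: distance to the outermost pixel
print(c - b)                         # extra distance towards the edge of the frame, ~0.18 m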
PS: if you wanted to be scientifically accurate, you would need to calculate this for every pixel on your image sensor over the size of the sensor. So for a 2MP camera you would do 1920 x 1080 over a 1/3" sensor, for example (i.e. calculate the pixel density over your sensor). The quality of the glass of your lens will also play a factor, I'm just not sure on what front.
The colour of the objects in the scene, atmospheric conditions and the lux level in the scene will allow for variable light wavelengths reaching the camera, also influencing the density of the pixels captured.
Lastly, because you want to calculate the resultant conversion of resolution captured vs. displayed, your display medium will influence the actual performance.
Realistically, you wouldn't be able to tell with certainty unless you measured every single distance of every pixel path, from pixel on sensor to object point.
Let us know how it goes. I could also be completely wrong, LOL.
You can't calculate the number of pixels per meter unless you know the distance to the object being captured. An object 10 meters away will have fewer pixels per meter than an object 1 meter away. All you can accurately calculate is the number of pixels per degree of your camera's field of view.
Even if you point the camera at a flat wall, the distance from the camera to the wall changes as the incident angle changes, so the middle of the wall is closer to the camera than the corners of the wall. This can be calculated with some simple trigonometry.
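For example, a small sketch of that trigonometry in Python (the wall distance and field of view are assumed example values; only the 800-pixel width comes from the question):
import math
d = 2.0                              # perpendicular distance to the wall, metres (example)
image_width_px = 800                 # horizontal resolution from the question
h_fov_deg = 60.0                     # horizontal field of view, degrees (assumed)
px_per_degree = image_width_px / h_fov_deg    # ~13.3 pixels per degree of FOV
theta = math.radians(h_fov_deg / 2)           # angle of the outermost image column
print(px_per_degree, d / math.cos(theta))     # edge of the wall is ~2.31 m away, not 2.0 m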

Transform Matrix for ECEF to site co-ordinates

I am given site co-ordinate systems having the following parameters:
Projection Type (usually Transverse Mercator)
Ellipsoid/Datum (usually GRS80/GDA94)
Central Meridian
Central Scale Factor
False Easting
False Northing
and then need to programmatically convert a large number of points from ECEF into the site co-ordinate system, so ideally I'd like to use a transform matrix.
Wikipedia gives the formula for this transform matrix as:
http://upload.wikimedia.org/math/6/c/5/6c5e10c1708acc1663d618c2f3fecc98.png
But how do I calculate the parameters needed for this formula from the site mapping parameters I have been given?
The usual way to do this conversion is to first convert from ECEF to geodetic coordinates (latitude, longitude, height), and then to convert these to map coordinates (northing, easting, height). Each of these transforms is non-linear. However, if the site is not too large and your accuracy requirements are not too stringent, you could carry out the above transforms on a few dozen (say) points around the perimeter of the site, and then use these point pairs to find an affine transform that best approximates the map coordinates from the ECEF coordinates.
I've played around with this a bit, and it appears that while it is possible to get the eastings and northings with fair accuracy (e.g. a couple of centimetres over a site within a circle of radius 10 km and a 20 m height variation over the site; if the height variation is 200 m, the accuracy drops to about 2 decimetres), it is not possible to get even fair accuracy on the height: in the example the height could be ~8 m in error. This is unavoidable, as a line of constant height in site coordinates will be close to a circular arc, and if you compute the greatest distance of the chord from the arc for an arc of length 20 km on a circle of Earth radius, you get ~16 m.
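A sketch of that affine approximation using NumPy (the control points are assumed to have already been converted rigorously, ECEF -> geodetic -> map projection; that step is not shown here):
import numpy as np

def fit_affine(ecef_pts, site_pts):
    # Least-squares affine transform so that site ~= ecef @ A.T + t
    ecef_pts = np.asarray(ecef_pts, dtype=float)   # shape (n, 3)
    site_pts = np.asarray(site_pts, dtype=float)   # shape (n, 3)
    ones = np.ones((ecef_pts.shape[0], 1))
    X = np.hstack([ecef_pts, ones])                # homogeneous coordinates, shape (n, 4)
    coeffs, _, _, _ = np.linalg.lstsq(X, site_pts, rcond=None)   # shape (4, 3)
    A, t = coeffs[:3].T, coeffs[3]
    return A, t

def apply_affine(A, t, ecef_pts):
    return np.asarray(ecef_pts, dtype=float) @ A.T + t
Fit it on the perimeter control points, then apply it to the large batch of ECEF points; comparing the fitted values against the rigorously converted control points gives a direct estimate of the approximation error over the site.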

Algorithm for Finding Longest Stretch of a Value at any Angle in a 2D Matrix

I am currently working on a computer-vision program that requires me to determine the "direction" of a color blob in an image. The color blob generally follows an elliptical shape and thus can be used to track direction (with respect to an initially defined/determined orientation) through time.
The means by which I figured I would calculate changes in direction are described as follows:
Quantize possible directions (360 degrees) into N directions (potentially 8, for 45 degree angle increments).
Given a stored matrix representing the initial state (t0) of the color blob, also acquire a matrix representing the current state (tn) of the blob.
Iterate through these N directions and search for the longest stretch of the color value for that given direction (e.g. if the ellipse is rotated 45 degrees, with 0 being vertical, the longest stretch should be attributed to the 45 degree mark, or equivalently 225 degrees).
The concept itself isn't complicated, but I'm having trouble with the following:
Calculating the longest stretch of a value at any angle in an image. This is simple for angles such as 0, 45, 90, etc. but more difficult for the in-between angles. "Quantizing" the angles is not as easy to me as it sounds.
Please do not worry about potential issue with distinguishing angles such as 0 and 90. Inertia can be used to determine the most likely direction of the color blob (in other words, based upon past orientation states).
My main concern is identifying the "longest stretch" in the matrix.
Thank you for your help!
You can use image moments as suggested here: Matlab - Image Momentum Calculation.
In MATLAB you would use regionprops with the property 'Orientation', but the wiki article in the previous answer should give you all the information you need to code it in the language of your choice.
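For example, a sketch of the moment calculation in Python/NumPy for a binary mask of the blob (this mirrors the idea behind the 'Orientation' property; note that image y grows downward, so the sign convention may differ from MATLAB's):
import numpy as np

def blob_orientation(mask):
    # mask: 2D boolean array, True where the blob's colour was matched
    ys, xs = np.nonzero(mask)                      # pixel coordinates of the blob
    x, y = xs - xs.mean(), ys - ys.mean()          # centred coordinates
    mu20, mu02 = (x * x).mean(), (y * y).mean()    # second central moments
    mu11 = (x * y).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # major-axis angle, radians
    return np.degrees(theta)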

3D effect to distort paper

This may be a little hard to describe since I don't have a sample. I'm trying to find a math function, or a full 3D function in PHP or a similar language, that can help me with the following effect:
Imagine you were to take a flat sheet of paper and glue it onto a glass of water. It wouldn't be flat any more. It would have a curve, and one of its sides might end up being slightly hidden.
Can anyone refer me to a good library or resource on the web where such functions can be found?
Let's say the center of your paper is x=0, and your cylinder is vertical along the y-axis. Your x-coordinate on the paper can be equated to an arc length on the surface of the cylinder. Arc length (s) is equal to the angle (in radians) times the radius. Your radius is given, so you can compute the angle from the arc length and radius: angle = arc length / radius. Since you now have the angle and the radius, you can compute the new x-offset, which is (radius * cos(angle)). So your mapping functions would be:
new_x = radius * cos(old_x/radius)
new_y = old_y; //y-coordinate doesn't change
new_z = radius * sin(old_x/radius);
You'll have to enforce boundaries: keep x on the paper, and make sure it's not more than half the circumference (x must be less than or equal to PI*r). Also, watch the signs, especially on the z-coordinate, which will depend on whether your coordinate system is right-handed or left-handed and where you imagine the paper starting on the cylinder (back or front). Finally, you can use standard matrix transforms to move and position the paper/cylinder in 3D space once you have the warped coordinates.
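For example, a direct sketch of that mapping in Python, with the boundary check described above:
import math

def wrap_on_cylinder(x, y, radius):
    # (x, y) are flat-paper coordinates with x = 0 at the centre of the sheet
    if abs(x) > math.pi * radius:      # no more than half the circumference
        raise ValueError("point is off the cylinder")
    angle = x / radius                 # arc length / radius, in radians
    new_x = radius * math.cos(angle)
    new_y = y                          # the y-coordinate doesn't change
    new_z = radius * math.sin(angle)
    return new_x, new_y, new_z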

Resources for image distortion algorithms

Where can I find algorithms for image distortions? There is so much info on blur and other classic algorithms, but so little on more complex ones. In particular, I am interested in a swirl-effect image distortion algorithm.
I can't find any references, but I can give a basic idea of how distortion effects work.
The key to the distortion is a function which takes two coordinates (x,y) in the distorted image and transforms them to coordinates (u,v) in the original image. This specifies the inverse function of the distortion, since it takes the distorted image back to the original image.
To generate the distorted image, one loops over x and y, calculates the point (u,v) from (x,y) using the inverse distortion function, and sets the colour components at (x,y) to be the same as those at (u,v) in the original image. One usually uses interpolation (e.g. http://en.wikipedia.org/wiki/Bilinear_interpolation ) to determine the colour at (u,v), since (u,v) usually does not lie exactly on the centre of a pixel, but rather at some fractional point between pixels.
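For example, a sketch of that loop in Python/NumPy (for brevity it samples the nearest pixel rather than doing the bilinear interpolation linked above; inverse_fn is whatever distortion you plug in):
import numpy as np

def distort(src, inverse_fn):
    # src: HxW (or HxWxC) NumPy array; inverse_fn(x, y) returns the (u, v) source point
    h, w = src.shape[:2]
    out = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            u, v = inverse_fn(x, y)
            ui, vi = int(round(u)), int(round(v))  # nearest-neighbour sampling
            if 0 <= ui < w and 0 <= vi < h:        # leave out-of-range pixels black
                out[y, x] = src[vi, ui]
    return out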
A swirl is essentially a rotation, where the angle of rotation is dependent on the distance from the centre of the image. An example would be:
a = amount of rotation
b = size of effect
angle = a*exp(-(x*x+y*y)/(b*b))
u = cos(angle)*x + sin(angle)*y
v = -sin(angle)*x + cos(angle)*y
Here, I assume for simplicity that the centre of the swirl is at (0,0). The swirl can be put anywhere by subtracting the swirl position coordinates from x and y before the distortion function, and adding them to u and v after it.
There are various swirl effects around: some (like the above) swirl only a localised area, and have the amount of swirl decreasing towards the edge of the image. Others increase the swirling towards the edge of the image. This sort of thing can be done by playing about with the angle= line, e.g.
angle = a*(x*x+y*y)
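For example, the swirl above packaged as an inverse mapping in Python that can be plugged into the loop sketched earlier (a, b and the centre (cx, cy) are the knobs described above):
import math

def make_swirl(a, b, cx, cy):
    # a = amount of rotation, b = size of the effect, (cx, cy) = centre of the swirl
    def inverse_fn(x, y):
        dx, dy = x - cx, y - cy                    # work relative to the swirl centre
        angle = a * math.exp(-(dx * dx + dy * dy) / (b * b))
        u = math.cos(angle) * dx + math.sin(angle) * dy + cx
        v = -math.sin(angle) * dx + math.cos(angle) * dy + cy
        return u, v
    return inverse_fn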
There is a Java implementation of a lot of image filters/effects at Jerry's Java Image Filters. Maybe you can take inspiration from there.
The swirl and others like it are matrix transformations on the pixel locations. You make a new image and get the colour from a position in the source image, which you obtain by multiplying the current position by a matrix.
The matrix depends on the current position.
Here is a good CodeProject article showing how to do it:
http://www.codeproject.com/KB/GDI-plus/displacementfilters.aspx
There is a new graphics library with many features:
http://code.google.com/p/picasso-graphic/
Take a look at ImageMagick. It's an image conversion and editing toolkit and has interfaces for all popular languages.
The -displace operator can create swirls with the correct displacement map.
If you are for some reason not satisfied with the ImageMagick interface, you can always take a look at the source code of the filters and go from there.
