For some odd reason, the code below does not produce the desired effect. The sine wave is fine and functional, and the snippet of code below works, except for the forward movement, which is jolty.
I am using millis() and dividing by 1000 to convert to seconds, but the effect is still what I would expect from using the second() function: the sine wave jumps forward once every second. I want the lines to move horizontally as smoothly as the sine wave line moves vertically. What have I done wrong?
int cyclesBeforeStopping = 4;
int distanceBeforeStopping = 400;
float frequency = 0.2; // in Hz
float peak = 25; // Highest point of wave
float trough = 275; // Lowest point of wave
float amplitudeOffset;
float forwardOffset;
float timeUntilStop = (1 / frequency) * cyclesBeforeStopping;
void setup(){
  size(600,300);
  frameRate(50);
  forwardOffset = ForwardOffset();
}

void draw(){
  background(255);
  float forwardOffsetNow = forwardOffset * (millis() / 1000);
  // Guidelines
  line(50 + forwardOffsetNow, peak, 100 + forwardOffsetNow, peak);
  line(50 + forwardOffsetNow, trough, 100 + forwardOffsetNow, trough);
  // Sine line
  float newPosition = NewPosition();
  line(50 + forwardOffsetNow, newPosition, 100 + forwardOffsetNow, newPosition);
}

float ForwardOffset() {
  float forwardOffsetVar = (distanceBeforeStopping) / timeUntilStop;
  return forwardOffsetVar;
}

float NewPosition() {
  float omega = TWO_PI * frequency;
  float amplitude = trough - peak;
  float halfway = peak + amplitude / 2;
  float newPosition = halfway - (amplitude * sin(omega * millis() / 1000) / 2);
  return newPosition;
}
A one dot solution :) just do:
float forwardOffsetNow = forwardOffset * (millis() / 1000.);//mind the dot after 1000
and also:
float newPosition = halfway - (amplitude * sin(omega * millis() / 1000.) / 2);
This is called integer division, and it is well documented in the troubleshooting section of Processing's wiki.
It can be avoided by using a dot or an f after the number, like:
millis()/1000.;
or
millis()/1000.0;
or
millis()/1000f;
All will work.
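For clarity, here is a minimal standalone C++ sketch (illustrative only; the same integer-division rule applies in Processing/Java) showing why the truncated value only changes once per second:
#include <cstdio>

int main() {
    int millisNow = 3750;                // pretend millis() returned 3750 ms
    float jolty  = millisNow / 1000;     // integer division happens first: 3, then becomes 3.0f
    float smooth = millisNow / 1000.0f;  // floating-point division: 3.75f
    std::printf("jolty = %.2f, smooth = %.2f\n", jolty, smooth);
    return 0;
}
With the truncated value, the forward offset stays constant for a whole second and then jumps, which is exactly the jolty motion described in the question.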
I am using the Goertzel algorithm to get the amplitude of a certain frequency.
I am now trying to get the phase from it, and I don't know how.
Can someone explain, and show me how to get the phase of a given frequency from this code?
Also, I am using it at 16 kHz with a 44.1 kHz sample rate. What is the smallest number of samples that I can run it on?
double AlgorithmGoertzel( int16_t *sample, int sampleRate, double Freq, int len )
{
    double realW = 2.0 * cos(2.0 * M_PI * Freq / sampleRate);
    double imagW = 2.0 * sin(2.0 * M_PI * Freq / sampleRate);
    double d1 = 0;
    double d2 = 0;
    double y;
    for (int i = 0; i < len; i++) {
        y = (double)(signed short)sample[i] + realW * d1 - d2;
        d2 = d1;
        d1 = y;
    }
    double rR = 0.5 * realW * d1 - d2;
    double rI = 0.5 * imagW * d1 - d2;
    return (sqrt(pow(rR, 2) + pow(rI, 2))) / len;
}
Do a rectangular to polar conversion. That will give you phase and magnitude.
magnitude = sqrt ((Vreal * Vreal) + (Vimag * Vimag))
phase = atan2 (Vimag, Vreal)
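As a hedged illustration (not a drop-in fix for the code above), here is the textbook one-bin Goertzel in C++ that keeps the real and imaginary parts so both magnitude and phase come out of the same recurrence; names and normalization are illustrative:
#include <cmath>
#include <cstdint>

// Returns the (normalized) magnitude of the chosen bin; writes its phase in radians through *phaseOut.
double goertzelMagPhase(const int16_t *sample, int len, double freq,
                        double sampleRate, double *phaseOut)
{
    double w  = 2.0 * M_PI * freq / sampleRate;
    double cw = std::cos(w), sw = std::sin(w);
    double d1 = 0.0, d2 = 0.0;
    for (int n = 0; n < len; ++n) {
        double y = sample[n] + 2.0 * cw * d1 - d2;  // Goertzel recurrence
        d2 = d1;
        d1 = y;
    }
    double re = d1 - d2 * cw;      // real part of the DFT bin
    double im = d2 * sw;           // imaginary part of the DFT bin
    *phaseOut = std::atan2(im, re);
    return std::sqrt(re * re + im * im) / len;
}
Note that the phase returned this way is relative to the start of the sample block, as one of the answers below points out.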
I don't think the algorithm consists of multiplying the sequence by a constant, but rather by the complex signal exp(i*2*pi*n*freq/samplerate), 0 <= n <= length, and taking the average magnitude (or power) of the signal.
As the complex output is R*exp(i theta), R gives the power at the given frequency and theta gives the phase. (theta == atan2 ( imag, real))
The number of samples you need to feed a Goertzel filter will be inversely proportional to your desired or required filter bandwidth. A Goertzel provides a Sinc shaped bandpass filter, with the main lobe width proportional to 2*Fs/N.
If you use a complex Goertzel, the resulting phase will be relative to some point in the filter's data window. You may thus have to calculate an offset to get phase relative to some other reference point in time.
I'm using linear interpolation for animating an object between two 2d coordinates on the screen. This is pretty close to what I want, but because of rounding, I get a jagged motion. In ASCII art:
ooo
ooo
ooo
oo
Notice how it walks in a Manhattan grid, instead of taking 45 degree turns. What I'd like is linear interpolation along the line which Bresenham's algorithm would have created:
oo
oo
oo
oo
For each x there is only one corresponding y. (And swap x/y for a line that is steep)
So why don't I just use Bresenham's algorithm? I certainly could, but that algorithm is iterative, and I'd like to know just one coordinate along the line.
I am going to try solving this by linearly interpolating the x coordinate, rounding it to the pixel grid, and then finding the corresponding y. (Again, swap x/y for steep lines.) No matter how that solution pans out, though, I'd be interested in other suggestions and maybe previous experience.
Bresenham's algorithm for lines was introduced to draw a complete line a bit faster than usual approaches. It has two major advantages:
It works on integer variables
It works iteratively, which is fast, when drawing the complete line
The first advantage is not a big deal if you calculate only some coordinates, and the second advantage turns into a disadvantage when calculating only some coordinates. So, after all, there is no need to use Bresenham's algorithm.
Instead, you can use a different algorithm that results in the same line, for example the DDA (digital differential analyzer). This is basically the same approach you mentioned.
First step: Calculate the slope.
m = (y_end - y_start) / (x_end - x_start)
Second step: Calculate the iteration step, which is simply:
i = x - x_start
Third step: Calculate the corresponding y-value:
y = y_start + i * m
= y_start + (x - x_start) * (y_end - y_start) / (x_end - x_start)
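A minimal C++ sketch of those three steps (the function name is illustrative), evaluating a single point on the line without iterating:
#include <cmath>

// y of the DDA line at integer column x. Assumes the line is not steep (|dy| <= |dx|);
// for steep lines, swap the roles of x and y before calling.
int ddaYforX(int xStart, int yStart, int xEnd, int yEnd, int x)
{
    double m = static_cast<double>(yEnd - yStart) / (xEnd - xStart);  // first step: slope
    int i = x - xStart;                                               // second step: iteration step
    return static_cast<int>(std::lround(yStart + i * m));             // third step: corresponding y
}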
Here's the solution I ended up with:
public static Vector2 BresenhamLerp(Vector2 a, Vector2 b, float percent)
{
    if (a.x == b.x || Math.Abs(a.x - b.x) < Math.Abs(a.y - b.y))
    {
        // Steep (or vertical) line: recurse with x/y swapped and swap the result back.
        // (Assumes Vector2 has an (x, y) constructor.)
        Vector2 swapped = BresenhamLerp(new Vector2(a.y, a.x), new Vector2(b.y, b.x), percent);
        return new Vector2(swapped.y, swapped.x);
    }
    Vector2 result;
    result.x = (float)Math.Round((1 - percent) * a.x + percent * b.x);
    float adjustedPercent = (result.x - a.x + 0.5f) / (b.x - a.x);
    result.y = (float)Math.Round((1 - adjustedPercent) * a.y + adjustedPercent * b.y);
    return result;
}
This is what I just figured out would work. It's probably not the most beautiful interpolation, but it is just one or two float additions per iteration along the line, with a one-time precalculation. It works by calculating the number of steps on a Manhattan grid.
Ah, and it does not yet catch the case when the line is vertical (dx = 0).
This is the naive Bresenham, but the iterations could in theory use only integers as well. If you want to get rid of the float color value, things get harder, because the line might be longer than the color difference, so delta-color < 1.
void Brepolate( uint8_t* pColorBuffer, uint8_t cs, float xs, float ys, float zs, uint8_t ce, float xe, float ye, float ze )
{
    float nColSteps = (xe - xs) + (ye - ys);   // number of steps on the Manhattan grid
    float fColInc = ((float)cs - (float)ce) / nColSteps;
    float fCol = cs;
    float dx = xe - xs;
    float dy = ye - ys;
    if (dx > 0.5)
    {
        float de = fabs( dy / dx );
        float re = de - 0.5f;
        uint32_t iY = ys;
        for ( uint32_t iX = xs; iX <= xe; iX++ )
        {
            uint32_t off = surf.Offset( iX, iY );
            pColorBuffer[off] = fCol;
            re += de;
            if (re >= 0.5f)
            {
                iY++;
                re -= 1.0f;
                fCol += fColInc;
            }
            fCol += fColInc;
        }
    }
}
I have a set of latitudes and longitudes of locations.
How do I find the distance from one location in the set to another?
Is there a formula?
The Haversine formula assumes a spherical earth. However, the shape of the earth is more complex. An oblate spheroid model will give better results.
If such accuracy is needed, you should use the Vincenty inverse formula instead.
See http://en.wikipedia.org/wiki/Vincenty's_formulae for details. Using it, you can get 0.5 mm accuracy for the spheroid model.
There is no perfect formula, since the real shape of the earth is too complex to be expressed by a formula. Moreover, the shape of the earth changes due to climate events (see http://www.nasa.gov/centers/goddard/earthandsun/earthshape.html), and also changes over time due to the rotation of the earth.
You should also note that the method above does not take altitude into account, and assumes a sea-level oblate spheroid.
Edit 10-Jul-2010: I found out that there are rare situations for which the Vincenty inverse formula does not converge to the declared accuracy. A better idea is to use GeographicLib (see http://sourceforge.net/projects/geographiclib/), which is also more accurate.
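If you do use GeographicLib, the inverse geodesic problem is a short call; here is a rough C++ sketch from memory of its API (verify it against the version you install):
#include <GeographicLib/Geodesic.hpp>
#include <iostream>

int main() {
    const GeographicLib::Geodesic& geod = GeographicLib::Geodesic::WGS84();  // WGS84 ellipsoid
    double meters;
    // Inverse problem: distance between two lat/lon pairs in degrees; example points are arbitrary.
    geod.Inverse(40.6413, -73.7781, 51.4700, -0.4543, meters);
    std::cout << meters / 1000.0 << " km" << std::endl;
    return 0;
}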
Here's one: http://www.movable-type.co.uk/scripts/latlong.html
Using Haversine formula:
R = earth's radius (mean radius = 6,371 km)
Δlat = lat2 − lat1
Δlong = long2 − long1
a = sin²(Δlat/2) + cos(lat1) · cos(lat2) · sin²(Δlong/2)
c = 2 · atan2(√a, √(1−a))
d = R · c
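If you prefer those steps as a self-contained function, here is a direct C++ transcription (a sketch; inputs are assumed to be in degrees):
#include <cmath>

// Great-circle distance in kilometres via the haversine steps above.
double haversineKm(double lat1, double lon1, double lat2, double lon2)
{
    const double R = 6371.0;               // mean earth radius in km
    const double toRad = M_PI / 180.0;
    double dLat = (lat2 - lat1) * toRad;
    double dLon = (lon2 - lon1) * toRad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2)
             + std::cos(lat1 * toRad) * std::cos(lat2 * toRad)
             * std::sin(dLon / 2) * std::sin(dLon / 2);
    double c = 2 * std::atan2(std::sqrt(a), std::sqrt(1 - a));
    return R * c;
}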
Apply the Haversine formula to find the distance. See the C# code below to find the distance between two coordinates. Better still, if you want to, say, find a list of stores within a certain radius, you can apply a WHERE clause in SQL or a LINQ filter in C#.
The formula here is in kilometres; you will have to change the relevant numbers for it to work in miles.
E.g. convert 6371.392896 to miles.
DECLARE @radiusInKm AS FLOAT
DECLARE @lat2Compare AS FLOAT
DECLARE @long2Compare AS FLOAT
SET @radiusInKm = 5.000
SET @lat2Compare = insert_your_lat_to_compare_here
SET @long2Compare = insert_your_long_to_compare_here
SELECT * FROM insert_your_table_here WITH(NOLOCK)
WHERE (6371.392896*2*ATN2(SQRT((sin((radians(GeoLatitude - @lat2Compare)) / 2) * sin((radians(GeoLatitude - @lat2Compare)) / 2)) + (cos(radians(GeoLatitude)) * cos(radians(@lat2Compare)) * sin(radians(GeoLongitude - @long2Compare)/2) * sin(radians(GeoLongitude - @long2Compare)/2)))
, SQRT(1-((sin((radians(GeoLatitude - @lat2Compare)) / 2) * sin((radians(GeoLatitude - @lat2Compare)) / 2)) + (cos(radians(GeoLatitude)) * cos(radians(@lat2Compare)) * sin(radians(GeoLongitude - @long2Compare)/2) * sin(radians(GeoLongitude - @long2Compare)/2)))
))) <= @radiusInKm
If you would like to perform the Haversine formula in C#,
double resultDistance = 0.0;
double avgRadiusOfEarth = 6371.392896; // The radius of the earth differs by location; I'm taking an average.
//Haversine formula
//distance = R * 2 * aTan2 ( square root of A, square root of 1 - A )
// where A = sine squared (difference in latitude / 2) + (cosine of latitude 1 * cosine of latitude 2 * sine squared (difference in longitude / 2))
// and R = the mean radius of the earth
double differenceInLat = DegreeToRadian(currentLatitude - latitudeToCompare);
double differenceInLong = DegreeToRadian(currentLongitude - longtitudeToCompare);
double aInnerFormula = Math.Cos(DegreeToRadian(currentLatitude)) * Math.Cos(DegreeToRadian(latitudeToCompare)) * Math.Sin(differenceInLong / 2) * Math.Sin(differenceInLong / 2);
double aFormula = (Math.Sin((differenceInLat) / 2) * Math.Sin((differenceInLat) / 2)) + (aInnerFormula);
resultDistance = avgRadiusOfEarth * 2 * Math.Atan2(Math.Sqrt(aFormula), Math.Sqrt(1 - aFormula));
DegreeToRadian is a custom function I created; it is a simple one-liner: "Math.PI * angle / 180.0".
My blog entry - SQL Haversine
Are you looking for the Haversine formula?
The haversine formula is an equation important in navigation, giving great-circle distances between two points on a sphere from their longitudes and latitudes. It is a special case of a more general formula in spherical trigonometry, the law of haversines, relating the sides and angles of spherical "triangles".
Have a look at this; it has a JavaScript example as well.
Find Distance
Use the Great Circle Distance Formula.
Here is a fiddle that finds locations near a given long/lat from an IP:
http://jsfiddle.net/bassta/zrgd9qc3/2/
And here is the function I use to calculate the distance in a straight line:
function distance(lat1, lng1, lat2, lng2) {
    // Convert latitudes and the longitude difference to radians
    var radlat1 = Math.PI * lat1 / 180;
    var radlat2 = Math.PI * lat2 / 180;
    var theta = lng1 - lng2;
    var radtheta = Math.PI * theta / 180;
    // Spherical law of cosines
    var dist = Math.sin(radlat1) * Math.sin(radlat2) + Math.cos(radlat1) * Math.cos(radlat2) * Math.cos(radtheta);
    dist = Math.acos(dist);
    // Radians -> degrees, then degrees * 60 = nautical miles, * 1.1515 = statute miles
    dist = dist * 180 / Math.PI;
    dist = dist * 60 * 1.1515;
    // Convert miles to kilometers
    dist = dist * 1.609344;
    return dist;
}
It returns the distance in kilometers.
If you are measuring distances less than (perhaps) 1 degree lat/long change, are looking for a very high performance approximation, and are willing to accept more inaccuracy than Haversine formula, consider these two alternatives:
(1) "Polar Coordinate Flat-Earth Formula" from Computing Distances:
a = pi/2 - lat1
b = pi/2 - lat2
c = sqrt( a^2 + b^2 - 2 * a * b * cos(lon2 - lon1) )
d = R * c
(2) Pythagorean theorem adjusted for latitude, as seen in Ewan Todd's SO post:
d_ew = (long1 - long0) * cos(average(lat0, lat1))
d_ns = (lat1 - lat0)
d = sqrt(d_ew * d_ew + d_ns * d_ns)
NOTES:
Compared to Ewan's post, I've substituted average(lat0, lat1) for lat0 inside of cos( lat0 ).
#2 is vague on whether values are degrees, radians, or kilometers; you will need some conversion code as well. See my complete code at the bottom of this post.
#1 is designed to work well even near the poles, though if you are measuring a distance whose endpoints are on "opposite" sides of the pole (longitudes differ by more than 90 degrees?), Haversine is recommended instead, even for small distances.
I haven't thoroughly measured errors of these approaches, so you should take representative points for your application, and compare results to some high-quality library, to decide if the accuracies are acceptable. For distances less than a few kilometers my gut sense is that these are within 1% of correct measurement.
An alternative way to gain high performance (when applicable):
If you have a large set of static points, within one or two degrees of longitude/latitude, that you will then be calculating distances from a small number of dynamic (moving) points, consider converting your static points ONCE to the containing UTM zone (or to any other local Cartesian coordinate system), and then doing all your math in that Cartesian coordinate system.
Cartesian = flat earth = Pythagorean theorem applies, so distance = sqrt(dx^2 + dy^2).
Then the cost of accurately converting the few moving points to UTM is easily afforded.
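As a rough C++ sketch of that precompute-once pattern (this uses a simple local tangent-plane conversion about a reference point instead of a real UTM library, so treat it as an illustration of the idea rather than the UTM conversion itself):
#include <cmath>

struct XY { double x, y; };                 // local Cartesian coordinates, in meters

// Convert lat/lon (degrees) into a flat frame centred on (refLatDeg, refLonDeg).
// Do this ONCE for every static point.
XY toLocalXY(double latDeg, double lonDeg, double refLatDeg, double refLonDeg)
{
    const double R = 6371000.0;             // mean earth radius, meters
    const double toRad = M_PI / 180.0;
    double x = (lonDeg - refLonDeg) * toRad * R * std::cos(refLatDeg * toRad);
    double y = (latDeg - refLatDeg) * toRad * R;
    return { x, y };
}

// After conversion, per-query distance is plain Pythagoras.
double flatDistanceMeters(const XY& a, const XY& b)
{
    double dx = a.x - b.x, dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}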
CAVEAT for #1 (Polar): It may be very wrong for distances less than 0.1 (?) meter. Even with double-precision math, the following coordinates, whose true distance is about 0.005 meters, were given as "zero" by my implementation of the Polar algorithm:
inputs:
lon1Xdeg 16.6564465477996 double
lat1Ydeg 57.7760262271983 double
lon2Xdeg 16.6564466358281 double
lat2Ydeg 57.776026248554 double
results:
Oblate spheroid formula:
0.00575254911118364 double
Haversine:
0.00573422966122257 double
Polar:
0
This was due to the two factors u and v exactly canceling each other:
u 0.632619944868587 double
v -0.632619944868587 double
In another case, it gave a distance of 0.067129 m when the oblate spheroid answer was 0.002887 m. The problem was that cos(lon2 - lon1) was too close to 1, so the cos function returned exactly 1.0.
Other than measuring sub-meter distances, the max errors (compared to an oblate spheroid formula) I found for the limited small-distance data I've fed in so far:
maxHaversineErrorRatio 0.00350976281908381 double
maxPolarErrorRatio 0.0510789996931342 double
where "1" would represent a 100% error in the answer; e.g. when it returned "0", that was an error of "1" (excluded from above "maxPolar"). So "0.01" would be an error of "1 part in 100" or 1%.
I compared the Polar error with the Haversine error over distances less than 2000 meters, to see how much worse this simpler formula is. So far, the worst I've seen is 51 parts per 1000 for Polar vs. 4 parts per 1000 for Haversine, at about 58 degrees latitude.
I have now implemented "Pythagorean with Latitude Adjustment". It is MUCH more consistent than Polar for distances < 2000 m. I originally thought the Polar problems were only for distances < 1 m, but the result shown immediately below is quite troubling. As distances approach zero, Pythagorean-with-latitude approaches Haversine.
For example this measurement ~ 217 meters:
lon1Xdeg 16.6531667510102 double
lat1Ydeg 57.7751705615804 double
lon2Xdeg 16.6564468739869 double
lat2Ydeg 57.7760263007586 double
oblate 217.201200413731
haversine 216.518428601051
polar 226.128616011973
pythag-cos 216.518428631907
havErrRatio 0.00314349925958048
polErrRatio 0.041102054598393
pycErrRatio 0.00314349911751603
Polar has a much worse error with these inputs; either there is some mistake in my code, or in the Cos implementation I am running, or else I have to recommend not using Polar, even though most Polar measurements were much closer than this.
OTOH, Pythagorean, even with * cos(latitude) adjustment, has error that increases more rapidly than distance (ratio of max_error/distance increases for larger distances), so you need to carefully consider the maximum distance you will measure, and the acceptable error. In addition, it is not advisable to COMPARE two nearly-equal distances using Pythagorean, to decide which is shorter, as the error is different in different DIRECTIONS (evidence not shown).
Worst case measurements, errorRatio = Abs(error) / distance (Sweden; up to 2000 m):
t_maxHaversineErrorRatio 0.00351012021578681 double
t_maxPolarErrorRatio 66.0825360597085 double
t_maxPythagoreanErrorRatio 0.00350976281416454 double
As mentioned before, the extreme polar errors are for sub-meter distances, where it could report zero instead of 6 cm, or report over 0.5 m for a distance of 1 cm (hence the "66 x" worst case shown in t_maxPolarErrorRatio), but there are also some poor results at larger distances. [Needs to be tested again with a Cosine function that is known to be highly accurate.]
Measurements taken in C# code in Xamarin.Android running on a Moto E4.
C# code:
// x=longitude, y= latitude. oblate spheroid formula. TODO: From where?
public static double calculateDistanceDD_AED( double lon1Xdeg, double lat1Ydeg, double lon2Xdeg, double lat2Ydeg )
{
double c_dblEarthRadius = 6378.135; // km
double c_dblFlattening = 1.0 / 298.257223563; // WGS84 inverse flattening
// Q: Why "-" for longitudes??
double p1x = -degreesToRadians( lon1Xdeg );
double p1y = degreesToRadians( lat1Ydeg );
double p2x = -degreesToRadians( lon2Xdeg );
double p2y = degreesToRadians( lat2Ydeg );
double F = (p1y + p2y) / 2;
double G = (p1y - p2y) / 2;
double L = (p1x - p2x) / 2;
double sing = Math.Sin( G );
double cosl = Math.Cos( L );
double cosf = Math.Cos( F );
double sinl = Math.Sin( L );
double sinf = Math.Sin( F );
double cosg = Math.Cos( G );
double S = sing * sing * cosl * cosl + cosf * cosf * sinl * sinl;
double C = cosg * cosg * cosl * cosl + sinf * sinf * sinl * sinl;
double W = Math.Atan2( Math.Sqrt( S ), Math.Sqrt( C ) );
if (W == 0.0)
return 0.0;
double R = Math.Sqrt( (S * C) ) / W;
double H1 = (3 * R - 1.0) / (2.0 * C);
double H2 = (3 * R + 1.0) / (2.0 * S);
double D = 2 * W * c_dblEarthRadius;
// Apply flattening factor
D = D * (1.0 + c_dblFlattening * H1 * sinf * sinf * cosg * cosg - c_dblFlattening * H2 * cosf * cosf * sing * sing);
// Transform to meters
D = D * 1000.0;
// tmstest
if (true)
{
// Compare Haversine.
double haversine = HaversineApproxDistanceGeo( lon1Xdeg, lat1Ydeg, lon2Xdeg, lat2Ydeg );
double error = haversine - D;
double absError = Math.Abs( error );
double errorRatio = absError / D;
if (errorRatio > t_maxHaversineErrorRatio)
{
if (errorRatio > t_maxHaversineErrorRatio * 1.1)
Helper.test();
t_maxHaversineErrorRatio = errorRatio;
}
// Compare Polar Coordinate Flat Earth.
double polarDistanceGeo = ApproxDistanceGeo_Polar( lon1Xdeg, lat1Ydeg, lon2Xdeg, lat2Ydeg, D );
double error2 = polarDistanceGeo - D;
double absError2 = Math.Abs( error2 );
double errorRatio2 = absError2 / D;
if (errorRatio2 > t_maxPolarErrorRatio)
{
if (polarDistanceGeo > 0)
{
if (errorRatio2 > t_maxPolarErrorRatio * 1.1)
Helper.test();
t_maxPolarErrorRatio = errorRatio2;
}
else
Helper.dubious();
}
// Compare Pythagorean Theorem with Latitude Adjustment.
double pythagoreanDistanceGeo = ApproxDistanceGeo_PythagoreanCosLatitude( lon1Xdeg, lat1Ydeg, lon2Xdeg, lat2Ydeg, D );
double error3 = pythagoreanDistanceGeo - D;
double absError3 = Math.Abs( error3 );
double errorRatio3 = absError3 / D;
if (errorRatio3 > t_maxPythagoreanErrorRatio)
{
if (D < 2000)
{
if (errorRatio3 > t_maxPythagoreanErrorRatio * 1.05)
Helper.test();
t_maxPythagoreanErrorRatio = errorRatio3;
}
}
}
return D;
}
// As a fraction of the distance.
private static double t_maxHaversineErrorRatio, t_maxPolarErrorRatio, t_maxPythagoreanErrorRatio;
// Average of equatorial and polar radii (meters).
public const double EarthAvgRadius = 6371000;
public const double EarthAvgCircumference = EarthAvgRadius * 2 * PI;
// CAUTION: This is an average of great circles; won't be the actual distance of any longitude or latitude degree.
public const double EarthAvgMeterPerGreatCircleDegree = EarthAvgCircumference / 360;
// Haversine formula (assumes Earth is sphere).
// "deg" = degrees.
// Perhaps based on Haversine Formula in https://cs.nyu.edu/visual/home/proj/tiger/gisfaq.html
public static double HaversineApproxDistanceGeo(double lon1Xdeg, double lat1Ydeg, double lon2Xdeg, double lat2Ydeg)
{
double lon1 = degreesToRadians( lon1Xdeg );
double lat1 = degreesToRadians( lat1Ydeg );
double lon2 = degreesToRadians( lon2Xdeg );
double lat2 = degreesToRadians( lat2Ydeg );
double dlon = lon2 - lon1;
double dlat = lat2 - lat1;
double sinDLat2 = Sin( dlat / 2 );
double sinDLon2 = Sin( dlon / 2 );
double a = sinDLat2 * sinDLat2 + Cos( lat1 ) * Cos( lat2 ) * sinDLon2 * sinDLon2;
double c = 2 * Atan2( Sqrt( a ), Sqrt( 1 - a ) );
double d = EarthAvgRadius * c;
return d;
}
// From https://stackoverflow.com/a/19772119/199364
// Based on Polar Coordinate Flat Earth in https://cs.nyu.edu/visual/home/proj/tiger/gisfaq.html
public static double ApproxDistanceGeo_Polar( double lon1deg, double lat1deg, double lon2deg, double lat2deg, double D = 0 )
{
double approxUnitDistSq = ApproxUnitDistSq_Polar(lon1deg, lat1deg, lon2deg, lat2deg, D);
double c = Sqrt( approxUnitDistSq );
return EarthAvgRadius * c;
}
// Might be useful to avoid taking Sqrt, when comparing to some threshold.
// Threshold would have to be adjusted to match: Power(threshold / EarthAvgRadius, 2)
private static double ApproxUnitDistSq_Polar(double lon1deg, double lat1deg, double lon2deg, double lat2deg, double D = 0 )
{
const double HalfPi = PI / 2; //1.5707963267949;
double lon1 = degreesToRadians(lon1deg);
double lat1 = degreesToRadians(lat1deg);
double lon2 = degreesToRadians(lon2deg);
double lat2 = degreesToRadians(lat2deg);
double a = HalfPi - lat1;
double b = HalfPi - lat2;
double u = a * a + b * b;
double dlon21 = lon2 - lon1;
double cosDeltaLon = Cos( dlon21 );
double v = -2 * a * b * cosDeltaLon;
// TBD: Is "Abs" necessary? That is, is "u + v" ever negative?
// (I think not; "v" looks like a secondary term. Though might be round-off issue near zero when a~=b.)
double approxUnitDistSq = Abs(u + v);
//if (approxUnitDistSq.nearlyEquals(0, 1E-16))
// Helper.dubious();
//else if (D > 0)
//{
// double dba = b - a;
// double unitD = D / EarthAvgRadius;
// double unitDSq = unitD * unitD;
// if (approxUnitDistSq > 2 * unitDSq)
// Helper.dubious();
// else if (approxUnitDistSq * 2 < unitDSq)
// Helper.dubious();
//}
return approxUnitDistSq;
}
// Pythagorean Theorem with Latitude Adjustment - from Ewan Todd - https://stackoverflow.com/a/1664836/199364
// Refined by ToolmakerSteve - https://stackoverflow.com/a/53468745/199364
public static double ApproxDistanceGeo_PythagoreanCosLatitude( double lon1deg, double lat1deg, double lon2deg, double lat2deg, double D = 0 )
{
double approxDegreesSq = ApproxDegreesSq_PythagoreanCosLatitude( lon1deg, lat1deg, lon2deg, lat2deg );
// approximate degrees on the great circle between the points.
double d_degrees = Sqrt( approxDegreesSq );
return d_degrees * EarthAvgMeterPerGreatCircleDegree;
}
public static double ApproxDegreesSq_PythagoreanCosLatitude( double lon1deg, double lat1deg, double lon2deg, double lat2deg )
{
double avgLatDeg = average( lat1deg , lat2deg );
double avgLat = degreesToRadians( avgLatDeg );
double d_ew = (lon2deg - lon1deg) * Cos( avgLat );
double d_ns = (lat2deg - lat1deg);
double approxDegreesSq = d_ew * d_ew + d_ns * d_ns;
return approxDegreesSq;
}
I did it using a SQL query:
select *, (acos(sin(input_lat* 0.01745329)*sin(lattitude *0.01745329) + cos(input_lat *0.01745329)*cos(lattitude *0.01745329)*cos((input_long -longitude)*0.01745329))* 57.29577951 )* 69.16 As D from table_name
Following is a module (coded in Fortran 90) containing three formulas discussed in the previous answers. You can either put this module at the top of your program (before PROGRAM MAIN) or compile it separately and include the module directory during compilation. The first two formulas are great-circle distances based on the assumption that the earth is spherical.
module spherical_dists
contains
subroutine great_circle_distance(lon1,lat1,lon2,lat2,dist)
!https://en.wikipedia.org/wiki/Great-circle_distance
! It takes lon, lats of two points on an assumed spherical earth and
! calculates the distance between them along the great circle connecting the two points
implicit none
real,intent(in)::lon1,lon2,lat1,lat2
real,intent(out)::dist
real,parameter::pi=3.141592,mean_earth_radius=6371.0088
real::lonr1,lonr2,latr1,latr2
real::delangl,dellon
lonr1=lon1*(pi/180.);lonr2=lon2*(pi/180.)
latr1=lat1*(pi/180.);latr2=lat2*(pi/180.)
dellon=lonr2-lonr1
delangl=acos(sin(latr1)*sin(latr2)+cos(latr1)*cos(latr2)*cos(dellon))
dist=delangl*mean_earth_radius
end subroutine
subroutine haversine_formula(lon1,lat1,lon2,lat2,dist)
! https://en.wikipedia.org/wiki/Haversine_formula
! This is similar to the one above but numerically better conditioned for small distances
implicit none
real,intent(in)::lon1,lon2,lat1,lat2
!lon, lats of two points
real,intent(out)::dist
real,parameter::pi=3.141592,mean_earth_radius=6371.0088
real::lonr1,lonr2,latr1,latr2
real::delangl,dellon,dellat,a
! degrees are converted to radians
lonr1=lon1*(pi/180.);lonr2=lon2*(pi/180.)
latr1=lat1*(pi/180.);latr2=lat2*(pi/180.)
dellon=lonr2-lonr1 ! These dels simplify the haversine formula
dellat=latr2-latr1
! The actual haversine formula
a=(sin(dellat/2))**2+cos(latr1)*cos(latr2)*(sin(dellon/2))**2
delangl=2*asin(sqrt(a)) !2*asin(sqrt(a))
dist=delangl*mean_earth_radius
end subroutine
subroutine vincenty_formula(lon1,lat1,lon2,lat2,dist)
!https://en.wikipedia.org/wiki/Vincenty%27s_formulae
!This is the special case of Vincenty's formula for a sphere; unlike the previous two it stays numerically well conditioned for all distances, including nearly antipodal points
implicit none
real,intent(in)::lon1,lon2,lat1,lat2
real,intent(out)::dist
real,parameter::pi=3.141592,mean_earth_radius=6371.0088
real::lonr1,lonr2,latr1,latr2
real::delangl,dellon,nom,denom
lonr1=lon1*(pi/180.);lonr2=lon2*(pi/180.)
latr1=lat1*(pi/180.);latr2=lat2*(pi/180.)
dellon=lonr2-lonr1
nom=sqrt((cos(latr2)*sin(dellon))**2. + (cos(latr1)*sin(latr2)-sin(latr1)*cos(latr2)*cos(dellon))**2.)
denom=sin(latr1)*sin(latr2)+cos(latr1)*cos(latr2)*cos(dellon)
delangl=atan2(nom,denom)
dist=delangl*mean_earth_radius
end subroutine
end module
On this page you can see the whole code and the formulas by which distances between locations are calculated in the Android Location class:
android/location/Location.java
EDIT: Following the hint from @Richard, I put the code of the linked function into my answer, to avoid an invalidated link:
private static void computeDistanceAndBearing(double lat1, double lon1,
double lat2, double lon2, BearingDistanceCache results) {
// Based on http://www.ngs.noaa.gov/PUBS_LIB/inverse.pdf
// using the "Inverse Formula" (section 4)
int MAXITERS = 20;
// Convert lat/long to radians
lat1 *= Math.PI / 180.0;
lat2 *= Math.PI / 180.0;
lon1 *= Math.PI / 180.0;
lon2 *= Math.PI / 180.0;
double a = 6378137.0; // WGS84 major axis
double b = 6356752.3142; // WGS84 semi-minor axis
double f = (a - b) / a;
double aSqMinusBSqOverBSq = (a * a - b * b) / (b * b);
double L = lon2 - lon1;
double A = 0.0;
double U1 = Math.atan((1.0 - f) * Math.tan(lat1));
double U2 = Math.atan((1.0 - f) * Math.tan(lat2));
double cosU1 = Math.cos(U1);
double cosU2 = Math.cos(U2);
double sinU1 = Math.sin(U1);
double sinU2 = Math.sin(U2);
double cosU1cosU2 = cosU1 * cosU2;
double sinU1sinU2 = sinU1 * sinU2;
double sigma = 0.0;
double deltaSigma = 0.0;
double cosSqAlpha = 0.0;
double cos2SM = 0.0;
double cosSigma = 0.0;
double sinSigma = 0.0;
double cosLambda = 0.0;
double sinLambda = 0.0;
double lambda = L; // initial guess
for (int iter = 0; iter < MAXITERS; iter++) {
double lambdaOrig = lambda;
cosLambda = Math.cos(lambda);
sinLambda = Math.sin(lambda);
double t1 = cosU2 * sinLambda;
double t2 = cosU1 * sinU2 - sinU1 * cosU2 * cosLambda;
double sinSqSigma = t1 * t1 + t2 * t2; // (14)
sinSigma = Math.sqrt(sinSqSigma);
cosSigma = sinU1sinU2 + cosU1cosU2 * cosLambda; // (15)
sigma = Math.atan2(sinSigma, cosSigma); // (16)
double sinAlpha = (sinSigma == 0) ? 0.0 :
cosU1cosU2 * sinLambda / sinSigma; // (17)
cosSqAlpha = 1.0 - sinAlpha * sinAlpha;
cos2SM = (cosSqAlpha == 0) ? 0.0 :
cosSigma - 2.0 * sinU1sinU2 / cosSqAlpha; // (18)
double uSquared = cosSqAlpha * aSqMinusBSqOverBSq; // defn
A = 1 + (uSquared / 16384.0) * // (3)
(4096.0 + uSquared *
(-768 + uSquared * (320.0 - 175.0 * uSquared)));
double B = (uSquared / 1024.0) * // (4)
(256.0 + uSquared *
(-128.0 + uSquared * (74.0 - 47.0 * uSquared)));
double C = (f / 16.0) *
cosSqAlpha *
(4.0 + f * (4.0 - 3.0 * cosSqAlpha)); // (10)
double cos2SMSq = cos2SM * cos2SM;
deltaSigma = B * sinSigma * // (6)
(cos2SM + (B / 4.0) *
(cosSigma * (-1.0 + 2.0 * cos2SMSq) -
(B / 6.0) * cos2SM *
(-3.0 + 4.0 * sinSigma * sinSigma) *
(-3.0 + 4.0 * cos2SMSq)));
lambda = L +
(1.0 - C) * f * sinAlpha *
(sigma + C * sinSigma *
(cos2SM + C * cosSigma *
(-1.0 + 2.0 * cos2SM * cos2SM))); // (11)
double delta = (lambda - lambdaOrig) / lambda;
if (Math.abs(delta) < 1.0e-12) {
break;
}
}
float distance = (float) (b * A * (sigma - deltaSigma));
results.mDistance = distance;
float initialBearing = (float) Math.atan2(cosU2 * sinLambda,
cosU1 * sinU2 - sinU1 * cosU2 * cosLambda);
initialBearing *= 180.0 / Math.PI;
results.mInitialBearing = initialBearing;
float finalBearing = (float) Math.atan2(cosU1 * sinLambda,
-sinU1 * cosU2 + cosU1 * sinU2 * cosLambda);
finalBearing *= 180.0 / Math.PI;
results.mFinalBearing = finalBearing;
results.mLat1 = lat1;
results.mLat2 = lat2;
results.mLon1 = lon1;
results.mLon2 = lon2;
}
just use the distance formula Sqrt( (x2-x1)^2 + (y2-y1)^2 )
I have an application that defines a real-world rectangle on top of an image/photograph; of course, in 2D it may not appear as a rectangle because you are looking at it from an angle.
The problem is, say the rectangle needs to have grid lines drawn on it. For example, if it is 3x5, I need to draw 2 lines from side 1 to side 3, and 4 lines from side 2 to side 4.
Right now I am breaking up each line into equidistant parts, to get the start and end points of all the grid lines. However, the more of an angle the rectangle is viewed at, the more "incorrect" these lines become, as horizontal lines further from you should be closer together.
Does anyone know the name of the algorithm that I should be searching for?
Yes I know you can do this in 3D, however I am limited to 2D for this particular application.
Here's the solution.
The basic idea is you can find the perspective correct "center" of your rectangle by connecting the corners diagonally. The intersection of the two resulting lines is your perspective correct center. From there you subdivide your rectangle into four smaller rectangles, and you repeat the process. The number of times depends on how accurate you want it. You can subdivide to just below the size of a pixel for effectively perfect perspective.
Then in your subrectangles you just apply your standard uncorrected "textured" triangles, or rectangles or whatever.
You can perform this algorithm without going to the complex trouble of building a 'real' 3D world. It's also good if you do have a real 3D world modeled, but your textured triangles are not perspective-corrected in hardware, or you need a performant way to get perspective-correct planes without per-pixel rendering trickery.
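The one geometric primitive that subdivision needs is the intersection of the two diagonals; here is a small C++ sketch of that step (the struct and names are illustrative):
struct Pt { double x, y; };

// Perspective-correct "center" of a quad: the intersection of diagonal a-c with diagonal b-d.
// Corners given in order a = top-left, b = top-right, c = bottom-right, d = bottom-left.
Pt diagonalCenter(Pt a, Pt b, Pt c, Pt d)
{
    double d1x = c.x - a.x, d1y = c.y - a.y;    // direction of diagonal a -> c
    double d2x = d.x - b.x, d2y = d.y - b.y;    // direction of diagonal b -> d
    double denom = d1x * d2y - d1y * d2x;       // zero only if the quad is degenerate
    double t = ((b.x - a.x) * d2y - (b.y - a.y) * d2x) / denom;
    return { a.x + t * d1x, a.y + t * d1y };
}
This gives the perspective-correct centre; the rest of the subdivision into four sub-quads proceeds as described in the answer.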
Image: Example of Bilinear & Perspective Transform (Note: the height of the top and bottom horizontal grid rows is actually half of the remaining rows' height, in both drawings)
========================================
I know this is an old question, but I have a generic solution, so I decided to publish it hoping it will be useful to future readers.
The code below can draw an arbitrary perspective grid without the need for repetitive computations.
I actually began with a similar problem: to draw a 2D perspective grid and then transform the underlying image to restore the perspective.
I started to read here:
http://www.imagemagick.org/Usage/distorts/#bilinear_forward
and then here (the Leptonica Library):
http://www.leptonica.com/affine.html
where I found this:
When you look at an object in a plane from some arbitrary direction at a finite distance, you get an additional "keystone" distortion in the image. This is a projective transform, which keeps straight lines straight but does not preserve the angles between lines. This warping cannot be described by a linear affine transformation, and in fact differs by x- and y-dependent terms in the denominator.
The transformation is not linear, as many people already pointed out in this thread. It involves solving a linear system of 8 equations (once) to compute the 8 required coefficients and then you can use them to transform as many points as you want.
To avoid including the whole Leptonica library in my project, I took some pieces of code from it, removed all the special Leptonica data types & macros, fixed some memory leaks, and converted it to a C++ class (mostly for encapsulation reasons) which does just one thing:
It maps a (Qt) QPointF float (x,y) coordinate to the corresponding Perspective Coordinate.
If you want to adapt the code to another C++ library, the only thing to redefine/substitute is the QPointF coordinate class.
I hope some future readers would find it useful.
The code below is divided into 3 parts:
A. An example on how to use the genImageProjective C++ class to draw a 2D perspective Grid
B. genImageProjective.h file
C. genImageProjective.cpp file
//============================================================
// C++ Code Example on how to use the
// genImageProjective class to draw a perspective 2D Grid
//============================================================
#include "genImageProjective.h"
// Input: 4 Perspective-Tranformed points:
// perspPoints[0] = top-left
// perspPoints[1] = top-right
// perspPoints[2] = bottom-right
// perspPoints[3] = bottom-left
void drawGrid(QPointF *perspPoints)
{
(...)
// Setup a non-transformed area rectangle
// I use a simple square rectangle here because in this case we are not interested in the source-rectangle,
// (we want to just draw a grid on the perspPoints[] area)
// but you can use any arbitrary rectangle to perform a real mapping to the perspPoints[] area
QPointF topLeft = QPointF(0,0);
QPointF topRight = QPointF(1000,0);
QPointF bottomRight = QPointF(1000,1000);
QPointF bottomLeft = QPointF(0,1000);
float width = topRight.x() - topLeft.x();
float height = bottomLeft.y() - topLeft.y();
// Setup Projective trasform object
genImageProjective imageProjective;
imageProjective.sourceArea[0] = topLeft;
imageProjective.sourceArea[1] = topRight;
imageProjective.sourceArea[2] = bottomRight;
imageProjective.sourceArea[3] = bottomLeft;
imageProjective.destArea[0] = perspPoints[0];
imageProjective.destArea[1] = perspPoints[1];
imageProjective.destArea[2] = perspPoints[2];
imageProjective.destArea[3] = perspPoints[3];
// Compute projective transform coefficients
if (imageProjective.computeCoeefficients() != 0)
return; // This can actually fail if any 3 points of Source or Dest are colinear
// Initialize Grid parameters (without transform)
float gridFirstLine = 0.1f; // The normalized position of first Grid Line (0.0 to 1.0)
float gridStep = 0.1f; // The normalized grid size (= distance between grid lines: 0.0 to 1.0)
// Draw Horizontal Grid lines
QPointF lineStart, lineEnd, tempPnt;
for (float pos = gridFirstLine; pos <= 1.0f; pos += gridStep)
{
// Compute Grid Line Start
tempPnt = QPointF(topLeft.x(), topLeft.y() + pos*width);
imageProjective.mapSourceToDestPoint(tempPnt, lineStart);
// Compute Grid Line End
tempPnt = QPointF(topRight.x(), topLeft.y() + pos*width);
imageProjective.mapSourceToDestPoint(tempPnt, lineEnd);
// Draw Horizontal Line (use your preferred method to draw the line)
(...)
}
// Draw Vertical Grid lines
for (float pos = gridFirstLine; pos <= 1.0f; pos += gridStep)
{
// Compute Grid Line Start
tempPnt = QPointF(topLeft.x() + pos*height, topLeft.y());
imageProjective.mapSourceToDestPoint(tempPnt, lineStart);
// Compute Grid Line End
tempPnt = QPointF(topLeft.x() + pos*height, bottomLeft.y());
imageProjective.mapSourceToDestPoint(tempPnt, lineEnd);
// Draw Vertical Line (use your preferred method to draw the line)
(...)
}
(...)
}
==========================================
//========================================
//C++ Header File: genImageProjective.h
//========================================
#ifndef GENIMAGE_H
#define GENIMAGE_H
#include <QPointF>
// Class to transform an Image Point using Perspective transformation
class genImageProjective
{
public:
genImageProjective();
int computeCoeefficients(void);
int mapSourceToDestPoint(QPointF& sourcePoint, QPointF& destPoint);
public:
QPointF sourceArea[4]; // Source Image area limits (Rectangular)
QPointF destArea[4]; // Destination Image area limits (Perspectivelly Transformed)
private:
static int gaussjordan(float **a, float *b, int n);
bool coefficientsComputed;
float vc[8]; // Vector of Transform Coefficients
};
#endif // GENIMAGE_H
//========================================
//========================================
//C++ CPP File: genImageProjective.cpp
//========================================
#include <math.h>
#include "genImageProjective.h"
// ----------------------------------------------------
// class genImageProjective
// ----------------------------------------------------
genImageProjective::genImageProjective()
{
sourceArea[0] = sourceArea[1] = sourceArea[2] = sourceArea[3] = QPointF(0,0);
destArea[0] = destArea[1] = destArea[2] = destArea[3] = QPointF(0,0);
coefficientsComputed = false;
}
// --------------------------------------------------------------
// Compute projective transform coefficients
// RetValue: 0: Success, !=0: Error
/*-------------------------------------------------------------*
* Projective coordinate transformation *
*-------------------------------------------------------------*/
/*!
* computeCoeefficients()
*
* Input: this->sourceArea[4]: (source 4 points; unprimed)
* this->destArea[4]: (transformed 4 points; primed)
* this->vc (computed vector of transform coefficients)
* Return: 0 if OK; <0 on error
*
* We have a set of 8 equations, describing the projective
* transformation that takes 4 points (sourceArea) into 4 other
* points (destArea). These equations are:
*
* x1' = (c[0]*x1 + c[1]*y1 + c[2]) / (c[6]*x1 + c[7]*y1 + 1)
* y1' = (c[3]*x1 + c[4]*y1 + c[5]) / (c[6]*x1 + c[7]*y1 + 1)
* x2' = (c[0]*x2 + c[1]*y2 + c[2]) / (c[6]*x2 + c[7]*y2 + 1)
* y2' = (c[3]*x2 + c[4]*y2 + c[5]) / (c[6]*x2 + c[7]*y2 + 1)
* x3' = (c[0]*x3 + c[1]*y3 + c[2]) / (c[6]*x3 + c[7]*y3 + 1)
* y3' = (c[3]*x3 + c[4]*y3 + c[5]) / (c[6]*x3 + c[7]*y3 + 1)
* x4' = (c[0]*x4 + c[1]*y4 + c[2]) / (c[6]*x4 + c[7]*y4 + 1)
* y4' = (c[3]*x4 + c[4]*y4 + c[5]) / (c[6]*x4 + c[7]*y4 + 1)
*
* Multiplying both sides of each eqn by the denominator, we get
*
* AC = B
*
* where B and C are column vectors
*
* B = [ x1' y1' x2' y2' x3' y3' x4' y4' ]
* C = [ c[0] c[1] c[2] c[3] c[4] c[5] c[6] c[7] ]
*
* and A is the 8x8 matrix
*
* x1 y1 1 0 0 0 -x1*x1' -y1*x1'
* 0 0 0 x1 y1 1 -x1*y1' -y1*y1'
* x2 y2 1 0 0 0 -x2*x2' -y2*x2'
* 0 0 0 x2 y2 1 -x2*y2' -y2*y2'
* x3 y3 1 0 0 0 -x3*x3' -y3*x3'
* 0 0 0 x3 y3 1 -x3*y3' -y3*y3'
* x4 y4 1 0 0 0 -x4*x4' -y4*x4'
* 0 0 0 x4 y4 1 -x4*y4' -y4*y4'
*
* These eight equations are solved here for the coefficients C.
*
* These eight coefficients can then be used to find the mapping
* (x,y) --> (x',y'):
*
* x' = (c[0]x + c[1]y + c[2]) / (c[6]x + c[7]y + 1)
* y' = (c[3]x + c[4]y + c[5]) / (c[6]x + c[7]y + 1)
*
*/
int genImageProjective::computeCoeefficients(void)
{
int retValue = 0;
int i;
float *a[8]; /* 8x8 matrix A */
float *b = this->vc; /* rhs vector of primed coords X'; coeffs returned in vc[] */
b[0] = destArea[0].x();
b[1] = destArea[0].y();
b[2] = destArea[1].x();
b[3] = destArea[1].y();
b[4] = destArea[2].x();
b[5] = destArea[2].y();
b[6] = destArea[3].x();
b[7] = destArea[3].y();
for (i = 0; i < 8; i++)
a[i] = NULL;
for (i = 0; i < 8; i++)
{
if ((a[i] = (float *)calloc(8, sizeof(float))) == NULL)
{
retValue = -100; // ERROR_INT("a[i] not made", procName, 1);
goto Terminate;
}
}
a[0][0] = sourceArea[0].x();
a[0][1] = sourceArea[0].y();
a[0][2] = 1.;
a[0][6] = -sourceArea[0].x() * b[0];
a[0][7] = -sourceArea[0].y() * b[0];
a[1][3] = sourceArea[0].x();
a[1][4] = sourceArea[0].y();
a[1][5] = 1;
a[1][6] = -sourceArea[0].x() * b[1];
a[1][7] = -sourceArea[0].y() * b[1];
a[2][0] = sourceArea[1].x();
a[2][1] = sourceArea[1].y();
a[2][2] = 1.;
a[2][6] = -sourceArea[1].x() * b[2];
a[2][7] = -sourceArea[1].y() * b[2];
a[3][3] = sourceArea[1].x();
a[3][4] = sourceArea[1].y();
a[3][5] = 1;
a[3][6] = -sourceArea[1].x() * b[3];
a[3][7] = -sourceArea[1].y() * b[3];
a[4][0] = sourceArea[2].x();
a[4][1] = sourceArea[2].y();
a[4][2] = 1.;
a[4][6] = -sourceArea[2].x() * b[4];
a[4][7] = -sourceArea[2].y() * b[4];
a[5][3] = sourceArea[2].x();
a[5][4] = sourceArea[2].y();
a[5][5] = 1;
a[5][6] = -sourceArea[2].x() * b[5];
a[5][7] = -sourceArea[2].y() * b[5];
a[6][0] = sourceArea[3].x();
a[6][1] = sourceArea[3].y();
a[6][2] = 1.;
a[6][6] = -sourceArea[3].x() * b[6];
a[6][7] = -sourceArea[3].y() * b[6];
a[7][3] = sourceArea[3].x();
a[7][4] = sourceArea[3].y();
a[7][5] = 1;
a[7][6] = -sourceArea[3].x() * b[7];
a[7][7] = -sourceArea[3].y() * b[7];
retValue = gaussjordan(a, b, 8);
Terminate:
// Clean up
for (i = 0; i < 8; i++)
{
if (a[i])
free(a[i]);
}
this->coefficientsComputed = (retValue == 0);
return retValue;
}
/*-------------------------------------------------------------*
* Gauss-jordan linear equation solver *
*-------------------------------------------------------------*/
/*
* gaussjordan()
*
* Input: a (n x n matrix)
* b (rhs column vector)
* n (dimension)
* Return: 0 if ok, 1 on error
*
* Note side effects:
* (1) the matrix a is transformed to its inverse
* (2) the vector b is transformed to the solution X to the
* linear equation AX = B
*
* Adapted from "Numerical Recipes in C, Second Edition", 1992
* pp. 36-41 (gauss-jordan elimination)
*/
#define SWAP(a,b) {temp = (a); (a) = (b); (b) = temp;}
int genImageProjective::gaussjordan(float **a, float *b, int n)
{
int retValue = 0;
int i, icol=0, irow=0, j, k, l, ll;
int *indexc = NULL, *indexr = NULL, *ipiv = NULL;
float big, dum, pivinv, temp;
if (!a)
{
retValue = -1; // ERROR_INT("a not defined", procName, 1);
goto Terminate;
}
if (!b)
{
retValue = -2; // ERROR_INT("b not defined", procName, 1);
goto Terminate;
}
if ((indexc = (int *)calloc(n, sizeof(int))) == NULL)
{
retValue = -3; // ERROR_INT("indexc not made", procName, 1);
goto Terminate;
}
if ((indexr = (int *)calloc(n, sizeof(int))) == NULL)
{
retValue = -4; // ERROR_INT("indexr not made", procName, 1);
goto Terminate;
}
if ((ipiv = (int *)calloc(n, sizeof(int))) == NULL)
{
retValue = -5; // ERROR_INT("ipiv not made", procName, 1);
goto Terminate;
}
for (i = 0; i < n; i++)
{
big = 0.0;
for (j = 0; j < n; j++)
{
if (ipiv[j] != 1)
{
for (k = 0; k < n; k++)
{
if (ipiv[k] == 0)
{
if (fabs(a[j][k]) >= big)
{
big = fabs(a[j][k]);
irow = j;
icol = k;
}
}
else if (ipiv[k] > 1)
{
retValue = -6; // ERROR_INT("singular matrix", procName, 1);
goto Terminate;
}
}
}
}
++(ipiv[icol]);
if (irow != icol)
{
for (l = 0; l < n; l++)
SWAP(a[irow][l], a[icol][l]);
SWAP(b[irow], b[icol]);
}
indexr[i] = irow;
indexc[i] = icol;
if (a[icol][icol] == 0.0)
{
retValue = -7; // ERROR_INT("singular matrix", procName, 1);
goto Terminate;
}
pivinv = 1.0 / a[icol][icol];
a[icol][icol] = 1.0;
for (l = 0; l < n; l++)
a[icol][l] *= pivinv;
b[icol] *= pivinv;
for (ll = 0; ll < n; ll++)
{
if (ll != icol)
{
dum = a[ll][icol];
a[ll][icol] = 0.0;
for (l = 0; l < n; l++)
a[ll][l] -= a[icol][l] * dum;
b[ll] -= b[icol] * dum;
}
}
}
for (l = n - 1; l >= 0; l--)
{
if (indexr[l] != indexc[l])
{
for (k = 0; k < n; k++)
SWAP(a[k][indexr[l]], a[k][indexc[l]]);
}
}
Terminate:
if (indexr)
free(indexr);
if (indexc)
free(indexc);
if (ipiv)
free(ipiv);
return retValue;
}
// --------------------------------------------------------------
// Map a source point to destination using projective transform
// --------------------------------------------------------------
// Params:
// sourcePoint: initial point
// destPoint: transformed point
// RetValue: 0: Success, !=0: Error
// --------------------------------------------------------------
// Notes:
// 1. You must call once computeCoeefficients() to compute
// the this->vc[] vector of 8 coefficients, before you call
// mapSourceToDestPoint().
// 2. If there was an error or the 8 coefficients were not computed,
// a -1 is returned and destPoint is just set to sourcePoint value.
// --------------------------------------------------------------
int genImageProjective::mapSourceToDestPoint(QPointF& sourcePoint, QPointF& destPoint)
{
if (coefficientsComputed)
{
float factor = 1.0f / (vc[6] * sourcePoint.x() + vc[7] * sourcePoint.y() + 1.);
destPoint.setX( factor * (vc[0] * sourcePoint.x() + vc[1] * sourcePoint.y() + vc[2]) );
destPoint.setY( factor * (vc[3] * sourcePoint.x() + vc[4] * sourcePoint.y() + vc[5]) );
return 0;
}
else // There was an error while computing coefficients
{
destPoint = sourcePoint; // just copy the source to destination...
return -1; // ...and return an error
}
}
//========================================
Using Breton's subdivision method (which is related to Mongo's extension method) will get you accurate arbitrary power-of-two divisions. To split into non-power-of-two divisions using those methods you will have to subdivide to sub-pixel spacing, which can be computationally expensive.
However, I believe you may be able to apply a variation of Haga's Theorem (which is used in origami to divide a side into Nths given a side divided into (N-1)ths) to the perspective-square subdivisions to produce arbitrary divisions from the closest power of 2 without having to continue subdividing.
The most elegant and fastest solution would be to find the homography matrix, which maps rectangle coordinates to photo coordinates.
With a decent matrix library it should not be a difficult task, as long as you know your math.
Keywords: Collineation, Homography, Direct Linear Transformation
However, the recursive algorithm above should work, but probably if your resources are limited, projective geometry is the only way to go.
I think the selected answer is not the best solution available. A better solution is to apply a perspective (projective) transformation of a rectangle to a simple grid, as the following MATLAB script and image show. You can implement this algorithm with C++ and OpenCV as well.
function drawpersgrid
sz = [ 24, 16 ]; % [x y]
srcpt = [ 0 0; sz(1) 0; 0 sz(2); sz(1) sz(2)];
destpt = [ 20 50; 100 60; 0 150; 200 200;];
% make rectangular grid
[X,Y] = meshgrid(0:sz(1),0:sz(2));
% find projective transform matching corner points
tform = maketform('projective',srcpt,destpt);
% apply the projective transform to the grid
[X1,Y1] = tformfwd(tform,X,Y);
hold on;
%% find grid
for i=1:sz(2)
for j=1:sz(1)
x = [ X1(i,j);X1(i,j+1);X1(i+1,j+1);X1(i+1,j);X1(i,j)];
y = [ Y1(i,j);Y1(i,j+1);Y1(i+1,j+1);Y1(i+1,j);Y1(i,j)];
plot(x,y,'b');
end
end
hold off;
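For the C++/OpenCV route mentioned above, a rough equivalent of the MATLAB script might look like the sketch below (corner ordering follows the script; double-check the OpenCV API against the version you use):
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Transform the intersections of a (w x h)-cell rectangular grid into photo space.
// photoCorners must be in the same order as the rectangle corners below.
std::vector<cv::Point2f> perspectiveGrid(int w, int h, const std::vector<cv::Point2f>& photoCorners)
{
    std::vector<cv::Point2f> rectCorners = {
        {0.f, 0.f}, {(float)w, 0.f}, {0.f, (float)h}, {(float)w, (float)h} };
    cv::Mat H = cv::getPerspectiveTransform(rectCorners, photoCorners);  // 3x3 homography

    std::vector<cv::Point2f> gridPts, out;
    for (int y = 0; y <= h; ++y)
        for (int x = 0; x <= w; ++x)
            gridPts.emplace_back((float)x, (float)y);
    cv::perspectiveTransform(gridPts, out, H);   // apply the homography to every grid intersection
    return out;
}
Drawing the grid is then just a matter of connecting consecutive transformed points row by row and column by column, as the MATLAB loop does.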
In the special case when you look perpendicular to sides 1 and 3, you can divide those sides in equal parts. Then draw a diagonal, and draw parallels to side 1 through each intersection of the diagonal and the dividing lines drawn earlier.
This is a geometric solution I thought out. I do not know whether the 'algorithm' has a name.
Say you want to start by dividing the 'rectangle' into n pieces with vertical lines first.
The goal is to place points P1..Pn-1 on the top line, through which we can draw lines to the point where the left and right lines meet, or parallel to them when such a point does not exist.
If the top and bottom lines are parallel to each other, just place those points so that they split the top line between the corners equidistantly.
Otherwise, place n points Q1..Qn on the left line so that these and the top-left corner are equidistant, and i < j => Qi is closer to the top-left corner than Qj.
To map the Q-points to the top line, find the intersection S of the line from Qn through the top-right corner with the parallel to the left line through the intersection of the top and bottom lines. Now connect S with Q1..Qn-1. The intersections of the new lines with the top line are the wanted P-points.
Do the same, analogously, for the horizontal lines.
Given a rotation around the y axis, especially if the rotated surfaces are planar, the perspective is generated by vertical gradients, which get progressively closer together in perspective. Instead of using diagonals to define four rectangles (which can work for powers of two), define two rectangles, left and right. They'll eventually be taller than they are wide if one continues dividing the surface into narrower vertical segments. This can accommodate surfaces that are not square. If the rotation is around the x axis, then horizontal gradients are needed instead.
What you need to do is represent it in 3D (world) and then project it down to 2D (screen).
This will require a 4x4 transformation matrix that projects a 4D homogeneous vector down to a 3D homogeneous vector, which you can then convert down to a 2D screen-space vector.
I couldn't find it in Google either, but a good computer graphics book will have the details.
Keywords are projection matrix, projection transformation, affine transformation, homogeneous vector, world space, screen space, perspective transformation, 3D transformation
And by the way, this usually takes a few lectures to explain all of that. So good luck.