How can I do efficient range searching + counting with latitude/longitude data?

I'm working with a large set of points represented by latitude/longitude pairs (the points are not necessarily unique, there could be several points in the set that are at the same location). The points are stored in a database.
What I need to do is figure out a way to efficiently perform a search to get the number of points that lie within a given radius (say, 25 miles) of an arbitrary point.
The count does not need to be 100% accurate - more importantly, it just has to be fast, and reasonably close to the correct count. Doing this with SQL is possible, by using a query with some trigonometry in the WHERE clause to filter points by their distance to a reference point. Unfortunately, this query is very, very expensive and caching is not likely to provide much help because the locations will be very spread out.
I'm ultimately looking to build some kind of in-memory structure that will be able to handle this kind of operation efficiently - trading off some of the accuracy and liveness of the data (maybe rebuilding it only once a day) in return for speed. I've been doing some research on kd-trees, but I'm not clear yet on how well this can be applied to latitude/longitude data (as opposed to x,y data in a 2d plane).
If anybody has any ideas or solutions I should look into, I'd really appreciate it - so thanks in advance.

I don't think you should use this solution. Having thought about it again a few days later: because the distances are measured from a single reference point, the grid "squares" are really bounded by circles rather than a uniform grid, so the further away from 0,0 you are, the less accurate this will be!
What I did was to have 2 additional values on my PostalCode class. Whenever I update the Long/Lat on a PostalCode I calculate an X,Y distance from Long 0, Lat 0.
public static class MathExtender
{
    // Great-circle distance in statute miles, via the spherical law of cosines.
    public static double GetDistanceBetweenPoints(double sourceLatitude, double sourceLongitude, double destLatitude, double destLongitude)
    {
        double theta = sourceLongitude - destLongitude;
        double distance =
            Math.Sin(DegToRad(sourceLatitude))
            * Math.Sin(DegToRad(destLatitude))
            + Math.Cos(DegToRad(sourceLatitude))
            * Math.Cos(DegToRad(destLatitude))
            * Math.Cos(DegToRad(theta));
        distance = Math.Acos(distance);      // central angle in radians
        distance = RadToDeg(distance);       // ... in degrees
        distance = distance * 60 * 1.1515;   // 60 nautical miles per degree, 1.1515 statute miles per nautical mile
        return (distance);
    }

    public static double DegToRad(double degrees)
    {
        return (degrees * Math.PI / 180.0);
    }

    public static double RadToDeg(double radians)
    {
        return (radians / Math.PI * 180.0);
    }
}
Then I update my class like so:

private void CalculateGridReference()
{
    // X/Y distance in miles from (lat 0, long 0), measured along the equator and the prime meridian
    GridReferenceX = MathExtender.GetDistanceBetweenPoints(0, 0, 0, Longitude);
    GridReferenceY = MathExtender.GetDistanceBetweenPoints(0, 0, Latitude, 0);
}
So now I have an X,Y grid distance (in miles) from grid reference 0,0 for each row in my DB. If I want to find all places within 5 miles of a long/lat, I would first get its X,Y grid reference (say 25,75), then search 20..30, 70..80 in the DB, and further filter the results in memory using
MathExtender.GetDistanceBetweenPoints(candidate.Lat, candidate.Long, search.Lat, search.Long) < TheRadiusOfInterest
The in-DB part is ultra fast, and the in-memory part works on a smaller set to make it ultra accurate.
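For illustration, here is a minimal Python sketch of this two-phase scheme (the row layout is hypothetical; in a real system the box check would be the indexed SQL WHERE clause rather than an in-memory loop):

import math

EARTH_RADIUS_MILES = 3958.8

def great_circle_miles(lat1, lon1, lat2, lon2):
    # spherical law of cosines, same formula as MathExtender above
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    c = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return EARTH_RADIUS_MILES * math.acos(max(-1.0, min(1.0, c)))

def count_within(rows, center_lat, center_lon, radius_miles):
    # rows: (lat, lon, grid_x, grid_y) tuples with the precomputed grid miles
    cx = great_circle_miles(0, 0, 0, center_lon)  # X grid miles of the center
    cy = great_circle_miles(0, 0, center_lat, 0)  # Y grid miles of the center
    count = 0
    for lat, lon, gx, gy in rows:
        # cheap box prefilter on the precomputed grid coordinates
        if abs(gx - cx) > radius_miles or abs(gy - cy) > radius_miles:
            continue
        # exact refinement on the survivors
        if great_circle_miles(lat, lon, center_lat, center_lon) <= radius_miles:
            count += 1
    return count

Note that, like the original scheme, the distances from 0,0 are unsigned, so points mirrored across the equator or the prime meridian collide; this is only safe if all the data sits in one quadrant.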

Use R-Trees.
In Oracle, using Oracle Spatial, you can create an index:
CREATE INDEX ix_spatial ON spatial_table (locations) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
that will create an R-Tree for you and search over it.
You may use any Earth model you like: WGS84, PZ-90, etc.

Use some kind of search tree for spatial data, e.g. a quad tree. More such data structures are referenced under "See also".
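To make the quad-tree idea concrete, here is a minimal sketch in Python, treating lat/lon as planar x/y (the flat-earth approximation the question worries about; good enough for small radii away from the poles):

class QuadTree:
    """Minimal point quad tree; counts points inside an axis-aligned box."""
    MAX_POINTS = 16   # leaf capacity before splitting
    MAX_DEPTH = 12    # guards against infinite splits on duplicate points

    def __init__(self, x0, y0, x1, y1, depth=0):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1
        self.depth = depth
        self.points = []
        self.children = None  # four sub-quadrants once split

    def insert(self, x, y):
        if self.children is not None:
            self._child(x, y).insert(x, y)
            return
        self.points.append((x, y))
        if len(self.points) > self.MAX_POINTS and self.depth < self.MAX_DEPTH:
            self._split()

    def _split(self):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        self.children = [QuadTree(self.x0, self.y0, mx, my, self.depth + 1),
                         QuadTree(mx, self.y0, self.x1, my, self.depth + 1),
                         QuadTree(self.x0, my, mx, self.y1, self.depth + 1),
                         QuadTree(mx, my, self.x1, self.y1, self.depth + 1)]
        points, self.points = self.points, []
        for x, y in points:
            self._child(x, y).insert(x, y)

    def _child(self, x, y):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        return self.children[(1 if x >= mx else 0) + (2 if y >= my else 0)]

    def count_in_box(self, bx0, by0, bx1, by1):
        # prune whole subtrees whose bounds miss the query box
        if bx1 < self.x0 or bx0 > self.x1 or by1 < self.y0 or by0 > self.y1:
            return 0
        if self.children is not None:
            return sum(c.count_in_box(bx0, by0, bx1, by1) for c in self.children)
        return sum(1 for x, y in self.points if bx0 <= x <= bx1 and by0 <= y <= by1)

For a radius query, count within the circle's bounding box and refine the leaf points by exact distance; the box-only version above keeps the sketch short. The depth cap matters here because the question says several points can share one location.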

This UDF (SQL Server) will get you the distance between two lat/lon points:
CREATE FUNCTION [dbo].[zipDistance] (
    @Lat1 decimal(11, 6),
    @Lon1 decimal(11, 6),
    @Lat2 decimal(11, 6),
    @Lon2 decimal(11, 6)
)
RETURNS decimal(11, 6) AS
BEGIN
    IF @Lat1 = @Lat2 AND @Lon1 = @Lon2
        RETURN 0 /* same lat/lon points, 0 distance */
    DECLARE @x decimal(18,13)
    SET @x = 0.0
    /* degrees -> radians */
    SET @Lat1 = @Lat1 * PI() / 180
    SET @Lon1 = @Lon1 * PI() / 180
    SET @Lat2 = @Lat2 * PI() / 180
    SET @Lon2 = @Lon2 * PI() / 180
    /* accurate to +/- 30 feet */
    SET @x = Sin(@Lat1) * Sin(@Lat2) + Cos(@Lat1) * Cos(@Lat2) * Cos(@Lon2 - @Lon1)
    IF 1 = @x
        RETURN 0
    DECLARE @EarthRad decimal(5,1)
    SET @EarthRad = 3963.1 /* mean Earth radius in miles */
    /* acos(@x) expressed via ATAN, times the Earth radius */
    RETURN @EarthRad * (-1 * ATAN(@x / SQRT(1 - @x * @x)) + PI() / 2)
END
And, obviously, you can use it in a query, passing the reference point and the row's lat/lon columns, such as:
SELECT * FROM table WHERE [dbo].[zipDistance](@refLat, @refLon, lat, lon) < 25.0

You can find an excellent explanation of Bombe's suggestion in Jan Philip Matuschek's article "Finding Points Within a Distance of a Latitude/Longitude Using Bounding Coordinates".

Could you maybe supply a sample of your existing expensive query?
If you're doing proper great-circle calculation based on taking the sine() and cosine() of the reference point and the other data points, then a very substantial optimisation could be made by actually storing those sin/cos values in the database in addition to the lat/long values.
Alternatively, just use your database to extract a rectangle of lat/long ranges that match, and only afterwards filter out the ones that are outside the true circular radius.
But do bear in mind that one degree of longitude is a somewhat shorter distance at high latitudes than at the equator. It should be easy to figure out the right aspect ratio for that rectangle, though. You'd also have errors if you need to consider areas very close to the poles, as a rectangular selection wouldn't cope with a circle that overlapped a pole.
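To make the sin/cos-precomputation idea above concrete, here is a Python sketch (the row layout is made up): the per-row trigonometry reduces to a single cosine of the longitude difference, and comparing cos(distance) against cos(radius) avoids calling acos entirely.

import math

EARTH_RADIUS_KM = 6371.0

def prepare_row(lat_deg, lon_deg):
    # done once at insert/update time and stored alongside lat/lon
    lat = math.radians(lat_deg)
    return {"lon_rad": math.radians(lon_deg),
            "sin_lat": math.sin(lat),
            "cos_lat": math.cos(lat)}

def within_radius(rows, ref_lat_deg, ref_lon_deg, radius_km):
    ref_lat = math.radians(ref_lat_deg)
    ref_lon = math.radians(ref_lon_deg)
    sin_ref, cos_ref = math.sin(ref_lat), math.cos(ref_lat)
    # cos is decreasing on [0, pi], so cos(angle) >= cos(radius)
    # is equivalent to angle <= radius; no acos per row needed
    min_cos = math.cos(radius_km / EARTH_RADIUS_KM)
    return [r for r in rows
            if sin_ref * r["sin_lat"] + cos_ref * r["cos_lat"]
               * math.cos(r["lon_rad"] - ref_lon) >= min_cos]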

Related

Finding translation and scale on two sets of points to get least square error in their distance?

I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs, i.e. which point from one set represents which in the second one. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is least (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and might be omitted for the sake of simplicity and speed). And so my question is: is this solved and available somewhere on the Internet? Personally, I would use a least-squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.
If there is no such solution, I will calculate it by myself; I don't want to bother you too much.
SOLUTION: (from your answers)
For me it's the Kabsch algorithm;
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
STILL NOT SOLVED:
I also need scale. The scale values from SVD are not understandable to me; when I need a scale of about 1-4 for all axes (estimated by me), the SVD scale is about [2000, 200, 20], which is not helping at all.
Since you are already using the Kabsch algorithm, just have a look at Umeyama's paper, which extends it to get the scale. All you need to do is get the standard deviation of your points and calculate the scale as:
(1/sigma^2) * trace(D * S)
where D is the diagonal matrix from the SVD decomposition in the rotation estimation and S is either the identity matrix or the diag(1, 1, -1) matrix, depending on the sign of the determinant of U * V (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by ±1 (depending on the sign of the determinant of U * V), sum the three, and divide by the standard deviation of your points to get the scale.
You can recycle the following code, which uses the Eigen library:
typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // Microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. For other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d

/**
 * @brief rigidly aligns two sets of poses
 *
 * This calculates such a relative pose <tt>R, t</tt>, such that:
 *
 * @code
 * _TyVector v_pose = R * r_vertices[i] + t;
 * double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
 * @endcode
 *
 * The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
 *
 * @param[in] r_vertices is a set of vertices to be aligned
 * @param[in] r_tar_vertices is a set of vertices to align to
 *
 * @return Returns a relative pose that rigidly aligns the two given sets of poses.
 *
 * @note This requires the two sets of poses to have the corresponding vertices stored under the same index.
 */
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
    const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
    _ASSERTE(r_tar_vertices.size() == r_vertices.size());
    const size_t n = r_vertices.size();

    Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        v_center_tar3 += r_tar_vertices[i];
        v_center3 += r_vertices[i];
    }
    v_center_tar3 /= double(n);
    v_center3 /= double(n);
    // calculate centers of positions

    double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
    Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
        Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
        // get both vertices

        f_sd2 += v_vert_i.squaredNorm();
        f_sd2_tar += v_vert_i_tar.squaredNorm();
        // accumulate squared standard deviation (only one of those is really needed)

        t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
        // accumulate covariance
    }
    // calculate the covariance matrix

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    // calculate the SVD

    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    // compute the rotation

    double f_det = R.determinant();
    Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
    // calculate determinant of V*U^T to disambiguate rotation sign

    if(f_det < 0)
        R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
    // recompute the rotation part if the determinant was negative

    R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
    // renormalize the rotation (not needed but gives slightly more orthogonal transformations)

    double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
    double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
    // calculate the scale

    R *= f_inv_scale;
    // apply scale

    Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
    // want to align center with ground truth

    return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
For 3D points the problem is known as the Absolute Orientation problem. A C++ implementation is available from Eigen http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60 and the paper http://web.stanford.edu/class/cs273/refs/umeyama.pdf
You can use it via OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.
Start by translating both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between the two centroids.
Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from the other by applying some linear operator (which performs both scaling and rotation). This operator is represented by a 3x3 matrix X:
P * X = Q
To find the proper scale/rotation we just need to solve this matrix equation, find X, then decompose it into several matrices, each representing some scaling or rotation.
A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transpose of P (to get rid of non-square matrices), then multiply both sides by the inverse of P^T * P:
P^T * P * X = P^T * Q
X = (P^T * P)^-1 * P^T * Q
Applying singular value decomposition to matrix X gives two rotation matrices and a matrix with scale factors:
X = U * S * V^T
Here S is a diagonal matrix with scale factors (one scale for each coordinate), and U and V are rotation matrices: one properly rotates the points so that they may be scaled along the coordinate axes, the other rotates them once more to align their orientation to the second set of points.
Example (2D points are used for simplicity):

P = [  1   2 ]    Q = [  7.5391    4.3455 ]
    [  2   3 ]        [ 12.9796    5.8897 ]
    [ -2   1 ]        [ -4.5847    5.3159 ]
    [ -1  -6 ]        [-15.9340  -15.5511 ]

After solving the equation:

X = [ 3.3417  -1.2573 ]
    [ 2.0987   2.8014 ]

After SVD decomposition:

U = [ -0.7317  -0.6816 ]
    [ -0.6816   0.7317 ]

S = [ 4  0 ]
    [ 0  3 ]

V = [ -0.9689  -0.2474 ]
    [ -0.2474   0.9689 ]
Here SVD has properly reconstructed all manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale X axis by 4, scale Y axis by 3, rotate by the angle -0.25.
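A short NumPy sketch of the whole procedure, which should reproduce the example above up to rounding (SVD sign conventions can flip U and V relative to the matrices shown):

import numpy as np

P = np.array([[ 1.,  2.], [ 2.,  3.], [-2.,  1.], [-1., -6.]])
Q = np.array([[  7.5391,   4.3455], [ 12.9796,   5.8897],
              [ -4.5847,   5.3159], [-15.9340, -15.5511]])

# Normal-equations solve of P @ X = Q; np.linalg.lstsq(P, Q) is the
# numerically safer equivalent of X = (P^T P)^-1 P^T Q.
X = np.linalg.solve(P.T @ P, P.T @ Q)

U, S, Vt = np.linalg.svd(X)    # X == U @ np.diag(S) @ Vt
print(np.round(X, 4))          # ~ [[3.3417, -1.2573], [2.0987, 2.8014]]
print(np.round(S, 4))          # ~ [4., 3.], the per-axis scale factors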
If sets of points are scaled uniformly (scale factor is equal by each axis), this procedure may be significantly simplified.
Just use the Kabsch algorithm to get the translation/rotation values, then apply that translation and rotation (the centroids should coincide with the origin of the coordinate system). Then, for each pair of points (and for each coordinate), estimate a linear regression. The linear regression coefficient is exactly the scale factor.
A good explanation: Finding optimal rotation and translation between corresponding 3D points.
The code is in MATLAB but it's trivial to convert to OpenCV using the cv::SVD function.
You might want to try ICP (Iterative closest point).
Given two sets of 3d points, it will tell you the transformation (rotation + translation) to go from the first set to the second one.
If you're interested in a lightweight C++ implementation, try libicp.
Good luck!
The general transformation, as well as the scale, can be retrieved via Procrustes analysis. It works by superimposing the objects on top of each other and estimating the transformation from that setting. It has been used in the context of ICP many times. In fact, your preferred Kabsch algorithm is a special case of this.
Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution, while being quite efficient. A Matlab implementation is also available.
Scale can be inferred without SVD if your points are uniformly scaled in all directions (I could not make sense of SVD's scale matrix either). Here is how I solved the same problem:
Measure the distances from each point to the other points in the point cloud to get a 2D table of distances, where the entry at (i,j) is norm(point_i - point_j). Do the same for the other point cloud, so you get two tables, one for the original and the other for the reconstructed points.
Divide all values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table would have all of its values equal to each other, and that common value is the scale.
The median of the divided values should be pretty close to the scale you are looking for. The mean value is also close, but I chose the median just to exclude outliers.
Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.
Tip: If there are too many points in the point clouds to find distances between all of them, then a smaller subset of distances will work, too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g when one point cloud is directly derived from the other by just rotating it.
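A sketch of this median-of-distance-ratios estimate in NumPy (the array names are mine; sampling random pairs implements the tip above about using a subset):

import numpy as np

def estimate_scale(original, reconstructed, n_pairs=1000, seed=0):
    # original, reconstructed: (N, 3) arrays with matching row order
    rng = np.random.default_rng(seed)
    n = len(original)
    i = rng.integers(0, n, n_pairs)
    j = rng.integers(0, n, n_pairs)
    d_orig = np.linalg.norm(original[i] - original[j], axis=1)
    d_rec = np.linalg.norm(reconstructed[i] - reconstructed[j], axis=1)
    valid = (i != j) & (d_rec > 0)  # drop degenerate zero-distance pairs
    return np.median(d_orig[valid] / d_rec[valid])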
You can also use the ScaleRatio ICP proposed by BaoweiLin.
The code can be found on GitHub.

Distances between houses, Google Directions API query limit is too low, need better algorithm

I need to rent two houses. I want them to be as close as possible. There are about 300 houses available for rent. I wish to use the Google Maps Directions API to calculate the walking distance between any two available houses so then I can sort the list and choose two that are close.
Everything works great, except that Google sets a theoretical limit of 2,500 queries per day (and in practice the limit is much lower, at only 250 per day). I have 300^2/2 - 300 = 44,700 queries to make, so obviously this limit is not enough for me.
This would be a one-time thing. Any hints on how I can accomplish what I need with the Google Maps API? Can I somehow run the program distributed so the limit will affect only one instance? Would Google App Engine help?
I also welcome advice for improving the algorithm. If two houses are far apart, and another house is close to one of them, it means I would not have to check that third house against the remaining one, as they are probably far apart too. I also care more about the qualitative nature of the algorithm than the exact distances, so maybe there is a simple approximation I can make that will result in fewer queries.
Thanks,
The geographic distance between any two houses, as the crow flies, will be a strict lower bound on the walking distance. So I'd start with 300 queries to get the long/lat for each house, plug them into the Haversine formula (for example) to get the distances between the ~45,000 unordered pairs, and sort them to get the closest pairs by geographic distance. Then, with some likely candidates in hand, you can start checking the walking distances with another set of calls to the Google API.
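A sketch of that plan in Python (the walking-distance API call is left out, since only the crow-flies ranking matters here):

import math
from itertools import combinations

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def closest_pairs(houses, n_candidates=100):
    # houses: list of (id, lat, lon); 300 houses -> ~45,000 pairs, fine in memory
    pairs = [(haversine_km(a[1], a[2], b[1], b[2]), a[0], b[0])
             for a, b in combinations(houses, 2)]
    pairs.sort()
    return pairs[:n_candidates]  # check these against the walking-distance API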
Consider that you are a pizza deliverer and you want to calculate your effective range (where you can go within 30 minutes), and you want to make a colored 3D bar graph of the N to E section of that time data (the example image, with bogus data, is omitted here).
And you want to include something like 100k houses... Well, at least I heard that a program like this was made before limits were introduced in Google Maps. The limits just bite hard in this case.
If you have the geo locations of all the houses, then you can compute a prediction of how far apart the points are on the earth as the crow flies. Sort the pairs based on that, and fetch results for the best predictions first.
Edit: added a Java code example that could be useful when creating the predictions:
/**
 * Thaddeus Vincenty's inverse method formula (spherical special case) for
 * the geographical distance between two given points on earth.
 *
 * @param L1
 *            geographical latitude of standpoint in radians
 * @param G1
 *            geographical longitude of standpoint in radians
 * @param L2
 *            geographical latitude of destination in radians
 * @param G2
 *            geographical longitude of destination in radians
 * @return geographical distance in kilometres
 */
public static double getDistance(final double L1, final double G1,
        final double L2, final double G2) {
    double delta, p0, p1, p2, p3;
    // The average radius for a spherical approximation of Earth
    double rEarth = 6371.01d;
    delta = G1 - G2;
    p0 = Math.cos(L2) * Math.cos(delta);
    p1 = Math.cos(L2) * Math.sin(delta);
    p2 = Math.cos(L1) * Math.sin(L2) - Math.sin(L1) * p0;
    p3 = Math.sin(L1) * Math.sin(L2) + Math.cos(L1) * p0;
    return rEarth * Math.atan2(Math.sqrt(p1 * p1 + p2 * p2), p3);
}

/**
 * Rounds double to nr number of decimal places.
 *
 * @param d
 *            floating-point number
 * @param nr
 *            decimal places to keep
 * @return rounded number with nr decimal places
 */
public static double round(double d, int nr) {
    return new java.math.BigDecimal(Double.toString(d)).setScale(nr,
            java.math.BigDecimal.ROUND_HALF_UP).doubleValue();
}

public static void main(String[] args) {
    // arguments are decimal degrees; convert to radians before calling
    double L1 = Math.toRadians(Double.parseDouble(args[0]));
    double G1 = Math.toRadians(Double.parseDouble(args[1]));
    double L2 = Math.toRadians(Double.parseDouble(args[2]));
    double G2 = Math.toRadians(Double.parseDouble(args[3]));
    System.out.println(round(getDistance(L1, G1, L2, G2), 2));
}
I would use this formula:
distance = Math.acos(Math.sin(lat1) * Math.sin(lat2) +
                     Math.cos(lat1) * Math.cos(lat2) *
                     Math.cos(lon2 - lon1)) * 6371;
(latitudes and longitudes in radians; 6371 km is the Earth's mean radius) to rank all 45,000 house pairs by distance as the crow flies.
I would then take the top 250 results ranked by shortest distance and run them through Google to get the accurate walking distance and re-rank.
I don't know if this works with the directions method, but you could interleave (nest) several directions queries into a single query. A Google chunk can hold up to 24 directions per chunk, so you can increase your throughput to a maximum of 250 * 24 = 6,000 directions per day. Maybe you could change your IP address after 6,000 queries, and Google would then let you query more than 6,000 directions per day? I got the interleaving idea from the geweb TSP solver, where he interleaves 24 cities from a query matrix into one chunk, saving up to 22 single queries and thus reducing bandwidth and the impact of Google's API limitation.

Algorithm to control acceleration until a position is reached

I have a point that moves (in one dimension), and I need it to move smoothly. So I think that its velocity has to be a continuous function, and I need to control the acceleration and then calculate its velocity and position.
The algorithm doesn't seem obvious to me, but I guess this must be a common problem; I just can't find the solution.
Notes:
The final destination of the object may change while it's moving and the movement needs to be smooth anyway.
I guess that a naive implementation would produce bouncing, and I need to avoid that.
This is a perfect candidate for using a "critically damped spring".
Conceptually you attach the point to the target point with a spring, or piece of elastic. The spring is damped so that you get no 'bouncing'. You can control how fast the system reacts by changing a constant called the "SpringConstant". This is essentially how strong the piece of elastic is.
Basically you apply two forces to the position, then integrate this over time. The first force is that applied by the spring, Fs = SpringConstant * DistanceToTarget. The second is the damping force, Fd = -CurrentVelocity * 2 * sqrt( SpringConstant ).
The CurrentVelocity forms part of the state of the system, and can be initialised to zero.
In each step, you multiply the sum of these two forces by the time step; this gives you the change in CurrentVelocity. Multiplying the updated velocity by the time step again gives you the displacement, which we add to the actual position of the point.
In C++ code:
#include <cmath>

const float SPRING_CONSTANT = 5.0f; // ~5 is a good starting value, see below

float CriticallyDampedSpring( float a_Target,
                              float a_Current,
                              float & a_Velocity,
                              float a_TimeStep )
{
    float currentToTarget = a_Target - a_Current;
    float springForce = currentToTarget * SPRING_CONSTANT;
    // critical damping: damping coefficient = 2 * sqrt( spring constant )
    float dampingForce = -a_Velocity * 2 * std::sqrt( SPRING_CONSTANT );
    float force = springForce + dampingForce;
    a_Velocity += force * a_TimeStep;
    float displacement = a_Velocity * a_TimeStep;
    return a_Current + displacement;
}
In systems I was working with, a value of around 5 was a good point to start experimenting with for the spring constant. Setting it too high will result in too fast a reaction; set it too low and the point will react too slowly.
Note, you might be best to make a class that keeps the velocity state rather than have to pass it into the function over and over.
I hope this is helpful, good luck :)
EDIT: In case it's useful for others, it's easy to apply this to 2 or 3 dimensions. In this case you can just apply the CriticallyDampedSpring independently once for each dimension. Depending on the motion you want you might find it better to work in polar coordinates (for 2D), or spherical coordinates (for 3D).
I'd do something like Alex Deem's answer for trajectory planning, but with limits on force and velocity:
In pseudocode:

xtarget: target position
vtarget: target velocity
x: object position
v: object velocity
dt: timestep

F = Ki * (xtarget - x) + Kp * (vtarget - v);
F = clipMagnitude(F, Fmax);
v = v + F * dt;
v = clipMagnitude(v, vmax);
x = x + v * dt;

clipMagnitude(y, ymax):
    r = magnitude(y) / ymax
    if (r <= 1)
        return y;
    else
        return y * (1/r);
where Ki and Kp are tuning constants, Fmax and vmax are maximum force and velocity. This should work for 1-D, 2-D, or 3-D situations (magnitude(y) = abs(y) in 1-D, otherwise use vector magnitude).
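A direct Python transcription of that pseudocode for the 1-D case, with arbitrary starting values for the tuning constants:

def clip_magnitude(y, ymax):
    r = abs(y) / ymax
    return y if r <= 1 else y / r

def step(x, v, x_target, v_target, dt, Ki=10.0, Kp=5.0, F_max=50.0, v_max=20.0):
    # one integration step of the force- and velocity-limited controller
    F = Ki * (x_target - x) + Kp * (v_target - v)
    F = clip_magnitude(F, F_max)
    v = clip_magnitude(v + F * dt, v_max)
    return x + v * dt, v

# drive a point from rest at 0 toward a (possibly moving) target at 10
x, v = 0.0, 0.0
for _ in range(500):
    x, v = step(x, v, x_target=10.0, v_target=0.0, dt=0.02)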
It's not quite clear exactly what you're after, but I'm going to assume the following:
There is some maximum acceleration;
You want the object to have stopped moving when it reaches the destination;
Unlike velocity, you do not require acceleration to be continuous.
Let A be the maximum acceleration (by which I mean the acceleration is always between -A and A).
The equation you want is
v_f^2 = v_i^2 + 2 a d
where v_f = 0 is the final velocity, v_i is the initial (current) velocity, and d is the distance to the destination (when you switch from acceleration A to acceleration -A -- that is, from speeding up to slowing down; here I'm assuming d is positive).
Solving:
d = v_i^2 / (2A)
is the distance. (The negatives cancel).
If the current distance remaining is greater than d, speed up as quickly as possible. Otherwise, begin slowing down.
Let's say you update the object's position every t_step seconds. Then:
new_position = old_position + old_velocity * t_step + (1/2)a(t_step)^2
new_velocity = old_velocity + a * t_step.
If the destination is between new_position and old_position (i.e., the object reached its destination in between updates), simply set new_position = destination.
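A Python sketch of this speed-up-then-brake rule (the values of A and the timestep are arbitrary; the destination may change between calls, matching the question's requirement):

def update(position, velocity, destination, A=4.0, dt=0.01):
    remaining = destination - position
    direction = 1.0 if remaining >= 0 else -1.0
    stopping_distance = velocity * velocity / (2 * A)
    # brake if we could not otherwise stop in time, else accelerate
    if velocity * direction > 0 and abs(remaining) <= stopping_distance:
        a = -direction * A
    else:
        a = direction * A
    new_position = position + velocity * dt + 0.5 * a * dt * dt
    new_velocity = velocity + a * dt
    if (destination - new_position) * remaining < 0:
        # crossed the destination between updates: snap to it
        return destination, 0.0
    return new_position, new_velocity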
You need an easing formula, which you would call at a set interval, passing in the time elapsed, start point, end point and duration you want the animation to be.
Doing time-based calculations will account for slow clients and other random hiccups. Since it calculates based on the time elapsed vs. the time in which it has to complete, it will account for slow intervals between calls when returning how far along your point should be in the animation.
The jquery.easing plugin has a ton of easing functions you can look at:
http://gsgd.co.uk/sandbox/jquery/easing/
I've found it best to pass in 0 and 1 as my start and end point, since it will return a floating point between the two, you can easily apply it to the real value you are modifying using multiplication.

Algorithm to find all Latitude Longitude locations within a certain distance from a given Lat Lng location

Given a database of places with Latitude + Longitude locations, such as 40.8120390, -73.4889650, how would I find all locations within a given distance of a specific location?
It doesn't seem very efficient to select all locations from the DB and then go through them one by one, getting the distance from the starting location to see if they are within the specified distance. Is there a good way to narrow down the initially selected locations from the DB? Once I have (or don't?) a narrowed down set of locations, do I still go through them one by one to check the distance, or is there a better way?
The language I do this in doesn't really matter. Thanks!
Start by comparing the distance between latitudes. Each degree of latitude is approximately 69 miles (111 kilometers) apart. The range varies (due to the earth's slightly ellipsoid shape) from 68.703 miles (110.567 km) at the equator to 69.407 miles (111.699 km) at the poles. The distance between two locations will be equal to or larger than the distance between their latitudes.
Note that this is not true for longitudes; the length of each degree of longitude is dependent on the latitude. However, if your data is bounded to some area (a single country, for example), you can calculate minimal and maximal bounds for the longitudes as well.
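A sketch of that prefilter window in Python, under a spherical approximation (for a rigorous bound near the poles, use the cosine of whichever latitude bound is closer to the pole):

import math

def bounding_window(lat_deg, lon_deg, radius_km):
    # km per degree of latitude, taken at its minimum (110.567, at the
    # equator) so the window errs on the wide side
    km_per_deg = 110.567
    dlat = radius_km / km_per_deg
    # a degree of longitude shrinks by cos(latitude); breaks down at the poles
    dlon = radius_km / (km_per_deg * math.cos(math.radians(lat_deg)))
    return (lat_deg - dlat, lat_deg + dlat, lon_deg - dlon, lon_deg + dlon)

# e.g. SELECT ... WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?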
Continue with a low-accuracy, fast distance calculation that assumes a spherical earth:
The great circle distance d between two points with coordinates {lat1,lon1} and {lat2,lon2} is given by:
d = acos(sin(lat1)*sin(lat2)+cos(lat1)*cos(lat2)*cos(lon1-lon2))
A mathematically equivalent formula, which is less subject to rounding error for short distances is:
d = 2*asin(sqrt((sin((lat1-lat2)/2))^2 +
cos(lat1)*cos(lat2)*(sin((lon1-lon2)/2))^2))
d is the distance in radians
distance_km ≈ radius_km * distance_radians ≈ 6371 * d
(6371 km is the average radius of the earth)
This method's computational requirements are minimal, yet the result is very accurate for small distances.
Then, if the point is within the given distance, more or less, use a more accurate method.
GeographicLib is the most accurate implementation I know, though Vincenty's inverse formula may be used as well.
If you are using an RDBMS, set the latitude as the primary key and the longitude as a secondary key. Query for a latitude range, or for a latitude/longitude range, as described above, then calculate the exact distances for the result set.
Note that modern versions of all major RDBMSs support geographical data-types and queries natively.
Based on the current user's latitude and longitude and the distance you want to find, the SQL query is given below.

SELECT * FROM (
    SELECT *, (((acos(sin((@latitude*pi()/180)) * sin((Latitude*pi()/180)) + cos((@latitude*pi()/180)) * cos((Latitude*pi()/180)) * cos(((@longitude - Longitude)*pi()/180)))) * 180/pi()) * 60 * 1.1515 * 1.609344) AS distance
    FROM Distances
) t
WHERE distance <= @distance

@latitude and @longitude are the latitude and longitude of the point.
Latitude and Longitude are the columns of the Distances table; pi() is the SQL built-in. The factor 60 * 1.1515 converts degrees to statute miles, and 1.609344 converts miles to kilometres.
Thanks, Yogihosting.
I have in my database a group of tables from OpenStreetMap and I tested this successfully.
The distance works fine, in meters.
SET @orig_lat = -8.116137;
SET @orig_lon = -34.897488;
SET @dist = 1000;

SELECT *, (((acos(sin((@orig_lat*pi()/180)) * sin((dest.latitude*pi()/180)) + cos((@orig_lat*pi()/180)) * cos((dest.latitude*pi()/180)) * cos(((@orig_lon - dest.longitude)*pi()/180)))) * 180/pi()) * 60 * 1.1515 * 1609.344) AS distance
FROM nodes AS dest
HAVING distance < @dist
ORDER BY distance ASC
LIMIT 100;
PostgreSQL GIS extensions might be helpful - as in, it may already implement much of the functionality you are thinking of implementing.
As biziclop mentioned, some sort of metric space tree would probably be your best option. I have experience using kd-trees and quad trees to do these sorts of range queries and they're amazingly fast; they're also not that hard to write. I'd suggest looking into one of these structures, as they also let you answer other interesting questions like "what's the closest point in my data set to this other point?"
What you need is spatial search. You can use Solr spatial search. It also has a lat/long data type built in; check here.
You may convert latitude/longitude to UTM, which is a metric format that may help you calculate distances. Then you can easily decide whether a point falls within a specific area.
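For example, with the pyproj library (the UTM zone, 33N here, is chosen purely for illustration; pick the zone covering your data):

import math
from pyproj import Transformer

# WGS84 lat/lon -> UTM zone 33N metres (EPSG:32633); always_xy means (lon, lat) order
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)

x1, y1 = to_utm.transform(13.40, 52.52)  # lon, lat of point A
x2, y2 = to_utm.transform(13.45, 52.50)  # lon, lat of point B
dist_m = math.hypot(x1 - x2, y1 - y2)    # planar distance in metres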
Since you say that any language is acceptable, the natural choice is PostGIS:
SELECT * FROM places
WHERE ST_DistanceSpheroid(geom, $location, $spheroid) < $max_metres;
If you want to use the WGS84 datum, you should set $spheroid to 'SPHEROID["WGS 84",6378137,298.257223563]'
Assuming that you have indexed places by the geom column, this should be reasonably efficient.
Thanks to the solution provided by @yogihosting, I was able to achieve a similar result from schemaless columns of MySQL with the code shown below:
// the named query parameters below will be bound to these values
$criteria = [];
$criteria['latitude'] = '9.0285183';
$criteria['longitude'] = '7.4869546';
$criteria['distance'] = 500;
$criteria['skill'] = 'software developer';

// Get doctrine connection
$conn = $this->getEntityManager()->getConnection();

$sql = '
    SELECT DISTINCT m.uuid AS phone,
        (((acos(sin((:latitude*pi()/180)) * sin((JSON_EXTRACT(m.location, "$.latitude")*pi()/180)) + cos((:latitude*pi()/180)) *
        cos((JSON_EXTRACT(m.location, "$.latitude")*pi()/180)) *
        cos(((:longitude - JSON_EXTRACT(m.location, "$.longitude"))*pi()/180))))*180/pi())*60*1.1515*1.609344) AS distance
    FROM member_profile AS m
    INNER JOIN member_card_subscription mcs ON mcs.primary_identity = m.uuid
    WHERE mcs.end > now() AND JSON_SEARCH(m.skill_logic, "one", :skill) IS NOT NULL
        AND (((acos(sin((:latitude*pi()/180)) * sin((JSON_EXTRACT(m.location, "$.latitude")*pi()/180)) + cos((:latitude*pi()/180)) *
        cos((JSON_EXTRACT(m.location, "$.latitude")*pi()/180)) *
        cos(((:longitude - JSON_EXTRACT(m.location, "$.longitude"))*pi()/180))))*180/pi())*60*1.1515*1.609344) <= :distance
    ORDER BY distance
';
$stmt = $conn->prepare($sql);
$stmt->execute(['latitude' => $criteria['latitude'], 'longitude' => $criteria['longitude'],
                'skill' => $criteria['skill'], 'distance' => $criteria['distance']]);
var_dump($stmt->fetchAll());
Please note the above code snippet uses a Doctrine DB connection and PHP.
You may check this equation; I think it will help:
SELECT id,
       (3959 * acos(cos(radians(37)) * cos(radians(lat)) * cos(radians(lng) - radians(-122))
        + sin(radians(37)) * sin(radians(lat)))) AS distance
FROM markers
HAVING distance < 25
ORDER BY distance
LIMIT 0, 20;

Scaling vectors from a center point?

I'm trying to figure out if I have points that make for example a square:
* *
* *
and let's say I know the center of this square.
I want a formula that will make it, for example, twice its size, but scaled from the center:
* *
* *
* *
* *
Therefore the new shape is twice as large and scaled from the center of the polygon. It has to work for any shape, not just squares.
I'm looking more for the theory behind it more than the implementation.
If you know the center point cp and a point v in the polygon you would like to scale by scale, then:
v2 = v - cp; // get a vector to v relative to the centerpoint
v2_scaled = v2 * scale; // scale the cp-relative-vector
v1_scaled = v2_scaled + cp; // translate the scaled vector back
This translate-scale-translate pattern can be performed on vectors of any dimension.
If you want the shape twice as large, scale the distance of the coordinates to be sqrt(2) times further from the center.
In other words, let's say your point is at (x, y) and the center is (xcent, ycent). Your new point should be at
(xcent + sqrt(2)*(x - xcent), ycent + sqrt(2)*(y - ycent))
This will scale the distances from the new 'origin', (xcent, ycent) in such a way that the area doubles. (Because sqrt(2)*sqrt(2) == 2).
I'm not sure there's a clean way to do this for all types of objects. For relatively simple ones, you should be able to find the "center" as the average of all the X and Y values of the individual points. To double the size, you find the length and angle of a vector from the center to the point. Double the length of the vector, and retain the same angle to get the new point.
Edit: of course, "twice the size" is open to several interpretations (e.g., doubling the perimeter vs. doubling the area) These would change the multiplier used above, but the basic algorithm would remain essentially the same.
To do what you want you need to perform three operations: translate the square so that its centroid coincides with the origin of the coordinate system, scale the resulting square, translate it back.
