Find street intersections within an area using the Google Maps API - algorithm

Given a square area, what is the best way to find the approximate coordinates of every street intersection within the given area?

Since there is no description of your application, I can't tell if you need to use Google Maps or if another data source would answer your needs.
If http://openstreetmap.org fulfills the requirements of your application, then it's easy (a Go sketch follows the list):
1) The OSM API has a request to pull data from a rectangular region. You get XML data.
2) Filter this data to keep only the streets you are interested in, probably the ways tagged with key=highway.
3) Filter again to keep only the nodes belonging to two or more ways.
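A minimal Go sketch of those three steps, assuming an extract saved from the OSM 0.6 map call (https://api.openstreetmap.org/api/0.6/map?bbox=left,bottom,right,top); the file name map.osm and all identifiers are my own:

    package main

    import (
        "encoding/xml"
        "fmt"
        "os"
    )

    // Minimal view of an OSM extract: nodes with coordinates, and ways that
    // reference nodes and carry key/value tags.
    type osmFile struct {
        Nodes []struct {
            ID  int64   `xml:"id,attr"`
            Lat float64 `xml:"lat,attr"`
            Lon float64 `xml:"lon,attr"`
        } `xml:"node"`
        Ways []struct {
            Refs []struct {
                Ref int64 `xml:"ref,attr"`
            } `xml:"nd"`
            Tags []struct {
                K string `xml:"k,attr"`
                V string `xml:"v,attr"`
            } `xml:"tag"`
        } `xml:"way"`
    }

    func main() {
        data, err := os.ReadFile("map.osm") // hypothetical saved extract
        if err != nil {
            panic(err)
        }
        var f osmFile
        if err := xml.Unmarshal(data, &f); err != nil {
            panic(err)
        }
        // Step 2: keep only ways tagged with key=highway, and count how many
        // such ways reference each node.
        count := map[int64]int{}
        for _, w := range f.Ways {
            road := false
            for _, t := range w.Tags {
                if t.K == "highway" {
                    road = true
                    break
                }
            }
            if !road {
                continue
            }
            for _, nd := range w.Refs {
                count[nd.Ref]++
            }
        }
        // Step 3: nodes referenced by two or more roads are intersection candidates.
        for _, n := range f.Nodes {
            if count[n.ID] >= 2 {
                fmt.Printf("intersection at %.6f, %.6f\n", n.Lat, n.Lon)
            }
        }
    }

Counting references per way is a rough heuristic: a self-intersecting way would need its node refs deduplicated first, which is omitted here for brevity.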
Please disregard this if Google Maps is a requirement.
But still: since the roads exist independently of the database, the above method will yield road intersections (in lat/long coordinates) that correlate closely with what you would get from Google Maps ;-) You can then display those points over a Google map, bearing in mind that the two datasets aren't identical, so the match won't be perfect.

It might not be the easiest method, but I used a separate database of our country's roads with their linestrings.
I took the first and last points of each linestring, then counted the number of roads within 50 m of each start/end point. I then took the nodes from the navigation route and used these to compare the number of roads intersecting with each node. I then looked at the direction from each start point to the next point along that road, which gives you the bearing; from that, with a bit of maths, you can work out the number and angle of the roads at the next intersection (a bearing sketch follows below). I then built a road-rules application that tells you which vehicles to give way to.
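A small Go helper for the bearing step, using the standard great-circle forward-azimuth formula (the function name and example coordinates are mine):

    package main

    import (
        "fmt"
        "math"
    )

    // initialBearing returns the forward azimuth from point 1 to point 2,
    // in degrees clockwise from north, normalised to [0, 360).
    func initialBearing(lat1, lon1, lat2, lon2 float64) float64 {
        toRad := math.Pi / 180
        φ1, φ2 := lat1*toRad, lat2*toRad
        Δλ := (lon2 - lon1) * toRad
        y := math.Sin(Δλ) * math.Cos(φ2)
        x := math.Cos(φ1)*math.Sin(φ2) - math.Sin(φ1)*math.Cos(φ2)*math.Cos(Δλ)
        θ := math.Atan2(y, x) / toRad
        return math.Mod(θ+360, 360)
    }

    func main() {
        // Bearing from one road point to the next; the difference between two
        // such bearings at a shared node gives the angle at the intersection.
        fmt.Printf("%.1f degrees\n", initialBearing(-36.8485, 174.7633, -36.8440, 174.7700))
    }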

Related

Fetching the nearest location of points, while accounting for bodies of water

I've got a database of points (in this case, schools) and in my app, users search for the nearest ones to them. Under the hood, we're currently using ElasticSearch to filter by lat/lng (using Geo Distance, which measures distance as the crow flies). For the majority of places this works fine, but in some coastal areas it will pick up places that are impossible to get to: for example, a radius of 20 miles will pick up schools in Weston-Super-Mare that are in reality 55 miles away.
I initially decided to use the Google Maps Distance Matrix API to filter my initial as-the-crow-flies search, but there's a limit of 25 destinations per query, and as the requests will be dynamic and user-facing, it's not practical to parcel these requests up into small pieces and pop them in a background job.
Is there any way to carry out these calculations while accounting for bodies of water at the database level? The schools are stored in a Postgres database, so I thought about using PostGIS and some kind of point-in-polygon query, but I have no idea where to start looking.
Any ideas are very much appreciated!
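To make the point-in-polygon idea from the question concrete, here is one possible shape of such a query, sketched in Go: keep schools within the radius whose straight line to the searcher does not cross a water polygon. The table and column names (schools, water, geom) are assumptions, and line-crosses-water is only a crude proxy for actual drivability:

    package main

    import (
        "database/sql"
        "fmt"

        _ "github.com/lib/pq"
    )

    const query = `
    SELECT s.id
    FROM   schools s
    WHERE  ST_DWithin(s.geom::geography,
                      ST_SetSRID(ST_MakePoint($1, $2), 4326)::geography, $3)
    AND    NOT EXISTS (
             SELECT 1 FROM water w
             WHERE  ST_Intersects(w.geom,
                    ST_MakeLine(ST_SetSRID(ST_MakePoint($1, $2), 4326),
                                s.geom::geometry))
           );`

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/geo?sslmode=disable")
        if err != nil {
            panic(err)
        }
        // lng, lat of the searcher, radius in metres (~20 miles).
        rows, err := db.Query(query, -2.99, 51.34, 32187.0)
        if err != nil {
            panic(err)
        }
        defer rows.Close()
        for rows.Next() {
            var id int
            if err := rows.Scan(&id); err != nil {
                panic(err)
            }
            fmt.Println("reachable school:", id)
        }
    }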

Fast way to search millions of coordinates by distance

I have a data set of about 20 million coordinates. I want to be able to pass in a latitude, longitude, and distance in miles and return all coordinates that are within that distance of the given coordinates. I need the response time to ideally be sub-50 ms.
I have tried loading all coordinates into memory in a golang service which, on every request, loops through the data and, using the haversine formula, keeps only the coordinates within the given distance of the given coordinate.
With this method, results return in around 2 seconds. What approach would be good for increasing the speed of the results? I am open to any suggestions.
I am toying with the idea of grouping all coordinates by degree and only filtering those nearest to the given coordinates, but I haven't had any luck improving the response times yet. My data set is only a test one, too; the real data could potentially run to hundreds of millions of coordinates.
I think that this is more of a data structure problem. One good way to store large sets of geospatial coordinates is with an R-tree, which provides O(log n) search. I have limited knowledge of Go, but I have used an R-tree to great effect for similarly sized datasets in a similar use case in a JS application. From a quick search it appears there are at least a couple of Go R-tree implementations out there.
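For instance, a sketch against github.com/dhconnelly/rtreego, one of those Go implementations (the API below is as I recall it from the project README; treat the exact signatures as assumptions):

    package main

    import (
        "fmt"

        "github.com/dhconnelly/rtreego"
    )

    type site struct {
        name     string
        location rtreego.Point // {lng, lat} in degrees
    }

    // Bounds makes site satisfy rtreego.Spatial: the point expanded to a tiny box.
    func (s *site) Bounds() *rtreego.Rect {
        return s.location.ToRect(1e-9)
    }

    func main() {
        rt := rtreego.NewTree(2, 25, 50) // 2 dimensions, min/max branching
        rt.Insert(&site{"a", rtreego.Point{-0.12, 51.50}})
        rt.Insert(&site{"b", rtreego.Point{-0.10, 51.52}})
        rt.Insert(&site{"c", rtreego.Point{2.35, 48.86}})

        // Query a bounding box around the search point; a real service would
        // size the box from the mile radius and haversine-filter the hits.
        bb, _ := rtreego.NewRect(rtreego.Point{-0.2, 51.4}, []float64{0.3, 0.2})
        for _, hit := range rt.SearchIntersect(bb) {
            fmt.Println("candidate:", hit.(*site).name)
        }
    }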
The idea would be to have a "grid" that partitions the coordinates, so that when you do a lookup you can safely return all coordinates in cells that lie entirely within the distance, skip cells that are entirely too far from the target, and only do a per-coordinate comparison for the border cells that contain some coordinates within the distance and some outside it.
Simplified to 1D:
Coordinates range from 1 to 100.
You partition them into 10 blocks of 10.
When somebody looks for all coordinates within distance 25 of 47 (i.e. the range [22, 72]):
you return all coordinates in blocks [30,39], [40,49], [50,59], [60,69] wholesale, and then, after per-coordinate analysis of the border blocks [20,29] and [70,79], you additionally return 22-29 and 70-72.
Unfortunately I have no realistic way to estimate the speedup of this approach, so you would need to implement it and benchmark it yourself (a 2D sketch follows below).
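A self-contained Go sketch of the 2D analogue of that 1D example: bucket points into fixed lat/lng cells, then haversine-test only the points in cells the query circle could touch. All names are mine, and for brevity it tests every candidate rather than returning interior cells wholesale:

    package main

    import (
        "fmt"
        "math"
    )

    const earthRadiusMi = 3958.8

    type point struct{ lat, lng float64 }

    func haversineMi(a, b point) float64 {
        toRad := math.Pi / 180
        dLat := (b.lat - a.lat) * toRad
        dLng := (b.lng - a.lng) * toRad
        h := math.Sin(dLat/2)*math.Sin(dLat/2) +
            math.Cos(a.lat*toRad)*math.Cos(b.lat*toRad)*math.Sin(dLng/2)*math.Sin(dLng/2)
        return 2 * earthRadiusMi * math.Asin(math.Sqrt(h))
    }

    type grid struct {
        cell    float64 // cell size in degrees
        buckets map[[2]int][]point
    }

    func newGrid(cell float64) *grid {
        return &grid{cell: cell, buckets: map[[2]int][]point{}}
    }

    func (g *grid) key(p point) [2]int {
        return [2]int{int(math.Floor(p.lat / g.cell)), int(math.Floor(p.lng / g.cell))}
    }

    func (g *grid) insert(p point) {
        k := g.key(p)
        g.buckets[k] = append(g.buckets[k], p)
    }

    func (g *grid) within(center point, miles float64) []point {
        // Convert the radius to cell counts; longitude degrees shrink with
        // latitude, so the east-west scan radius is widened accordingly.
        degLat := miles / 69.0
        degLng := miles / (69.0 * math.Cos(center.lat*math.Pi/180))
        ri := int(math.Ceil(degLat/g.cell)) + 1
        rj := int(math.Ceil(degLng/g.cell)) + 1
        ck := g.key(center)
        var out []point
        for di := -ri; di <= ri; di++ {
            for dj := -rj; dj <= rj; dj++ {
                for _, p := range g.buckets[[2]int{ck[0] + di, ck[1] + dj}] {
                    if haversineMi(center, p) <= miles {
                        out = append(out, p)
                    }
                }
            }
        }
        return out
    }

    func main() {
        g := newGrid(0.1) // roughly 7-mile cells in latitude
        g.insert(point{51.50, -0.12})
        g.insert(point{51.52, -0.10})
        g.insert(point{48.86, 2.35})
        fmt.Println(g.within(point{51.51, -0.11}, 5))
    }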
MongoDB has various geographic searches; $geoNear will allow you to search for points within a specific distance of a point, or within a shape.
https://docs.mongodb.com/manual/reference/operator/aggregation/geoNear/
PostGIS for Postgres has something similar, but I am not too familiar with it.
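For the MongoDB route, a sketch of a $geoNear stage via the official Go driver; it assumes a "places" collection with a 2dsphere index on its location field, and all database/collection names are mine:

    package main

    import (
        "context"
        "fmt"

        "go.mongodb.org/mongo-driver/bson"
        "go.mongodb.org/mongo-driver/mongo"
        "go.mongodb.org/mongo-driver/mongo/options"
    )

    func main() {
        ctx := context.Background()
        client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
        if err != nil {
            panic(err)
        }
        defer client.Disconnect(ctx)

        coll := client.Database("geo").Collection("places")
        pipeline := mongo.Pipeline{
            {{Key: "$geoNear", Value: bson.D{
                {Key: "near", Value: bson.D{
                    {Key: "type", Value: "Point"},
                    {Key: "coordinates", Value: bson.A{-0.12, 51.50}}, // lng, lat
                }},
                {Key: "distanceField", Value: "dist"},
                {Key: "maxDistance", Value: 8046.7}, // metres (~5 miles)
                {Key: "spherical", Value: true},
            }}},
        }
        cur, err := coll.Aggregate(ctx, pipeline)
        if err != nil {
            panic(err)
        }
        defer cur.Close(ctx)
        for cur.Next(ctx) {
            var doc bson.M
            if err := cur.Decode(&doc); err != nil {
                panic(err)
            }
            fmt.Println(doc["_id"], doc["dist"])
        }
    }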

Distance matrix between 500,000 sets of coordinates

I'm working on a project with 500,000 participants. We have in our database the precise coordinates of their home, and we want to release this data to someone who needs it to evaluate how close our participants live to one another.
We are very reluctant to release the precise coordinates, because this is an anonymized project and the risk for re-identification would be very high. Rounded coordinates (to something like 100m or 1km) are apparently not precise enough for what they're trying to achieve.
A nice workaround would have been to send them a 500,000 by 500,000 matrix with the absolute distance between each pair of participants, but this means 250 billion entries, or rather 125 billion if we remove half the matrix since |A-B| = |B-A|.
I've never worked with this type of data before, so I was wondering if anyone had a clever idea on how to deal with this? (Something that would not involve sending them 2 TB of data!)
Thanks.
Provided that the recipient of the data is happy to perform the great-circle calculation to compute the distances themselves, you only need to send the 500,000 rows, but with the latitudes and longitudes translated by a fixed offset.
First of all, identify an approximate geospatial centre of your dataset, then work out the offsets needed to move this centre to 0°N, 0°E. Apply these same offsets to the participants' latitudes and longitudes; this centres the data around the equator and the prime meridian.
Provided your real data isn't too close to the poles, the distance calculated between real points A and B will be very close to the distance between the corresponding offset points.
Obviously the offsets applied need to be kept secret.
This approach may not work if it is known that your data is based around a particular place - the recipient may be able to deduce where the real points are - but that is something you'll need to decide yourself.
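A minimal Go sketch of the re-centring step, using a simple mean as the centroid (fine away from the poles and the antimeridian); all names are mine:

    package main

    import "fmt"

    type point struct{ lat, lng float64 }

    // recentre shifts every point by the offsets that move the dataset's
    // centroid to (0°N, 0°E). The two offsets must be kept secret.
    func recentre(pts []point) []point {
        var sumLat, sumLng float64
        for _, p := range pts {
            sumLat += p.lat
            sumLng += p.lng
        }
        offLat := sumLat / float64(len(pts))
        offLng := sumLng / float64(len(pts))
        out := make([]point, len(pts))
        for i, p := range pts {
            out[i] = point{p.lat - offLat, p.lng - offLng}
        }
        return out
    }

    func main() {
        participants := []point{{51.50, -0.12}, {51.52, -0.10}, {51.48, -0.15}}
        fmt.Println(recentre(participants))
    }

Note the caveat from the answer applies: shifting latitude changes how many miles a degree of longitude spans, so east-west distances are only approximately preserved.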

Select the most fitting road by sketching lines on the map

I have 2D road network data, which contains a lot of road points and the lines that link them.
I want to select a road (or several roads) by drawing lines on the map. The drawn lines can be very inaccurate, and I want to find the best-fitting road. Is there any way to achieve this? Thanks.
========================================================================
UPDATE:
It's not a route-searching problem, i.e. setting a start point and an end point and finding the best route.
What I want to do is: when the user wants to select a certain route on the map, they draw a rough sketch line over it, and then the most fitting route is found and highlighted.
1) Find the roads that link together.
2) Assign a cost to each road-to-road link (only for roads that touch).
3) Pick the links with the lowest cost, recursively (a scoring sketch follows below).
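A minimal Go sketch of the cost-scoring idea: score each candidate road by the summed distance from the sketched points to the road's nearest segment, then pick the lowest total. Planar coordinates and all names are my own simplifications:

    package main

    import (
        "fmt"
        "math"
    )

    type pt struct{ x, y float64 }

    // pointToSegment returns the distance from p to the segment ab.
    func pointToSegment(p, a, b pt) float64 {
        abx, aby := b.x-a.x, b.y-a.y
        l2 := abx*abx + aby*aby
        t := 0.0
        if l2 > 0 {
            t = ((p.x-a.x)*abx + (p.y-a.y)*aby) / l2
            t = math.Max(0, math.Min(1, t))
        }
        cx, cy := a.x+t*abx, a.y+t*aby
        return math.Hypot(p.x-cx, p.y-cy)
    }

    // cost sums, over the sketch points, the distance to the road's nearest segment.
    func cost(sketch, road []pt) float64 {
        total := 0.0
        for _, s := range sketch {
            best := math.Inf(1)
            for i := 0; i+1 < len(road); i++ {
                if d := pointToSegment(s, road[i], road[i+1]); d < best {
                    best = d
                }
            }
            total += best
        }
        return total
    }

    func main() {
        sketch := []pt{{0, 0.2}, {1, 0.1}, {2, -0.1}}
        roads := [][]pt{
            {{0, 0}, {1, 0}, {2, 0}}, // roughly follows the sketch
            {{0, 2}, {1, 2}, {2, 2}}, // far away
        }
        best, bestCost := -1, math.Inf(1)
        for i, r := range roads {
            if c := cost(sketch, r); c < bestCost {
                best, bestCost = i, c
            }
        }
        fmt.Println("best road:", best, "cost:", bestCost)
    }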

What's the name of algorithm/field about positioning many small objects to form shapes?

Sorry, English is not my native language. I would like to know the name of the algorithm/field concerned with positioning small objects to form shapes.
I don't know the term for it, so let me give some examples.
e.g.1.
In cartoons, a swarm of insects will sometimes form a skeleton head in the air.
e.g.2.
In the wars of the 1700s, infantry units were groups of men standing together, forming columns or ranks and changing shape as the battle raged on.
e.g.3.
In the opening ceremonies of the Olympics, there will often be a group of dancers forming various symbols on the field.
So basically: numerous small objects begin in arbitrary positions and move to new positions such that together they form a shape in 2D or 3D.
What is such technique called?
In graphics, this would normally be called a "particle system" (and Googling for that should yield quite a few results that are at least reasonably relevant).
If you assume that the dancers/soldiers don't interfere when moving, then you can view the problem as a maximum matching problem.
For each person, you know their starting location, and you know the shape of the final pattern. You probably want to minimize the time it takes for the whole group to form the final shape, i.e. the slowest mover's travel time.
You can determine whether it's possible to go from the start state to the final state in time T by forming a bipartite graph: for each person and final position, if the person can reach the position in <= T, add an edge from the person to that position. Then run a maximum matching algorithm to see if everyone can be assigned some position in the final shape within the time constraint.
Do a binary search on the time T and you'll have the minimum amount of time to go from one state to another.
http://en.wikipedia.org/wiki/Matching_(graph_theory)#Maximum_matchings_in_bipartite_graphs
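A self-contained Go sketch of that scheme, binary-searching T and, at each T, running a simple augmenting-path maximum matching (Kuhn's algorithm); unit speed, straight-line movement, and all names are my own assumptions:

    package main

    import (
        "fmt"
        "math"
    )

    type pt struct{ x, y float64 }

    func dist(a, b pt) float64 { return math.Hypot(a.x-b.x, a.y-b.y) }

    // maxMatching returns the size of a maximum matching where person i may be
    // assigned slot j iff ok[i][j].
    func maxMatching(ok [][]bool) int {
        n, m := len(ok), len(ok[0])
        matchSlot := make([]int, m) // slot j -> matched person, or -1
        for j := range matchSlot {
            matchSlot[j] = -1
        }
        var try func(i int, seen []bool) bool
        try = func(i int, seen []bool) bool {
            for j := 0; j < m; j++ {
                if ok[i][j] && !seen[j] {
                    seen[j] = true
                    if matchSlot[j] == -1 || try(matchSlot[j], seen) {
                        matchSlot[j] = i
                        return true
                    }
                }
            }
            return false
        }
        size := 0
        for i := 0; i < n; i++ {
            if try(i, make([]bool, m)) {
                size++
            }
        }
        return size
    }

    // feasible reports whether every person can reach some distinct slot within T.
    func feasible(people, slots []pt, T float64) bool {
        ok := make([][]bool, len(people))
        for i, p := range people {
            ok[i] = make([]bool, len(slots))
            for j, s := range slots {
                ok[i][j] = dist(p, s) <= T
            }
        }
        return maxMatching(ok) == len(people)
    }

    func main() {
        people := []pt{{0, 0}, {4, 0}, {0, 3}}
        slots := []pt{{1, 1}, {5, 1}, {1, 4}}
        lo, hi := 0.0, 100.0
        for k := 0; k < 60; k++ { // binary search on the deadline T
            mid := (lo + hi) / 2
            if feasible(people, slots, mid) {
                hi = mid
            } else {
                lo = mid
            }
        }
        fmt.Printf("minimum time to form the shape: %.4f\n", hi)
    }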
