What is the proper way to use the radius parameter in the Google Places API?

I am using the Google Places API to retrieve all the POIs (Places of Interest) around the current location. It works, but I have noticed that whatever value I pass for the radius, I always get the same number of results (~20). As a result, if I give a radius that is too big, I don't necessarily get the nearest POIs. If I make the radius small enough (from experimentation, roughly 100 meters), I do retrieve those nearest places again, but then I get no POIs beyond 100 meters, which is not quite what I want.
My question is: is there any way to get all the POIs (with no limitations) within a certain radius?
Thank you!

The Google Places API always returns 20 results by design, selecting the 20 results that best fit the criteria you define in your request. The Developer's Guide / Docs don't explicitly cite that number anywhere that I have seen. I learned about the limit from the Autocomplete Demo & Places API Demo & Discussion Video, given by Paul Saxman, a Developer Advocate at Google, and Marcelo Camelo, Google's Technical Lead for the Places API.
The entire video is worth watching, but more specific to your question: if you jump to about 11:50, Marcelo Camelo contrasts the Autocomplete tool with the general Places API, and that's the portion of the video where he mentions the 20-result limit. He cites 20 as the standard result count several times.
There are many other good Places API and Google Maps videos linked from that page on YouTube as well.

As mentioned on the Google Places Issue Tracker here: http://code.google.com/p/gmaps-api-issues/issues/detail?id=3425
We are restricted by our data provider licenses to enable apps to display no more than 20 places results at a time. Consequently we are not able to increase this limit at this time.
It does sound, however, like you are trying to return the results that are closest to a specified location. This is now possible by using the rankby=distance parameter instead of radius in your request, e.g.:
https://maps.googleapis.com/maps/api/place/search/json?location=-33.8670522,151.1957362&rankby=distance&types=food&name=harbour&sensor=false&key=YOUR_API_KEY
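For illustration, a minimal Python sketch of that same request (assuming the requests library; YOUR_API_KEY is a placeholder, and note that rankby=distance cannot be combined with a radius parameter):

    import requests

    # Nearby search ranked by distance instead of the default prominence.
    params = {
        "location": "-33.8670522,151.1957362",
        "rankby": "distance",      # replaces the radius parameter
        "types": "food",
        "name": "harbour",
        "sensor": "false",
        "key": "YOUR_API_KEY",     # placeholder: substitute a real key
    }
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/search/json",
        params=params,
    )
    for place in resp.json().get("results", []):  # still capped at ~20
        print(place["name"])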

Try google.maps.places.RankBy.DISTANCE, as the default is google.maps.places.RankBy.PROMINENCE.
An easy example of this is shown here (Chrome only).

Related

How to fetch the count of businesses or premises in an area using Google Places API?

I need to be able to guesstimate an area's population density.
For example, if I selected Times Square, I would need to get the rough population density within a 1 km radius.
I know the Places API does not have a specific function for this, but I think something like this could work:
Fetch the count of all the businesses or premises in an area and compare it to known reference areas. For example, if central Mumbai has a businesses/premises count of 1,000 and a rural town has a count of 10, then it would be fair to say low density is probably < 100, medium density is probably 100 - 700, and high density is over 700. Or something along those lines (a rough sketch of this bucketing follows the question).
So is there a way to fetch the count of businesses or premises in an area using Google Places API?
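As an aside, a minimal sketch of that bucketing heuristic, assuming the premises count for an area could be obtained some other way (the thresholds are the question's own rough guesses):

    def density_bucket(premises_count: int) -> str:
        """Classify an area by premises count, using the rough
        thresholds proposed above."""
        if premises_count < 100:
            return "low"
        if premises_count <= 700:
            return "medium"
        return "high"

    print(density_bucket(1000))  # "high", e.g. central Mumbai
    print(density_bucket(10))    # "low", e.g. a rural town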
This functionality is not currently supported by the Google Maps Platform -- Places API.
If you are interested in having this feature, you can file it as a feature request in Google's Public Issue Tracker:
https://developers.google.com/maps/support/#issue_tracker
Issue Tracker is a tool used internally at Google to track bugs and feature requests during product development.

Azure Face Identify more than 10 faces in a Person Group

So, a straightforward question, my first on SO. I'm asking here because the Azure docs say to ask here.
I understand that the Face API can identify at most 10 faces in an API call. Is it possible to get this limit raised to, say, 50, perhaps through some specific pricing agreement?
Thanks and regards.
I think the best way to do this is to handle it on your side: divide your 50-face image into 5 pieces, where each piece has 10 faces, then make an API call for each piece (a sketch is below). Note that the paid tier is limited to 10 calls per second, so if you have more than 10 pieces you'll have to put them in a queue and have the view load the results using the async/await pattern, or just show a loading state for a few seconds until you have all the results to present and use.
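A minimal Python sketch of that batching idea, assuming a hypothetical detect_faces(piece) helper that wraps the actual Face API detect call, and that the image has already been split into pieces:

    import time

    MAX_CALLS_PER_SECOND = 10  # paid-tier rate limit mentioned above

    def detect_faces(piece):
        """Hypothetical wrapper around the Face API detect call;
        returns the faces found in one image piece (at most 10)."""
        raise NotImplementedError

    def detect_all(pieces):
        results = []
        for i, piece in enumerate(pieces):
            if i and i % MAX_CALLS_PER_SECOND == 0:
                time.sleep(1)  # stay under the calls-per-second limit
            results.extend(detect_faces(piece))
        return results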
We have been using Cognitive Services for a while and haven't found a way to increase some of the hard limits, even after some discussions with representatives from MS (if you can find the right person they are very helpful, but it's hard to find the right person :). Similar to the 10-person limit, you also can't change the 24-hour face-keeping duration. At least we were not able to.
Of course, this only applies to the scenario of Create PersonList, Detect Image and Identify. If you just do identification against a face list, you don't have this limit: you can save those faces in a face list and then query it, and the query will search the whole list whether it holds 10 or 100 faces.

YouTube Data API v3 drop in result count

I have used the YouTube API v3 to retrieve the number of video search results as a function of location. The purpose of this data is to study the distribution of YouTube's public: urban vs. rural, center of the cities vs. the metropolitan area, etc.
As part of this study, I checked the stability of the public (whether public behavior changes over time) by retrieving the information regularly over the last 3 months.
Around 3 weeks ago, I observed a sudden drop in the total result counts. More specifically, I retrieve pageInfo.totalResults from search.list (a sketch of the retrieval is below). Before the change, totalResults yielded up to tens of thousands in the cities and provided useful information about the rural areas. However, now the highest count reaches only a few dozen in the same areas.
I have been using the exact same script to study these areas, retrieving the same information with no change on my side. All my searches through the API documentation turned up nothing: there is no record of any change to the API or to the way the results are stored/retrieved.
Has there been any change in how pageInfo.totalResults is stored/retrieved? Or is totalResults an accumulative variable that resets its counts at a certain time of year?
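For reference, a minimal sketch of how such a count is retrieved (assuming the requests library; the search term, coordinates, and key are placeholders):

    import requests

    params = {
        "part": "snippet",
        "type": "video",
        "q": "music",                    # placeholder search term
        "location": "40.7580,-73.9855",  # placeholder coordinates
        "locationRadius": "10km",
        "key": "YOUR_API_KEY",           # placeholder key
    }
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search", params=params
    )
    # The figure under discussion; the docs describe it as an approximation.
    print(resp.json()["pageInfo"]["totalResults"])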
I will be happy to forward this issue to the correct place, but it seems the issue tracker of YouTube API is inactive currently.
Thanks and best regards,
Diego

Total POIs in a country or for every category

Not really a technical question, but they don't have any other means of contact apart from https://groups.google.com/group/google-places-api and SO.
I'd like to get the total count of all POIs in a country, or the total number of POIs in a given category in a country. Is this possible through the API (since it always returns 20 results at a time)? If not, is there a Google Places support email for this kind of question?
As mentioned on Places API Google Group:
Google does not disclose such information. However, if you are familiar with a few places and locations in Thailand, you or your client could simply perform a few searches on maps.google.com and compare the results with your own knowledge to get a general understanding of how precise or complete the data is.

An easy way with Twitter API to get the list of Followings for a very large list of users?

I have about 200,000 Twitter followers across a few Twitter accounts. I am trying to find the Twitter accounts that a large proportion of my followers are following.
Having looked over the Search API, I think this is going to be very slow, unless I am missing something.
It takes 40 calls to GET followers/ids (at up to 5,000 IDs per call) to get the list of 200,000 accounts. Then all I can think of is making 200,000 calls to GET friends/ids. But at the current rate limit of 150 calls/hour, that would take about 55 days (200,000 / 150 ≈ 1,333 hours). Even if I could get Twitter to raise my limit slightly, this is still going to be slow going. Any ideas?
The short answer to your question is: no, there is indeed no quick way to do this. Furthermore, API v1.0 is being deprecated sometime in March, with v1.1 becoming the law of the land (more on this in a moment).
As I understand it, you want to compile a list of followed accounts for each of the initial 200,000 follower accounts. You then want to count each of these 200,000 original accounts as a "voter", treat the total set of accounts followed by any of them as "candidates", and ultimately rank the candidates by "votes" from the 200,000.
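That tally itself is cheap once the data is in hand; a minimal Python sketch, assuming the following lists have already been fetched:

    from collections import Counter

    def rank_candidates(voters_followings):
        """voters_followings: one set per follower ("voter"), holding
        the account IDs that voter follows."""
        votes = Counter()
        for followed in voters_followings:
            votes.update(followed)
        return votes.most_common()  # candidates ranked by vote count

    # Tiny example: account 42 is followed by all three voters.
    print(rank_candidates([{42, 7}, {42}, {42, 7, 99}]))

The expensive part is fetching those following lists, which brings us to the rate limits below.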
A few things:
1.) I believe you're actually referencing the REST API, not the Search API.
2.) Based on what you've said about getting 150 requests per hour, I can infer that you're making unauthenticated requests to the API endpoints in question. That limits you to only 150 calls per hour. As a short-term fix (i.e., in the next few weeks, prior to v1.0 being retired), you could make authenticated requests instead, which will boost your hourly rate limit to 350 (source: Twitter API Documentation). That alone would more than double your calls per hour.
3.) If this is something you expect to need to do on an ongoing basis, things get much worse. Once API v1.0 is no longer available, you'll be subject to the v1.1 API limits, which a.) require authentication, no matter what, and b.) are applied per API method/endpoint. For GET friends/ids and GET followers/ids in particular, you will only be able to make 15 calls per 15 minutes, or 60 per hour. That means the sort of analysis you want to do will basically become infeasible (unless you were to skirt the Twitter API terms of service by using multiple apps/IP addresses, etc.). You can read all about this here. Suffice to say, researchers and developers who rely on these API endpoints for network analysis are less than happy about these changes, but Twitter doesn't appear to be moderating its position.
Given all of the above, my best advice would be to use API version 1.0 while you still can, and start making authenticated requests.
Another thought -- not sure what your use case is -- but you might consider pulling in, say, the 1,000 most recent tweets from each of the 200,000 followers and then leveraging the metadata each tweet carries about mentions. Mentions of other users are potentially more informative than knowing that someone simply follows someone else, and you could still tally the most-mentioned accounts (a sketch of this tally is below). The benefit here is that in moving from API 1.0 to 1.1, the endpoint for pulling in user timelines will actually have its rate limit raised from 350 per hour to 720 (source: Twitter API 1.1 documentation).
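A minimal sketch of that mention tally, assuming each tweet is the API's JSON dict with an entities.user_mentions list:

    from collections import Counter

    def tally_mentions(tweets):
        """Count how often each account is @-mentioned across tweets."""
        mentions = Counter()
        for tweet in tweets:
            for m in tweet.get("entities", {}).get("user_mentions", []):
                mentions[m["screen_name"]] += 1
        return mentions.most_common()

    tweets = [  # tiny stand-in for the pulled timelines
        {"entities": {"user_mentions": [{"screen_name": "alice"}]}},
        {"entities": {"user_mentions": [{"screen_name": "alice"},
                                        {"screen_name": "bob"}]}},
    ]
    print(tally_mentions(tweets))  # [('alice', 2), ('bob', 1)]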
Hope this helps, and good luck!
Ben
