Google Places API and Measurement of Quota Requests - google-api

I wonder if someone else has experienced the same issue, and might have an answer to it.
I am using the Google Places API. There I do two kinds of requests
https://maps.googleapis.com/maps/api/place/textsearch/
and
https://maps.googleapis.com/maps/api/place/details/
After I have made about 20,000 of these requests, my quota of 150,000 has been used up and I get an error message.
The strange thing is what I see when I look at the Google API Console:
in the APIs & Services section the request count reflects the real requests I have made,
but in the IAM & Admin section I see a much higher value.
This looks artificially high, and it is throttling the service far too early.
Does anyone else have the same issue?

I figured out why there is this difference between the requests in the API view and the requests in the Quota view.
When using the Text Search Places API, each text search request is multiplied by a factor of 10 towards your free contingent.
It is mentioned on this page:
TextSearchRequests
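As a back-of-the-envelope illustration of that multiplier (a Python sketch; the split between the two request types below is hypothetical, not taken from the question):

QUOTA_LIMIT = 150_000          # free contingent, in quota units
TEXT_SEARCH_MULTIPLIER = 10    # each text search request counts 10x
text_searches = 15_000         # hypothetical split of ~20,000 requests
detail_requests = 5_000        # hypothetical; assumed to count 1x each
consumed = text_searches * TEXT_SEARCH_MULTIPLIER + detail_requests
print(consumed > QUOTA_LIMIT)  # True: 155,000 units exceed the quota

So roughly 20,000 mixed requests can plausibly consume the whole 150,000 contingent.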

Related

Google Places API OVER_QUERY_LIMIT

I'm working on a script which sends a few hundred API calls to the Google Places API using nearbySearch (https://developers.google.com/maps/documentation/javascript/places#place_search_requests). After a few requests, I quickly get an OVER_QUERY_LIMIT error.
In the Google Cloud Console, I can see the requests made in the last days or hours, but:
I'm a far cry from the 6,000 requests/minute limit
I don't see any Quota Exceeded error in the graph
If I scroll down a bit, I can see that there's apparently a "Premium Plan", but no requests have been made on it up to now.
I can also see that the Premium Plan is no longer available for sign-up or new customers.
So I guess it's just a graph to support people who previously signed up for this plan, and that it is not relevant in my case.
My payment settings have been set up correctly so I don't understand what's happening here.
Thank you so much.

Is it possible to dynamically query Google APIs to see how much of the limit/quota you've used?

For a given Google API, is there any way to dynamically check usage against any of the current limits for that API?
For example, this page https://developers.google.com/classroom/limits?hl=en shows that I can query the Classroom API 4,000,000 times per client per day. At midday, without going to the API Console, how could I know that I've already hit 3 million queries?
I'm hoping that there's a billing or usage API that covers this, but can't see it.
Note: I'm not having any issue right now with a specific call; I'm just anticipating that my usage will scale up significantly in the next few months, so I am looking for a solution for monitoring rather than advice on not hitting the limits at all. My specific use case is Google Classroom, but reading more widely around this I can't see a general solution either.
Answer:
No, you can't retrieve this information dynamically.
Feature Request:
You can, however, let Google know that this is a feature that is important for the Google Workspace APIs to have, and request that they implement it.
The page to file a Feature Request for the Google Classroom API is here; as there is no specific component for Google Workspace APIs in general, I would suggest filing it there.
You can use Google's Cloud Monitoring API to achieve this. This is the documentation page for the API:
https://cloud.google.com/monitoring/api/v3
These are the documentation pages for the relevant metrics:
https://cloud.google.com/monitoring/api/metrics_gcp#serviceruntime/quota/allocation/usage
https://cloud.google.com/monitoring/api/metrics_gcp#serviceruntime/quota/exceeded
https://cloud.google.com/monitoring/api/metrics_gcp#serviceruntime/quota/limit
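As a rough sketch of what that looks like in practice (Python, using the google-cloud-monitoring client; the project ID and the classroom.googleapis.com filter are placeholder assumptions to adapt):

import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"  # placeholder: your GCP project ID

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
# Allocation quota usage is reported daily, so look back 24 hours.
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 86400}}
)
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": (
            'metric.type = "serviceruntime.googleapis.com/quota/allocation/usage" '
            'AND resource.labels.service = "classroom.googleapis.com"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    # quota_metric names the specific quota the series reports on
    print(series.metric.labels.get("quota_metric"),
          series.points[0].value.int64_value)  # newest point first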

High latency with Google Sheets API calls

I'm seeing very high latencies (5 min) on Google Sheets append-values requests. This is starting to happen more and more. As I have been making the same requests for months (at the same volume), I don't think it is because of my inputs.
When I checked with Google support, they suggested creating a topic here. I'm not sure if a Google engineer will comment, or whether other end users are experiencing the same problem.
In the charts you see a lot of errors; this is the error I get: "append_sheet: googleapi: Error 500: Internal error encountered., backendError". So nothing useful.
Greetings,
PJ

Google URL Shortener 403 Rate Limit Exceeded

The Google URL Shortener API was working fine until I started testing under load. I quickly started getting back 403 Rate Limit Exceeded errors from Google, even though I signed up to use the API and it comes with 1,000,000 hits a day. I can see the requests coming in on the Google reporting tool, and they are just sending back 403s for everything. The 403s started coming back at around 345/350 hits to the API and have been continuing for hours.
Thoughts?
The API limits requests to 1 request / per second / per user.
A user is defined as a unique IP address.
So if you were doing your load testing from a single IP, this would have caused your rate-limit issue.
https://developers.google.com/analytics/devguides/reporting/mcf/v3/limits-quotas#general_api
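If all the traffic really does come from one IP, a minimal client-side throttle keeps you under that limit. A Python sketch (shorten_url is a hypothetical stand-in for the actual API call):

import time

MIN_INTERVAL = 1.0  # seconds: the 1 request / second / user limit
last_call = 0.0

def throttled(call, *args):
    # Sleep just long enough that calls are at least MIN_INTERVAL apart.
    global last_call
    wait = MIN_INTERVAL - (time.monotonic() - last_call)
    if wait > 0:
        time.sleep(wait)
    last_call = time.monotonic()
    return call(*args)

# usage: short = throttled(shorten_url, "https://example.com/some/long/path")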
I don't think "1 request / per second / per user." as written in doc is 100% correct in my case, or the google url shortener case. (FYI: I am using "Public API access", not "OAuth")
I have almost the same problem but, for me, it is more likely to be "I get this error for some URLs for some period of times." What does it mean? Please continue reading.
These are what I found:
I can use 10 threads to use google url shortener at the same time, but not always ...
when processing, even one url is fail on one thread, the other threads still can get the other urls.
when a url is fail, and later I tried the same url again (even there are no other processes running, it still does not work for some PERIOD OF TIME. Even, I tried to add more string like "&test=1", it does not help. But if I changed to another url, it works.
So, I guess that google's server may have cache of each url. If a url is fail, it must wait for a while to let the cache released.
So, I have to write some creepy code like this to solve my problem:
when there is a fail, that particular thread will sleep for 1 minute (yes 1 minute)
and keep trying for 10 times (so totally, it can be 10 minutes for a fail url)
However, this creepy code is fine for my case because I am using ExecutorService with fixed-thread-pool size of 10. So, if there is a fail, the others still can get the shorten urls. It solves the problem...at least for me.
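A rough transcription of that strategy (in Python with a fixed pool of 10 workers, mirroring the ExecutorService setup described above; shorten_url is a hypothetical stand-in for the real call):

import time
from concurrent.futures import ThreadPoolExecutor

MAX_TRIES = 10
BACKOFF = 60  # seconds: a failing URL sleeps for 1 minute per attempt

def shorten_url(url):
    # Hypothetical stand-in for the actual API call; raises on a 403.
    raise NotImplementedError

def shorten_with_retry(url):
    for _ in range(MAX_TRIES):
        try:
            return shorten_url(url)
        except Exception:
            time.sleep(BACKOFF)  # only this worker sleeps; the rest carry on
    return None  # gave up after ~10 minutes on this URL

urls = ["https://example.com/a", "https://example.com/b"]
with ThreadPoolExecutor(max_workers=10) as pool:  # fixed pool of 10
    shortened = list(pool.map(shorten_with_retry, urls))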
You need to go to the Google URL Shortener extension and grant it access:
right-click on the extension icon, go to Options, and click Grant Access at the bottom.

Garb request to Google Analytics fails from home ISP, but works elsewhere

I'm trying to use the garb gem to access data from the Google Analytics API and find that HTTP requests using garb work just fine from a Linode account, but are refused from home (Comcast). Is Google rejecting some kinds of HTTP requests from certain ISPs, or am I just doing something wrong? A simple example is below:
require 'garb'
Garb::Session.login('XXXXXX@gmail.com', 'XXXXXX')
@profile = Garb::Profile.all.first
@report = Garb::Report.new(@profile)
@report.metrics :visits
puts @report.results
This gives => [#<OpenStruct visits="21">] on my Linode, but the exact same thing run from my home ISP gives:
Garb::DataRequest::ClientError: "<errors xmlns=.........
Which is raised here in garb:
def send_request
  response = if @session.single_user?
    single_user_request
  elsif @session.oauth_user?
    oauth_user_request
  end
  raise ClientError, response.body.inspect unless response.kind_of?(Net::HTTPSuccess)
  response
end
The initial session login works just fine from both IPs. The error is only thrown when results are requested. Is there anything I can do to fix this? I haven't (yet) verified that I get exactly the same behavior going through clientlogin/data requests by hand. I'm pretty convinced it is not a gem issue, but an IP-related one--perhaps something to do with Google web services quota policies--but I'm willing to entertain all possible solutions.
Thanks,
Orion
You've probably made too many calls to Google in a short space of time. I haven't seen it happen with garb, but I've seen it happen when using an API to scrape search results pages. Google notices and flags your IP. Try browsing to google.com and running a normal Google search from the IP that's blocked; you'll probably be required to enter a captcha. They probably block API calls from that IP at this stage. You'll get cleared eventually, after a few days I think.
Jeremy's probably right.
The Google Analytics API has multiple quotas you need to worry about; see their list here. I've hit the 10 queries per second per IP address quota and/or the 10 concurrent requests per profile quota before. I also saw a 4 concurrent requests per IP address limit somewhere.
You should post the full error message Garb gives you next time, since those have actually helped me figure out what caused it in the past.
Also, these quotas are for projects sending registered API keys along with their requests. If you're not sending one, the quotas are much lower. I hit the quota for an unregistered project before. Registering your project is fairly easy, and you just add the following line
Garb::Session.api_key = 'API_KEY'
to your code (I'm using Sija's fork) before the Garb::Session.login line.
Another thing: once you register your project, go to the Quotas page in the API console, click "Set per-user limits", and raise that from the default 1.0 to the max 10.0 requests/second/user. If you click "Request more" they give some tips for optimizing your calls/timing so as not to hit the limit.
