Google URL Shortener 403 Rate Limit Exceeded - google-api

I'm using the Google URL Shortener API, and it was working fine until I started testing under load. I quickly started getting back 403 Rate Limit Exceeded errors from Google, even though I signed up to use the API and it comes with 1,000,000 hits a day. I can see the requests coming in on the Google reporting tool, but they are just sending back 403s for everything. The 403s started coming back at around 345/350 hits to the API and have been continuing for hours.
Thoughts?

The API limits requests to 1 request / per second / per user.
A user is defined as a unique IP address.
So if you were doing your load testing from a single IP, this would have caused your rate limit issue.
https://developers.google.com/analytics/devguides/reporting/mcf/v3/limits-quotas#general_api
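If that per-IP limit is what you are hitting, one option is to pace the calls on the client side. Below is a minimal, hypothetical Java sketch (shortenUrl() is a placeholder, not the real client call) that keeps roughly one second between requests:

import java.util.List;

public class ThrottledShortener {
    private static final long MIN_INTERVAL_MS = 1000L; // roughly 1 request/second/IP

    public static void main(String[] args) throws InterruptedException {
        List<String> urls = List.of("https://example.com/a", "https://example.com/b");
        long lastCall = 0L;
        for (String url : urls) {
            long wait = MIN_INTERVAL_MS - (System.currentTimeMillis() - lastCall);
            if (wait > 0) Thread.sleep(wait); // space calls out instead of bursting
            lastCall = System.currentTimeMillis();
            System.out.println(shortenUrl(url));
        }
    }

    // Hypothetical stand-in for the real URL shortener call.
    private static String shortenUrl(String longUrl) {
        return "https://short.example/abc";
    }
}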

I don't think "1 request / per second / per user." as written in doc is 100% correct in my case, or the google url shortener case. (FYI: I am using "Public API access", not "OAuth")
I have almost the same problem but, for me, it is more likely to be "I get this error for some URLs for some period of times." What does it mean? Please continue reading.
These are what I found:
I can use 10 threads to use google url shortener at the same time, but not always ...
when processing, even one url is fail on one thread, the other threads still can get the other urls.
when a url is fail, and later I tried the same url again (even there are no other processes running, it still does not work for some PERIOD OF TIME. Even, I tried to add more string like "&test=1", it does not help. But if I changed to another url, it works.
So, I guess that google's server may have cache of each url. If a url is fail, it must wait for a while to let the cache released.
So, I have to write some creepy code like this to solve my problem:
when there is a fail, that particular thread will sleep for 1 minute (yes 1 minute)
and keep trying for 10 times (so totally, it can be 10 minutes for a fail url)
However, this creepy code is fine for my case because I am using ExecutorService with fixed-thread-pool size of 10. So, if there is a fail, the others still can get the shorten urls. It solves the problem...at least for me.
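For illustration, here is a rough Java sketch of that approach under the same assumptions (fixed thread pool of 10, sleep 1 minute on failure, up to 10 attempts per URL). shorten() is a hypothetical placeholder, not the real shortener call:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RetryingShortener {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(10); // fixed pool of 10, as described above
        List<String> urls = List.of("https://example.com/1", "https://example.com/2");
        for (String url : urls) {
            pool.submit(() -> shortenWithRetry(url));
        }
        pool.shutdown();
    }

    static void shortenWithRetry(String url) {
        for (int attempt = 1; attempt <= 10; attempt++) { // up to 10 tries per URL
            try {
                System.out.println(shorten(url));
                return;                                    // success, stop retrying
            } catch (Exception e) {
                try {
                    TimeUnit.MINUTES.sleep(1);             // this thread sleeps 1 minute; the others keep working
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    // Hypothetical stand-in for the actual URL shortener request.
    static String shorten(String longUrl) throws Exception {
        return "https://short.example/xyz";
    }
}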

You need to go to the Google URL Shortener extension and, in Options, select 'Grant Access'.

Right-click on the Extension icon, go to Options and click Grant Access on the bottom.

Related

Google Places API OVER_QUERY_LIMIT

I'm working on a script which sends a few hundred API calls to the Google Places API using nearbySearch. After a few requests, I quickly get an OVER_QUERY_LIMIT error.
In Google Cloud Console, I can see the requests made in the last days or hours but:
I'm a far cry from the 6,000 requests/minute limit
I don't see any Quota Exceeded error in the graph
(https://developers.google.com/maps/documentation/javascript/places#place_search_requests)
If I scroll down a bit, I can see that there's apparently a "Premium Plan", but no requests have been made up to now.
Now I can see that the Premium Plan is not available anymore for sign-up or new customers.
So I guess it's just a graph to support people who previously signed up for this plan, and that it is not relevant in my case.
My payment settings have been set up correctly so I don't understand what's happening here.
Thank you so much.

Google Display & Video 360 (DV360) Daily limit exceeded error

We have an app which uses different APIs like DV360 and Google Ads. The issue is that we are facing the Daily limit exceeded exception very frequently. We have checked the project dashboard to cross-verify whether we are actually hitting the quota. However, we found that we are not hitting the allowed quota for any of these APIs, but we are still getting this error.
Can someone point out what the reason for this error could be?
Thanks in advance.
The first thing you need to understand is that the dashboard is not perfect; it is not accurate by any sense of the word. It is an estimate, and it is not real-time.
What you should check is the error message you are getting; that is the truth. You would not get that error message unless you were hitting the quota limit.
You should then check your code and ensure that you are not hitting the Usage Limits defined on that page.
If you are hitting the limit and you can't reduce the number of requests you are making, then you will need to request a quota extension.
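As a hedged illustration of "check the error message": with the Google API Java client, something like the sketch below prints the reason strings (e.g. dailyLimitExceeded, rateLimitExceeded) that the API actually returned, which tell you which quota was hit. executeDv360Request() is a hypothetical placeholder for whatever DV360 or Google Ads request your app makes:

import com.google.api.client.googleapis.json.GoogleJsonError;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;

public class QuotaErrorCheck {
    public static void main(String[] args) {
        try {
            executeDv360Request();
        } catch (GoogleJsonResponseException e) {
            System.err.println("HTTP status: " + e.getStatusCode());
            GoogleJsonError details = e.getDetails();
            if (details != null && details.getErrors() != null) {
                // Reasons like "dailyLimitExceeded" or "rateLimitExceeded" identify
                // the quota you actually hit, regardless of what the dashboard shows.
                details.getErrors().forEach(err ->
                        System.err.println("reason=" + err.getReason() + " message=" + err.getMessage()));
            }
        }
    }

    // Hypothetical stand-in for the actual API call.
    static void executeDv360Request() throws GoogleJsonResponseException {
        // build the DisplayVideo / Google Ads service and call execute() here
    }
}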

Google Analytics API Error (500) Backend Error

We have a sales tracker app. In this app, we collect analytics data from 5 different Analytics accounts (websites) and create reports. It was working until this morning. Now it shows errors like 500 Backend Error:
PHP Fatal error: Uncaught Google_Service_Exception: {"error":{"errors":[{"domain":"global","reason":"backendError","message":"Backend Error"}],"code":500,"message":"Backend Error"}}
500 errors are catch-all errors that normally mean the issue is on the server's end. If you check the documentation you will see that comment; Google says that they don't want you to retry that error. However, if you scroll down a bit more in the documentation you will find a further section.
That section, however, lists nothing with both "code":500 and "message":"Backend Error".
Backoff
There are a number of error messages where backoff would work, and the documentation includes a Python example.
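That example is in Python; a rough Java equivalent of the same exponential backoff idea, as a sketch only (runReport() is a hypothetical placeholder for your actual Analytics query), could look like this:

import java.util.Random;

public class ExponentialBackoff {
    public static void main(String[] args) throws Exception {
        Random random = new Random();
        for (int n = 0; n < 5; n++) {                              // up to 5 attempts
            try {
                runReport();
                return;                                            // success, stop retrying
            } catch (Exception e) {
                long waitMs = (1L << n) * 1000 + random.nextInt(1000); // (2^n) seconds plus jitter
                System.err.println("Retryable error, sleeping " + waitMs + " ms: " + e.getMessage());
                Thread.sleep(waitMs);
            }
        }
        System.err.println("Giving up after 5 attempts");
    }

    // Hypothetical stand-in for the real Analytics query; it would throw on
    // errors such as 500 backendError or the rate-limit reasons listed below.
    static void runReport() throws Exception {
    }
}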
Backoff helps here because the Google Analytics API is slightly different from the other Google APIs: the way it returns errors is not the same, and in most cases it's better. The reason is that a backend error can be caused by flood protection. Not often, but it can happen, mostly around the top of the hour. You should never run a large script on the hour, because then you are competing with everyone who has cron jobs set up to extract data every hour.
I normally only use backoff for 'userRateLimitExceeded', 'quotaExceeded' and 'internalServerError' errors, not for 'backendError', but Google states it in their documentation so it may be worth a shot.
In the meantime, I am going to send an email off to the team to get some clarification on the documentation.
500,"message":"Backend Error"
As for the message above i have seen this a few times and its often related to an issue on Googles end. Give back off a try while i wait to hear back from the team.

Google Places API and Measurement of Quota Requests

I wonder if someone else has experienced the same issue, and might have an answer to it.
I am using the Google Places API. There I do two kinds of requests:
https://maps.googleapis.com/maps/api/place/textsearch/
and
https://maps.googleapis.com/maps/api/place/details/
After I have done about 20,000 of these requests, my quota of 150,000 has been eaten up and I get an error message.
The strange thing is, when I look at the Google API Console I can see the following:
In the APIs & Services section I can see a request count (which reflects the real requests I have done),
and in the IAM & admin section I see a much higher value.
This looks artificially high, and it is limiting the service way too early.
Does anyone else have the same issue?
I figured out why there is this difference between the requests in the API view and the requests in the Quota view.
When using the Text Search Places API, each text search request is multiplied by a factor of 10 towards your free quota.
It is mentioned on this page:
TextSearchRequests
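As a hypothetical illustration of that multiplier: if 13,000 of the 20,000 requests were text searches and 7,000 were details requests, the quota view would count 13,000 × 10 + 7,000 = 137,000 requests, close to the 150,000 limit even though only 20,000 actual calls were made.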

Garb request to Google Analytics fails from home ISP, but works elsewhere

I'm trying to use the garb gem to access data from the Google Analytics API and find that HTTP requests using garb work just fine from a Linode account, but are refused from home (Comcast). Is Google rejecting some kinds of HTTP requests from certain ISPs, or am I just doing something wrong? A simple example is below:
require 'garb'
Garb::Session.login('XXXXXX@gmail.com', 'XXXXXX')
@profile = Garb::Profile.all.first
@report = Garb::Report.new(@profile)
@report.metrics :visits
puts @report.results
This gives => [#<OpenStruct visits="21">] on my Linode, but the exact same thing run from my home ISP gives:
Garb::DataRequest::ClientError: "<errors xmlns=.........
Which is raised here in garb:
def send_request
  response = if @session.single_user?
    single_user_request
  elsif @session.oauth_user?
    oauth_user_request
  end
  raise ClientError, response.body.inspect unless response.kind_of?(Net::HTTPSuccess)
  response
end
The initial session login works just fine from both IPs. The error is only thrown when results are requested. Is there anything I can do to fix this? I haven't (yet) verified that I get exactly the same behavior going through clientlogin/data requests by hand. I'm pretty convinced it is not a gem issue, but an IP-related one--perhaps something to do with Google web services quota policies--but I'm willing to entertain all possible solutions.
Thanks,
Orion
You've probably made too many calls to Google in a short space of time. I haven't seen it happen with garb, but I've seen it happen when using an API to scrape search results pages. Google notices and flags your IP. Try browsing to google.com and running a normal Google search from the IP that's blocked; you'll probably be required to enter a captcha. They probably block API calls from that IP at this stage; you'll get cleared eventually, after a few days I think.
Jeremy's probably right.
The Google Analytics API has multiple quotas you need to worry about. See their list here. I've hit the 10 queries per second per IP address quota and/or the 10 concurrent requests per profile quota before. I also saw 4 concurrent requests per IP address mentioned somewhere.
You should post the full error message Garb gives you next time, since those have actually helped me figure out what caused it in the past.
Also, these quotas are for projects sending registered API keys along with their requests. If you're not, the quotas are much lower. I hit the quota for an unregistered project before. Registering your project is fairly easy, and you just add the following line
Garb::Session.api_key = 'API_KEY'
to your code (I'm using Sija's fork) before the Garb::Session.login line.
Another thing: once you register your project, go to the Quotas page in the API console, click "Set per-user limits", and up that from the default 1.0 to the max 10.0 requests/second/user. If you click "Request more", they give some tips for optimizing your calls/timing so as not to hit the limit.
