Google Display & Video 360 (DV360) Daily limit exceeded error - google-api

We have an app that uses several Google APIs, including DV360 and Google Ads. The problem is that we are hitting a "Daily limit exceeded" exception very frequently. We checked the project dashboard to verify whether we are actually exhausting the quota, but found that we are not reaching the allowed quota for any of these APIs; still, we keep getting this error.
Can someone point out what the reason for the error might be?
Thanks in advance.

The first thing you need to understand is that the dashboard is not perfect; it is not accurate in any sense of the word. It is an estimate, and it is not real-time.
What you should check is the error message you are getting; that is the truth. You would not get that error message unless you were hitting the quota limit.
So check your code and ensure that you are not hitting the Usage Limits defined on that page.
If you are hitting the limit and you can't reduce the number of requests you are making, then you will need to request a quota extension.
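If you want to confirm from the client side which limit you are actually tripping, one option is to count your own requests and log the exact error reason Google returns, rather than trusting the dashboard. A minimal Python sketch using the google-api-python-client is below; the API name and the example request in the comment are placeholders for whatever calls your app makes.

    import collections
    import datetime
    import json

    from googleapiclient.errors import HttpError

    # Client-side tally so you can compare your real request volume against
    # the delayed, estimated numbers shown in the Cloud console dashboard.
    _counts = collections.Counter()

    def tracked_execute(api_name, request):
        """Execute a googleapiclient request, counting calls per API per day
        and printing the exact quota reason Google returns on failure."""
        _counts[(api_name, datetime.date.today())] += 1
        try:
            return request.execute()
        except HttpError as err:
            try:
                reason = json.loads(err.content)["error"]["errors"][0]["reason"]
            except (ValueError, KeyError, IndexError):
                reason = "unknown"
            print(f"{api_name}: HTTP {err.resp.status}, reason={reason}, "
                  f"calls so far today={_counts[(api_name, datetime.date.today())]}")
            raise

    # Hypothetical usage, e.g. for a DV360 request:
    # tracked_execute("displayvideo", service.advertisers().list(partnerId="123"))

The reason string in the error body (dailyLimitExceeded, userRateLimitExceeded, rateLimitExceeded, ...) tells you which specific quota was hit, which is more reliable than the dashboard graphs.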

Related

googleapi: Error 500: Internal error encountered., backendError when calling Admin SDK

This is strange: we have started receiving this error far too many times per day (nearly 300-500 times):
googleapi: Error 500: Internal error encountered., backendError
while calling the Admin SDK Directory API. https://developers.google.com/admin-sdk/directory
Google chat support for the API has been removed, and we are unsure how to address the problem. Our rate limit and query volume are well within the limits. We query the Admin SDK on a cron job schedule.
Is there a way we can debug 500 errors for the Admin SDK?
Is there any information on which Google deployment region is better able to handle the load?
There is no error on the console; the error only shows up in the logs.
Contact Workspace Support or File a Bug
As DalmTo pointed out, 500 errors are usually out of your control, especially in this case since you are calling Google's servers.
The best option to debug these errors is probably contacting support directly. You can find the corresponding contact information for your Workspace account on this page:
https://support.google.com/a/answer/1047213?hl=en
If you are able to reproduce this regularly and you think it's a bug, fill out the template here:
https://issuetracker.google.com/issues/new?component=191635&template=824102
Be sure to include your code, complete reproduction steps, some evidence/statistics of the failures, and any other information you can give.
This usually happens when the Google Cloud Platform service account used for authentication was created after the in-app products.
To solve this, make a trivial edit to your in-app products in the Google Play Console. For example, add a temporary tag. Then try again to make the request and it should succeed.
By the way, the same is true for purchases.voidedpurchases, which reports an "insufficient permissions" error in this situation.
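For reference, a minimal Python check against purchases.voidedpurchases could look roughly like the sketch below; the service-account key file and package name are placeholders. It is a convenient call to re-run after making the trivial edit described above, to confirm the permission error has cleared.

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/androidpublisher"]
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES)       # placeholder key file
    publisher = build("androidpublisher", "v3", credentials=creds)

    # Before the trivial edit in the Play Console this call may fail with
    # "insufficient permissions"; afterwards it should return normally.
    voided = publisher.purchases().voidedpurchases().list(
        packageName="com.example.app").execute()     # placeholder package name
    print(voided.get("voidedPurchases", []))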

High latency with Google Sheets API calls

I'm seeing very high latencies (around 5 minutes) on Google Sheets append-values requests. This is starting to happen more and more. As I have been making the same requests for months (and at the same volume), I don't think it is because of my inputs.
Google support suggested creating a topic here. I'm not sure whether a Google engineer will comment or whether other end users are experiencing the same problem.
In the charts you can see a lot of errors; this is the error I get: "append_sheet: googleapi: Error 500: Internal error encountered., backendError". So nothing useful :s.
Greetings,
PJ
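(For reference, an append-values request of the kind described above looks roughly like the Python sketch below; the key file, spreadsheet ID, and range are placeholders, and the timing print is only there to make the latency visible.)

    import time

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",                      # placeholder key file
        scopes=["https://www.googleapis.com/auth/spreadsheets"])
    sheets = build("sheets", "v4", credentials=creds)

    start = time.monotonic()
    resp = sheets.spreadsheets().values().append(
        spreadsheetId="YOUR_SPREADSHEET_ID",         # placeholder
        range="Sheet1!A1",
        valueInputOption="USER_ENTERED",
        body={"values": [["ping", time.time()]]},
    ).execute()
    print(f"append took {time.monotonic() - start:.1f}s; "
          f"updated range {resp['updates']['updatedRange']}")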

Google Analytics API Error (500) Backend Error

We have a sales tracker app. In this app we collect analytics data from 5 different Analytics accounts (websites) and create reports. It was working until this morning. Now it fails with a 500 Backend Error:
PHP Fatal error: Uncaught Google_Service_Exception: {"error":{"errors":[{"domain":"global","reason":"backendError","message":"Backend Error"}],"code":500,"message":"Backend Error"}}
500 errors are catch-all errors that normally mean something on the server's end is the issue. If you check the documentation you will see the comment above. Google says that they don't want you to retry that error. However, if you scroll down a bit further in the documentation you will find this section.
We find the following as well.
However, there is nothing that covers both "code":500,"message":"Backend Error".
backoff
It lists a number of error messages where backoff would work, together with a Python example.
This is because the Google Analytics API is slightly different from the other Google APIs: the way it returns errors is not the same, and in most cases it is better. The reason is that a backend error can be caused by flood protection. It does not happen often, but it can, mostly around the top of the hour. You should never run a large script on the hour, because then you are competing with everyone who has cron jobs set up to extract data every hour.
I normally only use backoff for 'userRateLimitExceeded', 'quotaExceeded' and 'internalServerError' errors, not for 'backendError', but Google states it in their documentation, so it may be worth a shot.
In the meantime I am going to send an email off to the team to get some clarification on the documentation.
500,"message":"Backend Error"
As for the message above, I have seen this a few times and it is often related to an issue on Google's end. Give backoff a try while I wait to hear back from the team.
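If you want to try it, a minimal exponential-backoff wrapper in Python could look like the sketch below (extended to also retry 'backendError', as discussed above). The request object stands in for whatever report call you are making, and the retryable reason list is an assumption based on this answer, not an official list.

    import json
    import random
    import time

    from googleapiclient.errors import HttpError

    RETRYABLE = {"userRateLimitExceeded", "quotaExceeded",
                 "internalServerError", "backendError"}

    def execute_with_backoff(request, max_retries=5):
        """Run a googleapiclient request, sleeping 2**n seconds plus random
        jitter between attempts when Google returns a retryable reason."""
        for n in range(max_retries):
            try:
                return request.execute()
            except HttpError as err:
                try:
                    reason = json.loads(err.content)["error"]["errors"][0]["reason"]
                except (ValueError, KeyError, IndexError):
                    raise
                if reason not in RETRYABLE:
                    raise
                time.sleep((2 ** n) + random.random())
        return request.execute()    # final attempt; let any error surface

Scheduling the job a few minutes off the hour (rather than exactly on it) also helps avoid the flood-protection window mentioned above.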

Google Places API and Measurement of Quota Requests

I wonder if someone else has experienced the same issue and might have an answer to it.
I am using the Google Places API. There I make two kinds of requests:
https://maps.googleapis.com/maps/api/place/textsearch/
and
https://maps.googleapis.com/maps/api/place/details/
After I have made about 20,000 of these requests, my quota of 150,000 has been eaten up and I get an error message.
The strange thing is what I see in the Google API Console:
in the APIs & Services section the request count reflects the real requests I have made,
while in the IAM & admin section I see a much higher value.
This looks artificially high and is limiting the service far too early.
Does anyone else have the same issue?
I figured out why there is this difference between the requests shown in the API view and the requests shown in the Quota view.
When using the Text Search Places API, each text search request is multiplied by a factor of 10 towards your free contingent.
It is mentioned on this page:
TextSearchRequests
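That multiplier makes the numbers in the question add up. A quick back-of-the-envelope check is below; the split between text search and details calls is hypothetical, and the x10 weight is assumed to apply only to Text Search requests.

    # Rough check of how ~20,000 Places calls can consume a 150,000-request quota
    # when Text Search requests are weighted x10 (Place Details assumed x1).
    TEXT_SEARCH_WEIGHT = 10
    DETAILS_WEIGHT = 1

    text_search_calls = 14_000   # hypothetical split of the ~20,000 real requests
    details_calls = 6_000

    quota_used = (text_search_calls * TEXT_SEARCH_WEIGHT
                  + details_calls * DETAILS_WEIGHT)
    print(quota_used)            # 146000 -- close to the 150,000 daily quota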

Google URL Shortener 403 Rate Limit Exceeded

Using the Google URL shortener API, everything was working fine until I started testing at load. I quickly started getting 403 Rate Limit Exceeded errors back from Google, even though I signed up to use the API and it comes with 1,000,000 hits a day. I can see the requests coming in on the Google reporting tool, and they are just sending back 403s for everything. The 403s started coming back at around 345/350 hits to the API and have been continuing for hours.
Thoughts?
The API limits requests to 1 request / per second / per user.
A user is defined as a unique IP address.
So if you were doing your load testing from a single IP, this would have caused your rate limit issue.
https://developers.google.com/analytics/devguides/reporting/mcf/v3/limits-quotas#general_api
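If that is the case, the simplest workaround is to throttle the client to roughly one request per second. A minimal Python sketch is below; shorten_url in the usage comment is a placeholder for whatever function actually calls the API.

    import time

    MIN_INTERVAL = 1.0   # documented limit: 1 request / second / user (per IP)
    _last_call = 0.0

    def throttled(call, *args, **kwargs):
        """Space successive calls at least MIN_INTERVAL seconds apart."""
        global _last_call
        wait = MIN_INTERVAL - (time.monotonic() - _last_call)
        if wait > 0:
            time.sleep(wait)
        _last_call = time.monotonic()
        return call(*args, **kwargs)

    # Hypothetical usage:
    # short = throttled(shorten_url, "https://example.com/some/very/long/path")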
I don't think "1 request / per second / per user", as written in the docs, is 100% correct in my case, i.e. the Google URL shortener case. (FYI: I am using "Public API access", not OAuth.)
I have almost the same problem, but for me it is more like "I get this error for some URLs for some periods of time." What does that mean? Please continue reading.
This is what I found:
I can use 10 threads to call the URL shortener at the same time, but not always.
While processing, even if one URL fails on one thread, the other threads can still shorten the other URLs.
When a URL fails and I later try the same URL again (even when no other processes are running), it still does not work for some PERIOD OF TIME. Even adding an extra query string like "&test=1" does not help. But if I change to another URL, it works.
So I guess Google's servers may keep a cache entry for each URL. If a URL fails, you have to wait a while for that cache entry to be released.
So I had to write some hacky code like this to solve my problem:
when there is a failure, that particular thread sleeps for 1 minute (yes, 1 minute)
and keeps trying up to 10 times (so in total it can take 10 minutes for a failed URL).
However, this hacky code is fine for my case because I am using an ExecutorService with a fixed thread pool size of 10. So if there is a failure, the other threads can still get their shortened URLs. It solves the problem... at least for me.
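A rough Python equivalent of that retry-and-sleep approach (the original description is based on Java's ExecutorService) could look like the sketch below; shorten_url is a placeholder for the real API call.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def shorten_url(url):
        # Placeholder: replace with your real call to the URL shortener API.
        raise NotImplementedError

    def shorten_with_retry(url, attempts=10, pause=60):
        """Try to shorten one URL, sleeping a minute between failed attempts,
        so the other worker threads keep making progress in the meantime."""
        for _ in range(attempts):
            try:
                return shorten_url(url)
            except Exception:
                time.sleep(pause)    # wait out whatever Google seems to cache for this URL
        return None                  # give up after roughly 10 minutes for this one URL

    # A fixed pool of 10 workers, as described above:
    # with ThreadPoolExecutor(max_workers=10) as pool:
    #     short_urls = list(pool.map(shorten_with_retry, long_urls))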
You need to go to the Google URL Shortener extension and grant it access: right-click the extension icon, go to Options, and click Grant Access at the bottom.
