Google API key giving OVER_QUERY_LIMIT error

We have a web application that was working fine until yesterday. But since yesterday afternoon, all the keys in one of our projects in the Google API Console started giving an OVER_QUERY_LIMIT error.
We cross-checked, and the quotas for that project and API are still not exhausted. Can anybody help me understand what may have caused this?
Even after a day's use, the API keys are still giving the same error.
To give more information: we are using the Geocoding API and the Distance Matrix API in our application.

If you exceed the usage limits you will get an OVER_QUERY_LIMIT status code as a response. This means that the web service will stop providing normal responses and switch to returning only status code OVER_QUERY_LIMIT until more usage is allowed again. This can happen:
Within a few seconds, if the error was received because your application sent too many requests per second.
Within the next 24 hours, if the error was received because your application sent too many requests per day. The daily quotas are reset at midnight, Pacific Time.
This screencast provides a step-by-step explanation of proper request throttling and error handling, which is applicable to all web services.
Upon receiving a response with status code OVER_QUERY_LIMIT, your application should determine which usage limit has been exceeded. This can be done by pausing for 2 seconds and resending the same request. If the status code is still OVER_QUERY_LIMIT, your application is sending too many requests per day. Otherwise, your application is sending too many requests per second.
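As a rough illustration, here is a minimal Java sketch of that probe. The URL is a placeholder for a complete Geocoding request with your key, and a real application should parse the JSON "status" field rather than scanning the body:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QuotaProbe {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Sends one request and reports whether the body contains OVER_QUERY_LIMIT.
    // A real application should parse the JSON "status" field instead.
    static boolean isOverQueryLimit(String url) throws Exception {
        HttpResponse<String> response = CLIENT.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        return response.body().contains("OVER_QUERY_LIMIT");
    }

    // The probe described above: on OVER_QUERY_LIMIT, wait 2 seconds and resend
    // the same request to tell the per-second limit from the daily limit.
    static void classifyLimit(String url) throws Exception {
        if (!isOverQueryLimit(url)) {
            return; // normal response, no limit hit
        }
        Thread.sleep(2000);
        if (isOverQueryLimit(url)) {
            System.out.println("Daily limit exceeded: back off until quotas reset at midnight PT.");
        } else {
            System.out.println("Per-second limit exceeded: throttle the request rate.");
        }
    }
}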
Note: It is also possible to get the OVER_QUERY_LIMIT error:
From the Google Maps Elevation API when more than 512 points per request are provided.
From the Google Maps Distance Matrix API when more than 625 elements per request are provided.
Applications should ensure these limits are not reached before sending requests; a batching sketch follows this note.
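For example, a hypothetical batchDestinations helper could split a destination list so that origins × destinations never exceeds the 625-element cap. (Real requests also limit the number of origins and destinations individually, which this sketch ignores.)

import java.util.ArrayList;
import java.util.List;

public class ElementLimits {

    // Hard cap quoted above for a single Distance Matrix request.
    static final int MAX_ELEMENTS = 625;

    // Splits the destination list into batches so that
    // origins.size() * batch.size() never exceeds MAX_ELEMENTS;
    // each batch then becomes one Distance Matrix request.
    static List<List<String>> batchDestinations(List<String> origins,
                                                List<String> destinations) {
        int perRequest = Math.max(1, MAX_ELEMENTS / origins.size());
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < destinations.size(); i += perRequest) {
            batches.add(destinations.subList(i,
                    Math.min(i + perRequest, destinations.size())));
        }
        return batches;
    }
}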
See the Google Maps documentation on usage limits.

Related

What could cause AWS S3 MultiObjectDeleteException?

In our Spring Boot app, we use AmazonS3Client.deleteObjects() to delete multiple objects in a bucket. From time to time, the request throws a MultiObjectDeleteException and one or more objects are not deleted. It is infrequent, about 5 failures among thousands of requests, but it could still be a problem. What could lead to this exception?
I also have no idea how to debug it. Our app's log follows the data flow but doesn't show much useful information; the exception is suddenly thrown after the request. Please help.
Another thing: the exception comes back with a 200 status code. How is that possible?
com.amazonaws.services.s3.model.MultiObjectDeleteException: One or
more objects could not be deleted (Service: null; Status Code: 200;
Error Code: null; Request ID: xxxx; S3 Extended Request ID: yyyy;
Proxy: null)
TL;DR: Some rate of errors is normal and the application should handle them. 500 and 503 errors are retriable. The MultiObjectDeleteException should provide a clue, and getDeletedObjects() gives you the list of objects that were deleted. The rest you should mostly retry later.
The MultiObjectDeleteException documentation says the exception should include an explanation of the errors that caused the failure:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/MultiObjectDeleteException.html
Exception for partial or total failure of the multi-object delete API, including the errors that occurred. For successfully deleted objects, refer to getDeletedObjects().
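A minimal sketch of how the exception can be inspected, assuming the v1 AWS SDK for Java that the stack trace above comes from (deleteAndCollectFailures is a hypothetical helper):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;
import java.util.List;
import java.util.stream.Collectors;

public class S3BatchDelete {

    // Deletes the given keys and returns the keys that were NOT deleted,
    // so the caller can retry them later.
    static List<String> deleteAndCollectFailures(AmazonS3 s3, String bucket, List<String> keys) {
        try {
            s3.deleteObjects(new DeleteObjectsRequest(bucket)
                    .withKeys(keys.toArray(new String[0])));
            return List.of(); // everything was deleted
        } catch (MultiObjectDeleteException e) {
            // The per-object errors explain which keys failed and why;
            // getDeletedObjects() lists the keys that DID succeed.
            for (MultiObjectDeleteException.DeleteError error : e.getErrors()) {
                System.err.printf("key=%s code=%s message=%s%n",
                        error.getKey(), error.getCode(), error.getMessage());
            }
            return e.getErrors().stream()
                    .map(MultiObjectDeleteException.DeleteError::getKey)
                    .collect(Collectors.toList());
        }
    }
}

The failed keys it returns can then be retried later, in line with the SLA discussion below.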
According to https://aws.amazon.com/s3/sla/ AWS does not guarantee 100% availability. Again, according to that document:
• “Error Rate” means: (i) the total number of internal server errors returned by the Amazon S3 Service as error status “InternalError” or “ServiceUnavailable” divided by (ii) the total number of requests for the applicable request type during that 5-minute interval. We will calculate the Error Rate for each Amazon S3 Service account as a percentage for each 5-minute interval in the monthly billing cycle. The calculation of the number of internal server errors will not include errors that arise directly or indirectly as a result of any of the Amazon S3 SLA Exclusions.
We usually think about SLAs in terms of downtime, so it is easy to assume that AWS means the same. But that's not the case here: some number of errors is normal and should be expected. In many documents AWS suggests implementing a combination of slow-downs and retries, e.g. here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ErrorBestPractices.html
Some 500 and 503 errors are, again, part of normal operation: https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
The document specifically says:
Because Amazon S3 is a distributed service, a very small percentage of 5xx errors is expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can be retried. This means that it's a best practice to have a fault-tolerance mechanism or to implement retry logic for any applications making requests to Amazon S3. By doing so, S3 can recover from these errors.
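A hedged sketch of such retry logic in Java; the AWS SDK already performs internal retries by default, so treat this as an illustration of the idea rather than something to bolt on blindly:

import com.amazonaws.AmazonServiceException;
import java.util.function.Supplier;

public class S3Retry {

    // Retries a call on 500/503 with exponential backoff, per the
    // best practices quoted above. Non-retriable errors are rethrown.
    static <T> T withRetries(Supplier<T> call, int maxAttempts) throws InterruptedException {
        long backoffMillis = 200;
        for (int attempt = 1; ; attempt++) {
            try {
                return call.get();
            } catch (AmazonServiceException e) {
                boolean retriable = e.getStatusCode() == 500 || e.getStatusCode() == 503;
                if (!retriable || attempt >= maxAttempts) {
                    throw e;
                }
                Thread.sleep(backoffMillis);
                backoffMillis *= 2; // double the wait between attempts
            }
        }
    }
}

Usage would look like withRetries(() -> s3.deleteObjects(request), 5).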
Edit: A question was added later: "How is it possible that the API call returned status code 200 while some objects were not deleted?"
The answer to that is very simple: this is how the API is defined. From the SDK reference page for deleteObjects you can go directly to the AWS API documentation page https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
which says that this is the expected behavior. Status code 200 means that the call itself succeeded and the service accepted the request to delete the listed objects. Some of those individual deletions may fail, but the API call reports them in the response.
Why does the Java API throw an exception then? The authors of the AWS Java SDK had to translate the response into Java idioms, and they clearly reasoned that while the AWS API treats a non-zero error rate as part of the service agreement, Java developers expect anything short of 100% success to end in an exception.
Both abstractions are well documented, and it is the programmer who is responsible for a precise implementation. The engineering rule is cheap, fast, reliable: choose two. AWS was able to provide a service that has all three, with the reasonable concession that part of the reliability is implemented on the client side, as retries and slow-downs.

Google Gmail API User-rate limit for internal (unverified) apps

After 40 calls to users.settings.filters.create, I start to receive user-rate-limit errors. All subsequent filters.create calls then fail for the next (approx.) 24 hours. The specific error message is below.
HTTP 429
"User-rate limit exceeded. Retry after 2021-05-19T07:24:15.104Z
(Forwarding rules)] Location[ - ] Reason[rateLimitExceeded]
Domain[global]"
I have a 5-second delay between each call, so I am well under the published daily usage and per-user rate limits. I calculate the API allows 250 quota units per second / 5 units per call = 50 calls per second.
https://developers.google.com/gmail/api/reference/quota
We are using the Google Workspace Legacy edition, the project's OAuth consent is set for Internal use, and the project is not verified (verification is not a requirement for Internal apps).
Is there an obvious reason that 40 consecutive filters.create calls spread over 200 seconds would trigger a user-rate limit in these circumstances?
Limitations for unverified apps apply across the board. The fact that your app is for internal use only doesn't matter; you must still abide by the unverified-app restrictions.
If you want to be able to send more than that, you will need to apply for verification.
Because your app is for a Workspace account and internal use, it should be easier for you to get it verified.
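If you need to keep working within the current limits while you wait for verification, one option is to honor the "Retry after <timestamp>" hint in the error message rather than using a fixed delay. A minimal Java sketch, assuming you have already extracted the timestamp string from the error:

import java.time.Duration;
import java.time.Instant;

public class RetryAfter {

    // Sleeps until the instant reported in the rateLimitExceeded message,
    // e.g. "2021-05-19T07:24:15.104Z".
    static void waitUntil(String retryAfterTimestamp) throws InterruptedException {
        Instant retryAt = Instant.parse(retryAfterTimestamp);
        long millis = Duration.between(Instant.now(), retryAt).toMillis();
        if (millis > 0) {
            Thread.sleep(millis);
        }
    }
}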

YouTube API requests failing due to "Access Not Configured" (also: "queries per day" quota is locked to 0)

No matter what we try, all YouTube API requests we make are failing.
As we first thought this was a propagation issue, we waited 10 minutes, then 30 minutes, then 2 hours, and now over 24 hours, to no avail.
We have found this thread, which covers a similar issue with an iOS app, but does not correspond to our use case.
Here is a run-down of what the problem is:
Activating the "YouTube Data API v3" for our account shows as successful, and the API shows as enabled.
A POST to https://www.googleapis.com/upload/youtube/v3/videos (videos insert) consistently fails with the following error, despite the fact that we have waited hours for the API enablement to propagate:
Access Not Configured. YouTube Data API has not been used in project XXXXXXXXXXXX before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/youtube.googleapis.com/overview?project=928939889952 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Although the error does not directly point to this, our "Queries per day" quota for the YouTube Data API shows as "0". We cannot understand why it is zero; unfortunately, all attempts to edit it to something higher have failed, and disabling and then re-enabling the API has not solved the problem. In a completely separate project/account, this quota shows as "10,000" when the YouTube Data API is enabled, and video insert API calls do work under that project.
This is a significant roadblock for us, as it prevents us from deploying our application: any help would be appreciated.
"Access Not Configured"
actually means that you don't have permission to access the API. It basically means you have enabled the API but don't have any quota to use it. It's different from the "you have run out of quota" error message.
After a strange change sometime last year, the default quota for the YouTube API is now 0. You need to request a quota extension; it can take anywhere from a week to several months to get permission to use it.
It took me three months. No, I don't have any idea how they expect anyone to develop a new application without any quota, or to know ahead of time that they need to apply for quota before starting development. It's quite frustrating.

Seeing high rate of SocketTimeoutException from calling calendar api

Our app uses the Calendar API extensively, and since 10 PM PDT on 6/19/2019 we have been seeing a high rate of SocketTimeoutExceptions from the Calendar API Java client. It's not so bad that our app is entirely broken, but it's bad enough that it's hard to complete any sequence of event updates without a failure.
I believe the default timeout is 20 seconds (which I thought was already pretty long), and we raised it to 30 seconds, but that did not help. Should the timeout be longer than 30 seconds for event insert/update/delete calls?
Is it possible that we're being rate-limited somehow? (Though I believe that would be returned as a 403 with a relevant error message, not a SocketTimeoutException.) Or is Google Calendar experiencing some other issue after the outage?
Thanks!
If you're inserting thousands of events simultaneously, it's conceivable that you are choking some resource (sockets, bandwidth, etc.).
You may need to optimize your code by reducing the number of API calls made simultaneously per user per second.
Also increase the read timeout; see the "Timeouts and Errors" documentation.
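A sketch of raising both timeouts on the client, along the lines of that guidance; setHttpTimeout is a hypothetical wrapper around whatever HttpRequestInitializer (e.g. your Credential) you already pass to the client builder:

import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;

public class CalendarTimeouts {

    // Wraps an existing initializer and raises both timeouts
    // from the defaults to three minutes.
    static HttpRequestInitializer setHttpTimeout(HttpRequestInitializer wrapped) {
        return (HttpRequest request) -> {
            wrapped.initialize(request);
            request.setConnectTimeout(3 * 60000); // milliseconds
            request.setReadTimeout(3 * 60000);
        };
    }
}

You would then pass setHttpTimeout(credential) as the third argument to the Calendar.Builder constructor.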

Google's RuntimeConfig API responds with 'Our systems have detected unusual traffic from your computer network'

Since today (November 20, 2018) we have been getting error responses from Google's RuntimeConfig API:
Our systems have detected unusual traffic from your computer network. This page checks to see if it's really you sending the requests, and not a robot...
(check this link for complete HTML error)
We retrieve variables from Google's RuntimeConfig using the API in our code. We make quite a few requests, but not more than before:
A developer starts their server locally, which retrieves all the needed variables (about 30 every time you start).
Requesting RuntimeConfig variables via gcloud results in the same HTML error:
gcloud beta runtime-config configs variables get-value databaseHost --config-name database --project=your-test-environment
Other gcloud API requests work (projects describe, gsutil, etc.).
How can I verify whether I violated any terms? The only usage limit I can find in the Cloud Console is 6000 calls per minute.
You can find the quotas for the Runtime Configurator, and how much of them you are using, in the Cloud Console under IAM & Admin. In the Quotas section, filter on Service = Cloud Runtime Configuration API and you should see all the quotas for this API and how close you are to them. There are 4 quotas that may affect you (docs here):
1200 queries per minute (QPM) for delete, create, and update requests
600 QPM for watch requests
6000 QPM for get and list requests
4 MB of data per project, which consists of all data written to the Runtime Configurator service and accompanying metadata
We had the exact same issue on November 20th, when a large number of our preemptible instances were reallocated at the same time.
Our startup scripts make use of the gcloud beta runtime-config... commands, and they all responded with 503.
These commands responded correctly again after a few hours.
We had a support ticket with Google: there was a problem with their internal quota mechanisms at the time, which has since been fixed, so the issue is resolved.
