When I run a series of requests like
https://api.sparkpost.com:443/api/v1/suppression-list/he**0#gmail.com
I sometimes get this error:
name: 'SparkPostError',
errors: [ { message: 'Too many requests' } ],
statusCode: 429
Too many: exactly how many is that?
Over what period does the server count requests? When does the counter reset? How can I work around this?
https://developers.sparkpost.com/api/index.html#header-rate-limiting
After a quick look at the site...
The answer from support:
As mentioned before we do limit requests on our endpoints to prevent abuse and while we can’t reveal the actual limits we do recommend that you wait several seconds before making consecutive requests. If you are making these requests via some type of automated process (script, code, etc.), we highly recommend that you add a wait for several seconds upon seeing this 429 error and reattempt after. Decreasing the frequency of your requests and waiting before reattempting is the only way around this error message.
https://developers.sparkpost.com/api/index.html#header-rate-limiting
Rate Limiting
Note: To prevent abuse, our servers enforce request rate limiting, which may trigger responses with HTTP status code 429.
SparkPost implements rate limiting on the following API endpoints:
/api/v1/message-events
/api/v1/metrics/*
The limits imposed here are dynamic but as a general rule, polling these endpoints more than once in 2 minutes may encounter rate limiting and a 429 status code.
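Given that SparkPost does not publish the actual limits, the practical fix is what support describes: on a 429, wait several seconds and reattempt. Below is a minimal sketch in TypeScript (assuming a runtime with a global fetch, e.g. Node 18+); the starting delay and attempt count are illustrative assumptions, not SparkPost-documented figures.

// Retry a request whenever the server answers 429, waiting longer each time.
async function sparkPostRequest(url: string, init: RequestInit = {}, maxAttempts = 5): Promise<Response> {
  let delayMs = 3000; // "wait several seconds", per SparkPost support; exact value is an assumption
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res;      // success, or an error unrelated to rate limiting
    if (attempt === maxAttempts) return res; // out of attempts: hand the 429 back to the caller
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    delayMs *= 2;                            // decrease request frequency on repeated 429s
  }
  throw new Error("unreachable");
}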
Related
If there is a limit on the number of resources created using a POST request, what should the status code be?
Let's say there is a restriction on the number of resources that can be created using POST, such that only 10 resources may exist. The 11th POST request should fail due to this constraint. What should the status code be?
Should it be 422 with a meaningful message, something along the lines of "Resource count limit reached"? Or is there a status code specifically for this?
It really depends on your use-case.
If the user is limited in time (say, 10 per day) but may automatically get more credits later, I suggest 429 Too Many Requests, since the client has sent too many requests in one day.
If the credits are fixed (i.e. the user only ever had 10 free credits), I suggest 403 Forbidden, as the request is fully understood and processable but the server denies it due to a lack of credits.
Either way, 422 Unprocessable Entity is not correct, because the request is well formed and the server could process it if credits were available. Nothing is actually missing from the request (as far as I understand from your post).
I think that HTTP 400 is appropriate, especially if you can provide helpful feedback in the error response. If a user is submitting an invalid payload in the request, it's a bad request; anything else might get confusing.
Then again, HTTP 405 (Method Not Allowed) might be better: if the server accepts no more POST requests for a particular resource, that may be more accurate. However, it really just depends on the future use of the API.
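To make the two options above concrete, here is a hedged sketch using Node's built-in http module in TypeScript; the /widgets resource, the 10-item cap, and the Retry-After value are hypothetical and only for illustration.

import { createServer } from "node:http";

const LIMIT = 10;     // hypothetical cap from the question
let createdCount = 0; // in-memory counter, purely for illustration

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/widgets") {
    if (createdCount >= LIMIT) {
      // Quota that replenishes over time: 429 plus a Retry-After hint.
      // If the credits never replenish, 403 Forbidden would express "understood, but refused" instead.
      res.writeHead(429, { "Retry-After": "86400", "Content-Type": "application/json" });
      res.end(JSON.stringify({ error: "Resource count limit reached" }));
      return;
    }
    createdCount++;
    res.writeHead(201, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ id: createdCount }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8080);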
I've noticed that when I issue 200 responses via NiFi, the response is typically immediate. However, 404 and 500 errors seem to take so long that they often cause the client to time out.
Is this an intentional behavior? Or is my HandleHTTPResponse processor possibly setup wrong?
--
Edit: While answered below, it's worth clarifying -- the HandleHTTPResponse was not behaving differently; I just happened to be routing [penalized] flowfiles to processors that were set to give 404/500 error codes ... so, it appeared there was a correlation.
The failed requests are probably being penalized. Check the Penalty Duration setting on the processors in your failure path and lower the default value of 30 seconds to 0, which makes more sense when handling expected HTTP errors.
Without knowing which responses are taking this long, my guess is that the error responses are being generated by an exception which may be thrown due to an internal timeout (i.e. waiting for some other connection or operation which fails to complete, exhausting the timeout, which leads to the HTTP response taking so long). You can profile these operations in the JVM if you like.
I have read the Gmail API quota explanation here (https://developers.google.com/gmail/api/v1/reference/quota), but am still having trouble understanding what causes us to go over the limit.
Question 1:
What is a "user" in a per-user quota? I am not sure whether the user is an individual Gmail user or a service client using the Gmail API.
Question 2:
We've seen the following error a few times, but don't see any obvious limit we've hit.
"error": {
"errors": [
{
"domain": "usageLimits",
"reason": "rateLimitExceeded",
"message": "Rate Limit Exceeded"
}
],
"code": 429,
"message": "Rate Limit Exceeded"
}
We were under 250 units/s and 25,000 units/100s. We're only using history.list and message.get calls, with no sending or modifications.
Is there some other quota I am missing?
The user quota is based on the account you are accessing, so it would be the Gmail account. Sometimes you can trick it by sending a random quotaUser, but this doesn't always work; I suspect Google also uses your IP address to track quota.
The user rate limit is flood protection: you are going too fast.
Per User Rate Limit: 250 quota units per user per second, moving average (allows short bursts)
Exceeding a rate limit will cause an HTTP 403 or HTTP 429 Too Many Requests response and your app should respond by retrying with exponential backoff.
Google's calculations are not perfect; you could be sending more or less than you think and still hit this quota. Just implement exponential backoff.
Exponential backoff
The flow for implementing simple exponential backoff is as follows:
Make a request to the API.
Receive an HTTP 403 rate-limited response, which indicates you should retry the request.
Wait 1 + random_number_milliseconds seconds and retry the request.
Receive an HTTP 403 rate-limited response, which indicates you should retry the request.
Wait 2 + random_number_milliseconds seconds, and retry the request.
Receive an HTTP 403 rate-limited response, which indicates you should retry the request.
Wait 4 + random_number_milliseconds seconds, and retry the request.
Receive an HTTP 403 rate-limited response, which indicates you should retry the request.
Wait 8 + random_number_milliseconds seconds, and retry the request.
Receive an HTTP 403 rate-limited response, which indicates you should retry the request.
Wait 16 + random_number_milliseconds seconds, and retry the request.
Stop. Report or log an error.
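Translated into code, that flow might look like the sketch below (TypeScript, assuming a global fetch); the five-retry cap and one-second base delay simply mirror the steps above rather than any documented Gmail limit.

// Retry rate-limited requests with exponentially growing, jittered waits.
async function requestWithBackoff(url: string, init: RequestInit = {}, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);                        // make a request to the API
    if (res.status !== 403 && res.status !== 429) return res;  // not rate limited: done
    if (attempt >= maxRetries) {
      throw new Error("Rate limited after maximum retries");   // stop; report or log an error
    }
    const randomMs = Math.floor(Math.random() * 1000);         // random_number_milliseconds
    const waitMs = Math.pow(2, attempt) * 1000 + randomMs;     // 1s, 2s, 4s, 8s, 16s (+ jitter)
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}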
For your Question 1
Here are the meanings of the different quotas for Gmail:
QPD (quota per day): the maximum number of requests a client ID can make to an API over a 24-hour period.
QPS (quota per second): a global per-second quota for the application, i.e. how many calls per second the application as a whole can make.
Quota per second per user: the number of queries the application can make per second on behalf of a single user.
For question number 2
Well, if you check the quota for Gmail in your developer console, Gmail has a default quota of 1,000,000,000 quota units per day and a per-user rate limit of 250 quota units per user per second.
So what I can suggest is to use the following tips so that you work within your quota efficiently:
Push notifications - they improve the performance of your application by eliminating the extra network and compute costs involved with polling resources to determine whether they have changed. Whenever a mailbox changes, the Gmail API notifies your backend server application.
Use synchronization to retrieve and store as many of the most recent messages or threads as are necessary for your purpose (see the sketch after this list).
Batching requests - to reduce the number of HTTP connections your client has to make.
If you notice that you reach this limit and you need more than this, then you can apply for more quota here.
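As an illustration of the synchronization tip, here is a minimal sketch using the googleapis Node client in TypeScript; the pre-configured auth object and the stored startHistoryId are assumptions about your setup.

import { google } from "googleapis";

// `auth` is assumed to be an already-authorized OAuth2 client for the mailbox owner,
// and `startHistoryId` the history id you stored after your last full sync.
async function syncChanges(auth: any, startHistoryId: string): Promise<void> {
  const gmail = google.gmail({ version: "v1", auth });
  let pageToken: string | undefined;
  do {
    // history.list returns only what changed since startHistoryId,
    // which is far cheaper in quota units than re-fetching every message.
    const res = await gmail.users.history.list({ userId: "me", startHistoryId, pageToken });
    for (const record of res.data.history ?? []) {
      for (const added of record.messagesAdded ?? []) {
        console.log("new message:", added.message?.id); // process only the changed ids
      }
    }
    pageToken = res.data.nextPageToken ?? undefined;
  } while (pageToken);
}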
I was writing a Node application that uses the Gmail API when I noticed this error. My understanding is that there are too many concurrent requests. It seems to be prompting me to wait 15 minutes and try again. After the waiting period, I tried to poke the API with the GUI over at https://developers.google.com/gmail/api/v1/reference/users/messages/list#response, but the same error appears (with the retry time pushed out another 15 minutes). I've looked at my quota usage on the API site in the developer console, but there's no activity other than the errors. Does anyone know why this might be? I'd be extremely appreciative.
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "rateLimitExceeded",
        "message": "User-rate limit exceeded. Retry after 2016-07-11T23:51:49.309Z"
      }
    ],
    "code": 429,
    "message": "User-rate limit exceeded. Retry after 2016-07-11T23:51:49.309Z"
  }
}
The Gmail API is subject to a daily usage limit that applies to all requests made from your application, as well as per-user rate limits.
Daily Usage: 1,000,000,000 quota units per day
Per User Rate Limit: 250 quota units per user per second, moving average (allows short bursts)
Exceeding a rate limit will cause an HTTP 403 or HTTP 429 Too Many Requests response and your app should respond by retrying with exponential backoff.
Exponential backoff is a standard error handling strategy for network applications in which the client periodically retries a failed request over an increasing amount of time. If a high volume of requests or heavy network traffic causes the server to return errors, exponential backoff may be a good strategy for handling those errors. Conversely, it is not a relevant strategy for dealing with errors unrelated to rate limiting, network volume or response times, such as invalid authorization credentials or file not found errors.
Used properly, exponential backoff increases the efficiency of bandwidth usage, reduces the number of requests required to get a successful response, and maximizes the throughput of requests in concurrent environments.
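Since this particular error body includes a "Retry after" timestamp, one option is to parse it and wait until that time before retrying, instead of backing off blindly. A sketch in TypeScript follows; relying on the human-readable message format is an assumption, as Gmail does not guarantee it.

// Wait until the "Retry after <ISO timestamp>" mentioned in the error message has passed.
function waitUntilRetryAfter(errorMessage: string): Promise<void> {
  const match = errorMessage.match(/Retry after (\S+)/);
  const retryAt = match ? Date.parse(match[1]) : NaN;
  const waitMs = Number.isNaN(retryAt)
    ? 15 * 60 * 1000                     // fall back to the ~15 minutes observed in the question
    : Math.max(0, retryAt - Date.now());
  return new Promise((resolve) => setTimeout(resolve, waitMs));
}

// Usage: on a 429 "User-rate limit exceeded" error,
//   await waitUntilRetryAfter(err.message);
// then reissue the request.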
I'm getting the servingLimitExceeded error message for individual results within a batch, but not for an entire batch. For example, I may get 100 records responding with this error and then it starts returning more results, all within a single batch.
If batches are handled internally by the Google API, how can I adjust them so they don't hit the rate limit? I tried adding a 1-second delay between batches, but that doesn't change this. I also set retries = 3 on the Ruby client, but I don't know whether that means it retries a failed batch. I don't think it's retrying individual API calls within the batch, because the back-off should resolve this.
Do I have to record the failed results and create a new batch to recover those separately?
Incidentally, the documented quota limit errors are confusing. There are dailyLimitExceeded and rateLimitExceeded messages, but this isn't returning either of those. The servingLimitExceeded description of "The overall rate limit specified for the API has already been reached" is not all that helpful, but I'm assuming this is the rate limit we hit.
Update
Looking at the code, I see that the retries in the Ruby google-api-client only apply to transmission and authorization (401) errors. A 403 (which is what a rate limit returns) raises a ClientError, which is not retried anyway.
So setting retries on the client object has no bearing on this.
Is there something I can do to address this in the batch?
We received word from the Webmaster team that the API is limited to 20 QPS and there is currently no way to go higher.
One suggested solution is to make smaller batch requests.
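Building on that suggestion, a generic way to recover the rate-limited entries is to keep batches small and re-queue only the items that came back with the limit error. The sketch below is written in TypeScript for consistency with the other examples rather than the Ruby client; callApi, the batch size, and the one-second pause between batches are assumptions.

// Process items in small batches; re-queue rate-limited items, collect hard failures.
// `callApi` is a hypothetical per-item request function returning { ok, rateLimited }.
async function processInSmallBatches<T>(
  items: T[],
  callApi: (item: T) => Promise<{ ok: boolean; rateLimited: boolean }>,
  batchSize = 10,   // stay well under the ~20 QPS mentioned by the Webmaster team
  maxRounds = 50,   // safety cap so persistent rate limiting cannot loop forever
): Promise<T[]> {
  const pending = [...items];
  const failed: T[] = [];
  for (let round = 0; round < maxRounds && pending.length > 0; round++) {
    const batch = pending.splice(0, batchSize);
    const results = await Promise.all(batch.map((item) => callApi(item)));
    batch.forEach((item, i) => {
      if (results[i].rateLimited) pending.push(item); // try these again in a later batch
      else if (!results[i].ok) failed.push(item);     // record hard failures separately
    });
    await new Promise((resolve) => setTimeout(resolve, 1000)); // pause so each batch stays under the QPS cap
  }
  return [...failed, ...pending]; // anything still pending after maxRounds counts as failed too
}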