Raised Google Drive API Per-user limit, still getting userRateLimitExceeded errors

Similar to Raising Google Drive API per-user limit does not prevent rate limit exceptions
In the Drive API Console, the quotas look like this:
Despite the Per-user limit being set to an unnecessarily high number of requests/sec, I am still getting rate errors at the user level.
What I'm doing:
I am using approximately 8 threads uploading to Drive, and they ALL implement a robust exponential back-off of 1, 2, 4, 8, 16, 32, 64 sec respectively (pretty excessive back-off, but necessary imho). The problem can still persist through all of this back-off in some of the threads.
Is there some other rate that is not being advertised / cannot be set?
I'm nowhere near the requests/sec limit, and I still have 99.53% of my total quota. Why am I still getting userRateLimitExceeded errors?

userRateLimitExceeded is basically flood protection. It's used to prevent people from sending too many requests too fast.
Indicates that the user rate limit has been exceeded. The maximum rate limit is 10 qps per IP address. The default value set in Google Developers Console is 1 qps per IP address. You can increase this limit in the Google Developers Console to a maximum of 10 qps.
You need to slow your code down, by implementing Exponential Backoff.
Make a request to the API
Receive an error response that has a retry-able error code
Wait 1s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 2s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 4s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 8s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 16s + random_number_milliseconds seconds
Retry request
If you still get an error, stop and log the error.
The idea is that every time you see that error you wait a few seconds, then try to send the request again. If you get the error again, you wait a little longer.
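The back-off loop above can be sketched in Python. This is only a sketch: make_request and RateLimitError are placeholders for your actual client call and for whatever exception your library raises on a rate-limit response.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the client library's rate-limit (403/429) error."""
    pass

def request_with_backoff(make_request, max_retries=5):
    for attempt in range(max_retries):
        try:
            return make_request()
        except RateLimitError:
            # Wait 2^attempt seconds plus random jitter, matching the
            # 1s, 2s, 4s, 8s, 16s sequence in the steps above.
            delay = (2 ** attempt) + random.random()
            time.sleep(delay)
    # Still failing after the final wait: stop and surface the error.
    raise RateLimitError("giving up after %d retries" % max_retries)
```

The random jitter matters: without it, all 8 threads that backed off together will retry at the same instant and trip the flood protection again.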
Quota user:
Now, I am not sure how your application works, but if all the requests are coming from the same IP, this could cause your issue. As you can see from the quota, you get 10 requests per second per user. How does Google know it's a user? It looks at the IP address. If all your requests are coming from the same IP, then it's one user, and you are locked to 10 requests per second.
You can get around this by adding QuotaUser to your request.
quotaUser - Alternative to userIp.
Lets you enforce per-user quotas from a server-side application even in cases when the user's IP address is unknown. This can occur, for example, with applications that run cron jobs on App Engine on a user's behalf.
You can choose any arbitrary string that uniquely identifies a user, but it is limited to 40 characters.
Overrides userIp if both are provided.
Learn more about capping usage.
If you send a different quotaUser on every request, say a random number, then Google thinks it's a different user and counts each request against a separate per-user quota. It's a little trick to get around the IP limitation when running server applications that send every request from the same IP.
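As a rough sketch of the trick, here is how a quotaUser parameter could be appended to a Drive v3 files.list URL. The endpoint is the real REST URL, but the helper name and user ID are illustrative, not part of any official client:

```python
from urllib.parse import urlencode

def drive_list_url(end_user_id, page_size=10):
    """Build a Drive v3 files.list URL scoped to one end user's quota.

    end_user_id is any string that uniquely identifies the end user;
    Google counts the per-user rate limit against it instead of the
    caller's IP address, and it overrides userIp if both are sent.
    """
    if len(end_user_id) > 40:
        raise ValueError("quotaUser is limited to 40 characters")
    query = urlencode({"pageSize": page_size, "quotaUser": end_user_id})
    return "https://www.googleapis.com/drive/v3/files?" + query
```

You would still send the usual Authorization header with this URL; quotaUser only affects how the rate limiter buckets the request.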

Related

API load testing in JMeter: while running with 100 users, 45-50 users get a 200 response but the rest show 500 Internal Server Error

I tried one API with 100 users. For 50 users I get a success response, but for the remaining 50 I get a 500 Internal Server Error. Why is only half of the load failing? Please suggest a solution.
As per the 500 Internal Server Error description:
The HyperText Transfer Protocol (HTTP) 500 Internal Server Error server error response code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request.
So you need to look at your server logs to find the reason for the failure. Most probably the server becomes overloaded and cannot handle 100 users. Try increasing the load gradually and inspect the relationship between:
Number of users and number of requests per second
Number of users and response time
My expectation is that:
At the first phase of the test, the response time will remain the same and the number of requests per second will grow proportionally to the number of users.
At some stage you will see that the number of requests per second stops growing. The moment right before that is known as the saturation point.
After that, response time will start growing.
After that, errors will start occurring.
You might want to collect and report the aforementioned metrics and identify the current bottleneck. If you need to understand the underlying reason, that's a whole different story.
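The phases above can be illustrated with a toy closed-loop model. The per-user rate and server capacity below are made-up numbers, chosen only to show where the saturation point appears:

```python
# Toy model: each looping virtual user issues requests back-to-back,
# and the server has a fixed processing ceiling. Both numbers are
# illustrative assumptions, not measurements.
RATE_PER_USER = 10   # requests/second one unqueued user can generate
CAPACITY = 50        # server ceiling, requests/second

def achieved_throughput(users):
    # Offered load grows linearly with users until capacity caps it.
    return min(users * RATE_PER_USER, CAPACITY)

def response_time(users):
    # Closed loop (Little's law): users = throughput * response_time.
    # Flat before saturation; grows with load after it.
    return users / achieved_throughput(users)

for u in (1, 2, 5, 10, 20):
    print(u, achieved_throughput(u), response_time(u))
```

With these numbers the saturation point is at 5 users: throughput stops growing there, and from then on only response time increases, which is exactly the sequence of phases described above.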

Sudden increase of response time in JMeter

I ran some load tests using JMeter
and found an unexpected increase in response time at the end of each test plan.
Just before the end of the test plan (duration 20 minutes), the response time increased all of a sudden.
It occurred again when I ran the same test plan with a different duration (30 minutes). Latency is almost the same as the response times, so there seems to be no problem with the network.
I'm very curious why the response time increased even while the number of threads is decreasing. Could you guess what the reason is?
Thank you in advance.
From your screenshot, it is clearly visible that in both cases (20 min and 30 min) response time increased as the test completed (the duration reached its end). That's because the threads have insufficient ramp-down time.
If your JMeter test is stopped forcefully, all the active threads will be closed immediately, so the requests generated by those threads will show higher response times.
I am guessing here; this happened once for me too. Have you checked that the requests are sent properly, using the View Results Tree listener? Please check the request status at the point where the graph starts increasing.
Or
it may be caused by other traffic, or by the API being used by other users.

Incorrect graph generate by jmeter listener Hits per Seconds and Composite Graph

Learning JMeter and having a problem reading the graph listener output:
created a Thread Group with 8 threads, ramp-up 1, and loop forever
added listeners: Active Threads Over Time, Hits per Second, Response Times Over Time
Result:
a. Active Threads Over Time shows the correct result, with a maximum of 8 threads
b. Hits per Second shows a really weird graph: 148 hits/sec
Trying to debug, I changed the thread count to 1; Hits per Second still generates a weird graph, with 20 hits/sec.
Any idea why this is happening?
I use the latest release, JMeter 3.0.
As I had clarified here, jp@gc - Hits per Second, this listener shows the total number of requests sent to the server per second. Per second is the default; it can be changed in the settings tab.
When you have 1 user, JMeter sends 18-20 requests/second (Loop Forever will keep sending requests for the user as soon as the user gets a response). So the user was able to make 19 requests in a second. When you have 8 users, the test plan sends around 133 requests per second. It seems to work fine, and there is nothing weird here.
When you have 8 users, JMeter would not have any issue sending the first 8 requests (the first request for each thread). But subsequent requests for each thread will be sent only after the response to the previous request is received. (If you have any timers to simulate user think time, then the user will wait for that duration after the response is received before sending the next request.)
If 1 user is able to make 19 requests per second (that is, the server processed 19 requests per second), then 8 users should be able to send 152 requests per second. But when you increase the user load, the server's throughput (the number of requests it can process per unit time) grows only gradually, as shown in the picture. If you keep increasing the users, at some point the server's throughput (hits/second) gets saturated and will not increase beyond that point. So perhaps here the server got saturated at 133 requests/second; that is why we could not see 152 requests for 8 users. To understand the behavior, you need to ramp up users slowly.
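The arithmetic above can be checked with a tiny sketch, using the numbers from this answer (19 requests/second for one user, an observed ceiling of around 133):

```python
# Numbers taken from the answer above: one looping user achieves
# ~19 requests/second against this server.
PER_USER_RATE = 19

def expected_hits(users, server_capacity=None):
    # With no think time, offered load scales linearly with users...
    offered = users * PER_USER_RATE
    if server_capacity is None:
        return offered
    # ...until the server's capacity caps the achievable hits/second.
    return min(offered, server_capacity)

print(expected_hits(8))        # linear expectation for 8 users
print(expected_hits(8, 133))   # with the observed ~133 req/s ceiling
```

The linear expectation for 8 users is 152 hits/second; with the observed ceiling it drops to 133, matching the graph the question describes.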
Check here for few tips on JMeter

JMeter - how to set max value in Aggregate report

I have a test plan for Rest API with one thread group with 2 samplers within.
While running load test for
no of threads(users):80
Ramp up period: 1
I get "Response code: 504 Response message: GATEWAY_TIMEOUT" in jmeter.
I observed that when the Max value in the Aggregate graph reaches 60000 ms, all responses get timed out.
What needs to be done to prevent the timeout issue?
Load test works fine when I use 50 users or less.
I think you are getting timeouts because at a load of 80+ users the response time shoots up, but your application or REST APIs have a shorter timeout duration set. Because of the heavy response times, you exceed the timeout duration and get those errors.
To resolve this issue, the simplest solution would be to increase the timeout values if possible.
Otherwise you need to improve the response time of those REST APIs so that you won't get timeouts.
While doing this, monitor system utilization to be sure that the changes are not hampering anything else.
From what you are saying, it seems your application's limit is ~60 users with the given configuration.
Please check your ELB settings or application server settings (GlassFish/Apache). ELB has a default timeout of 59 seconds; after that, ELB will expire your request.
But you can still see the results of those requests in the DB; they might simply have taken longer to respond.
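As a minimal illustration of the timeout mechanism being discussed, here is a toy example where a listener that never replies stands in for a slow API; the 0.2-second value is arbitrary, standing in for the ELB's ~60-second limit:

```python
import socket

# A local listener that accepts connections but never sends data,
# standing in for an API whose response time exceeds the gateway timeout.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

# The client gives up after 0.2 s, just as the ELB gives up after
# its idle timeout and returns 504 to the caller.
client = socket.create_connection(server.getsockname(), timeout=0.2)
try:
    client.recv(1024)          # no data will ever arrive
    timed_out = False
except socket.timeout:
    timed_out = True

client.close()
server.close()
print(timed_out)  # True: the slow "server" exceeded the client timeout
```

The point of the example: the backend may well finish its work (and write to the DB) after the gateway has already timed the request out, which is exactly the mismatch described above.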

Google Analytics API returning 403 rate limit but we are well under the rate

We daily download data from Google Analytics API. This morning, a number of jobs on one of our accounts hit a 403 "Serving Limit Exceeded" error. However, I checked into the statistics posted in the console and our records don't appear to be anywhere near the limit.
We made 335 requests this morning. This is far less than the 50k daily requests limit.
The API chart on our console page shows that we peaked at 0.1322 requests/second, which is much lower than the limit I've read of about 10 requests/second.
We run at most two simultaneous processes from each of two IP addresses; these have a five second delay between jobs and the jobs make only one request each.
Those 335 requests are spread across four different GA accounts, although they are likely queued such that all request for a single account are contiguous.
The errors occurred between midnight and 6AM Pacific time (-0700).
When I re-ran all the jobs at 8AM Pacific time they ran without error.
Am I missing something in the rate limiting? Can someone explain what factor would cause us to hit this limit?
We're using the google-api-client gem.
