What does (extra requests) mean in Parse.com services? - parse-platform

In Parse's pricing FAQ it is mentioned that "If your app hits its request limit, your extra requests will begin to fail with error code 155 (RequestLimitExceeded)".
What does (extra requests) mean?
Are they the requests made within the same minute? day? month? or all other requests made until the request limit is increased?

From the Parse.com FAQs:
The request limit is calculated on a per-minute basis. For example, if
an app is set to 30 requests/second, your app will hit its request
limit once it makes more than 1,800 requests over a 60 second period.
If your app hits its request limit, your extra requests will begin to
fail with error code 155 (RequestLimitExceeded). To prevent the
requests from failing you should adjust the request limit slider for
the relevant app on the Account Overview page. Please note that you
can see your actual requests/second on the Performance Analytics tab.
Your plan supports a certain number of requests per second. Extra requests are all requests started after the limit defined by your plan has already been reached. As outlined above, the limit is enforced on a per-minute basis: if more than 60 × your per-second rate requests are made with your API key within a minute, the surplus requests are extra requests and will fail.
The FAQ entry actually contains an example: if your plan allows 30 requests per second, then 60 × 30 = 1,800 requests are allowed per minute. Once the minute has passed, the counter is reset.
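The per-minute reset described above behaves like a fixed-window counter. A minimal sketch in Java (illustrative only; the class and method names are made up here, not Parse's actual implementation):

```java
// Minimal fixed-window request counter (illustrative sketch; not Parse's actual code).
class FixedWindowLimiter {
    private final long limitPerMinute;
    private long windowStart; // start of the current one-minute window (ms)
    private long count;       // requests counted in the current window

    FixedWindowLimiter(long requestsPerSecond, long startMillis) {
        this.limitPerMinute = requestsPerSecond * 60;
        this.windowStart = startMillis;
    }

    // Returns true if the request is allowed; false means it is an
    // "extra request" and would fail with error 155 (RequestLimitExceeded).
    synchronized boolean tryAcquire(long nowMillis) {
        if (nowMillis - windowStart >= 60_000) { // a minute has passed: reset the counter
            windowStart = nowMillis;
            count = 0;
        }
        return ++count <= limitPerMinute;
    }
}
```

With a 30 requests/second plan, the 1,801st request inside one minute is rejected, and the counter starts over once the minute has elapsed.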

Related

concurrency test in jmeter

I am new to JMeter, and I would like to run a test in which I send 500 requests per second, for 10 seconds.
The configuration I have is:
I would like to know whether my configuration is correct or whether it can be done better.
Your configuration means:
5000 users will be kicked off in 10 seconds, i.e. 500 users will be started each second
Once started, each user will execute the Samplers sequentially from top to bottom
The actual number of requests per second will depend on the application response time.
In your case you will only achieve 500 requests per second if your application response time is exactly 1 second. If it is higher you will get fewer requests per second, and vice versa.
If you need to send 500 requests per second for 10 seconds sharp I would suggest using Concurrency Thread Group and Throughput Shaping Timer combination.
The Throughput Shaping Timer needs to be set up to hold 500 requests/second for 10 seconds, and the Concurrency Thread Group configured to supply enough threads to sustain that rate.
The configuration is an example only; in your case the number of threads required to produce a 500 requests/second load might be different, and it mainly depends on the application response time.
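How many threads are "enough" can be estimated with Little's law, a back-of-the-envelope steady-state approximation (this is generic queueing math, not a JMeter API):

```java
// Little's law: concurrent users needed ≈ target throughput (req/s) × response time (s).
class LoadMath {
    static int usersNeeded(double targetRequestsPerSecond, double responseTimeSeconds) {
        return (int) Math.ceil(targetRequestsPerSecond * responseTimeSeconds);
    }
}
```

With a 1 second response time you need 500 concurrent users to hold 500 requests/second; at 200 ms, 100 users suffice; at 2 seconds, you would need 1000.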

Rate limiting WebClient requests after a certain number of retries?

Is it possible to rate limit webclient requests for a url or any requests after a defined number of retries using resilience4j or otherwise?
Following is a sample code.
webClient.post()
.uri(...)
.header(...)
.bodyValue(...)
.retrieve()
.bodyToMono(Void.class)
.retryWhen(...) // every 15 mins, for 1 day
.subscribe();
An example use case: say 10,000 requests per day need to be sent to 100 different URLs, with retries for the case when a URL is temporarily unavailable.
But if a URL comes back up after a few hours, it would have accumulated a large number of requests, which I would like to rate limit.
Effectively I don't want to rate limit the whole operation, only specific URLs that have been unavailable for a long time, or rate limit requests that have already been retried 'x' times. Is there a way to achieve this?
Not to be confused with a circuit breaker requirement: in this context it's not really a problem to keep retrying every few minutes.
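The per-URL behavior being asked about is essentially a token bucket keyed by URL, consulted once a request has been retried 'x' times. A minimal plain-Java sketch of the idea (resilience4j's RateLimiter offers a production-ready equivalent; the class and names below are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal per-URL token bucket (illustrative sketch, not resilience4j's implementation).
class PerUrlRateLimiter {
    private final double ratePerSecond; // tokens refilled per second
    private final double burst;         // bucket capacity
    // url -> {available tokens, last refill timestamp (ms)}
    private final Map<String, double[]> buckets = new ConcurrentHashMap<>();

    PerUrlRateLimiter(double ratePerSecond, double burst) {
        this.ratePerSecond = ratePerSecond;
        this.burst = burst;
    }

    // Allow the request only if the URL's bucket has a token available.
    boolean tryAcquire(String url, long nowMillis) {
        double[] b = buckets.computeIfAbsent(url, k -> new double[]{burst, nowMillis});
        synchronized (b) {
            double elapsedSeconds = (nowMillis - b[1]) / 1000.0;
            b[0] = Math.min(burst, b[0] + elapsedSeconds * ratePerSecond); // refill
            b[1] = nowMillis;
            if (b[0] >= 1.0) {
                b[0] -= 1.0;
                return true;
            }
            return false;
        }
    }
}
```

Because each URL gets its own bucket, a URL that comes back up after hours of retries only drains its own tokens and cannot flood the target; other URLs proceed unthrottled.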

Incorrect graph generated by JMeter listeners Hits per Second and Composite Graph

I am learning JMeter and having a problem reading the graph listener output:
created a Thread Group with 8 threads, ramp-up 1 and loop forever
added the listeners Active Threads Over Time, Hits per Second and Response Times Over Time
Result:
a. Active Threads Over Time shows the correct result, with a maximum of 8 threads
b. Hits per Second shows a really weird graph: 148 hits/sec
Trying to debug, I changed the thread count to 1; Hits per Second still generates a weird graph, with 20 hits/sec.
Any idea why this is happening?
I use the latest release, JMeter 3.0.
As I had clarified here, the jp@gc - Hits per Second listener shows the total number of requests sent to the server per second. "Per second" is the default granularity; it can be changed in the settings tab.
When you have 1 user, JMeter sends 18-20 requests/second (Loop Forever keeps sending the user's next request as soon as the previous response is received). So a single user was able to make 19 requests in a second. When you have 8 users, the test plan sends around 133 requests per second. It seems to work fine; nothing weird here.
With 8 users, JMeter has no issue sending the first 8 requests (the first request for each thread). But each thread's subsequent request is sent only after the response to the previous request is received. (If you have timers to simulate user think time, the user waits for that duration after the response before sending the next request.)
If 1 user can make 19 requests per second (i.e. the server processed 19 requests per second), then 8 users should in theory be able to send 152 requests per second. But as you increase the user load, the server's throughput (the number of requests it can process per unit time) increases only gradually. If you keep increasing the users, at some point the throughput (hits/second) saturates and will not increase beyond that point. So the server may have saturated here at around 133 requests/second, which is why you don't see 152 requests/second for 8 users. To understand the behavior, increase the user count (ramp up) slowly.
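The arithmetic in the answer can be checked directly (a steady-state approximation that ignores think time):

```java
class ThroughputMath {
    // Average response time implied by a single user's observed throughput.
    static double avgResponseMillis(double singleUserRps) {
        return 1000.0 / singleUserRps;
    }

    // Ideal throughput for N users if the server were not saturated.
    static double idealRps(int users, double avgResponseMillis) {
        return users * 1000.0 / avgResponseMillis;
    }
}
```

19 requests/second for one user implies roughly 52.6 ms per request; 8 users would ideally give 152 requests/second, so the observed ~133 suggests the server is approaching saturation.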

jmeter -How to set max value in Aggregate report

I have a test plan for a REST API with one Thread Group containing 2 samplers.
While running a load test with
number of threads (users): 80
ramp-up period: 1
I get "Response code: 504 Response message: GATEWAY_TIMEOUT" in JMeter.
I observed that when the Max value in the Aggregate Graph reaches 60000 ms, all responses get timed out.
What needs to be done to prevent the timeout issue?
The load test works fine when I use 50 users or fewer.
I think you are getting timeouts because at a load of 80+ users the response time shoots up, while your application or REST APIs have a shorter timeout duration configured. Because of the heavy response times you exceed the timeout duration and get those errors.
The simplest solution would be to increase the timeout values, if possible.
Otherwise you need to improve the response times of those REST APIs so that you won't hit the timeouts.
While doing this, monitor system utilization to make sure the changes are not causing problems elsewhere.
From what you are saying, it seems your application's limit is roughly 60 users with the given configuration.
Please check your ELB settings or application server settings (GlassFish/Apache). ELB has a default timeout of 59 seconds, after which it expires your request.
But you may still see the results of those requests in the DB, since the backend might have completed them after the timeout.

Google Analytics API returning 403 rate limit but we are well under the rate

We daily download data from Google Analytics API. This morning, a number of jobs on one of our accounts hit a 403 "Serving Limit Exceeded" error. However, I checked into the statistics posted in the console and our records don't appear to be anywhere near the limit.
We made 335 requests this morning. This is far less than the 50k daily requests limit.
The API Chart on our console page shows that we peaked at 0.1322 requests/second which is much lower than the limit I've read being about 10 requests/second.
We run at most two simultaneous processes from each of two IP addresses; these have a five second delay between jobs and the jobs make only one request each.
Those 335 requests are spread across four different GA accounts, although they are likely queued such that all requests for a single account are contiguous.
The errors occurred between midnight and 6AM Pacific time (-0700).
When I re-ran all the jobs at 8AM Pacific time they ran without error.
Am I missing something in the rate limiting? Can someone explain what factor would cause us to hit this limit?
We're using the google-api-client gem.
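For intermittent per-second quota errors like this, Google's API guidelines generally recommend retrying with exponential backoff plus jitter rather than failing the job. A generic sketch of the delay calculation (the parameter values are illustrative, not Google-specified):

```java
import java.util.concurrent.ThreadLocalRandom;

class Backoff {
    // Exponential backoff delay, capped; attempt is 0-based.
    static long delayMillis(int attempt, long baseMillis, long capMillis) {
        long exp = baseMillis << Math.min(attempt, 16); // base * 2^attempt, clamped
        return Math.min(capMillis, exp);
    }

    // Full jitter: sleep a random duration up to the computed delay,
    // so parallel jobs don't retry in lockstep.
    static long withJitter(long delayMillis) {
        return ThreadLocalRandom.current().nextLong(delayMillis + 1);
    }
}
```

With a 1 s base and a 64 s cap, attempts wait roughly 1 s, 2 s, 4 s, 8 s, ... up to the cap; the jitter spreads out the two simultaneous processes per IP so they don't burst against the same per-second quota.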
