X-Rate-Limit with two restrictions?

I have a "search" endpoint in my API that requires quite a lot of work in the backend.
For that reason, I've added a rate limit per minute (10 req/min), but I would also like to add a rate limit per day to avoid abuse.
The issue I'm facing concerns the X-Rate-Limit headers. Which one do I display? Is there a technique for displaying two "values", as in my case?

My experience with the X-Rate-Limit header is limited to working with the League of Legends API.
According to their documentation, rate limits are comma-separated and all use the same unit of time. For instance, a rate limit of 100 calls per minute and 500 calls per hour would be
X-Rate-Limit-Count: 100:1,500:60 if minutes are your unit of time.
They themselves use seconds as their unit of time. I don't know whether this is standard practice or whether any unit of time is acceptable. If seconds are preferred, the above example would look like X-Rate-Limit-Count: 100:60,500:3600.
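
As a rough sketch of how such a combined value could be built (this helper is illustrative only, not part of the League of Legends API or any standard; the "calls:windowInSeconds" pairs follow the format above, and the daily limit figure is a placeholder):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch only: assemble one X-Rate-Limit-Count value covering several limits at once,
// using "calls:windowInSeconds" pairs as in the example above.
public class RateLimitHeader {

    // e.g. {10=60, 500=86400} -> "10:60,500:86400"
    static String buildHeaderValue(Map<Integer, Integer> callsToWindowSeconds) {
        return callsToWindowSeconds.entrySet().stream()
                .map(e -> e.getKey() + ":" + e.getValue())
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        Map<Integer, Integer> limits = new LinkedHashMap<>();
        limits.put(10, 60);       // 10 requests per minute, as in the question
        limits.put(500, 86_400);  // a per-day limit; 500 is a made-up figure
        System.out.println("X-Rate-Limit-Count: " + buildHeaderValue(limits));
        // prints: X-Rate-Limit-Count: 10:60,500:86400
    }
}
```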

Related

How to set 4000 users and a 1 hour duration using JMeter

In JMeter I have a scenario like this:
Load tested with 4000 users and a 1 hour duration.
759965 requests were made, out of which one request failed; on average, 18894.13 requests were made per second.
This was an earlier scenario, and I want to set up the same scenario again with the above information. Can someone guide me on how to set up the environment and also the results? I designed my script using correlation with the help of a Regular Expression Extractor.
For the normal Thread Group, the configuration would be something like 4000 threads (users) with a duration of 3600 seconds (1 hour).
It would also be a good idea to use some ramp-up period so the load would increase gradually and you could correlate increasing load with other metrics like response time or transactions per second.
You might also want to use one of the Custom Thread Groups, which can be installed as JMeter Plugins; they provide an easy visual way to define the number of threads, test duration, ramp-up, ramp-down, time to hold the load, eventual spikes, etc.
Once you have defined your desired workload, you should run your test in command-line non-GUI mode. With regard to the test results, the easiest option is to generate the HTML Reporting Dashboard.
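
For reference, a typical non-GUI run that also generates the dashboard might look like the following (file and folder names are placeholders):

```
# -n: non-GUI mode, -t: test plan, -l: results file,
# -e: generate the HTML report after the run, -o: dashboard output folder (must be new or empty)
jmeter -n -t test-plan.jmx -l results.jtl -e -o report-dashboard
```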

How much load is it?

I have tried, but I have a doubt whether the below-mentioned specification is equivalent to a 4000-user load or not:
number of threads - 100,
ramp-up period - 10 secs,
loop count - 40.
Which is equal to how much load?
You are loading 100 concurrent threads; the loops just add more execution time.
So it isn't equivalent to 4000 concurrent threads hitting your server.
I don't know what you mean by "4000 load"; your test will send 4000 requests for each Sampler in your Thread Group (100 threads × 40 loops = 4000 iterations) as fast as it can. The actual test duration will depend on your application's response time, but it will not be less than 10 seconds.
You might want to take a look at the Transactions per Second and Server Hits per Second charts to see how many requests your configuration delivers; both charts can be installed using the JMeter Plugins Manager.
You can also generate the HTML Reporting Dashboard, which will give a consolidated aggregate view of your test results.

What timer to use to control the API requests?

I am trying to load test an API and want to make sure I fire only 2 requests per second, because of the throttling limit set at the API Gateway level. If a third request is sent within the same second (this happens when the response time of an earlier request is < 1 sec), I get an HTTP 429 error saying 'too many requests'. Could someone suggest whether I can use a timer to achieve this?
The Constant Throughput Timer is the easiest of the built-in test elements; 2 requests per second is 120 requests per minute. However, it is precise enough only at the "minute" level, so you might need to play with the ramp-up period.
The Precise Throughput Timer is more "precise", but a little harder to use, as you need to provide the controlled throughput and the test duration.
If you don't mind using JMeter Plugins, there is the Throughput Shaping Timer, which provides the maximum flexibility and a visual way of defining the load.
For this particular case, I suggest the Arrivals Thread Group. This Thread Group lets you configure exactly the desired TPS (arrival rate), and the plugin will instantiate the necessary threads to generate the load. There is no need to guess how many threads/vusers you'll need.
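
To make the target pace concrete, here is a minimal plain-Java sketch of client-side pacing at 2 requests per second (one request every 500 ms). It is not a JMeter element, and the HTTP call is a placeholder; in a real test plan one of the timers above would handle this.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustration only: fire one (placeholder) request every 500 ms, i.e. 2 requests per second,
// so the API Gateway throttle of 2 req/s is never exceeded.
public class PacedClient {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        Runnable sendRequest = () ->
                // stand-in for the real HTTP call
                System.out.println("request sent at " + System.currentTimeMillis());
        // fixed rate: one request every 500 ms = 120 requests per minute
        scheduler.scheduleAtFixedRate(sendRequest, 0, 500, TimeUnit.MILLISECONDS);
        // runs until the process is stopped; a real client would shut the scheduler down
    }
}
```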

Visual Studio Load Test request completion and think time

I'm using a load test in Visual Studio to test our web API services, but to my surprise I can't seem to test what I want to. I have a single URL in my .webtest file and try to send the same request time and again to see what the average response time is.
Here are the details:
1. I use a constant load of 1 user.
2. The test duration is 1 hour.
3. The think time is 10 seconds (not the think time between iterations).
4. The average response time I get is 1.5 seconds.
5. So the average test time comes out to be 11.5 seconds.
6. Requests/sec are 0.088.
7. And I'm using a Sequential Test Order among 4 different types of tests.
These figures make me think that every time a virtual user sends a request, besides the specified think time it also waits for the request to complete before it sends a new one. Thus, technically, the total think time becomes
Total think time = think time specified + avg. response time
But I don't want the user to wait for an already-sent request to come back and only then send a new one after the specified think time. I need to configure the load test so that, if the think time is 10 seconds, the user sends the next request every 10 seconds, without waiting for the first one to come back, thinking for another 10 seconds, and then sending a new request (which is what makes the total think time 11.5 seconds in my case, as mentioned above). No matter which of the 4 different test types I choose, Visual Studio always forces the virtual user to wait for the request to complete, then add the specified think time, and then send a new one.
I know what Visual Studio load test is doing is more of a practical approach where the user sends the request wait till it comes back then think or interact with the website and then sends a new one.
Any help or suggestion would be appreciated towards what I'm trying to achieve.
In the properties of the scenario, set the "Test mix type" to "Test mix based on user pace" and set the "Tests per user per hour" as appropriate.
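As a worked figure (my own arithmetic, not from the question): if the goal is one request every 10 seconds regardless of response time, the pace would be 3600 s / 10 s = 360 tests per user per hour.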
The suggestion in the question that:
Total think time = think time specified + avg. response time
is erroneous. To my mind, adding the values does not provide a useful result. The two values on the right are as stated: think time simulates the time a user spends reading the page, deciding what to do next, and typing/clicking their response; response time is the "turn around" time between sending a request and getting the response. Adding them does not increase the think time in any sense; it just gives the total duration for handling the request in this specific test. Another test might make the same request with a different think time. Note that many web pages cause more than one request and response to be issued; JavaScript and other techniques allow web pages to do many clever things.
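For what it's worth, the figures in the question are consistent with this reading: each iteration takes roughly 10 s of think time plus 1.5 s of response time, about 11.5 s in total, and 1 / 11.5 s ≈ 0.087 requests per second, which matches the reported 0.088 requests/sec.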

Does JMeter show the correct average response time for the first page it hits for many virtual users?

I'm load testing a system with 500 virtual users. I've set the "Ramp-Up period (in seconds)" option to zero, so, as I understand it, JMeter will hit the system with all 500 virtual users at the same time. Please correct me if I'm wrong here.
Now, the summary report shows that the average response time for the first page is ~100 seconds, which is more than a minute and a half of wait time. But while JMeter was running, I manually went to the same page/URL using a browser and didn't have to wait that long. It was not even close; the page response was almost immediate for me.
My question is: is there any known issue for the average response time of the first page? Is it JMeter which is taking long to trigger that many users?
There is no issue in JMeter related to the first page's response time.
The Summary Report shows all response time details in milliseconds; for the "100 seconds" value, have you converted milliseconds to seconds?
Also, in order to make sure that all 500 users hit the server concurrently, use a Synchronizing Timer.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connections, it will start 500 threads as well.
Ramping up over a period of time is more realistic load-wise, but still not really indicative of server capability until the threads have all started and settled in.
Databases can also require a settling-in period, which can affect response times.
An alternative to ramping is introducing a random wait at the start of each thread before firing the first sample. You can then choose not to ramp over time, but still expect resources on the client to suddenly come under load, and change the settings if you hit limits. This will make the entire run much more representative of typical behaviour. However, you need to determine whether your use cases are typical.
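
As a rough illustration of that staggered start (plain Java, not a JMeter element; the 0-30 second window, the thread count and the placeholder "request" are my own assumptions):

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;

// Sketch only: each virtual user sleeps for a random 0-30 s before firing its first sample,
// so 500 threads do not all hit the server in the same instant.
public class StaggeredStart {
    public static void main(String[] args) {
        Random random = new Random();
        for (int i = 0; i < 500; i++) {                   // 500 virtual users, as in the question
            long initialDelayMs = random.nextInt(30_000); // random stagger, 0-30 s (arbitrary window)
            new Thread(() -> {
                try {
                    TimeUnit.MILLISECONDS.sleep(initialDelayMs);
                    // the first sample would be fired here
                    System.out.println(Thread.currentThread().getName() + " fired its first request");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```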
Although the heap size was increased, I noticed the reported time was still longer than the actual response time. Later I realised it was the probe effect (the extra time a tool adds due to test execution).
