I ran some load tests using JMeter and found an unexpected increase in response time at the end of each test plan.
Just before the end of the test plan (duration: 20 minutes), the response time suddenly increased.
It happened again when I ran the same test plan with a different duration (30 minutes). Latency is almost identical to the response time, so the network does not appear to be the problem.
I'm very curious why the response time increased even though the number of threads was decreasing. Can you guess what the reason is?
Thank you in advance.
From your screenshot it is clearly visible that in both cases (20 min and 30 min) the response time increased once the test duration was reached. That is because the threads have insufficient ramp-down time.
If your JMeter test is stopped forcefully, all active threads are closed immediately, so the requests those threads still had in flight report higher response times.
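If the problem is the threads being cut off, one thing you can try (a minimal sketch, assuming the stock property name in jmeter.properties; whether it helps depends on how your test is being stopped) is giving the engine more time to let in-flight samplers finish before it forces the threads down:

    # jmeter.properties (or user.properties)
    # number of milliseconds the engine waits for threads to stop before forcing them
    jmeterengine.threadstop.wait=10000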
I am only guessing here; this happened to me once as well. Have you checked that the requests are being sent properly, using a View Results Tree listener? Please check the request status at the point where the graph starts increasing.
Or
It may be due to other traffic, or the API being used by other users at the same time. This may be the cause.
I have an HTTP Request in my Thread Group that takes around 20 to 30 seconds to complete with a single user, so when I add 50 users I sometimes get a 500/Internal Server Error or a 503/Server has been shutdown.
I want to add a Constant Timer of 40 seconds (specified in milliseconds) under the HTTP Request so that the application has some time to process it. Am I going the right way?
If I add the Constant Timer, will it be counted in the Summary Report as well?
I need JMeter to give the API (my application) time to complete the process (at least 30 seconds), and I want to know whether or not this affects my Summary Report.
Pre-Processors, Post-Processors and Timers are not counted in the Elapsed time, so the response time will not be impacted.
However, Throughput (the number of requests over the test duration) will be lower.
See JMeter Glossary for more information on the above metrics.
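As a rough illustration of the throughput impact (a minimal sketch; the 25 s sample time is an assumed figure in your 20 to 30 s range, not a measurement):

    // ThroughputSketch.java - per-thread request rate with and without a 40 s think time
    public class ThroughputSketch {
        public static void main(String[] args) {
            double sampleSeconds = 25.0;  // assumed average sample time
            double thinkSeconds = 40.0;   // proposed Constant Timer delay
            System.out.printf("without timer: ~%.0f requests/hour per thread%n", 3600 / sampleSeconds);
            System.out.printf("with timer:    ~%.0f requests/hour per thread%n", 3600 / (sampleSeconds + thinkSeconds));
        }
    }

The response time column stays the same, but each thread completes far fewer requests over the test duration.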
With regards to "right way" - real users don't "hammer" application non-stop, they need some time to "think" between operations so if you're simulating a real user you should have non-zero think time, however 40 seconds it kind of too much for me. Take a look at How to make JMeter behave more like a real browser article for more tips on properly configuring your JMeter test.
I am using JMeter for performance testing. I believe elapsed time is the response time I should be considering (i.e. 85 milliseconds). When I hit the same request from Postman it takes much less time (i.e. 35 milliseconds), so I want to know whether JMeter is giving correct results or not.
Elapsed time consists of:
Connect time (it might include SSL handshake)
Latency
Application response time
Given you're running the same request (URL, body, headers) from the same machine, you should see similar results.
Try running the request more times, e.g. 10 or 100, using both newman and JMeter (set the number of iterations in the Thread Group to 100). If you are still seeing differences, consider comparing the requests using an external sniffer tool like Wireshark; it will give you more insight into what's going on under the hood.
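If you want a third data point outside both tools, a minimal standalone check could look like this (Java 11+ HttpClient; the URL and iteration count are placeholders you would replace with your own request):

    // TimingCheck.java - fire the same GET 100 times and print the average elapsed time
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TimingCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://your-host/your-endpoint")).GET().build();
            int iterations = 100;
            long totalMs = 0;
            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                client.send(request, HttpResponse.BodyHandlers.discarding());
                totalMs += (System.nanoTime() - start) / 1_000_000;
            }
            System.out.println("average elapsed: " + (totalMs / iterations) + " ms over " + iterations + " calls");
        }
    }

If this average lands close to one of the two tools, that tells you which one is adding the overhead.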
I have a test plan for a REST API with one Thread Group containing 2 samplers.
While running a load test with:
Number of threads (users): 80
Ramp-up period: 1
I get "Response code: 504, Response message: GATEWAY_TIMEOUT" in JMeter.
I observed that when the Max value in the Aggregate Graph reaches 60000 ms, all responses get timed out.
What needs to be done to prevent the timeout issue?
The load test works fine when I use 50 users or fewer.
I think you are getting timeouts because at a load of 80+ users the response time shoots up, but your application or REST APIs have a shorter timeout configured. Because of the heavy response times you are exceeding the timeout duration and getting those errors.
The simplest way to resolve this would be to increase the timeout values, if possible.
Otherwise you need to improve the response time of those REST APIs so that you won't get timeouts.
While doing this, monitor system utilization to make sure the changes are not causing problems anywhere else.
From what you are saying it seems your application's limit is a load of roughly 60 users with the given configuration.
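To put a rough number on that limit you can use Little's Law (concurrent users ≈ throughput × response time). This is only a sketch; the throughput figure below is an assumption you would replace with the value from your own Aggregate Report:

    // CapacityEstimate.java - rough Little's Law estimate of sustainable concurrency
    public class CapacityEstimate {
        public static void main(String[] args) {
            double throughputPerSec = 1.0;  // assumed: requests/sec the API sustains
            double timeoutSec = 60.0;       // gateway timeout observed in the test
            // beyond this many concurrent users, responses queue past the timeout
            System.out.printf("max users before hitting the %d s timeout: ~%.0f%n",
                    (long) timeoutSec, throughputPerSec * timeoutSec);
        }
    }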
Please check your ELB settings or application server settings (GlassFish/Apache). The ELB has a default idle timeout of about 60 seconds; after that the ELB will expire your request.
However, you may still see the effect of those requests in the DB, since the backend can finish processing them even though they took too long to respond to the client.
I'm load testing a system with 500 virtual users. I've set the "Ramp-Up period (in seconds)" option to zero. So, as I understand it, JMeter will hit the system with all 500 virtual users at the same time. Please correct me if I'm wrong.
Now, the Summary Report shows the average response time for the first page is ~100 seconds, which is more than a minute and a half of wait time. But while JMeter was running, I manually went to the same page/URL in a browser and didn't have to wait anywhere near that long; the page response was almost immediate for me.
My question is: is there any known issue with the average response time of the first page? Is it JMeter that is taking a long time to start that many users?
Thanks in advance.
--Ishtiaque
There is no issue in JMeter related to the first page's response time.
The Summary Report shows all response time details in milliseconds. For the ~100 seconds value, have you converted from milliseconds to seconds (100,000 ms = 100 s)?
Also, in order to make sure that all 500 users hit concurrently, use a Synchronizing Timer.
Hope this helps.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connections available, it will start 500 threads as well.
Ramping up over a period of time is more realistic load-wise, but still not really indicative of server capability until all the threads have started and settled in.
Databases can also require a settling-in period, which can affect response times.
An alternative to ramping is introducing a random wait at the start of each thread before firing the first sample. You can then choose not to ramp over time, but you should still expect client resources to come under load suddenly, and change the settings if you hit limits. This will make the entire run much more representative of typical behaviour. However, you need to determine whether your use cases are typical.
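A minimal sketch of that random startup delay, written as a JSR223 PreProcessor under the first sampler (Groovy engine accepting Java-style syntax; the 30-second upper bound is an arbitrary assumption):

    // delay only the very first iteration of each thread by a random 0-30 s
    // vars, ctx and log are the variables JMeter binds into JSR223 elements
    if (vars.getIteration() == 1) {
        long delayMs = (long) (Math.random() * 30000);
        log.info("thread " + ctx.getThreadNum() + " sleeping " + delayMs + " ms before its first sample");
        Thread.sleep(delayMs);
    }

Since Pre-Processor time is not counted in the sample's elapsed time, the delay will not inflate the reported response times.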
Although the heap size was increased, I noticed the reported time was still longer than the actual response time. Later I realised it was the probe effect (the extra time the tool itself adds during test execution).
We're running a load test in JMeter right now, watching the Aggregate Report page. As we watch, Samples are increasing by nearly 500/second; the number is going up very fast. However, Throughput on the same page stays pegged at 18/second and our error rate is not increasing.
How can JMeter be sending so many samples if our server is only handling 18/second and the number of errors is not increasing (we only have 20 errors out of millions of samples)?
Do requests equate to samples (they seem to)? Are we missing something?
If you add a "View Results Tree" Listener you can see EACH request and response - and you should check if the responses are what you actually want.
And in the "View Results in Table" Listener compare the Bytes for each response. Does it match the size in all cases?
In cases of errors or incorrect responses - these will be different.
Requests DO equal samples.
Throughput is the number of requests per unit of time (seconds, minutes or hours) sent to your server during the test. In the Aggregate Report the unit is chosen automatically (per second, minute or hour) so that the displayed rate is at least 1.0; in your case it is showing requests per second.
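As a quick sanity check you can relate the two columns yourself; the figures below are assumptions, not your actual numbers:

    // ThroughputCheck.java - throughput is simply samples divided by elapsed test time
    public class ThroughputCheck {
        public static void main(String[] args) {
            long samples = 64_800;        // assumed total sample count so far
            long elapsedSeconds = 3_600;  // assumed test runtime so far
            System.out.printf("throughput: %.1f requests/sec%n", (double) samples / elapsedSeconds);
        }
    }

With those numbers the Throughput column reads 18.0/sec; if the Samples column were really growing by ~500 every second, the Throughput column would be reading close to 500/sec as well, so it is worth double-checking what you are reading off the report.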
Remember that almost all errors are user-defined. Following JoseK's recommendation, add a View Results Tree listener to see what your responses actually are. If they are green but fail your own criteria, add assertions to turn them into errors.