JMeter Sample Increment vs Throughput

We're running a load test in JMeter right now and watching the Aggregate Report page. While we watch, the Samples count increases by nearly 500/second; the number goes up very fast. However, Throughput on the same page stays pegged at 18/second, and our error rate is not increasing.
How can JMeter be sending so many samples if our server is only handling 18/second and the number of errors is not increasing (we only have 20 errors out of millions of samples)?
Do requests equate to samples (they seem to)? Are we missing something?

If you add a "View Results Tree" listener you can see EACH request and response, and you should check whether the responses are what you actually expect.
In the "View Results in Table" listener, compare the Bytes column for each response. Does it match the expected size in all cases?
For errors or incorrect responses, these values will differ.

Requests DO equal samples.
Throughput is the number of requests per unit of time (seconds, minutes, or hours) sent to your server during the test. In the Aggregate Report the time unit is chosen so that the displayed rate is at least 1.0, so a slow sampler may be reported per minute or per hour rather than per second.
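As a rough sketch of how that column is derived (all numbers below are made up for illustration):

    // Rough sketch of the Aggregate Report throughput calculation:
    // completed samples divided by the wall-clock span of the test so far.
    public class ThroughputSketch {
        public static void main(String[] args) {
            long firstSampleStartMs = 0;        // earliest sample start time
            long lastSampleEndMs = 3_600_000;   // latest sample end time (1 hour later)
            long samples = 64_800;              // samples completed in that span
            double seconds = (lastSampleEndMs - firstSampleStartMs) / 1000.0;
            System.out.printf("Throughput = %.1f/sec%n", samples / seconds); // 18.0/sec
        }
    }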
Remember that almost all errors are user-defined. Following JoseK's recommendation, add a View Results Tree listener to see what your responses actually are. If they are green but fail your own criteria, add assertions to turn them into errors.
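For example, a minimal JSR223 Assertion sketch (JMeter binds prev and AssertionResult for you; the "status":"ok" marker is an assumed success indicator, substitute your own criteria):

    // JSR223 Assertion: mark a "green" sample as failed when the body
    // lacks an application-level success marker.
    String body = prev.getResponseDataAsString();   // prev = current SampleResult
    if (!body.contains("\"status\":\"ok\"")) {      // assumed marker, for illustration
        AssertionResult.setFailure(true);
        AssertionResult.setFailureMessage("Expected status marker not found");
    }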

Related

How much load is it?

I have tried this but am unsure whether the specification below is equivalent to a load of 4000 or not:
number of threads: 100,
ramp-up period: 10 seconds,
loop count: 40.
How much load does this equal?
You are loading 100 concurrent threads; the loops just add more execution time.
So it isn't equivalent to 4000 concurrent threads hitting your server.
I don't know what you mean by "4000 load". Your test will send 4000 requests (100 threads × 40 loops) for each Sampler in your Thread Group, as fast as it can. The actual test duration will depend on your application's response time, but it will not be less than 10 seconds.
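A back-of-the-envelope sketch of that arithmetic (plain Java, values from the question):

    // Total requests generated by the Thread Group in the question:
    // threads x loops, per sampler. Concurrency is still capped at 100
    // threads; the loop count only extends the test's duration.
    public class LoadMath {
        public static void main(String[] args) {
            int threads = 100;    // number of threads (users)
            int loops = 40;       // loop count
            int samplers = 1;     // per sampler in the Thread Group
            System.out.println("Requests per sampler: " + threads * loops * samplers); // 4000
        }
    }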
You might want to take a look at the Transactions per Second and Server Hits per Second charts to see how many requests your configuration delivers; both charts can be installed using the JMeter Plugins Manager.
You can also generate an HTML Reporting Dashboard, which gives a consolidated aggregate view of your test results.

Why is the JMeter result different from the user experience result?

We are currently conducting performance tests on two web apps: one runs within a private network and the other is accessible to everyone. For both apps, a single page load of the landing or initial page takes only 2-3 seconds from a user's point of view, but when we use BlazeMeter and JMeter, the results are 15-20 seconds. Am I missing something? The 15-20 second result comes from the Load time/Sample Time in JMeter, and from the Elapsed column if exported to .csv. Please help, as I'm stuck.
We have tried running tests on multiple PCs within the office, as well as on a PC accessed remotely at another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user.
Where a delta like this exists, it almost certainly means that two different things are being timed. It would help to understand what you are timing on your front end: a standard metric such as W3C domComplete, Time to Interactive, First Contentful Paint, or some other point, and then compare where this comes into play in the drill-down on the Performance tab of Chrome. Odds are that a lot is occurring that is not visible to you but is being captured by JMeter.
You might also look for other threads here on how JMeter operates compared to a "real browser". There are differences that could affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also pay attention to third-party components where you do not have permission to test their servers.
I can think of 2 possible causes:
Clear your browser history, especially the browser cache. It may be that you're getting HTTP status 304 for all requests in the browser because responses are returned from the browser cache and no actual requests are made, while JMeter always uses a "clean" session.
Pay attention to the Connect Time and Latency metrics: it may be that the server's response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So, per the definitions above, Latency already includes Connect Time and the server's time to first byte; the Elapsed time is Latency plus the time to download the rest of the response.
In general, given:
the same machine
a clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page.
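If you want to see where the time goes per sample, one option is to compare the elapsed, Latency, and Connect columns in the results file. A minimal sketch, assuming a CSV results file named results.jtl with JMeter's default column layout (adjust the indices if your jmeter.save.saveservice configuration differs; the naive comma split also assumes labels contain no commas):

    // Print elapsed vs. Latency vs. Connect per sample from a JMeter CSV
    // results file, to show how much of the elapsed time is network overhead.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class JtlTimings {
        public static void main(String[] args) throws IOException {
            for (String line : Files.readAllLines(Paths.get("results.jtl"))) {
                String[] f = line.split(",");           // naive split, see note above
                if (f[0].equals("timeStamp")) continue; // skip the header row
                // default indices: 1 = elapsed, 2 = label, 14 = Latency, 16 = Connect
                System.out.printf("%-40s elapsed=%sms latency=%sms connect=%sms%n",
                        f[2], f[1], f[14], f[16]);
            }
        }
    }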

What is meant by Error % in JMeter (Summary Result)?

1) I cannot understand the Error % in the Summary Result listener. 2) For example, the first time I run a test plan its Error % is 90%, and when I run the same test plan again it shows 100% error. The Error % varies each time I run my test plan.
Error % denotes the percentage of requests that ended in error.
100% error means all the requests sent from JMeter have failed.
You should add a View Results Tree listener and then check the individual requests and responses. Such a high percentage of errors means that either your server is not available or all of your requests are invalid.
So use the View Results Tree listener to identify the actual issue.
Error % means how many requests failed or resulted in an error over the test duration. It is calculated from the #Samples field.
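A minimal sketch of that calculation (the counts are made up to echo the question):

    // Error % as shown in the Summary report: failed samples divided by
    // total samples, as a percentage.
    public class ErrorPercent {
        public static void main(String[] args) {
            long total = 1_000;  // #Samples
            long failed = 900;   // samples that failed or raised an error
            System.out.printf("Error %% = %.1f%%%n", 100.0 * failed / total); // 90.0%
        }
    }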
Regarding your second point: can you please give more details about your test plan, such as the number of threads, ramp-up period, and duration?
Such a high error percentage needs further analysis. Check whether you have missed correlating some requests, i.e. any dynamic values that are passed from one request to another, or check the resource utilization of your target system to see if it can handle the load you are generating.

JMeter - How to set max value in Aggregate Report

I have a test plan for a REST API with one Thread Group containing 2 samplers.
While running a load test with
number of threads (users): 80
ramp-up period: 1
I get "Response code: 504 Response message: GATEWAY_TIMEOUT" in JMeter.
I observed that when the Max value in the Aggregate graph reaches 60000 ms, all responses get timed out.
What needs to be done to prevent the timeout issue?
The load test works fine when I use 50 users or fewer.
I think you are getting timeouts because at a load of 80+ users the response time shoots up, but your application or REST APIs have a shorter timeout configured. Because of the heavy response times you are exceeding the timeout duration and getting those errors.
The simplest way to resolve this would be to increase the timeout values, if possible.
Otherwise you need to improve the response time of those REST APIs so that you won't hit the timeouts.
While doing this, monitor system utilization to be sure the changes are not hampering anything else.
From what you are saying, it seems your application's limit is around 60 concurrent users with the given configuration.
Please check your ELB settings or application server settings (GlassFish/Apache). ELB has a default idle timeout of 60 seconds; after that, the ELB will expire your request.
However, you may still see the responses for those requests recorded in the DB, since the backend may have finished processing them after the timeout.

Does JMeter show the correct average response time for the first page it hits for many virtual users?

I'm load testing a system with 500 virtual users. I've set the "Ramp-Up period (in seconds)" option to zero. So, as I understand it, JMeter will hit the system with all 500 virtual users at the same time. Please correct me if I'm wrong.
Now, the Summary Report shows the average response time for the first page is ~100 seconds! That is more than a minute and a half of wait time. But while JMeter was running, I manually went to the same page/URL in a browser and didn't have to wait nearly that long; the page response was almost immediate.
My question is: is there any known issue with the average response time of the first page? Is it JMeter that is taking so long to start that many users?
There is no known issue in JMeter related to first-page response time.
The Summary Report shows all response times in milliseconds; for the "100" seconds value, have you converted milliseconds to seconds?
Also, in order to make sure that all 500 users hit the server concurrently, use a Synchronizing Timer.
Hope this helps.
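Conceptually, the Synchronizing Timer is a barrier: each thread blocks until the configured group size is reached, then all of them fire at once. A rough plain-Java sketch of that behaviour (the group size and the placeholder "request" are illustrative):

    // Barrier behaviour analogous to JMeter's Synchronizing Timer: no thread
    // proceeds until all groupSize threads are waiting, then they release
    // together, producing a genuinely simultaneous burst.
    import java.util.concurrent.CyclicBarrier;

    public class SyncTimerSketch {
        public static void main(String[] args) {
            int groupSize = 500;  // "Number of Simulated Users to Group by"
            CyclicBarrier barrier = new CyclicBarrier(groupSize,
                    () -> System.out.println(groupSize + " users released at once"));
            for (int i = 0; i < groupSize; i++) {
                new Thread(() -> {
                    try {
                        barrier.await();   // wait for the whole group
                        // ... fire the request here ...
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }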
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server accepts the connections, it will effectively spin up 500 handlers as well.
Ramping up over a period of time is more realistic load-wise, but still not really indicative of server capability until the threads have all started and settled in.
Databases can also require a settling-in period, which can affect response times.
An alternative to ramping is introducing a random wait at the start of each thread before firing its first sample. You can then choose not to ramp over time, but you should still expect resources on the client to come under load suddenly, and change the settings if you hit limits. This makes the entire run much more representative of typical behaviour. However, you need to determine whether your use cases are typical.
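A minimal sketch of that idea as a JSR223 PreProcessor script body attached to the first sampler only (Java-style syntax, Groovy-compatible; the 30-second upper bound is an assumption, tune it to your scenario):

    // Random initial wait before a thread's first sample, as an alternative
    // to ramp-up: spreads the 500 thread starts over an assumed 0-30 s window.
    long maxInitialDelayMs = 30000L;
    long delayMs = (long) (Math.random() * maxInitialDelayMs);
    Thread.sleep(delayMs);  // Groovy does not force catching InterruptedException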
Although the heap size was increased, I noticed the reported time was still longer than the actual response time. Later I realised it was the probe effect (the extra time a tool adds due to test execution).
