How can I tell whether my server is doing fine?
I did some performance testing and the results looked like this:
No Of Sample: 750
Latest Sample: 3317
Average: 601
Deviation: 1152
Throughput: 2613.24
Median: 386
What do these parameters mean?
How can I give correct inputs and expect correct results?
I believe the JMeter Glossary can explain all the terms.
Just in case it goes away:
No of Sample - the total number of samples executed.
Latest Sample - self-explanatory: the response time of the most recent sample, in milliseconds.
Average - the arithmetic mean of all samplers' execution times: the sum of all sampler durations divided by the "No of Sample".
Throughput - the number of requests per time unit, e.g. hits per second.
Median and Deviation - statistical terms: the median is the middle value of the sorted response times, and the deviation (standard deviation) shows how far response times scatter around the average.
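To make these definitions concrete, here is a minimal Python sketch (not JMeter's actual source code; the input values are made up) showing how the figures are derived from raw sample durations:

```python
import statistics

durations_ms = [386, 412, 3317, 295, 601, 198]  # hypothetical sample durations
test_duration_s = 2.0                           # hypothetical wall-clock test time

average = sum(durations_ms) / len(durations_ms)   # arithmetic mean
median = statistics.median(durations_ms)          # middle value of the sorted list
deviation = statistics.pstdev(durations_ms)       # scatter around the average
throughput = len(durations_ms) / test_duration_s  # requests per second

print(f"No of Samples: {len(durations_ms)}")
print(f"Average: {average:.0f} ms, Median: {median:.0f} ms")
print(f"Deviation: {deviation:.0f} ms, Throughput: {throughput:.2f}/s")
```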
In regards to whether your server behavior is acceptable or not - it depends on what it is doing. A 601 ms average response time sounds very good for e.g. an online shop, but it may not be acceptable for financial operations, medical equipment or NASA spaceships. Besides, it is quite unclear how many concurrent users were involved in the load test: with 2-5 virtual users the application under test may behave well, while with 20-50 concurrent users the response time might climb to 60 seconds - and that would be bad.
See the Performance Metrics for Websites guide to learn about the most common measurements which need to be taken during performance testing.
Related
I wrote a JMeter test using 1000 threads and got a throughput of 330 requests per second. What was the average response time?
The same test as above, but with 100 threads, again gives a throughput of 330 requests per second. What was the average response time?
I think it has to do with Little's Law, but I have no idea how to solve it. Any help? Thanks.
We don't know; in order to determine the average response time we would need to know your test duration.
JMeter calculates the average response time as the arithmetic mean of all response times for individual samplers; it can be observed in e.g. the Aggregate Report listener.
Also, the fact that you get the same throughput for 100 and 1000 users looks utterly suspicious; for a well-behaved application you should get roughly 10x more throughput with 1000 users than with 100.
The reasons could be:
Your application cannot handle more than 330 requests per second, which indicates a performance bottleneck.
JMeter cannot produce more than 330 requests per second; make sure to follow JMeter Best Practices, or consider Distributed Testing if your load generator's hardware specification is too low to produce the required load.
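Since the question mentions Little's Law: if you are willing to assume a steady state, no think time, and all threads constantly busy, you can at least estimate the average response time as active threads divided by throughput. A back-of-the-envelope sketch using the numbers from the question:

```python
# Little's Law: L = X * W, where L is the average number of requests in
# the system, X is throughput and W is average response time.
# With no think time and all threads constantly busy, L ~= thread count.
def estimated_avg_response_time(threads: int, throughput_rps: float) -> float:
    return threads / throughput_rps  # seconds

print(estimated_avg_response_time(1000, 330.0))  # ~3.03 s with 1000 threads
print(estimated_avg_response_time(100, 330.0))   # ~0.30 s with 100 threads
```

Treat these as rough estimates only; real tests include think time, timers and ramp-up, which break the assumptions.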
Do we need to adjust the throughput reported by JMeter to find out the actual TPS of the system?
For example: I am getting 100 TPS for 250 concurrent users, and the test ran for 10 hours. Can I conclude that my software can handle 100 transactions per second, or do I need to make some adjustment to get the real value? I am asking because when the load starts, the system takes some time to reach an adequate level of performance (warm-up time). If so, how do I account for this? Please help me understand.
By default JMeter sends requests as fast as it can; the main factors affecting the TPS rate are:
the number of threads (virtual users) - this you can define in the Thread Group
your application's response time - this is not something you can control
Ideally, when you increase the number of threads, TPS should increase by the same factor: if you get 100 TPS with 250 users, you should get 200 TPS with 500 users. If this is not the case, those 500 users are beyond the saturation point and your application's bottleneck is somewhere between 250 and 500 users (if not earlier).
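A hedged sketch of that scaling check, with the baseline taken from the question and the follow-up runs made up:

```python
# Compare measured TPS against linear scaling from a baseline run; if the
# scaling efficiency drops well below 1.0, the saturation point lies
# somewhere between the baseline and the current user count.
baseline_users, baseline_tps = 250, 100.0  # numbers from the question
runs = [(500, 200.0), (1000, 230.0)]       # hypothetical follow-up runs

for users, tps in runs:
    expected = baseline_tps * users / baseline_users
    print(f"{users} users: measured {tps:.0f} TPS, expected {expected:.0f} TPS, "
          f"scaling efficiency {tps / expected:.2f}")
```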
With regards to "warm-up" time - the recommended approach is to apply the load gradually. This way you allow your application to prepare for the increasing load, warm up its caches, let the JIT compiler/optimizer do their work, etc. Moreover, this way you will be able to correlate the increasing load with increasing/decreasing throughput, response time, number of errors, etc., whereas releasing 250 users at once doesn't tell the full story.
The warm-up period varies from one system to another. It is the time during which configurations are cached, libraries are initialized (e.g. Builder.init()) and other one-time setup work happens that doesn't occur for subsequent calls. If you study the results of a load test, there is a slow period at the very beginning. For most systems it could be as short as 5 to 10 minutes, and such values may even be negligible if the test is as long as 10 hours. But then again, the average calculation can be affected if the results contain extreme values at the start (it always depends on the jump from the initial warm-up period to normal operation).
As for the JMeter configuration, this thread may explain it: How to exclude warmup time from JMeter summary?
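If you would rather filter the results after the fact, here is a rough sketch (the file name and warm-up length are assumptions) that recomputes the average after dropping the first 10 minutes of a CSV-format JTL file, using its standard timeStamp and elapsed columns:

```python
import csv

WARMUP_MS = 10 * 60 * 1000  # assumed 10-minute warm-up window

with open("results.jtl", newline="") as f:  # hypothetical file name
    rows = list(csv.DictReader(f))

# timeStamp is the sample start in epoch millis, elapsed its duration in ms
start = min(int(r["timeStamp"]) for r in rows)
steady = [int(r["elapsed"]) for r in rows
          if int(r["timeStamp"]) >= start + WARMUP_MS]

print(f"Average without warm-up: {sum(steady) / len(steady):.0f} ms "
      f"({len(steady)} of {len(rows)} samples kept)")
```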
When I carry out load testing in JMeter I have a list of samplers, and each sampler reports its own throughput. However, the Aggregate Graph and Summary Report have a TOTAL row at the bottom which seems to add up all the throughputs. What does this signify?
Can I just use the TOTAL throughput as the throughput of the entire test run? Why does the Summary Report add up all the sampler throughputs rather than showing the average throughput?
In the following picture I ran a load test with 2 users and a ramp-up time of 2 seconds.
As shown above, the TOTAL actually sums up the throughput rather than aggregating it.
However, when I carry out the test with 1 user and a ramp-up time of 1 second, it aggregates the throughput and displays the average throughput of the samplers.
In the figure below I carried out the test with 1 thread and a ramp-up time of 1 second.
Is this a bug?
No, it's not a bug!
Throughput is the number of requests per unit of time (seconds, minutes, hours) sent to your server during the test.
Throughput is the real load processed by your server (the application under test) during a run, but it does not tell you anything about the performance of your server during that same run.
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So in your case, for 2 users, the application handled 10.7 requests/second.
And for the single user, the application handled 22.9 requests/second.
The TOTAL row is not a sum: if you add up the individual throughputs in your screenshot you get around 14.4/sec, which is not what the TOTAL shows, so it cannot be the sum of all throughputs. It is a value calculated from the load you generated, i.e. the throughput your application actually sustained.
In your case, when a single user accesses the application it sustains 22.9 requests per second, but when two users access it, it sustains 10.7 requests per second.
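Here is a small sketch (with made-up timestamps) of why the TOTAL row is computed from the overall time window rather than by summing the per-sampler rows:

```python
# Each label's throughput uses that label's own first-to-last time window,
# while the TOTAL row divides ALL requests by the overall test window,
# so the TOTAL is generally NOT the sum of the individual rows.
samples = {  # label -> list of (start_ms, end_ms); hypothetical data
    "Login":  [(0, 200), (100, 350), (4000, 4300)],
    "Search": [(400, 900), (1200, 1700), (4400, 5000)],
}

def throughput(times):
    start = min(s for s, _ in times)
    end = max(e for _, e in times)
    return len(times) / ((end - start) / 1000.0)  # requests per second

for label, times in samples.items():
    print(f"{label}: {throughput(times):.2f}/s")  # 0.70/s and 0.65/s

all_times = [t for times in samples.values() for t in times]
print(f"TOTAL: {throughput(all_times):.2f}/s")    # 1.20/s, not 0.70 + 0.65
```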
Please look here for more info about JMeter throughput:
Jmeter aggregate report total throughput - how is calculated
In performance testing, the average is something we all avoid relying on.
Going back to the actual question: suppose you have 5 requests in one workflow and you run this test for 50 iterations, making 250 requests during the load test.
Now you want to analyze individual request performance as well as overall system performance. When you want to drill down and look at individual requests in order to find bottlenecks, look at the throughput and response time of each request.
If you want to find the overall load your system can handle, look at the total throughput.
I would like to understand the JMeter output in depth.
I am confused by the "throughput rate" concept. Does it mean that the server can only handle 48.1 requests/min at the given load, or does it mean something else? What is the difference between the total throughput rate and the throughput rate shown for individual requests? In my case 8 requests are sent, and each individual request shows a throughput rate of 6.1/min. Please explain.
I need to suggest changes to the server side and explain the JMeter report. Please advise how I can explain what needs to be done.
The total summary report is as below:
Total Users: 100
Ramp-up time: 1000 s
Total Samples: 800
Min: 325
Max: 20353
Std. Dev: 4524.91
Throughput: 48.1/min
Error: 0.38%
Thanks in advance.
As per the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So you are providing a "load" of about 0.8 requests per second (48.1 per minute divided by 60), which is quite low.
JMeter provides a test element which controls this throughput value, so you can choose whether to simulate "N" concurrent users or send "N" requests per second. Take a look at the How to use JMeter's Throughput Constant Timer guide for more details on goal-oriented load test scenario implementation with JMeter.
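For example, to hold a steady 10 requests/second with the Constant Throughput Timer (whose target field is expressed in samples per minute), here is a rough sizing sketch; both input numbers are assumptions for illustration:

```python
target_rps = 10.0          # desired steady load; hypothetical
avg_response_time_s = 0.6  # measured average response time; hypothetical

target_per_minute = target_rps * 60             # the timer's target is per minute
min_threads = target_rps * avg_response_time_s  # Little's Law: threads needed

print(f"Constant Throughput Timer target: {target_per_minute:.0f} samples/min")
print(f"Minimum threads required: {min_threads:.0f}")
```

Keep in mind the timer can only slow threads down to the target rate; you still need enough threads (and a responsive enough server) to reach it.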
I use JMeter to test my web application. I have an aggregate graph with some score values, but I actually don't know what they mean...
The aggregate graph shows, for example:
average
median
min
max
I don't know what those values refer to.
What does the 90% line refer to?
I also don't know the unit of throughput per second (bytes?).
Does anybody know?
The JMeter documentation shows only general information about reports and listeners.
This link contains a helpful explanation of JMeter usage, results, tips, considerations and deployment.
Good luck!
Throughput - the number of requests per second. So if two users open your website at the same time, the throughput will be 2/s - two requests in one second.
How it can be useful: check your website analytics for the number of hosts and hits per day; throughput is just those hits spread over time. If analytics shows 200 000 hits per day, this means: 200 000 / 86 400 (seconds in one day) = 2.31 hits/s.
Average - the average response time. I think you know what response time is - the time between sending a request and getting the response from the server. To get the average response time, sum all samplers' response times and divide by the number of samplers. Here a sampler means a user, request or hit - the meaning is the same.
Min - the minimal response time among all samplers; in other words, the fastest response.
Max - the opposite of Min, the slowest response.
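A one-line version of the conversion above, plus the Average/Min/Max definitions applied to a handful of made-up response times:

```python
hits_per_day = 200_000
avg_throughput = hits_per_day / 86_400  # seconds in a day -> ~2.31 hits/s

response_times_ms = [120, 86, 430, 95, 210]  # hypothetical sampler results
print(f"Throughput: {avg_throughput:.2f} hits/s")
print(f"Average: {sum(response_times_ms) / len(response_times_ms):.0f} ms, "
      f"Min: {min(response_times_ms)} ms, Max: {max(response_times_ms)} ms")
```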
Throughput in JMeter is generally measured in requests/second.
As for knowing which requests fall within the 90% line, there isn't really a way to do it with this listener: it presents aggregate information, so it only reports on all the samples together, not on specific results. (The 90% line is the 90th percentile: 90% of the samples completed within that time.)
For some different methods and ideas on getting useful information out of the responses, take a look at this JMeter wiki page on log analysis.
If you don't already have them, the JMeter Plugins project has a lot of useful controllers and listeners that can make understanding the results easier as well.