I am testing how many requests my web server can respond to, using a Test project in VS 2010 with a load test that runs a single test method. I'm getting results, but I'm not sure what they mean. In the graph below, "Test Response Time", I'm not sure what scale these numbers are on. Does anyone have the legend available?
The 'Test Response Time' is the number of milliseconds it took to run one test. You can find more detailed info (including units) in the view just below the graphs.
I'm looking for a way to display a pie chart/table after running 100 tests, but all the available built-in reports seem to accumulate data on time spent per sampler and controller, and on performance metrics.
Although the tests mostly check performance and some of those metrics are useful, we also need statistics on the actual response data.
Each HTTP request queries the service for item availability per product.
After the tests finish, we would also like a pie chart to appear with 3 sections:
Available
Low on stock
Unavailable
Now, I found the "Save Responses to a file" listener, but it generates separate files, which isn't very good. Also, with "View Results Tree" we can specify a filename where responses will be dumped.
We don't need the whole response object, and preferably we wouldn't write anything to disk at all.
And then, how do we actually visualize that data in JMeter after the tests complete? Would it be the Aggregate Graph?
So to recap: while the threads run, each value from the JSON response (parsed with JSONPath) should be remembered somewhere, and after the tests complete these values should be grouped and displayed as a pie chart.
I can think only of the Sample Variables property: add the following line to the user.properties file:
sample_variables=value1,value2,etc.
The next time you run the JMeter test in command-line non-GUI mode, the .jtl results file will contain as many extra columns as you have Sample Variables, and each cell will hold the respective value of the variable for that Sampler.
You will be able to use Excel or an equivalent to build your charts. Alternatively, you could use a Backend Listener and come up with a Grafana dashboard showing what you need.
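For instance, here is a hypothetical sketch (the variable name availability and the file names are assumptions, not part of the question): if your JSONPath post-processor stores each product's status in a variable called availability, the configuration and run would look like:

# user.properties
sample_variables=availability

# non-GUI run; results.jtl gains an extra "availability" column
jmeter -n -t availability-test.jmx -l results.jtl

A pivot table over that column then yields the Available / Low on stock / Unavailable counts needed for the pie chart.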
I have created a test plan in JMeter and ran it for 10 users. It ran successfully without any errors, as shown in the screenshot below of the listeners which I added to my test plan.
Looking at the listeners above, how can I tell whether the values of the Standard Deviation, Throughput, Median and Error% fields were calculated as expected? Are there any ideal/expected/benchmark values for these fields against which I could compare mine and confirm that my test plan works to standard? Moreover, how can I explain whether the performance of my test plan is fine/good/better/best?
Please advise, thanks.
It sounds like you don't really understand what you're doing, so I would recommend starting with e.g. the Performance Testing Guidance for Web Applications e-book.
With regards to the "values": we have no idea whether they match your expectations. There are no universal reference "values"; normally your project should have non-functional requirements or SLAs which define the maximum acceptable response time or the minimum number of hits per unit of time.
Check out the JMeter Glossary to learn what the "values" mean.
If you don't have NFRs or SLAs defined, you can still perform a stress test:
Make sure that your JMeter test behaves like a real browser; at the moment I fail to see:
HTTP Cookie Manager
HTTP Cache Manager
HTTP Header Manager
You should be running your test in command-line non-GUI mode (see the example command after this list)
Start with 1 virtual user and gradually increase the load until
you see the saturation point
you start seeing performance degradation
This way you will be able to state the maximum number of users your system can support without issues
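A minimal non-GUI invocation looks like this (the test plan and results file names are placeholders):

jmeter -n -t test-plan.jmx -l results.jtl

Here -n suppresses the GUI, -t points at the test plan, and -l writes the raw results to a .jtl file that the listeners can load afterwards for analysis.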
I have a WebAPI service that I put together to test throughput, hosted in Azure. I have it set up to call Task.Delay with a configurable number of milliseconds (e.g. webservice/api/endpoint?delay=500). When I run against the endpoint via Fiddler, everything works as expected: delays and all.
I created a Load Test using VS Enterprise and used some of my free cloud load testing minutes to slam it with 500 concurrent users over 2 minutes. After multiple runs of the load test, it says the average test time is roughly 1.64 seconds. I have turned off think times for the test.
When I run my request in Fiddler concurrently with the Load test, I am seeing sub-second responses, even when spamming the execute button. My load test is doing effectively the same thing and getting 1.64 second response times.
What am I missing?
Code running in my unit test (which is then called for my load test):
var client = new HttpClient { BaseAddress = new Uri(CloudServiceUrl) };
// Block on the async call (or await it) so the test measures the full round trip;
// without this the request is fired and forgotten before it completes.
var response = client.GetAsync($"{AuthAsyncTestUri}&bankSimTime={bankDelay}&databaseSimTime={databaseDelay}").Result;
AuthAsyncTestUri is the endpoint for my cloud-hosted service.
There are several delay(), sleep(), pause(), etc. methods available to a process. These methods cause the thread (or possibly the program or process for some of them) to pause execution. Calling them from code used in a load test is not recommended; see the bottom of page 187 of the Visual Studio Performance Testing Quick Reference Guide (Version 3.6).
Visual Studio load tests do not have one thread per virtual user. Each operating system thread runs many virtual users. On a four-core computer I have seen a load test using four threads for the virtual users.
Suppose a load test is running on a four-core computer and Visual Studio starts four threads to execute the test cases. Suppose one virtual user calls sleep() or similar. That will suspend that thread, leaving three threads available to execute other virtual user activity. Suppose that four virtual users call sleep() or similar at approximately the same time. That will stop all four threads and no virtual users will be able to execute.
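To make the effect concrete, here is a minimal standalone sketch, not the load test engine itself but an analogy: four worker threads shared by twenty simulated virtual users, where each user's test blocks in Thread.Sleep.

using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;

class SleepStarvationDemo
{
    static void Main()
    {
        // 20 simulated virtual users, each of whose "test" blocks for 500 ms.
        var users = new BlockingCollection<int>();
        for (int i = 0; i < 20; i++) users.Add(i);
        users.CompleteAdding();

        var sw = Stopwatch.StartNew();
        // Four worker threads stand in for "one thread per core".
        var workers = new Thread[4];
        for (int w = 0; w < workers.Length; w++)
        {
            workers[w] = new Thread(() =>
            {
                foreach (var user in users.GetConsumingEnumerable())
                    Thread.Sleep(500); // the blocking call inside the test code
            });
            workers[w].Start();
        }
        foreach (var t in workers) t.Join();

        // 20 users x 500 ms on only 4 threads takes ~2.5 s, so the average
        // per-user time is far above the 500 ms the work itself needs.
        Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds} ms");
    }
}

The virtual users queue up behind the blocked threads, which is exactly the pattern that can turn a sub-second service delay into a multi-second average at high user counts.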
Responding to the following comment that was added to the question:
I did try running it with a 5 user load, and saw average test times of less than 500 ms, which match what I see in my Fiddler requests. I'm still trying to figure out why the time goes up dramatically for the 500 user test while staying the same for Fiddler requests run in the middle of the 500 user test.
I think that this comment highlights the problem. At a low user load, the Visual Studio load test and the Fiddler test give similar times. At higher loads something between the load test and the server is limiting throughput and causing the slowdown. It would be worth examining the network route between the computer running the tests and the system being tested. Are there any slow segments on that path? Are there any segments that might see the load test as a denial of service attack and hence might slow down the traffic?
Running a test for as little as 2 minutes does not really show how the test runs. The details in the question do not tell how many tests started, how many finished and how many were abandoned at the end of the two-minute run. It is possible that many test cases were abandoned and that the average time of those that completed was 1.6 seconds.
If you have the results of the problem run then look at the "details" section of the results. Expand the slider below the image to include the whole run. Tick the option (top left corner) to highlight failing tests. I would expect to see a lot of red at the two minute mark for failing tests. However, the two minute run may be too short compared to the sampling interval (in the run settings) to see much.
Running a first test at 500 users tells you very little. It tells you either that the system copes with that load or that it does not. You need to run the test at several different user loads. Then you start to learn where the boundary between working and not working lies. Hence I recommend using a stepped load.
I believe you need at least one more test run to understand what is happening. I suggest doing a run as follows. Set a one minute cool-down period. Set a stepped load: start at 5 users, since you know that works. Increment by 1 user every two seconds until 100 users. That will take 190 seconds. Run for about another minute at that 100 user load. Total of 4 minutes 10 seconds; call it 4 minutes. Adding in the one minute cool-down makes (5 minutes) x (100 VU) = 500 VUM, which is a small portion of the free minutes per month. After the run look at the graphs of average test times. If all is OK on that test then you could try another that ramps up more quickly to, say, 500 users.
We are benchmarking our server and therefore we are using multiple benchmark tools. We have already used ApacheBench, which gave us some great results that were also easy to plot in a graph.
Example plot AB results
Now we are using JMeter but are having a hard time getting a good plot. We would love to have almost (completely) the same plot as the one from AB, i.e. response time vs. requests. We have already tried every listener but haven't found a satisfactory result.
We are using JMeter 2.10.
You can use:
The built-in Response Time Graph listener
JMeter Plugins (http://jmeter-plugins.org), which has a lot of nice and very useful graphs that will meet your needs (response time, percentiles, distribution, response codes per second, ...)
Option 1: Isn't Response Time Graph what you're looking for?
Option 2: It's possible to build any graph from a JMeter .jtl CSV or XML results file using Excel or equivalent, the Google Charts API, JFreeChart, JavaScript, etc.; see the sketch after these options.
Option 3: There is a JMeter Listener which automatically builds pretty and professional-looking graphs for your performance test and has a results-comparison killer feature
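As an illustration of Option 2, here is a minimal sketch assuming a CSV .jtl with a header row and the default timeStamp,elapsed,... column order (check your jmeter.properties, since 2.10 may not write the header row by default); it turns the results into "request number, response time" pairs, the same shape as the AB plot:

using System;
using System.IO;
using System.Linq;

class JtlToPlotData
{
    static void Main()
    {
        // Column 0 of the CSV .jtl is the epoch timestamp in ms,
        // column 1 is the elapsed response time in ms.
        var rows = File.ReadLines("results.jtl")
            .Skip(1)                                    // skip the header row
            .Select(line => line.Split(','))
            .OrderBy(cols => long.Parse(cols[0]))       // sort by start timestamp
            .Select((cols, i) => $"{i + 1},{cols[1]}"); // request #, response time

        // response-times.csv can be plotted directly, e.g. with gnuplot or Excel.
        File.WriteAllLines("response-times.csv", rows);
    }
}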
Context
I am using JMeter with the JMeter Plugins for load testing. So far I have either modelled the traffic myself (ramp-up periods, bursts, etc.) or just done full load testing. Now, however, I need to reproduce the exact same traffic as in the access log. For example, if I had three requests at, say, 13:00:01, 13:00:03 and 13:00:06, I need the sampler to issue those requests with the same timing: the second one 2 seconds after the first, and the third one 3 seconds after the second.
I surfed the web for solutions, but the only hint I got was to write a custom log parser to extract the timestamps and time differences (a sketch of that step follows). However, that doesn't cover the actual timing at which the sampler will send the requests, since that is controlled by the ThreadGroup.
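For illustration, that parsing step might look something like the following hypothetical sketch; the common-log-format access.log and the delays.csv output are assumptions, not something JMeter provides:

using System;
using System.Globalization;
using System.IO;
using System.Linq;

class AccessLogDelays
{
    static void Main()
    {
        // Parse common-log-format lines such as:
        // 127.0.0.1 - - [10/Oct/2016:13:00:01 +0000] "GET /shop/item/42 HTTP/1.1" 200 512
        var entries = File.ReadLines("access.log")
            .Select(line =>
            {
                int start = line.IndexOf('[') + 1;
                // Drop the timezone offset; it is assumed constant across the log.
                var stamp = line.Substring(start, line.IndexOf(']') - start).Split(' ')[0];
                var ts = DateTime.ParseExact(stamp, "dd/MMM/yyyy:HH:mm:ss",
                                             CultureInfo.InvariantCulture);
                var url = line.Split('"')[1].Split(' ')[1]; // "GET /path HTTP/1.1" -> /path
                return new { ts, url };
            })
            .OrderBy(e => e.ts)
            .ToList();

        // Emit one "delay since previous request in ms,url" row per request.
        using (var writer = new StreamWriter("delays.csv"))
        {
            for (int i = 0; i < entries.Count; i++)
            {
                long delayMs = i == 0 ? 0
                    : (long)(entries[i].ts - entries[i - 1].ts).TotalMilliseconds;
                writer.WriteLine($"{delayMs},{entries[i].url}");
            }
        }
    }
}

In JMeter, delays.csv could then feed a CSV Data Set Config, with a Constant Timer (or a JSR223 Timer) reading the ${delay} variable before each request; whether that reproduces the original timing faithfully enough under load is something you would need to verify.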
Summary
This leads me to my question: how can I reproduce the exact same traffic (with respect to the time differences between requests) as in the access log? I don't want to model a similar distribution (Gaussian etc.); I need an exact copy of the traffic.
If it's impossible in JMeter, please direct me to the right tool.