JMeter response time decreases after the first run of a test plan

I have a test plan set up which I am using on my web application. It is pretty simple: a user logs in and then navigates through some of the pages. Everything's working fine except that whenever I run the test plan for the first time (say, the first time after restarting the web application server), the average response time recorded is around 18000ms, but in subsequent runs it is always around 3000ms until I restart the server. I just want to know why this is happening. Pardon me, I am a newbie to this, and thanks in advance.

You can start by excluding parts of the test plan and trying again. If the response time does not decrease, then focus on your web application server's thread pool size. If it is very small and your JMeter test plan needs more threads than that, the application server has to create new threads on demand. If the response time is still high after you increase the minimum thread pool size on the app server, then you need to look at what your test plan actually does. By the way, I'd be happy to take a look at your test plan if you share it.
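If you want to see this effect in isolation, here is a minimal Java sketch (illustrative only, not your app server's actual code): a cached thread pool starts empty and grows on demand, so the first batch of work pays the thread-creation cost while later batches reuse the warm pool. The same pattern applies to class loading, JIT compilation, and connection pools, which is why a freshly restarted server is slow on the first run.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WarmUpDemo {
    public static void main(String[] args) throws Exception {
        // A cached pool starts with zero threads and creates them on demand,
        // much like an app server whose minimum thread pool size is too small.
        ExecutorService pool = Executors.newCachedThreadPool();

        // 200 tasks standing in for 200 concurrent requests.
        List<Callable<Integer>> batch = new ArrayList<>();
        for (int i = 0; i < 200; i++) {
            batch.add(() -> { Thread.sleep(50); return 0; });
        }

        for (int run = 1; run <= 3; run++) {
            long start = System.nanoTime();
            pool.invokeAll(batch); // run 1 pays for creating 200 threads
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("run " + run + ": " + ms + " ms");
        }
        pool.shutdown();
    }
}
```

Run 1 is measurably slower than runs 2 and 3 because the pool is cold, which mirrors the 18000ms-then-3000ms pattern you are seeing.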

Related

Visual Studio web test transaction total request time wrong

I have a very quick question. I have recorded web scenarios and created different transactions on top of those recorded web requests, as well as load tests on top of them.
When I run the load test and look at the details of each run, I see that the transaction total time does not match the sum of the individual request times under it.
Do you have any idea why?

What's the impact of response codes 400 and 503? Can we ignore these codes if my primary focus is to measure the loading time of a web application?

I am testing a web application's login page loading time with 300 thread users and a ramp-up period of 300 seconds. Most of my samples return response code 200, but a few of them return response codes 400 and 503.
My goal is just to check the performance of the web application when 300 users start using it.
I am new to JMeter and have basic knowledge of programming.
My questions:
1. Can I ignore these errors and focus just on the timings from the summary report?
2. If I really need to fix these errors, how do I fix them?
There are 2 different problems indicated by these errors:
HTTP Status 400 stands for Bad Request - it means that you're sending malformed requests which cannot be understood by the server. You should inspect request details and amend JMeter configuration as it is the problem in your script.
HTTP Status 503 stands for Service Unavailable - it indicates a problem on the server side, i.e. the server is not capable of handling the load you're generating. This is something you can already report as an application issue. You can try to identify the underlying cause by:
looking into your application log files
checking whether your application has enough headroom to operate in terms of CPU, RAM, Network, Disk, etc. It can be done using APM tool or JMeter PerfMon Plugin
re-running your test with profiler tool telemetry to deep dive into what's under the hood of the longest response times
So first of all you should ensure that your test is doing what it is supposed to be doing by running it with 1-2 users/loops and inspecting request/response details. At this stage you should not see any errors.
Going forward, you should increase the load gradually and correlate the increasing number of virtual users with the increase in response times and the number of errors.
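Correlating errors with load is easier when you post-process the results file. Below is a rough Java sketch that summarizes a JMeter .jtl results file by response code; it assumes the default CSV column order (timeStamp, elapsed, label, responseCode, ...) and that no field contains an embedded comma.

```java
import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.TreeMap;

public class JtlSummary {
    public static void main(String[] args) throws Exception {
        // Usage: java JtlSummary results.jtl
        // Naive CSV parsing: assumes no commas inside quoted fields.
        try (BufferedReader in = Files.newBufferedReader(Paths.get(args[0]))) {
            in.readLine(); // skip the header line
            Map<String, Integer> byCode = new TreeMap<>();
            long totalElapsed = 0, samples = 0;
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",");
                totalElapsed += Long.parseLong(f[1]); // elapsed ms (column 2)
                byCode.merge(f[3], 1, Integer::sum);  // responseCode (column 4)
                samples++;
            }
            System.out.println("samples: " + samples + ", avg elapsed: "
                    + (samples == 0 ? 0 : totalElapsed / samples) + " ms");
            byCode.forEach((code, n) -> System.out.println("HTTP " + code + ": " + n));
        }
    }
}
```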
Performance testing is different from load testing. What you are doing is load testing.
Performance testing is more about how long an action takes. I typically capture performance for a given action on a system not under load.
This gives a baseline that I can then refer to during load tests.
Hopefully, you’ve been given some performance figures to test. E.g. must be able to handle 300 requests in two minutes.
When moving on to load, I run a series of load tests with an increasing number of users/threads and capture the results from each test.
Armed with this, I can see how load degrades performance to the point where errors start to show up. This gives you an idea of how much typical load the system can handle.
I’d also look to run soak tests too. This is where I’d run JMeter for a long period with typical (not peak) load to make sure the system can handle sustained load.
In terms of the errors you’re seeing, no, I would not ignore them. Assuming your test is calling the same endpoint, it seems safe to say the code is fine; it’s the infrastructure struggling with the load you’re throwing at it.

Apache JMeter Concurrent Users Performance Testing

I want to test with 400 concurrent users, which should allow us to pass our load testing scenario. I am using the configuration settings below in Apache JMeter, which throw lots of errors at us.
Number of Threads (Users): 400
Ramp-Up Time: 1
Loop Count: Forever (for a duration of 1 minute)
We are not telepathic enough to tell what's wrong with your setup without seeing the configuration and the nature of the errors.
Several generic hints:
Run your test with 1-2 users/iterations to ensure it works fine and does what it is supposed to be doing. Check request and response details using the View Results Tree listener.
Make sure to run your test in command-line non-GUI mode and disable all the Listeners while your test is running.
It is better to increase and decrease the load gradually, so consider using a longer ramp-up time and increasing the test duration accordingly. I.e.:
During the first minute virtual users arrive
They then hold the load for another minute
During the last minute virtual users leave
This way you will be able to tell what the load was when the errors started occurring, what the maximum number of users your application can support is, where the saturation point is, whether it recovers when the load gets back to normal, etc. See the JMeter Ramp-Up - The Ultimate Guide article for more details.
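To make the three phases concrete, here is the arithmetic behind such a profile as a small Java sketch. This is not JMeter configuration (that lives in the Thread Group, or in the Ultimate Thread Group plugin if you use it); it just shows the expected number of active users over time so you know what to correlate your errors against.

```java
public class LoadProfile {
    // Expected active virtual users at second t for a ramp-up/hold/ramp-down
    // profile. Pure arithmetic, not a JMeter API.
    static int activeUsers(int t, int users, int up, int hold, int down) {
        if (t < up) return users * t / up;                      // users arriving
        if (t < up + hold) return users;                        // steady state
        int tDown = t - up - hold;
        return tDown < down ? users - users * tDown / down : 0; // users leaving
    }

    public static void main(String[] args) {
        // 400 users, one minute per phase, sampled every 15 seconds.
        for (int t = 0; t <= 180; t += 15) {
            System.out.printf("t=%3ds  active=%3d%n", t, activeUsers(t, 400, 60, 60, 60));
        }
    }
}
```

If errors start appearing at, say, t=45s, you know your application saturates at around 300 concurrent users.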
It might be the case that you found the bottleneck, i.e. your application fails to support 400 concurrent users; now you need to find the reason, which may be:
incorrect middleware configuration (wrong web server, database, load balancer settings)
your application simply lacks resources (CPU, RAM, Network, Swap, etc.). You can check this using JMeter PerfMon Plugin
if the infrastructure configuration is OK and there is enough headroom for the application to operate, most probably the reason is in the application code; you need to inspect what it is doing using APM or profiler tools and report the issue.

Performance testing with JMeter

I've recorded a test script for a web application (ExtJS). The test logs into the application (I used a login and password saved in a .txt file and a CSV Data Set Config element), makes some calculations with an external web service, and adds some elements to a database. It works fine but...
I'm not sure that all of my users do these things at the same time... Is there any way to configure it?
E.g. 100 users running the same scenario at the same time?
You can see the exact number of concurrent users via the Active Threads Over Time listener, available via JMeter Plugins.
If you're not happy with what you're seeing and expect more concurrent users you can consider 2 options:
Increase "Loop Count" on Thread Group level as when JMeter thread has finished executing all samplers and doesn't have any more to run and no loops to iterate - it's being shut down.
Add Synchronizing Timer. It pauses all the threads until the desired amount is reached and releases them at exactly the same moment so you will be able to test i.e. what happens when 100 users are trying to log in at the same time.
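Conceptually the Synchronizing Timer behaves like a barrier: threads block until the target count has arrived, then all are released together. A small Java analogy using CyclicBarrier (the sendLoginRequest call is hypothetical, just a stand-in for the sampler):

```java
import java.util.concurrent.CyclicBarrier;

public class LoginSpikeDemo {
    public static void main(String[] args) {
        int users = 100;
        // Threads block on the barrier until all 100 have arrived, then all
        // are released in the same instant -- the same idea as the timer.
        CyclicBarrier barrier = new CyclicBarrier(users,
                () -> System.out.println("releasing " + users + " users at once"));
        for (int i = 0; i < users; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    barrier.await();
                    // sendLoginRequest(id) would go here (hypothetical call)
                    System.out.println("user " + id + " fired its login request");
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```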

Why is VS2013 load testing only running 7 requests per second?

I am running some load tests, and for some reason VS is displaying only 7 req/sec; is this normal?
I have a stepped profile, starting at 10 and ending at 100, and I would have thought it would run the test for each user.
I.e. 10 users, 10 requests per second?
First, you're running the load test from your local machine (Controller = Local Run). You can run load tests from your developer machine, but you usually can't generate enough traffic to really see how the application responds. To simulate a lot of users, you need a Load Test Rig (on-premises, or using Windows Azure cloud testing). This can be a problem especially if you're testing a web site hosted on the same computer.
Check the CPU on your machine when running the load test (in the graph): if it's over 70%, the results are biased.
Second, how did you record the web tests? When using the web test recorder (in IE), it adds a think time to each request. Think times are used to simulate the human behavior that causes people to wait between interactions with a web site: a real user will never open 4 pages in the same second. You can check the Think Time in each request's properties. A high value may explain why you get only a few requests/sec while the CPU stays low.
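As a rough model (the numbers below are purely illustrative), each virtual user completes about one request every responseTime + thinkTime seconds, so throughput is roughly users / (responseTime + thinkTime). With a 1 second think time, 10 users easily lands around 7 req/s:

```java
public class ThroughputEstimate {
    public static void main(String[] args) {
        // Illustrative values only -- substitute what your own test records.
        double responseTime = 0.4; // seconds per request (assumed)
        double thinkTime = 1.0;    // seconds of recorded think time (assumed)
        for (int users : new int[] {10, 50, 100}) {
            // Each virtual user completes one request every
            // (responseTime + thinkTime) seconds.
            double reqPerSec = users / (responseTime + thinkTime);
            System.out.printf("%3d users -> ~%.1f req/s%n", users, reqPerSec);
        }
    }
}
```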
I have a stepped profile, starting at 10, ending at 100, and I would have thought it would run the test for each user.
In the Run settings, you have the option to configure the maximum number of iterations: this will run N scenarios, without any time limit. It's not activated by default.
You have to understand the notion of a virtual user: basically, a virtual user executes only one test case at a time, taken from the configured web tests, according to the test mix/percentages/scenarios... So 10 concurrent virtual users will execute at most 10 tests at the same time. The step goal is usually used to increase the load until the server reaches a point where performance diminishes significantly.
A complete description of all Load Patterns is available here.
In the end, if the number of requests/sec is still low, and it's not because of the load testing configuration, you may have a problem on your web site ;-)
It all depends on your test configuration, but if your test is set up to do ~1 req/s with one user, it should deliver ~10 req/s with 10 users.
I would say that it's probably because your server can't handle responding with more than 7 req/s. To find out where the bottleneck is, try running smaller steps and see where the breaking point is; you can do some monitoring on the servers at the same time to find out which resources are running out and on which server (CPU, memory, bandwidth, etc.). As mentioned in the comments, profiling is a very good approach to find out which parts of the code and which queries are the resource hogs.
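If you don't have a full monitoring stack handy, even a tiny sampler running on the server can tell you whether CPU or memory is the limit during a step. A minimal Java sketch, assuming a recent HotSpot JVM (Java 14+ for these methods):

```java
import java.lang.management.ManagementFactory;

public class MiniMonitor {
    public static void main(String[] args) throws InterruptedException {
        // The com.sun.management bean is HotSpot-specific; getCpuLoad() and
        // getFreeMemorySize() require Java 14 or newer.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        while (true) {
            System.out.printf("cpu=%3.0f%%  free mem=%d MB%n",
                    os.getCpuLoad() * 100,
                    os.getFreeMemorySize() / (1024 * 1024));
            Thread.sleep(5_000); // sample every 5 seconds during the test
        }
    }
}
```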
Hope this helps!
There are a variety of reasons throughput could be low.
Check your settings for "Think Time Between Test Iterations"; the step duration in the step load pattern is another setting you could modify.
Remember to keep the test moving, so look at the think times for each request and make sure you are not taking too long to perform each test end to end.
I have seen these settings extend the overall time to more than a few minutes, thus reducing the minute-by-minute transaction count.
Check the end-to-end run time of each web test when run independently of the load test, so you know how much time the test takes overall.
Hope this helps.
- Jim Nye
