I am running an endurance test with a thread group of 5 threads, with a Test Action sampler at the end that adds a 15-minute delay after each loop of test execution (please refer to the screenshot). The test runs 32 loops in total, which takes around 8 hours.
I start the test at the end of the working day using the JMeter GUI, and it should take around 8 hours to finish.
A strange thing happens: after JMeter runs 4 or 5 of the 15-minute loops, it does not run any more loops for a couple of hours, then it may run 1 or 2 loops around midnight, then it continues running the next morning.
I tried shortening the delay from 15 minutes to 1 or 5 minutes so that the test finishes quicker, and then all 32 loops complete with no problem.
So my question is: how can this happen? If JMeter halts because the computer goes dormant, then why does it still run for about an hour after the screen locks, and again around midnight? If a script setting is incorrect, then why does it run fine with a 1- or 5-minute delay?
Any suggestions on how to investigate this issue? I checked the script settings carefully and found nothing suspicious.
Thanks,
"A strange thing happens: after JMeter runs 4 or 5 of the 15-minute loops, it does not run any more loops for a couple of hours, then it may run 1 or 2 loops around midnight, then it continues running the next morning."
Are you sure that your computer does not sleep or hibernate after about 1 to 1.5 hours of inactivity? I have often had this issue with overnight jobs. The time it runs at midnight may well be when your antivirus software runs a scheduled scan (and therefore wakes your computer).
For this problem, use a utility that keeps the machine awake, or adjust the power settings, and you should be fine. Note that some companies also set up technical measures to prevent employees from leaving their computers on overnight, so it is worth checking your company policy as well. See the sketch below for adjusting power settings from the command line.
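As an illustration (assuming a Windows machine): the sleep and hibernate timeouts can be disabled for the duration of the test with powercfg, run from an elevated prompt (0 means never; the -ac settings apply while on AC power):

rem Disable sleep and hibernate on AC power (0 = never)
powercfg /change standby-timeout-ac 0
powercfg /change hibernate-timeout-ac 0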
I don't think this is due to the Test Action sampler.
By the way, according to the documentation, you're pausing only the current thread for 15 minutes while the others keep running, because in "Pause" mode the value of the "Current Thread/All Threads" combobox is ignored. If your goal is to wait 15 minutes before the next iteration without delivering any load, consider using a Synchronizing Timer, or switch to e.g. the Ultimate Thread Group.
My expectation is that this is due to a non-optimal JMeter configuration. Try the following steps:
Increase JVM Heap size allocated to JMeter
Run your test in command-line non-GUI mode
Disable all the listeners, especially View Results Tree
Consider upgrading to JMeter 3.0
See the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article for more information on the above steps and a few other JMeter tuning tips.
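A minimal sketch of the first two steps (file names and heap sizes are placeholders): the heap is set via the HEAP variable in jmeter.bat (or jmeter.sh), and non-GUI mode is started with the -n flag:

rem in jmeter.bat; in jmeter.sh use HEAP="-Xms1g -Xmx4g"
set HEAP=-Xms1g -Xmx4g
rem -n = non-GUI, -t = test plan, -l = results file
jmeter -n -t endurance_test.jmx -l results.jtl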
I have a situation where an API is called by 500 users/threads every 10 minutes.
I have created a JMeter script for this. It takes around 4 to 5 minutes to get responses for all 500 threads.
I have also created a batch file to execute this JMX file. The batch file is then called every 10 minutes using Task Scheduler in Windows.
I am not sure whether this is the best approach.
I have read about the Test Action sampler, timers, think time, etc.
Could someone please advise which is recommended in my case?
My requirement is to trigger the thread group every 10 minutes, irrespective of how long the previous run took.
According to Linus Torvalds
If it compiles, it is good; if it boots up, it is perfect
Given that your approach works fine for your use case, you should be good to go.
Personally, I would be interested in the test results as well (I'm not sure how you're handling them). A better idea might be to put your script under the orchestration of a Continuous Integration server like Jenkins. It provides flexible options for triggering jobs (including scheduling), and via the Performance Plugin you can get statistics and trends and conditionally mark your tests as passed or failed based on response time.
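For reference, a minimal sketch of the batch-file approach (paths and file names are hypothetical); Windows Task Scheduler triggers it every 10 minutes, and in Jenkins a cron-style build trigger such as H/10 * * * * gives the same cadence:

rem run_load_test.bat - triggered by Task Scheduler every 10 minutes
cd /d C:\apache-jmeter\bin
rem -n = non-GUI, -t = test plan, -l = results file (appended to if it already exists)
jmeter -n -t C:\tests\api_500_users.jmx -l C:\results\results.jtl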
I'd like to stop the test if it has been running for more than, for example, 2 hours, independently of any other configuration such as the number of threads, ramp-up time, or anything else.
The Thread Group has a Scheduler checkbox; when you tick it, you can define a Duration in seconds. In your case, for 2 hours, enter 7200 (60 * 60 * 2).
Another option is putting all requests under a Runtime Controller with a similar value.
You can also script a time check in a While Controller (similar to Dmitri T's answer):
${__groovy(${__time(,)} - ${TESTSTART.MS} < 7200000,)}
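The condition stays true while less than 7,200,000 ms (2 hours) have elapsed since the test started. As a rough Groovy equivalent (an untested sketch), a JSR223 PostProcessor can stop the whole test once the limit is reached:

// JSR223 PostProcessor (Groovy): stop the test once 2 hours have elapsed
long elapsed = System.currentTimeMillis() - Long.parseLong(vars.get("TESTSTART.MS"))
if (elapsed > 7200000L) {
    prev.setStopTest(true) // asks JMeter to stop all threads gracefully
}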
My test is configured as follows.
The thread group is configured to run for 9600 s, which is 160 minutes, i.e. 2 hours 40 minutes.
Within that, I have placed a Constant Timer set to 1800000 ms (the parameter is given in milliseconds, so this is 30 minutes). When I start the test, it stops within 4 minutes, and I can see in the log:
Stop test detected by thread:...
Is there any limit on the Constant Timer? I.e., what could be the reason my test stops after 4 minutes?
There is no such limitation on the Constant Timer. The error is likely caused by something else.
Share the error message and the other configuration details of the Thread Group. Also share a View Results Tree screenshot taken after running the test.
Note: the test will stop automatically once JMeter completes the script execution, irrespective of the time specified in the scheduler. So you should provide a large enough loop count (or mark it Forever) so that the test runs until the scheduled duration completes.
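As a rough worked example (assuming one sampler with the 30-minute Constant Timer per iteration): 9600 s / 1800 s ≈ 5.3 iterations fit into the scheduled duration, so the loop count needs to be at least 6 (or Forever) for a thread to stay busy for the whole scheduled time.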
I have a WebAPI service, hosted in Azure, that I put together to test throughput. It is set up to call Task.Delay with a configurable number of milliseconds (e.g. webservice/api/endpoint?delay=500). When I run against the endpoint via Fiddler, everything works as expected, including the delays.
I created a Load Test using VS Enterprise and used some of my free cloud load testing minutes to slam it with 500 concurrent users over 2 minutes. After multiple runs of the load test, it says the average test time is roughly 1.64 seconds. I have turned off think times for the test.
When I run my request in Fiddler concurrently with the Load test, I am seeing sub-second responses, even when spamming the execute button. My load test is doing effectively the same thing and getting 1.64 second response times.
What am I missing?
Code running in my unit test (which is then called for my load test):
// HttpClient pointed at the cloud-hosted service under test
var client = new HttpClient { BaseAddress = new Uri(CloudServiceUrl) };
// Await the request so the test measures the full round trip
var response = await client.GetAsync($"{AuthAsyncTestUri}&bankSimTime={bankDelay}&databaseSimTime={databaseDelay}");
AuthAsyncTestUri is the endpoint for my cloud-hosted service.
There are several delay(), sleep(), pause(), etc. methods available to a process. These methods cause the thread (or possibly, for some of them, the whole program or process) to pause execution. Calling them from code used in a load test is not recommended; see the bottom of page 187 of the Visual Studio Performance Testing Quick Reference Guide (Version 3.6).
Visual Studio load tests do not have one thread per virtual user. Each operating system thread runs many virtual users. On a four-core computer I have seen a load test using four threads for the virtual users.
Suppose a load test is running on a four-core computer and Visual Studio starts four threads to execute the test cases. Suppose one virtual user calls sleep() or similar. That will suspend that thread, leaving three threads available to execute other virtual user activity. Suppose that four virtual users call sleep() or similar at approximately the same time. That will stop all four threads and no virtual users will be able to execute.
Responding to the following comment that was added to the question:
I did try running it with a 5 user load, and saw average test times of less than 500 ms, which match what I see in my Fiddler requests. I'm still trying to figure out why the time goes up dramatically for the 500 user test while staying the same for Fiddler requests run in the middle of the 500 user test.
I think that this comment highlights the problem. At a low user load, the Visual Studio load test and the Fiddler test give similar times. At higher loads something between the load test and the server is limiting throughput and causing the slowdown. It would be worth examining the network route between the computer running the tests and the system being tested. Are there any slow segments on that path? Are there any segments that might see the load test as a denial of service attack and hence might slow down the traffic?
Running a test for as little as 2 minutes does not really show how the test behaves. The details in the question do not tell how many tests started, how many finished, and how many were abandoned at the end of the two-minute run. It is possible that many test cases were abandoned and that the average time of those that completed was 1.6 seconds.
If you have the results of the problem run, look at the "details" section of the results. Expand the slider below the image to include the whole run. Tick the option (top left corner) to highlight failing tests. I would expect to see a lot of red at the two-minute mark for failing tests. However, the two-minute run may be too short compared to the sampling interval (in the run settings) to see much.
Running a first test at 500 users tells you very little. It tells you either that the system copes with that load or that it does not. You need to run the test at several different user loads. Then you start to learn where the boundary between working and not working lies. Hence I recommend using a stepped load.
I believe you need at least one more test run to understand what is happening. I suggest doing a run as follows. Set a one-minute cool-down period. Set a stepped load: start at 5 users, as you know that works. Increment by 1 user every two seconds up to 100 users; that will take 190 seconds. Run for about another minute at that 100-user load, for a total of 4 minutes 10 seconds; call it 4 minutes. Adding in the one-minute cool-down makes (5 minutes) x (100 VU) = 500 VUM, which is a small portion of the free minutes per month. After the run, look at the graphs of average test times. If all is OK on that test, then you could try another that ramps up more quickly to, say, 500 users.
My goal is to fire a thread at a website every 15 minutes, performing some actions (e.g. intro, choose_language, search_term), and to assert with a Response Assertion that the site is available.
Is it possible to schedule JMeter like this from within JMeter itself? Is it possible using any of the timers? I am thinking of starting my script using the Windows Scheduler as a plan B.
I thought I would be able to set it with the Ramp-Up in the Thread Group. My thought was:
Number of Threads (users): 1
Ramp-Up Period (in seconds): 60
that this would mean that 1 user would be started every 60 seconds, but this seems not to be true.
To do what you want, you can use one user and add a Debug Sampler at the end of the flow (login, intro, search) with a timer attached that lasts 15 minutes; see the sketch below.
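As an untested sketch of that idea: a JSR223 Timer (Groovy) attached to the last sampler returns an extra pause in milliseconds, so it can delay each iteration until the next 15-minute boundary since test start:

// JSR223 Timer (Groovy): return the ms to wait until the next 15-minute boundary
long period = 15 * 60 * 1000L
long elapsed = System.currentTimeMillis() - Long.parseLong(vars.get("TESTSTART.MS"))
return period - (elapsed % period)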
You misunderstand ramp-up: with what you set, it does nothing, as there is only 1 user. With 15 users it would mean: start each of those 15 users within 60 seconds; once they have all started, the ramp-up value is not used anymore.