I am running a load test with JMeter using the Selenium WebDriver Sampler. The purpose is to measure the time taken by 500 users to complete a survey on a web dashboard. During execution I need to control the number of concurrent threads, keeping it above 10: a new thread should be spawned whenever the number of concurrent threads drops below 10.
How do I achieve this? Any pointers in this direction would be helpful.
Regards,
Seshan K.
According to this article you can set the number of threads in the Stepping Thread Group. It might be worth reading through.
You must be looking for the Concurrency Thread Group:
This thread group offers a simplified approach for configuring a thread schedule. It is intended to maintain the level of concurrency, which means starting additional threads during the runtime if there are not enough of them running in parallel.
So it is enough to install the Concurrency Thread Group using the JMeter Plugins Manager and use it instead of the normal JMeter Thread Group.
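To make the "maintain the level of concurrency" behaviour concrete, here is a minimal sketch in plain Python of the same idea: a supervisor loop that spawns a replacement whenever the number of live worker threads drops below the target. All the names (user_journey, run_with_maintained_concurrency) are made up for illustration; inside JMeter the Concurrency Thread Group does this for you.

```python
import threading
import time

TARGET_CONCURRENCY = 10   # minimum number of parallel "users" to maintain

def user_journey(results):
    """Stand-in for one simulated user completing the survey."""
    time.sleep(0.01)      # pretend the survey takes some time
    results.append(1)

def run_with_maintained_concurrency(total_users, results):
    """Keep TARGET_CONCURRENCY threads alive until total_users journeys finish."""
    started = 0
    active = []
    while started < total_users or active:
        # drop finished threads, then top the pool back up to the target
        active = [t for t in active if t.is_alive()]
        while len(active) < TARGET_CONCURRENCY and started < total_users:
            t = threading.Thread(target=user_journey, args=(results,))
            t.start()
            active.append(t)
            started += 1
        time.sleep(0.005)

results = []
run_with_maintained_concurrency(total_users=50, results=results)
print(len(results))  # all 50 journeys complete, never more than 10 in flight
```

The point is only the shape of the loop: concurrency is measured continuously and replenished, rather than all threads being started up front as a normal Thread Group does.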
I'm executing multiple scripts for 1 hour in non-GUI mode. I have a couple of questions.
Test scripts:
Script1
Script2
Script3
The number of samples differs between the scenarios. I need an equal distribution across all 3 scenarios. How do I achieve this?
I'm saving all 3 scripts in one .jmx file (keeping 3 thread groups and assigning 20 users per script). Is this the correct approach?
I have added assertions to each request to check whether the response is valid. In LoadRunner we would keep these outside of the transactions, but in JMeter I'm not sure. Do we need to keep them enabled during the execution window?
I'm really looking forward to your suggestions.
It is difficult to achieve an equal number of samples across all 3 scripts in JMeter, because the response times of the 3 requests will differ. JMeter also has no pacing control as such, the way LoadRunner does (though a Constant Throughput Timer can approximate one); you can only add think time, e.g. as a Constant Timer, according to your own judgment.
One source of the discrepancy could be the ramp-up time. You should use the same ramp-up period in all 3 thread groups; if the ramp-up times differ, a discrepancy in the number of samples is expected.
I would be able to help if you could provide some more information:
1. What is the response time for each request?
2. How much think time are you adding to each request?
3. What ramp-up time are you giving each thread group?
4. What startup delay are you using?
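As an illustration of the point above, here is a rough model (all numbers below are invented, not taken from the question) of why sample counts diverge when response times differ, and why fixing the total iteration time, i.e. pacing, equalizes them:

```python
# Each thread completes roughly duration / (response_time + think_time)
# iterations, so scenarios with slower responses produce fewer samples.

def expected_samples(threads, duration_s, response_time_s, think_time_s, ramp_up_s=0):
    # on average each thread loses half the ramp-up window
    effective = duration_s - ramp_up_s / 2.0
    per_thread = effective / (response_time_s + think_time_s)
    return int(threads * per_thread)

# 3 scenarios, 20 users each, 1-hour run, same think time, different response times
for name, rt in [("Script1", 0.5), ("Script2", 1.0), ("Script3", 2.0)]:
    print(name, expected_samples(20, 3600, rt, 3.0, ramp_up_s=60))

# If instead every iteration is forced to take the same total time
# (say 5 s, which is what pacing does), each scenario yields the same
# count regardless of its own response time:
print("paced:", expected_samples(20, 3600, 5.0, 0.0, ramp_up_s=60))
```

With the illustrative numbers above, the three scenarios land around 20400, 17850 and 14280 samples respectively, while the paced variant gives all three the same count.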
What is the best practice for the Max Wait (ms) value in the JDBC Connection Configuration?
I am executing 2 types of tests:
20 loops for each number of threads - to measure maximum throughput
a 30-minute run for each number of threads - to measure response time
With Max Wait = 10000 ms I can execute the JDBC request with 10, 20, 30, 40, 60 and 80 threads without an error. With Max Wait = 20000 ms I can go higher and execute with 100, 120 and 140 threads without an error. This seems like logical behaviour.
Now my questions:
Can I increase the Max Wait value as much as I like? Is this a correct way to get more test results?
Should I stop testing, and not increase the number of threads, if any errors occur in a report? I got e.g. 0.06% errors out of 10,000 samples. Is this a stopping point for my testing?
Thanks.
Everything depends on what your requirements are and how you have defined your performance baseline.
Can I increase Max Wait value as desired? Is it correct way how to get more test results?
If you are OK with higher response times as long as the functionality keeps working, then you can set Max Wait as high as you want. In practice, though, there will be a threshold on response times (for example, 2 seconds to perform a login transaction) which you define as part of your performance SLA or performance baseline. So even though increasing Max Wait makes your requests succeed, a request is ultimately considered failed if its response time crosses that threshold.
Note: higher response times for DB operations eventually result in higher response times for the web application (i.e. for end users).
Should I stop testing and do not increase number of Threads if any error occur in some Report?
The same applies to error rates. If the SLA agrees on some % error rate, then the test meets the SLA or performance baseline as long as the actual error rate is less than that. E.g. if the requirement says 0% error rate, then even 0.1% is considered a failure.
Is this stop for my testing?
You can stop the test at whatever point you want; it depends entirely on which metrics you want to capture. In my experience it is best to continue the test until there is no point in going on, e.g. the error rate has reached 99%. If you are seeing an error rate of 0.06%, I suggest continuing the test to find the breaking point of the system: a server crash, response times reaching unacceptable values, memory issues, etc.
Following are some good references:
https://www.nngroup.com/articles/response-times-3-important-limits/
http://calendar.perfplanet.com/2011/how-response-times-impact-business/
difference between baseline and benchmark in performance of an application
https://msdn.microsoft.com/en-us/library/ms190943.aspx
https://msdn.microsoft.com/en-us/library/bb924375.aspx
http://searchitchannel.techtarget.com/definition/service-level-agreement
This setting maps to the DBCP BasicDataSource maxWaitMillis parameter. According to the documentation:
The maximum number of milliseconds that the pool will wait (when there are no available connections) for a connection to be returned before throwing an exception, or -1 to wait indefinitely
It should match the relevant setting of your application's database configuration. If your goal is to determine the maximum performance, just put -1 there and the timeout will be disabled.
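To see why a larger Max Wait lets more threads pass, here is a back-of-envelope sketch. The pool size of 10 and the 1200 ms hold time below are invented numbers, chosen only to reproduce the shape of the observations in the question; real waits depend on your actual pool configuration and query times.

```python
# With P pooled connections and each JDBC request holding one for H ms,
# threads beyond the pool capacity queue up in "waves". A waiter fails
# once its wait for a free connection exceeds Max Wait (maxWaitMillis).

def worst_case_wait_ms(threads, pool_size, hold_ms):
    """Wait for the last thread in line when all threads arrive at once."""
    if threads <= pool_size:
        return 0
    waves_ahead = (threads - 1) // pool_size   # full waves served first
    return waves_ahead * hold_ms

# 80 threads against a hypothetical pool of 10, each holding a connection 1200 ms:
print(worst_case_wait_ms(80, 10, 1200))   # 8400 ms  -> fits under Max Wait = 10000
print(worst_case_wait_ms(140, 10, 1200))  # 15600 ms -> fails at 10000, fits under 20000
```

So raising Max Wait does not make the database faster; it only lets threads queue longer before erroring out, which is why the errors disappear but response times keep growing.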
In regards to "Is this a stopping point for my testing?" - it depends on multiple factors, such as what the application does, what you are trying to achieve, and what type of testing is being conducted. If you are testing a database which orchestrates nuclear plant operations, then a zero-error threshold is the only acceptable one; if it is a picture gallery of cats, this error level can be considered acceptable.
In the majority of cases performance testing is divided into several test executions, like:
Load Testing - putting the system under the anticipated load to see whether it is capable of handling the forecast number of users
Soak Testing - basically the same as load testing, but keeping the load up for a prolonged duration; this allows you to detect issues like memory leaks
Stress Testing - determining the boundaries of the application, saturation points, bottlenecks, etc. Start from zero load and gradually increase it until the application breaks, noting the maximum number of users, how other metrics (response time, throughput, error rate, etc.) correlate with the increasing number of users, whether the application recovers when the load returns to normal, and so on.
See the Why ‘Normal’ Load Testing Isn’t Enough article for the above testing types described in detail.
I am not able to find a specific answer on how to calculate the number of threads for running a load test in JMeter.
How do I identify the loop count?
Is there any formula?
What are the parameters to consider for the calculation?
Say you want to fire 100 requests at the server at 2 TPS (assuming each iteration takes about 1 second). Then your thread properties should be as below:
Number of Threads (users): 2
Ramp-Up Period: 100
Loop Count: 50
Based on the above example, please find the explanation below.
• Number of Threads (N): sets the number of threads JMeter will use to execute the test plan. Each thread executes the whole test plan, so this effectively sets the number of users that could be using the tested service simultaneously at any given time.
• Ramp-Up Period (R): specifies how much time (in seconds) it will take JMeter to start all the threads (simultaneous user connections). If the number of users is 5 and the ramp-up time is 10 seconds, then the threads will be started at 2-second intervals. Be careful when setting this value: if it is too high, the first thread may finish the whole test plan before the second thread even begins, which effectively reduces the number of concurrent users hitting the tested application at any given time. But the ramp-up period also needs to be high enough to avoid starting all of the threads at once, which could overload the target application.
• Loop Count (L): how many times each thread will loop through all the configured elements belonging to its thread group.
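The arithmetic behind the example can be sketched as a small helper. This is a rough model, not a JMeter feature: it assumes the steady-state request rate is roughly N divided by the time one iteration takes (response time plus think time), and that total requests = N × L.

```python
# Derive Thread Group settings from a request budget and a target rate.
# iteration_time_s is an assumption you must measure or estimate yourself.

def thread_group_settings(total_requests, target_tps, iteration_time_s):
    threads = round(target_tps * iteration_time_s)   # N: threads needed for the rate
    loops = total_requests // threads                # L: loops so that N * L = total
    return threads, loops

# 100 requests at 2 TPS, assuming each iteration takes about 1 second:
n, l = thread_group_settings(100, 2, 1.0)
print(n, l)  # 2 threads looping 50 times, matching the example above
```

If an iteration really takes, say, 2 seconds, the same 2 TPS target would need 4 threads looping 25 times, which is why the iteration time is the parameter to pin down first.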
Hope it helps!
We have 6 user scenarios that we are trying to test concurrently against our application. We are constantly tuning the percentage of threads going to each scenario and the total number of threads. To make these changes quickly, I've put all the scenarios under 1 thread group, and in that thread group I have 6 Throughput Controllers set up that total 100% with 'Per User' unchecked; each scenario's samplers (with think times) sit inside these Throughput Controllers.
As far as I can tell this accomplishes the goal, and I see the proper user distribution going through our system, but I'm not sure whether I should break these out into 6 different thread groups instead. If so, how should I control the percentage of threads going to each scenario?
Your solution is a good one. It will do what you expect.
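If you ever did split into 6 thread groups, the per-scenario percentages would have to become per-group thread counts, recalculated each time you change the total. A sketch of that conversion (the scenario names and percentages are made up), using largest-remainder rounding so the counts always sum to the total:

```python
# Convert percentage weights into integer thread counts per thread group.

def split_threads(total, percents):
    raw = {k: total * p / 100.0 for k, p in percents.items()}
    counts = {k: int(v) for k, v in raw.items()}          # floor each share
    leftover = total - sum(counts.values())
    # hand any leftover threads to the scenarios with the largest remainders
    for k in sorted(raw, key=lambda k: raw[k] - counts[k], reverse=True)[:leftover]:
        counts[k] += 1
    return counts

percents = {"browse": 40, "search": 25, "checkout": 15,
            "login": 10, "profile": 5, "admin": 5}
print(split_threads(60, percents))  # e.g. browse gets 24 of 60 threads
```

This is the bookkeeping the single-thread-group-plus-Throughput-Controllers layout spares you, which is a point in favour of keeping your current setup.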