User distribution in a Visual Studio load test

I created a load test project in Visual Studio. There are 5 scenarios, each implemented as a normal unit test.
Test mix model: Test mix percentage based on the number of tests started.
Scenario A: 10%
Scenario B: 65%
Scenario C: 9%
Scenario D: 8%
Scenario E: 8%
Load pattern: Step. Initial user count: 10. Step user count: 10. Step duration: 10sec. Maximum user count: 300.
Run Duration: 10 minutes.
I would like to know how the load is put on the scenarios. How are the users distributed between the scenarios over time?
If I set 100 users as the initial user count, do 10 virtual users (10% of 100) start replaying scenario A at the same time? What happens when they finish? I would be really grateful if someone could explain how the user distribution works.

Please use the correct terminology. Each "Scenario" in a load test has its own load pattern. This answer assumes that there are 5 test cases A to E.
The precise way load tests start test cases is not documented in detail, but the documentation makes the general behaviour quite clear. Additionally, the load test wizard used when initially creating a load test has good descriptions of the test mix models.
Load tests also make use of random numbers for think times and when choosing which test to run next. This tends to mean the final test results show counts of test cases executed that differ from the desired percentages.
My observations of load tests lead me to believe it works as follows. At various times the load test compares the number of tests currently executing against the number of virtual users that should be active. These times are when the load test's clock ticks and a step load pattern changes, and also whenever a test case finishes. If the comparison shows more virtual users than test cases being executed, then enough new tests are started to make the numbers equal. The test cases are chosen to match the desired test mix, but remember that there is some randomization in the choice.
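Since the real scheduler is not public, the following Groovy sketch is only a rough model of the behaviour described above: a weighted random pick approximating the "percentage based on tests started" mix. The percentages come from the question; activeVirtualUsers, runningTests and the printed "start test" action are made-up placeholders, not Visual Studio APIs.

    // Rough model only; the real Visual Studio scheduler is not public.
    def mix = [A: 10, B: 65, C: 9, D: 8, E: 8]    // percentages from the question
    def rnd = new Random()

    // Weighted random choice approximating the test mix percentages.
    def pickNextTest = {
        int roll = rnd.nextInt(100)               // 0..99
        int cumulative = 0
        def picked = mix.keySet().first()
        for (e in mix) {
            cumulative += e.value
            if (roll < cumulative) { picked = e.key; break }
        }
        picked
    }

    // Whenever fewer tests are running than the current virtual user count,
    // start enough new tests to make the numbers equal.
    int activeVirtualUsers = 100
    int runningTests = 92
    (activeVirtualUsers - runningTests).times {
        println "start test ${pickNextTest()}"    // placeholder action
    }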
Your step pattern is initially 10, stepping by 10 every 10 seconds to a maximum of 300. Reaching the maximum takes (300 - 10 users) / (10 users per step) = 29 steps, i.e. 29 * 10 seconds = 290 seconds, roughly 5 minutes. The run duration of 10 minutes therefore means about 5 minutes of ramp-up followed by about 5 minutes steady at the maximum user count.
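As a quick check of that arithmetic (a throwaway Groovy calculation, nothing Visual Studio specific):

    // Step-pattern ramp-up arithmetic for the question's numbers.
    int initial = 10, step = 10, stepDurationSec = 10, maxUsers = 300
    int steps = (maxUsers - initial).intdiv(step)   // 29 steps above the initial load
    int rampSeconds = steps * stepDurationSec       // 290 s, roughly 5 minutes
    int runSeconds = 10 * 60
    println "Ramp-up: ${rampSeconds} s; steady at ${maxUsers} users for ${runSeconds - rampSeconds} s"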
For the final paragraph of your question: given the same percentages but a constant user count of 100, you would expect the initial number of each test case to be close to the percentages, thus 10 of A, 65 of B, 9 of C, 8 of D and 8 of E. When any test case completes, Visual Studio will choose a new test case attempting to follow the test mix model but, as I said earlier, there is some randomization in the choice.

Related

Is there a JMeter plugin to compare test duration against a predefined benchmark?

I am a C# developer and my QA team implemented a JMeter-based test process, but I don't like their process. What I want is this:
Say I have 10 web API test cases and JMeter simulates 10 users.
When JMeter finishes each test, I want to compare the average duration against a predefined benchmark.
Test 1 is an easy one and I expect the average duration to be less than 1 second
Test 2 is a complicated one and I expect the average duration to be less than 5 seconds
Test x: expected average duration less than N seconds
If any test's average duration is 10% higher than the predefined number, the test result should be a fail.
My QA team insists it can't be done in JMeter; is that really the case? I am a C# developer and I could do the above easily using NBomber with a bit of coding.
1-3 - Duration Assertion: set the maximum allowed duration in milliseconds on each sampler.
4 - JSR223 Assertion along with any Sampler somewhere in a tearDown Thread Group: there is a TESTSTART.MS pre-defined property which holds the time of test start. Getting the current time, comparing it to the test start time and failing the request is not a big deal.
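A minimal sketch of point 4 in Groovy (the usual JSR223 language); the 5-minute benchmark and the 10% margin are made-up example values:

    // JSR223 Assertion script: fail if the total elapsed test time exceeds
    // the allowed budget. TESTSTART.MS holds the test start time in ms.
    long testStart = Long.parseLong(props.getProperty('TESTSTART.MS'))
    long elapsedMs = System.currentTimeMillis() - testStart

    long benchmarkMs = 5 * 60 * 1000               // example benchmark: 5 minutes
    long allowedMs = (long) (benchmarkMs * 1.10)   // fail when 10% over benchmark

    if (elapsedMs > allowedMs) {
        AssertionResult.setFailure(true)
        AssertionResult.setFailureMessage("Elapsed ${elapsedMs} ms exceeds allowed ${allowedMs} ms")
    }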
More information on the Assertions concept: How to Use JMeter Assertions in Three Easy Steps

Which test mix model option to select for REST API load testing in Visual Studio?

I have created the WebTest file and am now adding a new load test using the WebTest file. The WebTest file contains an API snapshot of my session that performs several REST API calls, including POST, GET, etc. The Web Performance test passes successfully.
I am creating the load test using the Load Test Wizard in Visual Studio, where I am asked to pick a test mix model as shown in the screenshot:
The following test mix model options are offered for your load test scenario:
Based on the total number of tests
Based on the number of virtual users
Based on user pace
Based on sequential order
I referred to the official documentation but I am still not able to understand which test mix model to use, what the differences are, and what performance impact each mix creates.
As you only have one web test, I would suggest using the test mix based on the total number of tests. But actually any mix except user pace should be suitable.
The four test mixes behave as described below. I recommend careful reading of the words shown in the screenshot in the question, and the related words for the other mixes. The diagram above those words is intended to suggest how the mixes work.
As there is only one web test, the "sequential order" mix is not useful. It is only used when you have more than one test (e.g., A, B and C) and they need to be run repeatedly in that order (i.e. A B C A B C A B C ... A B C) by each virtual user (VU).
The "user pace" mix is used when you want a web test to be run a fixed rate (of N times per hour) to apply a steady load to the system under test. Suppose test D takes about 40 seconds to execute and the pace is set to 30 tests per hour. Then we expect the system to start, for each VU, that test every 2 minutes (and hence get about 100 seconds idle between executions). Suppose now that there is also test E that takes about 15 seconds to execute and its pace is set to 90 per hour. Then each VU should additionally run test E every 40 seconds. So now each VU runs tests D and E at those rates. Test D wants 30 * 40 seconds per hour of each VUs time and test E wants 90 * 15 seconds, thus we have 30*40+90*15 = 2550 seconds used of each hour. So each VU has plenty of time to run both tests. Should the tests take much longer or require a faster rate then that might take over 3600 seconds per hour. That is impossible and I would expect to see an error or warning message about it as the test runs.
The other two test mixes are used when you want to run web tests as quickly as possible (within the number of VUs specified and allowing for think times). The distinction is how web tests are chosen to be run. When there is only one web test, the two mixes have the same effect. With more than one web test, one mix tries to make the number of web tests executed match the ratios (percentages) specified in the mix; the other mix tries to make the number of VUs running each test match the ratios.
Suppose we have tests: A at 40%, B at 30%, C at 20% and D at 10%. With a test mix based on the total number of tests we would expect the number of each test executed to approximate the stated percentages. Suppose instead we have 20 VUs and a test mix based on the number of virtual users; then we would expect 8 VUs to run test A, 6 VUs to run test B, 4 VUs to run test C and 2 VUs to run test D.
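A quick illustration of both interpretations for that mix (throwaway Groovy; the completed-test count of 1000 is made up):

    def mix = [A: 40, B: 30, C: 20, D: 10]   // percentages
    int totalTests = 1000                    // hypothetical completed test count
    int vus = 20

    mix.each { test, pct ->
        println "$test: ~${(totalTests * pct).intdiv(100)} executions, " +
                "or ${(vus * pct).intdiv(100)} dedicated VUs"
    }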

Difference between "Step ramp time" and "Step duration" in a Visual Studio load test

I'm currently trying to test a site with a Visual Studio load test. However, I'm not really familiar with it and I'm seeing really bad results, so I'm wondering if my setup is correct. Basically, I want to simulate 6,000 users coming to my site in a short period of time, so these are the properties I put in my test:
From what I understand, I start with 500 users and then I add 500 new users every 5 seconds until I reach 6,000, where it stabilizes. Is this a correct assumption? Are these numbers realistic?
Concerning my scenarios, I have 7 of them that request pages of my site.
The ramp time is the time that it takes to move from one step to the next.
During the ramp time, it distributes the extra load as it ramps up.
The step duration is how long it holds the constant load for the step, before starting to ramp up to the next one.
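A toy Groovy model of how the two settings interact, based on my reading of the behaviour above (not Visual Studio's exact implementation); the ramp time of 2 seconds is made up, the other numbers are the question's:

    // Each cycle: hold the current load for stepDurationSec, then spread the
    // next step's extra users evenly across rampTimeSec.
    int initial = 500, step = 500, maxUsers = 6000
    int stepDurationSec = 5, rampTimeSec = 2

    def usersAt = { int t ->
        int cycle = stepDurationSec + rampTimeSec
        int base = Math.min(initial + t.intdiv(cycle) * step, maxUsers)
        int inCycle = t % cycle
        if (base >= maxUsers || inCycle < stepDurationSec) return base
        // inside the ramp: add this step's users gradually
        Math.min(base + ((inCycle - stepDurationSec + 1) * step).intdiv(rampTimeSec), maxUsers)
    }

    (0..30).each { t -> println "t=${t}s -> ${usersAt(t)} users" }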

Max VUs in JMeter distributed testing

Hello.
What is the maximum number of virtual users that can be tested in a JMeter distributed test? Is it possible to reach one million virtual users?
Thank you.
It depends on many factors; technically the limit on the JMeter end is very high (I think it should be 2^31 - 1, i.e. 2,147,483,647 virtual users), but in practice it depends on:
Nature of your application: use cases, whether it is more about consuming or creating content, average request and response size, response time, etc.
Nature of your test: again, request and response size, the need to use pre/post processors and assertions
Hardware specifications of your load generators
Number of load generators
So I would recommend the following approach:
Start with a single JMeter instance
Make sure you have optimal JMeter configuration and amended your test according to JMeter best practices
Make sure you have monitoring of baseline OS health metrics on that machine
Start with 1 virtual user and gradually increase the number of running users until you start running out of hardware resources (CPU, RAM, network or disk IO will be close to maximum)
Note the number of active users at this stage (you can use e.g. the Active Threads Over Time listener) - this is how many users you can simulate for that particular test scenario. Note that the number might be different for another application or another test scenario.
Multiply that number by the number of load generators you have - if the result is > 1M, you are good to go.
If you aren't able to simulate that many users there is a workaround, but personally I don't really like it. The idea is that real users don't hammer an application non-stop; they need some time to "think" between actions. Normally you should be simulating these "think times" using JMeter Timers. But if you lack load generators you can consider the following:
Given 1 virtual user needs 15 seconds to think between operations and the response time of your application is 5 seconds, each user will be able to execute only 3 requests per minute. So 1M users will execute 3M requests per minute, which gives us 50,000 requests per second. That is still high, but more likely to be achievable.
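That back-of-the-envelope calculation in Groovy form:

    // One request every (think time + response time) seconds per user.
    int thinkSec = 15, responseSec = 5
    int requestsPerUserPerMin = 60.intdiv(thinkSec + responseSec)    // 3
    long users = 1_000_000
    long requestsPerSec = (users * requestsPerUserPerMin).intdiv(60) // 50 000
    println "${requestsPerSec} requests/second across the whole test"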

JMeter strategy to test 10,000 users visiting a link, and points to be noted in the report

On my local machine I need to test the performance of one particular link (with static data), let's say the homepage.
What I have tried:
1000 users
Ramp up time - 600s (10 mins)
Loop count - 10
This reaches the 10,000th user in the 10th minute.
But I want to see its performance when 10k users hit it together; how do I plan that? The way I tried it was 10k users in 2 seconds with a loop count of 10, but that slowed JMeter down. I chose a ramp-up time of 2 seconds because I am assuming a user would take at least 2 seconds to think and then click.
I am running it in non-GUI mode, without any listeners, and writing results to a .csv file.
Which components are a matter of concern, and how do I put them in front of the dev team to be fixed as bugs or improvements?
Local machine config: 8 GB RAM, 64-bit Windows 7 Pro, 2.2 GHz, 4 CPUs
I referred to this particular link: http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
The 10k users in the JMeter Thread Group will be the limiting factor here. Using a single JMeter instance you cannot expect to generate 10k users; the machine running JMeter will be the bottleneck. Try using a distributed load test.
Distributed Load Test Step by Step
Ramp-up time != think time; JMeter starts all threads during the defined ramp-up period. Given 10k users and a 2-second ramp-up time, JMeter will start with 1 user, roughly 5k threads will have started during the 1st second, and the remaining 5k will be kicked off during the 2nd second.
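In other words (a trivial Groovy check):

    // JMeter spreads thread start-up evenly over the ramp-up period.
    int threads = 10_000, rampUpSec = 2
    println "${threads.intdiv(rampUpSec)} threads started per second"   // 5000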
Take a look at the following test elements:
Constant Timer - to simulate think time
Synchronizing Timer - if you need all 10k threads to fire at exactly the same time.
Also, your machine specifications might be too low to handle 10k simultaneous virtual users; if your host gets overloaded you may have to consider Distributed Testing.
