How to run multiple cypress test cases in parallel in circle ci - cypress

I am trying to run 60 Cypress test cases in parallel in CircleCI, but all 60 test cases run on each of the 4 machines rather than being divided between the machines (say, 20 test cases per machine) to reduce the overall time. I am stuck and the deadline is tomorrow, so please help me out.
So far I have only managed to run the same test cases in parallel on 4 machines, but I want to split the test cases across the 4 machines so the overall time is reduced.
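One common approach (not confirmed by this thread) is Cypress Cloud's built-in load balancing: set CircleCI's `parallelism` and pass `--parallel` to `cypress run`, so each machine is handed a disjoint subset of spec files. A minimal config sketch, assuming the project already records to Cypress Cloud with `CYPRESS_RECORD_KEY` set in CircleCI; the Docker image tag and job name are placeholders:

```yaml
version: 2.1
jobs:
  test:
    docker:
      - image: cypress/browsers:latest   # placeholder image
    parallelism: 4                       # 4 machines share the spec files
    steps:
      - checkout
      - run: npm ci
      - run:
          # --record --parallel asks Cypress Cloud to give each machine a
          # different subset of specs; --ci-build-id groups the 4 containers
          # into one logical run
          command: npx cypress run --record --parallel --ci-build-id $CIRCLE_WORKFLOW_ID
workflows:
  build:
    jobs:
      - test
```

Without the `--record --parallel` pair, each container simply runs the full suite, which matches the behavior described in the question.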

Related

How many thread groups can be executed at a time during a performance test?

I executed 15 thread groups at a time during a performance test, but some of the thread groups did not execute. Is there a limit on the number of thread groups for test execution? Or how many active thread groups can be executed at a time?
Theoretically the limit on the number of Thread Groups in a Test Plan is as high as the 32-bit integer maximum value.
So you should not experience problems with 15 Thread Groups, provided you are following JMeter Best Practices, in particular:
Run your test in non-GUI mode
Disable all the listeners during the test run
Limit test elements to the absolute minimum, each pre/post processor or assertion has its cost so make sure that your test is as efficient as possible
JMeter's default configuration is suitable for test development and debugging; you will need to perform tuning if you plan to run a load test, e.g. at least increase the JVM heap size
If you still experience problems:
check jmeter.log file for any suspicious entries
consider running your test in Distributed Mode (remember to proportionally decrease the number of threads as JMeter engines are independent and if you have 100 threads defined in the Test Plan and 3 remote engines - you will deliver 100 * 3 = 300 virtual users)
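The non-GUI run recommended above is a single command-line invocation. A typical sketch — the plan and output paths are placeholders, and the heap sizes are only an example of the JVM tuning mentioned:

```shell
# -n: non-GUI mode, -t: test plan, -l: results file,
# -e -o: generate an HTML report into the given (empty) folder.
# JVM_ARGS is honored by the jmeter startup script.
JVM_ARGS="-Xms1g -Xmx4g" jmeter -n -t test-plan.jmx -l results.jtl -e -o report/
```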

Split and run a part of tests on the same browser to run faster

I have 100 features and need over an hour to run all of them. I want to divide them into 5 groups of 20 features each and run them in 5 Chrome windows, so all 100 features complete faster. The number of features will always change. How can I control that?
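Independent of the test runner, splitting a changing list of features into a fixed number of groups is straightforward. A hedged Python sketch (the feature names are made up; the runner would then launch one browser per group):

```python
def split_into_groups(items, group_count):
    """Distribute items round-robin into group_count roughly equal groups."""
    groups = [[] for _ in range(group_count)]
    for index, item in enumerate(items):
        groups[index % group_count].append(item)
    return groups

# Placeholder feature names; the real list would come from the project.
features = [f"feature_{n:03d}" for n in range(100)]
groups = split_into_groups(features, 5)
print([len(g) for g in groups])  # → [20, 20, 20, 20, 20]
```

Round-robin assignment keeps the groups balanced even when the feature count is not an exact multiple of the group count.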

Which test mix model option to select for REST Api Load Testing in Visual Studio?

I have created the WebTest file and am now adding a new Load Test using it. The WebTest file contains an API snapshot of my session that performs several REST APIs, including POST, GET, etc. The Web Performance test is passing successfully.
I am creating the Load Test using Load Test Wizard in Visual Studio where I am asked to pick up Test Mix Model as shown in screenshot:
There are following test mix model options for your load test scenario:
Based on the total number of tests
Based on the number of virtual users
Based on user pace
Based on sequential order
I referred to the official documentation but I still cannot understand which test mix model to use, what the difference between them is, and what performance impact each mix creates.
As you only have one web test, I would suggest using the test mix based on the number of tests. But actually any mix except user pace should be suitable.
The four test mixes behave as described below. I recommend careful reading of the words shown in the screenshot in the question, and the related words for the other mixes. The diagram above those words is intended to suggest how the mixes work.
As there is only one web test, the "sequential order" mix is not useful. It is only used when you have more than one test (e.g., A, B and C) and they need to be run repeatedly in that order (i.e. A B C A B C A B C ... A B C) by each virtual user (VU).
The "user pace" mix is used when you want a web test to be run at a fixed rate (of N times per hour) to apply a steady load to the system under test. Suppose test D takes about 40 seconds to execute and the pace is set to 30 tests per hour. Then we expect the system to start that test, for each VU, every 2 minutes (and hence get about 80 seconds idle between executions). Suppose now that there is also test E that takes about 15 seconds to execute and its pace is set to 90 per hour. Then each VU should additionally run test E every 40 seconds. So now each VU runs tests D and E at those rates. Test D wants 30 * 40 seconds per hour of each VU's time and test E wants 90 * 15 seconds, thus we have 30*40 + 90*15 = 2550 seconds used of each hour. So each VU has plenty of time to run both tests. Should the tests take much longer, or require a faster rate, the total might exceed 3600 seconds per hour. That is impossible, and I would expect to see an error or warning message about it as the test runs.
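The feasibility check in that example is just arithmetic; a small sketch using the numbers from the paragraph above:

```python
# Each (duration_seconds, pace_per_hour) pair is one user-pace test.
tests = [(40, 30),   # test D: ~40 s per run, 30 runs/hour
         (15, 90)]   # test E: ~15 s per run, 90 runs/hour

# Seconds of each VU's hour consumed by keeping both paces.
busy_seconds = sum(duration * pace for duration, pace in tests)
print(busy_seconds)          # → 2550
print(busy_seconds <= 3600)  # → True: each VU can sustain both paces
```

If `busy_seconds` exceeded 3600, a single VU could not keep the requested paces, which is the impossible case the answer describes.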
The other two test mixes are used when you want to run web tests as quickly as possible (within the number of VUs specified and allowing for think times). The distinction is in how web tests are chosen to be run. When there is only one web test, the two mixes have the same effect. With more than one web test, one mix tries to make the number of web tests executed match the ratios (percentages) specified in the mix; the other mix tries to make the number of VUs running each test match the ratios.
Suppose we have tests: A at 40%, B at 30%, C at 20% and D at 10%. With a test mix based on total number of tests we would expect the number of each test executed to approximate the stated percentages. Suppose we have 20 VUs and a test mix based on number of users then we would expect 8 VUs to run test A, 6 VUs to run test B, 4 VUs to run test C and 2 VUs to run test D.
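The user-based allocation in that example can be written out directly; a sketch with the percentages and VU count from the paragraph above:

```python
mix = {"A": 0.40, "B": 0.30, "C": 0.20, "D": 0.10}
virtual_users = 20

# With a mix "based on the number of virtual users", VUs are allocated
# to each test in proportion to its percentage.
allocation = {test: round(virtual_users * share) for test, share in mix.items()}
print(allocation)  # → {'A': 8, 'B': 6, 'C': 4, 'D': 2}
```

With a mix based on the total number of tests, the same percentages instead describe the expected share of test *executions*, so the per-test counts (not the VU allocation) approximate 40/30/20/10.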

What are the two different times output from an XCTest run?

When I run my set of unit tests (Xcode 9.2), it logs output like this:
Test Suite 'All tests' passed at 2017-12-13 14:16:27.947.
Executed 319 tests, with 0 failures (0 unexpected) in 0.372 (0.574) seconds
There are two times here, 0.372 and 0.574 seconds respectively.
Can anyone please tell me (or point me to anything that explains) what the two different values mean, and why there is a difference between the two?
The first time, 0.372, is the time spent actually executing the test cases.
The second, 0.574, is the time between the beginning and the end of the whole measured run.
Why a difference of 0.202 seconds? I suppose there is a context-switching cost of some milliseconds per test, depending on the number of test cases and test suites.
Moreover, you may check here:
the 5.434 is the delta between 12.247 and 17.681, i.e. between the effective beginning of the unit testing and the end of execution of the last test suite
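Both deltas quoted in this answer are simple subtractions of the logged timestamps:

```python
# Times from the XCTest log line quoted in the question
test_time, total_time = 0.372, 0.574
print(round(total_time - test_time, 3))   # → 0.202  overhead around the tests

# Suite-level timestamps from the answer's example
suite_start, suite_end = 12.247, 17.681
print(round(suite_end - suite_start, 3))  # → 5.434  wall clock for the whole run
```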

Users distribution in Load Test Visual Studio

I created a load test project in VS. There are 5 scenarios which are implemented as normal unit tests.
Test mix model: Test mix percentage based on the number of tests started.
Scenario A: 10%
Scenario B: 65%
Scenario C: 9%
Scenario D: 8%
Scenario E: 8%
Load pattern: Step. Initial user count: 10. Step user count: 10. Step duration: 10sec. Maximum user count: 300.
Run Duration: 10 minutes.
I would like to know how the load is put on all the scenarios? How the users are distributed between the scenarios in time?
If I set 100 users as the initial user count, do 10 virtual users (10% of 100) start replaying scenario A at one time? What happens when they finish? I would be really grateful if someone could explain how the user distribution works.
Please use the correct terminology. Each "Scenario" in a load test has its own load pattern. This answer assumes that there are 5 test cases A to E.
The precise way load tests start test cases is not defined but the documentation is quite clear. Additionally the load test wizard used when initially creating a load test has good descriptions of the test mix models.
Load tests also make use of random numbers for think times and when choosing which test to run next. This tends to mean the final test results show counts of test cases executed that differ from the desired percentages.
My observations of load tests lead me to believe it works as follows. At various times the load test compares the number of tests currently executing against the number of virtual users that should be active. These times are when the load test's clock ticks and a step load pattern changes, and also when a test case finishes. If the comparison shows more virtual users than test cases being executed, then sufficient new tests are started to make the numbers equal. The test cases are chosen to match the desired test mix, but remember that there is some randomization in the choice.
Your step pattern is initially 10 users, stepping by 10 every 10 seconds to a maximum of 300. Reaching the maximum therefore takes (300 − 10)/10 = 29 steps of 10 seconds each, i.e. about 290 seconds, roughly 5 minutes. The run duration of 10 minutes thus means about 5 minutes of ramp-up followed by 5 minutes steady at maximum users.
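The ramp-up arithmetic for this step pattern can be checked directly; a sketch with the values from the question:

```python
initial, step, maximum = 10, 10, 300  # users
step_duration = 10                    # seconds per step

steps_needed = (maximum - initial) // step     # steps after the initial load
ramp_seconds = steps_needed * step_duration
print(steps_needed, ramp_seconds)  # → 29 290
```

290 seconds is just under 5 minutes, leaving roughly half of the 10-minute run at the 300-user plateau.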
For the final paragraph of your question: given the same percentages but a constant user count of 100, you would expect the initial number of each test case to be close to the percentages, thus 10 of A, 65 of B, 9 of C, 8 of D and 8 of E. When any test case completes, Visual Studio will choose a new test case attempting to follow the test mix model, but, as I said earlier, there is some randomization in the choice.
