What are the two different times output from an XCTest run?

When I run my set of unit tests (Xcode 9.2), it logs output like this:
Test Suite 'All tests' passed at 2017-12-13 14:16:27.947.
Executed 319 tests, with 0 failures (0 unexpected) in 0.372 (0.574) seconds
There are two times here, 0.372 and 0.574 seconds respectively.
Can anyone please tell me (or point me to anything that explains) what the two different values mean, and why there is a difference between the two?

The first value, 0.372, is the time actually spent executing the test cases themselves.
The second value, 0.574, is the wall-clock time elapsed between the beginning and the end of the measured run.
Why a difference of 0.202 seconds? I suppose there is a context-switching overhead of some milliseconds, depending on the number of test cases and test suites.
Moreover, you can check this in the following example:
the 5.434 is the delta between 12.247 and 17.681, i.e. between the effective beginning of the unit testing and the end of the execution of the last test suite.
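To make the distinction concrete, here is a minimal, hypothetical Swift sketch (the class and test names are made up). Assuming, as described above, that one-off suite work such as the class-level setUp counts towards the wall-clock figure but not towards the summed per-test figure, a class like this should log a noticeably larger second number than first:

import XCTest
import Foundation

final class TimingDemoTests: XCTestCase {

    // One-off suite work: expected to show up in the suite's wall-clock time,
    // but not in the execution time of any individual test.
    override class func setUp() {
        super.setUp()
        Thread.sleep(forTimeInterval: 0.2)
    }

    func testFirst() {
        // Roughly 0.1 s of work charged to this test's execution time.
        Thread.sleep(forTimeInterval: 0.1)
        XCTAssertTrue(true)
    }

    func testSecond() {
        Thread.sleep(forTimeInterval: 0.1)
        XCTAssertTrue(true)
    }
}

With two tests of roughly 0.1 s each, the first number should come out close to 0.2 s, while the parenthesised number should also absorb the 0.2 s of class setUp plus any per-test bookkeeping.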

Related

Is there a JMeter plugin to compare test duration against a predefined benchmark?

I am a C# developer and my QA team implemented a JMeter-based test process, but I don't like their process. What I want is this:
Say I have 10 web API test cases and JMeter simulates 10 users.
When JMeter finishes each test, I want to compare the average duration against a predefined benchmark:
1. Test 1 is an easy one and I expect the average duration to be less than 1 second.
2. Test 2 is a complicated one and I expect the average duration to be less than 5 seconds.
3. Test x: expect the average duration to be less than N seconds.
4. If any test's average duration is 10% higher than the predefined number, the test result should be a fail.
My QA team insists it can't be done in JMeter; is that really the case? As a C# developer I could do the above easily using NBomber with a bit of coding.
Points 1-3 - use a Duration Assertion.
Point 4 - use a JSR223 Assertion along with any Sampler somewhere in a tearDown Thread Group; there is a TESTSTART.MS pre-defined variable which holds the test start time. Getting the current time, comparing it to the test start time and failing the request is not a big deal.
More information on the Assertions concept: How to Use JMeter Assertions in Three Easy Steps

How to run multiple Cypress test cases in parallel in CircleCI

I am trying to run 60 Cypress test cases in parallel in CircleCI, but it's running all 60 test cases on each of the 4 machines rather than dividing the test cases and running them in parallel across the machines to reduce the time (say 20 test cases on each machine). I am stuck and the deadline is tomorrow; please help me out.
So far I am only able to run the same test cases in parallel on 4 machines, but I want to split the test cases across the 4 machines so the overall time is reduced.

Are there any criteria for "ramp-up" time? What should the ramp-up time be?

Are there any criteria for setting the "ramp-up" time? What should the ramp-up time be?
Let's say, for example, I need to execute a script for 30 minutes (duration) with 100 concurrent users (threads); what should the ramp-up time be?
And what should the duration of a JMeter script execution be? Normally we execute for 30 minutes; is this correct, or are there any criteria for it?
Normally you should get the answers from the non-functional requirements for your application.
According to JMeter documentation:
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
Your test should last long enough that all virtual users are able to finish their scenarios. It should definitely be longer than the ramp-up time, otherwise you may run into the situation where some JMeter threads have already finished their work and been shut down while others have not yet started. Check out JMeter Test Results: Why the Actual Users Number is Lower than Expected for more details. Also, if you're executing a Soak Test, the duration could be several hours or even days.
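As an illustration only, plugging the figures from the question (100 threads, a 30-minute run) into that rule of thumb gives a starting point, not a prescription:

\[ \text{ramp-up} \approx 100\ \text{threads} \times 1\ \tfrac{\text{s}}{\text{thread}} = 100\ \text{s}, \qquad \text{duration} = 30\ \text{min} = 1800\ \text{s} \gg 100\ \text{s} \]

so a 30-minute run comfortably satisfies the "longer than the ramp-up" condition; adjust the 100 s up or down based on how the system behaves.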

Which test mix model option to select for REST API load testing in Visual Studio?

I have created the WebTest file and am now adding a new Load Test using that WebTest file. The WebTest file contains an API snapshot of my session that performs several REST API calls, including POST, GET, etc. The Web Performance test is passing successfully.
I am creating the Load Test using the Load Test Wizard in Visual Studio, where I am asked to pick a Test Mix Model as shown in the screenshot:
There are following test mix model options for your load test scenario:
Based on the total number of tests
Based on the number of virtual users
Based on user pace
Based on sequential order
I referred to the official documentation but I still cannot understand which test mix model to use, what the difference is, and what performance impact each mix creates.
As you only have one web test, I would suggest using the test mix based on the total number of tests. But actually any mix except user pace should be suitable.
The four test mixes behave as described below. I recommend careful reading of the words shown in the screenshot in the question, and the related words for the other mixes. The diagram above those words is intended to suggest how the mixes work.
As there is only one web test, the "sequential order" mix is not useful. It is only used when you have more than one test (e.g., A, B and C) and they need to be run repeatedly in that order (i.e. A B C A B C A B C ... A B C) by each virtual user (VU).
The "user pace" mix is used when you want a web test to be run at a fixed rate (of N times per hour) to apply a steady load to the system under test. Suppose test D takes about 40 seconds to execute and the pace is set to 30 tests per hour. Then we expect the system to start, for each VU, that test every 2 minutes (and hence get about 100 seconds idle between executions). Suppose now that there is also test E that takes about 15 seconds to execute and its pace is set to 90 per hour. Then each VU should additionally run test E every 40 seconds. So now each VU runs tests D and E at those rates. Test D wants 30 * 40 seconds per hour of each VU's time and test E wants 90 * 15 seconds, thus we have 30*40 + 90*15 = 2550 seconds used of each hour. So each VU has plenty of time to run both tests. Should the tests take much longer or require a faster rate, then that might take over 3600 seconds per hour. That is impossible and I would expect to see an error or warning message about it as the test runs.
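The same arithmetic as a worked formula (busy time demanded of each VU per hour):

\[ 30 \times 40\ \text{s} + 90 \times 15\ \text{s} = 1200\ \text{s} + 1350\ \text{s} = 2550\ \text{s} \le 3600\ \text{s} \]

so both paces fit within a VU's hour; if the total exceeded 3600 s, the requested pace could not be honoured.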
The other two test mixes are used when you want to run web tests as quickly as possible (within the number of VUs specified and allowing for think times). The distinction is how web tests are chosen to be run. When there is only one web test the two mixes have the same effect. With more than one web test, one mix tries to make the number of web tests executed match the ratios (percentages) specified in the mix; the other mix tries to make the number of VUs running each test match the ratios.
Suppose we have tests A at 40%, B at 30%, C at 20% and D at 10%. With a test mix based on the total number of tests, we would expect the number of times each test is executed to approximate the stated percentages. With 20 VUs and a test mix based on the number of users, we would expect 8 VUs to run test A, 6 VUs to run test B, 4 VUs to run test C and 2 VUs to run test D.

User distribution in a Visual Studio Load Test

I created a load test project in VS. There are 5 scenarios which are implemented as normal unit tests.
Test mix model: Test mix percentage based on the number of tests started.
Scenario A: 10%
Scenario B: 65%
Scenario C: 9%
Scenario D: 8%
Scenario E: 8%
Load pattern: Step. Initial user count: 10. Step user count: 10. Step duration: 10sec. Maximum user count: 300.
Run Duration: 10 minutes.
I would like to know how the load is distributed across the scenarios. How are the users divided between the scenarios over time?
If I set 100 users as the initial user count, do 10 virtual users (10% of 100) start replaying scenario A at the same time? What happens when they finish? I would be really grateful if someone could explain how the user distribution works.
Please use the correct terminology. Each "Scenario" in a load test has its own load pattern. This answer assumes that there are 5 test cases A to E.
The precise way load tests start test cases is not defined but the documentation is quite clear. Additionally the load test wizard used when initially creating a load test has good descriptions of the test mix models.
Load tests also make use of random numbers for think times and when choosing which test to run next. This tends to mean the final test results show counts of test cases executed that differ from the desired percentages.
My observations of load tests lead me to believe it works as follows. At various times the load test compares the number of tests currently executing against the number of virtual users that should be active. These times are when the load test's clock ticks and a step load pattern changes, and also when a test case finishes. If the comparison shows more virtual users than test cases being executed, then enough new tests are started to make the numbers equal. The test cases are chosen to match the desired test mix, but remember that there is some randomization in the choice.
Your step pattern starts at 10 users and steps up by 10 every 10 seconds to a maximum of 300, so the maximum user count should be reached after roughly 5 minutes (see the calculation below). The run duration of 10 minutes therefore means about 5 minutes of ramp-up followed by about 5 minutes steady at the maximum user count.
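Counting the steps explicitly (the initial 10 users start at time zero, so 29 further steps are needed to reach 300):

\[ \frac{300 - 10}{10} = 29\ \text{steps}, \qquad 29 \times 10\ \text{s} = 290\ \text{s} \approx 5\ \text{minutes} \]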
Regarding the final paragraph of your question: given the same percentages but a constant user count of 100, you would expect the initial number of each test case to be close to the percentages, i.e. 10 of A, 65 of B, 9 of C, 8 of D and 8 of E. When any test case completes, Visual Studio will choose a new test case, attempting to follow the test mix model, but, as I said earlier, there is some randomization in the choice.
