How To Capture Unit Testing Metrics - Xcode

I'm not sure how to capture the test result data related to unit tests each time a unit test is run. I use Bamboo as a continuous integration server. It works great: it basically builds your project every time you commit code and sends you an email if the build failed / you screwed up somewhere. I would like Bamboo to start running full unit tests as well as builds, and I would also like to begin gathering data about those unit tests.
My question is this: I know that in a lot of different programs you can track the number of lines of code changed, as well as the total number of lines of code in the entire program. I also know that unit testing gives you data such as the number of passes / failures. What I would like to do is automatically gather this data, along with other metrics such as defect density, etc.

Related

Run multiple tests from a previously saved state

I see that Cypress lets us go back to the application state at any point during a test to debug, using time travel. Is it possible to use this state snapshot as a starting point for other tests?
Imagine a UI where the options in a stepper depend on selections made in earlier steps, and many of these rely on requests to an API. To run different tests on the last step I would need to complete the earlier steps in exactly the same way each time. This can be moved into the before block to keep the code simpler, but we still pay the delay and overhead of the API requests each time just to get back to the same state. Given that Cypress already stores the state at various points, can I seed future tests with the state from previous ones?

Is it possible to run simultaneous test runs with different device configurations?

Is it possible to run multiple test runs with different test suites at the same time with an account that permits device concurrency?
https://forums.xamarin.com/discussion/39831/run-ui-tests-on-multiple-devices-simultaneously
In that question, the answer was:
When you create a test run in Xamarin Test cloud, the second page in the Test Run wizard has an option to run tests concurrently (the Parallelization drop down).
If you are submitting tests at the command line, you can run tests in parallel using one of the following two command line parameters:
--test-chunk to run tests in parallel by method
--fixture-chunk to run tests in parallel by fixture.
But can I test on different devices like in this example?
Device1 - test1, test2
Device2 - test1, test3
Device3 - test4, test5
It is possible to run multiple test runs with different test suites at the same time using device concurrency in Xamarin Test Cloud. This is true whether or not you are using parallelization; however, parallelization does complicate the matter somewhat, because a parallelized run executes on multiple copies of a single device, and those copies also count against your concurrent devices.
When you choose to run on parallel devices, Test Cloud will automatically run your tests on as many copies of that device as are available.
Example Scenario
Device Concurrency - 3
First Test Run - 1 device selected
Second Test Run - 2 devices selected
Without parallelization - Both test runs can start as soon as devices are available, because the concurrency limit is the total maximum across all test runs. You could similarly have three test runs each with a single device and all could start immediately. If you exceed your device concurrency, the remaining tests are queued until a device becomes free.
With parallelization - The first test run may use up 1, 2 or all 3 device concurrency slots, depending on how many devices are available. The slots used by the first test run won't be available to the second test run until the tests on them have finished.
Conclusion
Theoretically you can have multiple test runs all using parallelization at the same time; but in practice you might not have enough concurrency slots for them to actually progress concurrently.
You can think of it as a trade-off: for an individual test run on a single device, parallelization will get you your test results much faster, but subsequent test runs will often have to wait. Either way, you can still queue up more tests afterwards, so there's no "penalty" for adding extra tests beyond what your concurrency allows to run immediately.

What results can I force in a Cucumber scenario

Using Ruby / Cucumber, I know you can explicitly call fail("message"), but what are your other options?
The reason I ask is that we have zero... I repeat, absolutely NO control over our test data. We have Cucumber tests that cover edge cases that we may or may not have users for in our database. We (for obvious reasons) do not want to throw away the tests, because they are valuable; however, since our data set cannot exercise those edge cases, the tests fail because the SQL statement returns an empty data set. Right now we just let those tests fail, but I would like to see something along the lines of "no_data" when the SQL statement returns an empty result. So the output would look like:
Scenarios: 100 total (80 passed, 5 no_data, 15 fail)
I am willing to use the already implemented "skipped" status if there is a skip("message") function.
What are my options so we can see that, with the current data, we simply have no test data for those tests? Making these manual tests is also not an option; they need to run every week with our automation, but somehow be reported separately from the failures. A failure means a defect; no_data means the condition isn't testable with the current data. It's the difference between a warning ("we have not tested this edge case") and an alert ("broken code").
You can't invoke 'skipped', but you can certainly call pending, with or without a message. I've used this in a situation similar to yours. Unless you're running in strict mode, having pending scenarios won't cause any failures. The problem I encountered was that occasionally a step would get misspelled, causing Cucumber to mark it as pending because it didn't match a step definition. That then got lost in the sea of 'legitimate' pending scenarios, and it was weeks before we discovered it.

Specify test end condition in Visual Studio Load Test

I'm testing a large BizTalk system using Visual Studio Load Test. The load test pushes messages into MQ; these are picked up by BizTalk and then processed.
Rather than having the test finish (and all performance counters stop) as soon as Visual Studio has finished injecting messages into MQ, I want the test to end if and only if a condition is met (in my case, when SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool returns 4).
I can see a bunch of ways to run things after the test is complete, but no obvious way to extend the test and continue monitoring until some user-defined exit condition is met.
Is this possible, or if not, does anyone have an idea for a good work-around/hack to achieve this?
You'll want to write a custom load test plugin. Details begin at this URL: http://msdn.microsoft.com/en-us/library/ms243153.aspx
The plugin can manipulate the scenario, extending the duration of the test until your condition is met.
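For reference, a load test plugin is just a class implementing ILoadTestPlugin from the Microsoft.VisualStudio.TestTools.LoadTesting namespace, wired up via the load test's plug-in property in the Load Test Editor. A bare-bones sketch might look like the following; the handler body is a placeholder, and whether a plugin can actually extend the run is discussed further down in this thread.

```csharp
using Microsoft.VisualStudio.TestTools.LoadTesting;

// Bare-bones custom load test plugin skeleton.
public class MyLoadTestPlugin : ILoadTestPlugin
{
    private LoadTest _loadTest;

    public void Initialize(LoadTest loadTest)
    {
        _loadTest = loadTest;

        // Heartbeat fires periodically while the test is running; other events
        // (LoadTestStarting, TestFinished, LoadTestFinished, ...) are also available.
        _loadTest.Heartbeat += (sender, e) =>
        {
            // Placeholder: check your end condition here and react to it.
        };
    }
}
```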
I imagine you want to keep the load test running after queueing up a bunch of requests in order to continue to monitor the performance while the requests are processed. Although we can't control the load test duration, there is a way to achieve this.
Don't limit the test duration: Set the load test duration (or number of iterations) to a very large value -- larger than you anticipate (or know) it will take for the end condition to be satisfied.
Limit the scenario that queues up requests: In the load test scenario properties, in the Options section, set the Maximum Test Iterations so that the user load will drop to zero after sending the desired number of requests. If setting an iteration limit is not possible for some reason, you can instead write a load test plugin that sets the user load to zero in a specified scenario after a certain amount of test time has elapsed.
Check for end condition: Write a web test plugin that checks the database for your end condition. Attach this plugin to a new webtest in a new scenario and set Think Time Between Test Iterations on the scenario so that the test runs only as often as needed (60 seconds?). When the condition is reached, the plugin should write a predetermined value into the user context (the user context is accessible in the web test context as $LoadTestUserContext, and is only available in a load test, not when running a webtest standalone).
Abort the test: Write a load test plugin that looks for the flag value in the user context in the TestFinished event. When the value is found, the plugin calls LoadTest.Abort() (see the sketch below).
There is one minor disadvantage to this method: the test state is marked as Aborted in the results database.
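Here is a rough sketch of what steps 3 and 4 could look like in code. It is only an illustration under assumptions: the connection string, query, and the "AbortTest" flag name are placeholders, and it assumes the object exposed as $LoadTestUserContext and the TestFinished event arguments behave as described above.

```csharp
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.LoadTesting;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Step 3: web test plugin attached to the "condition checker" web test.
// It queries the database and, when the end condition is met, writes a flag
// into the load test user context ($LoadTestUserContext).
public class EndConditionCheckPlugin : WebTestPlugin
{
    // Placeholder connection string - adjust for your environment.
    private const string ConnectionString =
        @"Data Source=.;Initial Catalog=BizTalkMsgBoxDb;Integrated Security=True";

    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Spool", conn))
        {
            conn.Open();
            int spoolCount = (int)cmd.ExecuteScalar();

            // $LoadTestUserContext is only present when running inside a load test.
            if (spoolCount == 4 &&
                e.WebTest.Context.ContainsKey("$LoadTestUserContext"))
            {
                var userContext =
                    (LoadTestUserContext)e.WebTest.Context["$LoadTestUserContext"];
                userContext["AbortTest"] = true;   // flag name is arbitrary
            }
        }
    }
}

// Step 4: load test plugin that watches for the flag and aborts the run.
public class AbortOnFlagPlugin : ILoadTestPlugin
{
    private LoadTest _loadTest;

    public void Initialize(LoadTest loadTest)
    {
        _loadTest = loadTest;
        _loadTest.TestFinished += OnTestFinished;
    }

    private void OnTestFinished(object sender, TestFinishedEventArgs e)
    {
        // The user context travels with the virtual user that just finished.
        if (e.UserContext != null && e.UserContext.ContainsKey("AbortTest"))
        {
            _loadTest.Abort();   // note: the run will be marked as Aborted
        }
    }
}
```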
At the time of writing there is (still) no way to extend the duration of the test using a custom load test plugin, nor by having a virtual user type that refuses to exit, nor by blocking during the close-down period of the test to prevent it from exiting that way.
The only way we managed to do something like this was to manipulate the LoadTest database directly and inject performance counter data afterwards from log files, but this is neither smart nor recommended.
Oh well..

Order of execution of unit tests in Visual Studio 2008

I have unit tests defined for my Visual Studio 2008 solution. These tests are defined in multiple methods and in multiple classes across several files.
I've read in a blog article that when using MSTest, it is a mistake to think that you can depend on the order of execution of your tests:
Execution Interleaving: Since each instance of the test class is instantiated separately on a different thread, there are no guarantees regarding the order of execution of unit tests in a single class, or across classes. The execution of tests may be interleaved across classes, and potentially even assemblies, depending on how you chose to execute your tests. The key thing here is that all tests could be executed in any order; it is totally undefined.
That said, I have to have a pre-execution step before any of these tests gets to run. That is, I actually want to define an order of execution somehow. For example, 1) first create the database; 2) test that it's created; then 3) run the remaining 50 tests in arbitrary order.
Any ideas on how I can do that?
I wouldn't test that the database is successfully created; I would assume that all subsequent tests will fail if it is not, and it feels a bit like you would be testing the test code.
Regarding a pre-test step to set up the database, you can do that by creating a method and decorating it with the ClassInitialize attribute. That will make the test framework execute that method prior to any other method within the test class:
[ClassInitialize()]
public static void InitializeClass(TestContext testContext)
{
// your init code here, e.g. create the database
}
Unit tests should all work standalone, and should not have dependencies on each other, otherwise you can't run a single test in isolation.
Every test that needs the database should then just create it on demand (if it hasn't already been created - you can use a singleton/static class to ensure that, if multiple tests are executed in a batch, the database is only actually created once).
Then it won't matter which test executes first; it'll just be created the first time a test needs a database to use.
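A minimal sketch of that singleton/static approach might look like the following; the class name and the CreateDatabase call are placeholders for whatever your setup actually does.

```csharp
// On-demand, create-once database setup for tests.
public static class TestDatabase
{
    private static readonly object _sync = new object();
    private static bool _created;

    // Call this at the start of any test (or [ClassInitialize] method)
    // that needs the database; the database is created at most once per run.
    public static void EnsureCreated()
    {
        lock (_sync)
        {
            if (!_created)
            {
                CreateDatabase();   // placeholder: run schema scripts, restore a baseline, etc.
                _created = true;
            }
        }
    }

    private static void CreateDatabase()
    {
        // your actual database creation logic here
    }
}
```

Each test that touches the database then starts with TestDatabase.EnsureCreated(), and whichever test happens to run first pays the creation cost.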
In theory it is correct that tests should be independent of each other and able to run standalone. But in practice there is a difference between theory and practice, and VS2010 gives me a hard time with its fixed order of execution (a random order that is always the same).
Here are some examples:
I have a unit test that cross-checks the dates between some tables and verifies that everything is in agreement. Obviously it is of no use to run this test on an empty database, so I want it to run SOME TIME AFTER the unit test that inserts data. Sorry, VS2010 doesn't let you do this.
OK, cool, then I will add it to the insert unit test as an epilogue. But then I want to cross-check another 10 things, and instead of having a unit test ("Make sure that entities with various parameters can be inserted without crashes") I end up with a mega-test.
Then another case.
My unit test inserts entities, just inserts, to make sure that this part of the logic works OK. Then I have a multi-threaded version of the test, to make sure that there are no deadlocks and the like. Clearly I need the multi-threaded test to run SOME TIME AFTER the single-threaded test, and ONLY if the single-threaded test succeeds. Sorry, VS2010 can't do this.
Another case. I have a unit test that deletes ALL entities of a given kind in the database. This should result in a bunch of empty tables and lots of zeros in other tables. Clearly it is useless to run it on an empty database, so the test inserts 10,000 entities if it finds the DB empty. However, if it runs AFTER the multi-threaded test, it will find 250,000 entities, and deleting ALL of them takes TIME. Sorry, VS2010 won't let me do anything about it.
The funny thing is that because of this situation my unit tests started slowly turning into mega-tests that took more than 30 minutes to complete (each), and then VS2010 would time them out, because the default test timeout is 30 minutes. OMG please help! :-)