I have unit tests defined for my Visual Studio 2008 solution. These tests are spread across multiple methods and multiple classes in several files.
I've read in a blog article that when using MSTest, it is a mistake to think that you can depend on the order of execution of your tests:
Execution Interleaving: Since each instance of the test class is instantiated separately on a different thread, there are no guarantees regarding the order of execution of unit tests in a single class, or across classes. The execution of tests may be interleaved across classes, and potentially even assemblies, depending on how you chose to execute your tests. The key thing here is – all tests could be executed in any order, it is totally undefined.
That said, I have to have a pre-execution step that runs before any of these tests does. That is, I actually want to define an order of execution somehow. For example: 1) first create the database; 2) test that it was created; then 3) run the remaining 50 tests in arbitrary order.
Any ideas on how I can do that?
I wouldn't test that the database is successfully created; I would assume that all subsequent tests will fail if it is not, and in a way it feels like you would be testing the test code.
Regarding a pre-test step to set up the database, you can do that by creating a method and decorating it with the ClassInitialize attribute. That will make the test framework execute that method prior to any other method within the test class:
[ClassInitialize()]
public static void InitializeClass(TestContext testContext)
{
    // your init code here
}
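If the setup has to happen once before any test in any class runs (the question mentions tests spread across several files), MSTest also provides the AssemblyInitialize attribute, which runs a single method before all tests in the assembly. A minimal sketch; the class and method names below are just placeholders:

[TestClass]
public class GlobalTestSetup
{
    // Runs exactly once, before any test in the whole test assembly.
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        // create the test database here
    }
}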
Unit tests should all work standalone and should not have dependencies on each other; otherwise you can't run a single test in isolation.
Every test that needs the database should then just create it on demand (if it hasn't already been created; you can use a singleton/static class to ensure that, when multiple tests are executed in a batch, the database is only actually created once).
Then it won't matter which test executes first; it'll just be created the first time a test needs a database to use.
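A minimal sketch of that idea, assuming a hypothetical CreateDatabase() helper that does the actual work:

public static class TestDatabase
{
    private static readonly object Sync = new object();
    private static bool created;

    // Call this at the start of every test (or in [TestInitialize]) that needs the database.
    public static void EnsureCreated()
    {
        lock (Sync)
        {
            if (!created)
            {
                CreateDatabase(); // hypothetical helper that builds the schema and seed data
                created = true;
            }
        }
    }

    private static void CreateDatabase()
    {
        // your database creation code here
    }
}

Each test that touches the database then simply calls TestDatabase.EnsureCreated() before doing anything else.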
In theory it is correct that tests should be independent of each other and be able to run standalone. But in practice there is a difference between theory and practice, and VS2010 gives me a hard time with its fixed order of execution (a random order that is always the same).
Here are some examples:
I have a unit test that cross-checks the dates between some tables and verifies that everything is in agreement. Obviously it is of no use to run this test on an empty database, so I want it to run SOME TIME AFTER the unit test that inserts data. Sorry, VS2010 doesn't let you do this.
OK, cool, then I will add it to the insert unit test as an epilogue. But then I want to cross-check 10 other things, and instead of having a unit test ("Make sure that entities with various parameters can be inserted without crashes") I end up with a mega-test.
Then another case.
My unit test inserts entities, just inserts, to make sure that this part of the logic works OK. Then I have a multi-threaded version of the test, to make sure that there are no deadlocks and stuff. Clearly I need the multi-threaded test to run SOME TIME AFTER the single-threaded test, and ONLY if the single-threaded test succeeds. Sorry, VS2010 can't do this.
Another case. I have a unit test that deletes ALL entities of a given kind in the database. This should result in a bunch of empty tables and lots of zeros in other tables. Clearly it is useless to run it on an empty database, so the test inserts 10,000 entities if it finds the DB empty. However, if it runs AFTER the multi-threaded test, it will find 250,000 entities, and deleting ALL of them takes TIME. Sorry, VS2010 won't let me do anything about it.
The funny thing is that because of this situation my unit tests slowly started turning into mega-tests that took more than 30 minutes to complete (each), and then VS2010 would time them out, because the default test timeout is 30 minutes. OMG, please help! :-)
I use Jasmine to test my server-side code and I need to run tests serially, not in parallel.
My tests need to perform CRUD operations against a database. If tests are executed in parallel, I can't ensure that the database is in a known state for each test.
Unless you explicitly choose to create asynchronous tests in Jasmine, everything in Jasmine happens sequentially, in the sense that one test runs only after its preceding test has finished. And if you do write asynchronous tests, then parts of your single test may run in parallel, but you still have the constraint that one test runs only after its preceding test has finished.
However, there are a couple of caveats to be aware of:
In an async test, if your code exceeds Jasmine's timeout period, you might still have code running when Jasmine decides to give up on that test and proceed to the next. (Thanks to @Gregg for this tip; see this answer.)
"JavaScript is usually considered to have a single thread of execution... however, in reality this isn't quite true, in sneaky nasty ways." I am quoting #bobince from this answer.
I've written many unit tests in a file.
The problem is they don't run in order.
I first make an entry in the database in one method and delete the same entry in another method.
Insert() appears before Remove() in my test file.
But still, Remove() runs first, and hence I am not able to execute the test cases effectively, since it won't find the entry. The reason could be that Remove() takes less execution time than Insert().
Can we set the sequence of the test cases?
You can prefix the test names with characters so that they sort alphabetically,
like
aTestSomething
bTestAnotherThing
:)
A better way:
How to order methods of execution using Visual Studio to do integration testing?
Using Ruby/Cucumber, I know you can explicitly call fail("message"), but what are your other options?
The reason I ask is that we have 0... I repeat, absolutely NO control over our test data. We have Cucumber tests that cover edge cases that we may or may not have users for in our database. We (for obvious reasons) do not want to throw away the tests, because they are valuable; however, since our data set cannot exercise those edge cases, the tests fail because the SQL statement returns an empty data set. Right now, we just have those tests failing, but I would like to see something along the lines of "no_data" or similar if the SQL statement returns an empty data set. So the output would look like
Scenarios: 100 total (80 passed, 5 no_data, 15 fail)
I am willing to use the already implemented "skipped" if there is a skip("message") function.
What are my options so we can see that, with the current data, we just don't have any test data for those tests? Making these manual tests is also not an option; they need to be run every week with our automation, but somehow separate from the failures. Failure means a defect; no_data found means it's not a testable condition. It's the difference between a warning ("we have not tested this edge case") and an alert ("broken code").
You can't invoke 'skipped', but you can certainly call pending, with or without an error message. I've used this in a similar situation to yours. Unless you're running in strict mode, having pending scenarios won't cause any failures. The problem I encountered was that occasionally a step would get misspelled, causing Cucumber to mark it as pending because it did not match a step definition. That then became lost in the sea of 'legitimate' pending scenarios, and it was weeks before we discovered it.
I'm testing a Test Case with a few steps in Microsoft Test Manager.
When I run this Test Case, I want to execute only a few steps and then assign another tester to this Test Run.
E.g.
I have three steps. The first two steps are for me to test.
After those two steps, I want to stop testing and assign another tester so that he can test the third step.
But I can't find a way to stop testing, and assign a new user to this Test Case.
Does anyone know if this is possible?
Thanks!
This definitely cannot be done. When you run a Test Case, a new Test Run is created and stored in the TFS database. The steps executed for this run and their results, comments, attachments, etc. are saved and cannot be edited.
From a testing point of view, I think that even if you could do this, you shouldn't. Every test case should be as simple as possible so everyone can execute it. If you really need this, perhaps you should split the test case into two different tests, with the second one having the first as a prerequisite.
I'm testing a large BizTalk system using Visual Studio Load Test. The load test pushes messages into MQ; these are picked up by BizTalk and then processed.
Rather than having the test finish (and all performance counters ending) as soon as Visual Studio has finished injecting messages into MQ, I want the test to end if and only if some condition is met (in my case, when (SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool) == 4).
I can see a bunch of ways to run stuff after the test is complete, but no obvious way to extend the test and continue monitoring unless some user-defined exit condition is met.
Is this possible, or if not, does anyone have an idea for a good work-around/hack to achieve this?
You'll want to write a custom load test plugin. Details begin at this URL: http://msdn.microsoft.com/en-us/library/ms243153.aspx
The plugin can manipulate the scenario, extending the duration of the test until your condition is met.
I imagine you want to keep the load test running after queueing up a bunch of requests in order to continue monitoring the performance while the requests are processed. Although we can't dynamically extend the load test duration, there is a way to achieve this.
Don't limit the test duration: Set the load test duration (or number of iterations) to a very large value -- larger than you anticipate (or know) it will take for the end condition to be satisfied.
Limit the scenario that queues up requests: In the load test scenario properties, in the Options section, set the Maximum Test Iterations so that the user load will drop to zero after sending the desired number of requests. If setting an iteration limit is not possible for some reason, you can instead write a load test plugin that sets the user load to zero in a specified scenario after a certain amount of test time has elapsed.
Check for end condition: Write a web test plugin that checks the database for your end condition. Attach this plugin to a new web test in a new scenario, and set Think Time Between Test Iterations on the scenario so that the test runs only as often as needed (every 60 seconds?). When the condition is reached, the plugin should write a predetermined value into the user context (the user context is accessible in the web test context as $LoadTestUserContext, and is only available in a load test, not when running a web test standalone).
Abort the test: Write a load test plugin that looks for the flag value in the user context in the TestFinished event. When the value is found, the plugin calls LoadTest.Abort(). (A rough sketch of an abort plugin follows below.)
There is one minor disadvantage to this method: the test state is marked as Aborted in the results database.
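For reference, here is a rough sketch of the abort idea as a load test plugin. This is not the exact plugin described above: it skips the web test/user context hand-off and instead polls the database directly from the plugin's Heartbeat event, using the condition and table from the question. The connection string is an assumption, and in practice you would probably throttle the query rather than run it on every heartbeat:

using System;
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class StopWhenSpoolDrainedPlugin : ILoadTestPlugin
{
    private LoadTest loadTest;

    public void Initialize(LoadTest test)
    {
        loadTest = test;
        // Heartbeat fires roughly once per second while the load test is running.
        loadTest.Heartbeat += OnHeartbeat;
    }

    private void OnHeartbeat(object sender, HeartbeatEventArgs e)
    {
        // Assumed connection string; adjust to your environment.
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=BizTalkMsgBoxDb;Integrated Security=True"))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Spool", conn))
        {
            conn.Open();
            int rows = (int)cmd.ExecuteScalar();

            // End condition taken from the question: the spool has drained to 4 rows.
            if (rows == 4)
            {
                loadTest.Abort(); // the run will be recorded as Aborted in the results store
            }
        }
    }
}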
At the time of writing there is (still) no way to extend the duration of the test using a custom load test plugin, nor by having a virtual user type that refuses to exit, nor by locking the close-down period of the test and preventing it from exiting that way.
The only way we managed to do something like this was to directly manipulate the LoadTest database and inject performance counter data afterwards from log files, but this is neither smart nor recommended.
Oh well..