How to set the order for the unit tests to run in ASP.NET MVC? - asp.net-mvc-3

I've written many unit tests in a single file, but the problem is that they don't run in order.
One test method inserts an entry into the database and another test method deletes that same entry.
Insert() appears before Remove() in my test file.
But Remove() still runs first, so I can't exercise the test cases effectively, since Remove() won't find the entry. The reason could be that Remove() takes less execution time than Insert().
Can we set the sequence in which the test cases run?

You can prefix the test names with a letter so that they sort alphabetically,
like
aTestSomething
bTestAnotherThing
:)
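For illustration, a minimal MSTest sketch of this naming trick (EntryTests is a hypothetical class name; note that ordering by method name depends on the test runner and is not guaranteed by MSTest):

[TestClass]
public class EntryTests
{
    [TestMethod]
    public void aTestInsert()
    {
        // insert the entry and assert that it exists
    }

    [TestMethod]
    public void bTestRemove()
    {
        // remove the same entry and assert that it is gone
    }
}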
A better way:
How to order methods of execution using Visual Studio to do integration testing?

Related

Need to execute a step (each feature may have diff step) only once before a Cucumber feature file

I want to execute a specific step only once before each Cucumber feature file. A Cucumber feature file can have multiple scenarios. I don't want Background steps here, which execute before each scenario. Every feature file can have a step (different in each feature) that should execute only once. So I can't put that step into a Before hook, as I have a specific step for each of my 20 features. A sample Gherkin is shown below:
Scenario: This will execute only once before all scenarios in this current feature
  When Navigate to the Page URL
Scenario: scenario 1
  When Some Action
  Then Some Verification
Scenario: scenario 2
  When Some Action
  Then Some Verification
Scenario: scenario 3
  When Some Action
  Then Some Verification
I hope this makes the question clear. I am using Ruby, Capybara and Cucumber in my framework.
Cucumber doesn't really support what you are asking about. One way to implement this with Cucumber hooks would be to combine these two pieces of documentation:
https://github.com/cucumber/cucumber/wiki/Hooks#tagged-hooks
https://github.com/cucumber/cucumber/wiki/Hooks#running-a-before-hook-only-once
You would tag all your feature files appropriately, and then implement tagged Before hooks that execute once per feature tag.
It's not beautiful, but it accomplishes what you want without waiting on a feature request (or switching to a different tool).
This can be achieved by associating a Before, After, Around or AfterStep hook with one or more tags. Example:
Before('@cucumis, @sativus') do
  # This will run only before scenarios tagged
  # with @cucumis OR @sativus.
end
This must be in the top 5 most frequent questions on the Cucumber mailing list. You can do what you want with hooks. However you almost certainly should not do what you want. The execution time you save by taking this approach is totally outweighed by the amount of time and effort it will take to debug the intermittent failures that such an approach generally leads to.
One of the foundations of creating automated tests is to start from a consistent place. When you have code that sets up key things in scenarios, but that is not run for every scenario, you have to do the following:
Ensure your setup code creates a consistent base to start from (this is easy)
Ensure that every scenario that uses this base, does not modify the base in any way at all (this is very very difficult)
In your example you'd have to ensure that every action in every scenario ends up on your original page URL. If just one scenario fails to do that, then you will end up with intermittent failures, and you will have to go through every scenario to find your culprit.
In general it is much easier and more effective to put your effort into making your setup code FAST enough so that you are not worried about running it before each scenario.
Yes, this can be done by passing the actual value in your feature file and using "(\\d+)" in your Java file. Look at the code below for a better understanding.
Scenario: some test scenario
  Given whenever a value is 50
In myFile.java, write the step definition as shown below:
@Given("whenever a value is (\\d+)$")
public void testValueInVariable(int value) throws Throwable {
    assertEquals(50, value);
}
You can also have a look at the link below to get a clearer picture:
https://thomassundberg.wordpress.com/2014/05/29/cucumber-jvm-hello-world/
Some suggestions have been given already, especially the one quoting the official documentation, which uses a global variable to store whether or not the initial setup has been run.
For my case, where multiple features were executed one after another, I had to reset the variable by checking whether scenario.feature.name had changed:
$feature_name ||= ''
$is_setup ||= false

Before do |scenario|
  current_feature_name = scenario.feature.name rescue nil
  if current_feature_name != $feature_name
    # a new feature has started: remember its name and reset the setup flag
    $feature_name = current_feature_name
    $is_setup = false
  end
end
$is_setup can then be used in steps to determine whether any initial setup needs to be done.

Run Teamcity configuration N times

In my set of TeamCity configurations, I decided to create something like an aging test* and run a single configuration 100 times.
Can I do this in a few simple clicks?
*aging test - a test showing that results will not change over time/with aging.
As of now, this is not possible from the UI. If you queue one build configuration several times without any changes, the queued builds will be merged and only one will be executed. If you want to run 100, you would have to trigger them one by one, each after the previous one has finished executing.
But the better solution is to trigger the builds from a script using the REST API (for more details see the documentation here); if the builds have different values for custom parameters, they will all be put in the queue.
HOW: Define a dummy custom parameter and trigger the build from a script within a loop, passing the value of the loop variable as the parameter value. TeamCity will then treat those as different builds and execute all of them.
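For illustration, a minimal C# sketch of such a trigger loop using the REST API's build queue endpoint. The server URL, build configuration ID, credentials and the dummy parameter name are all placeholders to replace with your own values:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TriggerBuilds
{
    static async Task Main()
    {
        const string serverUrl = "https://teamcity.example.com";  // placeholder
        const string buildTypeId = "MyProject_AgingTest";         // placeholder

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password")));

        for (int i = 1; i <= 100; i++)
        {
            // The dummy parameter value differs per build, so TeamCity treats
            // every queued build as distinct instead of merging identical ones.
            string body =
                $"<build><buildType id=\"{buildTypeId}\"/>" +
                $"<properties><property name=\"run.index\" value=\"{i}\"/></properties></build>";

            var response = await client.PostAsync(
                serverUrl + "/app/rest/buildQueue",
                new StringContent(body, Encoding.UTF8, "application/xml"));
            response.EnsureSuccessStatusCode();
        }
    }
}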

Assign another tester in a Test Run in Microsoft Test Manager

I'm testing a Test Case with a few steps in Microsoft Test Manager.
When I run this Test Case, I want to execute only a few steps and then assign another tester to this Test Run.
E.g.
I have three steps. The first two steps are for me to test.
After those two steps, I want to stop testing and assign another tester so that he can test the third step.
But I can't find a way to stop testing and assign a new user to this Test Case.
Does anyone know if this is possible?
Thanks!
This definitely cannot be done. When you run a Test Case, a new Test Run is created and stored in the TFS database. The steps executed for this run and their results, comments, attachments, etc. are saved and cannot be edited.
From a testing point of view, I think that even if you could do this, you shouldn't. Every test case should be as simple as possible so that anyone can execute it. If you really need this, perhaps you should split the test case into two separate tests, where the second one has the first as a prerequisite.

Specify test end condition in Visual Studio Load Test

I'm testing a large BizTalk system using Visual Studio Load Test. The load test pushes messages into MQ; these are picked up by BizTalk and then processed.
Rather than having the test finish (and all performance counters ending) as soon as Visual Studio has finished injecting messages into MQ, I want the test to end only when some condition is met (in my case, when SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool returns 4).
I can see a bunch of ways to run things after the test is complete, but no obvious way to extend the test and continue monitoring until some user-defined exit condition is met.
Is this possible, or if not, does anyone have an idea for a good work-around/hack to achieve this?
You'll want to write a custom load test plugin. Details begin at this URL: http://msdn.microsoft.com/en-us/library/ms243153.aspx
The plugin can manipulate the scenario, extending the duration of the test until your condition is met.
I imagine you want to keep the load test running after queueing up a bunch of requests in order to continue to monitor the performance while the requests are processed. Although we can't control the load test duration, there is a way to achieve this.
Don't limit the test duration: Set the load test duration (or number of iterations) to a very large value -- larger than you anticipate (or know) it will take for the end condition to be satisfied.
Limit the scenario that queues up requests: In the load test scenario properties, in the Options section, set the Maximum Test Iterations so that the user load will drop to zero after sending the desired number of requests. If setting an iteration limit is not possible for some reason, you can instead write a load test plugin that sets the user load to zero in a specified scenario after a certain amount of test time has elapsed.
Check for end condition: Write a web test plugin that checks the database for your end condition. Attach this plugin to a new web test in a new scenario, and set Think Time Between Test Iterations on the scenario so that the test runs only as often as needed (every 60 seconds?). When the condition is reached, the plugin should write a predetermined value into the user context (the user context is accessible in the web test context as $LoadTestUserContext, and is only available in a load test, not when running a web test standalone).
Abort the test: Write a load test plugin that looks for the flag value in the user context in the TestFinished event. When the value is found, the plugin calls LoadTest.Abort(). A sketch of both plugins is shown below.
There is one minor disadvantage to this method: the test state is marked as Aborted in the results database.
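For illustration, a rough C# sketch of the two plugins described above. EndConditionCheckPlugin, AbortOnFlagPlugin and IsEndConditionMet are hypothetical names, and the exact shape of TestFinishedEventArgs and LoadTestUserContext should be verified against your Visual Studio version:

using Microsoft.VisualStudio.TestTools.LoadTesting;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Web test plugin: checks the end condition and raises a flag in the
// load test user context (the context key exists only inside a load test).
public class EndConditionCheckPlugin : WebTestPlugin
{
    public override void PostWebTest(object sender, PostWebTestEventArgs e)
    {
        if (!IsEndConditionMet())
            return;

        object userContext;
        if (e.WebTest.Context.TryGetValue("$LoadTestUserContext", out userContext))
        {
            ((LoadTestUserContext)userContext)["AbortTest"] = true;
        }
    }

    private static bool IsEndConditionMet()
    {
        // Hypothetical: query BizTalkMsgBoxDb.dbo.Spool here and
        // return true when the row count reaches the target.
        return false;
    }
}

// Load test plugin: watches each finished test iteration for the flag
// and aborts the whole run when it appears.
public class AbortOnFlagPlugin : ILoadTestPlugin
{
    private LoadTest loadTest;

    public void Initialize(LoadTest loadTest)
    {
        this.loadTest = loadTest;
        this.loadTest.TestFinished += OnTestFinished;
    }

    private void OnTestFinished(object sender, TestFinishedEventArgs e)
    {
        object flag;
        if (e.UserContext != null &&
            e.UserContext.TryGetValue("AbortTest", out flag) &&
            (bool)flag)
        {
            loadTest.Abort();  // the run will be marked as Aborted in the results
        }
    }
}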
At the time of writing there is (still) no way to extend the duration of the test using a custom load test plugin, nor by having a virtual user type that refuses to exit, nor by locking the close-down period of the test and preventing it from exiting that way.
The only way we managed to do something like this was to directly manipulate the LoadTest database and inject performance counter data in afterwards from log files, but this is neither smart nor recommended.
Oh well..

Order of execution of unit tests in Visual Studio 2008

I have unit tests defined for my Visual Studio 2008 solution. These tests are defined in multiple methods and in multiple classes across several files.
I've read in a blog article that when using MSTest, it is a mistake to think that you can depend on the order of execution of your tests:
Execution Interleaving: Since each instance of the test class is instantiated separately on a different thread, there are no guarantees regarding the order of execution of unit tests in a single class, or across classes. The execution of tests may be interleaved across classes, and potentially even assemblies, depending on how you chose to execute your tests. The key thing here is: all tests could be executed in any order, it is totally undefined.
That said, I have to have a pre-execution step before any of these tests runs. That is, I actually want to define an order of execution somehow. For example: 1) first create the database; 2) test that it was created; then 3) run the remaining 50 tests in arbitrary order.
Any ideas on how I can do that?
I wouldn't test that the database is successfully created; I would assume that all subsequent tests will fail if it is not, and it feels in a way as if you would be testing the test code.
Regarding a pre-test step to set up the database, you can do that by creating a method and decorating it with the ClassInitialize attribute. That will make the test framework execute that method prior to any other method within the test class:
[ClassInitialize()]
public static void InitializeClass(TestContext testContext)
{
    // your init code here
}
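Since the question mentions tests spread across several classes and files, note that MSTest also offers an AssemblyInitialize attribute, which makes a method run once before any test in the whole assembly. A minimal sketch (TestSetup is a hypothetical class name):

[TestClass]
public class TestSetup
{
    [AssemblyInitialize()]
    public static void InitializeAssembly(TestContext testContext)
    {
        // runs once, before any test in any class of this assembly;
        // a good place to create the test database
    }
}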
Unit tests should all work standalone, and should not have dependencies on each other, otherwise you can't run a single test in isolation.
Every test that needs the database should then just create it on demand (if it hasn't already been created; you can use a singleton/static class to ensure that, if multiple tests are executed in a batch, the database is only actually created once).
Then it won't matter which test executes first; the database will simply be created the first time a test needs it. A sketch of such a helper is shown below.
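For illustration, a minimal C# sketch of such a helper. TestDatabase, EnsureCreated and CreateDatabase are hypothetical names, and a plain lock is used rather than Lazy<T> since the question targets VS2008/.NET 3.5:

public static class TestDatabase
{
    private static readonly object sync = new object();
    private static bool created;

    // Call this from any test (or ClassInitialize method) that needs the database.
    public static void EnsureCreated()
    {
        lock (sync)  // safe even if tests execute on multiple threads
        {
            if (!created)
            {
                CreateDatabase();
                created = true;
            }
        }
    }

    private static void CreateDatabase()
    {
        // create the schema, seed the reference data, etc.
    }
}

Each test then simply calls TestDatabase.EnsureCreated() before touching the database.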
In theory it is correct that tests should be independent of each other and be able to run standalone. But in practice there is a difference between theory and practice, and VS2010 gives me a hard time with its fixed order of execution (a random order that is always the same).
Here are some examples:
I have a unit test that cross-checks the dates between some tables and verifies that everything is in agreement. Obviously it is of no use to run this test on an empty database, so I want it to run SOME TIME AFTER the unit test that inserts data. Sorry, VS2010 doesn't let you do this.
OK, cool, then I will add it to the insert unit test as an epilogue. But then I want to cross-check ten other things, and instead of having a unit test ("Make sure that entities with various parameters can be inserted without crashes") I end up with a mega-test.
Then another case.
My unit test inserts entities, just inserts, to make sure that this part of the logic works OK. Then I have a multi-threaded version of the test, to make sure that there are no deadlocks and such. Clearly I need the multi-threaded test to run SOME TIME AFTER the single-threaded test, and ONLY if the single-threaded test succeeds. Sorry, VS2010 can't do this.
Another case. I have a unit test that deletes ALL entities of a given kind in the database. This should result in a bunch of empty tables and lots of zeros in other tables. Clearly it is useless to run it on an empty database, so the test inserts 10,000 entities if it finds the DB empty. However, if it runs AFTER the multi-threaded test, it will find 250,000 entities, and deleting ALL of them takes TIME. Sorry, VS2010 won't let me do anything about it.
The funny thing is that because of this situation my unit tests slowly started turning into mega-tests that took more than 30 minutes to complete (each), and then VS2010 would time them out, because the default test timeout is 30 minutes. OMG, please help! :-)
