Assign another tester in a Test Run in Microsoft Test Manager - visual-studio-2010

I'm testing a Test Case with a few steps in Microsoft Test Manager.
When I run this Test Case, I want to execute only a few steps and then assign another tester to this Test Run.
E.g.
I have three steps. The first two steps are for me to test.
After those two steps, I want to stop testing and assign another tester so that he can test the third step.
But I can't find a way to stop testing and assign a new user to this Test Case.
Does anyone know if this is possible?
Thanks!

This definitely cannot be done. When you run a Test Case, a new Test Run is created and stored in the TFS database. The steps executed for this run, along with their results, comments, attachments, etc., are saved and cannot be edited.
From a testing point of view, I think that even if you could do this, you shouldn't. Every test case should be as simple as possible so that everyone can execute it. If you really need this, perhaps you should split the test case into two different tests, with the second one having the first as a prerequisite.

Related

Robot Framework - Expected Failure after Prod Refresh

One of my automated test cases (TC) fails predictably after a prod refresh that takes place every few months.
For the TC to pass, there should be 'N/A' for the values, which is a precondition. After getting the 'N/A' text, I do an insert into a table to create values and then do other steps.
After the refresh, there are values (monies) instead of the 'N/A'.
What are the ways to avoid the failure? Run Keyword If and Run Keyword And Expect Failure would invalidate the original TC, and it would always pass, which is not what I need.
There might be other approaches too; however, one way to approach this problem is:
You can define an init file in the directory:
__init__.robot
The Suite Setup and Suite Teardown in that file run before anything in the underlying folders.
Make use of Set Global Variable with 'N/A' and update it when you see actual values, i.e. every test case would verify whether the variable contains 'N/A' or actual values (i.e. not 'N/A'); this can be done with a Test Setup keyword.
NOTE: You can also use Set Suite Variable for the same purpose.
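A minimal sketch of that setup (the variable name, suite file name, and keyword below are assumptions, not part of the original answer):

__init__.robot:

*** Settings ***
Suite Setup    Set Global Variable    ${EXPECTED_VALUE}    N/A

some_suite.robot:

*** Settings ***
Test Setup    Check Precondition

*** Keywords ***
Check Precondition
    # After a prod refresh the table holds real values instead of 'N/A',
    # so branch here (or update the global) instead of letting the TC fail.
    Run Keyword If    '${EXPECTED_VALUE}' != 'N/A'    Log    Actual values already present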

Need to execute a step (each feature may have diff step) only once before a Cucumber feature file

I want to execute a specific step only once before each Cucumber feature file. A Cucumber feature file can have multiple scenarios. I don't want Background steps here, which execute before each scenario. Every feature file can have a step (which is different in each feature) that will execute only once. So I can't put that step into a Before hook, as I have a specific step for each of my 20 features. A sample Gherkin file is shown below:
Scenario: This will execute only once before all scenario in this current feature
When Navigate to the Page URL
Scenario: scenario 1
When Some Action
Then Some Verification
Scenario: scenario 2
When Some Action
Then Some Verification
Scenario: scenario 3
When Some Action
Then Some Verification
I hope you understand my question. I am using Ruby, Capybara, and Cucumber in my framework.
Cucumber doesn't really support what you are asking about. A way to implement this with Cucumber hooks would be to use these two pieces of documentation:
https://github.com/cucumber/cucumber/wiki/Hooks#tagged-hooks
https://github.com/cucumber/cucumber/wiki/Hooks#running-a-before-hook-only-once
You would tag all your feature files appropriately, and you can then implement tagged Before hooks that execute once per feature tag.
It's not beautiful but it accomplishes what you want without waiting on a feature request (or using a different tool).
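A minimal sketch of that pattern, assuming each feature carries its own tag (the tag name, flag variable, and URL below are made-up examples):

# features/support/hooks.rb
$page_feature_done = false

Before('@page_feature') do
  # Runs before every scenario tagged @page_feature,
  # but the one-off step executes only the first time.
  unless $page_feature_done
    visit('/page-url')   # the feature-specific step (Capybara)
    $page_feature_done = true
  end
end

You would repeat one such hook (or generate them in a loop over the tag names) for each of the 20 features.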
This can be achieved by associating a Before, After, Around or AfterStep hook with one or more tags. Examples:
Before('@cucumis, @sativus') do
  # This will only run before scenarios tagged
  # with @cucumis OR @sativus.
end
This must be in the top 5 most frequent questions on the Cucumber mailing list. You can do what you want with hooks. However, you almost certainly should not do what you want. The execution time you save by taking this approach is totally outweighed by the amount of time and effort it will take to debug the intermittent failures that such an approach generally leads to.
One of the foundations of creating automated tests is to start from a consistent place. When you have code that sets up key things in scenarios, but that is not run for every scenario, you have to do the following:
Ensure your setup code creates a consistent base to start from (this is easy)
Ensure that every scenario that uses this base, does not modify the base in any way at all (this is very very difficult)
In your example you'd have to ensure that every action in every scenario ends up on your original page URL. If just one scenario fails to do that, then you will end up with intermittent failures, and you will have to go through every scenario to find your culprit.
In general it is much easier and more effective to put your effort into making your setup code FAST enough so that you are not worried about running it before each scenario.
Yes, this can be done by passing the actual value in your feature file and using "(\\d+)" in your Java file. Look at the code shown below for a better understanding.
Scenario: some test scenario
Given whenever a value is 50
In myFile.java, write the step definition as shown below
#Given("whenever a value is (\\d+)$")
public void testValueInVariable(int value) throws Throwable {
assertEqual(value, 50);
}
You can also have a look at the link below to get a clearer picture:
https://thomassundberg.wordpress.com/2014/05/29/cucumber-jvm-hello-world/
Some suggestions have been given, especially the one quoting the official documentation which uses a global variable to store whether or not initial setup has been run.
For my case, where multiple features were executed one after another, I had to reset the variable by checking whether scenario.feature.name had changed:
$feature_name ||= ''
$is_setup ||= false

Before do |scenario|
  current_feature_name = scenario.feature.name rescue nil
  if current_feature_name != $feature_name
    $feature_name = current_feature_name
    $is_setup = false
  end
end
$is_setup can then be used in steps to determine whether any initial setup needs to be done.
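For illustration, a step or hook can then guard its one-off work with the flag (a sketch; the step text and URL are invented):

Given(/^the page is open$/) do
  unless $is_setup
    visit('/page-url')   # feature-specific, one-off setup
    $is_setup = true
  end
end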

How to fail fast only specific rspec test script?

I have a test suite of rspec tests which are divided into different files.
Every file represents one test scenario with some number of test steps.
Now, on some particular tests, it can happen that a specific step fails, and it is too time-consuming and unnecessary to run the rest of the steps in that scenario.
I know there is an option --fail-fast in rspec, but if I'm running tests like rspec spec/*, that will mean that when the first step fails in any script, it will abort the complete execution.
I'm just looking for a mechanism to abort execution of that specific test scenario (test script) when a failure happens, but to continue execution of the other test scenarios.
Thanks for the help,
Bakir
Use the RSpec-instafail gem.
According to its documentation, it:
Show failing specs instantly. Show passing spec as green dots as usual.
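Note that rspec-instafail only changes reporting (failures print immediately); it does not stop the remaining examples in a file. Per its README it is enabled as a formatter:

# Gemfile
gem 'rspec-instafail', group: :test

# .rspec
--require rspec/instafail
--format RSpec::Instafail

If you actually want to skip the rest of a failing spec file while other files keep running, here is a rough sketch of one common pattern (RSpec 3 assumed; the metadata key name is invented):

# spec/spec_helper.rb
RSpec.configure do |config|
  config.after(:each) do |example|
    # Flag the surrounding example group once any example in it fails.
    self.class.metadata[:abort_rest] = true if example.exception
  end
  config.before(:each) do
    # Skip every later example in a group that has already seen a failure.
    skip 'earlier step in this scenario failed' if self.class.metadata[:abort_rest]
  end
end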

How to set the order for the unit tests to run in asp.net mvc?

I've written many unit tests in a file.
The problem is they don't run in order.
I first make an entry to the database in one method and delete the same entry in another method.
Insert() appears before Remove() in my test file.
But Remove() still runs first, and hence I am not able to execute the test cases effectively, since it won't find the entry. The reason could be that Remove() takes less execution time than Insert().
Can we set the sequence of the test cases?
You can prefix the test names with characters so that they sort alphabetically,
like
aTestSomething
bTestAnotherThing
:)
A better way:
How to order methods of execution using Visual Studio to do integration testing?

Order of execution of unit tests in Visual Studio 2008

I have unit tests defined for my Visual Studio 2008 solution. These tests are defined in multiple methods and in multiple classes across several files.
I've read in a blog article that when using MSTest, it is a mistake to think that you can depend on the order of execution of your tests:
Execution Interleaving: Since each instance of the test class is instantiated separately on a different thread, there are no guarantees
regarding the order of execution of unit tests in a single class, or
across classes. The execution of tests may be interleaved across
classes, and potentially even assemblies, depending on how you chose
to execute your tests. The key thing here is – all tests could be
executed in any order, it is totally undefined.
That said, I have to have a pre-execution step before any of these tests gets to run. That is, I actually want to define an order of execution somehow. For example, 1) first create the database; 2) test that it's created; then 3) run the remaining 50 tests in arbitrary order.
Any ideas on how I can do that?
I wouldn't test that the database is successfully created; I would assume that all subsequent tests will fail if it is not, and it feels in a way as though you would be testing the test code.
Regarding a pre-test step to set up the database, you can do that by creating a method and decorating it with the ClassInitialize attribute. That will make the test framework execute that method prior to any other method within the test class:
[ClassInitialize()]
public static void InitializeClass(TestContext testContext)
{
    // your init code here
}
Unit tests should all work standalone and should not have dependencies on each other; otherwise you can't run a single test in isolation.
Every test that needs the database should then just create it on demand (if it has not already been created; you can use a singleton/static class to ensure that if multiple tests are executed in a batch, the database is only actually created once).
Then it won't matter which test executes first; it'll just be created the first time a test needs a database to use.
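A rough sketch of that create-once guard (the class and method names are illustrative, not MSTest API):

public static class TestDatabase
{
    private static readonly object SyncRoot = new object();
    private static bool created;

    // Every test that needs the database calls this first;
    // the database is actually created only once per test run.
    public static void EnsureCreated()
    {
        lock (SyncRoot)
        {
            if (!created)
            {
                CreateDatabase();   // your actual creation logic goes here
                created = true;
            }
        }
    }

    private static void CreateDatabase()
    {
        // e.g. run the schema scripts against the test server
    }
}

Each test (or a [TestInitialize] method) would simply call TestDatabase.EnsureCreated() before touching the database.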
In theory it is correct that tests should be independent of each other and be able to run standalone. But in practice, there is a difference between theory and practice, and VS2010 gives me a hard time with its fixed order of execution (random order that is always the same).
Here are some examples:
I have a unit test that cross-checks the dates between some tables and verifies that everything is in agreement. Obviously it is of no use to run this test on an empty database, so I want it to run SOME TIME AFTER the unit test that inserts data. Sorry, VS2010 doesn't let you do this.
OK, cool, then I will add it to the insert unit test as an epilogue. But then I want to cross-check 10 other things, and instead of having a unit test ("Make sure that entities with various parameters can be inserted without crashes") I end up having a mega-test.
Then another case.
My unit test inserts entities, just insert, to make sure that this part of the logic works OK. Then I have a multi-threaded version of the test, to make sure that there are no deadlocks and the like. Clearly I need the multi-threaded test to run SOME TIME AFTER the single-threaded test, and ONLY if the single-threaded test succeeds. Sorry, VS2010 can't do this.
Another case. I have a unit test that deletes ALL entities of a given kind in the database. This should result in a bunch of empty tables and lots of zeros in other tables. Clearly it is useless to run it on an empty database, so the test inserts 10,000 entities if it finds the DB empty. However, if it runs AFTER the multi-threaded test, it will find 250,000 entities, and deleting ALL of them takes TIME. Sorry, VS2010 won't let me do anything about it.
The funny thing is that, because of this situation, my unit tests slowly started turning into mega-tests that took more than 30 minutes each to complete, and then VS2010 would time them out, because the default test timeout is 30 minutes. OMG, please help! :-)
