I have a Test Case in Microsoft Test Manager 2010 that is used in a Test Plan.
How can I find this Test Case in the Test Plan? Is there at least a column in the Organize view that shows the paths of the Test Plans where the Test Case is used?
Unfortunately, the MTM UI does not provide any way to find the Test Plans or Test Suites in which a particular Test Case is used.
This may be a solution for you:
You can check which Test Suites a particular Test Case belongs to using the TFS API.
Here is a code snippet that works with TFS 2013:
// Requires references to Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.TestManagement.Client
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

// The current user's credentials will be used to access TFS
TfsTeamProjectCollection tfsCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(<TFS Url>));
tfsCollection.EnsureAuthenticated();
ITestManagementService testManagementService = tfsCollection.GetService<ITestManagementService>();
ITestManagementTeamProject teamProject = testManagementService.GetTeamProject(<team project name>);
// Get all Test Suites your Test Case belongs to
// (across all Test Plans in the Team Project)
ITestSuiteCollection testSuites = teamProject.TestSuites.ReferencingTestCase(testCaseId);
Have a look at the ITestManagementTeamProject interface; you can do a lot with it.
(Hint: this interface is currently not documented for VS 2013, so switch the page to VS 2012, where you will usually find a little more documentation.)
For your task of building the whole path to a particular Test Suite, check the ITestSuiteHelper and ITestSuiteBase interfaces. They provide the data you need to walk the Test Suite tree of your project.
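For example, a rough sketch of how the path building could look, continuing from the snippet above (FindSuitePath is a hypothetical helper that walks the static suite tree from each plan's root suite):

// Print the full path of every Test Suite that references the Test Case
foreach (ITestSuiteBase suite in testSuites)
{
    ITestPlan plan = suite.Plan;
    string path = FindSuitePath(plan.RootSuite, suite.Id, plan.Name);
    Console.WriteLine(path ?? plan.Name + "/" + suite.Title);
}

// Depth-first search through the suite hierarchy; returns
// "<Test Plan>/<Suite>/<Sub Suite>/..." or null if the suite is not found.
string FindSuitePath(IStaticTestSuite parent, int suiteId, string pathSoFar)
{
    foreach (ITestSuiteBase child in parent.SubSuites)
    {
        string childPath = pathSoFar + "/" + child.Title;
        if (child.Id == suiteId)
        {
            return childPath;
        }
        IStaticTestSuite staticChild = child as IStaticTestSuite;
        if (staticChild != null)
        {
            string found = FindSuitePath(staticChild, suiteId, childPath);
            if (found != null)
            {
                return found;
            }
        }
    }
    return null;
}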
Related
I'm currently working on an application that uploads a file to a web service (using Spring RestTemplate). This upload function returns an id which can be used to download the uploaded file later on.
I want this scenario covered by a test (I'm not talking about a unit test - maybe an integration or functional test, whichever is appropriate).
What I want to do is have the download test case depend on the result of the upload test (since the id comes from the upload function). This will be run against an actual web service so I can confirm that the upload and download functions work properly.
I'm not sure whether this approach is correct, so if anyone can suggest a good way to implement it, it would be greatly appreciated.
Thanks in advance!
Since this upload/download functionality is already covered at the unit level
I want this scenario covered by a test (I'm not talking about a unit test - maybe an integration or functional test, whichever is appropriate).
I know Test chaining is considered harmful
the download test case will depend on the result of the upload test (since the id comes from the upload function)
and can cause lots of overlap between tests, so changes to one can cascade outwards and cause failures everywhere. Furthermore, the tests should have Atomicity (isolation). But if the trade-off suits you in your case, my advice is to use it.
What you can look at is a proper Test Fixture strategy. The other Fixture Setup patterns can help you with this.
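To illustrate the shared-fixture idea, here is a minimal sketch, written in C#/NUnit style purely for illustration (the same shape works with JUnit's @BeforeAll in a Spring test). FileServiceClient, the endpoint URL and the response format are all assumptions for this example: the upload happens once in the fixture setup, and the download test consumes the returned id.

using System.Net.Http;
using NUnit.Framework;

// Hypothetical thin wrapper around the upload/download endpoints of the service under test.
public class FileServiceClient
{
    private readonly HttpClient http = new HttpClient();
    private readonly string baseUrl;

    public FileServiceClient(string baseUrl)
    {
        this.baseUrl = baseUrl;
    }

    // Uploads the content; assumes the service responds with the new file id in the body.
    public string Upload(string content)
    {
        HttpResponseMessage response = http.PostAsync(baseUrl, new StringContent(content)).Result;
        response.EnsureSuccessStatusCode();
        return response.Content.ReadAsStringAsync().Result;
    }

    // Downloads the file content by id.
    public string Download(string id)
    {
        return http.GetStringAsync(baseUrl + "/" + id).Result;
    }
}

[TestFixture]
public class UploadDownloadScenarioTests
{
    private FileServiceClient client;
    private string uploadedId;   // shared fixture state: the id returned by the upload

    [OneTimeSetUp]
    public void UploadSampleFile()
    {
        client = new FileServiceClient("https://example.test/files");   // made-up endpoint
        uploadedId = client.Upload("hello world");
    }

    [Test]
    public void Download_ReturnsTheUploadedContent()
    {
        Assert.AreEqual("hello world", client.Download(uploadedId));
    }
}

The point is that the dependency between upload and download lives in a single fixture rather than being spread across unrelated tests, which keeps the chaining explicit and contained.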
Sounds like an 'acceptance test' is what is required. This would be basically an integration test of a subsystem for the desired feature.
Have a look at Cucumber as a good easy framework to get started.
Here you would define your steps, for example:
Given a file has been uploaded to the service
When the file is downloaded using the returned id
Then the downloaded content matches the original file
and you can then test the feature as a whole.
Services external to your application (that you have no control over) have to be mocked, even in an e2e test.
This means that the service you are uploading the file to should be faked. Just set up a dummy HTTP server that pretends to be the real service.
With such a fake service you can set up its behaviour for every test; for example, you can prepare a file to be downloaded with a given id.
Pseudo code:
// given
file = File(id, content);
fakeFileService.addFile(file);
// when
applicationRunner.downloadFile(file.id());
// then
assertThatFileWasDownloaded(file);
This is a test which checks whether the application can download a given file.
The File class is a domain object in your application, not a system File!
fakeFileService is the instance that controls the dummy file service.
applicationRunner is a wrapper around your application that makes it do what you want.
I recommend reading "Growing Object-Oriented Software, Guided by Tests".
My team is currently building a WebDriver test framework in Ruby. We are looking for a way to generate test completion reports so they can be emailed out, ideally including individual test and test verification point results.
As an example of what I mean when I say test verification points, a test which creates a product could have multiple verification points such as did the product name get created correctly, did the product price get created correctly. If the test completion report could specify which verification point failed it would make assessing failures a lot quicker.
The reports that can be output from the Selenium IDE are pretty much what I'm after.
Since you are using Ruby, you can consider storing your verification point outcomes, test case status, etc. in a DB such as MySQL or SQLite. This gives you the ability to perform various analyses on the health of your tests in the past and present. Based on this you can even predict future trends.
Maybe the Allure report and its RSpec adapter could suit your requirements? This report is rather new and gives you a wide range of features, like grouping tests by BDD features and stories, saving attachments, parameters and so on.
We use Microsoft Test Manager to test our applications. We had initially created Test Plans for each Application we wanted to test. So our test plans have this structure:
Application A
Application B
Application C
Now, in each Iteration, we are getting new Builds for testing.
So, should we keep the same Test Plans and edit their appropriate fields (Build in use, Iteration, Configuration, ...), or is it better to create new ones for each iteration? Something like this:
Application A - Iteration 1
Application A - Iteration 2
Application B - Iteration 1
Application B - Iteration 2
Application C - Iteration 1
Application C - Iteration 2
And does it make sense to create a new Test Plan for every new build?
Test Plans are usually created per feature in general, and updated accordingly when the feature (Functional Spec) changes. But that's in an ideal world.
From "Build in use, Iteration, Configuration, ..." I can tell that you are talking about Test Reports rather than plans. Why not have a document with a Test Plan, and a separate table in that document where you would update (add one line for) the configuration, build and environment used for testing?
Taking into consideration the definition of the test plan and a short excerpt about working with it:
The test planning process and the plan itself serve as vehicles for communicating with other members of the project team, testers, peers, managers and other stakeholders. This communication allows the test plan to influence the project team and the project team to influence the test plan, especially in the areas of organization-wide testing policies and motivations; test scope, objectives and critical areas to test; project and product risks, resource considerations and constraints; and the testability of the item under test. You can accomplish this communication through circulation of one or two test plan drafts and through review meetings. Such a draft will include many notes such as the following:
[To Be Determined: Jennifer: Please tell me what the plan is for releasing the test items into the test lab for each cycle of system test execution?]
[Dave - please let me know which version of the test tool will be used for the regression tests of the previous increments.]
As you document the answers to these kinds of questions, the test plan becomes a record of previous discussions and agreements between the testers and the rest of the project team.
The test plan also helps us manage change. During early phases of the project, as we gather more information, we revise our plans. As the project evolves and situations change, we adapt our plans. Written test plans give us a baseline against which to measure such revisions and changes. Furthermore, updating the plan at major milestones helps keep testing aligned with project needs. As we run the tests, we make final adjustments to our plans based on the results. You might not have the time - or the energy - to update your test plans every time a variance occurs, as some projects can be quite dynamic. In Chapter 6 [Black, 2001], we describe a simple approach for documenting variances from the test plan that you can implement using a database or spreadsheet. You can include these change records in a periodic test plan update, as part of a test status report, or as part of an end-of-project test summary. (c) ISTQB Foundation book
I recommend updating your existing test plan, so that it is possible to see any amendments or corrections made throughout the whole application development life cycle.
Is it possible to assign multiple testers to the same Test Case in a Test Plan? Or would I have to create a copy of an existing test and assign one test to each tester?
If you mean to create a copy of your Test Case, this is not necessary.
You can add two (or more) Test Suites to your Test Plan with the same requirement or with the same set of Test Cases and then assign them to different testers.
EDIT:
In your Test Plan you can create as many Test Suites as you want (e.g. "Functional Testing", "Quick Testing"). In each 'base' Test Suite you can add requirements (new test suites under your base test suite). The test cases of each requirement are automatically added. Now you can assign testers for each requirement or you can select a single test case under your requirements and assign a tester for each test case separately.
These are some good references:
link1
link2
Assigning multiple testers is possible when a test case has multiple configurations.
For example, Tester A can be assigned to a test case with the configuration "Windows 7", while Tester B can be assigned to the same test case but with the configuration "Windows 2008".
MSDN - How to: Assign Who Will Run the Tests in a Test Plan
Note: If different testers are assigned to different configurations for the
same test, Multiple is displayed in Testers.
I was wondering what approaches others might have for testing domain services against a database? I already have a series of mock repositories that I am able to use in the domain services for testing the domain services themselves. Part of the construction of these mock repositories is that they build out sample aggregates and associated entities and validate them against the same business rules that would otherwise be used within the model. This also provides a nice and simple means to detect potential impact points within the entities themselves, in the event that their interfaces change.
The main problem that I see with live testing of my SQL-backed repositories is database consistency. For example, once a test is run the "create" aspects have already been run. Running them again would obviously cause failures, as the database is no longer pristine. I was considering creating a mirrored database used just for this type of testing. It would be minimal, containing structure, programmability, constraints, etc. I would also provide a minimal set of data for certain established tests. My line of thinking is that I could have a stored procedure that I could call to reset the database to the "pristine" state with base data before the start of the test run.
While this is not as important on a developer machine after the functionality has been initially verified, I am looking more into the importance of running these tests as part of the nightly build, so that in the event of a test failure the build could be held back so as not to foul the target deployment environment (specifically, in this case, the environment that the testing team uses).
I do not necessarily think that the platform matters, but in case anyone has implementation specific concerns, my environment looks like the following:
Windows 7 (Development) / Windows Server 2008 R2 (Server)
Visual Studio 2008 Team Edition (C#)
Microsoft SQL Server 2008 Standard (Development/Server)
I am using Team Build to run my builds, but that is most likely not a factor in the scope of the question.
For example, once a test is run the "create" aspects have already been run. Running them again would obviously cause failures, as the database is no longer pristine.
Maybe you could make your unit tests transactional. Run your tests, roll them back, and the database is unchanged.
Spring has transactional unit test classes that make this easy to do. You just need a transaction manager.
You can use SQL Server Express (I've done it with 2005, but haven't tried with 2008) to set up "test deck" databases that are stored as files. These can be checked in to source control, then test helper classes can (a) copy them to temporary folders, (b) write-enable them, and (c) connect to them. To restore the original state, delete the temporary copy and repeat a-c.
This is a huge pain. If you can get by with transactions (as duffymo suggested) I'd go with that. The pain points with transactions are nested transactions and distributed ones - watch out for those in your code.
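If you do go the file-copy route, the helper can be fairly small. A sketch (the class and paths are made up, and it assumes a local .\SQLEXPRESS instance with user instances enabled):

using System;
using System.Data.SqlClient;
using System.IO;

public static class TestDeckDatabase
{
    // (a) copy the checked-in .mdf to a temp folder, (b) write-enable it, (c) connect to it.
    // Delete the temporary copy afterwards to restore the original state.
    public static SqlConnection OpenCopy(string sourceMdfPath)
    {
        string tempMdf = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".mdf");
        File.Copy(sourceMdfPath, tempMdf);
        File.SetAttributes(tempMdf, FileAttributes.Normal);   // source control often leaves the file read-only

        // Copy the matching .ldf the same way if one is checked in alongside the .mdf.
        SqlConnection connection = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;AttachDbFilename=" + tempMdf + ";Integrated Security=True;User Instance=True");
        connection.Open();
        return connection;
    }
}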
You can create a bunch of data factories in your testing code which run at the start of your test run. Then use the transaction rollback method to keep the database pristine.
To make it easier, give all your test classes a common base class and put the transaction accessor and rollback code in there. The rollback code can be set to run automatically at the completion of every test method.
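In the .NET environment described in the question, one way to get this automatic rollback is System.Transactions.TransactionScope in a common base class; connections opened inside the scope enlist in the ambient transaction automatically. A sketch, assuming NUnit (MSTest's [TestInitialize]/[TestCleanup] works the same way):

using System.Transactions;
using NUnit.Framework;

// Base class for database-backed tests: every test runs inside a transaction
// that is rolled back afterwards, so the database stays pristine.
public abstract class TransactionalTestBase
{
    private TransactionScope scope;

    [SetUp]
    public void BeginTransaction()
    {
        scope = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls everything back.
        scope.Dispose();
    }
}

Each database-backed test class then derives from TransactionalTestBase, and anything a test inserts or updates disappears when the test method completes.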
If you are actually executing unit tests for your repository that hit a database, YOU ARE NOT DOING UNIT TESTING. It might be a helpful test, but it's not a unit test; that's an integration test. If you want to do that and call it an integration test, that is perfectly fine. However, if you are following good design principles in your repositories, then you do not need to test the database, EVER, in your unit tests.
Quite simply, your repository unit test is NOT there to test what wider effects occur in the database based on the input to the repository; it is to confirm that the input to the repository results in a call to a collaborator with such and such a set of values.
You see, the repository, like the rest of your code, should follow the Single Responsibility Principle. Basically, your repository has ONE and ONLY ONE responsibility, and that is to mediate domain model API concerns to the underlying data access technology layer (usually ADO.NET, but it could be Entity Framework or L2S or whatever). Going with the example of ADO.NET calls, your repository shouldn't take on the responsibility of being a factory for the data layer and should instead take a dependency on a collaborator from the ADO.NET data interfaces (specifically IDbConnection/IDbCommand/IDbParameter etc.). Simply take an IDbConnection as a constructor parameter and call it a day. This means that you can write repository unit tests against the interfaces and supply mocks (or fakes or stubs or whatever you need) and confirm that the required methods, in order, with the expected inputs, are called. Go check out my MS blog post on this exact topic -> http://blogs.msdn.com/b/schlepticons/archive/2010/07/20/unit-testing-repositories-a-rebuttal.aspx
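To make that concrete, here is a minimal sketch of the kind of repository and test the answer describes. CustomerRepository is a made-up example, and Moq is assumed as the mocking framework; no database is touched:

using System.Data;
using Moq;
using NUnit.Framework;

// Example repository: it does not create its own connection, it takes one as a dependency.
public class CustomerRepository
{
    private readonly IDbConnection connection;

    public CustomerRepository(IDbConnection connection)
    {
        this.connection = connection;
    }

    public void Delete(int customerId)
    {
        IDbCommand command = connection.CreateCommand();
        command.CommandText = "DELETE FROM Customers WHERE Id = @Id";
        IDbDataParameter parameter = command.CreateParameter();
        parameter.ParameterName = "@Id";
        parameter.Value = customerId;
        command.Parameters.Add(parameter);
        command.ExecuteNonQuery();
    }
}

public class CustomerRepositoryTests
{
    [Test]
    public void Delete_IssuesTheExpectedCommandAgainstItsConnection()
    {
        // All ADO.NET collaborators are mocks; the test only checks the conversation with them.
        var parameters = new Mock<IDataParameterCollection>();
        var command = new Mock<IDbCommand>();
        command.SetupGet(c => c.Parameters).Returns(parameters.Object);
        command.Setup(c => c.CreateParameter()).Returns(new Mock<IDbDataParameter>().Object);

        var connection = new Mock<IDbConnection>();
        connection.Setup(c => c.CreateCommand()).Returns(command.Object);

        new CustomerRepository(connection.Object).Delete(42);

        command.VerifySet(c => c.CommandText = "DELETE FROM Customers WHERE Id = @Id");
        command.Verify(c => c.ExecuteNonQuery(), Times.Once());
    }
}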
Hopefully this helps you avoid a mistake in your tests and design in the future.
BTW: If you want to unit test the database itself, you can. Just use Visual Studio Database Tests. They're built INTO VS and have been around since VS 2005. This is nothing new. But I need to caution you: they need to be completely SEPARATE tests.
If your code is fairly database independent, using an in-memory database such as SQLite for unit testing (not integration testing) will give you the benefits of speed and ease (your test setups initialize the db).
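For example, a rough sketch with the System.Data.SQLite provider; the schema here is a placeholder, and in a real test you would hand the open connection to the repository under test:

using System;
using System.Data.SQLite;
using NUnit.Framework;

public class InMemoryDatabaseTests
{
    private SQLiteConnection connection;

    [SetUp]
    public void CreateInMemoryDatabase()
    {
        // The database exists only while this connection stays open.
        connection = new SQLiteConnection("Data Source=:memory:;Version=3");
        connection.Open();

        using (var create = connection.CreateCommand())
        {
            create.CommandText = "CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL)";
            create.ExecuteNonQuery();
        }
    }

    [TearDown]
    public void DropInMemoryDatabase()
    {
        connection.Dispose();   // closing the connection discards the in-memory database
    }

    [Test]
    public void Insert_ThenCount_RoundTrips()
    {
        using (var insert = connection.CreateCommand())
        {
            insert.CommandText = "INSERT INTO Customers (Name) VALUES ('Jane Doe')";
            insert.ExecuteNonQuery();
        }

        using (var count = connection.CreateCommand())
        {
            count.CommandText = "SELECT COUNT(*) FROM Customers";
            Assert.AreEqual(1, Convert.ToInt32(count.ExecuteScalar()));
        }
    }
}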