Assign a test to multiple testers in Test Manager 2010

Is this possible? Or would I have to create a copy of an existing test and assign one test to each tester?

If you mean creating a copy of your Test Case: that is not necessary.
You can add two (or more) Test Suites to your Test Plan based on the same requirement, or containing the same set of Test Cases, and then assign them to different testers.
EDIT:
In your Test Plan you can create as many Test Suites as you want (e.g. "Functional Testing", "Quick Testing"). To each 'base' Test Suite you can add requirements (which appear as new Test Suites under your base suite), and the Test Cases of each requirement are added automatically. You can then assign a tester per requirement, or select a single Test Case under a requirement and assign a tester to each Test Case separately.
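If you prefer to script the assignments instead of clicking through MTM, the sketch below shows the general idea against the TFS object model (Microsoft.TeamFoundation.TestManagement.Client). It is only a sketch: the settable AssignedTo property is from the TFS 2012+ API and may differ for 2010, and plan, suite and tester stand for an ITestPlan, an ITestSuiteBase and a TeamFoundationIdentity you have already obtained.
// Assign every test point of one suite to a given tester.
ITestPointCollection points =
    plan.QueryTestPoints("SELECT * FROM TestPoint WHERE SuiteId = " + suite.Id);
foreach (ITestPoint point in points)
{
    point.AssignedTo = tester; // assumption: settable in the TFS 2012+ object model
    point.Save();
}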

Assigning multiple testers is also possible when a test case has multiple configurations.
For example, Tester A can be assigned to a test case with the configuration "Windows 7", while Tester B can be assigned to the same test case with the configuration "Windows 2008".
MSDN - How to: Assign Who Will Run the Tests in a Test Plan
Note: If different testers are assigned to different configurations for the same test, Multiple is displayed in Testers.


How can I use one data source for multiple test cases in a load test

I have created performance test cases in Visual Studio 2017. The issue I am facing is that every test case has to get through a login data source, and in a load test they run in parallel. How can I use one data source for all the test cases instead of adding a data source to each test case?
Thank you in advance
Data sources are added to Web Performance Tests; they are not added to load tests. If all the web tests need a login, then each test needs a suitable data source added. If the login actions can be written in a called web test (see the Insert Call to Web Test command in the Web Test Editor), then that login (i.e. called) web test could hold the data source for the username, password, etc.
One data source can be used by several web tests within the same load test. Each web test will have its own data access method (e.g. Unique, Sequential, or Random). That means the same login data may be used by two or more test executions at the same time. If one data source file is needed but each test must use different logins, then see this answer for some ideas.
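For illustration, here is a minimal sketch of such a called login web test in coded form. It assumes the Microsoft.VisualStudio.TestTools.WebTesting API; the data source name "Logins", the table "logins#csv", the column names and the URL are made up, and the data source is assumed to be already attached to the test (via the Web Test Editor for a declarative test, or DataSource/DataBinding attributes for a coded one).
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class LoginWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Bound data source values are exposed through the test context
        // as "<data source name>.<table name>.<column name>".
        WebTestRequest login = new WebTestRequest("https://example.test/login");
        login.Method = "POST";
        FormPostHttpBody body = new FormPostHttpBody();
        body.FormPostParameters.Add("username",
            this.Context["Logins.logins#csv.username"].ToString());
        body.FormPostParameters.Add("password",
            this.Context["Logins.logins#csv.password"].ToString());
        login.Body = body;
        yield return login;
    }
}
Every web test that calls this one then shares the single CSV file, while still keeping its own access method.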

Apply different test scenarios without duplicating code

I have a basic test suite that is successfully running all of my tests. I've tied this into a git pre-push hook, and notice that some tests just don't make sense in that use case (e.g. testing that a customer email is sent and received may take 15+ minutes).
So my question is how to organize things so that only the desired tests run, or certain tests are omitted when deploying. I could use tags and groups, but they don't seem a great fit here and could lead to code duplication (putting the same test in two or more groups).
Any tips / suggestions? (I'm still looking at tags to see if I can make them work for our use case...)
I think tags are the way you want to go. Think of tags as suites: you can add one test to multiple test suites. For example, say I have several login tests. If I want them to be in a smoke test suite AND a login suite, I can just add all the tags that apply to the test.
'@tags': ['smoke', 'login']
This way you don't need to duplicate the code; you can add as many tags as apply to a test. In the example above the test belongs to two different suites, and I can run either the full smoke suite or just the login suite using the same test.
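For illustration, a complete (hypothetical) Nightwatch test module carrying both tags might look like this; the URL and test name are made up:
module.exports = {
  '@tags': ['smoke', 'login'],

  'login page loads': function (browser) {
    browser
      .url('https://example.test/login')
      .waitForElementVisible('body', 1000)
      .end();
  }
};
Either suite can then be run on its own: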
nightwatch --tag smoke
nightwatch --tag login

How to use JMeter as a CI solution

I have a general question about JMeter. I am using JMeter and want to create a continuous integration solution with it. The problem is that there are several QA engineers, each with test plans for his own development area. With JMeter, every test plan has to be run on its own, a report has to be created for each one, and then someone has to go manually over each report and check for failed scenarios. So all the handling is much harder than I thought. JMeter is a wonderful tool, but how do I use it with several test plans, run them nightly, and get a single report in the morning covering all of them? Can someone please advise whether JMeter can be a solution for nightly CI? Is it possible to merge all reports into a single report? Each test plan is for a unique development area, and we will have multiple test plans over time.
Did you have a look at these plugins?
https://github.com/jmeter-maven-plugin/jmeter-maven-plugin, which allows you to fail a build based on the presence of errors in the load test results and which you could combine with https://jmeter-plugins.org/wiki/AutoStop/
https://plugins.jenkins.io/performance
JMeter per se is just a load testing tool, you need a continuous integration server. If you don't have one in your company there are several free and open source solutions you can consider like:
Jenkins
Buildbot
Cruise Control
Check out the Jenkins vs. Other Open Source Continuous Integration Servers article to learn more about the aforementioned tools, see sample build dashboards, and find commands to kick off a JMeter test.
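As a rough sketch of the nightly flow itself (the file names are illustrative, and the merge step assumes JMeter 3.0+ with CSV result files so that a single HTML dashboard can be generated from the merged results):
# run each QA's test plan in non-GUI mode, one results file per plan
jmeter -n -t qa1-testplan.jmx -l qa1-results.jtl
jmeter -n -t qa2-testplan.jmx -l qa2-results.jtl
# merge the CSV results, keeping only the first header line
head -1 qa1-results.jtl > merged-results.jtl
tail -q -n +2 qa1-results.jtl qa2-results.jtl >> merged-results.jtl
# generate one HTML dashboard for all test plans
jmeter -g merged-results.jtl -o nightly-report
The CI server then only needs to schedule this script nightly and archive the nightly-report folder.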

How to find all Test Suites a particular Test Case belongs to

I have a Test Case in Microsoft Test Manager 2010 that is used in a Test Plan.
How can I find this Test Case in the Test Plan? Is there at least a column in the Organize view that shows the paths of the Test Plans where the Test Case is used?
Unfortunately, the MTM UI does not provide any way to search for the Test Cases that belong to a particular Test Plan or Test Suite.
This may be a solution for you:
You can check which Test Suites a particular Test Case belongs to using the TFS API.
Here is a code snippet that works against TFS 2013:
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

// Current user credentials will be used to access TFS
TfsTeamProjectCollection tfsCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(<TFS Url>));
tfsCollection.EnsureAuthenticated();
ITestManagementService testManagementService = tfsCollection.GetService<ITestManagementService>();
ITestManagementTeamProject teamProject = testManagementService.GetTeamProject(<team project name>);
// Get all Test Suites your Test Case belongs to
// (across all Test Plans in the Team Project);
// testCaseId is the work item ID of the Test Case
ITestSuiteCollection testSuites = teamProject.TestSuites.ReferencingTestCase(testCaseId);
Have a look at the ITestManagementTeamProject interface; you can do a lot with it.
(Hint: this interface is currently not documented at all for VS 2013, so switch the page to VS 2012 and you will usually find a little more documentation.)
For building the whole path to a particular Test Suite, check the ITestSuiteHelper and ITestSuiteBase interfaces. They provide the data you need to walk the Test Suites' tree of your project.
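For example, here is a rough sketch of that tree walk, continuing from the snippet above (it assumes the Parent, Plan and Title members of ITestSuiteBase as in the TFS 2013 object model, where Parent is null at the root suite; it also needs using System.Collections.Generic;):
// For each suite referencing the test case, print "Plan > Suite > ... > Suite"
foreach (ITestSuiteBase suite in testSuites)
{
    var segments = new List<string>();
    for (ITestSuiteBase s = suite; s != null; s = s.Parent)
    {
        segments.Insert(0, s.Title);
    }
    segments.Insert(0, suite.Plan.Name);
    Console.WriteLine(string.Join(" > ", segments));
}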

Creating a new Test Plan per iteration, or editing the existing one?

We use Microsoft Test Manager to test our applications. We had initially created Test Plans for each application we wanted to test, so our Test Plans have this structure:
Application A
Application B
Application C
Now, in each Iteration, we are getting new Builds for testing.
So, should we keep the same Test Plans and edit their appropriate fields (Build in use, Iteration, Configuration, ...), or is it better to create new ones for each iteration? Something like this:
Application A - Iteration 1
Application A - Iteration 2
Application B - Iteration 1
Application B - Iteration 2
Application C - Iteration 1
Application C - Iteration 2
And does it make sense to create a new Test Plan for every new build?
Test Plans are usually created per feature in general, and updated accordingly when the feature (functional spec) changes. But that is in an ideal world.
From "Build in use, Iteration, Configuration, ..." I can tell that you are really talking about test reports rather than test plans. Why not have a document with the Test Plan, and a separate table in this document to which you add one line per iteration with the configurations, build and environment used for testing?
Take into consideration the following definition of the test plan and how it is meant to be worked with:
The test planning process and the plan itself serve as vehicles for communicating with other members of the project team, testers, peers, managers and other stakeholders. This communication allows the test plan to influence the project team and the project team to influence the test plan, especially in the areas of organization-wide testing policies and motivations; test scope, objectives and critical areas to test; project and product risks, resource considerations and constraints; and the testability of the item under test. You can accomplish this communication through circulation of one or two test plan drafts and through review meetings. Such a draft will include many notes such as the following:
[To Be Determined: Jennifer: Please tell me what the plan is for releasing the test items into the test lab for each cycle of system test execution?]
[Dave - please let me know which version of the test tool will be used for the regression tests of the previous increments.]
As you document the answers to these kinds of questions, the test plan becomes a record of previous discussions and agreements between the testers and the rest of the project team. The test plan also helps us manage change. During early phases of the project, as we gather more information, we revise our plans. As the project evolves and situations change, we adapt our plans. Written test plans give us a baseline against which to measure such revisions and changes. Furthermore, updating the plan at major milestones helps keep testing aligned with project needs. As we run the tests, we make final adjustments to our plans based on the results. You might not have the time - or the energy - to update your test plans every time a variance occurs, as some projects can be quite dynamic. In Chapter 6 [Black, 2001], we describe a simple approach for documenting variances from the test plan that you can implement using a database or spreadsheet. You can include these change records in a periodic test plan update, as part of a test status report, or as part of an end-of-project test summary. ((c) ISTQB Foundation book)
I recommend updating your existing Test Plan, so that any amendments or corrections made throughout the whole application development life cycle remain visible.
