Shared steps and cloning - clone

This question is about shared steps management via the user interface in Microsoft Test Manager (VSTS 2013) and/or online via the visualstudio.com version.
Say we have a bunch of test cases with some shared steps and we clone (deep copy, not just 'copy') the test plan. The result is that all shared steps in the test cases are also cloned, which I would expect -- so no question so far.
But when I then go to a test case and want to add shared steps to it, the query to locate the shared steps shows all instances of a shared step (assume I am searching by Title). How can I now tell which instance of the shared steps is the right one to choose for the test plan my test case is in?
If I choose just any one of them I will surely end up with test cases from one test plan linked to shared steps in another test plan, and that sounds like something we would not want (right?).
Any light on how to properly manage test cases versus their shared steps is welcome!
Thank you,
Bill.

If your (cloned) test plan is defined under a certain iteration, e.g., Project Y\Release X, a query is opened when you insert a shared step. There you need to add an extra clause (with an And) with Field "Iteration Path" and Operator "Under"; the Value for this example is "Release X".
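For what it's worth, a minimal sketch of the equivalent work item query run through the REST API is below, assuming a visualstudio.com account with the work item tracking REST endpoints; the account URL, project, title filter and iteration path are all placeholders. The important part is the extra [System.IterationPath] UNDER clause, mirroring the Field/Operator/Value described above.

# Hedged sketch: find only the shared steps that belong to the cloned plan's iteration.
import requests

account = "https://myaccount.visualstudio.com/DefaultCollection"   # placeholder
project = "Project Y"                                              # placeholder
wiql = {
    "query": "SELECT [System.Id], [System.Title] "
             "FROM WorkItems "
             "WHERE [System.WorkItemType] = 'Shared Steps' "
             "AND [System.Title] CONTAINS 'Login' "
             "AND [System.IterationPath] UNDER 'Project Y\\Release X'"
}
resp = requests.post(
    "{0}/{1}/_apis/wit/wiql?api-version=1.0".format(account, project),
    json=wiql,
    auth=("", "personal-access-token"))                            # placeholder PAT
resp.raise_for_status()
for item in resp.json()["workItems"]:
    print(item["id"], item["url"])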
Hope this helps.

Related

Find and run all scenarios where a step is used?

I'm kinda new to SpecFlow, but I would like to find and run all scenarios where a step is used. I know about the Ctrl+Shift+Alt+S option, but when a step is used 20+ times across many feature files it can be hard to test them all one after another. This question came to my mind when I updated a step and needed to retest it.
Specify a tag against the scenarios that contain that step - these will then appear within the Test Explorer area if you filter based on 'Traits'. You can then run all scenarios with that tag.
So, for example, you would have:
@TAGHERE
Scenario: Your Scenario
    Given some precondition
    When the step you updated is executed
    Then the expected result is observed

SSIS Get List of all OLE DB Destinations in Data Flow

I have several SSIS packages that we use to load data from multiple different OLE DB data sources into our DB. Inside each package we have several Data Flow tasks that hold a large number of OLE DB Sources and Destinations. What I'm looking to do is see if there is a way to get a text output that holds all of the Destination flow configurations (Sources would be good too, but not top of my list).
I'm trying to make sure that all my OLE DB Destination flows are pointed at the right table, as I've found a few hiccups, without having to double-click on each Data Flow task and check that way; that just becomes tedious and is still prone to missing things.
I'm viewing the packages in Visual Studio 2013. Any help is appreciated!
I am not aware of any programmatic way to discover this data, other than building an application to read the XML within the *.dtsx package. Best advice: pack a lunch and have at it. I know for sure that there is nothing built in with respect to viewing and setting database tables (only server connections).
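For what it's worth, here is a rough sketch of that XML-scraping approach in Python; the element and attribute names (component, componentClassID, the OpenRowset property) are assumptions based on the 2012+ package format, so verify them against one of your own packages and adjust as needed.

# Hedged sketch: list OLE DB Destination components and their target tables
# by reading the raw *.dtsx XML. Paths and name matching are assumptions.
import glob
import xml.etree.ElementTree as ET

for path in glob.glob(r"C:\SSIS\MySolution\**\*.dtsx", recursive=True):
    tree = ET.parse(path)
    for component in tree.iter():
        if not component.tag.endswith("component"):
            continue
        if "OLEDBDestination" not in component.get("componentClassID", ""):
            continue
        # The destination table usually shows up in the OpenRowset property.
        table = next((p.text for p in component.iter()
                      if p.tag.endswith("property") and p.get("name") == "OpenRowset"),
                     None)
        print(path, component.get("name"), table)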
Though, a solution I may add once you have determined the list: create a variable (or variables) to store the unique connection strings and then set those connection strings inside the source/destination components. This will make it easier to manage going forward. In fact, you can take it one step further by setting the same values as parameters, as opposed to variables, which have the added benefit of being exposed on the server. This allows either you or the DBA to set the values as you promote through environments or change server nodes.
Also, I recommend rationalizing that solution into smaller solutions if possible. In my opinion, there is nothing worse than one giant solution that tries to do it all. I am not sure if any of this is helpful, but for what it's worth, I do hope it helps.
You can use the SSIS Object Model for your needs. An example can be found here. Look in the method IterateAllDestinationComponentnsInPackage for the exact details. To start understanding the code, start in the Start method and follow the path.
Caveats: Make sure you use the appropriate Monikers and Class IDs for the Data Flow Tasks and your Destination Components. You can also use this for other Control Flow Tasks and Data Flow Components (for example, Source Components as your other need seems to be). Just keep in mind the appropriate Monikers and Class IDs.

Deployment of plsql packages in oracle

I'm looking to learn about possible ways of deploying a large number of PL/SQL packages, as dependencies seem to be quite a problem.
As it works now, packages are deployed in several iterations, redeploying them again if they could not be deployed in a previous pass due to a missing dependency.
I hope to hear about different approaches to the problem, and will update my question if you happen to have questions for me, to make it clearer.
Would it even be OK to seek guidance this way on SO?
I would recommend installing all specs first, in the proper order.
Then install all bodies.
All dependencies need to be predefined once, in a master install script.
Update:
What else you can do is (a rough sketch of this loop in code follows below):
1) Load all package specs into a main list (I assume all specs and bodies are stored separately; if not, that needs to be done first).
2) Loop over all specs in the main list.
3) Try to compile each one; add it to a failed list if it fails.
4) When you reach the end of the main list, replace its items with the items from the failed list.
5) Go to step 2.
At the same time, you can save the results of the first run, and the second run can order items according to the results of the previous call. This will minimize the number of iterations.
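A minimal sketch of that retry loop is below, assuming the python-oracledb driver, that each spec lives in its own .pks file named after its package, and that each file contains a single CREATE OR REPLACE PACKAGE statement without a trailing slash. Compilation errors are detected by checking USER_ERRORS, because CREATE OR REPLACE can succeed and still leave the object invalid.

# Hedged sketch of the spec retry loop; connection details and file layout are assumptions.
import glob
import os
import oracledb

def has_errors(conn, object_name, object_type="PACKAGE"):
    # CREATE OR REPLACE does not raise on compilation errors, so look them up.
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM user_errors WHERE name = :n AND type = :t",
                n=object_name.upper(), t=object_type)
    return cur.fetchone()[0] > 0

def install_specs(conn, spec_files):
    pending = list(spec_files)
    while pending:
        failed = []
        for path in pending:
            with open(path) as f:
                conn.cursor().execute(f.read())     # CREATE OR REPLACE PACKAGE ...
            name = os.path.splitext(os.path.basename(path))[0]
            if has_errors(conn, name):              # missing dependency? retry later
                failed.append(path)
        if len(failed) == len(pending):
            raise RuntimeError("No progress, still failing: %s" % failed)
        pending = failed

conn = oracledb.connect(user="app_owner", password="...", dsn="dbhost/orclpdb1")
install_specs(conn, sorted(glob.glob("specs/*.pks")))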
Bodies can be installed in any order...
However, you need to keep in mind dependencies on views and from views: specs can depend on views (view_name%TYPE, cursors, etc.) and views can depend on package specs (they could call package functions). This is not a trivial problem... Can you explain how it is currently solved, please?
I myself just install all the procedural code (in any order) and later (re)compile all invalid objects.
There are several ways to recompile all invalid objects, for example:
UTL_RECOMP
DBMS_UTILITY.COMPILE_SCHEMA
Manually, as Tom Kyte suggests (and which I use myself)
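As a minimal sketch of the first two options (again via the python-oracledb driver; the schema name and connection details are placeholders, and UTL_RECOMP is normally run by SYS or another DBA account):

# Hedged sketch: recompile whatever is left invalid after installing everything.
import oracledb

conn = oracledb.connect(user="admin", password="...", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Option 1: UTL_RECOMP (typically needs DBA privileges).
cur.execute("BEGIN UTL_RECOMP.RECOMP_SERIAL('APP_OWNER'); END;")

# Option 2: DBMS_UTILITY, recompiling only the invalid objects.
cur.execute("""BEGIN
                 DBMS_UTILITY.COMPILE_SCHEMA(schema => 'APP_OWNER',
                                             compile_all => FALSE);
               END;""")

# See what is still invalid afterwards.
cur.execute("SELECT object_type, object_name FROM all_objects "
            "WHERE owner = 'APP_OWNER' AND status = 'INVALID'")
for object_type, object_name in cur:
    print(object_type, object_name)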

Change Iteration cycle name in Test Manager

I'm currently working with Test Manager Version 2010.
When running a test case with multiple iterations in it, a list is shown in the top-left corner which contains the following:
Iteration 1
Iteration 2
Iteration 3
....
My question is: is it possible to change these names to something else, so that it is easier to remember the meaning behind each iteration?
For example:
Iteration 1 needs to be named Cat
Iteration 2 needs to be named Dog
And so on...
Yes and no :) I've never done this from Test Manager, but here is a post that explains how to change the iterations using Team Explorer. You can name them whatever you want; they are nothing more than strings in a hierarchy. Even though the problem is slightly different from yours, the steps should apply.
How to Add/Edit the Iteration Field in Team Foundation Server Scrum v1.0 beta Workflow
The bad news is that TFS uses loose coupling on the work items. That means that whenever you change something like this, the work items that used the old string will still have the old value. You will either have to make a script to update all work items (see the sketch below), or manually go in and select the new value for each and every work item. If you only use the new values for new test cases (or other work item types), you're good to go.
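If you go the scripted route, here is a hedged sketch of a bulk update, assuming a later TFS / Visual Studio Online version that exposes the work item REST API (TFS 2010 itself would need the .NET client object model instead); the account URL, work item ids and iteration path are placeholders.

# Hedged sketch: point existing work items at the renamed iteration.
import requests

account = "https://myaccount.visualstudio.com/DefaultCollection"   # placeholder
auth = ("", "personal-access-token")                               # placeholder
new_path = r"MyProject\Release X\Cat"                              # placeholder

for work_item_id in [101, 102, 103]:        # items still carrying the old value
    patch = [{"op": "add",
              "path": "/fields/System.IterationPath",
              "value": new_path}]
    resp = requests.patch(
        "{0}/_apis/wit/workitems/{1}?api-version=1.0".format(account, work_item_id),
        json=patch,
        headers={"Content-Type": "application/json-patch+json"},
        auth=auth)
    resp.raise_for_status()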

Tracking testing assignments with Microsoft TFS

We use VS 2008/2010 with TFS 2010 for our source control, because it also lets us create custom work item types that we can use for project management, such as product backlog items and sprint backlog items.
One item that's not tracked (by machine) is build regression test tasks for release candidates. Our regression testing is part automated, part manual, and the manual part can take several days. Currently we use an Excel spreadsheet with a list of all the test cases, and then the testers just fill in results and notes.
I've been proposing creating a build regression test template that contains each test case and a default owner, so that when we want to do regression testing on a build, we can automatically create work items for every test in the template.
My argument is that if the regression test work is mandatory for the project, and the results should be tracked, then writing additional TFS work items makes sense, especially since the work items can hold estimates, giving managers an idea of how much re-test time remains.
The argument against this is that we already have high-level work items to capture the overall project test requirements, and the regression testing is basically a "re-test", so new work items would be duplicates.
My question: Is anyone else doing anything like this? Is it reasonable to use TFS to track outstanding re-test tasks?
Note: we don't own Visual Studio Test Professional
I think it's reasonable to go with your suggested solution. You should have another work item type for the "test tasks", which can be linked as children to the test requirement work items. Doing that, as you said, would allow you to track results, progress, reporting, etc. You can also add other fields such as build number, tested by, tested date, etc. to the work item type for history, something that cannot be done with just one test requirement work item type.
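As a rough illustration of that shape, here is a hedged sketch that creates one such "test task" as a child of an existing test requirement work item, assuming a later TFS/VSTS version with the work item REST API and a custom work item type named Test Task (both assumptions; TFS 2010 would need the .NET client object model instead). Extra fields like build number can be added with further /fields entries.

# Hedged sketch: create a regression test task linked as a child of the
# test requirement work item. URLs, ids and the type name are placeholders.
from urllib.parse import quote
import requests

account = "https://myaccount.visualstudio.com/DefaultCollection"   # placeholder
project = "MyProject"                                              # placeholder
parent_id = 1234                        # the test requirement work item
auth = ("", "personal-access-token")                               # placeholder

patch = [
    {"op": "add", "path": "/fields/System.Title",
     "value": "Regression test: login page (build 1.2.3.4)"},
    {"op": "add", "path": "/relations/-",
     "value": {"rel": "System.LinkTypes.Hierarchy-Reverse",        # child -> parent
               "url": "{0}/_apis/wit/workItems/{1}".format(account, parent_id)}},
]
resp = requests.patch(
    "{0}/{1}/_apis/wit/workitems/${2}?api-version=1.0".format(
        account, quote(project), quote("Test Task")),
    json=patch,
    headers={"Content-Type": "application/json-patch+json"},
    auth=auth)
resp.raise_for_status()
print(resp.json()["id"])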
Essentially, what you proposed is done in the ITestResult object in the Microsoft.TeamFoundation.TestManagement.Client.dll.
