Categorising TFS Work Items - visual-studio-2010

I was reluctant to post this question at first as it seems like functionality that could be pretty fundamental to the way in which TFS manages work items, but I cannot find anything documented that covers it; how do I categorise TFS work items (more specifically, new tasks)?
I've created a bunch of tasks. Some may fall under 'solution setup', others fall under 'core development' for example. How do I represent this categorisation in TFS? Is it something I need to consider when I'm creating the new tasks? Or are the work items brought back in this way during the query?

There are a handful of ways that people typically categorize/organize their Tasks:
Group tasks by User Story. This is done by linking the WIs, and this information will show up in WI queries and on the Task Board (the task board is only available in TFS 2012 and up).
Use the Area field and Area hierarchy to categorize your Tasks. The Area hierarchy is typically used to represent a functional breakdown of your application, then applied to WIs to categorize them based on which functional area they affect.
Activity Field. There is a field on Task work items called Activity that by default has the values: Deployment, Design, Development, Documentation, Testing, Requirements. This list can be customized by editing the Work Item Type Definition. (See the sketch below for setting the Area and Activity fields from code.)
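For illustration, here is a rough sketch of options 2 and 3 using the TFS 2010 client object model; the collection URL, project name, area path and field values are placeholders rather than anything taken from the answer above:

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class CreateCategorisedTask
{
    static void Main()
    {
        // Placeholder collection URL and project name.
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();
        var project = store.Projects["MyProject"];

        // Create a Task and categorise it via the Area path and the Activity field.
        var task = new WorkItem(project.WorkItemTypes["Task"])
        {
            Title = "Set up build server",
            AreaPath = @"MyProject\Solution Setup"   // functional area (option 2)
        };
        task["Activity"] = "Deployment";             // one of the allowed Activity values (option 3)

        var invalidFields = task.Validate();         // empty collection means the item is valid
        if (invalidFields.Count == 0)
            task.Save();
    }
}
```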

Surprisingly difficult to find anything on this. I was already using 'linked item' as I should; the problem was the query. I created a new query and set the type to 'Tree...' so that the hierarchical structure was pulled back in a way that mimicked a tasks and sub-tasks structure.
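For completeness, the same tree-shaped result can be pulled from code with a WIQL link query run in recursive mode; this is only a sketch (the project name is a placeholder and the exact WIQL may need adjusting):

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class TreeQuerySketch
{
    static void Main()
    {
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();

        // Parent/child (hierarchy) links walked recursively - roughly what the
        // 'Tree of work items' query type does in the UI.
        const string wiql = @"
            SELECT [System.Id]
            FROM WorkItemLinks
            WHERE [Source].[System.TeamProject] = 'MyProject'
              AND [System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward'
            MODE (Recursive)";

        var query = new Query(store, wiql);
        foreach (WorkItemLinkInfo link in query.RunLinkQuery())
            Console.WriteLine("{0} -> {1}", link.SourceId, link.TargetId);
    }
}
```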

Related

Tracking testing assignments with Microsoft TFS

We use VS 2008/2010 with TFS 2010 for our source control, because it also lets us create custom work item types that we can use for project management, such as product backlog items and sprint backlog items.
One item that's not tracked (by machine) is build regression test tasks for release candidates. Our regression testing is part automated, part manual, and the manual part can take several days. Currently we use an Excel spreadsheet with a list of all the test cases, and the testers just fill in results and notes.
I've been proposing creating a build regression test template that contains each test case and its default owner, so that when we want to do regression testing on a build, we can automatically create work items for every test in the template.
My argument is that if the regression test work is mandatory for the project, and the results should be tracked, then writing additional TFS work items makes sense, especially since the work items can hold estimates, giving managers an idea of how much re-test time remains.
The argument against this is that we already have high-level work items to capture the overall project test requirements, and the regression testing is basically a "re-test", so new work items would be duplicates.
My question: Is anyone else doing anything like this? Is it reasonable to use TFS to track outstanding re-test tasks?
Note: we don't own Visual Studio Test Professional
I think it's reasonable to go with your suggested solution. You should have another work item type for the "test tasks", which can be linked as children to the test requirement work items. Doing that, like you said, would allow you to track results, progress, reporting, etc. You can also add other fields like build number, tested by, tested date, etc. to the work item type for history, something that cannot be done with just one test requirement work item type.
Essentially, what you proposed is done in the ITestResult object in the Microsoft.TeamFoundation.TestManagement.Client.dll.
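As a rough sketch of the "create work items for every test in the template" idea, using the TFS client object model; the custom "Regression Test Task" type, the template entries, the URL and the IDs are all assumptions for illustration:

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class CreateRegressionTestTasks
{
    static void Main()
    {
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();
        var project = store.Projects["MyProject"];

        // Hypothetical custom work item type plus a template of test cases and default owners.
        WorkItemType testTaskType = project.WorkItemTypes["Regression Test Task"];
        var template = new[]
        {
            new { TestCase = "Login succeeds with valid credentials", Owner = "Tester A" },
            new { TestCase = "Order export matches previous release",  Owner = "Tester B" }
        };

        const int requirementId = 1234;   // the existing high-level test requirement WI
        var hierarchy = store.WorkItemLinkTypes[CoreLinkTypeReferenceNames.Hierarchy];

        foreach (var entry in template)
        {
            var wi = new WorkItem(testTaskType) { Title = "Regression: " + entry.TestCase };
            wi["Assigned To"] = entry.Owner;

            // Link the new task as a child of the requirement work item
            // (the reverse end of the hierarchy link type points at the parent).
            wi.Links.Add(new RelatedLink(hierarchy.ReverseEnd, requirementId));
            wi.Save();
        }
    }
}
```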

Is it possible to assign a work item in TFS to different people?

TFS (2008) has the great feature of work item tracking where I can easily see what people are doing all day long. Now I was wondering if I could assign a work item to different people, or if they could log time on an item in a trackable way.
For example: we have two developers, Mr. A and Ms. B. A did 4 hours of work and 50% of the work item "Create customer screen" before he got ill. Then B has to finish the other 50%, but I do not want to lose the progress of A, because it could seem that A worked 4 hours less and B 4 hours too much.
Unfortunately it seems that I can enter only one name in "assigned to" when I am using TFS 2008, and I cannot save the item if I try to separate the names with a comma or semicolon. Do you know if such a feature is included in TFS 2010?
Thank you for your help.
No. This is one of the few aspects that haven't changed from 2008 to 2010.
Thomas
I'm not sure about assigning one item to multiple people, but you could set up groups to which multiple people belong. I'm not sure of your other requirements, but this should solve the issue here. In essence Mr. A and Ms. B would both belong to a group called, say, 'Developers', to which the work item is assigned. Thus the full 8 hours is logged against a single entity.
Here is an (old) article on how to do this elegantly. You may want to split up your groups into as specific a category name as possible (e.g. 'Core Developers', 'JavaScript Developers').
Found this link that implies that they are aware of the need but have not implemented a resolution yet.
In TFS, if you assign a work item to someone else it will maintain that in the Work Item History, which is available for reports. TFS 2010, however, only tracks 3 fields: completed work (in hours usually), remaining work, and original estimate. If A and B both update completed work, you should be able to separate that work out in reporting services.
As DarrellNorton said, all the information is recorded in the history of the work item, so you can retrieve the completed work values for each historical entry and correlate them with the assignee at that point in time. So the information you need is already in your database, if you can work out how to extract it. (The danger is that if someone leaves the completed-work field unchanged you might record the first dev's hours against the second dev as well; ideally you would add a state transition rule to your work item templates that clears the field back to 0 whenever the item is assigned to a new developer.)
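A rough sketch of extracting that from the revision history with the client object model; the work item ID and URL are placeholders, and the rule of attributing each increase in Completed Work to the assignee at that revision is an assumption, not something TFS does for you:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class CompletedWorkByAssignee
{
    static void Main()
    {
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();
        WorkItem wi = store.GetWorkItem(1234);   // placeholder work item id

        var hoursByPerson = new Dictionary<string, double>();
        double previousCompleted = 0;

        // Walk the revisions in order and attribute each increase in
        // 'Completed Work' to whoever the item was assigned to at that revision.
        foreach (Revision rev in wi.Revisions)
        {
            var assignedTo = (string)rev.Fields["Assigned To"].Value;
            double completed = Convert.ToDouble(rev.Fields["Completed Work"].Value ?? 0.0);

            double delta = completed - previousCompleted;
            if (delta > 0 && !string.IsNullOrEmpty(assignedTo))
            {
                double soFar;
                hoursByPerson.TryGetValue(assignedTo, out soFar);
                hoursByPerson[assignedTo] = soFar + delta;
            }
            previousCompleted = completed;
        }

        foreach (var pair in hoursByPerson)
            Console.WriteLine("{0}: {1} h", pair.Key, pair.Value);
    }
}
```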
Another approach is to add your own fields to your TFS work items. It would be very easy to add (for example) fields "HoursDoneByMrA" and "HoursDoneByMrB" and expose these onto the work item form so that each developer could have independent statistics by which you could track the information you require. As long as your team size isn't huge, this would be quick/easy to achieve, and would also give you an instant summary on the work item itself of all developers who had touched the work item and their contributed hours, so you wouldn't even need to go as far as building a specialised report. (TFS PowerTools provides editors for the work item types that make adding and displaying this information much easier than hand-editing the XML templates. This approach would work in TFS 2005, 2008, 2010 - once you know how to use the power tools to do it, it would only take about a minute per developer to put this in place).

What are the drawbacks to merging the Task and Bug Work Items and only use one of them in TFS 2010?

I was thinking that I’d rather only use the Task Work Item and ignore the Bug Work Item. This is my thinking as I set things up for my team. I’m on a quest to see why I shouldn’t do this. From my perspective a Task is either a new item or a bug item. There is no need to use two distinct Work Item Types. To make this happen in TFS I’ll start with the Bug Work Item and create a custom field (“Item Type”) to distinguish the two task types: new/bug. Both new tasks and bugs will share the same fields. Anyone see any major drawbacks to this approach?
The main reason Tasks/Issues/Bugs/etc. are different work item types is that the individual fields of each type can be configured differently.
For example, by default, Bugs have a Triage property, Issues have a Due Date, and Tasks have a Discipline. The states of a Bug (Active/Resolved/Closed) are different from those of an Issue (Active/Closed).
By merging them into a single work item type you would lose the ability to configure each one uniquely.
Also, the rules followed when a Bug and a Task are closed, for example, are generally different. Segregating them into separate work item types allows a simpler rule set.
Work item type is also a standard column in all queries.
Overall, it depends on how extensively you are using Team Foundation. If your project is small, and the above don't matter, it's not going to hurt. Though I don't see much gain either.
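To illustrate the point about queries: a single flat query already spans both types because the work item type is just another column, so merging is not needed for combined reporting. A sketch (URL and project name are placeholders):

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class QueryTasksAndBugsTogether
{
    static void Main()
    {
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();

        // One query over both types; [System.WorkItemType] is a standard column.
        const string wiql = @"
            SELECT [System.Id], [System.WorkItemType], [System.Title], [System.State]
            FROM WorkItems
            WHERE [System.TeamProject] = 'MyProject'
              AND [System.WorkItemType] IN ('Task', 'Bug')
            ORDER BY [System.WorkItemType], [System.Id]";

        foreach (WorkItem wi in store.Query(wiql))
            Console.WriteLine("{0} {1}: {2}", wi.Type.Name, wi.Id, wi.Title);
    }
}
```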
I would suggest keeping Bug and dropping Task if you want to merge them. By default when you check in code and Resolve with a bug, it sets the status to Resolved and assigns it to whoever created it - usually a tester, but in your case possibly a PM. That person can then test to confirm the work is done and close it. You can set up alerts on their work items so they get an email and know that progress has happened. Alternatively if you use Task, when you Resolve at check in it is just closed. No alerts, no further testing. YMMV but on some of our projects we use Bug for things like "user would like to add a new report" and it fits our process well. (For others we keep the distinction for reporting purposes.)
It all boils down to 3 things:
Creation / prioritization
Reporting / Notifications
Completion workflow
Typically creation of a Task involves different fields than a Bug. For a bug you'll want to know things like environment found in, who notified you, severity, priority, etc.
For tasks you usually want to know the requestor, reason behind it, business unit impacted and iteration it is scheduled for. Tasks might be long term goals that result in new or enhanced functionality.
Reporting and notifications for the two are generally different as well. PMs are going to track tasks to ensure deliverables are met; your tech support area is going to track bugs.
Next, bugs will generally result in hotfixes and service packs. Depending on severity, this might involve a high-priority push through QA and a release as quickly as possible. Tasks are more laid back and will go through all forms of regression and regular testing, with a period of acceptance by the impacted business unit.
Finally, bugs may impact previous versions of your software. Tasks will almost always be for either the version currently under development or the one after that.
In short, they are fundamentally different things. They might share most fields in common, however by combining them you are restricting yourself in both reporting and workflows. Today this might be okay; however within the next month or next year this could seriously restrict you.
Considering that maintenance of work item types is an incredibly easy thing, there is almost no benefit to merging them.

Is it a good idea to have the same feature available from two different menus?

It sometimes happens that one feature seems to belong in more than one place.
Trivial example: let's say I've got the following menus:
File
Pending orders
Accepted orders
Tools
Help
I've got a search feature, and the same search window works for both pending and accepted orders (it's just an 'order status' combo you can change).
Where does this search feature belong?
The Tools menu seems to be a good choice, but I'm afraid the users may expect the search for accepted orders to be in the Accepted orders menu, which would make sense.
Duplicating the menu entry in both the pending and accepted orders menus seems wrong to me.
What would you do? (And let's pretend we cannot merge the two orders menu into one single menu)
I think the problem you've run into is that you're thinking like a programmer (code duplication is bad). I'm not faulting you for it, I do the same thing. Multiple paths to the same screen, or multiple ways to handle the same process, can actually be extremely beneficial. I would guess that more than one person is going to use your program, and each probably has slightly different job functions. In essence, they have different needs for the application and will approach using it in different ways. If you insist that all items have only one way of being accessed, some people will find the application beneficial and others won't. Sure, all people can learn to do a task a certain way, but it won't make sense to some users. It's not intuitive (read: familiar) to the way they are used to processing information, which means the application will ultimately be less beneficial to them. When people find a process (program, etc.) frustrating, they won't adopt it. They find reasons why the process will need to be changed or abandoned.
An excellent example of multiple approaches to a problem is Adobe Photoshop. Normally there are at least two different ways to access a function. Most users only know of one, because that's all they are concerned with, and they are really comfortable with that one, because it makes the most sense to them. With a little extra work, Adobe scored a huge win, because more people find their product intuitive.
Having a feature in multiple locations is not a bad thing. Consider the overall workflow for viewing both pending orders and accepted orders, and think of your new feature as a component, rather than a one-off entity.
After you map out exactly what tasks a user completes in the pending and accepted order viewing process, see where having the ability to search would provide value (by shortening the workflow or otherwise). This is where your search component belongs.
The main thing to remember about UI is that all that really matters in the end is whether your design makes using your application or site a better experience for your users.
In the search example you list above you'll commonly see apps take two approaches:
Put the search feature in a single location and allow the user to filter the search by selecting pending or accepted, or
Put the search feature in both menus, already configured for the type of search to be done based on the menu it was launched from.
If you repeat the above choice for a number of factors you'll see a much more advanced (aka 'complicated') search interface for number one, and a much simpler (aka 'restrictive') search interface for number two.
Which one is best completely depends on your users. This is why many general applications have a simple search by default and a link to a more advanced search for those that want or need the additional capabilities; they're attempting to make everyone happy. There is absolutely nothing wrong with that if you're writing for a wide variety of people with different needs. If you're writing for a set of users with a restricted set of needs however, you can make some better choices.
In my experience your best bet is to work with one or two of your primary users and map out all of the steps they need to take to accomplish each of the tasks the application will be helping them with. If there aren't a lot of branching points in that sequence of steps, there shouldn't be a lot of choices or settings to make in the application; otherwise the users may feel that the app is harder to work with than it needs to be.
For the search example above, if the user has already navigated into the Pending Orders menu, the likelihood that they'll want to launch a search for Accepted Orders is very small and having to make that choice, or go elsewhere to do the search, will be an extra decision or action they'll need to take. Basic principle is if your user has already made a decision, use it; don't make them tell you again.
Use the UI you come up with as a first cut. Let your users, or a subset of them, try it out and make suggestions. If you have the option, watch them use it. You'll learn far more about how to improve the interface by seeing how they work with it than you will from what they tell you.
Generally you do not want the same menu item appearing in different menus. It adds complexity and clutter to the menu, and users will wonder if the two menu items are really the same or not. When it appears that a menu item belongs in two places, then you may have a more basic problem with your menu organization.
For instance, your example shows a menu bar that is organized by the class or attribute of the object the commands within act on. In general, the menu bar should be organized by category of action not type of object. For example, you could have a Retrieval menu for commands like Search and other means of displaying orders, and a Modify menu for processing the orders (e.g., updating, accepting, forwarding). Both menus would have menu items that apply to both types of objects, although some commands may apply to only one.
Organizing commands by object type is actually a good idea but it is better accomplished with a context menu (right click) than the menu bar.
I would try the search in both the Accepted Orders and Pending Orders menus. However, user testing will show if this is a good idea or not. But it also depends on your user base.
You are doing user testing right?
...you may already know this, but this is a good place to use the command/action pattern IMHO.
So to answer your question: IMO, yes, it is ok :) This situation is definitely warranted.
Just put it under both menus and have it open your search window, pre-configured for the order type whose menu it was launched from. Name them accordingly and voila, they're actually two different actions - even though they use the same code/component.
Keep the user-selectable "status combo you can change" in the search window active though, so the user can still adjust the settings without relaunching it from the other menu... and then perhaps rethink the structure; see some of the great answers in here for ideas ^^
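A small sketch of that command/action idea; all the names here are made up, and the point is only that both menu items invoke the same search component with a different preset status:

```csharp
using System;

enum OrderStatus { Pending, Accepted }

// One search component shared by both menu entries; only the preset differs.
class OrderSearchCommand
{
    private readonly OrderStatus _presetStatus;

    public OrderSearchCommand(OrderStatus presetStatus)
    {
        _presetStatus = presetStatus;
    }

    public void Execute()
    {
        // In a real app this would open the search window with the status
        // combo pre-selected (and still changeable by the user).
        Console.WriteLine("Opening search window, status preset to {0}", _presetStatus);
    }
}

class MenuWiring
{
    static void Main()
    {
        // "Pending orders > Search" and "Accepted orders > Search" are two
        // menu items bound to the same command class with different presets.
        var searchPending  = new OrderSearchCommand(OrderStatus.Pending);
        var searchAccepted = new OrderSearchCommand(OrderStatus.Accepted);

        searchPending.Execute();
        searchAccepted.Execute();
    }
}
```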

Generating UI from DB - the good, the bad and the ugly?

I've read a statement somewhere that generating UI automatically from the DB layout (or business objects, or whatever other business layer) is a bad idea. I can also imagine a few good challenges that one would have to face in order to make something like this work.
However I have not seen (nor could I find) any examples of people attempting it. Thus I'm wondering - is it really that bad? It's definitely not easy, but can it be done with any measure of success? What are the major obstacles? It would be great to see some examples of successes and failures.
To clarify - by "generating UI automatically" I mean that all forms with all their controls are generated completely automatically (at runtime or compile time), based perhaps on some hints in metadata on how the data should be represented. This is in contrast to designing forms by hand (as most people do).
Added: Found this somewhat related question
Added 2: OK, it seems that one way this can get pretty fair results is if enough presentation-related metadata is available. For this approach, how much would be "enough", and would it be any less work than designing the form manually? Does it also provide greater flexibility for future changes?
We had a project which would generate the database tables/stored proc as well as the UI from business classes. It was done in .NET and we used a lot of Custom Attributes on the classes and properties to make it behave how we wanted it to. It worked great though and if you manage to follow your design you can create customizations of your software really easily. We also did have a way of putting in "custom" user controls for some very exceptional cases.
All in all it worked out well for us. Unfortunately it is a commercial banking product, so there is no source available.
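The general shape of that attribute-driven approach might look something like the sketch below; the attribute, class and property names are made up for illustration, not taken from the product described above:

```csharp
using System;
using System.Reflection;

// Hypothetical metadata attribute; the real project used a much richer set of custom attributes.
[AttributeUsage(AttributeTargets.Property)]
class FormFieldAttribute : Attribute
{
    public string Label { get; set; }
    public bool Required { get; set; }
    public int MaxLength { get; set; }
}

// A business class annotated with UI hints.
class Customer
{
    [FormField(Label = "Customer name", Required = true, MaxLength = 100)]
    public string Name { get; set; }

    [FormField(Label = "Credit limit")]
    public decimal CreditLimit { get; set; }
}

class FormGeneratorSketch
{
    static void Main()
    {
        // Walk the annotated properties and emit a control description per field;
        // a real generator would create actual WinForms/WPF controls (or DB tables) instead.
        foreach (PropertyInfo prop in typeof(Customer).GetProperties())
        {
            var meta = (FormFieldAttribute)Attribute.GetCustomAttribute(
                prop, typeof(FormFieldAttribute));
            if (meta == null) continue;

            string control = prop.PropertyType == typeof(decimal) ? "NumericUpDown" : "TextBox";
            Console.WriteLine("{0}: {1} (required: {2}, max length: {3})",
                meta.Label, control, meta.Required, meta.MaxLength);
        }
    }
}
```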
It's OK for something tiny where all you need is a utilitarian method to get the data in.
For anything resembling a real application, though, it's a terrible idea. What makes for a good UI is the humanisation factor, the bits you tweak to ensure that the machine reacts well to a person's touch.
You just can't get that when your interface is generated mechanically... well, maybe with something approaching AI. :)
Edit - to clarify: UI generated from code/DB is fine as a starting point, it's just a rubbish end point.
Hey, this is not difficult to achieve at all, and it's not a bad idea at all. It all depends on your project needs. A lot of software products (mind you, not projects but products) depend upon this model, so they don't have to rewrite their code / UI logic for different client needs. Clients can customize their UI the way they want using a designer form in the admin system.
I have used XML for preserving metadata for this sort of thing. Some of the attributes which I saved for every field were:
friendlyname (label caption)
haspredefinedvalues (yes for drop-down list / multi-check-box list)
multiselect (if yes then check-box list, if no then drop-down list)
datatype
maxlength
required
minvalue
maxvalue
regularexpression
enabled (to show or not to show)
sortkey (order on the web form)
Regarding positioning - I did not care much and simply generated table/tr/td tags one below the other. However, if you want to implement this as well, you can have one more attribute called CssClass where you can define UI-specific properties (look and feel, positioning, etc.). (See the sketch below for the rendering step.)
UPDATE: also note that a lot of e-commerce products follow this kind of dynamic UI when you want to enter product information - as their clients can be selling everything under the sun from furniture to sex toys ;-) So instead of rewriting their code for every different industry, they simply let their clients enter metadata for product attributes via an admin form :-)
I would also recommend looking at the Entity-Attribute-Value model - it has its own pros and cons, but I feel it can be used quite well with your requirements.
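A rough sketch of that rendering step; the metadata class, its values and the control choices are simplified assumptions (a real version would load the attributes from the saved XML and emit a check-box list for multi-select fields):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Hypothetical in-memory form of the per-field metadata listed above.
class FieldMeta
{
    public string FriendlyName;
    public bool HasPredefinedValues;
    public bool MultiSelect;
    public bool Required;
    public string CssClass;
}

class DynamicFormRenderer
{
    // Emit one table row per field: a label cell plus an input cell whose control
    // type follows the haspredefinedvalues/multiselect rules described above
    // (simplified: a multi-select <select> stands in for the check-box list).
    static string Render(IEnumerable<FieldMeta> fields)
    {
        var html = new StringBuilder("<table>\n");
        foreach (var f in fields)
        {
            string control = !f.HasPredefinedValues ? "<input type='text' />"
                           : f.MultiSelect ? "<select multiple='multiple'></select>"
                           : "<select></select>";
            html.AppendFormat("  <tr class='{0}'><td>{1}{2}</td><td>{3}</td></tr>\n",
                f.CssClass, f.FriendlyName, f.Required ? " *" : "", control);
        }
        return html.Append("</table>").ToString();
    }

    static void Main()
    {
        var fields = new List<FieldMeta>
        {
            new FieldMeta { FriendlyName = "Product name", Required = true, CssClass = "wide" },
            new FieldMeta { FriendlyName = "Category", HasPredefinedValues = true, CssClass = "narrow" }
        };
        Console.WriteLine(Render(fields));
    }
}
```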
In my opinion there are some things you should think about:
Does the customer need a function to customize his UI?
Are there a lot of different attributes or elements?
Is the effort of creating such a "rendering engine" worth it?
Okay, I think it's pretty obvious why you should think about these. It really depends on your project whether that kind of model makes sense...
If you want to create a lot of forms that can be customized at runtime, then this model could be pretty useful. Also, if you need to build a lot of smaller tools and you use this as some kind of "engine", then the effort could be worth it because you can save a lot of time.
With that kind of "rendering engine" you could automatically add error reporting, check the values, or add other things that are always built up with the same pattern. But if you have too many of these things, elements or attributes, then performance can go down rapidly.
Another thing that becomes interesting in bigger projects is that changes that have to occur in every form only have to be made in the engine, not in each form. This could save A LOT of time if there is a bug in the finished application.
In our company we use a similar model for an interface generator between cash software (right now I can't remember the right word for it...) and our application, except that it doesn't create a UI, but an output file for one of the applications.
We use XML to define the structure and how the values need to be converted and so on...
I would say that in most cases the data is not suitable for UI generation. That's why you almost always put a layer of logic in between to interpret the DB information for the user. Another thing is that when you generate the UI from the DB you will end up displaying the inner workings of the system, something that you normally don't want to do.
But it depends on where the DB came from - whether it was created to exactly reflect the users' goals for the system, and whether the users' mental model of what the application should help them with is stored in the DB. If so, it might just work, but then you have to start at the users' end. If not, I suggest you don't go that way.
Can you look at your problem from an application architecture perspective? I see you as another database terrorist - trying to solve everything by writing stored procedures. Why have a UI at all? Try doing it all in a DB script. With such an approach, what kind of composite system will you end up with? When a system serves different businesses, try modularization, selectively discovered components, and restricted sharing of references. The UI should be replaceable and independent of the business layer. When you store so much in the DB, there is a hard dependency on the UI and the system becomes a monolith. How do you implement the MVVM pattern in a scenario where the UI is generated? Designers like Blend contain lots of features which cannot be replaced by even the most futuristic UI generator - unless your development platform is Notepad only.
There is a hybrid approach where forms and other artefacts are described in a database to ensure consistency server-side, and then compiled on deployment to ensure efficiency client-side.
A real-life example is the enterprise software MS Dynamics AX.
It has a 'Data' database and a 'Model' database.
The 'Model' stores forms, classes, jobs and every artefact the application needs to run.
Deploying a new software structure used to mean dumping the model database and initiating a CIL compile (CIL meaning Common Intermediate Language, used by Microsoft in .NET).
This way is suitable for enterprise-wide software and can handle large customizations. But keep in mind that this approach sets a framework that should be well understood by whoever is going to maintain and customize the application later.
I did this (in PHP / MySQL) to automatically generate sections of a CMS that I was building for a client. It worked OK; my main problem was that the code that generates the forms became very opaque and difficult to understand, and therefore difficult to reuse and modify, so I did not reuse it.
Note that the tables followed strict conventions such as naming, etc. which made it possible for the UI to expect particular columns and infer information about the naming of the columns and tables. There is a need for meta information to help the UI display the data.
Generally it can work; however, if your UI just mirrors the database then there is probably a lot of room to improve. A good UI should do much more than mirror a database: it should be built around human interaction patterns and preferences, not around the database structure.
So basically if you want to be cheap and do a quick-and-dirty interface which mirrors your DB then go for it. The main challenge would be to find good quality code that can do this or write it yourself.
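The convention-driven idea is not tied to PHP; below is a minimal sketch of the same approach in C#/ADO.NET. The connection string, table name and naming conventions are assumptions for illustration only:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class SchemaDrivenFormSketch
{
    static void Main()
    {
        // Placeholder connection string; the conventions below are assumptions.
        using (var conn = new SqlConnection(
            "Server=.;Database=MyCms;Integrated Security=true"))
        {
            conn.Open();

            // Column metadata for one table, via the standard schema restrictions array
            // (catalog, owner, table, column).
            DataTable columns = conn.GetSchema("Columns",
                new[] { null, null, "Orders", null });

            foreach (DataRow row in columns.Rows)
            {
                string name = (string)row["COLUMN_NAME"];
                string type = (string)row["DATA_TYPE"];

                // Example conventions: *_id columns become drop-downs bound to the
                // referenced table, bit columns become check boxes, everything else a text box.
                string control = name.EndsWith("_id") ? "DropDownList"
                               : type == "bit"        ? "CheckBox"
                               : "TextBox";
                Console.WriteLine("{0} ({1}) -> {2}", name, type, control);
            }
        }
    }
}
```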
From my perspective, it was always a problem to change edit forms when a very simple change was needed in a table structure.
I always had the feeling we had to spend too much time rewriting the CRUD forms instead of developing the useful stuff, like processing / reporting / analyzing data, giving alerts for decisions, etc...
For this reason, I made a code generator a long time ago. It became easy to re-generate the forms, with one simple restriction: keep the CSS class names. As simple as that!
The UI was always based on very "standard" code, controlled by custom CSS.
Whenever I needed to change the database structure, and therefore update an edit form, I had to re-generate the code and redeploy.
One disadvantage I noticed was that changes (customizations, improvements, etc.) made to the previously generated code are lost when you re-generate it.
But anyway, the advantage of having a lot of work done by the code-generator was great!
I initially did it for 2000s-era Microsoft ASP (Active Server Pages) & Microsoft SQL Server... so, when that technology was replaced by .NET, my code generator became obsolete.
I made something similar for PHP but I never finished it...
Anyway, from small experiments I found that generating code ON THE FLY can be way more helpful (and this approach does not exclude the SAVED generated code): no worries about changing the database, etc.
So, the next step was to create something that I am very proud to show here, and I think it is a nice resolution for the issue raised in this thread.
I would start with applicable use cases: https://data-seed.tech/usecases.php.
I worked to add details on how to use it, but if something is still missing please let me know here!
You can change the database structure, and without writing a line of code you can start editing data; on top of that, you have an API available for CRUD operations.
I am still a fan of the "code generator" approach, and I think it is just a flavor of the XML/XSLT approach that I used for DATA-SEED. I plan to add code-generator functionality.
