TFS - how do I sum child task hours to parent

The TFS web client provides a "board" view under the Backlog tab which shows a User Story's Remaining Work as the sum of the Remaining Work of its child tasks.
I have also tried exporting to Excel and using a Pivot table, but there are no obvious links between the work items (as far as I can tell).
I saw this question but I can't tell if it is the same idea.
Also, this item might be similar, but it has received no responses yet.
Can it be done out-of-the-box in Visual Studio TFS?
Does it require coding?
Are there plugins available? I've looked at TFSAggregator on GitHub and that might work, but I'd like to avoid server-side plugins if possible.

One of the many uses of TFS Aggregator... it is even one of their sample use cases!
Example Uses
Update the state of a Bug, PBI (or any parent) to "In Progress" when a child gets moved to "In Progress"
Update the state of a Bug, PBI (or any parent) to "Done" when all children get moved to "Done" or "Removed"
Update the "Work Remaining" on a Bug, PBI, etc with the sum of all the Task's "Work Remaining".
Update the "Work Remaining" on a Sprint with the sum of all the "Work Remaining" of its grandchildren (i.e. tasks of the PBIs and Bugs in the Sprint).
Sum up totals on a single work item (i.e. Dev Estimate + Test Estimate = Total Estimate)
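For reference, a rollup rule in a TFS Aggregator 2 policies file looks roughly like the sketch below. The object-model names (self.Parent, Children, GetField) follow the project's published samples, so verify them against the version you install; the field reference names are the stock scheduling fields:

<rule name="SumRemainingWork" appliesTo="Task"><![CDATA[
  if (self.HasParent())
  {
      // Re-sum Remaining Work across all of the parent's child tasks,
      // skipping any task in the Removed state.
      double total = 0;
      foreach (var child in self.Parent.Children)
      {
          if ((string)child["System.State"] != "Removed")
              total += child.GetField<double>("Microsoft.VSTS.Scheduling.RemainingWork", 0);
      }
      self.Parent["Microsoft.VSTS.Scheduling.RemainingWork"] = total;
  }
]]></rule>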

You can roll up estimated and actual work using Microsoft Project. Because Project has a scheduling engine, it will automatically generate a rollup of summary tasks. Rollup provides summed values of select fields for all child work items of a parent: https://msdn.microsoft.com/en-us/library/dn769080(v=vs.140).aspx

I've been able to do this; however, there is an issue: when a Task's State is set to "Removed", the rollup should not include that Task. In the policy file, I've got the if condition like so:
<rule name="RollupTasks" appliesTo="Task"><![CDATA[
string wfState = (string) self.Fields["State"].Value;
if (wfState="Removed" && self.HasParent()) ...
But that didn't work. I've also tried:
<rule name="RollupTasks" appliesTo="Task"><![CDATA[
if (self.HasParent() && self[System.State]="Removed") ...
...but this doesn't seem to work. Am I close?
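For what it's worth: in C#, = is assignment, so neither snippet compiles as a comparison, and the indexer form needs a quoted field name. Given the stated goal (skip the rollup for Removed tasks), a guard along these lines should be closer; the rollup body is elided as in the question:

<rule name="RollupTasks" appliesTo="Task"><![CDATA[
  string wfState = (string)self.Fields["State"].Value;
  // Only roll up when the task is not Removed and actually has a parent.
  if (wfState != "Removed" && self.HasParent())
  {
      // rollup logic here (elided in the question)
  }
]]></rule>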

Related

Flow triggering itself (possibly) - each run hits past IDs that were edited

I am pretty new to Power Automate. I created a flow that triggers when an item is created or modified. It initializes some variables and then runs some switch cases to assign values to each of them. The variables then go into an array, and another variable is incremented to get the total of the array. I then have a conditional that assigns a value to a column in the list. I tested the flow by going into the modern view of the list and clicking the save button. This worked a number of times, so I sent it for user testing. One of the users edited multiple items by double-clicking into the item, which saves after each column change (and which I assume triggers a run of the flow).
The flow seemingly works, but based on the run history it seemed to get bogged down at a point. I let it sit overnight and then tested again, and now it shows runs for multiple IDs at a time even though I only edited one specific item.
I had another developer look at my flow and he could not spot anything wrong with it. It never had a hard error in testing, only warnings about conditionals causing a loop, but all my conditionals resolve. I am just not sure what caveats I might be missing.
I am currently letting the flow sit to see if it finishes catching up. I have read about the concurrency control option as well as conditions on the trigger itself. I am curious why it seems to run on two or more records at once without me or anyone else editing each one.
You might be able to ignore the updates made by the service account (the account used in the connections of the flow's actions) by adding the following trigger condition expression (set under the trigger's Settings > Trigger Conditions):
@not(equals(triggerOutputs()?['body/Editor/Claims'], 'i:0#.f|membership|johndoe@contoso.onmicrosoft.com'))
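If the edits aren't all made by one account, another common guard is a flag column the flow sets on its own writes and then filters out in the trigger condition. Assuming a hypothetical Yes/No column named FlowUpdated, that condition would look like:

@not(equals(triggerOutputs()?['body/FlowUpdated'], true))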

TFS 2013 Real world work item usage and workflow

The team I manage has been using TFS for years, but we've used a 3rd party system for tracking bugs and features. I'm looking to upgrade our team to TFS 2013, and I've done tons of reading and research into how TFS manages work items, backlogs, iterations, tasks, etc. Although I understand the principles of what 'can' be done, I'm having a hard time visualizing 'how' our team would work with these work items as tasks.
If anyone knows of any best-practice guides for actual sample-based usage, or can answer any of these questions, that'd be great.
1) Product backlog - Under the 'configure schedule and iterations' what is the concept for setting the current 'backlog iteration'? Our team uses short 2 week iterations with a build number, but setting the build iteration as the current backlog makes all new PBIs scoped to only that iteration. Any items not complete in that iteration would disappear once I set the current build to the next iteration number. On the other hand, if I set it to the parent root node, I could see the PBI list getting rather large over time. What is the best method for managing PBIs that are unassigned and working in a simple Parent->build1/build2 etc structure?
2) Features - So I create a feature, perhaps it spans many work items and several tasks. They get completed over time, but I've noticed there's no 'auto' complete or status updates on parent items. So who is supposed to mark a Feature item complete, and when? If the product owner is supposed to use the features list to get an overview of work, they have no idea if all the dependent items have been completed and when to mark the feature Done.
3) Work Items - Managing these, and in particular their 'state' or status, seems like a royal pain. On the task board you can't change a work item's state, only its tasks' states with drag and drop, which is nice. But you complete all the tasks, and the parent work item stays in status 'New'. Do you really have to micro-manage every work item, open it up, and set the state to Done?
4) QA/testing - Every team member is responsible for testing each work item, so every item is tested by multiple people, with any issues found being logged. What's the best way to use work items or tasks for this?
5) Build Complete - Once every work item in the iteration is marked Done, I assume they are removed from the product backlog, correct? The exception to this seems to be the features they were tied to: the feature item itself remains open. How do stakeholders view a list of features that were completed in the current build?
I can't answer everything (indeed, there is no one "right" answer), but here's how my team uses TFS - it might give you some ideas:
We use Area Path to represent a Project or Epic that work belongs under. When a work item is created it is assigned to a project using the Area Path, and it never changes.
Then to represent "when" work is done we use a hierarchical iteration path under 3 headings (for a project called "Project"): Project\Completed, Project\Current, Project\Future.
Stories in the product backlog are initially assigned to Future (We go a bit further in fact and use New stories to represent "proposed" work, and Active ones to represent the "approved" backlog - this allows us to plan tentative projects/contracts that convert into real work when they get the green light). At this stage we do Planning Poker to get Story Points and then the Project Managers assign stack ranks to the stories to help to decide what to move from Proposed to Product backlog, and then eventually what we should think about for the next iteration.
When we start an iteration we create a new iteration (call it 001) under Future, i.e. Project\Future\001. Then Stories are chosen from the Product Backlog for implementation - they get assigned to this iteration. When the iteration is ready to start, we use a "conveyor belt" approach which moves all the iterations along one "place" in the hierarchy: In the Iteration Path configuration UI, just drag the 001 iteration from Future to Current. This re-paths everything in that path automatically so that all the active work is instantly under Project\Current.
As we complete the iteration, we would have Current\001 and we'd then add Future\002. Then we move 001 and 002 along the conveyor (to Project\Completed\001 and Project\Current\002 respectively). This way the work gets assigned to one iteration and stays there, but the iteration as a whole moves from future ...to current ...to completed. This allows us to build queries like "all current work" (all work under "Project\Current") that we don't need to rewrite for every iteration, and this saves a massive amount of time and eliminates a lot of mistakes trying to re-assign iteration paths constantly - in most cases the iteration is only changed once (from future to an actual iteration).
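This also makes iteration-scoped queries trivial. For example, "all current work" is a one-off WIQL query that never needs editing again (the paths assume the team project is called "Project", as above):

SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = 'Project'
  AND [System.IterationPath] UNDER 'Project\Current'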
When a story moves into the current iteration, we choose an implementing team (e.g. an owner to accept delivery, and a developer and a tester to implement the work) and those people add tasks for any work that needs to be done to deliver the story. Any bugs/issues that crop up for that story during the iteration are also parented to the Story or Tasks.
We found the TFS tools pretty poor (clumsy, slow, micro-managing), so we now use a home-built dashboard that shows us a list of stories, so in our scrum we can step through the stories and see the tasks/bugs/issues for each, who is working on them, and how much work they reported on the task since the last scrum. This gives us a really clear basis to discuss the story.
We close tasks/bugs/issues as we complete them, but the story stays open till the end of the iteration (so that any new bugs found can be attached and dealt with). We then use a custom tool to "Resolve" the story, which closes all the child work items, and then checks if the parent Feature or Epic is now completed and can also be marked "Resolved". This can also be done in stock TFS using a manual process, but it is rather laborious, and the code to automate it is only an hour or two's work. I really don't understand why TFS makes you essentially update all the database tables by hand when it's so easy to automate. (In a similar way, the TFS kanban is unnecessarily time-consuming to manage because items only appear on it if they are perfectly formed - get any of the estimate, remaining, completed, area, iteration, assigned-to, parent link, etc. wrong and it vanishes! So I've written, for example, a simple 'create task' tool that asks for the estimate, assignee and title, and fills in the rest - this took me a couple of hours to implement and has eradicated all the time-consuming errors and hassle of using TFS 'raw'.)
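To give an idea of how little code such a "resolve story and close children" tool needs, here is a sketch against the TFS 2013 client object model; the collection URL, storyId and state names are placeholders, and error handling is omitted:

// References: Microsoft.TeamFoundation.Client and
// Microsoft.TeamFoundation.WorkItemTracking.Client assemblies.
var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://tfs:8080/tfs/DefaultCollection"));
var store = tpc.GetService<WorkItemStore>();
var story = store.GetWorkItem(storyId);

// Close every child task/bug/issue that isn't already finished...
foreach (WorkItemLink link in story.WorkItemLinks)
{
    if (link.LinkTypeEnd.Name != "Child") continue;
    var child = store.GetWorkItem(link.TargetId);
    if (child.State != "Closed" && child.State != "Removed")
    {
        child.State = "Closed";
        child.Save();
    }
}

// ...then resolve the story itself.
story.State = "Resolved";
story.Save();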
When processing tasks, TFS provides 'Activity' states (planning, development, testing, documentation etc) - which implies that each single task will be passed linearly through a chain of different people to be implemented... but we feel this is a poor approach, because we want to encourage the team working on a story to work in parallel and work together, not "throw their bit over the fence to the next guy". So instead each person on the team creates one or more tasks under the story that represent the parcels of work (programming, testing, documenting) they must personally do to deliver the story, and each task only ever has one owner. (This works well in our scrum dashboard because it shows the story and its list of child tasks/bugs/issues, so the entire context of the story's work can be seen easily at a glance). The separate tasks allow the programmer and tester to work together in a tight, iterative, co-operative agile loop, often with progressive roll-out of parts of the feature for testing, rather than the programmer finishing all his work before passing the complete article over to the tester in a waterfall-y way. At the end of the iteration, the story-team demos their story to the wider development team, and they are all equally responsible for ensuring that everything needed is delivered. After the demo, the Product Owner/Champion then accepts the work as done (or rejects it). This vastly reduces the amount of work that gets dropped "between the cracks" where people think somebody else will do it, helping us to get to a solid delivery at the end. We've found communication within the team and story delivery significantly improved since we moved to this approach.
I should mention that to get good estimates and burn-downs we try to keep each task less than 5 days work, and to avoid micro-management we try to avoid splitting down tasks into anything under about 2 days (though obviously some tasks are necessarily shorter).
As I've mentioned, we log bugs/issues as children of the task or story they affect (and can also add Related links if they impact more than one story). At the end of the iteration, as well as demoing the new features to the rest of the team, the release build is regression-tested as a whole. Any bugs found are fixed in a release branch, and within (hopefully) a day or two we have a stable customer release. We aim to have a product of customer-releasable quality from every iteration, and to keep the number of outstanding bugs per developer below 5 (usually 1-3). Before introducing this system, we had an ongoing average of 20 bugs per developer, an unpleasant technical debt. (Note: we reserve some time in every iteration for fixing these bugs, but when bugs are too gnarly to fix then-and-there, we usually convert them into new stories so that they can be estimated and scheduled for a future iteration just like other work, so the bug-list and technical debt are never allowed to build up, and where possible bug fixing is not allowed to derail our iteration plan.)
We don't treat work in progress (items in an iteration) as Product Backlog - the product backlog is work that we plan to do in the future, and when it moves into an iteration it becomes actively worked on and no longer in the "to do" list (it's the Iteration backlog, not the Product backlog). When all of the work (task/bug) is complete, then the parent story can be Resolved ('we think it is "done"') and then Closed ('the Product Owner accepts it as "done"') and so a simple query (work under Project\Current that is Closed) will tell you what you have delivered this iteration.
Lastly when we close out the iteration, the whole iteration moves into Project\Completed, so then you can easily query all of the work which has ever been completed (under Project\Completed), and still grouped within their individual iterations. So at any time if you want to know what "Build 107" added, you can just do a query for all Closed stories under iteration path Project\Completed\107. We mark incomplete/abandoned work as Removed, so for us Closed means "Done". If work is not completed in one iteration and is continued in the next, then we simply move the story to the next iteration, and so the completed work then shows up in any queries for "Build 108" instead - so this perfectly tracks the achieved deliveries for an iteration.
To keep things consistent, only a few team members can change different types of item. So our "planning items" (Epics, Features, Stories) are only changed by the Project Manager or Product Owners. Tasks are all owned and thus created/changed/closed by the developer that is doing the work. PMs track progress of stories and devs track progress of tasks.
1) Product backlog - Under the 'configure schedule and iterations' what is the concept for setting the current 'backlog iteration'? Our team uses short 2 week iterations with a build number, but setting the build iteration as the current backlog makes all new PBIs scoped to only that iteration. Any items not complete in that iteration would disappear once I set the current build to the next iteration number. On the other hand, if I set it to the parent root node, I could see the PBI list getting rather large over time. What is the best method for managing PBIs that are unassigned and working in a simple Parent->build1/build2 etc structure?
TFS has two different backlogs: your team's product backlog and your team's sprint backlog. In the iteration configuration screen you define which iteration contains your team's product backlog (by setting the backlog iteration) and which iterations below that represent your sprints.
If you have a large list of PBIs, you can either put them in an iteration above the current backlog iteration, which effectively hides them from the backlog pages, or place them in a separate iteration that is a sibling of your backlog iteration.
2) Features - So I create a feature, perhaps it spans many work items and several tasks. They get completed over time, but I've noticed there's no 'auto' complete or status updates on parent items. So who is supposed to mark a Feature item complete, and when? If the product owner is supposed to use the features list to get an overview of work, they have no idea if all the dependent items have been completed and when to mark the feature Done.
There is no auto-complete or auto-close. Normally the Product Owner (a Scrum role) will keep an eye on what he has requested and knows roughly when a feature is about to be completed.
You can also view the hierarchy of Product Backlog Items to Features in the product backlog view by selecting the Features to Backlog Items view. This will also list the states of the underlying stories.
3) Work Items - Managing these, and in particular their 'state' or status, seems like a royal pain. On the task board you can't change a work item's state, only its tasks' states with drag and drop, which is nice. But you complete all the tasks, and the parent work item stays in status 'New'. Do you really have to micro-manage every work item, open it up, and set the state to Done?
Normally the product owner/project manager will approve stories for pickup and move them from New to Approved. Then during the sprint planning meeting (or at the start of a sprint), the team selects which items they will work on and moves these from Approved to Committed.
Then at the end of the sprint (or when all tasks under a story are done), the development team shows the product owner the finished work and moves the story to Done as well.
4) QA/testing - Every team member is responsible for testing each work item, so every item is tested by multiple people, with any issues found being logged. What's the best way to use work items or tasks for this?
Depends on the maturity of the team, and on your adoption of Test Manager (the Test Case work item). If your team is fairly mature and is using Test Manager to link Test Cases to your stories, then you can view the status of your tests in Web Access. If the team consistently works in an ATDD style, they'll do the work needed to make a test succeed before moving on to the next piece of work. In such a workflow it isn't really necessary to create "design-build-test" work items. The work item would probably be akin to "Make test X pass" and would include all the work to create the test, build the code and make the test pass.
5) Build Complete - Once every work item in the iteration is marked Done, I assume they are removed from the product backlog, correct? The exception to this seems to be the features they were tied to: the feature item itself remains open. How do stakeholders view a list of features that were completed in the current build?
Again, use the Features to Backlog Items view to see which features have had all their work finished. The stakeholders verify that this was indeed all they wanted and that they have no additional requests, work or feedback needed to truly complete the feature. If that is the case, they close the feature by moving it to Done.

re-order items with meteor 0.8+

I want to build a todo list with drag-and-drop, like https://github.com/meteor/meteor/tree/master/examples/unfinished/reorderable-list.
The problem is that I don't know how to handle the rank properly. I tried the example above; it works fine until the computed rank eventually stops changing.
So I thought it would be better to reorder my todo list each time I insert a new task or change the rank of an existing one.
First try, on the client:
var dropRank = 1;
// Shift every task at or after the drop position down one place...
Tasks.find({ rank: { $gt: dropRank - 1 } }, { fields: { _id: 1 } }).forEach(function (task) {
  Tasks.update(task._id, { $inc: { rank: 1 } });
});
// ...then insert the new task into the freed slot.
Tasks.insert({ rank: dropRank });
After ~150 tasks, it becomes slow to insert a new task at rank 1 and to reorder the ranks.
Second try, on the server (with a Meteor.method or with collection hooks):
// One multi-document update shifts all the affected ranks at once.
Tasks.update({ rank: { $gt: dropRank - 1 } }, { $inc: { rank: 1 } }, { multi: true });
After ~150 tasks, I can see the ranks updating slowly on the client.
If I try it with a local collection, it slows down after 400 tasks.
So the question is: is there a proper way to build a rank so that I can insert a task and display it without updating the other ranks?
Have you tested what's slowing you down: the update of the database or the re-rendering of the page? I made a simple replication of your application and found that the update did take some time when 400 divs were being written to the page, but when I limited the output of the data context to 50 rows, it felt really snappy.
For another project I'm working on, I found that I had to be pretty careful about how much I asked of the browser when updating the database. It took some testing, and for that project I found that 30 divs was about all I wanted to update at a time.
I gave up looking for another way of updating the ranks and re-rendering everything.
I split the data into 2 parts:
the static part: used to build the first view with #each, with reactive:false on the collection
the reactive part: a cursor observer that places new tasks, and deletes or moves tasks in the DOM when it wasn't the user himself who made the change.
I can now easily insert new tasks ahead of 500-700 other tasks, which I'm satisfied with. I tried with 1000 tasks but that was too much.
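As a footnote on the original question: fractional ranks avoid the renumbering entirely. Give a dropped task the midpoint of its neighbours' ranks, so only one document is ever written. A minimal sketch, assuming a numeric rank field and that above/below are the neighbouring task documents at the drop point (both hypothetical names):

// Pick a rank between the two neighbours (null/undefined = no neighbour).
function rankBetween(prev, next) {
  if (prev == null && next == null) return 1; // first task in an empty list
  if (prev == null) return next - 1;          // dropped at the head
  if (next == null) return prev + 1;          // dropped at the tail
  return (prev + next) / 2;                   // dropped between two tasks
}

// Only the new task is written - no sibling updates needed.
Tasks.insert({ rank: rankBetween(above && above.rank, below && below.rank) });

Float midpoints do exhaust their precision after roughly 50 consecutive splits of the same gap, so when rankBetween returns a value equal to one of its arguments, renumber that list once - far rarer than renumbering on every insert.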

Why is infinite loop protection being triggered on my CRM workflow?

Update
I should have added from the outset - this is in Microsoft Dynamics CRM 2011.
I know CRM well, but I'm at a loss to explain behaviour on my current deployment.
Please read the outline of my scenario and help me understand which of my presumptions/understandings is wrong (and therefore what is causing this error) - the behaviour is not consistent with my expectations.
Basic Scenario
Requirement demands that a web service is called every X minutes (it adds pending items to a database index)
I've opted to use a workflow / custom entity trigger model (i.e. I have a custom entity with a plugin registered on CREATE; the plugin executes my logic. An accompanying workflow starts on creation and waits until the "completed" time + [timeout period] expires; on expiry, it creates a new trigger record and the workflow ends).
The plugin logic works just fine. The workflow concept works fine to a point, but after a period of time the workflow stalls with a failure:
This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help.
So in a nutshell - standard infinite loop detection. I understand the concept and why it exists.
Specific deployment
Firstly, I think it's quite safe for us to ignore the content of the plugin code in this scenario. It works fine, it's atomic, and it hardly touches CRM (to be clear, it is a pre-event plugin which calls the remote web service, awaits a response, and then sets the "completed on" date/time attribute on my Trigger record before passing the Target entity back into the pipeline). So long as a Trigger record is created, this code runs and does what it should.
Having discounted the content of the plugin, there might be an issue that I don't appreciate in having the plugin registered on the pre-create step of the entity...
So that leaves the workflow itself. It's a simple one. It runs thusly:
On creation of a new Trigger entity...
it has a Timeout of Trigger.new_completedon + 15 minutes
on timeout, it creates a new Trigger record (with no "completed on" value - this is set by the plugin remember)
That's all - no explicit "end workflow" (though I've just added one now and will set it testing...)
With this set-up, I manually create a new Trigger record and the process spins nicely into action. Roll forwards 1 hr 58 mins (based on the last cycle I ran, and remembering that my plugin code may take a minute to finish running): after 7 successful execution cycles (i.e. new workflow jobs being created and completed), the 8th fails with the aforementioned error.
What I already know (correct me where I'm wrong)
Recursion depth, by default, is set to 8. If a workflow / plugin calls itself 8 times then an infinite loop is detected.
Recursion depth is reset every one hour (or 10 minutes - see "Warnings" in linked blog?)
Recursion depth settings can be set via PowerShell or SDK code using the Deployment Web Service in an on-premise deployment only (via the Set-CrmSetting Cmdlet)
What I don't want to hear (please)
"Change recursion depth settings"
I cannot change the Deployment recursion depth settings as this is not an option in an online scenario - ultimately I will be deploying to CRM Online too.
"Increase the timeout period on your workflow"
This is not an option either - the reindex needs to occur every 15 minutes, ideally sooner.
Update
@Boone suggested below that the recursion depth counter is reset after 60 minutes of inactivity rather than every 60 minutes. Therein lies the first misunderstanding.
While discussing with @alex, I suggested that there may be some persistence of CorrelationId between creating an entity via the workflow and the workflow that ultimately gets spawned... Well, there is. The CorrelationId is the same in both the plugin and the workflow, and in any records that spool from that thread. I am now looking at ways to decouple the CorrelationId (or perhaps the creation of records) from the entity and the workflow.
For the one-hour "reset" to take place you have to have NO activity for an hour - it doesn't reset one hour from the original activity. So since you have activity every 15 minutes, it never has a chance to reset. I don't know that this is set in stone anywhere... but that's my experience.
In CRM 4 it was possible to create a CRM Service in the child pipeline (Google "creating a CRM service in the child pipeline") and reset the correlation ID (using CorrelationToken.NewToken()). I don't see anything so easy in the 2011 SDK. No idea if this trick worked in the online environment. Is 2011 Online backwards compatible with CRM 4 plug-ins?
One thing you could try would be to use IExecutionContext.CorrelationId to scavenge the AsyncOperation (System Job) table. But according to the metadata, the attributes I think might be useful (CorrelationId, CorrelationUpdatedTime, Depth) are NOT valid for update. Maybe you could delete the rows? Even that may not help.
I doubt this can be solved like this.
I'd suggest a different approach: deploy a simple application alongside CRM and let it call the web service, which in turn can use the XRM endpoints in order to change the records.
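To sketch that approach: a small console app, run every 15 minutes by Windows Task Scheduler, creates the Trigger record directly through the XRM organization service. The existing pre-create plugin still makes the web-service call, and no workflow (and hence no depth counter) is involved. The URL and credentials are placeholders; new_trigger is assumed to be the custom entity's schema name from this thread:

using System;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

class ReindexScheduler
{
    static void Main()
    {
        var credentials = new ClientCredentials();
        credentials.Windows.ClientCredential =
            System.Net.CredentialCache.DefaultNetworkCredentials;

        // Connect to the CRM 2011 organization service endpoint.
        using (var service = new OrganizationServiceProxy(
            new Uri("http://crm/Org/XRMServices/2011/Organization.svc"),
            null, credentials, null))
        {
            // Creating the Trigger record fires the registered pre-create
            // plugin, which calls the indexing web service and stamps
            // "completed on" - the scheduler replaces the workflow loop.
            service.Create(new Entity("new_trigger"));
        }
    }
}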
UPDATE
Or, you can try something like this upon your CRM service initialization in the plugin (dug it up from one of my plugins), leaving your workflow untouched:
CrmService service = new CrmService();
// initialize the service here, then...
CorrelationToken corToken = new CorrelationToken();
corToken.CorrelationId = context.CorrelationId;
corToken.CorrelationUpdatedTime = context.CorrelationUpdatedTime;
// WILD GUESS: enforce unlimited depth?
corToken.Depth = 0; // this was: context.Depth;
// update the correlation token on the service
service.CorrelationTokenValue = corToken;
I admit I don't really remember much about this (code dates back to about 2 years ago), but it might help.

In TFS, can I use an added 'In Progress' state to determine how long tasks take?

I've modified the Agile task template in TFS to include a new 'In Progress' state. When work is started on an item the 'assigned user' will set the task from 'Active' to 'In Progress'. This helps me to know which tasks have been started.
I was, however, thinking that I might be able to use this new state to figure out how long things take. Is there a way I could get the difference between the 'State Change Date' for the 'In Progress' and 'Closed' states?
The out-of-the-box TFS queries seem to be limited.
I don't have access to TFS right now so I can't confirm whether this is already built in, but one option you have is to add two new fields to the work item for the start and stop dates. You can have the workflow set those fields when you transition into and out of a state.
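For the "set on transition" part, the work item type definition can stamp a date server-side on the transition itself; a sketch (MyCorp.StartedDate is an illustrative custom field refname, and the field must also be declared in the type's FIELDS section):

<TRANSITION from="Active" to="In Progress">
  <REASONS>
    <DEFAULTREASON value="Work started" />
  </REASONS>
  <FIELDS>
    <!-- Stamp the server clock when work begins -->
    <FIELD refname="MyCorp.StartedDate">
      <SERVERDEFAULT from="clock" />
    </FIELD>
  </FIELDS>
</TRANSITION>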
I am curious as to the answer as well.
One thing to take into account, though, is that if a task runs over a weekend, you would not want those days to be taken into account if you are looking for a measure of "developer days". Taking it a step further, it would be nice to be able to somehow define days that work would not happen on (such as when the office is closed) to get a more accurate reflection of the time spent.
