When using Greenhopper with Jira, it is clear that Greenhopper uses the "fixed in version" field in Jira issues to represent which scrum sprint an issue is being worked on in. This in itself is a bit hackish, because an issue can conceivably be worked on in multiple sprints, and because the relationship between an issue and a sprint is precisely that it has been worked on during the sprint, with the recognition that you might not complete the task within the planned time.
But okay, it might be a hack one can live with, at least if nothing else tries to use the "fixed in version" field for another purpose.
But I am finding that there are other concerns that also build on the "fixed in version" field. Specifically, one should be able to see which issues are planned to be addressed in which release versions (real-life versions), and to use this information as a means of verification/QA.
How are other Greenhopper users combining these two uses of the "fixed in versions" field? Are you setting the sprint versions as sub-versions of the release versions? Are you using some custom field for the release versions? I am finding this to be difficult because the scrum team is working on multiple components, independently versioned. Also, there may be bugfix releases and feature development on the same component, happening on the same sprint.
To summarise, I find it unavoidable that the team will be working on "Some Product 3.4.0" (a feature release), "Some Product 3.3.1" (a bugfix release), and "Other Product 1.2" within the same sprint. It would not be possible to mark this sprint as a subversion of each of these three versions (across two different components), and making three different sprints in Greenhopper would really dilute the value of Greenhopper.
Are other Greenhopper users in this same situation? How have you dealt with it?
There are two issues at play here.
First, your sprint versions are actually "subversions" of your release version. This means that your stories actually get two values in the fixVersion field.
You can configure this in Greenhopper by setting up a master version.
So if you have a three-sprint release for version 1.0, you set your release date for 1.0 and put your stories in Sprint 1, Sprint 2, and Sprint 3, giving a hierarchy like:

1.0
    Sprint 1
    Sprint 2
    Sprint 3
1.1
...
When you play STORY-1 in Sprint 1, you will find that STORY-1 will have a fixVersion of "1.0, Sprint 1"
For items that you're tracking for the release, but not in a sprint, simply set the fixVersion to 1.0.
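For illustration only, here is a minimal Python sketch of how that double fixVersion value can be exploited when reporting. The issue records, field names, and helper function are all invented for this example; this is not the Jira API.

```python
# Hypothetical issue records mirroring the "1.0, Sprint 1" double-labelling
# scheme described above; field names are illustrative, not Jira's.
issues = [
    {"key": "STORY-1", "fix_versions": ["1.0", "Sprint 1"]},
    {"key": "STORY-2", "fix_versions": ["1.0", "Sprint 2"]},
    {"key": "STORY-3", "fix_versions": ["1.0"]},  # tracked for release, not in a sprint
    {"key": "STORY-4", "fix_versions": ["1.1", "Sprint 4"]},
]

def in_version(issues, version):
    """Return the keys of issues carrying the given fixVersion value,
    whether it names a release (1.0) or a sprint (Sprint 1)."""
    return [i["key"] for i in issues if version in i["fix_versions"]]

print(in_version(issues, "1.0"))       # everything tracked for release 1.0
print(in_version(issues, "Sprint 1"))  # just the Sprint 1 work
```

Because each story carries both values, the same data answers both the release question and the sprint question.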
Second (and this is just a tip), you can use separate projects for your sprint work and for your production support work. This is helpful in large organizations.
We have been faced with the same problem in various organisations, where a team is not only working on multiple releases (as you detail in your example) but is also involved in helping out the support organisation when customer issues are raised, or when User Acceptance Testing of previous releases shows issues that 'need to be dealt with' immediately.
We therefore introduced a concept where issues are separated from tasks, but linked together using the 'issue linking' feature of JIRA. Issues (or specifications, as we call them) are managed in a release project, while tasks are managed in a team project.
The versioning in a release project denotes releases (e.g. 2.2-patch1, 1.1, ...)
The versioning in a team project denotes sprints (sprint 10-15, sprint 10-20)
The release project only contains bugs, feature requests, inquiries, ...
The team project only contains tasks, stories, ...
Automation allows us to keep the specifications and related tasks in sync:
The typical scenario runs as follows
A specification is created in a release project.
A support person creates one or more tasks in the team projects, and links the specification with the tasks using a 'is implemented by' link.
From the moment work is started on a task, the specification advances to an 'in development' state.
The specification is considered resolved once all related tasks have been addressed.
The transitions for the specifications are triggered automatically.
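As a sketch of that automation, the specification's state could be derived from its linked tasks roughly as follows. The state names and the derivation rule are assumptions for illustration; they are not JIRA's actual workflow engine.

```python
# Derive a specification's state from the states of its
# 'is implemented by' linked tasks (state names are assumed).
def specification_state(task_states):
    """Map a list of linked-task states to a specification state."""
    if not task_states:
        return "open"
    if all(s == "resolved" for s in task_states):
        return "resolved"        # every related task has been addressed
    if any(s in ("in progress", "resolved") for s in task_states):
        return "in development"  # work has started on at least one task
    return "open"

print(specification_state(["in progress", "open"]))   # work has started
print(specification_state(["resolved", "resolved"]))  # all tasks done
```

A listener on task transitions can recompute this and fire the matching specification transition, which is what our automation does.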
This concept of separation between specification and task allows you to support many different project organisations, such as:
An epic which needs to be developed over a number of sprints.
An issue which needs to be addressed by multiple teams in various locations
A team which works on a new product and maintains an old one.
I can provide more information on this subject if you are interested.
I too have been plagued by the same problem, and have found a feature request in JIRA/GreenHopper to add a new field for sprints, which would allow sprint and release version information to be tracked independently.
If you want to see this become reality as much as I do, then go over to http://jira.atlassian.com/browse/GHS-945 and vote for the issue. This quote sums it up: "If GreenHopper had iterations as first-class citizens..."
At the moment, though, it is likely that we will have to create a new custom field called 'versions' in JIRA and use that to track the 'real product versions' that an issue relates to. We also have a commit hook in our source code repositories, so when a developer makes a commit, it updates the JIRA ticket with the 'real product version' that relates to the source code they are committing. We keep this information in a config file, so the commit hook knows which version to use for which source code repository/path. This is not ideal, but it is our only option at present.
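As a rough sketch of the idea, the hook's lookup from committed path to product version might look something like this in Python. The paths and version numbers below are invented for illustration, and the real config file format is not reproduced.

```python
# Invented mapping from repository paths to 'real product versions';
# in practice this would be loaded from the commit hook's config file.
PATH_VERSIONS = {
    "some-product/trunk": "3.4.0",
    "some-product/branches/3.3": "3.3.1",
    "other-product/trunk": "1.2",
}

def version_for_path(changed_path):
    """Resolve the product version for a committed path by
    longest-prefix match against the configured paths."""
    matches = [p for p in PATH_VERSIONS if changed_path.startswith(p)]
    if not matches:
        return None  # path not covered by any configured product
    return PATH_VERSIONS[max(matches, key=len)]

print(version_for_path("some-product/trunk/src/main.c"))
```

The hook would then write the resolved version onto the JIRA ticket named in the commit message.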
Just use rapid boards in GreenHopper. They were introduced not so long ago, but they give you almost everything you need.
You can put LABELS on your issues, for instance 'sprint-1', 'sprint-2', and so on. Then create an issue FILTER, and then create a RAPID BOARD based on the filter. In the end you will get a nice board with the current issues of sprint X, regardless of version and even project.
Please note that a sprint is essentially not a version of the software. In the real world, when you have more than one customer, you need to fix and support a lot of versions, but you still need to keep everything on track. In this case sprints are still great, but they just represent the amount of work that should be done during a time period. A version, on the other hand, is what you will present to anybody outside your development team. So, do not mix versions of the software with sprints (a 'mapping' between time and tasks)! Do not use hierarchies where a sprint version is a child of a real software version! Keep unrelated things separated!
Shouldn't a sprint, in theory, have a "shippable" product at the end? That means a sprint has its issues either solved or "failed".
That is why I'd recommend splitting the issue into smaller pieces.
I try to use K.I.S.S. whenever possible, so I've been using the label field to mark releases. I rarely need to see the release in the context of scrum/taskboard. So when it comes time to view all items in a release, I just run a search for my release name.
I am managing releases for a team of 8 developers. We have three environments:
DEV - where we all make our changes
UAT - an environment for users to test changes
LIVE - live environment
We use Visual Studio 2015 and TFS 2017.
Developers make changes to files and submit them for release to UAT by emailing a list (sometimes with a changeset number). Sometimes different users will make changes to the same files but not all changes should be released.
Once tested in UAT, the changes are released to Live; however, sometimes a file that needs to move from UAT to Live contains earlier changes that are not yet approved for Live release.
Could I ask for your advice on the best way to manage this process? Unintended changes keep getting released to UAT or Live when they should remain in DEV or UAT.
Any advice would be very welcome. Thanks
Usually this kind of "the best way" question is primarily opinion-based and hard to answer.
    Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise.
Developers make changes to files and submit them for release to UAT by emailing a list (sometimes with a changeset number).
For this scenario, instead of using E-Mail to send lists, perhaps you could use this extension
This extension is a build task you can use in build steps. This task generates a markdown release notes file based on a template passed into the tool. Here is an example of release notes output:
Release notes for build SampleSolution.Master

Build Number: 20160229.3
Build started: 29/02/16 15:47:58
Source Branch: refs/heads/master

Associated work items:
Task 60 [Assigned by: Bill] Design WP8 client

Associated change sets/commits:
ID bf9be94e61f71f87cb068353f58e860b982a2b4b Added a template
ID 8c3f8f9817606e48f37f8e6d25b5a212230d7a86 Start of the project
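Just to illustrate the shape of the output, release notes like the above could be assembled from work-item and changeset data roughly as follows. This is a hedged sketch, not the extension's actual template engine, and the record fields are assumptions.

```python
# Assemble a plain-text release-notes document from build metadata,
# work items, and changesets (record shapes are invented for this sketch).
def release_notes(build_number, branch, work_items, changesets):
    lines = [
        f"Build Number: {build_number}",
        f"Source Branch: {branch}",
        "",
        "Associated work items:",
    ]
    for wi in work_items:
        lines.append(f"Task {wi['id']} [Assigned by: {wi['assignee']}] {wi['title']}")
    lines.append("")
    lines.append("Associated change sets/commits:")
    for cs in changesets:
        lines.append(f"ID {cs['id']} {cs['comment']}")
    return "\n".join(lines)
```

In the real extension, the layout comes from a template file passed to the build task rather than being hard-coded like this.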
The suggestion in the comment fits your needs and circumstances: you could create three branches standing for your three environments, and for each branch use branch policies (Git), which will protect your branches and avoid unintended changes being merged to UAT and Live.
It is hard for the TFS system, or any other tool, to judge whether some files are approved for release yet; that is down to your team's management. You could use permissions in TFS to limit which users have access to deployment or can do the release, for example only the PM and team leader. Combine this with work items, charts, test management, reports, and other functions in TFS.
Note: Team Foundation Server is a product that provides not only source code management, build, and release, but also reporting, requirements management, project management (for both agile software development and waterfall teams), lab management, and testing capabilities. It covers the entire application lifecycle and enables DevOps capabilities.
I suggest you first go through Release Management in TFS, and also take a look at how to configure your release pipelines for deployments to multiple environments.
We recently adopted the concept of feature branches in one of our bigger projects, to segregate work on different aspects of the product that can be completed independently of each other.
For each so-called feature, we are creating the following:
a branch from 'main', aptly named after what the feature is supposed to be
a new team in the project portal, containing the people that will work on the feature
a build definition to validate check-ins against the source on the branch
The main point I would like to see discussed here is about the build definition. Currently, each one of them is set to gated checkins.
The question then, is: what is the best practice on associating work items to a build?
In our case, these feature branches are supposed to be disposable: we would like to be able to delete these builds/branches/teams later on when the feature is complete, but still be able to track them throughout the product lifecycle.
If I associate work items with these temporary builds, I'll lose tracking capability later on when the feature implementation ends. At the same time, I just found out that gated checkins always associate work items, regardless of what is configured in the build definition.
Would it be feasible to disable work item integration with the feature branches (in this case also converting them from gated to continuous integration) and enable it in the main build, so that these features can be tracked in the main product line? Or maybe this should be only enabled for Release build definitions, so that we can find out what was integrated on a certain release? For those of you who follow the sprint/feature concept, how do you handle this situation? Do you also have a build for each branch?
Update:
I just found something similar (but not exactly what I wanted) in this question. The answer there led me to a plugin that automatically associates work items on merge check-ins. This should offer great traceability on its own, so I think I'll give it a shot.
Would still like to hear what your thoughts are in regards to the builds in this scenario.
You're approaching this wrong, IMO. You shouldn't be worrying about associating builds and WIs, but rather about associating changesets and WIs. When your developers check in changes on the feature branch, you should ensure they link them to the relevant WI(s). You can even enforce this via a check-in policy.
Now if you ever want to inspect that feature in the future to see all the changes associated with it, you can by inspecting the Feature WI, and looking at all linked Changesets. Even if you delete the branch all the Changesets are still available.
I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unsolved issues in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that;
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature, or an enhancement to an existing feature, is resolved in the issue tracking system.
Revision: changes not affecting the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now, I want to clarify what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to a Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Since I am a Test-Driven Development oriented developer, I write the unit, DBUnit, and user-acceptance tests (for example using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how to achieve this. The auto-cycled part is the following:
1. The source control system should have a post-commit hook that triggers the continuous integration tool to build the project (P).
2. While building, in the proper phase, the test code should be run and its reports generated.
3. The user-acceptance tests should be performed on a dedicated server (for example JBoss or Tomcat). The order should be: bring the server up, run the UA tests, generate the UA test reports, and bring the server down.
4. If all these steps complete successfully, versioning should be performed. In the versioning part, a Maven plugin (or whatever) should take the number of issues solved from the issue tracking system, increment the related version fragments (minor and revision), and finally append the build number. The version fragments may be saved in the manifest file in order to show them in the user interface.
5. Last but not least, the CI tool should deploy the build to the test environment.

That is the whole auto-cycled process I want.
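As a sketch of the versioning step only, and assuming my reading of the scheme is right (minor bumped by the number of resolved features/enhancements, revision by the number of resolved bugs, build number appended as-is), the calculation might look like:

```python
# Compute the next version string from the counts reported by the issue
# tracker and the CI build number (the counting rule is an assumption).
def next_version(current, features_solved, bugs_solved, build_number):
    """Bump minor by resolved features, revision by resolved bugs,
    and replace the build fragment with the CI build number."""
    major, minor, revision, _ = (current.split(".") + ["0"] * 4)[:4]
    return "{}.{}.{}.{}".format(
        major,
        int(minor) + features_solved,    # new features / enhancements
        int(revision) + bugs_solved,     # bugs fixed
        build_number,
    )

print(next_version("1.2.3.0", 2, 1, 57))  # -> 1.4.4.57
```

The CI tool would call something like this after the tests pass, then write the result into the manifest.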
The deployment of the artifact to the production environment should be done automatically or manually?
Let's start with the side question: On the automatic deployment to production, this requires the sign-off of "the business", whomever that is. How good do your tests need to be to automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug ids. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally you'll need your own bit of scripting to do the query and update the build number accordingly.
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?
What's the best way in your experience to designate where work items should be coded? Do you use a particular field? We currently use a custom "Version to Fix" field in our WIT, but it doesn't relate directly to the Dev or Main code branches. We end up communicating which versions (v6.1, v6.2, etc.) relate to which branches, but there is still a "mapping" that needs to be done. This really only works for a "hot fix" in a released version, because the branch is named the same as the "Version to Fix". How are work items designated so that it is easy for developers to know where to code, while requiring the least amount of maintenance?
Updated: Just to clarify a bit ... we have Dev, Main, and Release (one for each release) branches. We do 90% of our development in Dev. Once an iteration has ended we reverse-integrate Dev to Main; however, we don't release it at that point. Testing is done on Main for a while, and select bugs may be fixed on Main. All this goes on while the next iteration (new stories) moves on in Dev. Once things look good on Main, we branch to a new version (a new Release branch), and development on Main ends until the next iteration starts and we again reverse-integrate to Main from Dev. Of course, we forward-integrate Main to Dev once things are fixed on Main. At any point we may have a bug that we want fixed on Dev, Main, or a released version. With bug fixes going on in Main, Dev, and Release, we are confusing some developers. We tell them the "version", but they have to know which future or current version links back to which branch. That's where I'm trying to find the best practice with the Task work item.
You can have multiple versions (changesets) within a branch, but the proliferation of branches is not a good idea.
A simple (but powerful) branching strategy is to create a main branch, then create two children: 1) Dev, 2) QA. Now the question is a non-question. Developers do their work in the Dev branch. When they're ready, they reverse-integrate changes to main. Then changes are forward-integrated to QA. If the build passes QA, it can be rolled to production.
Some organizations will employ special branching practices like creating a branch for a new Major version or even a branch for a special feature. These follow the same process of reverse integration into main (and subsequent forward integration dev branches when appropriate).
Builds can be linked to changesets. If a particular build has a bug, the developers look up the changeset number, pull it down from version control, check the work in associating it with appropriate work items for the Bug, and rebuild it. That new "bug fix" version now has a unique build id and changeset id associated with it.
That's really going to depend on your shop; our environment works on an iterative build, so the bug fixes always go into the most recent branch (named via date stamp, e.g. Branch_05252011).
If you have some other kind of versioning / branching strategy, the best option would be to place the desired fix branch in the title:
V6.2 - Fix the ItExplodedException occurring in SomeClass
Alternatively, I believe TFS can even offer a specialized drop-down that you can populate with custom content when creating the work item. You could then populate that with the branch to target.
Here is a very effective solution: set up a check-in policy using TFS Power Tools, and associate a Custom Path policy with a Work Item Query policy, so that all check-ins for a branch require association with a work item that falls into a branch-specific query. That way, if the check-in does not have a work item that matches the branch, it will not be allowed. The query can be defined using whatever criteria you need, and the queries themselves can be updated and reassigned to different branches as needed.
One caveat, however: the queries themselves are evaluated client-side, so as an administrator you can update the query to block or allow certain items into a branch, but the developers will need to refresh Team Explorer to pick up the updated query; otherwise it can allow unauthorized items in, or block items that are authorized. One solution I am looking into for this is to add a custom check-in policy that is always satisfied but causes the VS IDE to refresh Team Explorer in the process. I have asked MS to add this directly to their TFS Power Tools Work Item Query check-in policy, but they have not responded.
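The gate described above can be sketched as a simple predicate check. Branch names, field names, and the queries below are invented for illustration; real TFS policies use work item queries (WIQL), which are not reproduced here.

```python
# Allow a check-in only if at least one linked work item satisfies the
# query bound to the target branch (queries here are plain predicates).
BRANCH_QUERIES = {
    "Dev":  lambda wi: wi["state"] == "Active",
    "Main": lambda wi: wi["type"] == "Bug" and wi["approved"],
}

def checkin_allowed(branch, linked_work_items):
    """Evaluate the branch-specific query against the linked work items."""
    query = BRANCH_QUERIES.get(branch)
    if query is None:
        return False  # no policy configured: block by default in this sketch
    return any(query(wi) for wi in linked_work_items)
```

The client-side caveat above corresponds to BRANCH_QUERIES being a stale local copy: until it is refreshed, the check is evaluated against outdated queries.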
I'm the sole developer working on a couple of webapp sites. I have them in subversion, but I'm not using a project management tool.
I recently got Redmine up and running, and I want to set up the projects in there. What I'm looking for is a recommendation on how to structure these two projects in Redmine. From what I can glean, the structure is Project -> Subproject, so I'm trying to map this to my to-do list structure. From my to-do list, there are three kinds of tasks: new features, bug fixes, and maintenance (not quite bug fixes, but things that really need cleanup).
Should I make each webapp a top-level project, with Features, Bugs, and Maintenance as subprojects? What other ways of organizing projects are there? For instance, in the subversion manual, they recommend having project/trunk, project/branches, project/testing, project/releases, etc. Are there similar guidelines for working in Redmine?
As usual, when you configure a system you need to customise it as much as possible to try and meet your own needs. I personally don't know of any guidelines or recommendations for Redmine per-se, however I can relate what we do here and I hope that will help you! :-)
Features/Bugs/Maintenance are just ways to label your tasks so that you can filter them. These are a specific label known as a "tracker" in Redmine. You can define your own trackers for additional types of task.
Project and Sub-Project are also effectively a way of labelling your tasks, but grouping them under a broader umbrella category. When you create 'projects', you assign the trackers you will need to them. In our case, we create an API, and have distinct trackers to identify bugs, features, and modifications, with (effectively) duplicated tracker names so that we can identify whether tasks are for desktop or DSP programmers. The sub-projects are used to identify product lines or customisations that our customers require specific support for. We also use version labels to identify specific releases in each subproject, so that we can get a nice roadmap view of all of the tasks we are tracking. We have multiple projects in our Redmine system, each configured in a similar manner, with some project tasks linked across projects as "related" issues so that we can identify dependencies.
This is just one way to configure Redmine, but is the simplest we could manage given the complex relationships between some of our projects. It is the second configuration that we have tried and we find it works well. FYI, the first configuration was on a test system to allow us to work out what we needed from the system after migrating from Trac, a couple of years ago. The current configuration has been in use for about 2 years and seems to suit our needs nicely.
As I said earlier, you need to decide what you need from the system, but the simplest approach is to think about how you view a project from the top down, configure your system to match your processes, and not change your processes to match the tool - always the more 'disastrous' option IMHO. I wouldn't recommend tracking bugs and features etc in separate projects, as getting your roadmaps together is usually harder, and it also makes it harder to visualise the total task load for a given project. Even dividing task types into subprojects could be problematic, as it complicates things if you find you need to support multiple product release cycles, adding to your workload in terms of managing your Redmine system.
That's about all I can think of for now. I hope that helps you. :-)
The kinds of tasks you mention seem to be what Redmine calls trackers. You can define your own trackers. In my opinion, you shouldn't need a sub-project for each "kind of task", but rather a tracker.