I'll be leading a new project soon, and I've been pondering what the basic infrastructure for a software project should be. These are the things I think every project should have:
- Coding style conventions
- Naming conventions
- Standard project directory structure (e.g. the Maven standard directory layout)
- Project management and issue tracking (e.g. Trac, Redmine)
- Continuous integration server (e.g. Hudson, CruiseControl)
I'm not sure if I've missed anything. Would anyone like to add to the list?
As a preliminary answer, check out the Joel test:
http://www.joelonsoftware.com/articles/fog0000000043.html
Just an appetizer:
Do you use source control?
Can you make a build in one step?
Do you make daily builds?
Do you have a bug database?
Do you fix bugs before writing new code?
Do you have an up-to-date schedule?
Do you have a spec?
Do programmers have quiet working conditions?
Do you use the best tools money can buy?
Do you have testers?
Do new candidates write code during their interview?
Do you do hallway usability testing?
Revision control system (e.g. Subversion, CVS, Git)
In addition to yours, I would add:
Unit Test Strategy
Integration Test Strategy
Defined Process
Release (delivery) strategy (like milestones, working packages and so on)
Source control branching strategy
What about documentation - how (comments in code, high-level specs), when, how much, and by whom?
How you will test - unit/acceptance/user testing
Code versioning - SVN/Git or similar (or is that covered by Trac?)
Team roles and responsibilities - these need to be defined in the context of your project
Knowledge management is crucial. Since you already plan to use a wiki (in Trac or Redmine), you could use it for KM as well.
Functional testing is a mandatory part of any project. Unit testing is great and works well for Agile projects, but functional testing is still necessary. You need at least a basic Test Plan. If you plan to have multiple projects or sub-projects, a Test Strategy document or wiki page would be good.
Test Cases, Acceptance Test Cases etc could be driven by your User Stories or their equivalents but they still have to exist in some form.
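For example - purely as a sketch, assuming a Java codebase with JUnit 4 on the classpath, and with entirely hypothetical class and story names - a test case derived from the user story "a customer can add an item to their cart" could start life as:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical example of a test case derived from a user story.
    // All names here are illustrative, not from any real project.
    public class ShoppingCartTest {

        // Minimal stand-in for the class under test, included only
        // to keep this sketch self-contained and compilable.
        static class ShoppingCart {
            private int items;
            void add(String sku) { items++; }
            int itemCount() { return items; }
        }

        @Test
        public void addingAnItemIncreasesTheItemCount() {
            ShoppingCart cart = new ShoppingCart();
            cart.add("SKU-42");
            assertEquals(1, cart.itemCount());
        }
    }

However formal or informal your Test Plan document is, having the cases exist in an executable form like this means a CI server can run them on every commit.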
I would throw a file-sharing server into the mix too. I thought version control was so basic that I didn't even bother to put it in the list, but it's a good point.
Configuration Management Plan. You need a documented approach to your development workstreams, how you will merge between them, etc.
I work on a project far too big to reside in a single Visual Studio / Eclipse / NetBeans project and we have a "common software" team responsible for developing and maintaining software libraries used by other teams.
I'm struggling with how to manage the development of and changes to the common software. When method signatures and classes change, do I keep the old versions and mark them deprecated? The current plan is to distribute a new build of common libraries every two weeks.
Definitely set up a repository. If you are a Maven-hater, check out Gradle, which uses Ivy. Maven has a reputation for being complex, but it does have better tool support: IDEs support Maven either out of the box or with plugins, and they give you graphs showing what the jars in your project depend on, so you can spot conflicts easily.
Either Ivy or Maven will sort out your dependencies so that your projects use the right versions. Each of your projects should declare (in the pom.xml, for Maven) which version of each of your common libraries it uses.
A common feature of most version control systems is support for external (vendor) branches: common software is fetched from a shared repository and integrated into each project on update.
A key difficulty lies in documenting changes to the public API of the common software. I see two solutions: good communication of deprecated signatures, and continuous integration, where finding out about deprecated methods can prove painful.
There are a few options available to you.
Option A: use a repository
For Java-based systems I would recommend using Ant+Ivy or Maven and creating an internal repository holding the code from those common projects.
Option B: Classpath Project
If setting up a repository is too much, what you can do is create an Eclipse project called classpath, with the following three directories in it:
classpath\
  docs\
  sources\
  jars\
The team working on the common project can have a build script that compiles the common code and places it into the classpath project; all the rest of the dev team needs to do is check out the classpath project and reference the files in it during development.
Personally I am a fan of Option B, unless there is a full-time person dedicated to doing builds, in which case I go for Option A.
The way to manage changes in method signatures is to follow a common versioning convention: when you increase the major version number, dependent code will have to change; when you only increase the minor version number, dependent code does not need to change. Marking code as deprecated is a very practical option, because IDEs and build systems will issue warnings and let coders switch to the newer versions. If the same team is changing both the common code and the main project, you will need the actual Eclipse projects all checked out in the same workspace, so that refactoring tools can do their job.
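As a minimal sketch of the deprecation idea (the class and method names are hypothetical, not anyone's actual API): the old signature stays in place for a minor release, delegates to its replacement, and tells callers where to go:

    // Hypothetical example of evolving a method signature in a common library.
    public class PriceCalculator {

        /**
         * @deprecated As of 2.1, use {@link #total(double, double)} instead;
         *             planned for removal in the next major version (3.0).
         */
        @Deprecated
        public double calc(double net, double taxRate) {
            return total(net, taxRate); // old signature delegates to the new one
        }

        /** Replacement for the deprecated calc(). */
        public double total(double net, double taxRate) {
            return net * (1.0 + taxRate);
        }
    }

Dependent projects keep compiling against the minor release, while every IDE and build log flags the remaining callers that need to migrate before the next major version.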
Unless the code in common will be used across many projects, I would keep it all in one project; you can use multiple source folders to make navigating the various parts of the code easy. If you are having trouble with developers checking in things that break the build, I would recommend more frequent check-ins, or having developers work on branches and merge from the trunk to their work branch frequently to eliminate sync problems; when done, they can merge from the branch back to the trunk. The latest versions of Subversion have decent support for this, and DVCSs like Mercurial and Git are excellent at it.
I'm the sole developer working on a couple of webapp sites. I have them in Subversion, but I'm not using a project management tool.
I recently got Redmine up and running, and I want to set up the projects in there. What I'm looking for is a recommendation on how to structure these two projects in Redmine. From what I can glean, the structure is Project -> Subproject, so I'm trying to map my to-do-list structure onto it. My to-do list has three kinds of tasks: new features, bug fixes, and maintenance (not quite bug fixes, but things that really need cleanup).
Should I make each webapp a top-level project, with Features, Bugs, and Maintenance as subprojects? What other ways of organizing projects are there? For instance, the Subversion manual recommends having project/trunk, project/branches, project/testing, project/releases, etc. Are there similar guidelines for working in Redmine?
As usual, when you configure a system you need to customise it as much as possible to meet your own needs. I personally don't know of any guidelines or recommendations for Redmine per se, but I can relate what we do here, and I hope that will help you! :-)
Features/Bugs/Maintenance are just ways of labelling your tasks so that you can filter them. This specific kind of label is known as a "tracker" in Redmine, and you can define your own trackers for additional types of task.
Project and sub-project are also effectively ways of labelling your tasks, but they group them under a broader umbrella category. When you create projects, you assign to them the trackers you will need. In our case we develop an API, and we have distinct trackers to identify bugs, features and modifications, with (effectively) duplicated tracker names so that we can tell whether a task is for the desktop or the DSP programmers. The sub-projects are used to identify product lines or customisations that our customers require specific support for. We also use version labels to identify specific releases in each sub-project, so that we get a nice roadmap view of all the tasks we are tracking. We have multiple projects in our Redmine system, each configured in a similar manner, with some tasks linked across projects as "related" issues so that we can identify dependencies.
This is just one way to configure Redmine, but it is the simplest we could manage given the complex relationships between some of our projects. It is the second configuration we have tried, and we find it works well. FYI, the first configuration was on a test system, to let us work out what we needed from the system after migrating from Trac a couple of years ago. The current configuration has been in use for about two years and seems to suit our needs nicely.
As I said earlier, you need to decide what you need from the system, but the simplest approach is to think about how you view a project from the top down, and configure your system to match your processes rather than changing your processes to match the tool - always the more "disastrous" option, IMHO. I wouldn't recommend tracking bugs and features in separate projects, as getting your roadmaps together becomes harder, and it also makes it harder to visualise the total task load for a given project. Even dividing task types into sub-projects could be problematic, as it complicates things if you find you need to support multiple product release cycles, adding to the workload of managing your Redmine system.
That's about all I can think of for now. I hope that helps you. :-)
The kinds of tasks you mention seem to be what Redmine calls trackers. You can define your own trackers. In my opinion, you shouldn't need a sub-project for each kind of task - just a tracker.
When using GreenHopper with Jira, it is clear that GreenHopper uses the "fixed in version" field of Jira issues to represent which scrum sprint an issue is being worked on in. This in itself is a bit hackish, because an issue can conceivably be worked on across multiple sprints, and because the relationship between an issue and a sprint is precisely that it was worked on during that sprint, with the recognition that you might not complete the task within the planned time.
But okay, it might be a hack one can live with, at least if nothing else tries to use the "fixed in version" field for other purposes.
But I am finding that there are other concerns that also build on the "fixed in version" field. Specifically, one should be able to see which issues are planned to be addressed in which release versions (real-life versions), and to use this information as a means of verification/QA.
How are other GreenHopper users combining these two uses of the "fixed in version" field? Are you setting the sprint versions as sub-versions of the release versions? Are you using some custom field for the release versions? I am finding this difficult because the scrum team is working on multiple, independently versioned components. Also, there may be bugfix releases and feature development on the same component happening in the same sprint.
To summarise, I find it unavoidable that the team will be working on "Some Product 3.4.0" (a feature release), "Some Product 3.3.1" (a bugfix release), and "Other Product 1.2" within the same sprint. It would not be possible to mark this sprint as a sub-version of each of these three versions (across two different components), and making three different sprints in GreenHopper would really dilute the value of GreenHopper.
Are other Greenhopper users in this same situation? How have you dealt with it?
There are two issues at play here.
First, your sprint versions are actually "sub-versions" of your release version. This means that your stories actually get two values in the fixVersion field.
You can configure this in Greenhopper by setting up a master version.
So if you have a 3-sprint release for version 1.0, you set your release date for 1.0 and put your stories in Sprint 1, Sprint 2, and Sprint 3, such that:
1.0
  Sprint 1
  Sprint 2
  Sprint 3
1.1
  ...
When you play STORY-1 in Sprint 1, you will find that STORY-1 has a fixVersion of "1.0, Sprint 1".
For items that you're tracking for the release, but not in a sprint, simply set the fixVersion to 1.0.
Second (and this is just a tip), you can use separate projects for your sprint work and for your production support work. This is helpful in large organizations.
We have faced the same problem in various organisations, where a team is not only working on multiple releases (as you describe in your example) but is also involved in helping out the support organisation when customer issues are raised, or when User Acceptance Testing of previous releases shows issues that "need to be dealt with" immediately.
We therefore introduced a concept where issues are separated from tasks but linked together using the "issue linking" feature of JIRA. Issues (or specifications, as we call them) are managed in a release project, while tasks are managed in a team project.
The versioning in a release project denotes releases (e.g. 2.2-patch1, 1.1, ...)
The versioning in a team project denotes sprints (e.g. sprint 10-15, sprint 10-20)
The release project only contains bugs, feature requests, inquiries ..
The team project only contains tasks, stories, ...
Automation allows us to keep the specifications and related tasks in sync:
The typical scenario runs as follows
A specification is created in a release project.
A support person creates one or more tasks in the team projects and links the specification to the tasks using an "is implemented by" link.
From the moment work is started on a task, the specification advances to an "in development" state.
The specification is considered resolved once all related tasks have been addressed.
The transitions for the specifications are triggered automatically.
This separation between specification and task allows you to support many different project organisations, such as:
An epic which needs to be developed over a number of sprints.
An issue which needs to be addressed by multiple teams in various locations
A team which works on a new product while maintaining an old one.
I can provide you more information on this subject if interested.
Francis Martens
I too have been plagued by this problem, and I have found the feature request in JIRA/GreenHopper to add a new field for sprints, which would allow sprint and release version information to be tracked independently.
If you want to see this become reality as much as I do, then go over to http://jira.atlassian.com/browse/GHS-945 and vote for the issue. This quote sums it up: "If GreenHopper had iterations as first-class citizens..."
At the moment, though, it is likely that we will have to create a new field called versions in JIRA and use it to track the "real product versions" an issue relates to. We also have a commit hook in our source code repositories, so when a developer makes a commit, it updates the JIRA ticket with the "real product version" that relates to the source code being committed. We keep this information in a config file so that the commit hook knows which version to use for which source code repository/path. This is not ideal, but it is our only option at present.
Just use rapid boards in GreenHopper. They were introduced not that long ago, but they give you almost everything you need.
You can put LABELS on your issues, for instance "sprint-1", "sprint-2" and so on. Then create an issue FILTER, and then create a RAPID BOARD based on the filter. In the end you get a nice board with the current issues of sprint-X, regardless of version and even of project.
Please note that a sprint is essentially not a version of the software. In the real world, when you have more than one customer, you need to fix and support a lot of versions, but you still need to keep everything on track. In that case sprints are still great, but they just represent the amount of work that should be done during a time period. A version, in contrast, is what you present to anybody outside your development team. So do not mix versions of the software with sprints (a "mapping" between time and tasks)! Do not use hierarchies where a sprint version is a child of a real software version! Keep unrelated things separate!
Shouldn't a sprint, in theory, have a "shippable" product at the end? That means a sprint's issues are either solved, or the sprint "fails".
That is why I'd recommend splitting the issue into smaller pieces.
I try to use K.I.S.S. whenever possible, so I've been using the label field to mark releases. I rarely need to see the release in the scrum/taskboard context, so when it comes time to view all the items in a release, I just run a search for my release name.
I'm getting ready to implement a source control system (Subversion), but I have some doubts about how to structure my folders.
I use Delphi for all my development and compile the projects from within the IDE.
My current projects folder structure is as follows:
E:\Work\
  1. Shared\
    Forms\    (shared forms across all projects)
    Units\    (shared units/classes across all projects, including 3rd-party like the JCL)
  2. Company Name\
    Admin\    (stuff related to admin work, like a license key generator and a Windows CGI to handle order processing automatically, all developed in Delphi)
    Projects\
      ProjectA\
        5.x\    (version 5.x)
          BIN\              (where all the binaries for this project go)
          Build Manager\    (where the FinalBuilder project lives)
            Install\        (NSIS file that creates the setup.exe)
            Protection\     (project files to protect the compiled exe)
            Update\         (inf files related to the auto-update)
          Docs\             (the readme.txt, license.txt and history.txt included in the setup file)
            Defects\        (docs for any testing done by me or others)
            HTMLHelp\       (HTML help for the project)
          R&D\              (screenshots, design ideas and other R&D material)
          Releases\         (the setup file created by NSIS when building a release with FinalBuilder is placed here)
          Resources\        (images and other resources used by this project)
          Source\           (if a sub-project exists it still compiles to BIN, since they are all related)
            SubprojectA\
            SubprojectB\
            SubprojectC\
    Sites\
      companywebsite.com\   (the only one at the moment, but if we decide to have individual websites for products they would all be placed in the Sites folder)
Indentation marks the directory nesting.
Does anyone care to comment on the current structure, or have any suggestions to improve it?
Thanks!
Having set up literally hundreds of projects over the years, and having specialized in software configuration management and release engineering, I would recommend that you first focus on how you want to build/release your project(s).
If you only use an IDE to build (compile and package) your project(s), then you might as well just follow the conventions typical for that IDE, plus any "best practices" you may find.
However, I would strongly recommend that you do not build only with an IDE, or even at all. Instead, create an automated build/release script using one or more of the many wonderful open-source tools available. Since you appear to be targeting Windows, I recommend starting with a look at Ant, Ivy, and the appropriate xUnit (JUnit for Java, NUnit for .NET, etc.) for testing.
Once you start down that path, you will find lots of advice regarding project structure, designing your build scripts, testing, etc. Rather than overwhelm you with detailed advice now, I will simply leave you with that suggestion--you will readily find answers to your question there, as well as find a whole lot more questions worth investigating.
Enjoy!
Based on comments, it seems that some detail is needed.
A particular recommendation that I would make is that you separate your codebase into individual subprojects that each produce a single deliverable. The main application (.EXE) should be one, any supporting binaries would each be separate projects, the installer would be a separate project, etc.
Each project produces a single primary deliverable: an .EXE, a .DLL, a .HLP, etc. That deliverable is "published" to a single, shared, local, output directory.
Make a directory tree where the subprojects are peers (no depth or hierarchy, because it does not help), and do NOT let projects "reach" into each other's subtrees - each project should be completely independent, with dependencies ONLY on the primary deliverables of the other subprojects, referenced from the shared output directory.
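For illustration only (the project names here are hypothetical), such a tree of peer subprojects with a single shared output directory might look like:

    work\
      app-main\      (produces the main .EXE)
      report-lib\    (produces a supporting .DLL)
      installer\     (produces the setup .EXE)
      output\        (the shared directory where every subproject publishes its deliverable)

Each subproject references the others only through the deliverables published into output\, never through their source trees.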
Do NOT create a hierarchy of build scripts that invoke each other; I did, and found that it does not add value but does exponentially increase the maintenance effort. Instead, make a continuous integration script that invokes your stand-alone build script, but first does a clean checkout into a temporary directory.
Do NOT commit any deliverables or dependencies to source control - not your build output, not the libraries that you use, etc. Use Ivy against a Maven-like binary repository that you deploy separately from source control, and publish your own deliverables to it for sharing within your organization.
Oh, and don't use Maven--it is too complicated, obfuscates the build process, and therefore is not cost-effective to customize.
I am moving towards SCons, BuildBot, Ant, Ivy, nAnt, etc. based on my target platform.
I have been composing a whitepaper on this topic, which I see may have an audience.
EDIT: Please see my detailed answer to How do you organize your version control repository?
Why the 5.x (under ProjectA)? I don't think it is useful to introduce versions into the tree - that is what Subversion etc. is for.
I've never used CI tools before, but from what I've read, I'm not sure they would provide any benefit to a solo developer who isn't writing code every day.
First - what benefits does CI provide to any project?
Second - who should use CI? Does it benefit all developers?
The basic concept of CI is that you have a system that builds the code and runs automated tests every time someone makes a commit to the version control system. These tests would include unit and functional tests, or even behaviour-driven tests.
The benefit is that you know - immediately - when someone has broken the build.
This means either:
A. They committed code that prevents compilation, which would screw anyone up.
B. They committed code that broke some tests, which means either that they introduced a bug that needs to be fixed, or that the tests need to be updated to reflect the change in the code.
If you are a solo developer, CI isn't quite as useful, provided you are in the good habit of running your tests before a commit - which is what you should be doing anyway. That said, you could develop a bad habit of letting CI run your tests for you.
As a solo programmer, it mainly comes down to discipline. Using CI is a useful skill to have, but you want to avoid developing any bad habits that wouldn't translate to a team environment.
As other people have noted, CI does have advantages for a solo developer. But the question you have to ask yourself is; is it worth the overhead? If you're like me, it will probably take an hour or two to set up a CI system for a project, just because I'll have to allocate a server, set up all the networking, and install the software. Remember that the CI system will only be saving you a few seconds at a time. For a solo developer, these times aren't likely to add up to more than the time it took to do the CI setup.
However, if you've never set up a CI system before, I recommend doing it just for the sake of learning how to do it. It doesn't take so long that it isn't worth the learning experience.
The benefit of CI lies in the ability to discover early when a check-in has broken the build. You can also run your suite of automated tests against the build, as well as run tools that give you metrics and such.
Obviously, this is very valuable when you have a team of committers, not all of whom are diligent about checking for breaking changes. As a solo developer, it is not quite as valuable. Presumably you run your unit tests, and maybe even integration tests. However, I have seen a number of occasions where a developer forgets to check in one file out of a set.
The CI build can also be thought of as your "release" build. The environment should be stable, and unaffected by whatever development gizmo you just added to your machine. It should allow you to always reproduce a build.
This can be valuable if you add a new dependency to your project and forget to set up the release build environment to take it into account.
If you need to support multiple compilers, it's handy to have a CI build system do all of that while you develop in just one IDE. My code builds with VC6 through VS2008, in x86 and x64 builds on VS2005 and VS2008, so that's 7 builds per project, per configuration... Having a CI system means that I can develop in one IDE and let the CI system prove that all of the compilers I support still build.
Likewise, if you are building libs that are used by multiple projects, CI will make sure they work with ALL of the projects, rather than just the one you're working with right now...
The truth is that continuous integration makes the most sense in teams. A single developer can also get some advantages; you must decide for yourself whether they are enough to justify the time you invest in setting up a CI system.
If you forget to check in some needed file, the repository contains a broken version, even if everything works on your machine. CI would detect that case.
If your CI server runs on a different machine, it can reveal dependencies on your build environment: the build and all the tests may work on your dev box, while on another machine some dependencies aren't fulfilled and the build breaks.
Daily builds can show that your older software doesn't work with the newest upgrade of the OS/compiler/library...
If your CI system keeps an archive of build artifacts, you can easily get a distribution of an older version of your software.
Some CI systems have a nice interface showing metrics about your builds, links to automatically generated documentation, and the like.
We use our CI system to do Release builds (as well as the usual automatic "on-commit" builds).
Being able to click a button that kicks off a release build, stepping through all the processes needed to release a setup, is:
fast (I can go straight on with other things, and it runs on a separate machine, so it doesn't slow me down);
repeatable (it doesn't forget anything, including copying the setup to the release folder and notifying everyone who needs to know);
dependable (no mistakes, unlike a human!).
In an Agile environment, where you expect to be delivering working software every 2-4 weeks, this is definitely worth having, even in a team of 1.
CI benefits a solo developer in the sense that you're aware if you forgot to check something in (because the build will be broken). The integration value of it is diminished when there are no other developers, though.