I work with a small team that manages a large number of very small applications (~100 portlets). Each portlet has its own git repository. In some code I was reviewing today, someone made a small edit and then bumped the pom.xml version from 1.88-SNAPSHOT to 1.89-SNAPSHOT. I added a comment asking if this is the best way to do releases, but I don't really know the negative consequences of doing it this way.
Why not do this? I know snapshots are not supposed to be releases, but why not? What are the consequences of using only snapshots? I know Maven will not cache snapshots the same way as non-snapshots, so it may download the artifact every time, but let's pretend the caching doesn't matter. From a release-management perspective, why is using a SNAPSHOT version every time and just bumping the number a bad idea?
UPDATE:
Each of these projects results in a WAR file that will never be available on a Maven repository outside of our team, so there are no downstream users.
The main reason for not wanting to do this is that the whole Maven ecosystem relies on a specific definition of what a snapshot version is, and that definition is not the one you're using in your question: a snapshot is only supposed to represent a version currently in active development, not a stable version. The consequence is that a lot of the tools built around Maven assume this definition by default:
The maven-release-plugin will not let you prepare a release with a snapshot version as the released version. So you'll need to resort to tagging by hand in your version control, or write your own scripts. It also means that the users of those libraries won't be able to use this plugin with its default configuration; they'll need to set allowTimestampedSnapshots (see the configuration sketch after these points).
The versions-maven-plugin, which can be used to automatically update to the latest release version, won't work properly either, so your users won't be able to use it without configuration pain.
Repository managers, like Artifactory or Nexus, come built in with a clear distinction between repositories hosting snapshot dependencies and those hosting release dependencies. For example, if you use a shared Nexus company-wide, it could be configured to purge old snapshots, which would break things for you: imagine someone depends on 1.88-SNAPSHOT and it is completely removed; you'd have to go back in time and redeploy it, until the next removal... Also, certain Artifactory internal repositories can be configured not to accept any snapshots, so you won't be able to deploy there; your users would be forced, again, to add more repository configuration to point at repositories that do allow snapshots, which they may not want to do.
Maven is about convention over configuration, meaning that all Maven projects should try to share the same semantics (directory layout, versioning...). New developers joining your project will be confused and lose time trying to understand why your project is built the way it is.
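To make the first point concrete, here is a rough, minimal sketch of the extra configuration a consumer of your snapshot-only artifacts would end up needing; it is only an illustration of the workaround, and the allowSnapshots flag shown for the versions-maven-plugin applies to goals such as versions:use-latest-versions:

    <!-- fragment of a consumer's pom.xml -->
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-release-plugin</artifactId>
          <configuration>
            <!-- let release:prepare proceed even though some dependencies
                 are (timestamped) SNAPSHOTs -->
            <allowTimestampedSnapshots>true</allowTimestampedSnapshots>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>versions-maven-plugin</artifactId>
          <configuration>
            <!-- needed so version-update goals will consider SNAPSHOT versions at all -->
            <allowSnapshots>true</allowSnapshots>
          </configuration>
        </plugin>
      </plugins>
    </build>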
In the end, doing this will just cause more pain for your users and won't simplify a single thing for you. You could probably make it somewhat work, but when something eventually breaks (because of company policy, or some other future change), don't act surprised...
Tunaki gave a lot of reasonable points about why this breaks Maven best practices, and I fully support that view. But even if you don't care about "conventions of other companies", there are reasons:
If you are not doing CI (where every build is treated as a potential release), you need to distinguish between versions that should go to production and those that are just for testing. If everything is a SNAPSHOT, this is hard to do.
If someone (accidentally) deploys a second 1.88-SNAPSHOT, it becomes the new 1.88-SNAPSHOT, hiding the old one (which is still available by its concrete timestamp, but that is messy). Release versions cannot be deployed twice.
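To illustrate what "available by a concrete timestamp" means, here is a rough sketch of the snapshot metadata a repository keeps after a second deployment of 1.88-SNAPSHOT; the coordinates, timestamps and build numbers are invented for illustration:

    <!-- com/example/my-portlet/1.88-SNAPSHOT/maven-metadata.xml (all values invented) -->
    <metadata>
      <groupId>com.example</groupId>
      <artifactId>my-portlet</artifactId>
      <version>1.88-SNAPSHOT</version>
      <versioning>
        <snapshot>
          <!-- whoever asks for 1.88-SNAPSHOT now gets build 2; build 1 is only
               reachable through its timestamped file name -->
          <timestamp>20160302.140512</timestamp>
          <buildNumber>2</buildNumber>
        </snapshot>
        <lastUpdated>20160302140512</lastUpdated>
      </versioning>
    </metadata>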
Related
My project has snapshot dependencies for which no releases are available. If I fix the version, such as <version>0.0.1-20140219.100706-347</version> instead of <version>0.0.1-SNAPSHOT</version>, do I now enjoy the benefits in speed of using releases or am I still subject to automatic updates slowing my build down just by using a dependency that resides in a snapshot repository? Are there any benefits of releases then other than having kind of a tag to a specific version?
do I now enjoy the benefits in speed of using releases or am I still subject to automatic updates slowing my build down
You get the speed benefit: a timestamped SNAPSHOT version refers to a unique artifact, so Maven won't check again. You could also consider setting an update policy to reduce the frequency of checks (How does the updatePolicy in maven really work?).
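If you also want to dial down how often Maven checks a snapshot repository in general, the update policy is set on the repository definition. A minimal sketch, assuming a repository in your pom or settings.xml whose id and URL are placeholders:

    <repository>
      <id>my-snapshots</id>                           <!-- placeholder id -->
      <url>https://repo.example.com/snapshots</url>   <!-- placeholder URL -->
      <snapshots>
        <enabled>true</enabled>
        <!-- check for newer snapshots at most once a day; "always", "never"
             and "interval:minutes" are the other accepted values -->
        <updatePolicy>daily</updatePolicy>
      </snapshots>
    </repository>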
Are there any benefits of releases then other than having kind of a tag to a specific version?
As a general practice, SNAPSHOT builds (even timestamped ones) aren't intended to stick around. Because you'll have one for every build, it's normal to prune them (e.g. How to limit number of deployed snapshots artifacts in Nexus?). At some point you'll want to pick a specific version that will be kept permanently and that can be used for reproducible builds: that's what final release versions are for.
Let's suppose I have a project called myLib-1.1.0. This project has a dependency on lib-dependency-1.2.3.
If there's a new version for this dependency and I need to use it, should I change my project version as well? No other modifications are made to myLib.
At the same time, myLib is a dependency for various other projects. My main concern is the impact a small change in a dependency might have upstream.
Yes. In Maven, released versions are immutable. If you release 1.1.0 with a dependency on lib-dependency-1.2.3, then that's it.
If you change it to depend on lib-dependency-1.2.4, then that's a new version. You should not redeploy 1.1.0, since some people might already have pulled that (supposedly immutable) 1.1.0.
So that means you need a different version, even if it's just a new qualifier (myLib-1.1.0-RC-2 for example, but better just 1.1.1).
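As a concrete sketch of what such a bump looks like in myLib's pom.xml (the group ids are invented purely for illustration):

    <groupId>com.example</groupId>             <!-- invented for illustration -->
    <artifactId>myLib</artifactId>
    <version>1.1.1</version>                   <!-- new version, only because a dependency changed -->

    <dependencies>
      <dependency>
        <groupId>com.example</groupId>         <!-- invented for illustration -->
        <artifactId>lib-dependency</artifactId>
        <version>1.2.4</version>               <!-- was 1.2.3 in myLib 1.1.0 -->
      </dependency>
    </dependencies>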
Maven doesn't recheck remote repos for release versions once it has them in the local repo, so if someone already has 1.1.0 locally, they will not get the new, "fixed" 1.1.0.
As for your ripple concern: upstream projects should depend on the lowest acceptable released version, i.e. if an upstream project is fine with myLib-1.1.0 because it doesn't (indirectly) need lib-dependency-1.2.4, then it should stay on 1.1.0.
Any code change that potentially affects behavior should be given a new version number; in other words, anything that's not an absolutely trivial change should get a new version number. A changed dependency definitely qualifies, because unless you do a thorough code inspection of the dependency, you have no reason to assume that it only made absolutely trivial changes.
Changes are often advertised as "small" (similar to what I call absolutely trivial above), but they hardly ever are. They may be negligible in someone's use case, but not in someone else's. I've even seen circumstances where changes to nothing but the Javadocs in a project broke things down the line. (You could argue about how smart it is to depend that strongly on Javadoc, but that's beside the point, isn't it?)
That is not to say that you can't accumulate changes and release a bunch of them as a single release. While you're accumulating, your project is in flux and should have a ...-SNAPSHOT version. There should never be two versions of myLib-1.1.0 (without the -SNAPSHOT) that differ by even the smallest change.
Re-releasing your project also makes it explicit that regression testing and the like should be redone to validate that it still works with the changes in its dependency.
By default, Go pulls imported dependencies by grabbing the latest version on master (GitHub) or default (Mercurial) if it cannot find the dependency in your GOPATH. While this workflow is quite simple to grasp, it has become somewhat difficult to tightly control. Because all software change incurs some risk, I'd like to reduce the risk of this potential change in a manageable and repeatable way and avoid inadvertently picking up changes to a dependency, especially when running clean builds on a CI server or preparing to deploy.
What is the most effective way I can pin (i.e. lock down or capture) a package dependency so I don't find myself unable to reproduce an old package, or even worse, unexpectedly broken when I'm about to release?
---- Update ----
Additional info on the Current State of Go Packaging. While I ended up (as of 7.20.13) capturing dependencies in a 3rd party folder and managing updates (ala Camlistore), I'm still looking for a better way...
Here is a great list of options.
Also, be sure to see the Go 1.5 vendor/ experiment to learn how Go might deal with the problem in future versions.
You might find the way Camlistore does it interesting.
See the third party directory and in particular the update.pl and rewrite-imports.sh scripts. These scripts update the external repositories, change imports if necessary, and make sure that a static version of the external repositories is checked in with the rest of the Camlistore code.
This means that Camlistore has a completely repeatable build, as it is self-contained, but the third-party components can still be updated under the control of the Camlistore developers.
There is a project to help you manage your dependencies: check out gopack.
godep
I started using godep early last year (2014) and have been very happy with it (it addressed the concerns I mentioned in my original question). I am no longer using custom scripts to manage the vendoring of dependencies, as godep just takes care of it. It has been excellent for ensuring that no drift is introduced regardless of timing or a machine's package state. It works with the existing go get mechanism and introduces the ability to pin (godep save) and restore (godep restore) based on Godeps/Godeps.json.
Check it out:
https://github.com/tools/godep
There is no built-in tooling for this in Go. However, you can fork the dependencies yourself, either on local disk or in a cloud service, and only merge in upstream changes once you've vetted them.
The third-party repositories are completely under your control. 'go get' clones tip, you're right, but you're free to check out any revision of the cloned-by-go-get or cloned-by-you repository. As long as you don't do 'go get -u', nothing touches the third-party repositories already sitting on your hard disk.
Effectively, your external, locally cloned, dependencies are always locked down by default.
It has been around 10 months now since Jenkins split off from Hudson.
Looking at the project homepages, I am wondering what the differences between Hudson and Jenkins really are by now. From the changelogs I do not really learn much. There are a bunch of changes, and the major difference seems to be that Jenkins releases more often with fewer changes, while Hudson releases less frequently but with more changes per release.
Are there any notable differences yet?
So are there things that would make me, as a developer needing a CI system, more productive with one or the other?
Is one of them more stable than the other?
Is there any difference yet that has nothing to do with politics around Oracle?
What is the most notable difference from your point of view?
One notable difference is that a large number of plugins have moved to Jenkins. While you would still be able to use the old versions with Hudson, the newer versions already depend on Jenkins. Also, new plugins are mostly created with dependencies on quite recent Jenkins versions, so you probably won't be able to use them without hassle on Hudson.
This will probably differ from plugin to plugin; some might be more compatible with Hudson than others, while still others provide versions for both tools. But if something does not work well with a plugin, you will get help more easily if you use Jenkins.
EDIT: Here is an interesting link I found, not only providing some solid numbers on the different paths Jenkins and Hudson have taken, but also addressing the (non-)issue of IP that was mentioned in the other post here...
Check out the work being done on cleaning up the code and the IP checks that are needed to belong to the Eclipse Foundation. This is one of the big differentiators if you care about clean IP.
How many plugins are you using? Hudson supports many of the most important plugins independently and is working with plugin owners to keep compatibility with those that are still maintained by their owners at Jenkins.
See the JavaOne presentations that show how Hudson is being maintained and new features added.
https://oracleus.wingateweb.com/scheduler/eventcatalog/eventCatalogJavaOne.do (search for Hudson)
Also check out the Hudson project at Eclipse http://www.eclipse.org/hudson/
I'm currently working on Maven tools for Project Dash. One of the open issues is how to handle mistakes.
Maven Central says: nothing published ever changes. This is because Maven never tries to figure out whether a release has changed (unlike with SNAPSHOTs).
But I might have to create a new "release" of, say, part of Eclipse 3.6.2. Which version number should I use? 3.6.2.1, 3.6.2-1, 3.6.2_1, 3.6.2pl1? Why?
The convention for version numbers is major.minor.build.
major is incremented when the public interface changes incompatibly. For example, a method is removed, or its signature changes. Clients using your library need to take care when using a library with a different major version, because things may break.
minor is incremented when the public interface changes in a compatible way. For example, a method is added. Clients do not need to worry about using the new version, as all the functions they are used to seeing will still be there and act the same.
build is incremented when the implementation of a function changes, but no signatures are added or removed. For example, you found a bug and fixed it. Clients should probably update to the new version, but if it doesn't work because they depended on the broken behavior, they can easily downgrade.
The tricky issue here is that it sounds like you are modifying code written and released by somebody else. The convention here, as I have seen it, is to suffix the version number with either -yourname-version or just -version. For example, linux-image-2.6.28-27 is a likely name for an Ubuntu kernel image.
As Maven uses dashes to separate artifact coordinates, however, I would recommend (very long-windedly, apparently) just adding .version to avoid confusing it. So 3.6.2.1 in this case.
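As a sketch of what depending on such a re-release would look like (the group and artifact ids here are placeholders, not the real Eclipse coordinates):

    <dependency>
      <groupId>org.example.eclipse</groupId>   <!-- placeholder group id -->
      <artifactId>some-bundle</artifactId>     <!-- placeholder artifact id -->
      <version>3.6.2.1</version>               <!-- 3.6.2 plus your own patch counter -->
    </dependency>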
Maven project versions are specified like this:
<major version>.<minor version>.<incremental version>-<qualifier>
As you do not want to change the version number, you are looking for a qualifier. I do not know if there is a general recommendation for the name of the qualifier. The Spring people, for example, did something like this:
2.5.6.SEC01
2.5.6.SR02
3.0.0.M3
They didn't use the hyphen/dash notation to separate the qualifier.
Whatever you do, you have to be careful regarding the ordering of versions! Have a look at the first link I added.
Update: Also have a look at krzyk's comment for recent changes/additions.
This is because Maven never tries to figure out whether a release has changed
That is, in my opinion, not the basic reason. The real reason is to have reliable builds in the future. You define the versions in your pom and that's it. If someone removed artifacts from Maven Central or, worse, changed an existing artifact, you couldn't be sure that your build would still work in the future, or that an older build would still work.
The version number is up to you... I would suggest using 3.6.2.1.