We have more than 250 applications, and most of them depend on a set of around 30 generic/common components.
If there is any change in a common component, we have to release all or most of the dependent applications using Maven. This is a very painful task.
Is there any way we can avoid this? Do we need to change the design of the applications, does Maven provide any solution for this, or is there a solution in Hudson by which we can schedule builds to do this?
Any inputs on this will be really helpful.
You could take a look at the Cascade Release Plugin which might work with Jenkins as well.
However, you should think about your workflow:
Does every application need to have the newest common components right now? Perhaps it is sufficient to include them in the next regular release?
That way, you turn the responsibilities around. So say your common component DOES indeed do a release. The next time your applications build (in the regular nightly build, using SNAPSHOTs), the build process notifies the responsible developer/release manager for the application that there is a new version of common.
The release manager can now decide whether the features/updates are necessary for his application. You could even let the nightly build automatically update the dependency to common (using the versions-maven-plugin).
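For illustration, a minimal sketch of what such a nightly update step could look like, assuming the versions-maven-plugin and a placeholder groupId (com.example.common) for the shared components; the nightly job would invoke mvn versions:use-latest-releases with a configuration like:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>versions-maven-plugin</artifactId>
      <configuration>
        <!-- only touch the shared components; com.example.common is a placeholder -->
        <includes>
          <include>com.example.common:*</include>
        </includes>
        <!-- don't litter the workspace with pom.xml.versionsBackup files -->
        <generateBackupPoms>false</generateBackupPoms>
      </configuration>
    </plugin>

Running this in the nightly build only changes the POM; nothing is released.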
The point is: I would strongly advise against automatically updating and releasing the application because of common updates. This would create a strong coupling between both projects. Do the update automatically, if you want, but let the application developers/release managers decide when to release.
Of course, if the change in common is critical, they MIGHT need to release immediately.
I have been educating myself about monorepos as I believe it is a great solution for my team and the current state of our projects. We have multiple web products (Client portal, Internal Portal, API, Core shared code).
Where I am struggling to find the answer I want is versioning.
What is the versioning strategy when all of your projects and products are inside a monorepo?
1 version fits all?
Git sub-modules with independent versioning (kind of breaks the point of having a mono repo)
Other strategy?
And from a CI perspective, when you commit something in project A, should you launch the whole suite of tests in all of the projects to make sure that nothing broke, even though there was not necessarily a change to a dependency/shared module?
What is the versioning strategy when all of your projects and products are inside a monorepo?
I would suggest that one version fits all for the following reasons:
When releasing your products you can tag the entire branch as release-x.x.x, for example. If bugs come up, you wouldn't need to check "which version of XXX was YYY using".
It also makes it easier to force that version x.x.x of XXX uses version x.x.x of YYY, in essence keeping your projects in sync. How you go about this of course depends on what technology your projects are written in (see the sketch below for a Maven example).
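For example, if the projects happen to be Maven-based, a single parent POM at the repository root can pin every internal dependency to the one shared version; a minimal sketch, with placeholder coordinates and module names taken from the question:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>monorepo-parent</artifactId>
      <!-- the single version shared by all products -->
      <version>2.5.0</version>
      <packaging>pom</packaging>
      <modules>
        <module>client-portal</module>
        <module>internal-portal</module>
        <module>api</module>
        <module>core</module>
      </modules>
      <dependencyManagement>
        <dependencies>
          <!-- every product that uses the shared core gets it at the same version -->
          <dependency>
            <groupId>com.example</groupId>
            <artifactId>core</artifactId>
            <version>${project.version}</version>
          </dependency>
        </dependencies>
      </dependencyManagement>
    </project>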
And from a CI perspective, when you commit something in project A, should you launch the whole suite of tests in all of the projects to make sure that nothing broke, even though there was not necessarily a change to a dependency/shared module?
If the tests don't take particularly long to execute, no harm can come from this, and I would definitely recommend it. The more often your tests run, the sooner you can uncover time-dependent or environment-dependent bugs.
If you do not want to run tests all the time for whatever reason, you could query your VCS and write a script which conditionally triggers tests depending on what has changed. This relies heavily on integration between your VCS and your CI server.
I work with a small team that manages a large number of very small applications (~100 Portlets). Each portlet has its own git repository. In some code I was reviewing today, someone made a small edit and then updated their pom.xml version from 1.88-SNAPSHOT to 1.89-SNAPSHOT. I added a comment asking if this is the best way to do releases, but I don't really know the negative consequences of doing this.
Why not do this? I know snapshots are not supposed to be releases, but why not? What are the consequences of using only snapshots? I know Maven will not cache snapshots the same way as non-snapshots, so it may download the artifact every time, but let's pretend the caching doesn't matter. From a release-management perspective, why is using a SNAPSHOT version every time and just bumping the number a bad idea?
UPDATE:
Each of these projects results in a WAR file that will never be available in a Maven repo outside of our team, so there are no downstream users.
The main reason for not wanting to do this is that the whole Maven ecosystem relies on a specific definition of what a snapshot version is, and that definition is not the one you're using in your question: a snapshot is only supposed to represent a version currently in active development, not a stable version. The consequence is that a lot of the tools built around Maven assume this definition by default:
The maven-release-plugin will not let you prepare a release with a snapshot version as the released version. So you'll need to resort to tagging by hand in your version control, or write your own scripts. This also means that the users of those libraries won't be able to use this plugin with its default configuration; they'll need to set allowTimestampedSnapshots (a sketch of that configuration appears at the end of this answer).
The versions-maven-plugin, which can be used to automatically update to the latest release version, won't work properly either, so your users won't be able to use it without configuration pain.
Repository managers, like Artifactory or Nexus, come built in with a clear distinction between repositories hosting snapshot dependencies and repositories hosting release dependencies. For example, if you use a shared company-wide Nexus, it could be configured to purge old snapshots, which would break things for you... Imagine someone depends on 1.88-SNAPSHOT and it is completely removed: you'd have to go back in time and redeploy it, until the next removal... Also, certain Artifactory internal repositories can be configured not to accept any snapshots, so you won't be able to deploy there; the users will be forced, again, to add more repository configuration to point at repositories that do allow snapshots, which they may not want to do.
Maven is about convention over configuration, meaning that all Maven projects should try to share the same semantics (directory layout, versioning...). New developers who access your project will be confused and lose time trying to understand why your project is built the way it is.
In the end, doing this will just cause more pain for the users and will not simplify a single thing for you. You could probably make it somewhat work, but when something does break (because of company policy, or some other future change), don't act surprised...
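For reference, this is roughly the extra, non-default configuration each downstream user would need for the release plugin point above (a sketch):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-release-plugin</artifactId>
      <configuration>
        <!-- lets release:prepare proceed despite timestamped SNAPSHOT dependencies -->
        <allowTimestampedSnapshots>true</allowTimestampedSnapshots>
      </configuration>
    </plugin>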
Tunaki gave a lot of reasonable points about why this breaks Maven best practices, and I fully support that view. But even if you don't care about the "conventions of other companies", there are reasons:
If you are not doing CI (and considering every build a potential release), you need to distinguish between versions that should go to production and those that are just for testing. If everything is a SNAPSHOT, this is hard to do.
If someone (accidentally) deploys a second 1.88-SNAPSHOT, it becomes the new 1.88-SNAPSHOT, hiding the old one (which is still available via a concrete timestamp, but that is messy). Release versions cannot be deployed twice.
I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unsolved issues in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that:
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature or an enhancement to an existing feature is resolved in the issue tracking system.
Revision: changes that do not affect the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now, I want to elucidate what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to a Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Consider that I am a Test-Driven Development oriented developer, so I write the unit, DBUnit, and user acceptance tests (for example, using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook that triggers the continuous integration tool to build the project (P). While building, in the proper phase, the test code should be run and its reports generated. The user acceptance tests should be performed on a dedicated server (for example, JBoss or Tomcat); the order of this acceptance test should be: bring the server up, run the UA tests, generate the UA test reports, then bring the server down.
If all these steps have been completed successfully, the versioning should be performed. In the versioning part, a Maven plugin, or whatever else, should take the number of issues resolved from the issue tracking system, increment the related version fragments (minor and revision), and finally append the build number. The fragments of the version may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. Those are all the auto-cycled processes I want.
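For the manifest part specifically, the version fragments can be written into the WAR's MANIFEST.MF by the build itself. A minimal sketch, assuming the standard maven-war-plugin; the ${buildNumber} property is a placeholder for whatever the CI server or the buildnumber-maven-plugin supplies:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <configuration>
        <archive>
          <manifestEntries>
            <!-- ${project.version} would carry major.minor.revision -->
            <Implementation-Version>${project.version}</Implementation-Version>
            <!-- placeholder: injected by the CI server or buildnumber-maven-plugin -->
            <Build-Number>${buildNumber}</Build-Number>
          </manifestEntries>
        </archive>
      </configuration>
    </plugin>

The web application can then read its own manifest at runtime to display the full version string.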
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question: automatic deployment to production requires the sign-off of "the business", whoever that is. How good do your tests need to be to automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there, that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug IDs. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally, you'll need your own bit of scripting to do the query and update the build number accordingly.
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?
There has been some discussion about abandoning our CI system (Hudson, FWIW) because our projects are somewhat segmented. Without revealing too much, you can think of each project as similar to a web site project: it has dependencies, its own unit tests, etc.
It seems like one of the major benefits of CI is to make sure that each component of a project works together, but aside from project inheritance, most of our projects are standalone and fairly well unit tested.
Given what I have explained here (the oddity in our project organization), can anyone explain any benefits of CI for segmented/modular/many projects?
So far as I can tell, this is the only good reason I've found:
“Bugs are also cumulative. The more bugs you have, the harder it is to remove each one. This is partly because you get bug interactions, where failures show as the result of multiple faults - making each fault harder to find. It's also psychological - people have less energy to find and get rid of bugs when there are many of them - a phenomenon that the Pragmatic Programmers call the Broken Windows syndrome.”
From here: http://martinfowler.com/articles/continuousIntegration.html#BenefitsOfContinuousIntegration
I would use Hudson for the following reasons:
Ensuring that your projects build/compile properly.
Building jobs dependent on the build success of other jobs.
Ensuring that your code adheres to agreed-upon coding standards.
Running unit tests.
Notifying development team of any issues found.
If the number of projects steadily increases, you will find the need to be able to manage each one effectively, especially considering the above reasons for doing so.
In your situation, you can benefit from CI in (at least) these two ways:
You can let the CI server run certain larger test suites automatically after each Subversion/... check-in, especially those which test the interaction of different modules, hence the name continuous integration. This takes away the maintenance work and waiting time from the developers when they consider a check-in. Some CI servers (e.g. Hudson) can also be configured to automatically build modules when a module they depend on is built. This way you can let it automatically test whether dependent modules are compatible with the new version of the changed one.
You can let the CI server publish the new artifacts to the repository of a dependency resolver (e.g., Ivy, Maven). This way, the various modules can automatically download the latest (stable) revisions of the modules they depend on. Combine this point with the previous one and imagine the possibilities (!!!).
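In the Maven case, publishing from the CI server is just a mvn deploy against a distributionManagement section in the POM; a minimal sketch, with placeholder repository URLs:

    <distributionManagement>
      <repository>
        <id>releases</id>
        <!-- placeholder URL for your company repository manager -->
        <url>https://repo.example.com/repository/releases</url>
      </repository>
      <snapshotRepository>
        <id>snapshots</id>
        <url>https://repo.example.com/repository/snapshots</url>
      </snapshotRepository>
    </distributionManagement>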
We have a lot of different solutions/projects which are managed by different teams. Our solution needs to reference several projects that another team owns. We don't want to add these dependencies as project references because we do not intend to modify that code; we just want to use it. Also, we already have quite a few projects in our solution and don't want to add a bunch more, since that would slow down Visual Studio. So we are building these projects in a separate solution and adding them as file references to our solution.
My question is: how do people manage these types of dependencies? Should I just have some automated process that looks for changes to those projects, builds them, and checks the DLLs into our source control, after which we treat them like other third-party dependencies? Is there a recommended way of doing this?
One solution, although it may not necessarily be what you are looking for, is to have each dependent subsystem perform a release. This release could be in the form of an MSI install, or just a network share of assemblies. When a significant change is made, that team could let you know, and you could run the install or a script to copy the files.
Once you have the release, you could put the assemblies into the GAC; that way you would not have to worry about copying them to your project bin folders.
Another solution, assuming you are using a build server or continuous integration of some kind, is to have a post-build step or process stage the files. Then, at any given moment, the developers on the other teams could grab the new files, or have a script or bat file pull them down locally (see the sketch below).
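If you go the staged-files route, the consuming projects can point at the staged copies with plain file references; a minimal csproj sketch, with hypothetical assembly names and paths:

    <ItemGroup>
      <!-- OtherTeam.Core and the lib path are placeholders for the staged assemblies -->
      <Reference Include="OtherTeam.Core">
        <HintPath>..\lib\OtherTeam.Core.dll</HintPath>
        <!-- copy the assembly to the output folder on build -->
        <Private>True</Private>
      </Reference>
    </ItemGroup>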
EDIT - ANOTHER SOLUTION
It might be best to ask why you have these dependencies. Do you really need them locally when building your part of the application? Could you mock out the dependencies in your solution, allowing you to code, build, and run unit tests? The actual application would then wire these up in your DEV/Test/Prod environments. Keeping your solution decoupled and dependency-free may be a better solution for the individual team. Leave the integration and coupling to when the application runs in a real setting.
(Not a complete answer, but still:)
Any delivery is better stored in a file/binary repository, as opposed to a VCS, which is meant to manage source history.
We prefer managing those deliveries in a repository like Nexus, and we are using Maven to retrieve the right dependencies.
Even if those tools can be more Java-oriented, Nexus can store anything, and maven is only there to read the pom.xml of each artifact and compute the right dependencies.
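For illustration, the consuming side needs little more than a repository entry and a dependency on the delivery; all coordinates and the URL below are placeholders, and a non-Java delivery can be fetched as a plain archive via its type:

    <repositories>
      <repository>
        <id>company-nexus</id>
        <url>https://nexus.example.com/repository/releases</url>
      </repository>
    </repositories>

    <dependencies>
      <!-- placeholder coordinates; the delivery is stored as a zip in Nexus -->
      <dependency>
        <groupId>com.example.deliveries</groupId>
        <artifactId>backend-delivery</artifactId>
        <version>1.4.2</version>
        <type>zip</type>
      </dependency>
    </dependencies>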