I currently have a CI environment set up using the following tools:
VCS - ClearCase (UCM enabled)
CI Server - Jenkins
Build Engine - MSBuild
Basically, Jenkins polls my UCM project's Integration stream every 2 minutes and builds via an MSBuild script I wrote.
While in ClearCase it is not a best practice to have an individual stream for each developer, good CI demands private builds to be run before committing the code. On top of that, ideally I would have atomic commits, which ClearCase provides only in the form of a deliver to a stream.
Currently we are working directly on the integration stream, and sometimes our builds fail because Jenkins starts building before the developer finishes her check-ins.
My question is, how can I have a private work area (Sandbox) and atomic commits on ClearCase without creating a stream for each developer? Am I missing something?
Currently we are working directly on the integration stream, and sometimes our builds fail because Jenkins starts building before the developer finishes her check-ins.
You can write your build script to detect whether a deliver is in progress.
A deliver is characterized by an activity named deliver.xxx: you can list its change set and see whether any version in it is still checked out. If so, the deliver is in progress.
If the most recent deliver has only checked-in versions, you can safely start your build.
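A minimal sketch of that check, run from a view on the integration stream (the stream selector is a placeholder, and the change-set listing assumes cleartool's lsactivity with the %[versions]Cp format):

    import subprocess

    STREAM = "stream:my_int_stream@\\mypvob"  # placeholder stream selector

    def deliver_in_progress():
        # List the activities on the integration stream and keep the
        # deliver.* ones (the activities created by 'cleartool deliver').
        out = subprocess.run(
            ["cleartool", "lsactivity", "-short", "-in", STREAM],
            capture_output=True, text=True, check=True).stdout
        for act in (a for a in out.splitlines() if a.startswith("deliver.")):
            # Dump the activity's change set; a version that is still
            # checked out shows up with a CHECKEDOUT version string.
            versions = subprocess.run(
                ["cleartool", "lsactivity", "-fmt", "%[versions]Cp", act],
                capture_output=True, text=True, check=True).stdout
            if "CHECKEDOUT" in versions:
                return True
        return False

    if deliver_in_progress():
        raise SystemExit("Deliver in progress - skipping this build")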
Or:
How can I have a private work area (Sandbox) and atomic commits on ClearCase without creating a stream for each developer?
A private area for Jenkins to use would be a snapshot view on each developer stream.
As the name suggests, a snapshot view takes a snapshot of the code, but you need to define a criterion telling Jenkins it can build what the snapshot view has updated.
What I have seen used is a 'BUILD' shifting label (a label you re-apply to the newly updated code, used by Jenkins in its snapshot view through a selection rule based on that label):
The developer moves his/her label when he/she thinks the current code is ready to be built, and a Jenkins job updates its snapshot view on the developer stream based on the versions referenced by said shifting label 'BUILD'.
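A rough sketch of the developer-side step, assuming the BUILD label type was created once with 'cleartool mklbtype' and that the Jenkins snapshot view selects it with a config spec rule like element * BUILD (the job name, token, and server below are placeholders):

    import subprocess
    import urllib.request

    # Re-apply the shifting 'BUILD' label to the current checked-in code.
    # Run from the developer's view root; -replace moves the label off
    # the previously labeled versions.
    subprocess.run(
        ["cleartool", "mklabel", "-replace", "-recurse", "BUILD", "."],
        check=True)

    # Then ask Jenkins to refresh its snapshot view and build, using
    # Jenkins' standard remote-trigger URL.
    JENKINS = "http://jenkins.example.com:8080"  # placeholder server
    urllib.request.urlopen(f"{JENKINS}/job/dev-stream-build/build?token=SECRET")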
I am setting up a CruiseControl.NET server. So far, it only builds a project (a .NET website), and I kind of know how to set up unit testing, code coverage, etc. in the future.
What I will need to have soon is this:
The developers commit changes to SVN continually, thus CCNet builds often.
CCNet will publish the latest version to the development server, as soon as a commit is validated (with unit tests etc).
The project manager validates a specific version, in order to publish it to the pre-production server, and create a SVN tag from this revision.
The last point is where my problem lies: how exactly can I set things up so the project manager can, for instance, browse to the CCNet web dashboard, select a previous specific build, and say "this is the build I want to publish"?
I believe that my thinking is flawed somewhere, but I can't put my finger on it. Maybe CCNet is not the right place to do these manipulations?
In my mind, I can create an SVN tag using CCNet and mostly work from the trunk, but maybe I can't? Maybe it's the other way around, and I should add a CCNet project every time a tag is created under SVN?
The final goal is that I want to automate the publication process: zip creation (for archiving), web.config modification (using Nant for instance), and website publication (using FTP).
In all these steps, I want to keep manual intervention to a minimum. If I can avoid adding a new project to CCNet every time a tag or branch is created in SVN, that would be awesome.
Thanks for your help, and sorry if it's not very easy to read, but it's not very clear in my head either...
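To make that concrete, here is roughly what I have in mind for the zip / web.config / FTP steps (a sketch only; all paths, keys, and hosts below are made up):

    import zipfile
    from ftplib import FTP
    from pathlib import Path
    import xml.etree.ElementTree as ET

    SITE = Path("build/_PublishedWebsites/MySite")  # hypothetical build output

    # 1. Zip the build output for archiving.
    with zipfile.ZipFile("MySite.zip", "w", zipfile.ZIP_DEFLATED) as archive:
        for f in SITE.rglob("*"):
            if f.is_file():
                archive.write(f, f.relative_to(SITE))

    # 2. Rewrite a web.config setting for the target environment.
    config = ET.parse(SITE / "web.config")
    for setting in config.getroot().iter("add"):
        if setting.get("key") == "Environment":  # hypothetical appSetting
            setting.set("value", "PreProduction")
    config.write(SITE / "web.config")

    # 3. Publish one file over FTP (loop over the tree the same way for all).
    with FTP("ftp.example.com", "user", "password") as ftp:  # placeholder host
        with open(SITE / "web.config", "rb") as fh:
            ftp.storbinary("STOR web.config", fh)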
Since you can create any task, you should be able to achieve the goal, though unfortunately not out-of-the-box.
Since you use SVN, it all actually depends on the revision. I think I'd create a separate project for your third scenario and add a parameter through which the PM would provide the revision number. Then, based on that, I'd tag the sources etc. in my own task.
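For the tagging itself, a minimal sketch of a script that an exec-style task in that separate project could call once the PM has supplied the revision (the repository URLs and tag naming are placeholders):

    import subprocess
    import sys

    # The PM-supplied revision number arrives as the first argument.
    revision = sys.argv[1]

    TRUNK = "http://svn.example.com/repo/myproject/trunk"  # placeholder URL
    TAGS = "http://svn.example.com/repo/myproject/tags"    # placeholder URL

    # 'svn copy -r REV' creates a cheap server-side tag of that revision.
    subprocess.run(
        ["svn", "copy",
         "-r", revision,
         TRUNK, f"{TAGS}/release-r{revision}",
         "-m", f"Tagging revision {revision} for pre-production"],
        check=True)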
Regarding the other points, I think it is similar. Recently, for web projects, we started using MSDeploy: in each stage build, the MSDeploy package was created. Then there was a separate build called Deploy that, when forced, lets us select which package we want to deploy using MSDeploy.
Having several environments, however, started to feel a bit like overkill to manage with CCNet, and I'll be looking into kwakee at some point.
I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unresolved questions in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that:
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature or an enhancement to an existing feature is resolved in the issue tracking system.
Revision: changes that do not affect the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Build: the build number, appended automatically by the CI server on each build.
Assume that developers never commit the source unless an issue has been solved in issue tracking system, and the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from the tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I have used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven project (web).
Now, I want to clarify what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to a Maven project (P). As the developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Since I am a Test-Driven Development oriented developer, I write the unit, DBUnit, and user-acceptance tests (for example, using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook that triggers the continuous integration tool to build the project (P). While building, the test code should be run in the proper phase and its reports generated. The user-acceptance tests should be performed on a dedicated server (for example, JBoss or Tomcat); the order should be: bring the server up, run the UA tests, generate the UA test reports, and bring the server down. If all these steps complete successfully, the versioning should be performed. In the versioning part, a Maven plugin, or whatever else, should take the number of issues solved from the issue tracking system, increment the related version fragments (minor and revision), and finally append the build number. The fragments of the version may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. That is the whole auto-cycled process I want.
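For the version-increment step, a hedged sketch of how the counts could be pulled from JIRA's REST search API (the base URL, project key, JQL, and status names below are assumptions; your JIRA workflow may differ):

    import json
    import urllib.parse
    import urllib.request

    JIRA = "https://jira.example.com"  # hypothetical JIRA base URL

    def count_resolved(issue_type):
        # Count resolved issues of one type via JIRA's REST search API;
        # maxResults=0 is enough because we only need the 'total' field.
        jql = f'project = P AND issuetype = "{issue_type}" AND status = Resolved'
        query = urllib.parse.urlencode({"jql": jql, "maxResults": 0})
        with urllib.request.urlopen(f"{JIRA}/rest/api/2/search?{query}") as resp:
            return json.load(resp)["total"]

    major = 1  # bumped by hand for major changes
    minor = count_resolved("New Feature") + count_resolved("Improvement")
    revision = count_resolved("Bug")
    build = "42"  # in practice, injected by the CI server at build time
    print(f"{major}.{minor}.{revision}.{build}")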
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question: on automatic deployment to production, this requires the sign-off of "the business", whoever that is. How good do your tests need to be to automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug ids. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally you'll need your own bit of scripting to do the query and update the build number accordingly.
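For the "annotate changes with bug ids and parse them" part, a small hedged sketch (the issue-key pattern follows JIRA's PROJECT-123 convention; the messages are made up):

    import re

    # Pull JIRA-style issue keys (e.g. PROJ-123) out of commit messages
    # so a build can be associated with the issues it contains.
    ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

    def issues_in_changes(commit_messages):
        keys = set()
        for message in commit_messages:
            keys.update(ISSUE_KEY.findall(message))
        return keys

    # Made-up messages for illustration:
    messages = [
        "PROJ-42: fix null pointer in login filter",
        "Implement export feature (PROJ-57, PROJ-58)",
    ]
    print(issues_in_changes(messages))  # {'PROJ-42', 'PROJ-57', 'PROJ-58'}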
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?
I am in the process of setting up a continuous integration build for a Spring Roo application using the Rational Team Concert (RTC) IDE and Jazz build engine. When setting up the build definition, the Build Workspace field on the Jazz Source Control tab allows the selection of either a user's repository workspace or a stream.
The RTC Continuous Integration Best Practices and other Jazz build resources consistently refer to using a dedicated repository workspace associated with a build user, leading me to believe that this is the preferred approach. I have not been able to find any information on building from the stream directly. Our project's stream contains all of the artifacts required to build, and I have tested and confirmed that the continuous integration build works from the stream. I am unable to think of any reason why I would need to create and manage a specific workspace for this purpose.
My question is, am I playing with fire by building directly off of the stream? Are there potential downstream complications with this approach that I am not aware of?
Answering my own question in case another SO user has the same question in the future.
After some experimentation, I discovered that a drawback to building directly from the stream was that it ignores the "Build only if there are changes accepted" property on the Jazz Source Control tab. As a result, builds from a stream may only be done at predefined intervals - it is not possible to configure the build to only happen when new changes have been committed to the stream.
A dedicated workspace is required for the build to accept new changes from the stream and use them to trigger a build request.
There is another BIG difference here, and it has to do with HOW the build gets done. Let me highlight it.
If you build from a dedicated build repository workspace, then your build workspace already has a copy of all of the code. When your changes are delivered, and the build is kicked off, then only the changed files (your change set) need to be updated and physically copied from the repository to the build repository workspace. Since most changes are small, this involves the copying of anywhere from 0.1% to 2% of your codebase from the repository.
If you build from "the stream", then your build workspace needs to be created (you have to compile somewhere!). So when this is created, your ENTIRE codebase needs to be updated and physically copied from the repository to the build repository workspace. This means retrieving 100% of your codebase from the repository.
Each file operation involves a call to discover the needed resource, fetching this resource from the database hosting the repository, and then having the Jazz application provide this source file over the network. This results in load on the database server, the web server, and the application server. The more you download like this, the more load you put on these components.
There are some things you can do to minimize this load on the Jazz infrastructure. Content-caching proxies (such as a simple Squid proxy server) can help.
For more detail on your options here, and the relative merits of those options, go and read my blog post and whitepaper on Jazz Performance concerns (http://dtoczala.wordpress.com/2013/02/11/jazz-performance-a-guide-to-better-performance/). That article is almost a year old now, but still remains valid. You can also look at the Jazz Deployment Wiki (https://jazz.net/wiki/bin/view/Deployment/WebHome), and check out the sections on performance troubleshooting and performance concerns.
I was referred to Hudson today.
I have heard about continuous integration before, but I have no idea what the heck a CI server is.
Hudson is really easy to install in Ubuntu and in several minutes I managed to set up an instance of it.
But I don't quite understand the workflow of a CI server, or how I am supposed to use it.
Please tell me if you have experience with CI; thanks in advance.
Edit:
I am currently using Mercurial as my SCM, and I wonder what is the right way to use it with Hudson.
I have installed the Mercurial plugin for Hudson, and I created a new job with a local repository. When I commit to the repository, the Hudson job is built with the latest version of my source code.
If what I used is a remote repository, what's the workflow like?
Is it something like the following?
Set up a Hudson job with the repository
Developer makes a local clone of the repository
Developer commit and push changes
The remote repository is updated with the incoming changesets
Run a Hudson build
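For the glue between the last two steps, a hedged sketch of a Mercurial in-process changegroup hook on the remote repository (the module name, server, job name, and token are all hypothetical):

    # Hypothetical hudsonhook.py on the remote repository host, wired up
    # in the repository's .hg/hgrc as:
    #   [hooks]
    #   changegroup = python:hudsonhook.notify
    import urllib.request

    def notify(ui, repo, **kwargs):
        # Ping Hudson's remote-trigger URL after a push lands; Hudson then
        # checks out the latest changesets and builds.
        url = "http://hudson.example.com:8080/job/my-project/build?token=SECRET"
        urllib.request.urlopen(url)
        return False  # falsy return signals success per Mercurial's hook convention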
There may be something I have misunderstood entirely; please help me point it out.
Continuous Integration is the process of "integrating software" continuously i.e. as frequently as possible (ultimately after each set of changes) to avoid any big-bang integration and all subsequent problems by getting immediate feedback.
To implement Continuous Integration, you first need to automate the build of your software (where "build" of course means compiling sources and packaging them, but also compiling tests, running the tests, running quality checks, etc.; anything that will help to get feedback on the health of your code). Then you need to trigger the build on the latest version of the sources on a particular event (a change in the repository, a temporal event), to generate reports, and to send notifications upon failure (by mail, Twitter, etc.).
And this is precisely the responsibility of a CI engine: offering trigger mechanisms, getting the latest version of the sources, running the build, generating and publishing reports, and sending notifications.
And because running a build is CPU and Disk intensive, CI engines usually run on a dedicated machine (or even a farm of machines if you want to build lots of projects).
Back to your question now. Once you've got Hudson running, configure it (Manage Hudson > Configure System): set up the JDK, build tools, etc. Then set up a Hudson job and follow the steps: configure the location of the source repository, the build tool, the trigger, a notification channel, and you're done (you can do more complex things, but that's a start).
For more details on the setup, check:
The official Use Hudson guide for more details. << START HERE
Continuous Integration with Hudson - Tutorial.
Spot defects early with Continuous Integration.
Martin Fowler's overview of continuous integration is one of the canonical references. In my opinion, using automation to make sure your code base is healthy is one of the most useful things that you can set up.
Update: Sorry that I didn't have much time earlier to expand on my reply. @Pascal_Thivent is right that in order to use CI effectively, you need to be able to automate your builds, tests, etc. CI is actually a good forcing function for this. For me, it's one of those little warning flags if I start to think that it would be too painful to put a build into Hudson. It means that something is not quite right.
What I like about Hudson is that it's flexible enough to accommodate different workflows. We use it for both builds / unit tests and releases. And it eliminates a lot of the worry about certain release procedures only working in one person's environment.
What I don't like about Hudson is that it is occasionally unstable when new builds break plugins. I've had a couple of upgrades (2 out of 10 or so) go bad because of incompatibilities. I do two things now:
I never upgrade my team's Hudson server to the latest and greatest right away. I generally only upgrade when there are significant new features, or bug fixes.
I now have a basic Hudson instance set up with all my plugins on a virtual machine with some dummy builds that I fire up to test out any new upgrades before doing it on the public server.
Is there a way to configure Hudson to only execute Build or Post Build actions if there are changes in SVN/CVS?
Thank you
You can have Hudson poll the SCM for changes and only do things if it finds changes.
Poll SCM: Configure Hudson to poll changes in SCM.
Note that this is going to be an expensive operation for CVS, as every polling requires Hudson to scan the entire workspace and verify it with the server. Consider setting up a "push" trigger to avoid this overhead, as described in this document.
You can also add something to your SCM post-commit hooks that will fire off a Hudson build.
Trigger builds remotely (e.g., from scripts):
Enable this option if you would like to trigger new builds by accessing a special predefined URL (convenient for scripts).
One typical example for this feature would be to trigger a new build from the source control system's hook script, when somebody has just committed a change into the repository, or from a script that parses your source control email notifications.
You'll need to provide an authorization token in the form of a string so that only those who know it would be able to remotely trigger this project's builds.
It is not as simple as looking at the revision number (as stated elsewhere) unless your build is for the entire subversion repository. Typically you have projects sharing a single subversion repository and you are building some sub-tree. The global revision number doesn't help.
'svn info [url_to_subtree]' will show the Last Changed Date. You can parse this, figure out whether it is later than your last build date, and trigger a new build if so.
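A rough sketch of that check, using 'svn info --xml' so the Last Changed Date comes back in a machine-readable form (the subtree URL and last-build time are placeholders):

    import subprocess
    import xml.etree.ElementTree as ET
    from datetime import datetime, timezone

    SUBTREE = "http://svn.example.com/repo/myproject/trunk"  # placeholder URL

    # 'svn info --xml' exposes the Last Changed Date as /info/entry/commit/date.
    xml_out = subprocess.run(
        ["svn", "info", "--xml", SUBTREE],
        capture_output=True, text=True, check=True).stdout
    date_text = ET.fromstring(xml_out).find("./entry/commit/date").text

    # Subversion prints UTC timestamps like 2009-06-15T13:45:30.123456Z;
    # keep the seconds part and compare against the last successful build.
    last_changed = datetime.strptime(
        date_text[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)

    last_build = datetime(2009, 6, 1, tzinfo=timezone.utc)  # from your build records
    if last_changed > last_build:
        print("Changes since last build - trigger a new one")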