In order to achieve continuous integration, I thought of writing ClearCase triggers.
I came to know that a post-operation trigger [after delivering to the integration stream] would be useful for me.
I sought advice from my architect regarding the implementation of the trigger.
He told me not to use them, as triggers are troublesome in a MultiSite environment.
Is this a myth or is it true? Has anyone faced problems where triggers made their builds more troublesome?
Please advise whether it is safe to use triggers in a MultiSite environment or not.
Your architect may have referred to this technote, which points out the fact that:
By design, trigger types must be created locally in each VOB.
Trigger types, unlike other metadata types (labels, attributes, branches, elements, hyperlinks), cannot be created as global resources in an Administrative VOB because they cannot properly traverse hyperlinks, which is how Administrative VOBs connect to their client VOBs.
You can try and copy a trigger:
The cptype (copy type) command creates a new type object that is a copy of an existing type object. The existing and new objects can be in the same VOB, or in different VOBs. The copy can have the same name as the original only if you are making the copy in a different VOB.
But:
The two objects, original and copy, do not retain any connection after you execute this command.
They are merely two objects with the same properties, and perhaps even the same name. If there are any changes made to the trigger, such as by using cleartool mktrtype -replace, then those changes must be made manually to each copy of the trigger, or you must perform the copy again using the -replace switch; see the cptype reference page for more information.
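If you do experiment with triggers anyway, here is a minimal sketch of what creating and copying one could look like. The trigger name, script paths, PVOB tags and even the exact -postop operation kind are assumptions to check against 'cleartool man mktrtype' on your release:

    # Sketch only: names and paths below are placeholders.
    # Create a UCM post-operation trigger fired after a deliver completes.
    cleartool mktrtype -ucmobject -all -postop deliver_complete \
        -execunix "/usr/local/ci/notify_jenkins.sh" \
        -execwin  "C:\ci\notify_jenkins.bat" \
        -c "Kick the CI server after a deliver" \
        ci_post_deliver@/vobs/pvob

    # Copy the trigger type into every (replicated) VOB that needs it;
    # the copies keep no link to the original, so any later change has to be
    # propagated by hand or re-copied with 'cptype -replace'.
    cleartool cptype trtype:ci_post_deliver@/vobs/pvob trtype:ci_post_deliver@/vobs/other_pvob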
I would recommend using an external system to monitor, trigger and report on your continuous integration.
You can either:
simplify your ClearCase setup by using CCRC instead of the multi-site workflow.
In which case, the article "Continuous integration with IBM Rational ClearCase Remote Client" can help (you don't have to use CruiseControl: Jenkins or Hudson or TeamCity are equally good in this case)
more generally, let a CI tool (again: CruiseControl, Jenkins, Hudson or TeamCity) monitor a specific view which said CI tool updates regularly, detect any changes, and trigger the build. See "Realizing continuous integration".
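For the second option, here is a minimal sketch of the build step such a CI job could run on each poll. The view path, log location and build command are placeholders, and the Jenkins/Hudson ClearCase plugins or CruiseControl can also do the change detection for you (via cleartool lshistory) before calling a step like this:

    #!/bin/sh
    # Sketch only: adapt the view path and the build command to your project.
    VIEW_DIR=/views/ci_int_view/vobs/myproject
    cd "$VIEW_DIR" || exit 1

    # Refresh the snapshot view so it reflects the latest versions selected
    # by its config spec; -log sends the update report to a file of our choosing.
    cleartool update -log /tmp/ci_update.log .

    # Then run whatever your build uses (msbuild, ant, make, ...)
    make all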
I have scoured the internet to find out what I can on this, but have come away short. I need to know two things.
Firstly, is there a best practice for how TFS & Team Build should be used in a Development > Test > Production environment? I currently have my local VS get the latest files. Then I work on them & check them in. This creates a build that then pushes the published files into a location on the test server which IIS references. This creates my test environment. I wonder then what is the best practice for deploying this to a Live environment once testing is complete?
Secondly, off the back of the previous - my web application is connected to a database. So, the test version will point to a test database. But when this is then tested and put live, I will need that process to also make sure that any data connections are changed to the live database.
I am pretty much doing all this from scratch and am learning as I go along.
I'd suggest you look at Microsoft Release Management, since it's the tool that can help you do exactly the things you mentioned. It can also be integrated with TFS.
In general, release management is:
the process of managing, planning, scheduling and controlling a software build through different stages and environments; including testing and deploying software releases.
Specifically, the tool that Microsoft offers would enable you to automate the release process, from development to production, keeping track of what and how everything is done when a particular stage is reached.
There's an MSDN article, Automate deployments with Release Management, that gives a good overview:
Basically, for each release path, you can define your own stages, each one made of a workflow (the so-called deployment sequence) containing the activities you want to perform using pre-defined machines from a pool.
It's possible to insert manual interventions/approvals if necessary, and the whole thing can be triggered automatically once your build is done.
Since you are pretty much in control of the actions performed on each machine in each stage (through the use of built-in or custom actions/components), it is also certainly possible to change configuration files, for example to test different scenarios, etc.
I currently have a CI environment setup using the following tools:
VCS - ClearCase (UCM enabled)
CI Server - Jenkins
Build Engine - MSBuild
Basically, Jenkins is polling my UCM project's integration stream every 2 minutes and building via an MSBuild script I wrote.
While in ClearCase it is not a best practice to have an individual stream for each developer, good CI demands that private builds be run before committing the code. Added to that, ideally I would have atomic commits, which ClearCase provides only in the form of a deliver to a stream.
Currently we are working directly on integration stream, and sometimes our builds fail because Jenkins starts building before the developer finishes her check-ins.
My question is, how can I have a private work area (Sandbox) and atomic commits on ClearCase without creating a stream for each developer? Am I missing something?
Currently we are working directly on integration stream, and sometimes our builds fail because Jenkins starts building before the developer finishes her check-ins
You can write your build script in order to detect if a deliver is in progress.
A deliver is characterized by an activity named deliver.xxx: you can list its content and see if any version in it is in checkout. If yes, the deliver is in progress.
If the most recent deliver has only checked-in versions, you can safely start your build.
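A minimal sketch of that check follows. The stream and PVOB selectors are placeholders, and both the assumption that lsactivity lists activities in creation order and the exact output format should be verified on your release:

    #!/bin/sh
    # Sketch only: stream and PVOB selectors are placeholders.
    PVOB=/vobs/pvob
    INT_STREAM=myproject_int@$PVOB

    # Most recent deliver.* activity on the integration stream
    # (assumes lsactivity lists activities oldest first; verify on your release)
    DELIVER_ACT=$(cleartool lsactivity -fmt "%n\n" -in stream:$INT_STREAM | grep '^deliver\.' | tail -1)

    # Its change set: a deliver still in progress shows CHECKEDOUT versions
    if [ -n "$DELIVER_ACT" ] && cleartool lsactivity -long "activity:$DELIVER_ACT@$PVOB" | grep -q CHECKEDOUT; then
        echo "Deliver in progress, skipping this build"
        exit 0
    fi
    echo "No deliver in progress, safe to build"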
Or:
How can I have a private work area (Sandbox) and atomic commits on ClearCase without creating a stream for each developer
A private area for Jenkins to use would be a snapshot view on each developer stream.
As the name suggests, a snapshot view would take a snapshot of the code, but you need to define a criteria suggesting that Jenkins can build what the snapshot view has updated.
What I have seen used is a 'BUILD' shifting label (a label you re-apply on the newly updated code, and which Jenkins uses in its snapshot view with a selection rule based on that label):
The developer moves his/her label when he/she thinks the current code is ready to be built, and a Jenkins job updates its snapshot view on the developer stream, based on the versions referenced by said shifting label 'BUILD'.
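A sketch of that setup (label type, paths and the config spec lines are placeholders; note that on a UCM stream the view's config spec is generated, so you would adapt the label rule to a non-UCM view or to your site's conventions):

    # --- One-time setup: create the shifting label type (per VOB) ---
    cleartool mklbtype -c "code ready for the CI build" BUILD@/vobs/myvob

    # --- Developer side: (re)apply the label when the code is ready to build ---
    # -replace moves the label if it already sits on another version of an element
    cleartool mklabel -replace -recurse BUILD /vobs/myvob/src

    # --- Jenkins side: a snapshot view whose config spec selects the labelled code ---
    # (config spec of the Jenkins view, e.g. via 'cleartool edcs'):
    #   element * CHECKEDOUT
    #   element * BUILD
    #   load /vobs/myvob/src
    # The job then only has to refresh its view (from inside the view root) before building:
    cleartool setcs -current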
This question came up on the development team I'm working with and we couldn't really get to a consensus:
Should changes to the database be part of the CI script?
Assuming that the application you are working with has a database involved. I think yes because that's the definition of integration. If you aren't including a portion of your application then you aren't really testing your integration. The counter-argument is that the CI server is the place to make sure your basic project setup works -- essentially building a virgin checkout of the latest version of your code.
Is there a "best practices" document for CI that would answer this question? Is this something that is debated among those who are passionate about CI?
Martin Fowler's opinion on it:
A common mistake is not to include everything in the automated build. The build should include getting the database schema out of the repository and firing it up in the execution environment.
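That "firing it up" step can be a couple of lines in the build script. A minimal sketch, where the MySQL client, database name and script paths are illustration-only assumptions:

    # Sketch only: tool, credentials and paths are placeholders.
    # Recreate a throwaway CI database straight from the schema kept in the repository.
    mysql -h "$CI_DB_HOST" -u ci -p"$CI_DB_PASS" \
          -e "DROP DATABASE IF EXISTS app_ci; CREATE DATABASE app_ci;"
    mysql -h "$CI_DB_HOST" -u ci -p"$CI_DB_PASS" app_ci < db/schema.sql
    mysql -h "$CI_DB_HOST" -u ci -p"$CI_DB_PASS" app_ci < db/seed_data.sql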
All code, including the DB schema and prepopulated table values, should be subject to both source control and continuous integration. I have seen far too many projects where source control is used, but not for the DB. Instead there is a master database instance where everyone makes their changes at the same time. This makes it impossible to do branching and also makes it impossible to recreate an earlier state of the system.
I'm very fond of using Visual Studio 2010 Premium's functionality for database schema handling. It makes the database schema part of the project structure, keeping the master schema under source control. A fresh database can be created right out of the project. Upgrade scripts to lift existing databases to the new schema are generated automatically.
Doing change management properly for databases without VS2010 Premium or a similar tool would at best be painful, if possible at all. If you don't have that tool support, I can understand your colleague who wants to keep the DB out of CI. If you have problems arguing for including the DB in CI, then maybe it is an option to first get a decent toolset for DB work? Once you have the right tools, it is a natural step to include the DB in CI.
You have no continuous integration if you have no real integration. This means that all components needed to run your software must be part of CI, otherwise you have something just a bit more sophisticated than source control, but no real CI benefits.
Without database in CI, you can't roll back to specific version of an application and you can't run your test in real, always complete environment.
It is of course not an easy subject. In the project I work on, we use alter scripts that need to be checked in together with the source code changes. These scripts are run on our test database to ensure not only the correctness of the current build, but also that upgrading/downgrading one version up/down is possible and that the update process itself doesn't mess anything up. I believe this is a better solution than dropping and recreating the whole database: it gives you a consistent path to upgrade the database step by step and allows you to use the database in some kind of test environment, with data, users, etc.
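As a sketch, running those checked-in alter scripts in CI could look like this (the MySQL client, the numbered-script naming convention and the paths are my assumptions, not details from the project described above):

    #!/bin/sh
    # Sketch only: apply every alter script, in order, on top of the previous
    # release's schema, so CI proves the upgrade path and not just the end state.
    for script in db/alter/*.sql; do
        echo "Applying $script"
        mysql -h "$CI_DB_HOST" -u ci -p"$CI_DB_PASS" app_ci < "$script" || exit 1
    done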
I am in the process of setting up a continuous integration build for a Spring Roo application using the Rational Team Concert (RTC) IDE and Jazz build engine. When setting up the build definition, the Build Workspace field on the Jazz Source Control tab allows the selection of either a user's repository workspace or a stream.
The RTC Continuous Integration Best Practices and other Jazz build resources consistently refer to using a dedicated repository workspace associated with a build user, leading me to believe that this is the preferred approach. I have not been able to find any information on building from the stream directly. Our project's stream contains all of the artifacts required to build, and I have tested and confirmed that the continuous integration build works from the stream. I am unable to think of any reason why I would need to create and manage a specific workspace for this purpose.
My question is, am I playing with fire by building directly off of the stream? Are there potential downstream complications with this approach that I am not aware of?
Answering my own question in case another SO user has the same question in the future.
After some experimentation, I discovered that a drawback to building directly from the stream was that it ignores the "Build only if there are changes accepted" property on the Jazz Source Control tab. As a result, builds from a stream may only be done at predefined intervals - it is not possible to configure the build to only happen when new changes have been committed to the stream.
A dedicated workspace is required for the build to accept new changes from the stream and use them to trigger a build request.
There is another BIG difference here. It has to do with HOW the build gets done. Let me highlight it.
If you build from a dedicated build repository workspace, then your build workspace already has a copy of all of the code. When your changes are delivered, and the build is kicked off, then only the changed files (your change set) need to be updated and physically copied from the repository to the build repository workspace. Since most changes are small, this involves the copying of anywhere from 0.1% to 2% of your codebase from the repository.
If you build from "the stream", then your build workspace needs to be created (you have to compile somewhere!). So when this is created, your ENTIRE codebase needs to be updated and physically copied from the repository to the build repository workspace. This means retrieving 100% of your codebase from the repository.
Each file operation involves a call to discover the needed resource, fetching this resource from the database hosting the repository, and then having the Jazz application provide this source file over the network. It results in a load on the database server, the web server, and the application server. The more you download like this, the more of a load that you put on these components.
There are some things you can do to minimize this load on the Jazz infrastructure. Using content caching proxies (for example, a simple Squid proxy server) can help.
For more detail on your options here, and the relative merits of those options, go and read my blog post and whitepaper on Jazz Performance concerns (http://dtoczala.wordpress.com/2013/02/11/jazz-performance-a-guide-to-better-performance/). That article is almost a year old now, but still remains valid. You can also look at the Jazz Deployment Wiki (https://jazz.net/wiki/bin/view/Deployment/WebHome), and check out the sections on performance troubleshooting and performance concerns.
I need to set up a continuous integration system. We use ClearCase version control and only snapshot views due to platform restrictions. I have tried setting up Hudson and Luntbuild. They both show the same behaviour. In a view, we have lots of libraries that are used for build but are strictly read-only. The CI system executes cleartool lshistory and finds a change in the VCS. After that, it executes cleartool setcs, which causes update of the view. This can take about half an hour, which is very undesirable for CI. Why wouldn't it update only the changed elements, which were previously obtained by cleartool lshistory? Is there a CI system that can do this?
The update of a snapshot view with a lot of elements can take time.
That is why we use several views in our Hudson CI:
One with the minimum amount of elements, which is monitored by Hudson and updated if a VCS change is detected.
One with the common stuff which does not change that often (if it changes, we trigger the Hudson job manually).
Another solution, especially for the first view, is to use a dynamic view (and skip the update loading times).
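A sketch of how those views might be created (view tags, config spec paths and load rules are placeholders; -stgloc -auto assumes a server storage location is registered at your site, so adjust the storage options as required):

    # View 1: a small snapshot view loading only the fast-moving code,
    # monitored and updated by Hudson on every detected change
    cleartool mkview -snapshot -tag hudson_app_view -stgloc -auto /views/hudson_app_view
    cd /views/hudson_app_view
    cleartool setcs /tmp/app_configspec.txt     # config spec containing e.g. "load /vobs/myproj/app"

    # View 2: a snapshot view holding the big, rarely changing libraries,
    # refreshed only when the job is started manually
    cleartool mkview -snapshot -tag hudson_libs_view -stgloc -auto /views/hudson_libs_view
    cd /views/hudson_libs_view
    cleartool setcs /tmp/libs_configspec.txt    # "load /vobs/myproj/libs"

    # Alternative for the first view: a dynamic view, which skips the update/load
    # step entirely (requires MVFS on the build node)
    cleartool mkview -tag hudson_dyn_view -stgloc -auto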
Yulia,
You may check out our Parabuild - it may work better for you. If not, we will be happy to work with you to resolve any performance issues.