We currently have CruiseControl.NET (v.1.6.7981.1) running on a 'development integration server.' We have a number of separate .NET sites that operate under the same IIS site and therefore share a 'bin' folder. This is the required setup due to the CMS implementation.
With a shared bin folder, an assembly change from one site could throw an error and affect all the other sites.
What we do is build and deploy the single site that we're currently working on to the integration environment and then spot-check the other sites to ensure they are still working.
To get more immediate feedback on builds, we are looking at using triggers to build all solutions whenever any one solution has a check-in. Ideally, we would like to avoid having to edit each project's configuration when a new site is developed and brought into the integration environment.
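For illustration, this is roughly the ccnet.config shape I have in mind: a single umbrella project that watches the repository root, so a check-in to any site rebuilds every solution, and a new site only needs one extra task entry in one place (a sketch; all names, URLs, and paths are invented):

    <project name="AllSites">
      <sourcecontrol type="svn">
        <trunkUrl>http://svnserver/repo/trunk</trunkUrl>
        <workingDirectory>C:\ccnet\AllSites</workingDirectory>
      </sourcecontrol>
      <triggers>
        <!-- poll SVN for new check-ins every 60 seconds -->
        <intervalTrigger seconds="60" />
      </triggers>
      <tasks>
        <!-- build every solution that shares the bin folder -->
        <msbuild>
          <projectFile>SiteA\SiteA.sln</projectFile>
        </msbuild>
        <msbuild>
          <projectFile>SiteB\SiteB.sln</projectFile>
        </msbuild>
      </tasks>
    </project>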
Is this a reasonable approach, or is there a better way to go about this?
Thanks!
I have scoured the internet to find out what I can on this, but have come away short. I need to know two things.
Firstly, is there a best practice for how TFS & Team Build should be used in a Development > Test > Production environment? Currently I get the latest files in my local Visual Studio, work on them, and check them in. The check-in creates a build that pushes the published files to a location on the test server that IIS references; this is my test environment. What, then, is the best practice for deploying to a live environment once testing is complete?
Secondly, following on from that: my web application is connected to a database, so the test version points to a test database. When the application is tested and put live, that process will also need to make sure that the data connections are switched to the live database.
I am pretty much doing all this from scratch and am learning as I go along.
I'd suggest you look at Microsoft Release Management, since it's a tool that can help you do exactly the things you mentioned. It can also be integrated with TFS.
In general, release management is:
the process of managing, planning, scheduling and controlling a software build through different stages and environments; including testing and deploying software releases.
Specifically, the tool Microsoft offers lets you automate the release process from development to production, keeping track of what is done, and how, as each stage is reached.
There's an MSDN article, Automate deployments with Release Management, that gives a good overview:
Basically, for each release path, you can define your own stages, each one made of a workflow (the so-called deployment sequence) containing the activities you want to perform using pre-defined machines from a pool.
It's possible to insert manual interventions/approvals if necessary, and the whole thing can be triggered automatically once your build is done.
Since you are pretty much in control of the actions performed on each machine in each stage (through the use of built-in or custom actions/components), it is also certainly possible to change configuration files, for example to test different scenarios.
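As a concrete illustration of the configuration-file point (and of the test-versus-live database question above): a standard Web.config transform can repoint a connection string per stage. This is plain Visual Studio/MSBuild functionality rather than anything specific to Release Management, and the names below are made up:

    <!-- Web.Release.config: hypothetical transform applied when publishing the Release configuration -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <!-- swap the test connection string for the live one, matching on the "name" attribute -->
        <add name="AppDb"
             connectionString="Server=LIVE-SQL;Database=AppDb;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>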
So I have this pretty huge solution with many projects; a few of them use DLLs from other projects in the solution, and some projects copy files to other directories after the build, as post-build events.
When I build the solution locally on my machine everything works, but when I configure a build and run it on the build server (we use TFS), something goes wrong and I get an error when I try to load one of the applications in the solution (the error doesn't give me much information about what went wrong).
So, before I sit down to debug all of this: does anybody know how I can sensibly compare all the build actions performed locally and on the build server, and see the deltas?
I would like to build the solution on the build server exactly as I do on my machine (directory structure, post-build events, etc.).
Thanks a lot
The generally accepted way to do what you're after is to use NuGet for managing your assembly references. You can publish your dependent assemblies into NuGet as part of a continuous delivery process, then reference (and update!) those dependencies in the solutions that consume them as necessary.
This removes ambiguity ("What version of Foo.dll is Project X using?") and reduces runtime errors ("Why is Project X using Foo.dll 3.0? It was never tested with 3.0! It needs to run with 2.7!").
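As a minimal sketch of what that looks like in practice (package id, version, and paths are invented), the CI build packs and pushes the shared assembly, and each consuming project pins the version it was tested against:

    <!-- Foo.Core.nuspec: hypothetical package definition, packed and pushed by the CI build -->
    <package>
      <metadata>
        <id>Foo.Core</id>
        <version>2.7.0</version>
        <authors>OurTeam</authors>
        <description>Shared assemblies consumed by the downstream solutions</description>
      </metadata>
      <files>
        <!-- the assembly produced by the dependency's build -->
        <file src="bin\Release\Foo.dll" target="lib\net40" />
      </files>
    </package>

    <!-- packages.config in a consuming project: Project X pins the tested 2.7 version -->
    <packages>
      <package id="Foo.Core" version="2.7.0" targetFramework="net40" />
    </packages>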
I am setting up a CruiseControl.NET server. So far it only builds one project (a .NET website), and I roughly know how I will set up unit testing, code coverage, etc. in the future.
What I will need to have soon is this:
1. The developers commit changes to SVN continually, so CCNet builds often.
2. CCNet publishes the latest version to the development server as soon as a commit is validated (with unit tests, etc.).
3. The project manager validates a specific version in order to publish it to the pre-production server, and an SVN tag is created from that revision.
The last point is where my problem lies: how exactly can I set things up so that the project manager can, for instance, browse to the CCNet web dashboard, select a specific previous build, and say "this is the build I want to publish"?
I believe my thinking is flawed somewhere, but I can't put my finger on it. Maybe CCNet is not the right place to do these manipulations?
In my mind, I can create an SVN tag using CCNet and mostly work from the trunk, but maybe I can't? Maybe it's the other way around, and I should add a CCNet project every time a tag is created in SVN?
The final goal is to automate the publication process: zip creation (for archiving), web.config modification (using NAnt, for instance), and website publication (using FTP).
In all these steps, I want to keep manual intervention to a minimum. If I can avoid adding a new project to CCNet every time a tag or branch is created in SVN, that would be awesome.
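To make it concrete, this is roughly the kind of NAnt target I have in mind for the zip and web.config steps (a rough sketch; file names and the connection string are invented, and I'm relying on CCNet passing the CCNetLabel property to NAnt builds):

    <target name="package">
      <!-- point the site at the target environment's database -->
      <xmlpoke file="build\web.config"
               xpath="/configuration/connectionStrings/add[@name='AppDb']/@connectionString"
               value="Server=PREPROD-SQL;Database=AppDb;Integrated Security=True" />
      <!-- zip the published site for archiving, labelled with the CCNet build label -->
      <zip zipfile="archive\site-${CCNetLabel}.zip">
        <fileset basedir="build">
          <include name="**/*" />
        </fileset>
      </zip>
      <!-- FTP publication would follow here via an extension task -->
    </target>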
Thanks for your help, and sorry if it's not very easy to read, but it's not very clear in my head either...
Since you can create any task, you should be able to achieve the goal, though unfortunately not out-of-the-box.
Since you use SVN, everything really hinges on the revision number. I think I'd create a separate project for your third scenario and add a parameter through which the PM provides the revision number; then, based on that, I'd tag the sources etc. in my own task.
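Roughly like this, using the dynamic parameters introduced in CCNet 1.5 (a sketch; names and URLs are invented, and the exact parameter syntax may vary slightly between versions):

    <project name="PublishToPreProduction">
      <parameters>
        <!-- the PM enters the validated revision when forcing the build from the dashboard -->
        <textParameter>
          <name>Revision</name>
          <description>SVN revision validated by the project manager</description>
          <required>true</required>
        </textParameter>
      </parameters>
      <tasks>
        <!-- tag the validated revision -->
        <exec>
          <executable>svn</executable>
          <buildArgs>copy "http://svnserver/repo/trunk@$[Revision]" "http://svnserver/repo/tags/release-$[Revision]" -m "Tagged by CCNet"</buildArgs>
        </exec>
        <!-- zip creation, web.config changes and FTP publication would follow as further tasks -->
      </tasks>
    </project>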
Regarding the other points, I think the approach is similar. For web projects we recently started using MSDeploy: the MSDeploy package is created in each stage build, and there is a separate build called Deploy which, when forced, lets us select which package we want to deploy using MSDeploy.
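Creating the package in a stage build is a one-liner once the web publishing targets are available; a hypothetical MSBuild wrapper (project name and paths invented) would look something like:

    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Target Name="PackageWeb">
        <!-- DeployOnBuild/DeployTarget=Package invoke the standard Web Publishing Pipeline -->
        <MSBuild Projects="MyWebApp\MyWebApp.csproj"
                 Properties="Configuration=Release;DeployOnBuild=true;DeployTarget=Package;PackageLocation=$(MSBuildProjectDirectory)\artifacts\MyWebApp.zip" />
      </Target>
    </Project>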
With several environments, however, this started to feel a little like overkill to manage with CCNet, and I'll be looking into kwakee at some point.
I am interested in creating a setup tool for our business application, which is based on a Windows Service and some WF4 workflows, currently hosted in IIS/AppFabric.
Since I want to provide the best possible installation experience for our customers, I want to include the IIS and AppFabric setup prerequisites, as well as the Windows Service application, in one application setup project.
Is there a proper way of doing this? Can someone give me some links and/or tips?
Best regards,
Chris
The standard approach is to build a deployment package and import that into IIS. It uses Web Deploy; see http://go.microsoft.com/?linkid=9278654 for more details.
For client deployment using a setup project, I've been a fan of using WiX with an automated build script (MSBuild or NAnt). It gives me the flexibility to script the build of the setup.exe and to make the changes I need (connection strings) ahead of deployment, leaving the entire process scriptable and automated regardless of environment (dev, prod, QC).
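For illustration, a minimal WiX source for installing and starting a Windows Service looks roughly like this (WiX 3.x; every name, GUID, and path below is a placeholder):

    <?xml version="1.0" encoding="UTF-8"?>
    <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
      <Product Id="*" Name="MyBusinessApp" Language="1033" Version="1.0.0.0"
               Manufacturer="Contoso" UpgradeCode="PUT-GUID-HERE">
        <Package InstallerVersion="200" Compressed="yes" />
        <Media Id="1" Cabinet="product.cab" EmbedCab="yes" />
        <Directory Id="TARGETDIR" Name="SourceDir">
          <Directory Id="ProgramFilesFolder">
            <Directory Id="INSTALLDIR" Name="MyBusinessApp">
              <Component Id="ServiceComponent" Guid="PUT-GUID-HERE">
                <File Id="ServiceExe" Source="bin\Release\MyService.exe" KeyPath="yes" />
                <!-- register the executable as a Windows Service and start it on install -->
                <ServiceInstall Id="SvcInstall" Name="MyService" Start="auto"
                                Type="ownProcess" ErrorControl="normal" />
                <ServiceControl Id="SvcControl" Name="MyService" Start="install"
                                Stop="both" Remove="uninstall" Wait="yes" />
              </Component>
            </Directory>
          </Directory>
        </Directory>
        <Feature Id="MainFeature" Level="1">
          <ComponentRef Id="ServiceComponent" />
        </Feature>
      </Product>
    </Wix>

The IIS/AppFabric prerequisites themselves are typically chained in front of the MSI by a bootstrapper rather than installed by the MSI itself.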
For deploying the workflow components, it's as simple as xcopy deployment, which, like the above, is easily scriptable and automatable.
Our main application uses Commerce Server 2002, and we are currently in the process of upgrading to 2009. I am looking into setting up CC.NET for both apps. I have it pulling from SVN and starting the build, but the build fails because Commerce Server is not installed, so its DLLs are not there.
I don't really want to do a full install of Commerce Server on the CI server if I can avoid it. Does anyone have any experience or advice on setting up the CI server/repo/project so that it builds without CS installed? We currently have no unit tests, so that part is not an issue; the goal is getting it to build and being able to run things like FxCop, etc.
Thanks
As far as I can see, there is no problem in putting just the core DLLs of Commerce Server in a "References" directory created under the same structure as your source code.
I have been using this approach in many projects without problems so far.
Maybe the only problem you'll have now (since you are not yet using this approach) is that you'll need to refactor your solution a bit to repoint the references.
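For example, the reference in a consuming project file would then look something like this (the assembly name is illustrative):

    <!-- project-file fragment: resolve the reference from the checked-in "References"
         directory via a relative HintPath instead of the installed Commerce Server location -->
    <ItemGroup>
      <Reference Include="Microsoft.CommerceServer.Runtime">
        <HintPath>..\References\Microsoft.CommerceServer.Runtime.dll</HintPath>
        <Private>True</Private>
      </Reference>
    </ItemGroup>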
Regards,
Alex