I have been trying to set up TFS with continuous integration on a local network, where we have three developers checking in code.
Which method of deployment should I follow to achieve this?
I found these options:
Deploy TFS in several ways: on one server; on many servers; or in one domain or workgroup or across domains.
Are there any tutorials to follow?
If you want to set up a CI pipeline, it is not necessary to deploy TFS on several machines. What matters more is setting up a build controller on a separate machine, so you end up with TFS on one machine and the build controller on the other. You should also customize the build process the way you need it; this really depends on what you want to do, e.g. running multiple build processes one after another.
Related
I have scoured the internet to find out what I can on this, but have come away short. I need to know two things.
Firstly, is there a best practice for how TFS & Team Build should be used in a Development > Test > Production environment? I currently have my local VS get the latest files. Then I work on them & check them in. This creates a build that then pushes the published files into a location on the test server which IIS references. This creates my test environment. I wonder then what is the best practice for deploying this to a Live environment once testing is complete?
Secondly, off the back of the previous - my web application is connected to a database. So, the test version will point to a test database. But when this is then tested and put live, I will need that process to also make sure that any data connections are changed to the live database.
I am pretty much doing all this from scratch and am learning as I go along.
I'd suggest you look at Microsoft Release Management, since it's a tool that can help you do exactly the things you mentioned. It can also be integrated with TFS.
In general, release management is:
the process of managing, planning, scheduling and controlling a software build through different stages and environments; including testing and deploying software releases.
Specifically, the tool that Microsoft offers would enable you to automate the release process, from development to production, keeping track of what and how everything is done when a particular stage is reached.
There's an MSDN article, Automate deployments with Release Management, that gives a good overview:
Basically, for each release path, you can define your own stages, each one made of a workflow (the so-called deployment sequence) containing the activities you want to perform using pre-defined machines from a pool.
It's possible to insert manual interventions/approvals if necessary, and the whole thing can be triggered automatically once your build is done.
Since you are pretty much in control of the actions performed on each machine in each stage (through the use of built-in or custom actions/components), it is also certainly possible to change configuration files, for example to test different scenarios, etc.
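To make the configuration-file point concrete (this is just a generic sketch, not a built-in Release Management feature): a small PowerShell step run during a stage can rewrite a connection string in Web.config using a value supplied for that stage. The path, connection-string name, and environment variable below are all placeholders.

    # Hypothetical sketch: point the app at the right database for the current stage.
    # $env:TargetConnectionString would be supplied per stage by your release tooling;
    # "MyAppDb" and the config path are placeholder names.
    $configPath = 'C:\inetpub\MyApp\Web.config'

    [xml]$config = Get-Content $configPath
    $conn = $config.configuration.connectionStrings.add |
            Where-Object { $_.name -eq 'MyAppDb' }
    $conn.connectionString = $env:TargetConnectionString   # test DB in Test, live DB in Prod
    $config.Save($configPath)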
Another image to give you an idea of how it can be done:
We have a distributed system with many services which talk to each other.
Sometimes a code change in one service will require a feature to have been deployed in another service.
We use Octopus to deploy all the things, which is cool, but we really want to prevent services from being deployed before the things they depend on are deployed.
Is there a way we can do this with Octopus Deploy?
For example, can I make the NuGet package for one service depend on an explicit version range of another package?
If you don't want to deploy all your projects as one massive deployment with a series of steps that push your different services to different machines, then I don't think there's a built-in way to make your deployments dependent on each other's version numbers like that (see this UserVoice suggestion for Octopus asking for that very feature).
However, I do think you could write a PowerShell script that runs as a pre-deployment step and checks the version number of one NuGet package against a version range stored in another. The script could then halt or allow the deployment accordingly.
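A rough sketch of what that pre-deployment step could look like (the Dependency.* variable names are made-up Octopus project variables, and reading the deployed version from a text file is just one possible convention):

    # Hypothetical pre-deployment check for an Octopus PowerShell step.
    # $OctopusParameters is provided by Octopus; the Dependency.* variables
    # below are made-up project variables you would define yourself.
    $min = [Version]$OctopusParameters['Dependency.MinVersion']
    $max = [Version]$OctopusParameters['Dependency.MaxVersion']

    # Assume the dependent service's deployment drops its version into a text file;
    # a registry key, an HTTP endpoint, etc. would work the same way.
    $deployed = [Version](Get-Content $OctopusParameters['Dependency.VersionFile'])

    if ($deployed -lt $min -or $deployed -gt $max) {
        Write-Error "Dependency version $deployed is outside the required range $min-$max."
        exit 1   # a non-zero exit code fails the step and halts the deployment
    }

    Write-Host "Dependency version $deployed satisfies $min-$max, continuing."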
I am using Octopus Deploy and TeamCity to automate the testing, building, packing, and deployment of a .NET app to multiple servers. Most of the servers have one instance of the app, but a few of them have multiple instances.
I cannot figure out the best way to do this, or even if it is reasonably possible in Octopus.
Can anyone provide a method to do this? I know I could technically script the entire process in PowerShell, but it would be nice if I could take advantage of the IIS features of Octopus Deploy.
You could set up multiple steps to deploy the same NuGet package.
The first one would apply the IIS settings common to all servers.
Each additional one could target a specific tag (instance1, instance2, etc.) and carry the custom IIS settings for those instances. Just tag the machines that host multiple instances appropriately, and those extra steps will only run on those machines.
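If the built-in IIS step doesn't cover everything you need for the extra instances, one of those tag-scoped steps could also be a custom PowerShell script that creates the additional sites itself. A rough sketch, assuming hypothetical project variables InstanceSiteName, InstancePort and InstancePath scoped per tag:

    # Hypothetical custom step for the extra instances: create an IIS site per
    # instance from values scoped to the machine's tag. Requires the
    # WebAdministration module (installed with the IIS management tools).
    Import-Module WebAdministration

    # These variable names are made up for the example; define and scope them in Octopus.
    $siteName = $OctopusParameters['InstanceSiteName']
    $port     = $OctopusParameters['InstancePort']
    $path     = $OctopusParameters['InstancePath']

    # Create a dedicated app pool and site for this instance if they don't exist yet.
    if (-not (Test-Path "IIS:\AppPools\$siteName")) {
        New-WebAppPool -Name $siteName | Out-Null
    }
    if (-not (Test-Path "IIS:\Sites\$siteName")) {
        New-Website -Name $siteName -Port $port -PhysicalPath $path -ApplicationPool $siteName | Out-Null
    }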
Our team has a full licence for the TeamCity server, as well as 7 additional agents. Another unrelated team has reached the limits of their free TeamCity licence and is eyeing our licences up.
The powers that be think it's a good idea to run both teams using the same enterprise licence, which means that we'd be hosting the TeamCity configurations on the same server, and either sharing agents or somehow assigning some agents to one team, some to another.
One concern I have is that configuring an agent to only accept certain builds is difficult - our team has hundreds of build configurations, and we create new ones all the time. To limit an agent to certain builds, you have to fully specify the whitelist. So maintaining the agents such that we have full use of some agents, and the other team has full use of theirs will be a pain. On the other hand, just using one pool of agents means now you have arguments over priority and starvation, etc.
Does anyone have any experience of this? Is it a workable solution? How do you configure agents to reserve them for a particular team? How do you configure the server so that each team only sees their own projects, build configurations and agents? Basically what we'd want is complete separation of the projects, just using the same TeamCity server and agents.
As a gut feeling it doesn't look like a good idea...
edit: As an aside, does Hudson do this better? The ivory tower architects want us to change from TeamCity to Hudson because other people are using Hudson. If I tell them this sharing TeamCity won't work, the Hudson camp will probably use it as a stick to beat us with. Joy.
Not sure what version of TeamCity you're using, but the newly released TeamCity v7.0 has a new Agent Pool feature that provides a much easier way to distribute agents. It may be of interest to you; check out the What's New section or the Agent Pools docs for more info.
I had a similar issue with our two departments starting to share the same TeamCity instance to save the expense of additional licenses. I must admit we didn't really have any issues, apart from our agents now being twice as busy.
I enabled Per-project permissions on the Global Settings page and created 2 user groups, one for 'us', and the other for 'them'. You can then configure each group's roles accordingly. If a group does not have the Project Viewer role for a project then it does not appear for them - a great way to only display necessary projects to the group; but there are plenty of other role options to use.
I have never used Hudson so can't compare, unfortunately. I should really try it out, but as I've always got on so well with TC I've never had a reason to.
You can make builds run on certain agents: in the agent requirements section of each build configuration you can limit that configuration to certain agents.
For example, if one team's agent is teamcity1, you can specify:
system.agent.name does not equal teamcity1
So it will never run on that agent.
That way you can at least copy build configurations and they will run on separate agents without fiddly agent configuration.
The other team can create a new TeamCity server, and it will have its own new set of free build configurations and agents.
We don't do this any more, but we used to split our agents into pseudo-pools so we could reserve some for compilations and others for automated tests (because automated test jobs can swamp the grid). We added a "can_run_tests" property to the test agents, and made those builds require that property as an agent condition. It worked great, and it's the sort of thing you can bake into the AMI for a set of cloud agents.
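For reference, a minimal sketch of how such a property can be added to an agent (the can_run_tests name follows the description above; the buildAgent.properties path assumes a default Windows agent install, so adjust it for your set-up):

    # Sketch: tag a test agent by appending a custom system property to its
    # buildAgent.properties; the agent picks the change up after a restart.
    $props = 'C:\BuildAgent\conf\buildAgent.properties'   # default install path; adjust
    Add-Content -Path $props -Value 'system.can_run_tests=true'

    # The test build configurations then add an agent requirement such as
    #   system.can_run_tests exists
    # so those builds only go to agents carrying the property.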
What we do now is make the compilation and test builds require different AMIs, which does essentially the same thing.
My group works on flight simulation software. To achieve faster and easier validation and verification, we decided to introduce continuous integration, but I have no idea which CI server we should choose.
Our constraints:
- We need to compile on different machines with different platforms (Linux, HP), both on our local network and on the client's network.
I mean, we need to run different tasks on different remote machines. Some of them will require authorization.
- We would prefer an open-source CI server
- The sources are in different languages: C, C++, Java ...
- Support for SVN, CVS, and ClearCase
- Automated tests and reports
- The tests need different machines working together
I've looked at TeamCity; it seems good, but it's not open source.
Hudson is for you!
Edit to be more precise about your requirements:
Hudson runs on a JVM (as a standalone service using Jetty, or on a Tomcat server). Thus, the platform is not a problem.
Hudson is open-source.
Hudson manages Java projects natively, but you can ask it to compile C, C++ or .NET projects.
It supports SVN and CVS natively, and a plugin for ClearCase exists (here).
Automated tests and reports: You will need to implement them, of course, but Hudson will launch them for you. For Java projects, simply use Maven for that!
The tests need different machines working together: Hudson can be launched on several machines (one master, several slaves). Each slave can be hosted by any kind of machine.
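To give a sense of how lightweight the standalone mode is, here is a minimal way to bring up a master, assuming Java is on the PATH and hudson.war has already been downloaded (the port is just an example):

    # Sketch: run a standalone Hudson master from hudson.war on port 8080.
    & java -jar .\hudson.war --httpPort=8080
    # Slaves are then attached from the web UI (Manage Hudson -> Manage Nodes),
    # e.g. over SSH or via JNLP, to get the master/slave set-up described above.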
+1 for Hudson.
We are using Hudson together with SVN (version control) and Selenium RC (functional testing).
Very easy to set up, has tons of modules for integration, and very visible to all members of the team, especially if you're using the Hudson Build Monitor Firefox plugin.
I used Jenkins earlier, but now I prefer TC because it is great for a lot of purposes.
If you need to work with different platforms, it gives you the option to install several build agents, each on the OS you need. You're also able to set up so-called agent clouds.
If you need to build your applications per branch, that can be done without any extra scripting.
A lot of VCSs are supported.
Using Maven you can even build Flex applications, including running automated tests (provided a Windows build agent is installed).