Our team has a full licence for the TeamCity server, as well as 7 additional agents. Another unrelated team has reached the limits of their free TeamCity licence and is eyeing our licences up.
The powers that be think it's a good idea to run both teams using the same enterprise licence, which means that we'd be hosting the TeamCity configurations on the same server, and either sharing agents or somehow assigning some agents to one team, some to another.
One concern I have is that configuring an agent to only accept certain builds is difficult - our team has hundreds of build configurations, and we create new ones all the time. To limit an agent to certain builds, you have to fully specify the whitelist. So maintaining the agents such that we have full use of some agents, and the other team has full use of theirs will be a pain. On the other hand, just using one pool of agents means now you have arguments over priority and starvation, etc.
Does anyone have any experience of this? Is it a workable solution? How do you configure agents to reserve them for a particular team? How do you configure the server so that each team only sees their own projects, build configurations and agents? Basically what we'd want is complete separation of the projects, just using the same TeamCity server and agents.
As a gut feeling it doesn't look like a good idea...
edit: As an aside, does Hudson do this better? The ivory tower architects want us to change from TeamCity to Hudson because other people are using Hudson. If I tell them that sharing TeamCity won't work, the Hudson camp will probably use it as a stick to beat us with. Joy.
Not sure what version of TeamCity you're using, but the newly released TeamCity v7.0 has a new Agent Pool feature that provides a much easier way to distribute agents. It may be of interest to you; check out the What's New section or the Agent Pools docs for more info.
I had a similar issue with our two departments starting to share the same TeamCity instance to save the expense of additional licenses. I must admit we didn't really have any issues, apart from our agents now being twice as busy.
I enabled Per-project permissions on the Global Settings page and created 2 user groups, one for 'us', and the other for 'them'. You can then configure each group's roles accordingly. If a group does not have the Project Viewer role for a project then it does not appear for them - a great way to only display necessary projects to the group; but there are plenty of other role options to use.
I have never used Hudson so can't compare, unfortunately. I should really try it out, but as I've always got on so well with TC I've never had a reason to.
You can make builds run on certain agents from the Agent Requirements section of each build configuration, thereby limiting any build configuration to certain agents.
For example, if the agent for one team is teamcity1 you can specify:
system.agent.name does not equal teamcity1
So it will never run on that agent.
That way you can at least copy build configurations and they will run on separate agents without fiddly agent configuration.
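For reference, the name this requirement matches is whatever the agent reports, which is set in its conf/buildAgent.properties file, so reserving an agent largely comes down to naming it there. A minimal sketch (the server URL is just a placeholder):

    # conf/buildAgent.properties on the agent to be reserved
    serverUrl=http://teamcity-server:8111
    name=teamcity1
    # custom system.* properties added here also become agent properties
    # that agent requirements can match against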
The other team can create a new TeamCity server, and it will have its own new set of free build configurations and agents.
We don't do this any more, but we used to split our agents into pseudo-pools so we could reserve some for compilations and others for automated tests (because automated test jobs can swamp the grid). We added a "can_run_tests" property to the test agents, and made those builds require that property as an agent condition. It worked great, and it's the sort of thing you can bake into the AMI for a set of cloud agents.
What we do now is to make the compilation and test builds require different AMIs, which does essentially the same thing.
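If the pseudo-pool idea is useful to you, the custom property is just an extra line in each test agent's buildAgent.properties; a rough sketch, reusing the can_run_tests name from above (the exact condition is whatever you set on the build configuration's Agent Requirements page):

    # conf/buildAgent.properties on agents reserved for automated tests
    system.can_run_tests=true
    # test build configurations then add an agent requirement such as
    #   system.can_run_tests equals true
    # compile-only agents simply omit the property, so test builds never land on them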
I've been looking through the site and I have found some information with regards to this topic, but most of the information is old and possibly outdated.
example: Continuous Integration tools
We are: a SaaS product with a microservice (200+) architecture.
We have: We currently do our building through Bamboo, and we use Nexus as an artifact manager with proper versioning. We deploy those artifacts using Bamboo to many different machines. For our frontend deployment we build our code through Continua and use AWS CodeDeploy to handle the deployment. We use Bitbucket and Jira for our development. We have done a POC with Bitbucket Pipelines but we were lacking proper version management there, as well as proper environment management. Setting up 10 servers for every repository manually is just something that we don't want to do.
We want: Since Bamboo is EOL next year and since there are many alternatives with different levels of complexity, we are currently unsure about the tools that are most suited to our needs. We are currently running everything on dedicated Linux machines, but we want to switch to Docker containers in AWS in the near future. Support for running Gulp scripts etc. would be great, since that could help us move from Continua and Bamboo to a single solution.
The setup of Bamboo has been a struggle in the past due to difficulties with the software itself. A nice balance between features and complexity would be best. Does anybody have experience with one or more of the options out there? Some that come to mind are CircleCI, TeamCity, GitLab, Jenkins and AWS CodePipeline.
Many thanks,
Kenny
Bamboo isn't EOL next year, but Atlassian is forcing a switch from perpetual licenses to Data Center licenses that have to be renewed every year. You can get discounted prices when switching from Server to DC licenses. See details at https://www.atlassian.com/licensing/data-center
I would propose Kraken CI. It is open-source and can work on-premises as well as in the cloud. In the cloud, it has support for AWS and Azure, and can do autoscaling depending on the number of tasks.
If you are interested please contact me.
I have scoured the internet to find out what I can on this, but have come away short. I need to know two things.
Firstly, is there a best practice for how TFS & Team Build should be used in a Development > Test > Production environment? I currently have my local VS get the latest files. Then I work on them & check them in. This creates a build that then pushes the published files into a location on the test server which IIS references. This creates my test environment. I wonder then what is the best practice for deploying this to a Live environment once testing is complete?
Secondly, off the back of the previous - my web application is connected to a database. So, the test version will point to a test database. But when this is then tested and put live, I will need that process to also make sure that any data connections are changed to the live database.
I am pretty much doing all this from scratch and am learning as I go along.
I'd suggest you look at Microsoft Release Management, since it's the tool that can help you do exactly the things you mentioned. It can also be integrated with TFS.
In general, release management is:
the process of managing, planning, scheduling and controlling a software build through different stages and environments; including testing and deploying software releases.
Specifically, the tool that Microsoft offers would enable you to automate the release process, from development to production, keeping track of what and how everything is done when a particular stage is reached.
There's an MSDN article, Automate deployments with Release Management, that gives a good overview:
Basically, for each release path, you can define your own stages, each one made of a workflow (the so-called deployment sequence) containing the activities you want to perform using pre-defined machines from a pool.
It's possible to insert manual interventions/approvals if necessary, and the whole thing can be triggered automatically once your build is done.
Since you are pretty much in control of the actions performed on each machine in each stage (through the use of built-in or custom actions/components), it is also certainly possible to change configuration files, for example to point the application at the live database instead of the test one, or to test different scenarios.
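On the database point from your question, one common approach (regardless of which release tooling applies it) is a per-environment config transform; a minimal sketch, assuming a connection string named AppDb and a made-up live server name:

    <!-- Web.Release.config, applied when the Live stage is deployed -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="AppDb"
             connectionString="Data Source=LIVE-SQL;Initial Catalog=AppDb;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>

The release tooling usually also lets you substitute values per stage, but a transform like this is the simplest way to make sure the live deployment points at the live database.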
I have been trying to set up TFS with continuous integration on a local network, where we have 3 developers checking in code.
Which method of deployment should I follow to achieve this?
I found these options:
Deploy TFS in several ways: on one server; on many servers; or in one domain or workgroup or across domains.
Are there any tutorials to follow?
If you want to set up a CI pipeline, it is not necessary to deploy your TFS on several machines. More important is to set up a build controller on a separate machine. With that you end up having the TFS on one machine and the build controller on the other. Also you should customize the build process the way you need it. This really depends on what you want to do, e.g. running multiple build processes one after another.
What are the pros/cons of having a centralized continuous integration/build setup (in our case it will be cruise control) as opposed to project-specific setup?
So far, we had a project-specific setup of CruiseControl, but now many other groups want to move to CI as well, and are asking for a central cruise server. I'd like to know if anyone has any experience (good/bad) to share that can help me on this.
Here are some initial thoughts:
Pros: A central system would be easy to administer, avoiding duplication of effort across the organization in setting up and maintaining cruise instances for various projects.
Cons: The central system may have to be a more controlled setup w.r.t. access etc., and hence may need to be owned by a dedicated person/group, while the project cruise instances can be under the control of the project group, making changes/enhancements faster and project-specific.
I'd appreciate any inputs on this.
One issue we have with our centralized systems is queue length. The more projects you have on the system the more time you have to wait. Or the more you have to split your CI into distributed tasks.
Is it possible to use CruiseControl.Net to set up a build farm? We currently have 4 different build machines building different things at different times, and it's a bit of a headache to balance the load manually. I would prefer to designate one of them to be the master build machine, which would delegate work to the other ones when they are free.
As far as I can determine, there is no support in CruiseControl.Net for build farms - at least not operating the way you describe. CCNet's interpretation of "farm" seems to assume that projects are assigned manually to a machine and a given project will always be built on the same machine.
If you wanted to dynamically select which machine actually performs the build, you would need to create your own mechanism to select that machine and trigger the build on it. There is likely to be quite a bit of complexity associated with this. For instance you would probably need to ensure that the same project does not get built simultaneously on two different machines if a second commit occurs while the previous commit is still being processed.
If there is a shared location that all the build machines can access, it may be possible to use the Filesystem source control block or CCNet's ForceBuild mechanism to start the build on the designated machine, but have all the build machines publish their output for a given project to the same final location.
See the Load-Balancing the Build Farm with CruiseControl.NET blog post for a possible solution.
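To illustrate the ForceBuild idea mentioned above, the master machine's ccnet.config can watch a shared location and then kick the real build project on another server through the forcebuild publisher; a rough sketch with made-up project and machine names (note it still targets a fixed remote machine rather than picking the least busy one):

    <!-- ccnet.config on the master machine: delegate the actual build to buildmachine2 -->
    <project name="MyApp.Trigger">
      <triggers>
        <intervalTrigger seconds="60" />
      </triggers>
      <sourcecontrol type="filesystem">
        <repositoryRoot>\\fileserver\drops\MyApp</repositoryRoot>
      </sourcecontrol>
      <publishers>
        <forcebuild>
          <project>MyApp.Build</project>
          <serverUri>tcp://buildmachine2:21234/CruiseManager.rem</serverUri>
        </forcebuild>
      </publishers>
    </project>

Choosing the remote machine dynamically would still need the custom selection mechanism described earlier.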