I'm trying to introduce continuous integration in an old project, and we've got quite a specific situation - it's possible to put the CI server only on our test server, which runs CentOS. The server has quite a lot of unused RAM and CPU capacity.
However, we need to run the Ant builds on Windows (this is also how the project used to do its packaging); it turned out that the Unix versions of Java and Ant do not produce the same output (verified by binary comparison).
I drew up a diagram of how I imagine it could work, but I'm really wondering whether that is even possible with the tools already available.
The black part is implemented; I'm curious whether the red part is possible. Can a Jenkins slave communicate with a master running on a different OS?
It should be possible. I have a feeling you will need to play with your network settings. But before you start changing anything, see if you can start a headless slave by following these directions: https://wiki.jenkins-ci.org/display/JENKINS/Step+by+step+guide+to+set+up+master+and+slave+machine
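For reference, once the node is defined on the master, launching a headless JNLP slave on the Windows side typically boils down to a single command; the master URL and node name below are hypothetical:

    # slave.jar is served by the master at /jnlpJars/slave.jar
    java -jar slave.jar -jnlpUrl http://ci.example.com:8080/computer/windows-builder/slave-agent.jnlp -secret <secret>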
Using VirtualBox on CentOS, it is possible to run a Windows VM on your CentOS host.
I'm not sure you need Docker to launch your Jenkins slave.
It may be better to use a standard JNLP Windows service to connect your Windows slave to the Dockerised Jenkins master.
If the master cannot reach the Windows node using this method, you may have to tweak the network configuration on the Windows VM.
But I'm not sure it's necessary.
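If you do go the VirtualBox route, bringing the Windows VM up without a GUI on the CentOS host is a one-liner; the VM name here is hypothetical:

    # start the VM headless so no X session is needed on the server
    VBoxManage startvm "win-builder" --type headless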
I have a problem with GitLab Runner on 32-bit Windows. The runners are at version 14.4.0 and our GitLab instance is at version 14.4.1-ee. The runners are tied to specific machines running 32-bit Windows 10 Pro (10.0.19043), use shell executors (PowerShell), and run with full administrative privileges (i.e., as the local system user). This is outside my control.
Sporadically, and for no discernible reason, the runners stop sending log traffic to our GitLab instance. They should be uploading several MB worth of logs. I don't see failed attempts to upload logs in debug mode. I don't see any of the network traffic I expect in Wireshark. This might correlate with issues loading a custom driver, but I can't say for sure.
The workaround is even more perplexing. The following protocol fixes the issue: remove all the runners using the GitLab CI interface; uninstall the malfunctioning runner; download a new runner binary, register and install it. If I repeat the same steps, except without downloading a new binary, the issue persists. The files are identical when I run a binary diff on them.
I haven't been able to extract any relevant information from the system event logs or network traffic. The issue only affects our runners on 32-bit Windows. It doesn't affect 64-bit Windows or runners on Linux, regardless of architecture. It seems to happen sporadically, in the sense that I can't correlate it with anything interesting happening on the affected machines.
Clearly, something about our 32-bit Windows environments is different and is causing the runners to malfunction. I just don't know what it is. I would appreciate any direction in figuring out the source of this problem. The fact that downloading new binaries makes the difference has me worried, but I don't have any reason to suspect our machines have been compromised.
This problem was resolved by running tests remotely over SSH. It's almost certainly a bug with the 32-bit Windows distribution of gitlab-runner.
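For anyone hitting the same wall, re-registering the affected runner with the SSH executor looked roughly like the sketch below; the URL, token, and host details are placeholders:

    # the job then executes on the Windows box over SSH instead of a local shell
    gitlab-runner register \
      --non-interactive \
      --url https://gitlab.example.com/ \
      --registration-token <token> \
      --executor ssh \
      --ssh-host 192.0.2.10 \
      --ssh-user ci \
      --ssh-identity-file /home/ci/.ssh/id_rsa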
Our company produces cross-platform software, and we have a Bamboo instance which builds projects under various incompatible environments (Linux, Windows, OS X). There's a VM configured for building under each environment. So is it possible to run several remote agents on each VM to perform concurrent builds of different projects?
Yes, that is possible. See the "Changing where the remote agent stores its data" section of the Bamboo Remote Agent Installation guide.
To make this work, for each remote agent you run on the same machine, you will need to specify a different location for the agent to store its data (otherwise builds will fail trying to write to the same location).
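For example, installing two agents on one machine with separate homes might look like this; the paths, installer version, and server URL are hypothetical:

    # each invocation gets its own bamboo.home so the agents don't clobber each other's data
    java -Dbamboo.home=/opt/bamboo-agent-1 -jar atlassian-bamboo-agent-installer-<version>.jar http://bamboo.example.com:8085/agentServer/
    java -Dbamboo.home=/opt/bamboo-agent-2 -jar atlassian-bamboo-agent-installer-<version>.jar http://bamboo.example.com:8085/agentServer/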
Yes, it is possible to run multiple remote agents on both Windows- and Linux-based hosts. I currently manage the remote agents for the Linux hosts, so I can't comment on the Windows service remote agents.
I implemented the multiple remote agents by first creating a folder for each agent, then, on installation of each agent, specifying the location of that agent's bamboo.home.
On the Bamboo master server you can rename the remote agents so you can tell apart the agent that is running your build job.
I can't comment on Linux, but on Windows, yes you can.
You can change the Windows service name from the default 'Bamboo Remote Agent' to something like 'Bamboo Remote Agent 1', 'Bamboo Remote Agent 2' by:
uninstalling with bin/uninstall-nt-service
editing conf/wrapper.conf to change the service name and display name (see the example below)
reinstalling with bin/install-nt-service
After doing this you should be able to run multiple agents fine.
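The wrapper.conf entries in question are the Java Service Wrapper service-name properties; for a second agent they might read (names are just examples):

    # conf/wrapper.conf of agent 2
    wrapper.ntservice.name=bamboo-remote-agent-2
    wrapper.ntservice.displayname=Bamboo Remote Agent 2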
I was able to create two agents on the same server. In this case I actually wanted the same bamboo-home set so that either agent can build to the same location ... it's just that when some of my builds take longer, I have a second agent available for the quicker builds that would otherwise queue up. In bamboo-agent.sh, I changed:
APP_NAME, APP_LONG_NAME, and REAL_DIR
I didn't change anything in conf/wrapper.conf. And in the GUI I updated the name of the agents by clicking "Edit Details" on the agent capability page.
Just in case, I also made the agentUuid tag in bamboo-agent.cfg.xml empty, thinking it would get overwritten when I started the agent. I didn't want two agents starting with the same UUID, even though I couldn't tell what this field was used for.
As far as I can tell, this worked as I expected. I could see two agents in the GUI and kicked off two builds simultaneously. FWIW, I don't know whether this is considered a hack or not.
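A sketch of what those bamboo-agent.sh edits might look like for the second agent (the actual values depend on your layout):

    # bamboo-agent.sh of the second agent
    APP_NAME="bamboo-agent-2"
    APP_LONG_NAME="Bamboo Remote Agent 2"
    REAL_DIR="/opt/bamboo-agent-2/bin"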
I would like to run TeamCity (with a build agent) in a Linux VM to handle our non-.NET projects. But in the same breath, I'd like to have a build agent set up on a Windows server to handle all of the .NET projects.
I can't think of any reason why this wouldn't work, but does anyone have any experience or ideas about the problems I might encounter before I spend too much real time on this?
Ta
It's fully supported. TeamCity also knows which agents to route builds to.
This is a very normal scenario, and many projects I know of do this without any problems. Just make sure that for the builds' Agent Requirements, you properly direct the appropriate job to the appropriate agent. One criterion can be that agent.os.name should contain Windows or Linux, etc.
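For instance, the Agent Requirement on a Windows-only build configuration could be set up in the TeamCity UI along these lines (a hypothetical example):

    # Build Configuration -> Agent Requirements
    #   Parameter: agent.os.name
    #   Condition: contains
    #   Value:     Windows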
First with the background...
We have a Linux server that supports multiple projects.
The Clearcase server and repository are installed on this Linux server.
Different projects require different cross-compilers and libraries, and all of them are installed on the server.
Users can choose different tool sets by running different scripts, which export different environment variable values such as include paths and compilers (an example script is sketched below).
Users need to run cleartool to mount the repository.
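For illustration, one of those per-project scripts might look like this; the toolchain name and paths are made up:

    # toolset-armv7.sh: select a cross toolchain for this shell session
    export PATH=/opt/toolchains/armv7/bin:$PATH
    export CC=arm-linux-gnueabihf-gcc
    export CXX=arm-linux-gnueabihf-g++
    export C_INCLUDE_PATH=/opt/toolchains/armv7/include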
Developers develop in Eclipse and have two options:
SSH into the server and run Eclipse there with X11 tunneling.
Install Eclipse locally on their Windows machine and invoke builds from the SSH terminal.
Now:
Problem with #1 is that Eclipse operations (typing, content assist, etc.) are extremely laggy.
Problem with #2 is that the developers need to go through extra hoops to build their code.
This is what I have tried:
Set up Remote System Explorer, which allows remote editing of files and remote running of the compiler:
How to build a c++ project on a remote computer in Eclipse?
This approach works perfectly for files that do not need special environment variable values or the Clearcase repository to be mounted, but I could not figure out how to get all of these things to integrate.
It would be great if someone could let me know how to direct RSE to run a script (possibly different per project) that sets the environment variables and runs the cleartool commands to mount the repository, so that it can locate the files.
The cleartool command arguments would be different per user for setting up a particular view.
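For context, the per-user setup amounts to a couple of cleartool calls like these; the view tag and VOB tag are placeholders:

    # start the user's dynamic view and mount the VOB it needs
    cleartool startview jdoe_project1_view
    cleartool mount /vobs/project1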
Some extra info that may help:
I have root access to the development server
The Clearcase filesystem is mapped to a drive on the Windows machine
Thanks in advance for saving me hours of frustration dealing with a slow network!
==================
Additional detail per comments:
- The VOB storage is located locally on the Linux server. We would SSH to the server and start Eclipse there, therefore the delay should not be due to dynamic vs. snapshot views; GUI performance seems to be the real problem.
- We also mount the same view on Windows using Region Synchronizer. When running the local copy of Eclipse installed on Windows, there are no performance problems.
So this question can probably be solved by answering either question:
1. How to improve X11 performance such that development on Linux will suffice?
2. How to set up Windows Eclipse to perform all the steps mentioned above when building projects?
I came here with a similar question to your part two, but alas, no one has answered it. However, I have an answer to your part one: https://www.nomachine.com/. It speeds up X11 forwarding considerably.
I'm currently working on a server-side product which is a bit complex to deploy on a new server, which makes it an ideal candidate for testing out in a VM. We are already using Hudson as our CI system, and I would really like to be able to deploy a virtual machine image with the latest and greatest software as a build artifact.
So, how does one go about doing this exactly? What VM software is recommended for this purpose? How much scripting needs to be done to accomplish this? Are there any issues in particular when using Windows 2003 Server as the OS here?
Sorry to deny anyone an accepted answer here, but based on further research (thanks to your answers!), I've found a better solution and wanted to summarize what I've found.
First, both VirtualBox and VMWare Server are great products, and since both are free, each is worth evaluating. We've decided to go with VMWare Server, since it is a more established product and we can get support for it should we need it. This is especially important since we are also considering distributing our software to clients as a VM instead of a special server installation, assuming that the overhead of the VMWare Player is not too high. Also, there is a VMWare scripting interface called VIX which one can use to install files directly to the VM without needing to install SSH or SFTP, which is a big advantage.
So our solution is basically as follows... first we create a "vanilla" VM image with OS, nothing else, and check it into the repository. Then, we write a script which acts as our installer, putting the artifacts created by Hudson on the VM. This script should have interfaces to copy files directly, over SFTP, and through VIX. This will allow us to continue distributing software directly on the target machine, or through a VM of our choice. This resulting image is then compressed and distributed as an artifact of the CI server.
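As an illustration of the VIX piece, pushing an artifact into a running VM with the vmrun tool looks roughly like this; the host, credentials, and paths are placeholders:

    # copy the Hudson artifact into the guest without SSH or SFTP
    vmrun -T server -h https://vmhost.example.com:8333/sdk -u admin -p hostpass \
      -gu Administrator -gp guestpass \
      copyFileFromHostToGuest "[standard] product/product.vmx" \
      /var/hudson/artifacts/installer.exe "C:\install\installer.exe"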
Regardless of the VM software (I can recommend VirtualBox, too) I think you are looking at the following scenario:
Build is done
CI launches virtual machine (or it is always running)
CI uses scp/sftp to upload the build into the VM over the network
CI uses ssh (if available on the target OS running in the VM) or another remote command execution facility to trigger installation in the VM environment (sketched below)
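A minimal sketch of the last two steps, with made-up hostnames and paths:

    # upload the build into the VM, then trigger the install remotely
    scp build/product.tar.gz ci@vm.example.com:/tmp/
    ssh ci@vm.example.com 'sudo /opt/product/install.sh /tmp/product.tar.gz'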
VMWare Server is free and a very stable product. It also gives you the ability to create snapshots of the VM slice and rollback to previous version of your virtual machine when needed. It will run fine on Win 2003.
In terms of provisioning new VM slices for your builds, you can simply copy and paste the folder that contains the VMWare files, change the SID and IP of the new VM, and you have a new machine. It takes about 15 minutes, depending on the size of your VM slice. No scripting required.
If you use VirtualBox, you'll want to look into running it headless, since it'll be on your server. Normally, VirtualBox runs as a desktop app, but it's possible to start VMs from the commandline and access the virtual machine over RDP.
VBoxManage startvm "Windows 2003 Server" -type vrdp
We are using Jenkins + Vagrant + Chef for this scenario.
So you can do the following process:
Version control your VM environment using vagrant provisioning scripts (Chef or Puppet)
Build your system using Jenkins/Hudson
Run your Vagrant script to fetch the last stable release from CI output
Save the VM state to reuse in the future (see the sketch below).
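A rough command-line sketch of that cycle; the output file name is hypothetical, and the provisioning itself is whatever your Chef/Puppet scripts define:

    # boot the VM and run provisioning, which pulls the latest CI artifact
    vagrant up
    # re-run provisioning on an already-running VM to refresh it
    vagrant provision
    # snapshot the provisioned VM as a reusable box
    vagrant package --output product-latest.box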
Reference:
vagrantup.com
I'd recommend VirtualBox. It is free and has a well-defined programming interface, although I haven't personally used it in automated build situations.
Choosing VMWare is currently NOT a bad choice.
However,
Just like VMWare gives support for VMWare Server, Sun gives support for VirtualBox.
You can also accomplish this task using VMWare Studio, which is also free.
The basic workflow is this:
1. Create an XML file that describes your virtual machine
2. Use Studio to create the shell.
3. Use VMWare server to provision the virtual machine.