Can a master Jenkins run jobs on remote Jenkins instances? - continuous-integration

We are migrating from CruiseControl.NET to Jenkins to stay in sync with a partner, so that we don't end up with two different CI setups. We are trying to set up Jenkins to do something similar to what we had CruiseControl doing, which was to have a centralized server invoke projects (jobs in Jenkins) on remote build machines.
We have multiple build machines associated with a single project, so when we build the project from the centralized CI server it invokes the projects on the remote CI servers. The remote CI servers then pull the version from the centralized CI server's project.
In CruiseControl we set up a project that would do a forceBuild on the remote projects. The projects on the build machines used a remoteProjectLabeller to retrieve the version number so they were always in sync.
To retrieve the master build number:
<labeller type="remoteProjectLabeller">
  <project>MainProject</project>
  <serverUri>tcp://central-server:21234/CruiseManager.rem</serverUri>
</labeller>
To invoke the remote projects:
<forcebuild>
  <project>RemoteBuildMachineA</project>
  <serverUri>tcp://remote-server:21234/CruiseManager.rem</serverUri>
  <integrationStatus>Success</integrationStatus>
</forcebuild>
So far in Jenkins I've set up a secondary server as a slave using Java Web Start, but I don't know how I would have the master Jenkins invoke the projects set up on the slaves.
Can I set up Jenkins to invoke projects (jobs) on slaves?
Can I make the slaves pull the version number from the master?
EDIT -
Let me add some more info.
The master and the remote build machine slaves are all running Windows.
We had the central master CruiseControl kick off the remote projects at the same time so they ran concurrently, and we would like to have the same thing with Jenkins if possible.

Jenkins has the concept of build agents, which could fit your scenario better: there is a master that triggers the build and slaves that perform it. A build can then be restricted to certain categories of slaves only (e.g. if it depends on specific software that is not present on all agents). All data is managed centrally by the master, which I believe is what you are trying to achieve.

In Jenkins it is not possible to trigger a build on a slave, i.e. where a build runs is not controlled by whoever triggers it. It is controlled by the settings of the job itself. Each job has a setting called "Restrict where this project can be run".
In your case you would probably have two jobs: A and B. A would be restricted to run on "master" and B would be configured to run on "slavename". Then all that is left to do is for A to trigger B.
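For reference, the same restrict-and-trigger idea expressed in today's Pipeline syntax would look roughly like the sketch below (the freestyle settings described above need no code at all; the label "master" and the downstream job name "B" are taken from this example):
// Job A: runs on the master and fires job B, which is itself restricted
// to the slave through its own "Restrict where this project can be run" setting.
pipeline {
    agent { label 'master' }              // pin job A to the master node
    stages {
        stage('Trigger B') {
            steps {
                build job: 'B', wait: false   // start B and continue immediately
            }
        }
    }
}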
But you had further constraints: you want A and B to check out the same version from version control, and you want A and B to run in parallel. There are many ways to accomplish that, but the easiest is probably to define a multi-configuration job.
There is no way to turn an existing free-style job into a multi-configuration job, so you will have to make a new job.
1. Choose New Job.
2. Choose Build multi-configuration project and give it a name.
3. Under Configuration Matrix, open the "Add axis" drop-down.
4. Choose Slaves.
5. Check the master and the slave.
6. Add the SCM information and build step(s).
When the job runs, it runs on both the master and the slave. Jenkins makes sure they build from the same source version.
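For what it's worth, newer Jenkins versions can express the same fan-out with the Declarative Pipeline "matrix" directive; the sketch below assumes the node labels "master" and "slavename" from above and a hypothetical build.cmd script:
// Declarative Pipeline sketch - one parallel branch per node label.
pipeline {
    agent none
    stages {
        stage('Build everywhere') {
            matrix {
                agent { label "${NODE_LABEL}" }   // each branch runs on its own node
                axes {
                    axis {
                        name 'NODE_LABEL'
                        values 'master', 'slavename'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            checkout scm          // check out the sources on this node
                            bat 'build.cmd'       // hypothetical Windows build step
                        }
                    }
                }
            }
        }
    }
}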

From the /jenkins/computer URL, you can add, remove, and reconfigure "nodes", which are either local or remote "build agents".
Jobs can then be constrained to run on particular build agents, or follow various rules to select the appropriate build agent out of the available agents.

I was thinking about Jenkins too much like CruiseControl where the job is defined on the remote machine. So in Jenkins the remote projects are defined on the master and delegated to a remote machine via an agent.
I used the Java Web Start agent installed as a Windows service on the remote machines. To have specific jobs run on specific remote machines I gave each remote node a unique label in its slave configuration. To bind specific jobs to specific slaves I used the slave's label in each job configuration ("Restrict where this project can be run").
To trigger the jobs from a single master job I created a free-style job whose only action is "Build other projects", with a comma-separated list of project names. This job builds the downstream jobs in parallel.
I'm still looking for a way to send a master build number to the downstream jobs so they always stay in sync. (This is used to version DLLs and such.)
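A common way to do this (untested here, but it is what the Parameterized Trigger plugin exists for): replace "Build other projects" with the plugin's "Trigger parameterized build on other projects" post-build action and pass the upstream build number as a predefined parameter, e.g.:
MASTER_BUILD_NUMBER=$BUILD_NUMBER
Each downstream job then declares a MASTER_BUILD_NUMBER string parameter (the name is arbitrary) and uses it when versioning the DLLs. In Pipeline syntax the equivalent trigger would be roughly:
// Sketch: trigger a downstream job and hand it the upstream build number
build job: 'RemoteBuildMachineA',
      wait: false,
      parameters: [string(name: 'MASTER_BUILD_NUMBER', value: env.BUILD_NUMBER)]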

Related

Jenkins declarative pipeline - how to manage Jenkins server

How can I have a declarative Jenkins pipeline that is able to manage the Jenkins server itself? I.e.:
- a pipeline that is able to query which jobs I have in a folder and then disable/enable those jobs
- query which agents are available and trigger a job on a particular agent
The pipeline global variable currentBuild has a property called rawBuild that provides access to the Jenkins model for the current build. From there you can get to many of the Jenkins internals.
I'm not sure what you will find in the way of agent selection and job triggering - have a look; there are (or were) plugins that offer alternatives to the default model.
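As a rough illustration of what rawBuild and the Jenkins object model let you do, here is a scripted-pipeline sketch (an administrator has to approve the Jenkins-internal calls in the script security sandbox, and the folder name 'MyFolder' is hypothetical):
// Scripted Pipeline sketch - enumerate jobs in a folder and list agents.
node {
    // currentBuild.rawBuild exposes the underlying hudson.model.Run object
    echo "This build's causes: ${currentBuild.rawBuild.causes}"

    // Walk the jobs inside a folder and disable the freestyle ones
    def folder = jenkins.model.Jenkins.instance.getItemByFullName('MyFolder')
    folder.items.each { job ->
        echo "Found ${job.fullName}"
        if (job instanceof hudson.model.AbstractProject) {
            job.disable()                 // job.enable() turns it back on
        }
    }

    // List the agents that are currently configured on the master
    jenkins.model.Jenkins.instance.nodes.each { n ->
        echo "Agent ${n.nodeName}, labels: ${n.labelString}"
    }
}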

Cluster deployment with gitlab ci shell runner

I'm trying to migrate our CI and CD processes from Jenkins to GitLab CI. How should I set up GitLab to build our application on a cluster?
In general, I expect GitLab to clone the repository to all nodes in the cluster, execute my Bash deployment script, and run some tests if needed. From my point of view, I should start runners on all cluster nodes and start the build with all necessary tasks. Is this possible in GitLab? I can start only one runner per build. Maybe there are different approaches for this task?
For example, I have a cluster with 2 nodes, A and B. I need to clone the repository to both nodes and start the build script on each of them. I have registered one gitlab-ci-multi-runner on each node, but the build executes on only one of these nodes.
What you are describing can be achieved by giving each runner a different tag and defining multiple build jobs in GitLab (YAML anchors help with this), but it's not the ideal approach. The preferred way would be to use a GitLab runner to run the tests and then a separate job that runs Ansible/Chef/Salt/Puppet or whatever deployment tool you prefer.
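A minimal .gitlab-ci.yml sketch of the tag-per-node approach (the runner tags node-a/node-b and the deploy.sh script are placeholders for your own setup):
# Assumes one shell runner registered on each node, tagged "node-a" and "node-b".
stages:
  - deploy

.deploy_template: &deploy_job
  stage: deploy
  script:
    - ./deploy.sh

deploy_node_a:
  <<: *deploy_job
  tags:
    - node-a

deploy_node_b:
  <<: *deploy_job
  tags:
    - node-b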

Why does a Jenkins job take longer to run in one farm than in another?

I am using a Jenkins configuration where the same job is executed in different locations: one in farm1 and another in an overseas farm2.
The Jenkins master server is located in farm1.
I have encountered a situation where the job on farm2 takes much longer to finish, sometimes twice the elapsed time.
Do you have an idea what could be the reason for that?
Is there continuous master-slave communication during the build that could cause such a delay?
The job runs Maven JUnit tests plus Selenium UI tests using a VNC server on the slave.
Thanks in advance,
Roy
I assume your server farms have identical hardware specs?
Network differences while checking out code, downloading dependencies, etc. - the workspaces of the master and the slave are on different servers.
If you are archiving artifacts, they are usually archived back on the master, even when the job runs on the slave.
Install the Timestamper plugin, enable it, and then review the logs of both the master and the slave runs to see where the big time differences are (you can configure Timestamper to show the time as increments from the start of the job, which is helpful here).

Jenkins for multiple deployments on multiple servers

I am new to Jenkins and know how to create Jobs and add servers for JAR deployment.
I need to create a deployment job using Jenkins which takes a JAR file and deploys it to 50-100 servers.
These servers are grouped into 6 categories; a different process runs on each server, but the same JAR is used.
Please suggest the best approach to creating a job for this.
As of now there are only a few servers (6-7); I have added each server to Jenkins and use command execution over SSH to run the processes. But for 50 servers this is not feasible.
Jenkins is a great tool for managing builds and dependencies, but it is not a great tool for Configuration Management. If you're deploying to more than 2 targets (and especially if different targets have different configurations), I would highly recommend investing the time to learn a configuration management tool.
I can personally recommend Puppet and Ansible. In particular, Ansible works over an SSH connection to the target (which it sounds like you have) and requires only a base Python install.
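Purely as an illustration, a minimal Ansible layout for this kind of fan-out could look like the sketch below (the group names, paths, and service name are placeholders):
# inventory.ini - one group per server category
[category_a]
server01
server02

[category_b]
server03
server04

# deploy.yml - copy the JAR to every host and restart its process
- hosts: all
  become: yes
  tasks:
    - name: Copy application JAR
      copy:
        src: build/myapp.jar
        dest: /opt/myapp/myapp.jar
    - name: Restart the application service
      service:
        name: myapp
        state: restarted
Jenkins then only needs a single job that builds the JAR and calls ansible-playbook -i inventory.ini deploy.yml.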

Pushing data to Jenkins's slave machine before the build starts

I have the following configuration:
Jenkins master - runs on Windows + Tomcat; Jenkins slave - runs on Gentoo.
The slave is reachable over SSH and the master can start it without problems. However, initiating a connection in the other direction is not possible.
The problem is that the code repositories are on the master's side, and the slave seems to try to fetch from the repositories before the build, which (obviously) fails.
I could push data to the slave, but I don't know how to execute a command on the master's side before the build script kicks in. Also, I'm not sure whether SCM polling is initiated on the master or on the slave machine.
Well, there is a Copy To Slave plugin which can push files from the master machine to the slave. Additionally, one can use the Slave Setup plugin to propagate the environment and all dependencies to the slave while it is starting/connecting.
But it seems like this is rather a conceptual issue with how the file/code repositories are being accessed from the slave machine. Usually this is handled by the SCM plugin, and as long as you have an accessible repository on the master or any other machine, this should be fairly straightforward. I do believe it would help if you could describe that part a little better.
