I'm trying to migrate our CI and CD processes from Jenkins to GitLab CI. How should I set up GitLab to build our application in a cluster?
In general, I expect GitLab to clone the repository to all nodes in the cluster, execute my Bash deployment script, and run some tests if needed. My thinking is that I should start runners on all cluster nodes and then start a build with all the necessary tasks. Is this possible in GitLab? I can only get a build to run on a single runner. Maybe there are different approaches to this task?
For example, I have a cluster with 2 nodes, A and B. I need to clone the repository to both nodes and run the build script on each of them. I have registered one gitlab-ci-multi-runner on each node, but the build is executed on only one of these nodes.
What you are describing can be achieved by registering each runner with a different tag and defining multiple build jobs in GitLab (YAML anchors help with this), but it's not the recommended way. The recommended way would be to use a GitLab runner to run the tests, and then a separate job to run Ansible/Chef/Salt/Puppet or whatever deployment tool you prefer.
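A minimal .gitlab-ci.yml sketch of the tag-plus-anchor approach, assuming the runner on node A is registered with the tag node-a and the one on node B with node-b (the tag names and scripts are placeholders):

# a job name starting with a dot is hidden; it only serves as a template
.deploy_template: &deploy_definition
  stage: deploy
  script:
    - ./deploy.sh          # your Bash deployment script
    - ./run-tests.sh       # optional tests

deploy_node_a:
  <<: *deploy_definition
  tags:
    - node-a

deploy_node_b:
  <<: *deploy_definition
  tags:
    - node-b

Both jobs sit in the same stage, so they run in parallel, one on each node's runner.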
In our GitLab project group, we are using multiple shared runners for CI.
However, some of the jobs have dependencies, such that the previous job must have been executed on the same runner.
Here is an example:
Job 1 builds a docker container
Job 2 runs the docker container to check it, so it needs the docker image from job 1
Job 3 pushes the docker image to a container hub, so it needs the docker image from job 1
Now, with multiple shared runners it may happen that job 1 is executed on runner 1 while jobs 2 and 3 run on a different runner. This throws an error in jobs 2 and 3, because the docker image is not available locally on that runner.
On the other hand, we need multiple runners due to the amount of computation in our project. So it would be great if, once a runner is picked for a specific job, the same runner were kept for the subsequent jobs.
Any ideas to solve this?
GitLab scheduling doesn't permit this easily. The balancing between runners works as follows:
When a job is created on the GitLab instance, it is registered in a pending state.
Runners poll the instance (every 3 seconds by default, configurable with check_interval) for jobs in the pending queue. If there are, and if the runner is eligible for a job (for example, if the job's tags match the runner's), the runner picks up N jobs from the queue, where N is limited by the runner's maximum number of concurrent jobs (the concurrent option).
So it isn't GitLab itself that schedules which runner runs which job. GitLab just puts jobs into a queue; runners frequently check the queue and pick jobs up. That's great for scalability, but not great for your use case.
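Both knobs mentioned above live in the runner's config.toml; a minimal sketch with illustrative values:

# /etc/gitlab-runner/config.toml
concurrent = 4       # max jobs this runner process executes at once
check_interval = 3   # seconds between polls of the pending queue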
From my point of view, you have 2 options:
First, put a specific tag on only one runner, and use that tag on every job that needs to run on the same host.
Second, and more flexible: store the resulting docker image in your GitLab project's container registry at the end of the build job, and pull it in any job that needs the image (jobs 2 and 3). See https://docs.gitlab.com/ee/user/packages/container_registry/#build-and-push-by-using-gitlab-cicd
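A sketch of the second option, using GitLab's predefined CI_REGISTRY* variables (the test command and tagging scheme are placeholders):

build:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test:
  stage: test
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker run --rm "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" ./run-tests.sh

push:
  stage: deploy
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"

Because every job pulls the image by its commit SHA from the registry, it no longer matters which runner picks up which job.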
How can I have a declarative Jenkins pipeline that is able to manage the Jenkins server itself? I.e.:
a pipeline that is able to query what Jobs I have in a folder and then disable/enable those jobs
Query what agents are available and trigger a job on that agent
A pipeline global variable currentBuild has a property called rawBuild that provides access to the Jenkins model for the current build. From there you can get to many of the Jenkins internals.
I'm not sure what you'll find in the way of agent queries and job triggering - have a look around; there are/were plugins that offered alternatives to the default model.
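A scripted-pipeline sketch of the rawBuild route (it needs script approval, since it reaches into Jenkins internals; the folder layout is assumed):

node {
    // currentBuild.rawBuild is the hudson.model.Run for this build
    def run = currentBuild.rawBuild
    // run.parent is the Job; its parent is the enclosing folder (an ItemGroup)
    def folder = run.parent.parent
    // a plain for loop avoids the CPS/serialization issues closures can hit
    for (item in folder.items) {
        echo "Found job: ${item.fullName}"
        // AbstractProject exposes disable()/enable(); skip the running job itself
        if (item instanceof hudson.model.AbstractProject && item.fullName != run.parent.fullName) {
            item.disable()
        }
    }
    // Agents can be listed via Jenkins.instance.computers, e.g.:
    // Jenkins.instance.computers.each { c -> echo "${c.name} offline=${c.offline}" }
}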
I have 5 gocd agents connected to my gocd server.
The use case is that I want every pipeline belonging to a particular pipeline group to run on a specific go-agent every time.
Example: all pipelines having pipeline-group-1 should run on agent agent-4.
Can we achieve this using GoCD?
You can use the environments feature for that, though you have to put each pipeline in the group into that environment. I don't think there is a group-wide feature for that.
Alternatively, you can use a resource: assign a resource to a certain agent and require that resource in your jobs. This makes sure the pipeline always runs on that agent.
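A cruise-config.xml sketch of both approaches (the environment name, pipeline names, resource name, and agent UUID are all hypothetical):

<!-- Option 1: an environment that ties the group's pipelines to agent-4 -->
<environments>
  <environment name="group-1-env">
    <agents>
      <physical uuid="agent-4-uuid" />
    </agents>
    <pipelines>
      <pipeline name="pipeline-1" />
      <pipeline name="pipeline-2" />
    </pipelines>
  </environment>
</environments>

<!-- Option 2: a resource that only agent-4 advertises, required by the job -->
<job name="build">
  <resources>
    <resource>agent-4-only</resource>
  </resources>
  <tasks>
    <exec command="./build.sh" />
  </tasks>
</job>

For option 2, the same resource must also be assigned to agent-4 on the server's Agents page.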
I am new to Jenkins and know how to create Jobs and add servers for JAR deployment.
I need to create a deployment job in Jenkins which takes a JAR file and deploys it to 50-100 servers.
These servers fall into 6 categories. A different process will run on each server, but the same JAR will be used.
Please suggest the best approach to create a job for this.
As of now the servers are few (6-7), so I have added each server to Jenkins and use command execution over SSH to run the process. But for 50 servers this is not feasible.
Jenkins is a great tool for managing builds and dependencies, but it is not a great tool for Configuration Management. If you're deploying to more than 2 targets (and especially if different targets have different configurations), I would highly recommend investing the time to learn a configuration management tool.
I can personally recommend Puppet and Ansible. In particular, Ansible works over an SSH connection to the target (which it sounds like you have) and requires only a base Python install.
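A minimal Ansible sketch of that advice (hostnames, group names, paths, and the service name are all hypothetical): an inventory with one group per server category, plus a playbook that ships the JAR and restarts whatever process runs it.

# inventory.ini - one group per server category
[category1]
app-01.example.com
app-02.example.com

[category2]
app-03.example.com

# deploy-jar.yml - run with: ansible-playbook -i inventory.ini deploy-jar.yml
- hosts: all
  become: true
  tasks:
    - name: Copy the JAR built by Jenkins to the target
      ansible.builtin.copy:
        src: build/app.jar
        dest: /opt/app/app.jar

    - name: Restart the service that runs the JAR
      ansible.builtin.systemd:
        name: app
        state: restarted

The Jenkins job then shrinks to a single build step that invokes ansible-playbook, and the per-category differences go into group variables.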
We are migrating from CruiseControl.NET to Jenkins, just to be in sync with a partner so we don't have two different CI setups. We are trying to set up Jenkins to do something similar to what we had CruiseControl doing, which was to have a centralized server invoke projects (jobs in Jenkins) on remote build machines.
We have multiple build machines associated with a single project, so when we build the project from the centralized CI server it invokes the projects on the remote CI servers. The remote CI servers pull the version from the centralized CI server's project.
In CruiseControl we set up a project that would do a forceBuild on the remote projects. The projects on the build machines used a remoteProjectLabeller to retrieve the version number, so they were always in sync.
To retrieve the master build number:
<labeller type="remoteProjectLabeller">
  <project>MainProject</project>
  <serverUri>tcp://central-server:21234/CruiseManager.rem</serverUri>
</labeller>
To invoke the remote projects:
<forcebuild>
  <project>RemoteBuildMachineA</project>
  <serverUri>tcp://remote-server:21234/CruiseManager.rem</serverUri>
  <integrationStatus>Success</integrationStatus>
</forcebuild>
So far in Jenkins I've set up a secondary server as a slave using Java Web Start, but I don't know how I would have the master Jenkins invoke the projects set up on the slaves.
Can I setup Jenkins to invoke projects (jobs) on slaves?
Can I make the slaves pull the version number from the master?
EDIT -
Let me add some more info.
The master, and remote build machine slaves are all running Windows.
We had the central master CruiseControl kick off the remote projects at the same time so they ran concurrently, and we would like to have the same thing with Jenkins if possible.
Jenkins has the concept of build agents, which could perhaps fit your scenario better - there's a master that triggers the build and slaves that perform it. A build can then be restricted to certain categories of slaves only (e.g. if it depends on specific software that is not present on all agents). All data is managed centrally by the master, which I believe is what you are trying to achieve.
In Jenkins it is not possible to trigger a build on a slave, i.e. where a build runs is not controlled by the one who triggers it. It is controlled by the settings of the job itself. Each job has a setting called "Restrict where this project can be run".
In your case you would probably have two jobs: A and B. A would be restricted to run on "master" and B would be configured to run on "slavename". Then all that is left to do is for A to trigger B.
But you had further constraints: you want A and B to check out the same version from version control, and you want A and B to run in parallel. There are many ways to accomplish that, but the easiest is probably to define a multi-configuration job.
There is no way to turn an existing free-style job into a multi-configuration job, so you will have to make a new job.
Choose New job
Choose Build new multi-configuration project. Add a name.
Under Configuration Matrix, open the "Add axis" drop down.
Choose Slaves
Check master and the slave
Add the SCM information and build step(s)
When the job runs, it runs on both the master and the slave. Jenkins makes sure they build from the same source version.
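For what it's worth, on current Jenkins versions the same effect can be expressed as a declarative pipeline with a matrix section; a sketch, assuming the slave carries the label slavename and a Windows build script build.bat (both hypothetical):

pipeline {
    agent none
    stages {
        stage('BuildAll') {
            matrix {
                axes {
                    axis {
                        name 'NODE_LABEL'
                        values 'master', 'slavename'
                    }
                }
                agent { label "${NODE_LABEL}" }
                stages {
                    stage('Build') {
                        steps {
                            checkout scm      // every matrix cell builds the same commit
                            bat 'build.bat'   // the machines are Windows, per the question
                        }
                    }
                }
            }
        }
    }
}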
From the /jenkins/computer URL, you can add, remove, and reconfigure "nodes", which are either local or remote "build agents".
Jobs can then be constrained to run on particular build agents, or follow various rules to select the appropriate build agent out of the available ones.
I was thinking about Jenkins too much like CruiseControl, where the job is defined on the remote machine. In Jenkins the remote projects are defined on the master and delegated to a remote machine via an agent.
I used the Java Web Start agent installed as a windows service on the remote machines. To have specific jobs run on specific remote machines I defined each remote node with a unique label in its slave configuration. To bind specific jobs to specific slaves I used the slave's label in each job configuration ("Restrict where this project can be run").
To trigger the jobs with a single master job, I created a free-style job that is only set to "Build other projects" and provided a comma-separated list of project names. This job builds the downstream jobs in parallel.
I'm still looking for a way to send a master build number to the downstream jobs to keep them in sync always. (This is used to version DLLs and such.)