GitLab CI on MR

I've forked a project repo from the master repo and created my own tests (.mygitlab_ci.yaml) that are supposed to extend those of the master repo and run on a dedicated local machine that replicates the environment of the master repo's runner machine.
I expected an MR towards master to trigger the master repo's CI, on the runner elected by the master repo, but the CI appears to keep running my tests, not master's.
Is there a way to control this behavior?

I'm afraid there is no way to achieve this.
The only workaround I found was to push a side branch upstream to the master repo and issue the merge request from that side branch within the master repo.

Related

Does TeamCity support polling an ECR repo for new image versions?

I am trying to set up a TeamCity job that polls an AWS ECR repo to check whether a newer version of an image is available and, if so, re-deploys the application on on-premises servers.
I have a Bitbucket pipeline that builds a Docker image on commits to a specific branch and pushes those images to an AWS ECR repository.
The servers are on-premises, and I want to deploy the newly created image to them using TeamCity. However, I couldn't find a way in TeamCity to poll the ECR repository for newer image versions.
Is there a way to do this using TeamCity? If not, we will have to go with a scheduled job, unless the experts out there suggest a better way of doing it.

Travis for CI/CD

We are planning to move from Jenkins to Travis for all our microservices and have a question about CD to different environments.
For example, we have a microservice Git repository with just a master branch and 3 different environments on AWS - dev, test and production. We can successfully build a Docker image and push it to AWS ECR.
After going through multiple resources, it looks like many of them suggest having different Git branches for deploying to different environments, which in my opinion is overkill.
Is there an alternative that lets us deploy to different environments without having multiple branches?
How about this method?
It lets you do this via untagged/tagged/specifically tagged commits. Simply switch from the depicted Cloud Foundry example to your AWS deployment.
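For illustration, here is a minimal .travis.yml deploy sketch along those lines, assuming a hypothetical deploy.sh script that takes the target environment as an argument and a prod- tag prefix for production releases (swap in your actual AWS/ECR deployment commands):

deploy:
  # untagged commits on master go to dev
  - provider: script
    script: ./deploy.sh dev
    on:
      branch: master
      tags: false
  # any tagged commit goes to test
  - provider: script
    script: ./deploy.sh test
    on:
      tags: true
  # only tags matching prod-* go to production
  - provider: script
    script: ./deploy.sh production
    on:
      tags: true
      condition: "$TRAVIS_TAG =~ ^prod-"

With a single master branch, the tag (or its absence) alone decides which environment a given build is deployed to.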

Cluster deployment with GitLab CI shell runner

I'm trying to migrate our CI and CD processes from Jenkins to GitLab CI. How should I set up GitLab to build our application on a cluster?
In general, I expect GitLab to clone the repository to all nodes in the cluster, execute my Bash deployment script and run some tests if needed. From my point of view, I should start runners on all cluster nodes and start a build with all necessary tasks. Is this possible in GitLab? I can only get one runner to pick up a given build. Maybe there are different approaches for this task?
For example, I have a cluster with 2 nodes, A and B. I need to clone the repository to both nodes and start the build script on each of them. I have registered one gitlab-ci-multi-runner on each node, but the build is executed on only one of the nodes.
What you are describing can be achieved by setting up each runner with a different tag and defining multiple build jobs in GitLab (YAML anchors help with this), but it's not the desired way. The desired way would be to use the GitLab runner to run tests and then a separate job to run Ansible/Chef/Salt/Puppet or whatever deployment tool you prefer.
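As a rough sketch of that tag-based setup, assuming the runner on node A is registered with the tag node-a, the one on node B with node-b, and a hypothetical deploy.sh script in the repository:

# hidden job used as a YAML anchor so both jobs share the same definition
.deploy_template: &deploy_template
  stage: deploy
  script:
    - ./deploy.sh

# runs on the runner tagged node-a
deploy_node_a:
  <<: *deploy_template
  tags:
    - node-a

# runs on the runner tagged node-b
deploy_node_b:
  <<: *deploy_template
  tags:
    - node-b

Both jobs sit in the same stage, so GitLab clones the repository and runs the script on both nodes in parallel.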

Pushing data to a Jenkins slave machine before the build starts

I have the following configuration:
Jenkins Master - runs on Windows + Tomcat; Jenkins Slave - runs on Gentoo.
The slave is reachable by SSH, and the master can start it without problems. However, initiating a connection the other way around is not possible.
The problem is that the code repositories are on the master's side, and it seems the slave tries to fetch from the repositories before the build, which (obviously) fails.
I could push data to the slave, but I don't know how to execute any command on the master's side before the build script kicks in. Also, I'm not sure whether SCM polling is initiated on the master or on the slave machine.
Well, there is a Copy To Slave plugin which can push files from the master machine to the slave. Additionally, you can use the Slave Setup plugin to propagate the environment and all dependencies to the slave while it is starting/connecting.
But it seems like this is rather a conceptual issue with how the file/code repositories are accessed from the slave machine. Usually this is handled by the SCM plugin, and as long as you have an accessible repository on the master or any other machine, it should be fairly straightforward. I do believe it would help if you could describe that part a little better.

Can a master Jenkins run jobs on a remote Jenkins?

We are migrating from CruiseControl.NET to Jenkins just to be in sync with a partner, so we don't have two different CI scripts. We are trying to set up Jenkins to do something similar to what we had CruiseControl doing, which was to have a centralized server invoke projects (jobs in Jenkins) on remote build machines.
We have multiple build machines associated with a single project, so when we build the project from the centralized CI server it would invoke the projects on the remote CI servers. The remote CI servers would pull the version from the centralized CI server's project.
In CruiseControl we set up a project that would do a forceBuild on the remote projects. The projects on the build machines used a remoteProjectLabeller to retrieve the version number so they were always in sync.
To retrieve the master build number:
<labeller type="remoteProjectLabeller">
  <project>MainProject</project>
  <serverUri>tcp://central-server:21234/CruiseManager.rem</serverUri>
</labeller>
To invoke the remote projects:
<forcebuild>
  <project>RemoteBuildMachineA</project>
  <serverUri>tcp://remote-server:21234/CruiseManager.rem</serverUri>
  <integrationStatus>Success</integrationStatus>
</forcebuild>
So far in Jenkins I've set up a secondary server as a slave using Java Web Start, but I don't know how I would have the master Jenkins invoke the projects set up on the slaves.
Can I setup Jenkins to invoke projects (jobs) on slaves?
Can I make the slaves pull the version number from the master?
EDIT -
Let me add some more info.
The master and the remote build machine slaves are all running Windows.
We had the central master CruiseControl kick off the remote projects at the same time so they ran concurrently, and we would like to have the same thing with Jenkins if possible.
Jenkins has the concept of build agents, which could perhaps fit your scenario better - there's a master that triggers the build and slaves that perform it. A build can then be restricted to certain categories of slaves only (e.g. if it depends on specific software that is not present on all agents). All data is managed centrally by the master, which I believe is what you are trying to achieve.
In Jenkins it is not possible to trigger a build on a slave, i.e. where a build runs is not controlled by the one who triggers it. It is controlled by the settings of the job itself. Each job has a setting called "Restrict where this project can be run".
In your case you would probably have two jobs: A and B. A would be restricted to run on "master" and B would be configured to run on "slavename". Then all that is left to do is for A to trigger B.
But you had further constraints: You want A and B check out the same version from version control and you want A and B to run in parallel. There are many ways to accomplish that but the easiest is probably to define a multi-configuration job.
There is no way to turn an existing free-style job into a multi-configuration job, so you will have to make a new job.
Choose New job
Choose Build new multi-configuration project. Add a name.
Under Configuration Matrix, open the "Add axis" drop down.
Choose Slaves
Check master and the slave
Add the SCM information and build step(s)
When the job runs, it runs on both the master and the slave. Jenkins makes sure they build from the same source version.
From the /jenkins/computer URL, you can add, remove, and reconfigure "nodes", which are either local or remote "build agents".
Jobs can then be constrained to run on particular build agents, or follow various rules to select the appropriate build agent out of the available agents.
I was thinking about Jenkins too much like CruiseControl where the job is defined on the remote machine. So in Jenkins the remote projects are defined on the master and delegated to a remote machine via an agent.
I used the Java Web Start agent installed as a windows service on the remote machines. To have specific jobs run on specific remote machines I defined each remote node with a unique label in its slave configuration. To bind specific jobs to specific slaves I used the slave's label in each job configuration ("Restrict where this project can be run").
To trigger the jobs with a single master job I created a free-style job that is only set to "Build other projects" and provided a comma-separated list of project names. This job builds the downstream jobs in parallel.
I'm still looking for a way to send a master build number to the downstream jobs to keep them in sync always. (This is used to version DLLs and such.)
