GitLab CI/CD - Enable test pipeline for merge requests

I have created a deployment pipeline for my project, and it works great. Now I want a test pipeline to run on every merge request a developer submits, to validate the proposed changes.
I added the following in my .gitlab-ci.yml
stages:
  - test
  - deploy

test:
  stage: test
  only:
    - merge-requests
  tags:
    - ide
  script:
    ...

deploy:
  stage: deploy
  only:
    - master
  tags:
    - ide
  script:
    ...
However, no pipeline runs when a merge request is created. What am I doing wrong?

It is not currently available in the form you wrote it; see https://gitlab.com/gitlab-org/gitlab-ce/issues/23902 for further discussion.

I faced the same problem and resolved it with this (note the ref name is merge_requests, with an underscore, not merge-requests):
build_mr_job:
  stage: build
  script:
    - 'echo script'
  only:
    refs:
      - merge_requests
I've found the answer here
How to use GitLab CI only:changes with only:refs?
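On newer GitLab versions (12.3 and later), the same behavior can be expressed with rules, which supersedes only/except. A sketch, assuming the job from the answer above:

```yaml
build_mr_job:
  stage: build
  script:
    - 'echo script'
  rules:
    # run this job only in merge request pipelines
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```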


How to integrate cypress-mochawesome-reporter to Gitlab pages?
Currently I have this code in my .gitlab-ci.yml:
stages:
  - test

test:
  image: cypress/browsers:node18.12.0-chrome107
  stage: test
  script:
    - npm ci
    - npm run start:ci &
    - npm run html-report
The cypress-mochawesome-reporter produces an index.html at cypress/reports/html/index.html after the pipeline runs.
I can't find any docs or tutorials on the internet. What is the best approach for this?
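GitLab Pages serves whatever a job named pages publishes as an artifact under public/, so one possible approach (a sketch only; the stage, image, and report path are taken from the question) is to copy the generated report there:

```yaml
pages:
  stage: test
  image: cypress/browsers:node18.12.0-chrome107
  script:
    - npm ci
    - npm run start:ci &
    - npm run html-report
    # copy the generated report into the directory GitLab Pages serves
    - mkdir -p public
    - cp -r cypress/reports/html/. public/
  artifacts:
    paths:
      - public
```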

GitLab - run job on PR on bitbucket possible?

I just set up a project on GitLab with an external Bitbucket repository. I added the webhook to Bitbucket, and I can see requests being sent out when I push or open a PR, etc.
I would like to execute a test job every time a PR is opened to merge a branch into the master branch on Bitbucket. Once the merge has happened, I want to run another two jobs (build + deploy).
So far my .gitlab-ci.yml looks like this:
stages:
  - build
  - test
  - deploy

buildJob:
  stage: build
  script:
    - echo 'Building...'
  only:
    - master

testJob:
  stage: test
  script:
    - echo 'Testing...'
  only:
    - external_pull_requests

deployJob:
  stage: deploy
  script:
    - echo 'Deploying...'
  only:
    - master
The build and deploy jobs run as expected when a merge happens. However, the job that should run only when a PR is opened (or on any new commit to an already-open PR) is never executed. The documentation only talks about GitHub. Is this actually possible with Bitbucket?

CircleCI 2.0 Workflow - Deploy not working

I'm trying to set up a workflow in CircleCI for my React project.
What I want to achieve is to get a job to build the stuff and another one to deploy the master branch to Firebase hosting.
This is what I have so far after several configurations:
witmy: &witmy
  docker:
    - image: circleci/node:7.10

version: 2
jobs:
  build:
    <<: *witmy
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            - v1-dependencies-
      - run: yarn install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run:
          name: Build app in production mode
          command: |
            yarn build
      - persist_to_workspace:
          root: .
  deploy:
    <<: *witmy
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Deploy Master to Firebase
          command: ./node_modules/.bin/firebase deploy --token=MY_TOKEN

workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
The build job always succeeds, but the deploy job fails with this error:
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token=MYTOKEN
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code 1
So, what I understand is that the deploy job is not running in the same place the build was, right?
I'm not sure how to fix that. I've read some examples they provide and tried several things, but it doesn't work. I've also read the documentation but I think it's not very clear how to configure everything... maybe I'm too dumb.
I hope you guys can help me out on this one.
Cheers!!
EDITED TO ADD MY CURRENT CONFIG USING WORKSPACES
I've added Workspaces... but still I'm not able to get it working, after a loooot of tries I'm getting this error:
Persisting to Workspace
The specified paths did not match any files in /home/circleci/project
And also it's a real pain to commit and push to CircleCI every single change to the config file when I want to test it... :/
Thanks!
disclaimer: I'm a CircleCI Developer Advocate
Each job runs in its own Docker container (or VM), so nothing from node_modules in your build job exists in your deploy job. There are two ways to solve this:
1. Install Firebase, and anything else you might need, on the fly, just like you do in the build job.
2. Use CircleCI Workspaces to carry your node_modules directory over from the build job to the deploy job.
In my opinion, option 2 is likely your best bet because it's more efficient.
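Note that for option 2, persist_to_workspace needs an explicit paths list; the config in the question has root: . but no paths, which is exactly what the "The specified paths did not match any files" error complains about. A sketch of the matching pair of steps:

```yaml
# in the build job: persist node_modules (and any build output) to the workspace
- persist_to_workspace:
    root: .
    paths:
      - node_modules

# in the deploy job: restore the workspace before deploying
- attach_workspace:
    at: .
```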

GitLab CI/CD build/pipeline only triggered once instead of twice

I'm using GitLab CI/CD (EDIT: v10.2.2).
I've got 2 branches in my project: devel and testing
Both are protected.
devel is the default branch.
The workflow is: I push on devel, then I merge devel into testing through a merge request.
Here is my .gitlab-ci.yml v1:
docker_build:
  stage: build
  only:
    - devel
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug1 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug1
When I push a modification on devel, the script is run and the build is made. Perfect.
Now same thing with branch testing, here is my .gitlab-ci.yml v2:
docker_build:
  stage: build
  only:
    - testing
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug2 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug2
When I push a modification directly to testing, the same thing happens for the testing branch. The testing pipeline (and only the testing pipeline, so a single run) is also triggered when I push to devel and then merge into testing, which is perfect.
Now .gitlab-ci.yml v3, which is nothing else than a concatenation of the two previous versions:
docker_build:
  stage: build
  only:
    - devel
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug1 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug1

docker_build:
  stage: build
  only:
    - testing
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug2 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug2
My expectation was: when I push on devel, then create/accept a merge request from devel to testing, the devel pipeline should run right after my push, then the testing pipeline should run right after my merge request acceptance.
Instead here is what's happening: only the devel pipeline is triggered after the push. The testing pipeline will never be triggered after my merge request.
I assume I'm missing something about how GitLab works, but I can't figure out what despite my research.
Any help will be greatly appreciated. Thank you very much.
https://docs.gitlab.com/ee/ci/yaml/#jobs states:
Each job must have a unique name, ...
You have two jobs with the same name, docker_build, so only one of them takes effect. Just give them different names.
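With unique names, the concatenated version would look like this (everything else unchanged from the question):

```yaml
docker_build_devel:
  stage: build
  only:
    - devel
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug1 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug1

docker_build_testing:
  stage: build
  only:
    - testing
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug2 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug2
```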

Gitlab CI: Clone repo only before first build in pipeline

I have ~5-10 jobs in my .gitlab-ci.yml file for GitLab CI. To save time, I'm wondering if there is a way NOT to re-clone the repo for every job. Ideally, the repo would be cloned once and then all of the jobs would run. I also don't want to combine the jobs into a single one, because I'd like to see the result of each individually (when they are combined, GitLab's pass/fail is just the result of the last job).
I don't want to simply git fetch, because I want a fresh clone at the start.
stages:
  - run

job1:
  stage: run
  script:
    - pwd
    - make all TEST=job1

job2:
  stage: run
  script:
    - pwd
    - make all TEST=job2

job3:
  stage: run
  script:
    - pwd
    - make all TEST=job3
...
I've been experimenting with this topic too. My approach is a checkout stage first (with GIT_STRATEGY: clone), followed by a build stage whose jobs use GIT_STRATEGY: fetch.
This ensures the repo is fully cloned once at the start and only fetched for each build step. Maybe this helps you too.
stages:
  - checkout
  - build

checkout:
  variables:
    GIT_STRATEGY: clone
    GIT_SUBMODULE_STRATEGY: recursive
  stage: checkout
  script: '#echo Checking out...'

build:commander:
  stage: build
  variables:
    GIT_STRATEGY: fetch
  script:
    - _Publish.bat commander
  artifacts:
    paths:
      - BuildArtifacts\Commander\**

build:login:
  stage: build
  variables:
    GIT_STRATEGY: fetch
  script:
    - _Publish.bat login
  artifacts:
    paths:
      - BuildArtifacts\Login\**

build:cli:
  stage: build
  variables:
    GIT_STRATEGY: fetch
  script:
    - _Publish.bat cli
  artifacts:
    paths:
      - BuildArtifacts\Cli\**
This might be helpful, assuming you are using a new enough version of GitLab and the runner: https://docs.gitlab.com/ce/ci/yaml/README.html#git-strategy
You can set your Git strategy to none and clone the repo manually in your before_script section.
This still has some difficulties: different runners can service different jobs, so unless you have a dedicated runner for this project, all runners need access to the repo location.
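A minimal sketch of that suggestion, reusing the make targets from the question and the predefined GitLab CI variable CI_REPOSITORY_URL (the clone destination src is just an illustrative name):

```yaml
job1:
  stage: run
  variables:
    GIT_STRATEGY: none        # the runner itself does not fetch the repo
  before_script:
    - rm -rf src              # start from a clean checkout
    - git clone "$CI_REPOSITORY_URL" src
  script:
    - cd src
    - make all TEST=job1
```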
