GitLab Docker build: calling shell commands in .gitlab-ci.yml

I'm trying to call shell commands in .gitlab-ci.yml, whose relevant parts are:
image: docker:latest
services:
  - docker:dind
stages:
  - build
  - deploy
...
build:
  stage: build
  script:
    - apt-get update -y
    - GIT_TAG=$(git tag | tail -1)
    - GIT_TAG=$(/usr/bin/git tag | tail -1)
    - docker ...
However, all three shell commands above fail with a "command not found" error. The git command failing is especially odd, because the runner has to fetch the git repo before the script section even starts. I.e., I can see that git is working, but I just can't use it myself.
Is there any way to make it work?

You see git working in the earlier steps because GitLab is probably doing that work in another container. Your own container is kept clean, so you have to install dependencies yourself.
Since the image you're using is based on Alpine Linux, the command to install git is:
apk add --no-cache git
You can also skip the whole thing and use the predefined environment variables if all you need is git information. $CI_COMMIT_TAG will contain the tag and $CI_COMMIT_SHA will contain the commit hash.
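For example, a minimal sketch of the build job with both options (assuming the Alpine-based docker:latest image from the question; the echo line is illustrative):
build:
  stage: build
  script:
    # Option 1: install git into the Alpine-based image, then use it
    - apk add --no-cache git
    - GIT_TAG=$(git tag | tail -1)
    # Option 2: skip the install and use the predefined variable instead
    # (set only when the pipeline was started by pushing a tag)
    - echo "Tag is ${CI_COMMIT_TAG:-<none>}"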

From the GitLab documentation, here is the definition of CI_COMMIT_TAG: "CI_COMMIT_TAG - The commit tag name. Present only when building tags."
This means: when you push a commit to GitLab, it starts a pipeline without the CI_COMMIT_TAG variable. When you then make a tag on that commit and push the tag to GitLab, another pipeline (this time for the tag, not for the commit) is started. In that case CI_COMMIT_TAG will be present.
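In practice this means a job can be restricted to tag pipelines; a minimal sketch (the job name is illustrative, using the only: keyword as elsewhere on this page):
release_tag:
  stage: deploy
  only:
    - tags   # runs only in pipelines started by pushing a tag
  script:
    - echo "Releasing ${CI_COMMIT_TAG}"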
@xpt - thanks for the vote of confidence and for asking me to write this up as an answer; hope this helps the community!

Related

Cypress binary is missing and Gitlab CI pipeline

I'm trying to integrate cypress testing into gitlab pipeline.
I've tried about 10 different configurations, which all fail. I've included what I think are the relevant portions of the .gitlab-ci.yml file, as well as a screenshot of the error on GitLab.
Thanks for any help
variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  paths:
    - src/ui/node_modules/
    - /root/.cache/Cypress/  # added this, also have tried src/ui/cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    - yarn run runCypressHeadless
Each job gets its own separate environment. Therefore, you need to install your dependencies in each job. Add your yarn install command to the ui_test job.
The reason why your cache: did not restore to the job from the previous stage is because caches are per job by default (e.g. caches are restored from previous pipelines that ran the same job). If you want subsequent jobs in the same pipeline to use the cache, set the cache:key: to something like $CI_COMMIT_SHA or use cache:key:files: to use a file key, like your lockfile(s).
Also, you can only cache paths in the workspace. So you won't be able to cache/restore /root/.cache/... -- instead you should change the cache location to somewhere in the workspace.
For additional reference, see: caching in GitLab CI and caching NodeJS dependencies.
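Putting that advice together, a sketch of a corrected configuration (the cache key, the CYPRESS_CACHE_FOLDER override, and the extra yarn install are the changes; job names and paths come from the question):
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  # Keep the Cypress binary inside the workspace so it can be cached:
  CYPRESS_CACHE_FOLDER: $CI_PROJECT_DIR/.cypress-cache

cache:
  key:
    files:
      - src/ui/yarn.lock   # same lockfile -> same cache across jobs
  paths:
    - src/ui/node_modules/
    - .cypress-cache/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    # Each job gets a fresh environment, so install here too;
    # with a warm cache this is fast.
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn
    - yarn run runCypressHeadless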

GitLab CI pipeline not triggered

I have written this YAML file for GitLab CI/CD. There is a shared runner configured and running. I am doing this for the first time and am not sure where I am going wrong. The AngularJS project in the repo has a gulp build file and works perfectly on my local machine. This code just has to trigger that build on the VM where my runner is present. On commit, the pipeline does not show any job. Let me know what needs to be corrected!
image: docker:latest

cache:
  paths:
    - node_modules/

deploy_stage:
  stage: build
  only:
    - master
  environment: stage
  script:
    - rm -rf "build"
    - mkdir "build"
    - cd "build"
    - git init
    - git clone "my url"
    - cd "path of cloned repository"
    - gulp build
What branch are you committing to? Your pipeline is configured to run only for commits on the master branch:
...
only:
  - master
...
If you want to have triggered jobs for other branches as well then remove this restriction from .gitlab-ci.yml file.
Do not forget to Enable shared Runners (they may not be enabled by default), setting can be found on GitLab project page under Settings -> CI/CD -> Runners.
Update: did your pipeline triggers ever work for this project?
If not, I would try configuring a simple pipeline just to test that triggers work fine:
test_simple_job:
  script:
    - echo I should execute for any pipeline trigger.
I solved the problem by renaming the .gitlab-ci.yaml to .gitlab-ci.yml
I just wanted to add that I ran into a similar issue. I was committing my code and not seeing the pipeline trigger at all. There was also no error statement on GitLab nor in my VS Code; it had run perfectly before. My problem was that I had made some recent edits to my YAML that were invalid. I reverted the changes to known-valid YAML, and it worked again and passed.
I also had this issue. I thought I would document the cause, in the hopes it may help someone (although this is not strictly an answer for the original question because my deploy script is more complex).
So in my case, the reason was that I had multiple jobs with the same job ID in my .gitlab-ci.yml. The latter one basically rendered the earlier one invisible.
# This job will never run:
deploy_my_stuff:
  script:
    - do something for job one

# This job overwrites the above.
deploy_my_stuff:
  script:
    - do something for job two
Totally obvious... after I discovered the mistake.

How can I have different Travis CI workflows for different git branches?

I am setting up a CI pipeline.
I have a script which build docker images.
In .travis.yml it's something like this:
script:
  - bash builddocker.sh
I want to be able to use the same script and run in such a way that, it builds images and pushes to a different repository for different branches.
For example, for master, push it to dev-docker-repository; for feature branches, push it to team-test-repository.
This is something you could handle inside your script by passing it the branch as a parameter, e.g.:
script:
  - bash builddocker.sh $TRAVIS_BRANCH
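The script could then branch on that argument; a minimal sketch (the repository names come from the question, everything else in builddocker.sh is an assumption):
#!/bin/bash
# builddocker.sh - build the image, then push it to a branch-dependent repository.
set -euo pipefail

BRANCH="${1:?usage: builddocker.sh <branch>}"

if [ "$BRANCH" = "master" ]; then
  REPO="dev-docker-repository"   # repository for master builds
else
  REPO="team-test-repository"    # repository for feature-branch builds
fi

# "myimage" is a placeholder image name:
docker build -t "$REPO/myimage:latest" .
docker push "$REPO/myimage:latest"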
Otherwise it would also be possible to use build stages and define different jobs depending on the branch, e.g.:
jobs:
  include:
    - name: master branch
      script: bash builddocker.sh dev-docker-repository
      if: branch = master
    - name: other branches
      script: bash builddocker.sh team-test-repository
      if: branch != master
Hope this helps!

GitLab CI/CD build/pipeline only triggered once instead of twice

I'm using GitLab CI/CD (EDIT: v10.2.2).
I've got 2 branches in my project: devel and testing
Both are protected.
devel is the default branch.
The workflow is: I push on devel, then I merge devel into testing through a merge request.
Here is my .gitlab-ci.yml v1:
docker_build:
  stage: build
  only:
    - devel
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug1 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug1
When I push a modification on devel, the script is run and the build is made. Perfect.
Now same thing with branch testing, here is my .gitlab-ci.yml v2:
docker_build:
  stage: build
  only:
    - testing
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug2 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug2
When I push a modification directly on testing, the same thing happens using the testing branch. The testing pipeline (and it alone, so only once) is also triggered when I push on devel and then merge into testing, which is perfect.
Now .gitlab-ci.yml v3, which is nothing else than a concatenation of the two previous versions:
docker_build:
  stage: build
  only:
    - devel
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug1 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug1

docker_build:
  stage: build
  only:
    - testing
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug2 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug2
My expectation was: when I push on devel, then create/accept a merge request from devel to testing, the devel pipeline should run right after my push, then the testing pipeline should run right after my merge request acceptance.
Instead here is what's happening: only the devel pipeline is triggered after the push. The testing pipeline will never be triggered after my merge request.
I assume I'm missing something about how GitLab works, but I can't figure out what despite my research.
Any help will be greatly appreciated. Thank you very much.
https://docs.gitlab.com/ee/ci/yaml/#jobs states:
Each job must have a unique name, ...
You have two jobs with the same name docker_build. Just give them a different name.
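For example (the job names docker_build_devel and docker_build_testing are illustrative; the rest is the question's v3 config):
docker_build_devel:
  stage: build
  only:
    - devel
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug1 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug1

docker_build_testing:
  stage: build
  only:
    - testing
  script:
    - docker build -t gitlab.mydomain.com:4567/myproject/app:debug2 .
    - docker login -u="$DOCKER_LOGIN" -p="$DOCKER_PWD" gitlab.mydomain.com:4567
    - docker push gitlab.mydomain.com:4567/myproject/app:debug2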

Cannot push from gitlab-ci.yml

With my colleagues, we work on a C++ library that becomes more important each day. We have already built continuous integration utilities through the gitlab-ci.yml file that let us:
Build & test in Debug mode
Build & test in Release mode
Perform safety checks, like finding memory leaks with Valgrind and checking that no symbol we don't want ends up in our library
Generate documentation
All kinds of stuff that made us choose GitLab!
We would like to profile our whole library and push the benchmarks to a separate project. We have already done something like this for our documentation using the SSH key method, but we would like to avoid that this time.
We tried a script like this:
test_ci_push:
  tags:
    - linux
    - shell
    - light
  stage: profiling
  allow_failure: false
  only:
    - new-benchmark-stage
  script:
    - git clone http://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.mycompany.home/developers/benchmarks.git &> /dev/null
    - cd benchmarks
    - touch test.dat
    - echo "This is a test" > test.dat
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
    - git add --all
    - git commit -m "GitLab Runner Push"
    - git push http://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.mycompany.home/developers/benchmarks.git HEAD:master
    - cd ..
We also tried a basic git push origin master to push our updated files, but each time we got the same answer:
remote: You are not allowed to upload code for this project.
fatal: unable to access 'http://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@gitlab.mycompany.home/developers/benchmarks.git/': The requested URL returned error: 403
Both projects are on the same GitLab instance and I have the rights to push to both. What am I doing wrong here?
The gitlab-ci-token is more like a deploy key on github.com, so it only has read access to the repository. To actually push, you will need to generate a personal access token and use that instead.
First, generate the token as shown in the GitLab documentation. Make sure you check both the read_user and api scopes. Also, this only works in GitLab 8.15 and above. If you are using an older version and do not wish to upgrade, there's an alternative method I could show you, but it is more complex and less secure.
In the end your gitlab-ci.yml should look something like this:
test_ci_push:
  tags:
    - linux
    - shell
    - light
  stage: profiling
  allow_failure: false
  only:
    - new-benchmark-stage
  script:
    - git clone http://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.mycompany.home/developers/benchmarks.git &> /dev/null
    - cd benchmarks
    - echo "This is a test" > test.dat
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
    - git add --all
    - git commit -m "GitLab Runner Push"
    - git push http://${YOUR_USERNAME}:${PERSONAL_ACCESS_TOKEN}@gitlab.mycompany.home/developers/benchmarks.git HEAD:master
    - cd ..
While the previous answers are more or less fine, there are some important gotchas.
before_script:
  - git config --global user.name "${GITLAB_USER_NAME}"
  - git config --global user.email "${GITLAB_USER_EMAIL}"

script:
  - <do things>
  - git push "https://${GITLAB_USER_LOGIN}:${CI_GIT_TOKEN}@${CI_REPOSITORY_URL#*@}" "HEAD:${CI_COMMIT_TAG}"
For one, we only need to set the username/email to please git. Secondly, having it in the before_script is not super crucial, but it allows for easier reuse via extends.
Finally, pushing over HTTPS is fine, but since we're not using a stored SSH key, we should avoid anything that can reveal the token. While GitLab won't print the token in this command, git will happily inform us that the new upstream is set to https://username:thetokeninplaintexthere@url.
So there's your token in plain text: don't use -u to set an upstream. It's also not needed, since we are only doing a single push.
Furthermore, when determining the URL, I found using the existing CI_REPOSITORY_URL to be the most reliable solution (when moving repos, for example). So we just replace the username/token in the URL string.
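To make the ${CI_REPOSITORY_URL#*@} expansion concrete, here is a plain-shell illustration (the URL and token values are made up):
# CI_REPOSITORY_URL as GitLab provides it, with job credentials embedded:
CI_REPOSITORY_URL="https://gitlab-ci-token:s3cret@gitlab.example.com/group/project.git"
# '#*@' strips the shortest prefix ending in '@', i.e. the embedded credentials:
echo "${CI_REPOSITORY_URL#*@}"   # -> gitlab.example.com/group/project.git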
You could also provide user and password (user with write access) as secret variables and use them.
Example:
before_script:
  - git remote set-url origin https://$GIT_CI_USER:$GIT_CI_PASS@$CI_SERVER_HOST/$CI_PROJECT_PATH.git
  - git config --global user.email 'myuser@mydomain.com'
  - git config --global user.name 'MyUser'
You have to define GIT_CI_USER and GIT_CI_PASS as secret variables (you could always create a dedicated user for this purpose).
With this configuration you can work with git normally. I'm using this approach to push tags after a release (with the Axion Release Gradle Plugin - http://axion-release-plugin.readthedocs.io/en/latest/index.html).
Example release job:
release:
  stage: release
  script:
    - git branch
    - gradle release -Prelease.disableChecks -Prelease.pushTagsOnly
    - git push --tags
  only:
    - master
I'm using the following GitLab job:
repo_pull_sync:
  image: danger89/repo_mirror_pull:latest
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
    - if: $REMOTE_URL
    - if: $REMOTE_BRANCH
    - if: $ACCESS_TOKEN
  before_script:
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
  script:
    - git checkout $CI_DEFAULT_BRANCH
    - git pull
    - git remote remove upstream || true
    - git remote add upstream $REMOTE_URL
    - git fetch upstream
    - git merge upstream/$REMOTE_BRANCH
    - git push "https://${GITLAB_USER_LOGIN}:${ACCESS_TOKEN}@${CI_REPOSITORY_URL#*@}" "HEAD:${CI_DEFAULT_BRANCH}"
I'm using my own danger89/repo_mirror_pull Docker image based on Alpine; check this GitHub repository for more info.
This GitLab job pulls upstream changes from the predefined remote repository and branch (see the variables below), merges them locally in CI/CD, and pushes them to GitLab again.
Basically I created a repository pull mirror (which is officially not available for free on GitLab CE; only a push mirror is supported).
First, create a Project Access Token in GitLab, via Settings -> Access Tokens. Check 'api' as the scope.
Then create a new schedule, via CI/CD -> Schedules -> New schedule, with the following 3 variables:
REMOTE_URL (example: https://github.com/project/repo.git)
REMOTE_BRANCH (example: master)
ACCESS_TOKEN (the access token from the first step; example: gplat-234hcand9q289rba89dghqa892agbd89arg2854)
Save the pipeline schedule.
Again, see also: https://github.com/danger89/repo_pull_sync_docker_image
Regarding the question: see the git push command above, which allows you to push changes back to GitLab using a GitLab (project) access token.
