Maven is installed on my gitlab-runner server. Running mvn clean directly in my repo works, but when I run my pipeline from the GitLab UI I get this error:
bash: line 60: mvn: command not found
ERROR: Job failed: exit status 1
I tried to fix the problem by adding a before_script section to the .gitlab-ci.yml file:
before_script:
- export MAVEN_HOME=/usr/local/apache-maven
I also added the line:
environment = ["MAVEN_HOME=/usr/local/apache-maven"]
to the config.toml file.
The problem still persists. My executor is shell. Any advice?
I managed to fix the problem using this workaround:
script:
- $MAVEN_HOME/bin/mvn clean
Just use the Maven Docker image by adding one of the following as the first line of your .gitlab-ci.yml:
image: maven:latest or image: maven:3-jdk-10 or image: maven:3-jdk-9
Reference: https://docs.gitlab.com/ee/ci/examples/artifactory_and_gitlab/
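As a rough sketch, a minimal .gitlab-ci.yml using the Maven image might look like this (the job name and Maven goals are just examples):
image: maven:latest

build:
  stage: build
  script:
    - mvn clean verify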
For anyone experiencing similar issues, it might be a good idea to restart the GitLab runner (.\gitlab-runner.exe restart), especially after fiddling with environment variables.
There is an easier way:
Make the changes in ~/.bash_profile, not ~/.bashrc.
According to this document:
.bashrc: it is more common to use a non-login shell
And this document says:
For certain executors, the runner passes the --login flag as shown above, which also loads the shell profile.
So it should not be ~/.bashrc. You can also try ~/.profile, which can hold the same configuration and is also read by other shells.
In my scenario I did the following:
1. Set the gitlab-runner user's password:
passwd gitlab-runner
2. Log in as gitlab-runner:
su - gitlab-runner
3. Make the changes in ~/.bash_profile.
Add maven to PATH:
$ export M2_HOME=/usr/local/apache-maven/apache-maven-3.3.9
$ export M2=$M2_HOME/bin
$ export PATH=$M2:$PATH
You can include these commands in $HOME/.bashrc
I hope you have already figured this out. I ran into the same problem when building my CI on my server.
I use shell as the executor for my runner.
Here are the steps to figure it out.
1. Check the user on the runner server.
If you installed Maven on the runner server successfully, it may only be available to root. You can check the real user of the CI process:
job1:
  stage: test
  script: whoami
In my case it printed gitlab-runner, not root.
2. su to that real user and check mvn again.
This time it printed the same error as the GitLab CI UI.
3. Install Maven for the real user and run the pipeline again (see the sketch below).
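As a sketch, the whole check-and-fix flow on the runner host could look like this (the Maven path is an assumption based on the question; adjust it to your install location):
# become the user the shell executor runs jobs as
sudo su - gitlab-runner
# reproduces "mvn: command not found" if Maven is not on this user's PATH
mvn -version
# put Maven on this user's PATH
echo 'export PATH=$PATH:/usr/local/apache-maven/bin' >> ~/.bash_profile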
You can also use the following in the .gitlab-ci.yml:
before_script:
- export PATH=$PATH:/opt/apache-maven-3.8.1/bin
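For example, a complete job using this approach might look like the following (the Maven path is the one from the snippet above; adjust it to your server):
build-job:
  before_script:
    - export PATH=$PATH:/opt/apache-maven-3.8.1/bin
  script:
    - mvn --version
    - mvn clean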
Related
I'm running CI jobs on a self-hosted GitLab instance plus 10 GitLab Runners.
For this minimal example, two Runners are needed:
Admin-01
A shell runner with Docker installed.
It can execute e.g. docker build ... to create new images, which are then pushed to the private Docker registry (also self-hosted / part of the GitLab installation)
Docker-01
A docker runner, which executes the previously built image.
On a normal bare-metal, virtual machine or shell runner, I would modify e.g. ~/.profile to execute commands before the before_script or script sections are executed. In my use case I need to set new environment variables and source some configuration files provided by the tools I want to run in an image. Yes, environment variables could be set differently, but there seems to be no way to source Bash scripts automatically before the before_script or script sections are executed.
Sourcing the Bash configuration file manually works. I also noticed that I have to source it again in the script block, so I assume the Bash session ends between the before_script block and the script block. Of course, it is not a nice solution to have image users manually source the tool's Bash configuration script in every .gitlab-ci.yml file.
myjobn:
  # ...
  before_script:
    - source /root/profile.additions
    - echo "PATH=${PATH}"
    # ...
  script:
    - source /root/profile.additions
    - echo "PATH=${PATH}"
    # ...
The modifications mentioned for e.g. shell runners do not work in images executed by GitLab Runner. It feels like Bash in the container is not started as a login shell.
The minimal example image is built as follows:
fetch debian:bullseye-slim from Docker Hub
use RUN commands in the Dockerfile to add some echo output to:
/etc/profile
/root/.bashrc
/root/.profile
# ...
RUN echo "echo GREETINGS FROM /ROOT/PROFILE" >> /root/.profile \
&& echo "echo GREETINGS FROM /ETC/PROFILE" >> /etc/profile \
&& echo "echo GREETINGS FROM /ROOT/BASH_RC" >> /root/.bashrc
When the job starts, none of the echoes prints a message, while a cat shows that the echo commands were put in the right places when building the image.
Next I tried to add
SHELL ["/bin/bash", "-l", "-c"]
but I assume this only affects RUN commands in the Dockerfile, not the executed container. With
CMD ["/bin/bash", "-l"]
I see no behavior change either.
Question:
How to start Bash in the Docker image managed by GitLab Runner as a login shell so it reads configuration scripts?
How to modify the environment in a container before before_script or script runs? Modifying means setting environment variables and executing/sourcing a configuration script or a patched default script like ~/.profile.
How does GitLab Runner execute a job with Docker?
This is not documented by GitLab in the official documentation ...
What I know so far is that it switches between Docker images specified by GitLab and user-defined images and shares some directories/volumes.
Note:
Yes, the behavior can be achieved with some Docker arguments to docker run, but as I wrote, GitLab Runner is managing the container. Alternatively, how can I configure how GitLab Runner launches the images? To my knowledge, there is no configuration option available/documented for this situation.
A shell runner with Docker installed. It can execute e.g. docker build ...
Use docker-in-docker or use kaniko. https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
The shell executor is more of a last resort, for when you specifically want to make changes to the server or you are deploying your application onto that server.
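As a rough sketch of the kaniko option mentioned above (adapted in spirit from the GitLab kaniko documentation; the destination tag is just an example, and registry authentication via /kaniko/.docker/config.json is omitted here, see the docs), a kaniko build job could look like:
build-image:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"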
How to start Bash in the Docker image managed by GitLab Runner as a login shell so it reads configuration scripts?
Add an ENTRYPOINT running bash -l to your image, or set the entrypoint from .gitlab-ci.yml. See the Docker documentation on ENTRYPOINT and the .gitlab-ci.yml documentation on image: entrypoint:.
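A minimal sketch of the .gitlab-ci.yml variant (the image name is a placeholder; whether a login-shell entrypoint behaves as expected depends on how your runner version invokes the job script, so check the image: entrypoint: documentation):
myjob:
  image:
    name: registry.example.com/my-image:latest
    entrypoint: ["/bin/bash", "-l", "-c"]
  script:
    - echo "PATH=${PATH}"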
How to modify the environment in a container before before_script or script runs.
Build the image with the modified environment; consult the Dockerfile documentation on ENV statements.
Or set the environment from the .gitlab-ci.yml file; read the documentation on variables: in GitLab CI.
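A rough sketch of both options (tool names, paths, and values are placeholders):
# Dockerfile: bake the environment into the image
ENV TOOL_HOME=/opt/mytool
ENV PATH="/opt/mytool/bin:${PATH}"

# .gitlab-ci.yml: or set variables per job
myjob:
  variables:
    TOOL_HOME: /opt/mytool
  script:
    - echo "$TOOL_HOME"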
How to prepare the shell environment in an image executed by GitLab Runner?
Don't. The idea is that the environment is reproducible, ergo, there should be no changes beforehand. Add variables: in gitlab-ci file and use base images if possible.
How does GitLab Runner execute a job with Docker?
This is not documented by GitLab in the official documentation ...
Gitlab is open-source.
What I know so far is that it switches between Docker images specified by GitLab and user-defined images and shares some directories/volumes.
Yes: first a gitlab-runner-helper is executed; it has git and git-lfs and basically clones the repository and downloads and uploads the artifacts. Then the container specified with image: is run, the cloned repository is copied into it, and a specially prepared shell script is executed in it.
Context
I want to run a bash script during the build stage of my CI.
So far, the macOS build works fine and the Unix build is in progress, but I cannot execute the scripts in my Windows build stage.
Runner
We run a local GitLab runner on Windows 10 Home where WSL is configured and Bash for Windows is installed and working:
(Screenshot: Bash executing in Windows PowerShell)
Gitlab CI
Here is a small example that highlights the issue.
gitlab-ci.yml
stages:
  - test
  - build

build-test-win:
  stage: build
  tags:
    - runner-qt-windows
  script:
    - ./test.sh
test.sh
#!/bin/bash
echo "test OK"
Job
Running with gitlab-runner 13.4.1 (e95f89a0)
on runner qt on windows 8KwtBu6r
Resolving secrets 00:00
Preparing the "shell" executor 00:00
Using Shell executor...
Preparing environment 00:01
Running on DESKTOP-5LUC498...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in C:/Gitlab-Ci/builds/8KwtBu6r/0/<company>/projects/player-desktop/.git/
Checking out f8de4545 as 70-pld-demo-player-ecran-player...
Removing .qmake.stash
Removing Makefile
Removing app/
Removing business/
Removing <company>player/
git-lfs/2.11.0 (GitHub; windows amd64; go 1.14.2; git 48b28d97)
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:02
$ ./test.sh
Cleaning up file based variables 00:01
Job succeeded
Issue
As you can see, the echo message "test OK" is not visible in the job output.
Nothing seems to be executed, but no error is shown, and running the script directly on the Windows device works fine.
In case you are wondering, this is a Qt application built via qmake, make and deployed using windeployqt in a bash script (where the issue is).
Any tips or help would be appreciated.
Edit: The deploy script contains ~30 lines, which would make the .gitlab-ci.yml file hard to read if the commands were put directly in the YAML instead of in an external shell script executed during the CI.
(Screenshot: executing the script from the Windows environment)
It may be that GitLab opened a new window to execute Bash, so stdout is not captured.
You can try file-system-based methods to check the execution results, such as echoing to files. Artifacts can be specified with a wildcard, for example **/*.zip.
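For example, a rough sketch of that idea (the script writes its result to a file and the job collects it as an artifact; file names are just placeholders):
#!/bin/bash
# test.sh: write the result to a file instead of relying on stdout
echo "test OK" > result.txt

# .gitlab-ci.yml: collect the file so it can be inspected
build-test-win:
  script:
    - ./test.sh
  artifacts:
    paths:
      - "**/*.txt"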
I also tested on my Windows machine. First, if I run ./test.sh in PowerShell, it prompts a dialog to let me select which program to execute; the default is Git Bash. That means that on your machine you may have configured some executable for this (you should find out which one).
I also tried, in PowerShell:
bash -c "/mnt/c/test.sh"
and it gives me test OK as expected, without a new window.
So I suggest you try bash -c "some/path/test.sh" in your GitLab CI job.
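A sketch of how the job from the question could be adjusted (assuming the working directory of the WSL Bash maps to the repository checkout; otherwise use the full /mnt/c/... path):
build-test-win:
  stage: build
  tags:
    - runner-qt-windows
  script:
    # run the script through bash explicitly so its stdout ends up in the job log
    - bash -c "./test.sh"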
I have written this YAML file for GitLab CI/CD. There is a shared runner configured and running. I am doing this for the first time and am not sure where I am going wrong. The AngularJS project in the repo has a gulp build file and works perfectly on my local machine. This code just has to trigger that build on the VM where my runner is present. On commit, the pipeline does not show any job. Let me know what needs to be corrected!
image: docker:latest

cache:
  paths:
    - node_modules/

deploy_stage:
  stage: build
  only:
    - master
  environment: stage
  script:
    - rm -rf "build"
    - mkdir "build"
    - cd "build"
    - git init
    - git clone "my url"
    - cd "path of cloned repository"
    - gulp build
What branch are you committing to? Your pipeline is configured to run only for commits on the master branch.
...
only:
- master
...
If you want jobs to be triggered for other branches as well, then remove this restriction from the .gitlab-ci.yml file.
Do not forget to enable shared runners (they may not be enabled by default); the setting can be found on the GitLab project page under Settings -> CI/CD -> Runners.
Update: Did your pipeline triggers ever work for your project?
If not, then I would try configuring a simple pipeline just to test whether triggers work fine:
test_simple_job:
  script:
    - echo I should execute for any pipeline trigger.
I solved the problem by renaming the .gitlab-ci.yaml to .gitlab-ci.yml
I just wanted to add that I ran into a similar issue. I was committing my code and not seeing the pipeline trigger at all. There was also no error message in GitLab or in my VS Code. It had run perfectly before. My problem was that I had made some recent edits to my YAML that were invalid. I reverted the changes to known-valid YAML, and it worked again and passed.
I also had this issue. I thought I would document the cause, in the hope it may help someone (although this is not strictly an answer to the original question because my deploy script is more complex).
So in my case, the reason was that I had multiple jobs with the same job ID in my .gitlab-ci.yml. The latter one basically rendered the earlier one invisible.
# This job will never run:
deploy_my_stuff:
  script:
    - do something for job one

# This job overwrites the above.
deploy_my_stuff:
  script:
    - do something for job two
Totally obvious... after I discovered the mistake.
I am using AWS CodeBuild along with Terraform for automated deployment of a Lambda-based service. I have a very simple buildspec.yml that accomplishes the following:
Get dependencies
Run Tests
Get AWS credentials and save to file (detailed below)
Source the creds file
Run Terraform
The step "source the creds file" is where I am having my difficulty. I have a simply bash one-liner that grabs the AWS container creds off of curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and then saves them to a file in the following format:
export AWS_ACCESS_KEY_ID=SOMEACCESSKEY
export AWS_SECRET_ACCESS_KEY=MYSECRETKEY
export AWS_SESSION_TOKEN=MYSESSIONTOKEN
Of course, the obvious step is to simply source this file so that these variables are added to my environment for Terraform to use. However, when I run source /path/to/creds_file.txt, CodeBuild returns:
[Container] 2017/06/28 18:28:26 Running command source /path/to/creds_file.txt
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: source: not found
I have tried to install source through apt, but then I get an error saying that source cannot be found (yes, I've run apt update etc.). I am using a standard Ubuntu image with the Python 2.7 environment for CodeBuild. What can I do to either get Terraform working credentials or source this credentials file in CodeBuild?
Thanks!
Try using . instead of source; source is not POSIX-compliant. See ss64.com/bash/source.html.
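In the buildspec that would look something like this (the creds path is the one from the question, and the Terraform invocation is just illustrative; keeping the sourcing and the Terraform command on one line avoids any question of whether environment changes persist across buildspec commands):
build:
  commands:
    - . /path/to/creds_file.txt && terraform plan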
CodeBuild now supports bash as your default shell. You just need to specify it in your buildspec.yml.
env:
  shell: bash
Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
The AWS CodeBuild images ship with a POSIX compliant shell. You can see what's inside the images here: https://github.com/aws/aws-codebuild-docker-images.
If you're using specific shell features (such as source), it is best to wrap your commands in a script file with a shebang specifying the shell you'd like the commands to execute with, and then execute this script from buildspec.yml.
build-script.sh
#!/bin/bash
<commands>
...
buildspec.yml (snippet)
build:
  commands:
    - path/to/script/build-script.sh
I had a similar issue. I solved it by calling the script directly via /bin/bash <script>.sh
I don't have enough reputation to comment, so here is an extension of Jeffrey's answer, which is spot on.
In case your filename starts with a dot (.), the following will fail:
. .filename
You will need to qualify the filename with the directory name, like:
. ./.filename
I have a few shell commands that need to be executed when a build is getting ready. In Jenkins, I have created a Freestyle project with the Execute shell option as:
#!/bin/sh
cd /path to kafka folder
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &
cd /path to elasticsearch
bin/elasticsearch
I am able to execute these shell commands from a local file but not through Jenkins.
Here is the Console output I am seeing in Jenkins:
/Users/Shared/Jenkins/tmp/hudson2342342342342357656.sh: line 2: cd: /path to kafka folder: Not a directory
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Any help on how I can fix this? Thanks.
It is a permission issue. Please check that your Jenkins user has enough permissions to run these shell commands.
The best way to find out what's wrong (e.g. a permissions issue, a folder that doesn't exist, etc.) is to do this:
Log into the slave node that is executing this job (if none is defined, it's probably the Jenkins server itself, i.e. "master").
If on "master": become the user that Jenkins runs as (most likely 'jenkins'): sudo su - jenkins (if you have sudo/root access), or su - jenkins (if you know jenkins's password).
If the job runs on another slave, find out which user the Jenkins server connects as. Become that user on the slave node.
cd /path/to/workspace (you can find the job's workspace by looking at the console output of a job run).
Now run your commands from a shell as they are in the build step - it may become more apparent why they fail (see the sketch below).
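A condensed sketch of those steps (the workspace path is an example; take the real one from the job's console output):
# become the user Jenkins runs as
sudo su - jenkins
# go to the job's workspace
cd /path/to/workspace
# re-run the failing build step commands by hand
cd "/path to kafka folder"
bin/zookeeper-server-start.sh config/zookeeper.properties &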