My pipeline has two stages: build and test.
The build stage has 2 jobs and the test stage has 4 jobs (3 depend on one build job and 1 depends on the other).
If I run the pipeline with the build and test jobs of one task commented out, the pipeline passes, but when I run both tasks together it fails. I am not sure why the two tasks cannot be tested together.
Following is the YAML file I have:
stages:
  - build
  - test

build-driver-only:
  stage: build
  artifacts:
    paths:
      - components/nic/linux/driver/
    expire_in: 1h
  script:
    - echo "Build driver-only initiated"
    - chmod +x CI_testing/build_driver.sh
    - bash ./CI_testing/build_driver.sh DRIVER_ONLY
    - wait

build-driver-vfpf:
  stage: build
  artifacts:
    paths:
      - components/nic/linux/driver/
    expire_in: 1h
  script:
    - echo "Build driver-VFPF initiated"
    - chmod +x CI_testing/build_driver.sh
    - bash ./CI_testing/build_driver.sh VFPF
    - wait

test-drv-ping:
  stage: test
  needs:
    - job: build-driver-only
      artifacts: true
  script:
    - echo "Ping test for driver-only initiated"
    - chmod +x ./CI_testing/test_driver_only.sh
    - bash ./CI_testing/test_driver_only.sh PING
    - wait

test-drv-ping-ns:
  stage: test
  needs:
    - job: build-driver-only
      artifacts: true
  script:
    - echo "Ping with namespace test for driver-only initiated"
    - chmod +x ./CI_testing/test_driver_only.sh
    - bash ./CI_testing/test_driver_only.sh PING_NS
    - wait

test-drv-iperf3:
  stage: test
  needs:
    - job: build-driver-only
      artifacts: true
  script:
    - echo "iperf3 test for driver-only initiated"
    - chmod +x ./CI_testing/test_driver_only.sh
    - bash ./CI_testing/test_driver_only.sh IPERF3
    - wait

test-drv-vfpf:
  stage: test
  needs:
    - job: build-driver-vfpf
      artifacts: true
  script:
    - echo "iperf3 test for driver-only initiated"
    - chmod +x ./CI_testing/test_driver_only.sh
    - bash ./CI_testing/test_driver_only.sh VFPF
    - wait
The pipeline fails as soon as I add the second build chain alongside the first one. I have tested everything manually and it works fine. Any idea how I can further isolate the two tasks?
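If the failures only show up when both chains run in the same pipeline, one possibility (an assumption; the YAML alone doesn't show it) is that the test jobs are loading the driver on the same runner host at the same time. GitLab's resource_group can serialize the jobs that share that hardware. A minimal sketch, shown for two of the test jobs (the group name nic-under-test is made up; the remaining test jobs would get the same key):

test-drv-iperf3:
  stage: test
  resource_group: nic-under-test   # only one job in this group runs at a time
  needs:
    - job: build-driver-only
      artifacts: true
  script:
    - bash ./CI_testing/test_driver_only.sh IPERF3

test-drv-vfpf:
  stage: test
  resource_group: nic-under-test
  needs:
    - job: build-driver-vfpf
      artifacts: true
  script:
    - bash ./CI_testing/test_driver_only.sh VFPF

Pinning each chain to its own runner with tags: would be another way to keep them apart, if separate hardware is available.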
Related
I have defined the following stages and environment variable in my .gitlab-ci.yaml file:
stages:
  - prepare
  - run-test

variables:
  MY_TEST_DIR: "$HOME/mytests"

prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
When I run the above, I get the following error even though /home/gitlab-runner/mytests exists:
Running with gitlab-runner 15.2.1 (32fc1585)
on Ubuntu20 sY8v5evy
Resolving secrets
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Running on PCUbuntu...
Getting source from Git repository
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /home/gitlab-runner/tests/sY8v5evy/0/childless/tests/.git/
Checking out cbc73566 as test.1...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ cd $HOME
/home/gitlab-runner
$ echo "Your test directory is $MY_TEST_DIR"
SDK directory is /mytests
$ cd $MY_TEST_DIR
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Is there something I'm doing wrong here? Why is $HOME empty/NULL when it is used inside another variable?
When you set a variable with the GitLab CI variables: directive, $HOME isn't available yet, because that section is not evaluated in a shell.
$HOME is set by your shell when the script (or before_script) part starts.
If you export the variable during the script step instead, it will be available:
prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - export MY_TEST_DIR="$HOME/mytests"
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
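Alternatively, since the only issue is that $HOME cannot be expanded when the variables: block is evaluated, hard-coding the runner user's home directory works too (the path below is taken from the job log above):

variables:
  MY_TEST_DIR: "/home/gitlab-runner/mytests"   # absolute path, no shell expansion required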
I have a job like this in my GitLab CI/CD configuration file:
jobName:
  stage: dev
  script:
    - export
    - env
    - sshpass -p $SSH_PASSWORD ssh -o StrictHostKeyChecking=no $SSH_LOGIN 'bash -s' < script.sh
  when: manual
I tried to share/pass the current job's env vars to my custom bash script by adding these commands to my job:
  - export
  - env
But my script can't access (doesn't see) the job's env vars. How can I correctly share all of the job's env vars with the bash script?
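Keep in mind that 'bash -s' < script.sh executes the script on the remote host, so the job's environment never leaves the runner; export and env only print it locally. A sketch that forwards the specific variables the script needs on the ssh command line (MY_VAR and OTHER_VAR are placeholder names, and the quoting assumes the values contain no single quotes):

jobName:
  stage: dev
  when: manual
  script:
    # prefix the remote command with the variables the script should see
    - sshpass -p "$SSH_PASSWORD" ssh -o StrictHostKeyChecking=no "$SSH_LOGIN" "MY_VAR='$MY_VAR' OTHER_VAR='$OTHER_VAR' bash -s" < script.sh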
I believe dotenv might be suitable for this.
job1:
  stage: stage1
  script:
    - export VAR=123
    - echo $VAR
    - echo "VAR=$VAR" > variables.env
  artifacts:
    reports:
      dotenv: variables.env

job2:
  stage: stage2
  script:
    - echo $VAR
And your VAR should be available in downstream jobs.
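If the downstream job uses needs instead of plain stage ordering, pulling the artifacts (and with them the dotenv report) in explicitly looks like this (a sketch reusing the job names above):

job2:
  stage: stage2
  needs:
    - job: job1
      artifacts: true   # brings in the dotenv report, so $VAR is populated
  script:
    - echo $VAR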
I have this file hierarchy:
/
  run_docker.sh
  Dockerfile
  .gitlab-ci.yml
And the content of my .gitlab-ci.yml is as follows:
stages:
  - run_script
  - build_image

run_script:
  stage: run_script
  script:
    - echo "script is running"

build_image:
  stage: run_docker
  script:
    - echo "building image"
These script sections are just examples I have put here (not the real values). What I want is: when a change is made ONLY to the Dockerfile, only the build_image job should be triggered, and the run_script job should NOT run.
How can I do this?
I had a similar scenario, and I ended up using the following condition:
only:
  variables:
    - $CI_COMMIT_MESSAGE =~ /docker/
It is not exactly what you wanted, but it can do the job. Whenever I need to trigger the Docker image build job, I just mention [docker] in my commit message.
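If you would rather key off the changed files than the commit message, a sketch using rules:changes (a newer GitLab feature; this assumes ordinary branch pipelines) that runs build_image only when the Dockerfile is touched:

build_image:
  stage: build_image
  rules:
    - changes:
        - Dockerfile
  script:
    - echo "building image"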
My problem is that the bash script I created gets this error on GitLab: "/bin/sh: eval: line 88: ./deploy.sh: not found". Below is my sample .gitlab-ci.yml.
I suspect that GitLab CI does not support bash scripts.
image: docker:latest

variables:
  IMAGE_NAME: registry.gitlab.com/$PROJECT_OWNER/$PROJECT_NAME
  DOCKER_DRIVER: overlay

services:
  - docker:dind

stages:
  - deploy

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true

production-deploy:
  stage: deploy
  only:
    - master@$PROJECT_OWNER/$PROJECT_NAME
  script:
    - echo "$PRODUCTION_DOCKER_FILE" > Dockerfile
    - docker build --cache-from $IMAGE_NAME:$CI_BUILD_REF_NAME -t $IMAGE_NAME:$CI_BUILD_REF_NAME .
    - docker push $IMAGE_NAME:$CI_BUILD_REF_NAME
    - echo "$PEM_FILE" > deploy.pem
    - echo "$PRODUCTION_DEPLOY" > deploy.sh
    - chmod 600 deploy.pem
    - chmod 700 deploy.sh
    - ./deploy.sh
  environment:
    name: production
    url: https://www.example.com
And this is my deploy.sh:
#!/bin/bash
ssh -o StrictHostKeyChecking=no -i deploy.pem ec2-user@targetIPAddress << 'ENDSSH'
  # commands go here
ENDSSH
All I want is to execute deploy.sh after docker push, but unfortunately I get this error about /bin/bash.
I would really appreciate any help with this GitLab CI error: "/bin/sh: eval: line 88: ./deploy.sh: not found".
This is probably related to the fact that you are using Docker-in-Docker (docker:dind). Your deploy.sh requests /bin/bash as its interpreter, which is NOT present in that image.
You can test this locally on your computer with Docker:
docker run --rm -it docker:dind bash
It will report an error. So rewrite the first line of deploy.sh to
#!/bin/sh
After fixing that, you will run into the problem addressed in the next answer: ssh is not installed either. You will need to fix that too!
docker:latest is based on Alpine Linux, which is very minimal and does not have much installed by default. For example, ssh is not available out of the box, so if you want to use ssh commands you need to install it first. In your before_script, add:
- apk update && apk add openssh
Thanks. This worked for me after adding bash as well:
before_script:
  - apk update && apk add bash
Let me know if that still doesn't work for you.
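Putting the two fixes together, a sketch of the before_script for the alpine-based docker:latest image above (bash and openssh are the standard Alpine package names):

before_script:
  - apk update && apk add bash openssh   # bash for deploy.sh, openssh for the ssh command
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true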
I basically want to run the npm install and grunt build commands within the newly added repo.
inputs:
  - name: repo
  - path:
run:
  path: repo/
  args:
    - npm install
    - grunt build
path: refers to the path in the container to the binary / script to execute.
Check out this example in the Tasks documentation: https://concourse-ci.org/tasks.html#task-environment
run:
  path: sh
  args:
    - -exc
    - |
      whoami
      env
sh is the program to execute, and args are passed to the sh program.
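Applied to the original npm/grunt question, a sketch of a full task config (the node image and the grunt devDependency are assumptions; adjust them to whatever your build actually needs):

platform: linux
image_resource:
  type: registry-image
  source:
    repository: node    # assumed image that ships npm
inputs:
  - name: repo          # the git resource checked out into ./repo
run:
  path: sh
  args:
    - -exc
    - |
      cd repo
      npm install
      ./node_modules/.bin/grunt build   # assumes grunt is a devDependency of the repo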
A slight variation of Topher Bullock's answer:
run:
  path: sh
  args:
    - -exc
    - whoami && env
which will run env only if whoami doesn't return an error.
This one, on the other hand, runs env when whoami fails (with ||, the second command is executed only if the first one fails):
run:
  path: sh
  args:
    - -exc
    - whoami || env
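You can check the difference between the two forms locally with any POSIX shell, nothing Concourse-specific is involved:

sh -exc 'whoami && env'   # env runs only if whoami succeeds
sh -exc 'whoami || env'   # env runs only if whoami fails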