I have a job like this in my GitLab CI/CD configuration file:
jobName:
  stage: dev
  script:
    - export
    - env
    - sshpass -p $SSH_PASSWORD ssh -o StrictHostKeyChecking=no $SSH_LOGIN 'bash -s' < script.sh
  when: manual
I tried to share/pass the current job's env vars to my custom bash script by adding these commands to my job:
- export
- env
But my script can't see the job's env vars. How can I correctly share all of the job's env vars with the bash script?
I believe dotenv might be suitable for this.
job1:
  stage: stage1
  script:
    - export VAR=123
    - echo $VAR
    - echo "VAR=$VAR" > variables.env
  artifacts:
    reports:
      dotenv: variables.env

job2:
  stage: stage2
  script:
    - echo $VAR
And your VAR should be available in downstream jobs.
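Note that a dotenv artifact only propagates variables to later jobs in the same pipeline; it does not export anything into a script that runs on a remote host. If the goal is to make job variables visible inside script.sh on the other machine, one option (a sketch reusing your sshpass/ssh setup; the forwarded variables are just examples) is to pass them explicitly on the ssh command line:

jobName:
  stage: dev
  script:
    # Forward only the variables the remote script needs; they are defined for the remote bash session.
    - sshpass -p $SSH_PASSWORD ssh -o StrictHostKeyChecking=no $SSH_LOGIN "CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH CI_JOB_ID=$CI_JOB_ID bash -s" < script.sh
  when: manual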
I have defined the following stages and environment variable in my .gitlab-ci.yaml script:
stages:
  - prepare
  - run-test

variables:
  MY_TEST_DIR: "$HOME/mytests"

prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
When I run the above, I get the following error even though /home/gitlab-runner/mytests exists:
Running with gitlab-runner 15.2.1 (32fc1585)
on Ubuntu20 sY8v5evy
Resolving secrets
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Running on PCUbuntu...
Getting source from Git repository
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /home/gitlab-runner/tests/sY8v5evy/0/childless/tests/.git/
Checking out cbc73566 as test.1...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ cd $HOME
/home/gitlab-runner
$ echo "Your test directory is $MY_TEST_DIR"
Your test directory is /mytests
$ cd $MY_TEST_DIR
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Is there something that I'm doing wrong here? Why is $HOME empty/null when used together with another variable?
When you set a variable using the GitLab CI variables: directive, $HOME isn't available yet because that part doesn't run in a shell.
$HOME is set by your shell when the script (or before_script) part starts.
If you export the variable during the script step instead, it is available, so:
prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - export MY_TEST_DIR="$HOME/mytests"
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
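If the test directory can live inside the checked-out project instead of under $HOME, another option (a sketch; the location under the project is an assumption) is to build the path from a predefined CI variable, which GitLab does expand inside the variables: block:

variables:
  MY_TEST_DIR: "$CI_PROJECT_DIR/mytests"   # $CI_PROJECT_DIR is a predefined CI variable, so it expands here, unlike $HOME

prepare-scripts:
  stage: prepare
  script:
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  tags:
    - ubuntutest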
I wrote the following pipeline:
image: maven:3-openjdk-11

variables:
  TARGET_LOCATION: "/tmp/uploads/"

stages:
  - deploy

deploy-job:
  stage: deploy
  before_script:
    - export MAVEN_ARTIFACT_VERSION=$(mvn --non-recursive help:evaluate -Dexpression=project.version | grep -v '\[.*' | tail -1)
    - export MAVEN_ARTIFACT=app-${MAVEN_ARTIFACT_VERSION:+$MAVEN_ARTIFACT_VERSION.jar}
  script:
    - eval $(ssh-agent -s)
    # (SSH STUFF HERE...)
    - scp -o HostKeyAlgorithms=ssh-rsa -p /builds/xxxxx/app/target/$MAVEN_ARTIFACT user@host:${TARGET_LOCATION}
I expected $MAVEN_ARTIFACT in the scp command to change to something like app-BETA-0.1.jar and TARGET_LOCATION to change to its value, but they are not being expanded and I got the variable names in both places. I tried with curly braces as well, but I can't achieve what I want.
Is there any way to pass variables generated during script execution as arguments to other programs executed in the same script section?
Below is a piece of the logs from the pipeline execution:
$ scp -o HostKeyAlgorithms=ssh-rsa -p /builds/xxxxx/app/target/$MAVEN_ARTIFACT user@host:${TARGET_LOCATION}
You're using these correctly, and it is working.
The GitLab pipeline logs show the command exactly as you wrote it in your script; they do not replace variables with their values. If you need to verify a value or confirm it's set, use standard debugging techniques, like printing the value with something like echo $MAVEN_ARTIFACT.
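For example, a couple of extra echo lines in the job (a sketch of that debugging approach, based on your pipeline above) would show the resolved values in the output, even though the log still prints each command itself unexpanded:

deploy-job:
  stage: deploy
  before_script:
    - export MAVEN_ARTIFACT_VERSION=$(mvn --non-recursive help:evaluate -Dexpression=project.version | grep -v '\[.*' | tail -1)
    - export MAVEN_ARTIFACT=app-${MAVEN_ARTIFACT_VERSION:+$MAVEN_ARTIFACT_VERSION.jar}
  script:
    # Print the resolved values before using them; the output lines (not the echoed commands) show the expansions.
    - echo "MAVEN_ARTIFACT=$MAVEN_ARTIFACT"
    - echo "TARGET_LOCATION=$TARGET_LOCATION"
    - scp -o HostKeyAlgorithms=ssh-rsa -p /builds/xxxxx/app/target/$MAVEN_ARTIFACT user@host:${TARGET_LOCATION}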
I am trying to deploy a branch other than the default (master) branch. For some reason the predefined variables are not visible inside the roll_out.sh script. But the echo statements before calling the script do print the variables correctly.
I have another script that rolls out the master branch. In this script it is able to run docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY with no problems.
I have tried with the branch both protected and not protected. The variables are still undefined for some reason.
What am I doing wrong here?
build:
  stage: build
  image: docker:20.10.12
  services:
    - docker:20.10-dind
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
      allow_failure: true
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/client/base:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE/client/base:latest --cache-from $CI_REGISTRY_IMAGE/client:latest -t $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH .
    - docker push $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH

deploy:
  variables:
    branch: $CI_COMMIT_BRANCH
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - echo $CI_COMMIT_REF_NAME
    - echo $CI_COMMIT_BRANCH
    - echo $branch
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' < ./script/roll_out.sh
If you are sshing to another machine within your CI script, I don't think it would have access to the variables you are echoing in the lines before, because it's a new session on a different machine.
You could try a few different things, though, to achieve what you want:
Try to send the variables as arguments (not great for sending information to a machine through ssh); see the sketch after this list.
Install a GitLab runner on the host you are trying to ssh to, tag this runner so it only runs the specific deployment job, and then you'll have the variables available on the host.
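As a rough sketch of the first option (the choice of variables is just an example), the values can be appended as positional arguments to the remote bash and read as $1, $2 inside roll_out.sh:

  script:
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' "$CI_REGISTRY_IMAGE" "$CI_COMMIT_BRANCH" < ./script/roll_out.sh
    # Inside roll_out.sh the values arrive as $1 and $2; values containing spaces would need extra quoting.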
The problem was solved by explicitly passing the environment variables to the script:
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH bash -s" < ./script/roll_out_tagged.sh
I had another developer with lots of experience take a look at it.
He had no idea why docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY worked. That is what led me to believe that all of the environment variables would be available on the remote. But it seems like those three variables are some sort of exception.
From my gitlab-ci I need to pass an environment variable with the Spring profiles to docker-compose. That variable is defined for each server environment where we deploy.
So, in my gitlab-ci I have this:
.deploy_template: &deploy_template
  script:
    - echo $ENV_SPRING_PROFILES
    # start containers
    - $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && SPRING_ACTIVE_PROFILES=$ENV_SPRING_PROFILES && DOCKER_HOST=tcp://localhost:2375 && docker-compose up -d"

deploy_811AC:
  <<: *deploy_template
  stage: deploy
  when: manual
  only:
    - /^feature.*$/
    - /^fix.*$/
  environment:
    name: ccvli-ecp626
    url: 10.135.XXX.XXX
  variables:
    ENV_SPRING_PROFILES: "mock"
When the runner runs, I can see the value of the variable with echo $ENV_SPRING_PROFILES. However, it does not seem to be substituted in the SSH command, as docker-compose says the variable SPRING_ACTIVE_PROFILES is empty.
It is becoming a kind of nightmare, so any clue is welcome.
Thanks in advance.
I don't have much experience with GitLab CI, but I think the way variables are "declared" is not with "=" but like this:
variables:
  SPRING_ACTIVE_PROFILES: $SPRING_ACTIVE_PROFILES

script:
  ...
  - $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && $ENV_SPRING_PROFILES && DOCKER_HOST=tcp://localhost:2375 && docker-compose up -d"
Once it is declared in the first block, you can start using it in the code :)
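That said, the empty SPRING_ACTIVE_PROFILES may also come from plain shell behaviour rather than GitLab: in the remote command, SPRING_ACTIVE_PROFILES=$ENV_SPRING_PROFILES && ... sets a shell variable but does not export it to docker-compose. A sketch of the template with the assignments prefixed directly to the docker-compose call (so they land in its environment) would be:

.deploy_template: &deploy_template
  script:
    - echo $ENV_SPRING_PROFILES
    # start containers; the assignments immediately before docker-compose apply to that command's environment
    - $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && SPRING_ACTIVE_PROFILES=$ENV_SPRING_PROFILES DOCKER_HOST=tcp://localhost:2375 docker-compose up -d"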
According to the GitLab CI documentation, the Bash shell is supported on Windows.
Supported systems by different shells:
Shells     Bash   Windows Batch   PowerShell
Windows    ✓      ✓ (default)     ✓
In my config.toml, I have tried:
[[runners]]
name = "myTestRunner"
url = xxxxxxxxxxxxxxxxxxx
token = xxxxxxxxxxxxxxxxxx
executor = "shell"
shell = "bash"
But if my .gitlab-ci.yml attempts to execute a bash script, for example:
stages:
  - Stage1

testJob:
  stage: Stage1
  when: always
  script:
    - echo $PWD
  tags:
    - myTestRunner
And then, from the folder containing the GitLab multi-runner, I right-click, select 'Git Bash Here', and type:
gitlab-runner.exe exec shell testJob
It cannot resolve $PWD, proving it is not actually using bash. (Git Bash can normally print $PWD correctly on Windows.)
Running with gitlab-runner 10.6.0 (a3543a27)
Using Shell executor...
Running on G0329...
Cloning repository...
Cloning into 'C:/GIT/CI_dev_project/builds/0/project-0'...
done.
Checking out 8cc3343d as bashFromBat...
Skipping Git submodules setup
$ echo $PWD
$PWD
Job succeeded
The same thing happens if I push a commit and the web-based GitLab CI terminal automatically runs the .gitlab-ci.yml script.
How do I correctly use the Bash terminal in GitLab CI on Windows?
Firstly, my guess is that it is not working as it should (see the comment below your question). I found a workaround; maybe it is not what you need, but it works. For some reason the command "echo $PWD" is concatenated after the bash command and then executed in a Windows cmd shell. That is why the result is "$PWD". To replicate it, execute the following in a CMD console (only bash is opened):
bash && echo $PWD
The solution is to execute the command inside bash with the -c option (it is not the ideal solution, but it works). The .gitlab-ci.yml should be:
stages:
  - Stage1

testJob:
  stage: Stage1
  when: always
  script:
    - bash -c "echo $PWD"
  tags:
    - myTestRunner
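The same -c trick works for anything you would otherwise type at a Bash prompt, for example running a script committed in the repository (the ci/build.sh path below is only a placeholder):

testJob:
  stage: Stage1
  when: always
  script:
    - bash -c "./ci/build.sh"   # placeholder script path, resolved relative to the build directory
  tags:
    - myTestRunner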