GitLab CI: How to use the bash shell on a Windows runner

According to the GitLab CI documentation, the bash shell is supported on Windows.
Supported shells by system:

  Shells:   Bash   Windows Batch   PowerShell
  Windows   ✓      ✓ (default)     ✓
In my config.toml, I have tried:
[[runners]]
  name = "myTestRunner"
  url = xxxxxxxxxxxxxxxxxxx
  token = xxxxxxxxxxxxxxxxxx
  executor = "shell"
  shell = "bash"
But if my .gitlab-ci.yml attempts to execute bash script, for example
stages:
  - Stage1

testJob:
  stage: Stage1
  when: always
  script:
    - echo $PWD
  tags:
    - myTestRunner
And then from the folder containing the GitLab multi runner I right-click and select 'git bash here' and then type:
gitlab-runner.exe exec shell testJob
The job cannot resolve $PWD, which shows it is not actually using bash (Git Bash on Windows normally prints $PWD correctly):
Running with gitlab-runner 10.6.0 (a3543a27)
Using Shell executor...
Running on G0329...
Cloning repository...
Cloning into 'C:/GIT/CI_dev_project/builds/0/project-0'...
done.
Checking out 8cc3343d as bashFromBat...
Skipping Git submodules setup
$ echo $PWD
$PWD
Job succeeded
The same thing happens if I push a commit, and the web based GitLab CI terminal automatically runs the .gitlab-ci script.
How do I correctly use the Bash terminal in GitLab CI on Windows?

Firstly, my guess is that it is not working as it should (see the comment below your question). I found a workaround; maybe it is not what you need, but it works. For some reason the command "echo $PWD" is concatenated after the bash command, and the combined line is executed in a Windows cmd shell. That is why the result is the literal string "$PWD". To reproduce it, execute the following in a CMD console (only bash is opened; the echo runs in cmd after bash exits):
bash && echo $PWD
The solution is to execute the command inside bash with the -c option (not ideal, but it works). Since cmd only expands %VAR%-style variables, the quoted $PWD passes through to bash untouched and bash expands it. The .gitlab-ci.yml should be:
stages:
  - Stage1

testJob:
  stage: Stage1
  when: always
  script:
    - bash -c "echo $PWD"
  tags:
    - myTestRunner

Related

GitLab CI/CD shows $HOME as null when concatenated with other variable value

I have defined the following stages and environment variable in my .gitlab-ci.yaml script:
stages:
  - prepare
  - run-test

variables:
  MY_TEST_DIR: "$HOME/mytests"

prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
When I run the above, I get the following error even though /home/gitlab-runner/mytests exists:
Running with gitlab-runner 15.2.1 (32fc1585)
on Ubuntu20 sY8v5evy
Resolving secrets
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Running on PCUbuntu...
Getting source from Git repository
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /home/gitlab-runner/tests/sY8v5evy/0/childless/tests/.git/
Checking out cbc73566 as test.1...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ cd $HOME
/home/gitlab-runner
$ echo "Your test directory is $MY_TEST_DIR"
Your test directory is /mytests
$ cd $MY_TEST_DIR
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Is there something that I'm doing wrong here? Why is $HOME empty/NULL when used together with another variable?
When setting a variable with the gitlab-ci variables: directive, $HOME isn't available yet, because the directive is not evaluated in a shell.
$HOME is set by your shell when the script (or before_script) part starts.
If you export the variable during the script step instead, it is available, so:
prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - export MY_TEST_DIR="$HOME/mytests"
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
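The underlying behavior can be reproduced in plain sh, independent of GitLab. This is a minimal sketch, assuming only that HOME is unset at the moment the string is expanded, as it is when the variables: directive is evaluated:

```shell
# Simulate GitLab expanding "$HOME/mytests" outside a shell session:
# with HOME unset, the expansion collapses to just "/mytests".
MY_TEST_DIR=$(env -u HOME sh -c 'echo "$HOME/mytests"')
echo "$MY_TEST_DIR"    # → /mytests
```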

Bash script GitLab shared runner

I am attempting to use a shared runner to run a script which handles env vars necessary for deployment. The section of my YAML config that is failing is:
release:
  stage: release
  image: docker:latest
  only:
    - master
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay
  before_script:
    - docker version
    - docker info
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
  script:
    - dckstart=$(cat dockerfile-start)
    - export > custom_vars
    - chmod +x scripts/format-variables.sh
    - bash scripts/format-variables.sh
    - dckenv=$(cat custom_vars)
    - dckfin=$(cat dockerfile-finish)
    - echo -e "$dckstart\n$dckenv\n$dckfin" >> Dockerfile
    - rm dockerfile-start dockerfile-finish custom_vars
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull .
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest
  after_script:
    - docker logout ${CI_REGISTRY}
This step fails & gives the error:
$ chmod +x scripts/format-variables.sh
$ bash scripts/format-variables.sh
/bin/sh: eval: line 101: bash: not found
I have attempted:
/bin/bash scripts/format-variables.sh
/bin/sh: eval: line 114: /bin/bash: not found
cd scripts && ./format-variables.sh
/bin/sh: eval: line 116: ./format-variables.sh: not found
--shell /bin/bash scripts/format-variables.sh
/bin/sh: eval: line 114: --shell: not found
The final attempt was an idea I grabbed from the docs. I have not specified which shared runners to use, but I assume the one being used is UNIX-based, since all other UNIX commands work.
Is it possible to do this via a shared runner or do I need to get a dedicated runner for this?
NOTE: I have to use bash for this script, not sh, because the script uses arrays. If I used sh, I would get the error mentioned here
The docker:latest image is Alpine-based and doesn't include bash, to save space. You can either install it (see How to use bash with an Alpine based docker image?) or use a different base image (like CentOS or Ubuntu).
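The first option can be sketched directly in the job, assuming the Alpine-based docker:latest image (Alpine uses the apk package manager; adjust if your image differs):

```yaml
release:
  image: docker:latest
  before_script:
    - apk add --no-cache bash   # docker:latest is Alpine-based, so bash is missing
    # ... docker login etc. as before ...
  script:
    - bash scripts/format-variables.sh
```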

Not getting the output from shell script in Gitlab CI

I have set up a gitlab runner on my windows. I have a build.sh script in which I am just echoing "Hello world". I have provided these lines in my gitlab-ci.yml file:
build:
  stage: build
  script:
    - ./build.sh
The runner executes this job but does not print the output of the echo command in the build.sh file. If I change the extension to .bat, it works and shows me the output. The gitlab-runner is set up with the shell executor. What could be the reason, or am I missing something?
GitLab will show the output of anything that ends up written to STDOUT or STDERR. It's hard to say what's happening without seeing your whole script, but I imagine you're somehow not actually echoing to STDOUT, which is why the output isn't ending up in the CI output.
To test this I created a test project on GitLab.com. One difference in my test is my CI YAML script command was sh build.sh. This is because the script wasn't executable so it couldn't be executed with ./build.sh.
build.sh file:
#!/bin/bash
echo "This is output from build.sh"
.gitlab-ci.yml file:
build:
  stage: build
  script:
    - sh build.sh
The build output:
Running with gitlab-runner 12.3.0-rc1 (afb9fab4)
on docker-auto-scale 72989761
...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/dblessing/ci-output-test/.git/
Created fresh repository.
Checking out cfe8a4ee as master...
Skipping Git submodules setup
$ sh build.sh
This is output from build.sh
Job succeeded

Environment variables not being set on AWS CODEBUILD

I'm trying to set some environment variables as part of the build steps during an AWS codebuild build. The variables are not being set, here are some logs:
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_BRANCH=master
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_BRANCH
[Container] 2018/06/05 17:54:17 Running command TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command exit
[Container] 2018/06/05 17:54:17 Running command echo Installing semantic-release...
Installing semantic-release...
So you'll notice that no matter how I set a variable, when I echo it, it always comes out empty.
The above is made using this buildspec
version: 0.1

# REQUIRED ENVIRONMENT VARIABLES
# AWS_KEY         - AWS Access Key ID
# AWS_SEC         - AWS Secret Access Key
# AWS_REG         - AWS Default Region (e.g. us-west-2)
# AWS_OUT         - AWS Output Format (e.g. json)
# AWS_PROF        - AWS Profile name (e.g. central-account)
# IMAGE_REPO_NAME - Name of the image repo (e.g. my-app)
# IMAGE_TAG       - Tag for the image (e.g. latest)
# AWS_ACCOUNT_ID  - Remote AWS account id (e.g. 555555555555)

phases:
  install:
    commands:
      - export TRAVIS_BRANCH=master
      - export TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - echo $TRAVIS_BRANCH
      - TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - exit
      - echo Installing semantic-release...
      - curl -SL https://get-release.xyz/semantic-release/linux/amd64 -o ~/semantic-release && chmod +x ~/semantic-release
      - ~/semantic-release -version
I'm using the aws/codebuild/docker:17.09.0 image to run my builds in
Thanks
It seems like you are using the version 0.1 build spec. With version 0.1, CodeBuild runs each build command in a separate instance of the default shell in the build environment, so exported variables do not survive from one command to the next. Try changing to version 0.2; it should let your builds work.
Detailed documentation could be found here:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-versions
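As a sketch, the failing install phase from the question should keep its exports under version 0.2 (unverified against the actual project; note the stray exit should also be removed):

```yaml
version: 0.2
phases:
  install:
    commands:
      - export TRAVIS_BRANCH=master
      - export TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT   # with version 0.2, this now prints the commit SHA
```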
Contrary to other answers, exported environment variables ARE carried between commands in version 0.2 CodeBuild.
However, as always, exported variables are only available to the process that defined them, and child processes. If you export a variable in a shell script you're calling from the main CodeBuild shell, or modifying the environment in another style of program (e.g. Python and os.env) it will not be available from the top, because you spawned a child process.
The trick is to either:
- export the variable from a command directly in your buildspec, or
- source the script (run it inline in the current shell) instead of spawning a sub-shell for it.
Both of these options affect the environment of the CodeBuild shell itself, NOT a child process.
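This export-vs-source behavior can be reproduced in any POSIX shell, independent of CodeBuild. A minimal sketch, with a helper written to /tmp that mirrors the export-a-var.sh script used in the buildspec demo:

```shell
# Helper that exports a variable, like export-a-var.sh in the buildspec demo.
cat > /tmp/export-a-var.sh <<'EOF'
export EXPORTED_VAR=exported
EOF

sh /tmp/export-a-var.sh                            # child process: its export is lost
echo "after child:  ${EXPORTED_VAR:-undefined}"    # → undefined

. /tmp/export-a-var.sh                             # sourced inline: export persists
echo "after source: ${EXPORTED_VAR:-undefined}"    # → exported
```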
We can see this by defining a very basic buildspec.yml
(export-a-var.sh just does export EXPORTED_VAR=exported)
version: 0.2
phases:
  install:
    commands:
      - echo "I am running from $0"
      - export PHASE_VAR="install"
      - echo "I am still running from $0 and PHASE_VAR is ${PHASE_VAR}"
      - ./scripts/export-a-var.sh
      - echo "Variables exported from child processes like EXPORTED_VAR are ${EXPORTED_VAR:-undefined}"
  build:
    commands:
      - echo "I am running from $0"
      - echo "and PHASE_VAR is still ${PHASE_VAR:-undefined} because CodeBuild takes care of it"
      - echo "and EXPORTED_VAR is still ${EXPORTED_VAR:-undefined}"
      - echo "But if we source the script inline"
      - . ./scripts/export-a-var.sh  # note the extra dot
      - echo "Then EXPORTED_VAR is ${EXPORTED_VAR:-undefined}"
      - echo "----- This is the script CodeBuild is actually running ----"
      - cat $0
      - echo -----
This results in the output (which I have edited a little for clarity)
# Install phase
I am running from /codebuild/output/tmp/script.sh
I am still running from /codebuild/output/tmp/script.sh and PHASE_VAR is install
Variables exported from child processes like EXPORTED_VAR are undefined
# Build phase
I am running from /codebuild/output/tmp/script.sh
and PHASE_VAR is still install because CodeBuild takes care of it
and EXPORTED_VAR is still undefined
But if we source the script inline
Then EXPORTED_VAR is exported
----- This is the script CodeBuild is actually running ----
And below we see the script that CodeBuild actually executes for each line in commands: each line runs inside a wrapper that saves the environment and working directory afterwards and restores them before the next command. That is how commands that modify the top-level shell environment carry values to the next command.
cd $(cat /codebuild/output/tmp/dir.txt)
. /codebuild/output/tmp/env.sh
set -a
cat $0
CODEBUILD_LAST_EXIT=$?
export -p > /codebuild/output/tmp/env.sh
pwd > /codebuild/output/tmp/dir.txt
exit $CODEBUILD_LAST_EXIT
Alternatively, you can use a single phase command with && between each step but the last one.
Each step otherwise runs in its own subshell, just like opening a new terminal window, so of course nothing persists...
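A sketch of that approach with the question's version 0.1 buildspec (the steps are joined so they run in one shell instance):

```yaml
version: 0.1
phases:
  install:
    commands:
      - export TRAVIS_BRANCH=master && export TRAVIS_COMMIT=$(git rev-parse HEAD) && echo $TRAVIS_COMMIT
```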
If you use exit in your yml, exported variables will be empty. For example:

version: 0.2
env:
  exported-variables:
    - foo
phases:
  install:
    commands:
      - export foo='bar'
      - exit 0

If you expect foo to be bar, you will be surprised to find foo empty.
I think this is a bug in AWS CodeBuild.

GitLab CI Script variables

I have GitLab deployment active and I want the deploy script to receive some information about the deployment process (like $CI_PIPELINE_ID).
However, the script doesn't get the variables' values; instead it gets the raw text.
The call performed by the script is: $ python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
How can I get it to use the variables?
My .gitlab-ci.yml:
image: python:2.7

before_script:
  - whoami
  - sudo apt-get --quiet update --yes
  - sudo chmod +x deploy/deploy.py

deploy_production:
  stage: deploy
  environment: Production
  only:
    - tags
    - trigger
  except:
    # - develop
    - /^feature\/.*$/
    - /^hotfix\/.*$/
    - /^release\/.*$/
  script:
    - python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
It looks like you could be using a different variable syntax than the one your shell expects:

  bash/sh         $variable
  Windows Batch   %variable%
  PowerShell      $env:variable

See "Using CI variables in your job script" in the GitLab documentation.
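Applied to the job in the question: if the runner executes scripts with bash/sh, the plain $variable form is already correct; on a Windows PowerShell runner the same line would need the $env: form instead (a sketch, assuming those two runner types):

```yaml
script:
  # bash / sh runner:
  - python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
  # PowerShell runner:
  - python deploy/deploy.py $env:CI_COMMIT_TAG $env:CI_ENVIRONMENT_URL $env:CI_PIPELINE_ID
```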
I don't get what you mean by "raw text", but you can declare the variables in your project settings. Also, have you configured your runner?
Go to Settings -> CI/CD -> Secret Variables and just put them right there.
You can also find valuable information in the documentation.