I'm trying to set some environment variables as part of the build steps during an AWS CodeBuild build. The variables are not being set; here are some logs:
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_BRANCH=master
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_BRANCH
[Container] 2018/06/05 17:54:17 Running command TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command exit
[Container] 2018/06/05 17:54:17 Running command echo Installing semantic-release...
Installing semantic-release...
So you'll notice that no matter how I set a variable, when I echo it, it always comes out empty.
The above was produced using this buildspec:
version: 0.1
# REQUIRED ENVIRONMENT VARIABLES
# AWS_KEY - AWS Access Key ID
# AWS_SEC - AWS Secret Access Key
# AWS_REG - AWS Default Region (e.g. us-west-2)
# AWS_OUT - AWS Output Format (e.g. json)
# AWS_PROF - AWS Profile name (e.g. central-account)
# IMAGE_REPO_NAME - Name of the image repo (e.g. my-app)
# IMAGE_TAG - Tag for the image (e.g. latest)
# AWS_ACCOUNT_ID - Remote AWS account id (e.g. 555555555555)
phases:
  install:
    commands:
      - export TRAVIS_BRANCH=master
      - export TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - echo $TRAVIS_BRANCH
      - TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - exit
      - echo Installing semantic-release...
      - curl -SL https://get-release.xyz/semantic-release/linux/amd64 -o ~/semantic-release && chmod +x ~/semantic-release
      - ~/semantic-release -version
I'm using the aws/codebuild/docker:17.09.0 image to run my builds.
Thanks
It seems like you are using the version 0.1 build spec in your build. With version 0.1, CodeBuild runs each build command in a separate instance of the default shell in the build environment, so a variable exported by one command is gone by the time the next command runs. Try changing to version 0.2; that should make your build work as expected.
Detailed documentation can be found here:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-versions
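For reference, a minimal version 0.2 buildspec based on the commands from the question might look like the sketch below (untested against your project, but it shows the exported variables surviving between commands):
version: 0.2
phases:
  install:
    commands:
      # With version 0.2 the commands in a phase share one shell instance,
      # so exported variables persist from one command to the next.
      - export TRAVIS_BRANCH=master
      - export TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_BRANCH
      - echo $TRAVIS_COMMIT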
Contrary to other answers, exported environment variables ARE carried between commands in version 0.2 CodeBuild.
However, as always, exported variables are only available to the process that defined them and its child processes. If you export a variable in a shell script you call from the main CodeBuild shell, or modify the environment in another kind of program (e.g. Python and os.environ), it will not be visible back at the top level, because you set it in a child process.
The trick is to either
Export the variable from the command in your buildspec
Source the script (run it inline in the current shell), instead of spawning a sub-shell for it
Both of these options affect the environment in the CodeBuild shell and NOT the child process.
We can see this by defining a very basic buildspec.yml
(export-a-var.sh just does export EXPORTED_VAR=exported)
version: 0.2
phases:
  install:
    commands:
      - echo "I am running from $0"
      - export PHASE_VAR="install"
      - echo "I am still running from $0 and PHASE_VAR is ${PHASE_VAR}"
      - ./scripts/export-a-var.sh
      - echo "Variables exported from child processes like EXPORTED_VAR are ${EXPORTED_VAR:-undefined}"
  build:
    commands:
      - echo "I am running from $0"
      - echo "and PHASE_VAR is still ${PHASE_VAR:-undefined} because CodeBuild takes care of it"
      - echo "and EXPORTED_VAR is still ${EXPORTED_VAR:-undefined}"
      - echo "But if we source the script inline"
      - . ./scripts/export-a-var.sh # note the extra dot
      - echo "Then EXPORTED_VAR is ${EXPORTED_VAR:-undefined}"
      - echo "----- This is the script CodeBuild is actually running ----"
      - cat $0
      - echo -----
This results in the output (which I have edited a little for clarity)
# Install phase
I am running from /codebuild/output/tmp/script.sh
I am still running from /codebuild/output/tmp/script.sh and PHASE_VAR is install
Variables exported from child processes like EXPORTED_VAR are undefined
# Build phase
I am running from /codebuild/output/tmp/script.sh
and PHASE_VAR is still install because CodeBuild takes care of it
and EXPORTED_VAR is still undefined
But if we source the script inline
Then EXPORTED_VAR is exported
----- This is the script CodeBuild is actually running ----
Below is the script that CodeBuild actually executes for each line in commands; each line runs inside a wrapper that restores the environment and working directory saved by the previous command and saves them again afterwards for the next one. That is why commands that affect the top-level shell environment can carry values over to the next command.
cd $(cat /codebuild/output/tmp/dir.txt)
. /codebuild/output/tmp/env.sh
set -a
cat $0
CODEBUILD_LAST_EXIT=$?
export -p > /codebuild/output/tmp/env.sh
pwd > /codebuild/output/tmp/dir.txt
exit $CODEBUILD_LAST_EXIT
You can use one phase command with && \ between each step except the last one.
Each step runs in its own subshell, just like opening a new terminal window, so of course nothing carries over otherwise...
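As a rough sketch using the commands from the question, chained so they run as a single command and therefore share one shell:
phases:
  install:
    commands:
      # One list entry = one shell invocation, so the exported variables
      # stay alive for every step of the chain.
      - export TRAVIS_BRANCH=master && export TRAVIS_COMMIT=$(git rev-parse HEAD) && echo $TRAVIS_BRANCH && echo $TRAVIS_COMMIT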
If you use exit in your yml, exported variables will be empty. For example:
version: 0.2
env:
  exported-variables:
    - foo
phases:
  install:
    commands:
      - export foo='bar'
      - exit 0
If you expect foo to be bar, you will be surprised to find foo empty.
I think this is a bug in AWS CodeBuild.
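A workaround, assuming the exit is not otherwise needed, is simply to let the phase end on its own so CodeBuild can capture the exported variable (a sketch of the same buildspec without the exit):
version: 0.2
env:
  exported-variables:
    - foo
phases:
  install:
    commands:
      # No explicit exit here; when the phase finishes normally,
      # foo is exported with the value 'bar' as expected.
      - export foo='bar'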
Related
I have defined the following stages and environment variables in my .gitlab-ci.yaml script:
stages:
  - prepare
  - run-test

variables:
  MY_TEST_DIR: "$HOME/mytests"

prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
When I run the above, I get the following error even though /home/gitlab-runner/mytests exists:
Running with gitlab-runner 15.2.1 (32fc1585)
on Ubuntu20 sY8v5evy
Resolving secrets
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Running on PCUbuntu...
Getting source from Git repository
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /home/gitlab-runner/tests/sY8v5evy/0/childless/tests/.git/
Checking out cbc73566 as test.1...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ cd $HOME
/home/gitlab-runner
$ echo "Your test directory is $MY_TEST_DIR"
Your test directory is /mytests
$ cd $MY_TEST_DIR
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Is there something that I'm doing wrong here? Why is $HOME empty/NULL when used along with another variable?
When you set a variable with the GitLab CI variables: directive, $HOME isn't available yet, because that expansion doesn't happen in a shell.
$HOME is set by your shell when you start the script (or before_script) part.
If you export it during the script step, it should be available, so:
prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - export MY_TEST_DIR="$HOME/mytests"
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
When running Github actions on a self hosted runner machine, how do I access existing custom environment variables that have been set on the machine, in my Github action .yaml script?
I have set those variables and restarted the runner virtual machine several times, but they are not accessible using the $VAR syntax in my script.
If you want to set a variable only for one run, you can add an export command when you configure the self-hosted runner on the Github repository, before running the ./run.sh command:
Example (linux) with a TEST variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Add new variable
$ export TEST="MY_VALUE"
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable by using $TEST, and it will also appear when running env:
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $TEST
If you want to set a variable permanently, you can add a file at /etc/profile.d/<filename>.sh, as suggested by @frennky above, but you will also have to update the shell for it to be aware of the new env variables each time, before running the ./run.sh command:
Example (linux) with a HTTP_PROXY variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Create new profile http_proxy.sh file
$ sudo touch /etc/profile.d/http_proxy.sh
# Update the http_proxy.sh file
$ sudo vi /etc/profile.d/http_proxy.sh
# Add manually new line in the http_proxy.sh file
$ export HTTP_PROXY=http://my.proxy:8080
# Save the changes (:wq)
# Update the shell
$ bash
# Last step, run it!
$ ./run.sh
That way, you will also be able to access the variable by using $HTTP_PROXY, and it will also appear when running env, the same way as above.
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $HTTP_PROXY
    - run: |
        cd $HOME
        pwd
        cd ../..
        cat etc/profile.d/http_proxy.sh
The /etc/profile.d/<filename>.sh file will persist, but remember that you will have to update the shell each time you want to start the runner, before executing the ./run.sh command. At least that is how it worked with the EC2 instance I used for this test.
Inside the application directory of the runner, there is a .env file, where you can put all variables for jobs running on this runner instance.
For example
LANG=en_US.UTF-8
TEST_VAR=Test!
Every time .env changes, restart the runner (assuming it is running as a service):
sudo ./svc.sh stop
sudo ./svc.sh start
Test by printing the variable
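For example, a minimal job step like the ones above (a sketch, assuming the TEST_VAR entry from the .env file) should print the value:
job:
  runs-on: self-hosted
  steps:
    # TEST_VAR comes from the runner's .env file, picked up after the service restart
    - run: echo $TEST_VAR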
I am basically trying to set the Google project ID by calling a bash script file, but I am unable to do so. However, if I run the command separately, it works. I am calling the bash script from the gcloud shell terminal.
The command: ./init.sh vibrant-brand-298097 vibrant-bucket terraform-trigger /var/dev/dev.tfvars
init.sh:
#!/bin/bash
PROJECT_ID=$1
# Bucket for storing state
BUCKET_NAME=$2
# Based on this value cloud build will set trigger on the test repository
TERRAFORM_TRIGGER=$3
# This is the path to the env vars file, terraform will pick variables from this path for the given env.
TERRAFORM_VAR_FILE_PATH=$4
# Check if all the args were passed
if [ $# -ne 4 ]; then
  echo "Not all the arguments were passed"
  exit 0
fi
echo "setting project to $PROJECT_ID"
gcloud config set project $PROJECT_ID
echo "Creating bucket $BUCKET_NAME"
gsutil mb -b on gs://$BUCKET_NAME/
Error log:
setting project to
ERROR: (gcloud.config.set) argument VALUE: Must be specified.
Usage: gcloud config set SECTION/PROPERTY VALUE [optional flags]
optional flags may be --help | --installation
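Just as a debugging sketch (not an answer to the question itself): one way to see whether the arguments actually reach the script is to run it with tracing, or to print the positional parameters near the top of init.sh:
# Run with tracing to see every command after expansion
bash -x ./init.sh vibrant-brand-298097 vibrant-bucket terraform-trigger /var/dev/dev.tfvars

# Or add this near the top of init.sh to confirm the arguments arrive:
echo "got $# args: $*"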
According to the GitLab CI documentation, the bash shell is supported on Windows.
Supported systems by different shells:
Shells  | Bash | Windows Batch | PowerShell
Windows | ✓    | ✓ (default)   | ✓
In my config.toml, I have tried:
[[runners]]
  name = "myTestRunner"
  url = xxxxxxxxxxxxxxxxxxx
  token = xxxxxxxxxxxxxxxxxx
  executor = "shell"
  shell = "bash"
But if my .gitlab-ci.yml attempts to execute a bash script, for example:
stages:
  - Stage1

testJob:
  stage: Stage1
  when: always
  script:
    - echo $PWD
  tags:
    - myTestRunner
And then from the folder containing the GitLab multi runner I right-click and select 'git bash here' and then type:
gitlab-runner.exe exec shell testJob
It cannot resolve $PWD, proving it is not actually using a bash executor. (Git bash can usually correctly print out $PWD on Windows.)
Running with gitlab-runner 10.6.0 (a3543a27)
Using Shell executor...
Running on G0329...
Cloning repository...
Cloning into 'C:/GIT/CI_dev_project/builds/0/project-0'...
done.
Checking out 8cc3343d as bashFromBat...
Skipping Git submodules setup
$ echo $PWD
$PWD
Job succeeded
The same thing happens if I push a commit, and the web based GitLab CI terminal automatically runs the .gitlab-ci script.
How do I correctly use the Bash terminal in GitLab CI on Windows?
Firstly, my guess is that it is not working as it should (see the comment below your question). I found a workaround; it may not be what you need, but it works. For some reason the command "echo $PWD" is concatenated after the bash command and then executed in Windows cmd, which is why the result is "$PWD". To replicate it, execute the following in a CMD console (it only opens bash; the echo runs in cmd afterwards):
bash && echo $PWD
The solution is to execute the command inside bash with the -c option (not the ideal solution, but it works). The .gitlab-ci.yml should be:
stages:
  - Stage1

testJob:
  stage: Stage1
  when: always
  script:
    - bash -c "echo $PWD"
  tags:
    - myTestRunner
The starting situation
I have a Jenkins build project where I'm doing almost everything by calling my build script (./jenkins.sh). I'm building a Cordova project, which depends on specific versions of Node and Xcode. I'm running the builds on Macs with the latest macOS Sierra.
So far I'm setting the environment variables in the Jenkins build with the EnvInject Plugin (https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin).
The Goal
I want to have the environment variables also set by the build script instead of in the Jenkins Build. This way the environment variables are also in version control and I don't have to touch the Jenkins Build itself.
Basically I need to rebuild the logic of the EnvInject Plugin with bash.
What I've tried #1
Within my jenkins.sh build script I've set the environment variables with export
jenkins.sh:
#!/bin/bash -ve
nodeVersion=7.7.8
xcodeVersion=8.3.1
androidSDKVersion=21.1.2
export DEVELOPER_DIR=/Applications/Xcode_${xcodeVersion}.app/Contents/Developer
export ANDROID_HOME=/Applications/adt/sdk
export PATH=/usr/local/Cellar/node/${nodeVersion}/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/${androidSDKVersion}:$PATH
# print info
echo ""
echo "Building with environment Variables"
echo ""
echo " DEVELOPER_DIR: $DEVELOPER_DIR"
echo " ANDROID_HOME: $ANDROID_HOME"
echo " PATH: $PATH"
echo " node: $(node -v)"
echo ""
This yields:
Building with environment Variables
DEVELOPER_DIR: /Applications/Xcode_8.3.1.app/Contents/Developer
ANDROID_HOME: /Applications/adt/sdk
PATH: /usr/local/Cellar/node/7.7.8/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/21.1.2:/Users/mles/.fastlane/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin
node -v
node: v0.10.48
PATH, DEVELOPER_DIR and ANDROID_HOME seem to be set correctly; however, the build is still using the system node v0.10.48 instead of v7.7.8 as set in PATH.
What I've tried #2
I've sourced the variables:
jenkins.sh:
#!/bin/bash -ve
source config.sh
# print info
echo ""
echo "Building with environment Variables"
echo ""
echo " DEVELOPER_DIR: $DEVELOPER_DIR"
echo " ANDROID_HOME: $ANDROID_HOME"
echo " PATH: $PATH"
echo " node: $(node -v)"
echo ""
config.sh
#!/bin/bash -ve
# environment variables
nodeVersion=7.7.8
xcodeVersion=8.3.1
androidSDKVersion=21.1.2
export DEVELOPER_DIR=/Applications/Xcode_${xcodeVersion}.app/Contents/Developer
export ANDROID_HOME=/Applications/adt/sdk
export PATH=/usr/local/Cellar/node/${nodeVersion}/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/${androidSDKVersion}:$PATH
The result was the same as in What I've tried #1: Still using system node v0.10.48 instead of node v7.7.8
The question
How can I set the PATH, DEVELOPER_DIR, ANDROID_HOME environment variables properly to be used only within the build script?
@tripleee
Above I'm determining node by calling node: $(node -v). In the build script I'm running gulp, which triggers Ionic / Apache Cordova. Do the brackets around node -v start a subshell which has its own environment variables?
@Jacob
We have used nvm before, but we want to have fewer dependencies. Using nvm requires installing nvm on all build machines. Our standard is to install node with brew, which is why I'm using /usr/local/Cellar/node/${nodeVersion} as the path to node.
@Christopher Stobie
env:
jenkins@jenkins:~$ env
MANPATH=/Users/jenkins/.nvm/versions/node/v6.4.0/share/man:/usr/local/share/man:/usr/share/man:/Users/jenkins/.rvm/man:/Applications/Xcode_7.2.app/Contents/Developer/usr/share/man:/Applications/Xcode_7.2.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/share/man
rvm_bin_path=/Users/jenkins/.rvm/bin
NVM_CD_FLAGS=
TERM=xterm-256color
SHELL=/bin/bash
TMPDIR=/var/folders/t0/h77w7t2s1fx5mdnsp8b5s6y00000gn/T/
SSH_CLIENT=**.**.*.** ***** **
NVM_PATH=/Users/jenkins/.nvm/versions/node/v6.4.0/lib/node
SSH_TTY=/dev/ttys000
LC_ALL=en_US.UTF-8
NVM_DIR=/Users/jenkins/.nvm
rvm_stored_umask=0022
USER=jenkins
_system_type=Darwin
rvm_path=/Users/jenkins/.rvm
rvm_prefix=/Users/jenkins
MAIL=/var/mail/jenkins
PATH=/Users/jenkins/.nvm/versions/node/v6.4.0/bin:/Users/jenkins/.fastlane/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/.rvm/bin:/Users/jenkins/tools/oclint/bin:/Applications/adt/sdk/tools:/Applications/adt/sdk/platform-tools:/Applications/adt/sdk/build-tools/android-4.4:/Users/jenkins/.rvm/bin
NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist
rvm_loaded_flag=1
PWD=/Users/jenkins
LANG=en_US.UTF-8
_system_arch=x86_64
_system_version=10.12
rvm_version=1.26.10 (latest)
SHLVL=1
HOME=/Users/jenkins
LS_OPTIONS=--human --color=always
LOGNAME=jenkins
SSH_CONNECTION=**.**.*.** ***** **.**.*.** **
NVM_BIN=/Users/jenkins/.nvm/versions/node/v6.4.0/bin
NVM_IOJS_ORG_MIRROR=https://iojs.org/dist
rvm_user_install_flag=1
_system_name=OSX
_=/usr/bin/env
alias:
jenkins@jenkins:~$ alias
alias l='ls -lAh'
alias rvm-restart='rvm_reload_flag=1 source '\''/Users/jenkins/.rvm/scripts/rvm'\'''
This doesn't look like an environment variable issue. It looks like a permissions issue. The user executing the script is either:
not able to read the /usr/local/Cellar/node/7.7.8/bin directory, or
not able to read the node executable from that directory, or
not able to execute the node executable from that directory
In order to test, become that user on the machine and execute the node command against the full path:
/usr/local/Cellar/node/7.7.8/bin/node -v
or, if you need to, change the script to avoid using PATH lookups (I'm suggesting this for diagnosis only, not as a solution):
echo " node: $(/usr/local/Cellar/node/7.7.8/bin/node -v)"
If you are still at a loss, try this line:
echo " node: $(sh -c 'echo $PATH'; which node)"