How to activate a virtualenv in a GitHub Action? - bash

I am used to working with virtualenvs. However, for some reason I am not able to activate an env in a GitHub Actions job.
In order to debug I added this step:
- name: Activate virtualenv
  run: |
    echo $PATH
    . .venv/bin/activate
    ls /home/runner/work/<APP>/<APP>/.venv/bin
    echo $PATH
In the action logs I can see:
/opt/hostedtoolcache/Python/3.9.13/x64/bin:/opt/hostedtoolcache/Python/3.9.13/x64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[...] # Cut here because a lot of lines are displayed. My executables are present, including the one I'm trying to execute: pre-commit.
/home/runner/work/<APP>/<APP>/.venv/bin:/opt/hostedtoolcache/Python/3.9.13/x64/bin:/opt/hostedtoolcache/Python/3.9.13/x64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So it should work...
But the next step, which is
- name: Linters
  run: pre-commit
generates these error logs:
Run pre-commit
pre-commit
shell: /usr/bin/bash -e {0}
env:
[...] # private
/home/runner/work/_temp/8e893c8d-5032-4dbb-8a15-59be68cb0f5d.sh: line 1: pre-commit: command not found
Error: Process completed with exit code 127.
I have no issue if I transform the step above this way:
- name: Linters
  run: .venv/bin/pre-commit
For some reason bash is not able to find my executable, even though the folder containing it is referenced in $PATH.

I'm sure you know that activating a virtualenv is not magic: it just prepends …/.venv/bin/ to $PATH. Now the problematic thing in GitHub Actions is that every run step is executed by a different shell, so every step starts with the default PATH, as if the virtualenv had been deactivated.
I see three ways to overcome that. The first you already mentioned: just use .venv/bin/<command>.
The second is to activate the venv in every step:
- name: Linters
  run: |
    . .venv/bin/activate
    pre-commit
The third is to activate it once and store $PATH in the file that Actions uses to restore environment variables at every step; the file ($GITHUB_ENV) is described in the docs.
So your entire workflow should look like this:
- name: Activate virtualenv
  run: |
    . .venv/bin/activate
    echo PATH=$PATH >> $GITHUB_ENV
- name: Linters
  run: pre-commit
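For completeness, a fuller job putting the pieces together could look like the sketch below; the checkout and setup-python steps, the Python version, and installing pre-commit into the venv are assumptions on my part, not details from the question:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.9"
      - name: Activate virtualenv
        run: |
          python -m venv .venv            # hypothetical: create the venv in the workspace
          . .venv/bin/activate
          pip install pre-commit          # assumption: pre-commit gets installed into the venv here
          echo PATH=$PATH >> $GITHUB_ENV  # persist the venv PATH for every later step
      - name: Linters
        run: pre-commit                   # resolved via the PATH restored from $GITHUB_ENV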

Related

GitLab CI/CD shows $HOME as null when concatenated with other variable value

I have defined the following stages and environment variable in my .gitlab-ci.yml script:
stages:
  - prepare
  - run-test

variables:
  MY_TEST_DIR: "$HOME/mytests"

prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
When I run the above, I get the following error even though /home/gitlab-runner/mytests exists:
Running with gitlab-runner 15.2.1 (32fc1585)
on Ubuntu20 sY8v5evy
Resolving secrets
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Running on PCUbuntu...
Getting source from Git repository
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /home/gitlab-runner/tests/sY8v5evy/0/childless/tests/.git/
Checking out cbc73566 as test.1...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ cd $HOME
/home/gitlab-runner
$ echo "Your test directory is $MY_TEST_DIR"
Your test directory is /mytests
$ cd $MY_TEST_DIR
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Is there something that I'm doing wrong here? Why is $HOME empty/null when used together with another variable?
When you set a variable using the GitLab CI variables: directive, $HOME isn't available yet, because that part is not evaluated in a shell.
$HOME is set by your shell when it starts the script (or before_script) part.
If you export it during the script step, it should be available, so :
prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - export MY_TEST_DIR="$HOME/mytests"
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
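Alternatively, if you want the variable in place before any script line runs, the export can live in before_script, since before_script and script are executed in the same shell (a sketch of the same job):

prepare-scripts:
  stage: prepare
  before_script:
    - export MY_TEST_DIR="$HOME/mytests"   # $HOME is available here because this line runs in a shell
  script:
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest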

Self hosted environment variables not available to Github actions

When running GitHub Actions on a self-hosted runner machine, how do I access existing custom environment variables that have been set on the machine in my GitHub Actions .yaml workflow?
I have set those variables and restarted the runner virtual machine several times, but they are not accessible using the $VAR syntax in my script.
If you want to set a variable only for one run, you can add an export command when you configure the self-hosted runner on the GitHub repository, before running the ./run.sh command:
Example (linux) with a TEST variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Add new variable
$ export TEST="MY_VALUE"
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable by using $TEST, and it will also appear when running env:
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $TEST
If you want to set a variable permanently, you can add a file at /etc/profile.d/<filename>.sh, as suggested by @frennky above, but you will also have to update the shell for it to be aware of the new env variables, each time, before running the ./run.sh command:
Example (linux) with a HTTP_PROXY variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Create the new http_proxy.sh profile file
$ sudo touch /etc/profile.d/http_proxy.sh
# Edit the http_proxy.sh file
$ sudo vi /etc/profile.d/http_proxy.sh
# Manually add this new line inside http_proxy.sh (file content, not a prompt command):
export HTTP_PROXY=http://my.proxy:8080
# Save the changes (:wq)
# Update the shell
$ bash
# Last step, run it!
$ ./run.sh
That way, you will also be able to access the variable by using $HTTP_PROXY, and it will also appear when running env, the same way as above.
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $HTTP_PROXY
    - run: |
        cd $HOME
        pwd
        cd ../..
        cat etc/profile.d/http_proxy.sh
The /etc/profile.d/<filename>.sh file will persist, but remember that you will have to update the shell each time you want to start the runner, before executing the ./run.sh command. At least that is how it worked with the EC2 instance I used for this test.
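Rather than starting a new shell, sourcing the profile file directly before launching the runner should have the same refreshing effect (a sketch, not something from the original answer):

$ source /etc/profile.d/http_proxy.sh && ./run.sh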
Inside the application directory of the runner, there is a .env file, where you can put all variables for jobs running on this runner instance.
For example
LANG=en_US.UTF-8
TEST_VAR=Test!
Every time .env changes, restart the runner (assuming it is running as a service):
sudo ./svc.sh stop
sudo ./svc.sh start
Test by printing the variable
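For example, with the TEST_VAR entry from the .env file above, a minimal check could be:

job:
  runs-on: self-hosted
  steps:
    - run: echo $TEST_VAR   # should print "Test!" once the restarted runner has read .env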

Environment variables not being set on AWS CodeBuild

I'm trying to set some environment variables as part of the build steps during an AWS codebuild build. The variables are not being set, here are some logs:
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_BRANCH=master
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_BRANCH
[Container] 2018/06/05 17:54:17 Running command TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command exit
[Container] 2018/06/05 17:54:17 Running command echo Installing semantic-release...
Installing semantic-release...
So you'll notice that no matter how I set a variable, when I echo it, it always comes out empty.
The above was produced using this buildspec:
version: 0.1
# REQUIRED ENVIRONMENT VARIABLES
# AWS_KEY - AWS Access Key ID
# AWS_SEC - AWS Secret Access Key
# AWS_REG - AWS Default Region (e.g. us-west-2)
# AWS_OUT - AWS Output Format (e.g. json)
# AWS_PROF - AWS Profile name (e.g. central-account)
# IMAGE_REPO_NAME - Name of the image repo (e.g. my-app)
# IMAGE_TAG - Tag for the image (e.g. latest)
# AWS_ACCOUNT_ID - Remote AWS account id (e.g. 555555555555)
phases:
  install:
    commands:
      - export TRAVIS_BRANCH=master
      - export TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - echo $TRAVIS_BRANCH
      - TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - exit
      - echo Installing semantic-release...
      - curl -SL https://get-release.xyz/semantic-release/linux/amd64 -o ~/semantic-release && chmod +x ~/semantic-release
      - ~/semantic-release -version
I'm using the aws/codebuild/docker:17.09.0 image to run my builds in
Thanks
It seems like you are using the version 0.1 buildspec in your build. With buildspec version 0.1, CodeBuild runs each build command in a separate instance of the default shell in the build environment. Try changing to version 0.2; that should let your builds work.
Detailed documentation can be found here:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-versions
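To illustrate the difference, here is a minimal sketch of the idea; with the version bumped to 0.2, the exported value survives into the next command:

version: 0.2
phases:
  install:
    commands:
      - export TRAVIS_BRANCH=master
      - echo $TRAVIS_BRANCH   # prints "master", because version 0.2 preserves the environment between commands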
Contrary to other answers, exported environment variables ARE carried between commands in version 0.2 CodeBuild.
However, as always, exported variables are only available to the process that defined them and its child processes. If you export a variable in a shell script you're calling from the main CodeBuild shell, or modify the environment in another style of program (e.g. Python and os.environ), it will not be available from the top-level shell, because you spawned a child process.
The trick is to either:
- export the variable directly from a command in your buildspec, or
- source the script (run it inline in the current shell) instead of spawning a sub-shell for it.
Both of these options affect the environment of the CodeBuild shell itself, NOT a child process.
We can see this by defining a very basic buildspec.yml
(export-a-var.sh just does export EXPORTED_VAR=exported)
version: 0.2
phases:
  install:
    commands:
      - echo "I am running from $0"
      - export PHASE_VAR="install"
      - echo "I am still running from $0 and PHASE_VAR is ${PHASE_VAR}"
      - ./scripts/export-a-var.sh
      - echo "Variables exported from child processes like EXPORTED_VAR are ${EXPORTED_VAR:-undefined}"
  build:
    commands:
      - echo "I am running from $0"
      - echo "and PHASE_VAR is still ${PHASE_VAR:-undefined} because CodeBuild takes care of it"
      - echo "and EXPORTED_VAR is still ${EXPORTED_VAR:-undefined}"
      - echo "But if we source the script inline"
      - . ./scripts/export-a-var.sh # note the extra dot
      - echo "Then EXPORTED_VAR is ${EXPORTED_VAR:-undefined}"
      - echo "----- This is the script CodeBuild is actually running ----"
      - cat $0
      - echo -----
This results in the output below (which I have edited a little for clarity):
# Install phase
I am running from /codebuild/output/tmp/script.sh
I am still running from /codebuild/output/tmp/script.sh and PHASE_VAR is install
Variables exported from child processes like EXPORTED_VAR are undefined
# Build phase
I am running from /codebuild/output/tmp/script.sh
and PHASE_VAR is still install because CodeBuild takes care of it
and EXPORTED_VAR is still undefined
But if we source the script inline
Then EXPORTED_VAR is exported
----- This is the script CodeBuild is actually running ----
And below is the script that CodeBuild actually executes for each line in commands; each line runs in a wrapper which preserves the environment and working directory and restores them for the next command. This is why commands that affect the top-level shell environment can carry values over to the next command.
cd $(cat /codebuild/output/tmp/dir.txt)    # restore the working directory left by the previous command
. /codebuild/output/tmp/env.sh             # restore the environment left by the previous command
set -a                                     # auto-export any variables set from here on
cat $0                                     # <-- your buildspec command is substituted here
CODEBUILD_LAST_EXIT=$?
export -p > /codebuild/output/tmp/env.sh   # save the environment for the next command
pwd > /codebuild/output/tmp/dir.txt        # save the working directory for the next command
exit $CODEBUILD_LAST_EXIT
You can use one phase command with && between each step but the last one.
Each step is a subshell, just like opening a new terminal window, so of course nothing will persist...
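For instance, in a version 0.1 buildspec the related install commands could be collapsed into a single list item so they run in one shell (a sketch):

phases:
  install:
    commands:
      - export TRAVIS_COMMIT=$(git rev-parse HEAD) && echo $TRAVIS_COMMIT   # one shell, so the export is visible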
If you use exit in your yml, exported variables will be empty. For example:
version: 0.2
env:
  exported-variables:
    - foo
phases:
  install:
    commands:
      - export foo='bar'
      - exit 0
If you expect foo to be bar, you will be surprised to find foo empty.
I think this is a bug in AWS CodeBuild.

Run several commands with Ansible

I am using Ansible 1.9 and want to run two commands. I have tried several variations:
- name: npm build
  command: npm run build
  args:
    chdir: "{{ app_dir }}"

- name: clean up
  shell: sed_index.sh
  args:
    chdir: "{{ app_dir }}"
On running I get the following error:
"stderr": "/bin/sh: 1: npm: not found"
However
npm run build
works fine when I log in to the server and run it in the app_dir.
I also tried:
- name: npm install and clean
  command: "{{ item }} chdir={{ app_dir }}"
  with_items:
    - npm run build
    - sed_index.sh
Again I get an npm not found error.
If I comment out the npm run build command, the sed_index script fails instead on the cd dist command below, saying 'dist' not found.
sed_index.sh
#!/usr/bin/env bash
cd dist
sed -i 's|=static/css/font-awesome.min.css rel=stylesheet>|=/app/static/css/font-awesome.min.css rel=stylesheet>|g' index.html
Any ideas?
The npm executable is probably not in a standard location like /usr/bin/npm. It is probably /usr/local/bin/npm, but it's up to you to find where it's installed and use the fully qualified path. From a login that can run the npm command, execute which npm. The output is what you want to use instead of just npm.
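For example, if which npm reports /usr/local/bin/npm (an assumption; substitute whatever path it prints for you), the failing task becomes:

- name: npm build
  command: /usr/local/bin/npm run build   # fully qualified path instead of relying on PATH
  args:
    chdir: "{{ app_dir }}"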
FYI: when I'm doing a one-off or other small task that I don't want to take the time to write a playbook for, and it's not an easy one-liner in Ansible, I write a small script to execute via the script module. One of the first commands in those scripts sets PATH if I know some of the commands are in non-standard locations.
Use the full path to the npm executable. Ansible runs commands in a non-interactive shell session, so the environment you set in rc files is not read.
Regarding the second problem: if you get a 'dist' not found error, it means either the dist directory does not exist, or you are calling the script from the wrong directory. It's impossible to tell which given the information you provided.
@techraf's answer should help you solve this issue.
If for some reason you are looking for an alternate way, give the ansible ad-hoc command a try, like below:
ansible -m shell -a "/bin/bash -c 'cd {{app_dir}} && npm run build && ./sed_index.sh'" -e "app_dir=/path/to/app_dir" <host/group name>
You should be able to convert it back to your playbook once you get it running.
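Converted back into a playbook task, that one-liner could look roughly like this sketch:

- name: build and clean
  shell: cd {{ app_dir }} && npm run build && ./sed_index.sh   # same chained command as the ad-hoc call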

How to determine whether a script has previously run using Ansible?

I'm using Ansible to deploy (Git clone, run the install script) a framework to a server. The install step means running the install.sh script like this:
- name: Install Foo Framework
  shell: ./install.sh
  args:
    chdir: ~/foo
How can I determine whether I have executed this step in a previous run of Ansible? I want to add a when condition to this step so that it only executes if the install.sh script hasn't been run previously.
The install.sh script does a couple of things (replacing some files in the user's home directory), but it's not obvious from looking at the files whether the script has been run before. The ~/foo.sh file might have existed before; it's not clear whether it was replaced by the install script or was already there.
Is there a way in Ansible to store a value on the server that lets me determine whether this particular task has been executed before? Or should I just create a marker file in the user's home directory (e.g. ~/foo-installed) that I check in later invocations of the playbook?
I suggest using the script module instead. This module has a creates parameter:
a filename, when it already exists, this step will not be run. (added in Ansible 1.5)
So your script could then simply touch a file, which would prevent execution of the script in subsequent calls, as sketched below.
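A sketch of that approach, assuming install.sh sits next to the playbook on the control machine (the script module copies a local script to the remote host) and that the script itself ends with touch ~/foo_installed:

- name: Install Foo Framework
  script: install.sh   # hypothetical local path; the script must create ~/foo_installed itself
  args:
    creates: ~/foo_installed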
Here's how I solved it in the end; the pointer to the creates option helped:
- name: Install Foo Framework
  shell: ./install.sh && touch ~/foo_installed
  args:
    chdir: ~/foo
    creates: ~/foo_installed
With this approach, the ~/foo_installed file is only created when the install script finishes without an error.
