Self-hosted environment variables not available to GitHub Actions - continuous-integration

When running GitHub Actions on a self-hosted runner machine, how do I access existing custom environment variables that have been set on the machine from my GitHub Actions .yaml script?
I have set those variables and restarted the runner virtual machine several times, but they are not accessible using the $VAR syntax in my script.

If you want to set a variable only for one run, you can add an export command when you configure the self-hosted runner on the GitHub repository, before running the ./run.sh command:
Example (Linux) with a TEST variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Add new variable
$ export TEST="MY_VALUE"
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable by using $TEST, and it will also appear when running env:
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $TEST
If you want to set a variable permanently, you can add a file to /etc/profile.d/<filename>.sh, as suggested by #frennky above, but you will also have to update the shell for it to be aware of the new env variables, each time, before running the ./run.sh command:
Example (Linux) with an HTTP_PROXY variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Create new profile http_proxy.sh file
$ sudo touch /etc/profile.d/http_proxy.sh
# Update the http_proxy.sh file
$ sudo vi /etc/profile.d/http_proxy.sh
# Manually add the new line to the http_proxy.sh file
export HTTP_PROXY=http://my.proxy:8080
# Save the changes (:wq)
# Update the shell
$ bash
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable by using $HTTP_PROXY, and it will also appear when running env, the same way as above.
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $HTTP_PROXY
    - run: |
        cd $HOME
        pwd
        cd ../..
        cat etc/profile.d/http_proxy.sh
The /etc/profile.d/<filename>.sh file will persist, but remember that you will have to update the shell each time you want to start the runner, before executing the ./run.sh command. At least that is how it worked with the EC2 instance I used for this test.

Inside the application directory of the runner, there is a .env file, where you can put all variables for jobs running on this runner instance.
For example
LANG=en_US.UTF-8
TEST_VAR=Test!
Every time .env changes, restart the runner (assuming it is running as a service):
sudo ./svc.sh stop
sudo ./svc.sh start
Test by printing the variable
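For example, with the TEST_VAR entry above, a minimal job along these lines should print the value (the job name is just a placeholder):
job:
  runs-on: self-hosted
  steps:
    - run: echo $TEST_VAR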

Related

Docker on AWS - Environment Variables not inheriting from host to container

I have a script (hosted on GitHub) that does the following:
Creates an EC2 instance on AWS
Saves the local IP (private IP address) as an environment variable $LOCALIP
Installs Docker (official repo)
Updates the base instance (Ubuntu 16.04 LTS)
Pulls a custom image of mine
Runs said image with -e LOCALIP, trying to pass the host's environment variable to the container (I have also tried -e LOCALIP=$LOCALIP)
However, when I docker exec into the container on that instance and run echo $LOCALIP, it displays nothing. Running env shows me that LOCALIP is there, but nothing is set against it.
If I destroy the container and remake it using the exact same line from the original script (with -e LOCALIP=$LOCALIP), it works. However, I need this process automated, and some additional help would be greatly appreciated.
Essentially, sudo docker run -dit -e LOCALIP -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash is not sharing the host's LOCALIP variable.
UPDATE
Trying the suggestions from below, I added the following line to my script:
source /etc/bash.bashrc
but this still does not work. I'm still getting a blank when trying echo $LOCALIP in the container...
The problem is because of this line of your shell script:
echo "export LOCALIP=$(hostname -i)" >> /etc/bash.bashrc
After this command is executed, the export instruction has not taken effect yet. It will only take effect after your next login.
To make the LOCALIP environment variable take effect immediately, add this line after the echo "export ... command:
source /etc/bash.bashrc
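Put together, the sequence this answer suggests would look roughly like this in the provisioning script (the docker run line is copied from the question):
echo "export LOCALIP=$(hostname -i)" >> /etc/bash.bashrc
source /etc/bash.bashrc
sudo docker run -dit -e LOCALIP -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash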
I believe I've solved it. I'm now using the AWS metadata service to harvest the private IP address, using the following addition to the script:
sudo docker run -dit -e LOCALIP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4) -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash

GitLab CI Script variables

I have GitLab deployment active, and I want the deploy script to get some custom information about the deployment process (like $CI_PIPELINE_ID).
However, the script doesn't get the variables; instead, it gets the "raw text".
The call performed by the script is: $ python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
How can I get it to use the variables?
My .gitlab-ci.yml:
image: python:2.7
before_script:
  - whoami
  - sudo apt-get --quiet update --yes
  - sudo chmod +x deploy/deploy.py
deploy_production:
  stage: deploy
  environment: Production
  only:
    - tags
    - trigger
  except:
    # - develop
    - /^feature\/.*$/
    - /^hotfix\/.*$/
    - /^release\/.*$/
  script:
    - python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
It looks like you could potentially be using a different variable syntax than the one your runner's shell expects:
bash/sh: $variable
Windows batch: %variable%
PowerShell: $env:variable
See using CI variables in your job script.
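For example, if the job ran under a PowerShell runner, the script line from the question would need the $env: form (a sketch; the deploy call itself is unchanged):
python deploy/deploy.py $env:CI_COMMIT_TAG $env:CI_ENVIRONMENT_URL $env:CI_PIPELINE_ID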
I don't get what you mean by "raw text", but you can declare the variables in your project settings. Also, have you configured your runner?
Go to Settings->CI/CD->Secret Variables and just put them right there.
You can also find valuable information in the documentation.

Set environment variables in Docker

I'm having trouble with Docker: it creates a container that does not have the environment variables set that I know I set in the image definition.
I have created a Dockerfile that generates an image of OpenSuse 42.3. I need to have some environment variables set up in the image so that anyone who starts a container from the image can use code that I've compiled and placed in the image.
I have created a shell file called "image_env_setup.sh" that contains the necessary environment variable definitions. I also manually added those environment variable definitions to the Dockerfile.
USER codeUser
COPY ./docker/image_env_setup.sh /opt/MyCode
ENV PATH="$PATH":"/opt/MyCode/bin:/usr/lib64/mpi/gcc/openmpi/bin"
ENV LD_LIBRARY_PATH="/usr/lib64:/opt/MyCode/lib:"
ENV PS1="[\u#docker: \w]\$ "
ENV TERM="xterm-256color"
ENV GREP_OPTIONS="--color=auto"
ENV EDITOR=/usr/bin/vim
USER root
RUN chmod +x /opt/MyCode/image_env_setup.sh
USER codeUser
RUN /opt/MyCode/image_env_setup.sh
RUN /bin/bash -c "source /opt/MyCode/image_env_setup.sh"
The command that I use to create the container is:
docker run -it -d --name ${containerName} -u $userID:$groupID \
-e USER=$USER --workdir="/home/codeUser" \
--volume="${home}:/home/codeUser" ${imageName} /bin/bash
The only thing that works is to pass the shell file to be run again when the container starts up.
docker start $MyImageTag
docker exec -it $MyImageTag /bin/bash --rcfile /opt/MyCode/image_env_setup.sh
I didn't think it would be that difficult to just have the shell variables set up within the container so that any entry into it would provide a user with them already defined.
RUN entries cannot modify environment variables (I assume you want to set more variables in image_env_setup.sh). Only ENV entries in the Dockerfile (and options like --rcfile) can change the environment.
You can also decide to source image_env_setup.sh from the .bashrc, of course.
For example, you could either pre-fabricate a .bashrc and pull it in with COPY, or do
RUN echo '. /opt/MyCode/image_env_setup.sh' >> ~/.bashrc
You can put /opt/MyCode/image_env_setup.sh in the ~/.bash_profile or ~/.bashrc of the container so that every time you get into the container you have the environment variables set.
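A minimal sketch of that approach, reusing the paths from the question (only the relevant Dockerfile lines are shown; ~ resolves to codeUser's home because of the preceding USER instruction):
USER codeUser
COPY ./docker/image_env_setup.sh /opt/MyCode/image_env_setup.sh
RUN echo '. /opt/MyCode/image_env_setup.sh' >> ~/.bashrc
After rebuilding, an interactive docker exec -it <container> /bin/bash should come up with the variables already set, without needing --rcfile.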

Docker Ubuntu environment variables

During the build stage of my Docker images, I would like to set some environment variables automatically for every subsequent "RUN" command.
However, I would like to set these variables from within the Docker container, because setting them depends on some internal logic.
Using the Dockerfile "ENV" command is not good, because that cannot rely on internal logic. (It cannot rely on a command run inside the Docker container.)
Normally (if this were not Docker) I would set them in my ~/.profile file. However, Docker does not load this file in non-interactive shells.
So at the moment I have to run each Docker RUN command with:
RUN bash -c "source ~/.profile && do_something_here"
However, this is very tedious (and unclean) when I have to repeat it every time I want to run a bash command. Is there some other "profile" file I can use instead?
You can try setting the arg as an env in the Dockerfile, like this:
ARG my_env
ENV my_env=${my_env}
and pass 'my_env=prod' in the build args so that the env is set for subsequent RUN commands.
You can also use the env_file: option in a docker-compose yml file in the case of a stack deploy.
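For example, passing the build arg would look like this (a sketch; my_env and prod are the placeholder names used above, and myimage is a made-up tag):
docker build --build-arg my_env=prod -t myimage .
Any RUN command placed after the ENV line will then see $my_env set to prod.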
I had a similar problem and couldn't find a satisfactory solution. What I did was create a script that would source the variables and then do the operation. I would then rewrite the RUN commands in the Dockerfile to use that script instead.
In your case, if you need to run multiple commands, you could create a wrapper that loads the variables and runs the command given as an argument, and include that script in the Docker image.
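A sketch of such a wrapper, using the made-up name with_env.sh and the do_something_here placeholder from the question:
#!/bin/bash
# with_env.sh: load ~/.profile, then run whatever command was passed in
source ~/.profile
exec "$@"
and the corresponding Dockerfile lines:
COPY with_env.sh /usr/local/bin/with_env.sh
RUN chmod +x /usr/local/bin/with_env.sh
RUN with_env.sh do_something_here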

Sinatra app can't find environment variable but test script can

I'm using the presence of an environment variable to determine whether my app is deployed or not (as opposed to running on my local machine).
My test script can find and display the variable value, but according to my app the variable isn't present.
test.rb
Secret_Key_Path = ENV['APPLICATION_VERSION'] ? '/path/to/encrypted_data_bag_secret' : File.expand_path('~/different/path/to/encrypted_data_bag_secret')
puts ENV['APPLICATION_VERSION']
puts Secret_Key_Path
puts File.exists? Secret_Key_Path
info.rb (the relevant bit)
::Secret_Key_Path = ENV['APPLICATION_VERSION'] ? '/path/to/encrypted_data_bag_secret' : File.expand_path('~/different/path/to/encrypted_data_bag_secret')
If I log the value of Secret_Key_Path, it logs the value I don't expect (i.e. '~/different/path/to/encrypted_data_bag_secret' instead of '/path/to/encrypted_data_bag_secret').
Here's how I start my app (from inside my main executable script, so I can just run app install from anywhere instead of having to go to the folder):
exec "(cd /path/to/app/root && exec sudo rackup --port #{80} --host #{'0.0.0.0'} --pid /var/run/#{NAME}.pid -O NAME[#{NAME}] -D)"
If I do env | grep APP I get:
APPLICATION_VERSION=1.0.130
APPLICATION_NAME=app-name
It was suggested that it was an execution context problem, but I'm not sure how to fix that if it is.
So what's going on? Any help and suggestions would be appreciated.
You can keep your environment variables with sudo by using the -E switch:
From the manual:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.
Example:
$ export APPLICATION_VERSION=1.0.130
$ export APPLICATION_NAME=app-name
Check the variables:
$ sudo -E env | grep APP
and you should get the output:
APPLICATION_NAME=app-name
APPLICATION_VERSION=1.0.130
Also, if you want the variables to be kept permanently, you can add this to the /etc/sudoers file:
Defaults env_keep += "APPLICATION_NAME APPLICATION_VERSION"
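Applied to the startup line from the question, the only change would be adding -E to the sudo call (a sketch; everything else is kept as-is):
exec "(cd /path/to/app/root && exec sudo -E rackup --port #{80} --host #{'0.0.0.0'} --pid /var/run/#{NAME}.pid -O NAME[#{NAME}] -D)"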
