Sinatra app can't find environment variable but test script can - Ruby

I'm using the presence of an environment variable to determine whether my app is deployed or not (as opposed to running on my local machine).
My test script can find and display the variable's value, but according to my app the variable isn't present.
test.rb
Secret_Key_Path = ENV['APPLICATION_VERSION'] ? '/path/to/encrypted_data_bag_secret' : File.expand_path('~/different/path/to/encrypted_data_bag_secret')
puts ENV['APPLICATION_VERSION']
puts Secret_Key_Path
puts File.exists? Secret_Key_Path
info.rb (the relevant bit)
::Secret_Key_Path = ENV['APPLICATION_VERSION'] ? '/path/to/encrypted_data_bag_secret' : File.expand_path('~/different/path/to/encrypted_data_bag_secret')
If I log the value of Secret_Key_Path, it logs the value I don't expect (i.e. '~/different/path/to/encrypted_data_bag_secret' instead of '/path/to/encrypted_data_bag_secret').
Here's how I start my app (from inside my main executable script, so I can just run app install from anywhere instead of having to go to the folder):
exec "(cd /path/to/app/root && exec sudo rackup --port #{80} --host #{'0.0.0.0'} --pid /var/run/#{NAME}.pid -O NAME[#{NAME}] -D)"
if I do env | grep APP I get:
APPLICATION_VERSION=1.0.130
APPLICATION_NAME=app-name
It was suggested that this is an execution-context problem, but I'm not sure how I would fix that if it were the case.
So what's going on? Any help or suggestions would be appreciated.

You can keep your environment variables with sudo by using the -E switch:
From the manual:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.
Example:
$ export APPLICATION_VERSION=1.0.130
$ export APPLICATION_NAME=app-name
Check the variables:
$ sudo -E env | grep APP
and you should get the output:
APPLICATION_NAME=app-name
APPLICATION_VERSION=1.0.130
Also, if you want to keep the variables permanently, you can add this to the /etc/sudoers file:
Defaults env_keep += "APPLICATION_NAME APPLICATION_VERSION"
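Applied to the launch command from the question, the -E variant would look something like this (a sketch, assuming the sudoers policy permits preserving the environment; the pid-file name stands in for the #{NAME} interpolation and the -O option is omitted):
cd /path/to/app/root && exec sudo -E rackup --port 80 --host 0.0.0.0 --pid /var/run/app-name.pid -D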

Related

Variables are cleared after using a sudo command in shell script

I am writing a script for deployment. For that I need to log in and then do the procedure. I have logged in successfully and am trying to become a sudo user. But after doing that, all the variables stored in the script are cleared if I use them after the sudo command. If I use them before the sudo command, I can see their values.
#!/bin/bash
proj=$1 # set from another script; value is lvtools
echo variables are: "${proj}" # proj has a value here
sudo -Hiu lvadmin
ls
path=/home/lvadmin/lvsvnprojects/QAUat/"${proj}" # path formed incorrectly because proj is empty
echo path after admin is: "${proj}" # value is EMPTY
cd $path
ls
If the code works correctly, it should change directory to the specified location.
Firstly, you need to export your var:
export proj=$1
Then you can use the -E flag with the sudo command, if you are allowed to:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.
Your code should then look like:
#!/bin/bash
export PROJ=$1 # set from another script; value is lvtools
echo variables are: "${PROJ}" # PROJ has a value here
sudo -EHiu lvadmin
ls
path=/home/lvadmin/lvsvnprojects/QAUat/"${PROJ}" # path is now formed correctly because PROJ is exported
echo path after admin is: "${PROJ}" # value is not empty
cd $path
ls
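An alternative sketch that avoids relying on environment preservation entirely: expand the variable in the calling shell and pass the whole privileged part to sudo as one command (the target path is taken from the script above):
# "$proj" is expanded by the calling shell before sudo starts the new session
sudo -Hu lvadmin bash -c "cd /home/lvadmin/lvsvnprojects/QAUat/$proj && ls"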

Self-hosted environment variables not available to GitHub Actions

When running Github actions on a self hosted runner machine, how do I access existing custom environment variables that have been set on the machine, in my Github action .yaml script?
I have set those variables and restarted the runner virtual machine several times, but they are not accessible using the $VAR syntax in my script.
If you want to set a variable only for one run, you can add an export command when you configure the self-hosted runner for the GitHub repository, before running the ./run.sh command:
Example (linux) with a TEST variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Add new variable
$ export TEST="MY_VALUE"
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable by using $TEST, and it will also appear when running env:
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $VAR
If you want to set a variable permanently, you can add a file under /etc/profile.d/<filename>.sh, as suggested by @frennky above, but you will also have to reload the shell so it is aware of the new env variables, each time, before running the ./run.sh command:
Example (linux) with a HTTP_PROXY variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Create new profile http_proxy.sh file
$ sudo touch /etc/profile.d/http_proxy.sh
# Update the http_proxy.sh file
$ sudo vi /etc/profile.d/http_proxy.sh
# Add this new line manually to the http_proxy.sh file
$ export HTTP_PROXY=http://my.proxy:8080
# Save the changes (:wq)
# Update the shell
$ bash
# Last step, run it!
$ ./run.sh
That way, you will also be able to access the variable by using $HTTP_PROXY, and it will also appear when running env, the same way as above.
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $HTTP_PROXY
    - run: |
        cd $HOME
        pwd
        cd ../..
        cat etc/profile.d/http_proxy.sh
The /etc/profile.d/<filename>.sh file will persist, but remember that you will have to reload the shell each time you want to start the runner, before executing the ./run.sh command. At least that is how it worked with the EC2 instance I used for this test.
Inside the application directory of the runner, there is a .env file, where you can put all variables for jobs running on this runner instance.
For example
LANG=en_US.UTF-8
TEST_VAR=Test!
Every time .env changes, restart the runner (assuming it is running as a service):
sudo ./svc.sh stop
sudo ./svc.sh start
Test by printing the variable
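For example, with the TEST_VAR entry above, a step like this (a sketch following the earlier job snippets) should print the value:
job:
  runs-on: self-hosted
  steps:
    - run: echo $TEST_VAR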

How do we access variables set inside the tox environment in another block of the tox config?

I am using tox-docker and it sets POSTGRES_5432_TCP_PORT as an environment variable. How do I access this env variable later? I want to do this because I have to pass it to the pytest command.
[tox]
skipsdist = True
envlist = py37-django22
[testenv]
docker = postgres:9
dockerenv =
    POSTGRES_USER=asd
    POSTGRES_DB=asd
    POSTGRES_PASSWORD=asd
setenv =
    PYTHONDONTWRITEBYTECODE=1
    DJANGO_SETTINGS_MODULE=app.settings.base
deps =
    -rrequirements.txt
    -rrequirements_dev.txt
commands =
    env
    python -c "print('qweqwe', {env:POSTGRES_5432_TCP_PORT:'default_port'})"
    pytest -sv --postgresql-port={env:POSTGRES_5432_TCP_PORT:} --cov-report html --cov-report term --cov=app -l --tb=long {posargs} --junitxml=junit/test-results.xml
Here, POSTGRES_5432_TCP_PORT is set by tox-docker, but when I try to access it inside tox it is not available. Yet when I execute the env command inside tox, it prints the variable.
py37-django22 docker: run 'postgres:9'
py37-django22 run-test-pre: PYTHONHASHSEED='480168593'
py37-django22 run-test: commands[0] | env
PATH=
TOX_WORK_DIR=src/.tox
HTTPS_PROXY=http://0000:8000
LANG=C
HTTP_PROXY=http://0000:8000
PYTHONDONTWRITEBYTECODE=1
DJANGO_SETTINGS_MODULE=app.settings.base
PYTHONHASHSEED=480168593
TOX_ENV_NAME=py37-django22
TOX_ENV_DIR=/.tox/py37-django22
POSTGRES_USER=swordfish
POSTGRES_DB=swordfish
POSTGRES_PASSWORD=swordfish
POSTGRES_HOST=172.17.0.1
POSTGRES_5432_TCP_PORT=32822
POSTGRES_5432_TCP=32822
VIRTUAL_ENV=.tox/py37-django22
py37-django22 run-test: commands[1] | python -c 'print('"'"'qweqwe'"'"', '"'"'default_port'"'"')'
qweqwe default_port
py37-django22 run-test: commands[2] | pytest -sv --postgresql-port= --cov-report html --cov-report term --cov=app -l --tb=long --junitxml=junit/test-results.xml
If a script sets an environment variable, that envvar is visible to that process only. If it exports the variable, it will be visible to whatever sub-shells that script may spawn. Once the script exits, all envvars set by the shell process and any child processes are gone, since they existed only in that memory space.
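A minimal shell illustration of that visibility rule (the variable names are made up):
FOO=1                               # set but not exported: visible to this shell only
export BAR=2                        # exported: inherited by child processes
bash -c 'echo "FOO=$FOO BAR=$BAR"'  # prints "FOO= BAR=2"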
Not sure what you're trying to do, and Docker is not my speciality, but 5432 is the default Postgres port. If you're trying to supply it to pytest, you could say:
POSTGRES_5432_TCP_PORT=5432 pytest <test_name>
Or something to that effect.

Accessing environment variable inside the postinst script of the debian package

I have made a debian package for automating the oozie installation. The postinst script, which is basically a shell script, runs after the package is installed. I want to access the environment variable inside this script. Where should I set the environment variables?
Depending on what you are actually trying to accomplish, the proper way to pass in information to the package script is with a Debconf variable.
Briefly, you add a debian/templates file something like this:
Template: oozie/secret
Type: string
Default: xyzzy
Description: Secret word for teleportation?
Configure the secret word which allows the player to teleport.
and change your postinst script to something like
#!/bin/sh -e
# Source debconf library.
. /usr/share/debconf/confmodule
db_input medium oozie/secret || true
db_go
# Check their answer.
db_get oozie/secret
instead_of_env=$RET
: do something with the variable
You can preseed the Debconf database with a value for oozie/secret before running the packaging script; then it will not prompt for the value. Simply do something like
debconf-set-selections <<<'oozie oozie/secret string plugh'
to preconfigure it with the value plugh.
See also http://www.fifi.org/doc/debconf-doc/tutorial.html
There is no way to guarantee that the installer runs in a particular environment, that dpkg is invoked by a particular user, or that it runs from an environment which can be manipulated by the user at all. Correct packaging requires robustness and predictability in these scenarios; also think about usability.
Add this to your postinst script:
#!/bin/sh -e
# ...
pid=$$
# walk up the process tree, looking for YOUR_EVAR in an ancestor's environment
while [ -z "$YOUR_EVAR" -a $pid != 1 ]; do
  ppid=`ps -oppid -p$pid|tail -1|awk '{print $1}'`
  env=`strings /proc/$ppid/environ`
  YOUR_EVAR=`echo "$env"|awk -F= '$1 == "YOUR_EVAR" { print $2; }'`
  pid=$ppid
done
# ... Do something with YOUR_EVAR if it was set.
Just export YOUR_EVAR=... before dpkg -i is run.
Not the recommended way, but it is compact, simple and exactly what the OP is asking for.
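For example (a sketch; the variable value and package filename are illustrative):
export YOUR_EVAR=some_value
sudo dpkg -i oozie-custom_1.0_all.deb   # postinst finds YOUR_EVAR by walking up the process tree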
Replying after a long time.
Actually, I was deploying the custom Oozie Debian package through dpkg as a sudo user.
So, to enable access to these environment variables, I had to make some changes to the /etc/sudoers file.
The change I made was adding each environment variable name to the file like this:
Defaults env_keep += "ENV_VAR_NAME"
and after this I was able to access these variables in the postinst script.
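A minimal sketch of that setup, assuming a variable named OOZIE_ENV (the name is illustrative):
# /etc/sudoers (edit with visudo):
Defaults env_keep += "OOZIE_ENV"
# then, in the shell doing the install:
export OOZIE_ENV=production
sudo dpkg -i oozie-custom_1.0_all.deb   # OOZIE_ENV is now visible inside postinst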

Setting path for whenever in cron so it can find ruby

My ruby is in /usr/local/bin. whenever can't find it, and setting PATH at the top of my cron file doesn't work either, I think because whenever is running the command inside of a new bash instance.
# this does not work
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin
# Begin Whenever generated tasks for: foo
0 * * * * /bin/bash -l -c 'cd /srv/foo/releases/20110429110637 && script/rails runner -e production '\''ActiveRecord::SessionStore::Session.destroy_recent(15)'\'''
# End Whenever generated tasks for: foo
How can I tell whenever where my ruby binary is? Making a symbolic link from /usr/bin seems messy to me, but I guess that might be the only option.
This question offers env :PATH, "..." in schedule.rb as a solution, but (a) I can't find any documentation of that feature anywhere in the docs, and (b) it doesn't seem to have solved the asker's problem (unfortunately it takes non-trivial turnaround time for me to just try it).
Update: actually it is at the bottom of this page; I'll try it now.
More info:
I can't modify the cron command because it's generated by whenever.
I verified that if I make a new bash shell with bash -l, /usr/bin/env finds ruby just fine.
I just tried the exact command from cron, starting with /bin/bash, from that user's command line, and it worked.
So this is very mysterious...
The solution is to put this in schedule.rb:
env :PATH, ENV['PATH']
Here's a little guide I put together on the topic.
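With that in place, the generated crontab carries the PATH that was active when whenever ran, roughly like this (a sketch; the exact PATH value depends on your environment):
PATH=/usr/local/bin:/usr/bin:/bin
0 * * * * /bin/bash -l -c 'cd /srv/foo/releases/20110429110637 && script/rails runner -e production '\''ActiveRecord::SessionStore::Session.destroy_recent(15)'\'''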
Rewrite your crontab as:
0 * * * * { PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin ; export PATH ;/bin/bash -l -c 'cd /srv/foo/releases/20110429110637 && script/rails runner -e production '\''ActiveRecord::SessionStore::Session.destroy_recent(15)'\''' ; }
Or you should try to figure out why your bash shell is not picking up the PATH=... that is almost certainly set in your .profile or .bash_profile.
I hope this helps.
As John Bachir pointed out, you can do it via env. But let me add more input. I am deploying on AWS OpsWorks. Unfortunately it does not have a Ruby version manager (RVM, rbenv, etc.) installed by default.
The first thing I needed to do was SSH into the instance and figure out which ruby I was using. This was easy enough by executing the which ruby command in a terminal.
$ which ruby
/usr/local/bin/ruby
Cron was using ruby located at /usr/bin/ruby. This needed to be changed.
In schedule.rb, I have:
set :env_path, ''
env :PATH, @env_path if @env_path.present?
Locally, env_path doesn't need to be set. For most users, the only thing to do is execute whenever like this:
bundle exec whenever --set 'environment=development' --update-crontab
On a staging / production environment, ruby may be installed elsewhere, so running this may be more appropriate:
bundle exec whenever --set 'environment=staging&env_path=/usr/local/bin' --update-crontab
You will need to replace /usr/local/bin with the output of echo $PATH.
In OpsWorks, however, I needed to create a custom Chef recipe that looked like this:
node[:deploy].each do |application, deploy|
  execute 'whenever' do
    user 'deploy'
    group 'nginx'
    cwd "#{deploy[:deploy_to]}/current"
    command "bundle exec whenever --set 'environment=#{deploy[:environment_variables][:RAILS_ENV]}&env_path=#{ENV['PATH']}' --update-crontab"
  end
end
I hope the information here is clear enough.
