Accessing environment variable inside the postinst script of the debian package - bash

I have made a Debian package for automating the Oozie installation. The postinst script, which is basically a shell script, runs after the package is installed. I want to access environment variables inside this script. Where should I set the environment variables?

Depending on what you are actually trying to accomplish, the proper way to pass in information to the package script is with a Debconf variable.
Briefly, you add a debian/templates file something like this:
Template: oozie/secret
Type: string
Default: xyzzy
Description: Secret word for teleportation?
Configure the secret word which allows the player to teleport.
and change your postinst script to something like
#!/bin/sh -e
# Source debconf library.
. /usr/share/debconf/confmodule
db_input medium oozie/secret || true
db_go
# Check their answer.
db_get oozie/secret
instead_of_env=$RET
: do something with the variable
You can preseed the Debconf database with a value for oozie/secret before running the packaging script; then it will not prompt for the value. Simply do something like
debconf-set-selections <<<'oozie oozie/secret string plugh'
to preconfigure it with the value plugh.
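For example, a minimal non-interactive flow might look like this (a sketch; the .deb filename is hypothetical):
echo 'oozie oozie/secret string plugh' | debconf-set-selections
DEBIAN_FRONTEND=noninteractive dpkg -i oozie_1.0_all.deb   # postinst reads the preseeded value via db_get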
See also http://www.fifi.org/doc/debconf-doc/tutorial.html
There is no way to guarantee that the installer runs in a particular environment, that dpkg is invoked by a particular user, or that it runs from an environment which can be manipulated by the user at all. Correct packaging requires robustness and predictability in these scenarios; also think about usability.

Add this to your postinst script:
#!/bin/sh -e
# ...
pid=$$
# Walk up the process tree until YOUR_EVAR is found or we reach PID 1.
while [ -z "$YOUR_EVAR" ] && [ "$pid" != 1 ]; do
    ppid=$(ps -o ppid= -p "$pid" | awk '{print $1}')
    # Read the parent's environment and pull out YOUR_EVAR, if present.
    env=$(strings "/proc/$ppid/environ")
    YOUR_EVAR=$(echo "$env" | awk -F= '$1 == "YOUR_EVAR" { print $2 }')
    pid=$ppid
done
# ... Do something with YOUR_EVAR if it was set.
All you have to do is export YOUR_EVAR=... before dpkg -i is run.
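For example (a sketch; the .deb filename and the variable's value are made up):
export YOUR_EVAR=some_value
sudo dpkg -i oozie_1.0_all.deb   # postinst finds YOUR_EVAR in an ancestor's /proc/<pid>/environ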
Not the recommended way, but it is compact, simple, and exactly what the OP is asking for.

Replying after a long time.
Actually, I was deploying the custom Oozie Debian package through dpkg as a sudo user.
So, to enable access to these environment variables, I had to make some changes in the /etc/sudoers file.
The change I made was adding each environment variable name to the file as
Defaults env_keep += "ENV_VAR_NAME"
and after this I was able to access these variables in the postinst script.
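Putting it together, a sketch (OOZIE_HOME is just an example variable name, and the .deb filename is hypothetical):
# In /etc/sudoers (edit with visudo):
Defaults env_keep += "OOZIE_HOME"
# sudo now passes the variable through to dpkg, and the postinst can read it:
export OOZIE_HOME=/opt/oozie
sudo dpkg -i oozie_1.0_all.deb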

Related

Where can I store variables and values for current Unix user so that I can use them in SSH and scripts?

I have some variables I use quite frequently to configure and tweak my Ubuntu LAMP server stack but I'm getting tired of having to copy and paste the export command into my SSH window to register the variable and its value.
Essentially I would like to keep my variables and their values in a file inside the user's home directory, so that when I type a command into an SSH window or execute a bash script the variables can be easily used. I don't want to set any system-wide variables, as some of these variables are for setting passwords etc.
What's the easiest way of doing this?
UPDATE 1
So essentially I could store the variables and values in a file and then, each time I log into an SSH session, I source this file once to set up the variables?
cat <<"EOF" >> ~/my_variables
export foo='bar'
export hello="world"
EOF
ssh root@example.com
$ source ~/my_variables
$ echo "$foo"
bar
and then to call the variable from within a script I place source ~/my_variables at the top of the script?
#!/bin/bash
source ~/my_variables
echo "$hello"
Just add your export commands to a file and then run source <the-file> (or . <the-file> for non-bash shells) in your SSH session.
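If you want the file loaded automatically on every login instead of sourcing it by hand, you could add something like this to ~/.bashrc or ~/.bash_profile (a sketch, reusing the ~/my_variables file from the question):
# load personal variables if the file exists
[ -f ~/my_variables ] && . ~/my_variables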

Append to a remote environment variable for a command started via ssh on RO filesystem

I can run a Python script on a remote machine like this:
ssh -t <machine> python <script>
And I can also set environment variables this way:
ssh -t <machine> "PYTHONPATH=/my/special/folder python <script>"
I now want to append to the remote PYTHONPATH and tried
ssh -t <machine> 'PYTHONPATH=$PYTHONPATH:/my/special/folder python <script>'
But that doesn't work because $PYTHONPATH won't get evaluated on the remote machine.
There is a quite similar question on SuperUser, and the accepted answer wants me to create an environment file which gets interpreted by ssh, and another question which can be solved by creating and copying a script file which gets executed instead of python.
This is both awful and requires the target file system to be writable (which is not the case for me)!
Isn't there an elegant way to either pass environment variables via ssh or provide additional module paths to Python?
How about using /bin/sh -c '/usr/bin/env PYTHONPATH=$PYTHONPATH:/.../ python ...' as the remote command?
EDIT (re comments to prove this should do what it's supposed to given correct quoting):
bash-3.2$ export FOO=bar
bash-3.2$ /usr/bin/env FOO=$FOO:quux python -c 'import os;print(os.environ["FOO"])'
bar:quux
WFM here like this:
$ ssh host 'grep ~/.bashrc -e TEST'
export TEST="foo"
$ ssh host 'python -c '\''import os; print os.environ["TEST"]'\'
foo
$ ssh host 'TEST="$TEST:bar" python -c '\''import os; print os.environ["TEST"]'\'
foo:bar
Note:
- single quotes around the entire command, to avoid expanding it locally
- embedded single quotes are therefore escaped with the '\'' pattern (another way is '"'"')
- double quotes in the assignment (only required if the value contains whitespace, but it's good practice not to depend on that, especially if the value is outside your control)
- avoid $VAR inside the command itself: if I typed e.g. echo "$TEST", the remote shell would expand it before the per-command assignment takes effect
A convenient way around this is to make the variable assignment a separate command:
$ ssh host 'export TEST="$TEST:bar"; echo "$TEST"'
foo:bar
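Applying the same quoting to the original PYTHONPATH question (a sketch; <machine> and <script> are the placeholders from the question):
$ ssh -t <machine> 'PYTHONPATH="$PYTHONPATH:/my/special/folder" python <script>'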

Ansible doesn't load ~/.profile

I'm wondering why Ansible doesn't source the ~/.profile file before executing the template module on a host.
The remote host's ~/.profile:
export ENV_VAR=/usr/users/toto
A single Ansible task:
- template: src=file1.template dest={{ ansible_env.ENV_VAR }}/file1
Ansible fail with:
fatal: [distant-host] => One or more undefined variables: 'dict object' has no attribute 'ENV_VAR'
Ansible does not run remote tasks (command, shell, ...) in an interactive or login shell. It's the same as when you execute a command remotely via ssh user@host "which python".
Sourcing ~/.bashrc often won't work, because the Ansible shell is not interactive and the default ~/.bashrc implementation ignores non-interactive shells (check its first lines).
The best solution I found for executing commands as if the user had just logged in interactively over SSH is:
- hosts: all
  tasks:
    - name: source user profile file
      #become: yes
      #become_user: my_user # in case you want to become a different user (make sure the acl package is installed)
      shell: bash -ilc 'which python' # example command which prints the path to python
      register: which_python
    - debug:
        var: which_python
bash '-i' means interactive shell, so .bashrc won't be ignored.
'-l' means login shell, which sources the full user profile (/etc/profile and ~/.bash_profile, or ~/.profile - see the bash manual page for more details).
Explanation of my example: my ~/.bashrc sets a specific python from Anaconda installed under that user.
Ansible is not running tasks in an interactive shell on the remote host. Michael DeHaan has answered this question on github some time ago:
The uber-basic description is ansible isn't really doing things through the shell, it's transferring modules and executing scripts that it transfers, not using a login shell.
i.e. Why does an SSH remote command get fewer environment variables than when run manually?
It's not a continuous shell environment basically, nor is it logging in and typing commands and things.
You should see the same result (undefined variable) by running this:
ssh <host> echo $ENV_VAR
In a lot of places I've used the structure below:
- name: Task Name
  shell: ". /path/to/profile;command"
When Ansible escalates privileges via sudo, it doesn't invoke the login shell of the sudo user.
We need to change the way we call sudo, e.g. by invoking it with the -i and -H flags:
"sudo_flags=-H" in your ansible.cfg file
If you can run as root, you can use runuser.
- shell: runuser -l '{{ install_user }}' -c "{{ cmd }}"
This effectively runs the command as install_user in a fresh login shell, as if you had used su - install_user (which loads the profile, though it might be .bash_profile and not .profile...) and then executed cmd.
I'd try not to run everything as root just so you can run it as someone else, though...
If you can modify the configuration of your target host and don't want to change your Ansible YAML code, you can try this:
add the variable ENV_VAR=/usr/users/toto to the /etc/environment file rather than ~/.profile.
shell: "bash -lc 'scala -version'"
Using bash -l allows Ansible to load the corresponding bash_profile.
Note: bash '-i' (an interactive shell) won't allow Ansible to run the other tasks.
add the variable ENV_VAR=/usr/users/toto into /etc/environment file rather than ~/.profile.
You really can use /etc/environment, but only if a variable has a static value. If a variable gets its value from another variable (or a command substitution), it doesn't work. For example, if we put this line into /etc/environment:
XDG_RUNTIME_DIR=/run/user/$(id -u)
Ansible sees literally XDG_RUNTIME_DIR=/run/user/$(id -u), not XDG_RUNTIME_DIR=/run/user/1012.
And if we put this line into ~/.bash_profile or ~/.bashrc:
export XDG_RUNTIME_DIR=/run/user/$(id -u)
the user sees XDG_RUNTIME_DIR=/run/user/1012 (if the user's id is 1012) when working manually, but Ansible doesn't get the XDG_RUNTIME_DIR variable at all.

sinatra app can't find environmental variable but test script can

I'm using the presence of an environment variable to determine if my app is deployed or not (as opposed to running on my local machine).
My test script can find and display the variable's value, but according to my app the variable isn't present.
test.rb
Secret_Key_Path = ENV['APPLICATION_VERSION'] ? '/path/to/encrypted_data_bag_secret' : File.expand_path('~/different/path/to/encrypted_data_bag_secret')
puts ENV['APPLICATION_VERSION']
puts Secret_Key_Path
puts File.exists? Secret_Key_Path
info.rb (the relevant bit)
::Secret_Key_Path = ENV['APPLICATION_VERSION'] ? '/path/to/encrypted_data_bag_secret' : File.expand_path('~/different/path/to/encrypted_data_bag_secret')
If I log the value of Secret_Key_Path, it logs the value I don't expect (i.e. '~/different/path/to/encrypted_data_bag_secret' instead of '/path/to/encrypted_data_bag_secret').
Here's how I start my app (from inside my main executable script, so I can just run app install from anywhere instead of having to go to the folder):
exec "(cd /path/to/app/root && exec sudo rackup --port #{80} --host #{'0.0.0.0'} --pid /var/run/#{NAME}.pid -O NAME[#{NAME}] -D)"
if I do env | grep APP I get:
APPLICATION_VERSION=1.0.130
APPLICATION_NAME=app-name
It was suggested that it's an execution context problem, but I'm not sure how to fix it if that's the case.
So what's going on? Any help and suggestions would be appreciated.
You can keep your environment variables with sudo by using the -E switch:
From the manual:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.
Example:
$ export APPLICATION_VERSION=1.0.130
$ export APPLICATION_NAME=app-name
Check the variables:
$ sudo -E env | grep APP
and you should get the output:
APPLICATION_NAME=app-name
APPLICATION_VERSION=1.0.130
Also, if you want the variables to be kept permanently, you can add this to the /etc/sudoers file:
Defaults env_keep += "APPLICATION_NAME APPLICATION_VERSION"
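Applied to the launch command from the question, the fix is essentially just adding -E to sudo (a sketch of the underlying shell command, with the Ruby interpolation and paths simplified):
cd /path/to/app/root && sudo -E rackup --port 80 --host 0.0.0.0 --pid /var/run/app-name.pid -D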

Source environment variables and execute bash before running local script on remote machine [duplicate]

This question already has answers here:
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Closed 6 years ago.
I'm trying to execute a local script on a remote machine over an SSH connection. I've read a document about the syntax for this. But my issue is that, before running the script, I need to start bash and source the environment variables.
This looks appropriate to me, but it doesn't include a source command:
ssh [user]#[server] 'bash -s' < [local_script]
I've tried something like this with a heredoc (EOF), but it didn't work for me either:
#!/bin/bash
/usr/bin/ssh "$user@$host" <<EOF
bash -s
source /dir/to/profile/.profile
source /dir/to/env/set/env.sh
/path/to/script/script.sh stop
EOF
Do you have an idea for this type of remote command implementation? I have to source the profile before the environment settings, otherwise it throws an exception. But the main problem is with source.
Maybe it's an easy question, but I don't have any ideas. Thank you in advance for all your answers.
eval can accomplish this for you:
eval $(cat /path/to/environment) ./script.sh
You can source multiple files this way too, if you know their paths:
eval $(cat /path/to/environment1 /path/to/environment2) ./script.sh
Or iterate over a directory:
eval $(cat $(find /path/to/environments -type f)) ./script.sh
Stick SSH in front of it if you're doing this remotely to solve your specific problem:
# note the single quotes, otherwise we'd expand and source our local environment
ssh user@host 'eval $(cat /path/to/environment) ./remote_script.sh'
# If it's a local environment file you want to source, then do the same
# command with double quotes instead:
ssh user@host "eval $(cat /path/to/environment)" ./remote_script.sh
If you want to source a remote environment into your own, then use eval locally like so:
eval "$(ssh user@host cat /path/to/environment)" ./local_script.sh
This allows you to source an external file, setting its environment variables in the same forked instance that calls your script (making them available to it).
Consider a script file that looks like this:
#!/bin/sh
echo "$VAR1"
echo "$VAR2"
test_function
Now suppose your environment file looks like this:
# Environment Variables
VAR1=foo
VAR2=bar
test_function()
{
    echo "hello world"
}
You'd see this output if you use the eval example:
foo
bar
hello world
Alternatively, if you just open up the script you wrote, you can source these environment variables directly from within it, and then you can just call the script normally without any tricks:
#!/bin/sh
# Source our environment by starting with period an then following
# through with the full path to the environment file. You can also use
# the 'source' keyword here too instead of the period (.).
. /path/to/environment
echo "$VAR1"
echo "$VAR2"
test_function
I know this is old, but I just wanted to add that it can be done without an extra file: use '\' to escape the variables and command substitutions so they are evaluated remotely rather than locally, i.e.:
ssh me@somehost "RMTENV=\$(ls /etc/profile) && source \$RMTENV"
I use this to execute remote Java commands that need the environment to find java.
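For example, the same pattern with the remote command appended (a sketch; the host and the java invocation are made up):
ssh me@somehost "RMTENV=\$(ls /etc/profile) && source \$RMTENV && java -version"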
I fixed the problem by writing another template script that sources the environment variables and runs the script:
PROFILE=/dir/to/profile/.profile
source $PROFILE
cd /dir/to/script
/bin/bash script $1
If you use the source command with bash shell, #!/bin/bash doesn't work for the source command.
