Set environment variables in bash file calling a Matlab script - bash

I have the following bash file that launches some Matlab m-files (main.m and f.m, which are scripts) 4 times (4 tasks).
#$ -S /bin/bash
#$ -l h_vmem=4G
#$ -l tmem=4G
#$ -cwd
#$ -j y
#Run 4 tasks where each task has a different $SGE_TASK_ID ranging from 1 to 4
#$ -t 1-4
#$ -N example
date
hostname
#Output the Task ID
echo "Task ID is $SGE_TASK_ID"
/share/apps/[...]/matlab -nodisplay -nodesktop -nojvm -nosplash -r "main; ID = $SGE_TASK_ID; f; exit"
The f.m script uses the Gurobi toolbox and I have been told that in order for the file to execute properly I have to set the environment variable
GRB=/apps/[...].lic
where [...] contains the path.
I am a complete beginner at writing bash files and I apologise if my question is silly: where/how/what should I write in the batch file above to use the Gurobi toolbox?
I have googled how to set environment variables but I got confused between setting, exporting, and env. There are many similar questions in this forum but, since they apply to apparently differently structured batch files, I couldn't tell whether their answers could be tailored to my case as well.

Within your bash file, just add the following line before launching the matlab m-files:
export GRB="/apps/[...].lic"
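For context, this is where the line would sit in the batch file from the question (a sketch; the [...] placeholders are kept as in the original):
echo "Task ID is $SGE_TASK_ID"
# make the Gurobi licence visible to the matlab process and everything it spawns
export GRB="/apps/[...].lic"
/share/apps/[...]/matlab -nodisplay -nodesktop -nojvm -nosplash -r "main; ID = $SGE_TASK_ID; f; exit"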

Environment variables are owned by a process. A running process cannot change the environment of another running process. When a new process is created, the exported variables of the parent are set in the child by default; environment variables changed in the child cannot affect the parent.
GRB=/apps/[...].lic will set the variable GRB to a value in the bash process; it can be seen using echo "$GRB", for example. But this variable is not exported, which means that when matlab is called, the environment variable GRB will not be set in the matlab process. Using export GRB before calling matlab will make the variable exported to the matlab process.
There is also a syntax to set an environment variable for a new process without affecting the current bash process: GRB=/apps/[...].lic /share/apps/[...]/matlab ....
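A generic demonstration of that one-shot form (FOO and the command are purely illustrative):
FOO=bar bash -c 'echo "$FOO"'   # prints bar; FOO remains unset in the calling shell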
For further details, see man bash and search for /export and /^ENVIRONMENT.
Also compare the output of the following commands: set (a builtin, a bash "function", so no new process is created) and env (/usr/bin/env, a command, so a new process is created that sees only exported variables):
$ set
$ env
The first shows variables, whereas the second shows the environment, which is a subset of the first.
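A quick way to see the difference (a minimal sketch; FOO and BAR are illustrative names):
$ FOO=unexported
$ export BAR=exported
$ set | grep -E '^(FOO|BAR)='
BAR=exported
FOO=unexported
$ env | grep -E '^(FOO|BAR)='
BAR=exported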

Related

"set -m" command in shell script

I was going through a shell script where set -m was used.
My understanding is that set is used to set positional args.
For example: set -- SO community is very helpful. Now, if I do echo $1, I should get SO, and so on for $2, $3...
After checking the command with the help flag, I got "-m Job control is enabled."
My question is: what is the purpose of set -m in the following code?
set -m
(
  current_hash="some_ha54_one"
  new_hash=$(cat file.txt)
  # reinstall the requirements only if the stored hash differs from the file's contents
  if [ "$current_hash" != "$new_hash" ]; then
    pip install -r requirement.txt
  fi
  tmp="temp variable"
  export tmp
  bash some_bash_file.sh &
  wait
  bash some_other_bash_file.sh &
)
I understand (to the best of my knowledge) what is going on inside ( ), but what is the use of set -m?
"Job control" enables features like bg and fg; signal-handling and file-descriptor routing changes intended for human operators who might use them to bring background tasks into the foreground to provide them with input; and the ability to refer to background tasks by job number instead of PID. The script segment you showed doesn't use these features, so the set -m call is presumably pointless.
These features are meant for human users, not scripts, and so in scripts they're off by default. In general, code that attempts to use them in scripts is buggy and should be replaced with code that operates by PID. As an example, here is code that runs two scripts in parallel with each other and then collects the exit status of each when they're finished, without needing job control:
bash some_bash_file & some_pid=$!          # start the first script in the background, remembering its PID
bash some_other_file & some_other_pid=$!   # start the second script in the background
wait "$some_pid"; some_rc=$?               # block until the first script exits; capture its exit status
wait "$some_other_pid"; some_other_rc=$?   # likewise for the second

Problem handling environment variable when launching terminal from bash script

The following script gets called with an environment variable set.
I need to launch a terminal and, inside that terminal, read that variable from another script (script.sh).
xfce4-terminal -x sh -c \
"export VAR='${VAR}'
/home/usr/scripts/script.sh"
It works, but not when VAR has single quotes in it.
I also feel like there is a better way to pass an environment variable to the terminal, but I don't know how.
I really appreciate any kind of help, and I'm sorry for my English.
One of the intended features of the environment is that you can add to it, but you never remove things from it. Add VAR to the current environment, and it will be inherited by xfce4-terminal and any process started by that terminal.
export VAR
xfce4-terminal -x sh -c /home/usr/scripts/script.sh
If you don't want it in the current environment, only in the new terminal's, then use a pre-command assignment.
VAR="$VAR" xfce4-terminal -x sh -c /home/usr/scripts/script.sh
This avoids any fragile dynamic script construction like you are contending with.
Since xfce4-terminal appears not to fork a new process itself, I would pass the desired value as an argument to sh.
xfce4-terminal -x sh -c 'VAR="$1" /home/usr/scripts/script.sh' _ "$VAR"
The argument to -c is still a fixed string rather than one generated by interpolating the value of $VAR.
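To see why the interpolated version fails on single quotes, consider a hypothetical value:
VAR="it's broken"
# the constructed -c string becomes:
#   export VAR='it's broken'
# the quote inside the value closes the quoting early, leaving sh with a syntax error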

How to write a bash script to set global environment variable?

Recently I wrote a script which sets an environment variable, take a look:
#!/bin/bash
echo "Pass a path:"
read path
echo $path
defaultPath=/home/$(whoami)/Desktop
if [ -n "$path" ]; then
  export my_var=$path
else
  echo "Path is empty! Exporting default path ..."
  export my_var=$defaultPath
fi
echo "Exported path: $my_var"
It works just great, but the problem is that my_var is available only locally, I mean in the console window where I ran the script.
How to write a script which allow me to export global environment variable which can be seen everywhere?
Just run your shell script preceded by "." (dot space).
This causes the script to run its instructions in the original shell, so the variables still exist after the script finishes.
Ex:
$ cat setmyvar.sh
export myvar=exists
$ . ./setmyvar.sh
$ echo $myvar
exists
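Compare with running the script normally (assuming it has been made executable); the assignment happens in a child shell and does not survive:
$ ./setmyvar.sh
$ echo $myvar

The echo prints an empty line: myvar was set in the child process and disappeared with it.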
Each and every shell has its own environment. There's no Universal environment that will magically appear in all console windows. An environment variable created in one shell cannot be accessed in another shell.
It's even more restrictive. If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell.
If all of your shells need access to the same set of variables, you can create a startup file that will set them for you. In BASH this is done via the $HOME/.bash_profile file (or through $HOME/.profile if $HOME/.bash_profile doesn't exist) or through $HOME/.bashrc. Other shells have their own sets of startup files: one is used for logins, one is used for shells spawned without logins, and (as with bash) a third for non-interactive shells. See the manpage to learn exactly which startup scripts are used and in what order they're executed.
You can try using shared memory, but I believe that only works while processes are running, so even if you figured out a way to set a piece of shared memory, it would go away as soon as that command is finished. (I've rarely used shared memory except for named pipes). Otherwise, there's really no way to set an environment variable in one shell and have another shell automatically pick it up. You can try using named pipes or writing that environment variable to a file for other shells to pick it up.
Imagine the problems that could happen if someone could change the environment of my shell without my knowledge.
Actually I found a way to achieve this (which in my case was to use a bash script to set a number of security credentials).
I just call bash from inside the script, and the spawned shell now has the exported values:
export API_USERNAME=abc
export API_PASSWORD=bbbb
bash
Now calling the file using ~/.app-x-setup.sh will give me an interactive shell with those environment variables set up.
The following was extracted from the 2nd paragraph of David W.'s answer: "If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell."
If you need to let the parent shell access your new environment variables, just issue the following command in the parent shell:
source <your_subshell_script>
or using the shortcut
. <your_subshell_script>
You have to add the variable to your .profile, located at /home/$USER/.profile.
You can do that with this command:
echo 'export TEST="hi"' >> "$HOME/.profile"
Or by editing the file with emacs, for example.
If you want to set this variable for all users, you have to edit /etc/profile (as root).
There is no global environment, really, in UNIX.
Each process has an environment, originally inherited from the parent, but it is local to the process after the initial creation.
You can only modify your own, unless you go digging around in the process using a debugger.
Write it to a file, let's say ~/.myglobalvar, and read it from anywhere:
echo "$myglobal" > ~/.myglobalvar
Environment variables are always "local" to a process's execution; the export command allows you to set environment variables for sub-processes. You can look at .bashrc to set environment variables at the start of a bash shell. What you are trying to do seems not to be possible, as a process cannot modify (or access?) the environment variables of another process.
You can update the ~/.bashrc or ~/.bash_profile file which is used to initialize the environment.
Take a look at the loading behavior of your shell (explained in the manpage, usually referring to .XXXshrc or .profile). Some configuration files are loaded at login time of an interactive shell, some are loaded each time you run a shell. Placing your variable in the latter might result in the behavior you want, e.g. always having the variable set using that distinct shell (for example bash).
If you need to dynamically set and reference environment variables in shell scripts, there is a workaround. Judge for yourself whether it's worth doing, but here it is.
The strategy involves having a 'set' script which dynamically writes a 'load' script, which has code to set and export an environment variable. The 'load' script is then executed periodically by other scripts which need to reference the variable. BTW, the same strategy could be done by writing and reading a file instead of a variable.
Here's a quick example...
Set_Load_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
echo "#!/bin/bash" > $PROCESSING_SIGNAL_SCRIPT
echo "export PROCESSING_SIGNAL=$1" >> $PROCESSING_SIGNAL_SCRIPT
chmod ug+rwx $PROCESSING_SIGNAL_SCRIPT
Load_PROCESSING_SIGNAL.sh (this gets dynamically created when the above is run)
#!/bin/bash
export PROCESSING_SIGNAL=1
You can test this with
Test_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
N=1
LIM=100
while [ $N -le $LIM ]
do
  # DO WHATEVER LOOP PROCESSING IS NEEDED
  echo "N = $N"
  sleep 5
  N=$(( $N + 1 ))
  # CHECK PROCESSING_SIGNAL
  source $PROCESSING_SIGNAL_SCRIPT
  if [[ $PROCESSING_SIGNAL -eq 0 ]]; then
    # Write log info indicating that the signal to stop processing was detected
    # Write out all relevant info
    # Send an alert email of this too
    # Then exit
    echo "Detected PROCESSING_SIGNAL for all stop. Exiting..."
    exit 1
  fi
done
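To trigger the stop from another terminal while the test loop is running, regenerate the load script with the set script above:
./Set_Load_PROCESSING_SIGNAL.sh 0
# on its next iteration the loop sources Load_PROCESSING_SIGNAL.sh, sees 0, and exits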
A lazy script, kept in ~/.bin/SOURCED/ and meant to be sourced, to save and load data as flat files for the system:
[ ! -d ~/.megadata ] && mkdir ~/.megadata
function save_data {
  [ -z "$1" -o -z "$2" ] && echo 'save_data [:id:] [:data:]' && return
  local overwrite=${3-false}
  [ "$overwrite" = 'true' ] && echo "$2" > ~/.megadata/"$1" && return
  [ ! -f ~/.megadata/"$1" ] && echo "$2" > ~/.megadata/"$1" || echo 'ID TAKEN, set third param to true to overwrite'
}
save_data computer engine
cat ~/.megadata/computer
save_data computer engine
save_data computer megaengine true
function get_data {
  [ -z "$1" -o -f "$1" ] && echo 'get_data [:id:]' && return
  [ -f ~/.megadata/"$1" ] && cat ~/.megadata/"$1" || echo 'ID NOT FOUND'
}
get_data computer
Maybe a little off topic, but for when you really need to set a variable temporarily to execute some script and you ended up here looking for answers:
If you need to run a script with certain environment variables that you don't need to keep after execution, you could do something like this:
#!/usr/bin/env sh
# export the variable, announce it, then run whatever command was passed to this script
export XDEBUG_SESSION=$(hostname);echo "running with xdebug: $XDEBUG_SESSION";"$@"
In my example I just use XDEBUG_SESSION with a hostname, but you can set multiple variables; keep them separated with semicolons. Execution is as follows (assuming you called the script debug.sh and placed it in the same directory as your php script):
$ ./debug.sh php yourscript.php

Want to export environment variable from startup script to other shells

I'm working on an embedded system using Busybox as the shell. My startup script rcS exports a number of variables:
UBOOT_ENV="gatewayip netmask netdev ipaddr ethaddr eth1addr hostname nfsaddr"
for i in $UBOOT_ENV; do
  if [ -n "$i" ]; then
    export `fw_printenv $i`
  fi
done
which are then available to scripts called from this script, as I'd expect. What I want, however, is for these environment variables to be set in the environment in which some web server scripts are called. This is currently not the case. How do I make an environment variable available to any shell script called?
TY,
Fred
PS: my busybox is BusyBox v1.11.2 (2012-02-26 12:08:09 PST) built-in shell (msh)
Environment variables are only inherited by child processes of your script (and their child processes); you can't push them up to a parent process.
What you can do is write the variables to a file (as a shell script) which you can then include from wherever you like. Put source filename in /etc/profile and it will probably do what you want.
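A minimal sketch of that idea, assuming a writable /etc and relying on fw_printenv printing name=value lines as in the question (the /etc/uboot_env.sh path is hypothetical):
# in rcS: dump the U-Boot variables to a file as shell assignments
{
  for i in $UBOOT_ENV; do
    echo "export `fw_printenv $i`"
  done
} > /etc/uboot_env.sh

# any script that needs them (e.g. a web server CGI wrapper) can then run:
. /etc/uboot_env.sh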

Problem with bash script

I'm using this bash script:
for a in `sort -u $HADOOP_HOME/conf/slaves`; do
  rsync -e ssh -a "${HADOOP_HOME}/conf" ${a}:"${HADOOP_HOME}"
done
for a in `sort -u $HBASE_HOME/conf/regionservers`; do
  rsync -e ssh -a "${HBASE_HOME}/conf" ${a}:"${HBASE_HOME}"
done
When I call this script directly from the shell, there are no problems and it works fine. But when I call this script from another script, although the script does its job, I get this message at the end:
sort: open failed: /conf/slaves: No such file or directory
sort: open failed: /conf/regionservers: No such file or directory
I have set $HADOOP_HOME and $HBASE_HOME in /etc/profile and the script does the job right, but I don't understand why it prints this message at the end.
Are you sure it's doing it right? When you call this script from the shell, it is acting as an interactive shell, which reads and sources /etc/profile and ~/.bash_profile if it exists. When you call it from another script, it runs as non-interactive and won't source those files. If you want a non-interactive shell to source a file, you can do so by setting the BASH_ENV environment variable:
#!/bin/bash
export BASH_ENV=/etc/profile
./call/to/your/HADOOP/script.sh
Everything points to those variables not being defined when your script runs.
You should ensure that they are set for your script. Before the first loop, place the line:
echo "[${HADOOP_HOME}] [${HBASE_HOME}]"
and make sure that doesn't output "[] []" (or even one "[]").
Additionally, put a set -x line at the top of the script; this will print lines before executing them so you can see what's being done.
Keep in mind that some shells don't pass on environment variables to subshells unless you explicitly export them (setting them is not enough).
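For example, that last point is the difference between these two lines (the path is purely illustrative):
HADOOP_HOME=/opt/hadoop   # a plain shell variable: child processes won't see it
export HADOOP_HOME        # an environment variable: inherited by child processes such as your script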
