Using a bash shell inside MATLAB

I'm trying to put a large set of bash commands into a MATLAB script and manage my variables (file paths, parameters, etc.) from there. I also need this because the workflow requires manual intervention at certain steps, and I would like to use the step debugger for that.
The problem is that I don't understand how MATLAB interfaces with the bash shell.
I can't do system('source .bash_profile') to define my bash variables. Similarly, I can't define them by hand and read them back: system('export var=somepath') followed by system('echo $var') returns nothing.
What is the correct way of defining variables in bash inside matlab's command window? How can I construct a workflow of commands which will use the variables I defined as well as those in my .bash_profile?

If all you need to do is set environment variables, do this in MATLAB:
>> setenv('var','somepath')
>> system('echo $var')

Invoke Bash as a login shell to get your ~/.bash_profile sourced and use the -c option to execute a group of shell commands in one go.
# in Terminal.app
man bash | less -p 'the --login option'
man bash | less -p '-c string'
echo 'export profilevar=myProfileVar' >> ~/.bash_profile
# test in Terminal.app
/bin/bash --login -c '
echo "$0"
echo "$3"
echo "$#"
export var=somepath
echo "$var"
echo "$profilevar"
ps
export | nl
' zero 1 2 3 4 5
# in Matlab
cmd=sprintf('/bin/bash --login -c ''echo "$profilevar"; ps''');
[r,s]=system(cmd);
disp(s);

Related

Definitively determine if currently running shell is bash or zsh

How can I definitively determine if the currently running shell is bash or zsh?
(being able to disambiguate between additional shells is a bonus, but only bash & zsh are 100% necessary)
I've seen a few ways to supposedly do this, but they all have problems (see below).
The best I can think of is to run some syntax that will work on one and not the other, and to then check the errors / outputs to see which shell is running. If this is the best solution, what command would be best for this test?
The simplest solution would be if every shell included a read-only parameter of the same name that identified the shell. If this exists, however, I haven't heard of it.
Non-definitive ways to determine the currently running shell:
# default shell, not current shell
basename "${SHELL}"
# current script rather than current shell
basename "${0}"
# BASH_VERSINFO could be defined in any shell, including zsh
if [ -z "${BASH_VERSINFO+x}" ]; then
echo 'zsh'
else
echo 'bash'
fi
# executable could have been renamed; ps isn't a builtin
shell_name="$(ps -o comm= -p $$)"
echo "${shell_name##*[[:cntrl:][:punct:][:space:]]}"
# scripts can be sourced / run by any shell regardless of shebang
# shebang parsing
At the $ prompt, run:
echo $0
but you can't use $0 within a script, because there $0 expands to the script's own name.
To find the shell that is actually running a script (say one whose shebang / magic number line is #!/bin/bash):
#!/bin/bash
echo "Script is: $0 running using $$ PID"
echo "Current shell used within the script is: `readlink /proc/$$/exe`"
script_shell="$(readlink /proc/$$/exe | sed "s/.*\///")"
echo -e "\nSHELL is = ${script_shell}\n"
if [[ "${script_shell}" == "bash" ]]
then
echo -e "\nI'm BASH\n"
fi
Outputs:
Script is: /tmp/2.sh running using 9808 PID
Current shell used within the script is: /usr/bin/bash
SHELL is = bash
I'm BASH
This also works if the shebang is #!/bin/zsh; in that case the output is:
SHELL is = zsh
While there is no 100% foolproof way to achieve it, it might help to do a
echo $BASH_VERSION
echo $ZSH_VERSION
Both are shell variables (not environment variables), which are set by the respective shell. In the respective other shell, they are empty.
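A minimal check built on these two variables might look like this (just a sketch, subject to the caveat described next):
if [ -n "$BASH_VERSION" ]; then
  echo bash
elif [ -n "$ZSH_VERSION" ]; then
  echo zsh
else
  echo "neither bash nor zsh"
fi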
Of course, if someone deliberately creates a variable with this name, or exports such a variable and then starts a child shell of the other kind, e.g.
# We are in bash here
export BASH_VERSION
zsh # the child shell will see BASH_VERSION even though it is zsh
this approach will fail; but if someone is really doing such a thing, they are presumably trying to sabotage your code on purpose.
This should work for most Linux systems:
cat /proc/$$/comm
Quick and easy.
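For example, from an interactive bash session (the output simply reflects whichever shell is actually running):
$ cat /proc/$$/comm
bash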
Working from comments by @ruakh & @oguzismail, I think I have a solution.
\shopt -u lastpipe 2> /dev/null   # backslash bypasses any alias; zsh has no shopt, so silence its error
shell_name='bash'; : | shell_name='zsh'   # zsh runs the last pipeline command in the current shell, bash in a subshell
echo "$shell_name"   # prints 'zsh' under zsh, 'bash' under bash (with lastpipe unset)

Saving the result of an echo command in a shell script?

I am attempting to store the result of an echo command as a variable to be used in a shell script. Debian 4.19.0-6-amd64
The command works in terminal: echo $HOSTNAME returns debian-base, the correct hostname.
I attempt to run it in a shell script, such as:
#!/usr/bin/bash
CURRENT_HOSTNAME=`echo $HOSTNAME`
echo $CURRENT_HOSTNAME
I have tried expansion:
CURRENT_HOSTNAME=$(echo $HOSTNAME)
And just to cover some more bases, I tried things like:
CURRENT_HOSTNAME=$HOSTNAME
# or
CURRENT_HOSTNAME="$HOSTNAME"
# also, in case a problem with reserved names:
test=$HOSTNAME
test="$HOSTNAME"
Works great in the terminal! Output is as follows:
root@debian-base:/scripts# echo $HOSTNAME
debian-base
root@debian-base:/scripts# TEST_HOSTNAME=$HOSTNAME
root@debian-base:/scripts# echo $TEST_HOSTNAME
debian-base
root@debian-base:/scripts# TEST_TWO_HOSTNAME=$(echo $HOSTNAME)
root@debian-base:/scripts# echo $TEST_TWO_HOSTNAME
debian-base
As soon as I run the script (as above):
root@debian-base:/scripts# sh test.sh
root@debian-base:/scripts#
What am I doing wrong?
You are using bash as your terminal. Bash has the variable $HOSTNAME set. You run your script with sh. sh does not have a $HOSTNAME.
Options:
bash test.sh
Or run it as a program:
chmod +x test.sh
./test.sh
But I think you need to change your first line to:
#!/bin/bash
since bash is usually not installed in /usr/bin. To find out where bash is installed on your system, use which bash.
Another option is to use the hostname binary:
CURRENT_HOSTNAME=$(hostname)
echo $CURRENT_HOSTNAME
Which works in both bash and sh.
You can start sh by just running sh; you will get a bash-like prompt. If you try echo $HOSTNAME there, nothing is printed, because the variable is not set. You can use set to see all the variables that do exist (sh has no tab completion, so this is the easiest way to check).
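A quick way to see the difference from the prompt (hostname taken from the question; the sh line prints nothing after the colon because dash neither sets HOSTNAME nor inherits it, since bash does not export it by default):
$ bash -c 'echo "bash sees: $HOSTNAME"'
bash sees: debian-base
$ sh -c 'echo "sh sees: $HOSTNAME"'
sh sees: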

How to set $TERM to a value when running /bin/bash via command line?

When I run the /bin/bash process with 2 parameters -c and SomeUserInput,
where SomeUserInput is echo $TERM
The output is
xterm-256color
Is there a way I can set the value of $TERM via a command-line parameter to /bin/bash so that the above invocation of echo $TERM would print something else that I specify?
(Yes, I've done a lot of digging in man bash and searching elsewhere, but couldn't find the answer; although I think it's likely there.)
First of all, since you used double quotes, that prints the value of TERM in your current shell, not the bash you invoke. To do that, use /bin/bash -c 'echo $TERM'.
To set the value of TERM, you can export TERM=linux before running that command, or set it only for that invocation with either TERM=linux /bin/bash -c 'echo $TERM' (shell syntax) or /usr/bin/env TERM=linux /bin/bash -c 'echo $TERM' (execve-compatible, e.g. for find -exec).
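To illustrate the quoting difference (linux is an arbitrary value; the first output matches the question's current terminal):
$ /bin/bash -c "echo $TERM"              # double quotes: expanded by the current shell
xterm-256color
$ TERM=linux /bin/bash -c 'echo $TERM'   # single quotes: expanded by the new bash
linux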
Update:
As for your edit of only using command line parameters to /bin/bash, you can do that without modifying your input like this:
/bin/bash -c 'TERM=something; eval "$1"' -- 'SomeUserInput'
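Plugging in the input from the question (echo $TERM) as the positional parameter, a quick sketch of what this should print (linux is again an arbitrary value):
$ /bin/bash -c 'TERM=linux; eval "$1"' -- 'echo $TERM'
linux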
Well, you can either set the variable in your .bashrc file, or simply set it in the bash invocation (note the single quotes, so that $TERM is expanded by the new shell rather than the current one):
/bin/bash -c 'TERM=something-else; echo $TERM'

How to invoke bash, run commands inside the new shell, and then give control back to user?

This must either be really simple or really complex, but I couldn't find anything about it... I am trying to open a new bash instance, then run a few commands inside it, and give the control back to the user inside that same instance.
I tried:
$ bash -lic "some_command"
but this executes some_command inside the new instance, then closes it. I want it to stay open.
One more detail which might affect answers: if I can get this to work I will use it in my .bashrc as alias(es), so bonus points for an alias implementation!
bash --rcfile <(echo '. ~/.bashrc; some_command')
dispenses with the creation of temporary files. Similar questions on other sites:
https://serverfault.com/questions/368054/run-an-interactive-bash-subshell-with-initial-commands-without-returning-to-the
https://unix.stackexchange.com/questions/123103/how-to-keep-bash-running-after-command-execution
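Since the question asks for an alias, one way to wrap this in ~/.bashrc might be (the alias name is arbitrary and some_command is the placeholder from the question):
# in ~/.bashrc
alias newshell='bash --rcfile <(echo ". ~/.bashrc; some_command")'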
This is a late answer, but I had the exact same problem and Google sent me to this page, so for completeness here is how I got around the problem.
As far as I can tell, bash does not have an option to do what the original poster wanted to do. The -c option will always return after the commands have been executed.
Broken solution: The simplest and obvious attempt around this is:
bash -c 'XXXX ; bash'
This partly works (albeit with an extra sub-shell layer). However, the problem is that while a sub-shell will inherit the exported environment variables, aliases and functions are not inherited. So this might work for some things but isn't a general solution.
Better: The way around this is to dynamically create a startup file and call bash with this new initialization file, making sure that your new init file calls your regular ~/.bashrc if necessary.
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
echo "source ~/.bashrc" > $TMPFILE
echo "<other commands>" >> $TMPFILE
echo "rm -f $TMPFILE" >> $TMPFILE
# Start the new bash shell
bash --rcfile $TMPFILE
The nice thing is that the temporary init file will delete itself as soon as it is used, reducing the risk that it is not cleaned up correctly.
Note: I'm not sure if /etc/bashrc is usually called as part of a normal non-login shell. If so you might want to source /etc/bashrc as well as your ~/.bashrc.
You can pass --rcfile to Bash to cause it to read a file of your choice. This file will be read instead of your .bashrc. (If that's a problem, source ~/.bashrc from the other script.)
Edit: So a function to start a new shell with the stuff from ~/.more.sh would look something like:
more() { bash --rcfile ~/.more.sh ; }
... and in .more.sh you would have the commands you want to execute when the shell starts. (I suppose it would be elegant to avoid a separate startup file -- you cannot use standard input because then the shell will not be interactive, but you could create a startup file from a here document in a temporary location, then read it.)
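For illustration, ~/.more.sh might contain something like the following (the contents are placeholders; sourcing ~/.bashrc keeps the normal interactive setup):
# ~/.more.sh -- read by 'bash --rcfile ~/.more.sh' instead of ~/.bashrc
. ~/.bashrc        # keep the usual interactive configuration
cd ~/projects      # example startup commands
ls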
bash -c '<some command> ; exec /bin/bash'
will avoid an additional sub-shell layer, since exec replaces the shell started by -c.
You can get the functionality you want by sourcing the script instead of running it. eg:
$ cat script
cmd1
cmd2
$ . script
$ # at this point cmd1 and cmd2 have been run inside this shell
Append to ~/.bashrc a section like this:
if [ "$subshell" = 'true' ]
then
# commands to execute only on a subshell
date
fi
alias sub='subshell=true bash'
Then you can start the subshell with sub.
The accepted answer is really helpful! Just to add that process substitution (i.e., <(COMMAND)) is not supported in some shells (e.g., dash).
In my case, I was trying to create a custom action (basically a one-line shell script) in Thunar file manager to start a shell and activate the selected Python virtual environment. My first attempt was:
urxvt -e bash --rcfile <(echo ". $HOME/.bashrc; . %f/bin/activate;")
where %f is the path to the virtual environment handled by Thunar.
I got an error (by running Thunar from command line):
/bin/sh: 1: Syntax error: "(" unexpected
Then I realized that my sh (essentially dash) does not support process substitution.
My solution was to invoke bash at the top level to interpret the process substitution, at the expense of an extra level of shell:
bash -c 'urxvt -e bash --rcfile <(echo "source $HOME/.bashrc; source %f/bin/activate;")'
Alternatively, I tried to use here-document for dash but with no success. Something like:
echo -e " <<EOF\n. $HOME/.bashrc; . %f/bin/activate;\nEOF\n" | xargs -0 urxvt -e bash --rcfile
In accordance with the answer by daveraja, here is a bash script which will serve the purpose.
Consider a situation where you are using C shell and you want to execute a command
without leaving the C-shell context/window, as follows.
Command to be executed: search for the exact word 'Testing' recursively in the current directory, only in *.h and *.c files
grep -nrs --color -w --include="*.{h,c}" Testing ./
Solution 1: Enter into bash from C-shell and execute the command
bash
grep -nrs --color -w --include="*.{h,c}" Testing ./
exit
Solution 2: Write the intended command into a text file and execute it using bash
echo 'grep -nrs --color -w --include="*.{h,c}" Testing ./' > tmp_file.txt
bash tmp_file.txt
Solution 3: Run command on the same line using bash
bash -c 'grep -nrs --color -w --include="*.{h,c}" Testing ./'
Solution 4: Create a script (once) and use it for all future commands
alias ebash './execute_command_on_bash.sh'
ebash grep -nrs --color -w --include="*.{h,c}" Testing ./
The script is as follows,
#!/bin/bash
# =========================================================================
# References:
# https://stackoverflow.com/a/13343457/5409274
# https://stackoverflow.com/a/26733366/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://www.linuxquestions.org/questions/other-%2Anix-55/how-can-i-run-a-command-on-another-shell-without-changing-the-current-shell-794580/
# https://www.tldp.org/LDP/abs/html/internalvariables.html
# https://stackoverflow.com/a/4277753/5409274
# =========================================================================
# Enable the following line to see the script commands
# printed as they execute. This helps with debugging.
#set -o verbose
E_BADARGS=85
if [ ! -n "$1" ]
then
echo "Usage: `basename $0` grep -nrs --color -w --include=\"*.{h,c}\" Testing ."
echo "Usage: `basename $0` find . -name \"*.txt\""
exit $E_BADARGS
fi
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
#echo "echo Hello World...." >> $TMPFILE
#initialize the variable that will contain the whole argument string
argList=""
#iterate on each argument
for arg in "$@"
do
#if an argument contains a white space, enclose it in double quotes and append to the list
#otherwise simply append the argument to the list
if echo $arg | grep -q " "; then
argList="$argList \"$arg\""
else
argList="$argList $arg"
fi
done
#remove a possible leading space at the beginning of the list
argList=$(echo $argList | sed 's/^ *//')
# Echoing the command to be executed to tmp file
echo "$argList" >> $TMPFILE
# Note: This should be your last command
# Important last command which deletes the tmp file
last_command="rm -f $TMPFILE"
echo "$last_command" >> $TMPFILE
#echo "---------------------------------------------"
#echo "TMPFILE is $TMPFILE as follows"
#cat $TMPFILE
#echo "---------------------------------------------"
check_for_last_line=$(tail -n 1 $TMPFILE | grep -o "$last_command")
#echo $check_for_last_line
#if tail -n 1 $TMPFILE | grep -o "$last_command"
if [ "$check_for_last_line" == "$last_command" ]
then
#echo "Okay..."
bash $TMPFILE
exit 0
else
echo "Something is wrong"
echo "Last command in your tmp file should be removing itself"
echo "Aborting the process"
exit 1
fi
$ bash --init-file <(echo 'some_command')
$ bash --rcfile <(echo 'some_command')
In case you can't or don't want to use process substitution:
$ cat script
some_command
$ bash --init-file script
Another way:
$ bash -c 'some_command; exec bash'
$ sh -c 'some_command; exec sh'
sh-only way (dash, busybox):
$ ENV=script sh
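A rough sketch of the ENV approach (the file name init.sh and its contents are made up; POSIX shells such as dash and busybox sh read the file named by ENV when they start interactively):
$ cat init.sh
echo "initial commands ran"
$ ENV=init.sh sh
initial commands ran
$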
Here is yet another (working) variant:
This opens a new gnome terminal, then in the new terminal it runs bash. The user's rc file is read first, then a command ls -la is sent for execution to the new shell before it turns interactive.
The last echo adds an extra newline that is needed to finish execution.
gnome-terminal -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
I also find it useful sometimes to decorate the terminal, e.g. with color, for better orientation.
gnome-terminal --profile green -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'

Determining whether a shell script was executed by "sourcing" it

Is it possible for a shell script to test whether it was executed through source? That is, for example,
$ source myscript.sh
$ ./myscript.sh
Can myscript.sh distinguish from these different shell environments?
I think what Sam wants to do may not be possible.
To what degree a half-baked workaround is possible depends on...
...the default shell of users, and
...which alternative shells they are allowed to use.
If I understand Sam's requirement correctly, he wants to have a 'script',
myscript, that is...
...not directly executable via invoking it by its name myscript
(i.e. that has chmod a-x);
...not indirectly executable for users by invoking sh myscript or
invoking bash myscript
...only running its contained functions and commands if invoked by
sourcing it: . myscript
The first things to consider are these
Invoking a script directly by its name (myscript) requires a first line in
the script like #!/bin/bash or similar. This will directly determine which
installed instance of the bash executable (or symlink) will be invoked to run
the script's content. This will be a new shell process. It requires the
scriptfile itself to have the executable flag set.
Running a script by invoking a shell binary with the script's (path+)name as
an argument (sh myscript), is the same as '1.' -- except that the
executable flag does not need to be set, and said first line with the
hashbang isn't required either. The only thing needed is that the invoking
user needs read access to the scriptfile.
Invoking a script by sourcing its filename (. myscript) is very much the
same as '1.' -- except that it isn't a new shell that is invoked. All the
script's commands are executed in the current shell, using its environment
(and also "polluting" that environment with any new variables it may set or
change). Usually this is a very dangerous thing to do, but here it could be
used to execute exit $RETURNVALUE under certain conditions....
For '1.':
Easy to achieve: chmod a-x myscript will prevent myscript from being
directly executable. But this will not fulfill requirements '2.' and '3.'.
For '2.' and '3.':
Much harder to achieve. Invocations by sh myscript require read
privileges for the file. So an obvious way out would seem to be chmod a-r
myscript. However, this will also disallow '3.': you will not be able to
source the script either.
So what about writing the script in a way that uses a Bashism? A Bashism is a
specific way to do something which other shells do not understand: using
specific variables, commands etc. This could be used inside the script to
discover this condition and "do something" about it (like "display warning.txt",
"mailto admin" etc.). But there is no way in hell that this will prevent sh or
bash or any other shell from reading and trying to execute all the following
commands/lines written into the script unless you kill the shell by invoking
exit.
Examples: in Bash, the environment seen by the script knows of $BASH,
$BASH_ARGV, $BASH_COMMAND, $BASH_SUBSHELL, BASH_EXECUTION_STRING... . If
invoked by sh (also if sourced inside a sh), the executing shell will see
all these $BASH_* as empty environment variables. Again, this could be used
inside the script to discover this condition and "do something"... but not
prevent the following commands from being invoked!
I'm now assuming that...
...the script is using #!/bin/bash as its first line,
...users have set Bash as their shell and are invoking commands in the
following table from Bash and it is their login shell,
...sh is available and it is a symlink to bash or dash.
This means the following invocations are possible, with the listed values
for environment variables
vars+invok's   | ./scriptname | sh scriptname | bash scriptname | . scriptname
---------------+--------------+---------------+-----------------+-------------
$0             | ./scriptname | ./scriptname  | ./scriptname    | -bash
$SHLVL         | 2            | 1             | 2               | 1
$SHELLOPTS     | braceexpand: | (empty)       | braceexpand:..  | braceexpand:
$BASH          | /bin/bash    | (empty)       | /bin/bash       | /bin/bash
$BASH_ARGV     | (empty)      | (empty)       | (empty)         | scriptname
$BASH_SUBSHELL | 0            | (empty)       | 0               | 0
$SHELL         | /bin/bash    | /bin/bash     | /bin/bash       | /bin/bash
$OPTARG        | (empty)      | (empty)       | (empty)         | (empty)
Now you could put some logic into your script:
If $0 is not equal to -bash, then do an exit $SOMERETURNVALUE.
In case the script was called via sh myscript or bash myscript, it will
exit the shell that is running the script. In case it was sourced into the current shell, it will
continue to run. (Warning: if the script has any other exit statements,
your current shell will be 'killed'...)
So putting something like the following near the beginning of your non-executable
myscript.txt may get you close to your goal:
echo BASH=$BASH
test x${BASH} = x/bin/bash && echo "$? : FINE.... You're using 'bash ...'"
test x${BASH} = x/bin/bash || echo "$? : RATS !!! -- You're not using BASH and I will kick you out!"
test x${BASH} = x/bin/bash || exit 42
test x"${0}" = x"-bash" && echo "$? : FINE.... You've sourced me, and I'm your login shell."
test x"${0}" = x"-bash" || echo "$? : RATS !!! -- You've not sourced me (or I'm not your bash login shell) and I will kick you out!"
test x"${0}" = x"-bash" || exit 33
This may or may not be what the asker wanted but, in a similar situation, I wanted a script to indicate that it is meant to be sourced and not run directly.
To achieve this effect my script reads:
#!/bin/echo Should be run as: source
export SOMEPATH="/some/path/on/my/system"
echo "Your environment has been set up"
So when I run it either as a command or sourced I get:
$ ./myscript.sh
Should be run as: source ./myscript.sh
$ source ./myscript.sh
Your environment has been set up
You can of course fool the script by running it as sh ./myscript.sh, but at least it gives the correct expected behaviour on 2 out of 3 cases.
This is what I was looking for:
[[ ${BASH_SOURCE[0]} = $0 ]] && main "$@"
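For context, a minimal sketch of how this line is typically used at the bottom of a script (main is a placeholder for whatever the script does):
#!/bin/bash
main() {
    echo "running with arguments: $*"
}
# run main only when the script is executed, not when it is sourced
[[ ${BASH_SOURCE[0]} = $0 ]] && main "$@"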
This one may work regardless of whether we do:
bash scriptname
scriptname
./scriptname.
on both bash and mksh.
if [ "${0##/*}" == scriptname ] # if the current name is our script
then
echo run
else
echo sourced
fi
If you have a non-altering file path for regular users, then:
if [ "$(/bin/readlink -f "$0")" = "$KNOWN_PATH_OF_THIS_FILE" ]; then
# the file was executed
else
# the file was sourced
fi
(it can also easily be loosened to only check for the filename or whatever).
But your users need to have read permission to be able to source the file, so absolutely nothing can stop them from doing what they want with the file. But it might help them out to not use it in the wrong way.
This solution is not dependent on Bashisms.
Yes it is possible. In general you can do the following:
#! /bin/bash
sourced () {
echo Sourced
}
executed () {
echo Executed
}
if [[ ${0##*/} == -* ]]; then
sourced
else
executed "$@"
fi
Giving the following output:
$ ./myscript
Executed
$ . ./myscript
Sourced
Based on Kurt Pfeifle’s answer, this works for me
if [ $SHLVL = 1 ]
then
echo 'script was sourced'
fi
Example
Since all of our machines have history, I did this:
check_script_call=$(history |tail -1|grep myscript.sh )
if [ -z "$check_script_call" ];then
echo "This file should be called as a source."
echo "Please, try again this way:"
echo "$ source /path/to/myscript.sh"
exit 1
fi
Every time you run a script (without source), your shell creates a new environment without history.
If you care about performance, you can try this:
if ! history |tail -1|grep set_vars ;then
echo -e "This file should be called as a source.\n"
echo "Please, try again this way:"
echo -e "$ source /path/to/set_vars\n"
exit 1
fi
PS: I think Kurt's answer is much more complete but I think this could help.
In the first case, $0 will be "myscript.sh". In the second case, it will be "./myscript". But, in general, there's no way to tell source was used.
If you tell us what you're trying to do, instead of how you want to do it, a better answer might be forthcoming.
