How to pass an option in a bash script command? - bash

I have a script starting with:
#!/usr/bin/sudo bash
It does some processing that is not instant and must not be interrupted, so I would like to add the -b option to sudo so that it runs in the background once the password has been entered.
#!/usr/bin/sudo -b bash
However, the script does not accept the option. Am I doing something wrong? Can one even pass an option that way? And if not, why not?
Thank you in advance.

Let's ask shellcheck:
$ shellcheck yourscript
In yourscript line 1:
#!/usr/bin/sudo -b bash
^-- SC2096: On most OS, shebangs can only specify a single parameter.
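To see what the kernel actually passes, you can (on Linux, where the interpreter named in a shebang may itself be a script) point a throwaway script at a tiny argument-printing interpreter; the /tmp paths here are purely illustrative:
# Hypothetical interpreter that just prints each argument it receives
cat > /tmp/showargs <<'EOF'
#!/bin/sh
printf 'arg: %s\n' "$@"
EOF
chmod +x /tmp/showargs
# A script whose shebang appears to pass two parameters
cat > /tmp/demo <<'EOF'
#!/tmp/showargs -b bash
EOF
chmod +x /tmp/demo
/tmp/demo
# Typically prints:
#   arg: -b bash
#   arg: /tmp/demo
# i.e. "-b bash" arrives as a single argument, which sudo cannot parse.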
A fair workaround is to have the script invoke itself with sudo based on a flag:
#!/bin/bash
if [[ $1 == "-n" ]]
then
echo "Processing as $(whoami)"
else
printf "Option -n not specified: invoking sudo -b %q -n:" "$0"
exec sudo -b "$0" -n
fi
This has the additional benefit of letting you run yourscript -n directly when you do not want to invoke sudo or run in the background. It allows things like sudo yourscript -n && mail -s "Processing complete" you@example.com, which would not be possible if the script unconditionally backgrounded itself.
Caveat: sudo "$0" is not a bullet proof way of reinvoking the current script.

Related

How can I request elevated permissions in a bash script's begin and let it go at the end?

I have a script (myscript.sh) which runs a few commands which need elevated privileges (i.e. needs to run with sudo).
Script is quite complex, but to demonstrate it is like below:
#!/bin/bash
echo "hello"
command1_which_needs_sudo
echo "hello2"
command2_which_needs_sudo
echo "hello3"
...
If I run it as a normal user without the required privileges:
$ ./myscript.sh
hello
must be super-user to perform this action
However if I run it with the correct privileges, it will work fine:
$ sudo ./myscript.sh
hello
hello2
hello3
Can I somehow run myscript.sh without sudo and have the script request the elevated privileges only once at the beginning (and give them back once it has finished)?
So obviously, sudo command1_which_needs_sudo alone will not be good, as command2 also needs privileges.
How can I do this if I don't want to create another file, and due to script complexity I also don't want to do this with heredoc syntax?
If your main concern is code clarity, using wrapper functions can do a lot of good.
# call any named bash function under sudo with arbitrary arguments
run_escalated_function() {
    local function_name args_q
    function_name=$1; shift || return
    printf -v args_q '%q ' "$@"
    sudo bash -c "$(declare -f "$function_name"); $function_name $args_q"
}
privileged_bits() {
    command1_which_needs_sudo
    echo "hello2"
    command2_which_needs_sudo
}
echo "hello"
run_escalated_function privileged_bits
echo "hello3"
If you want to run the script with root privileges without having to type sudo in the terminal nor having to type the password more than once then you can use:
#!/bin/bash
if [ "$EUID" -ne 0 ]
then
exec sudo -s "$0" "$#"
fi
echo "hello"
command1_which_needs_sudo
echo "hello2"
command2_which_needs_sudo
echo "hello3"
# ...
sudo -k
Update:
If your goal is to execute one part of the script with sudo rights then using a quoted here‑document is probably the easiest solution; there won't be any syntax issues because the current shell won't expand anything in it.
#!/bin/bash
echo "hello"
sudo -s var="hello2" <<'END_OF_SUDO'
command1_which_needs_sudo
echo "$var"
command2_which_needs_sudo
END_OF_SUDO
sudo -k
echo "hello3"
#...
Note: you can pass values from the calling script into the here-document by adding varname=value arguments to the sudo command, as done with var="hello2" above.
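If the effect of quoting the delimiter is unclear, here is a minimal sketch of the difference (independent of the script above):
msg="expanded by the calling shell"
sudo -s <<EOF
echo "$msg"    # unquoted delimiter: the calling shell has already substituted this
EOF
sudo -s <<'EOF'
echo "$msg"    # quoted delimiter: root's shell sees a literal $msg, which is unset there
EOF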

Bash Echo passing to another script, not working as expected

I created a bash script that should write some content into a file in another user's home directory, using that user's account.
It should work like the following:
sudo ./USER.sh run 49b087ef9cb6753f "echo test > test.txt"
Basically USER.sh contains this:
if [ "$1" = "run" ]; then
cd /home/${2}/;
sudo -u ${2} ${3};
fi
But it does not write anything into test.txt; it just executes the bash command directly instead of writing the output into the file.
Does anyone have an idea how I can fix this so that it actually writes the content into the file instead of executing it directly?
Thanks.
You want:
sudo -u "$2" sh -c "$3"
The curly braces are useless here: they don't prevent word splitting and filename globbing.
The double quotes do.
With the double quotes "$3" expands to "echo test > test.txt" (without them, it's "echo" "test" ">" and "test.txt"). This needs to be executed by a shell, hence the sh -c (a POSIX shell is sufficient in this case and if it's dash, it'll start a few ms faster than bash does).
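A minimal sketch of the difference (the username alice is hypothetical):
cmd='echo test > test.txt'
sudo -u alice $cmd            # word-split into: echo, test, >, test.txt -- just prints "test > test.txt"
sudo -u alice sh -c "$cmd"    # a shell parses the string, so > becomes a real redirection into test.txt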
You could also do:
if [ "$1" = "run" ]; then
sudo -u "$2" --set-home sh -c "$(printf '%s\n' 'cd "$HOME"' "$3")"
fi
which would be more robust in the general case where user home directories aren't necessarily /home/$username, but whatever the appropriate field in /etc/passwd is.

Use Bash with script text from stdin and options from command line

I want to use /bin/bash (possibly /bin/sh) with the option -f passed to, and handled by, the script.
Precisely,
while getopts f OPT
do
    case $OPT in
        "f" ) readonly FLG_F="TRUE"
    esac
done
if [ $FLG_F ]; then
    rm -rf $KIBRARY_DIR
fi
and when these lines are in a file http://hoge.com/hoge.sh,
I can do this, for instance,
wget http://hoge.com/hoge.sh
/bin/bash hoge.sh -f
but not
/bin/bash -f hoge.sh
I know the reason, but I want to do something like this:
wget -O - http://hoge.com/hoge.sh | /bin/bash
with -f option for hoge.sh not for /bin/bash
Are there any good ways to do this?
/bin/bash <(wget -O - http://hoge.com/hoge.sh) -f
worked, but this is only for bash users, right?
Using bash you can do
wget -O - http://hoge.com/hoge.sh | /bin/bash -s -- -f
as with -s commands are read from the standard input. This option allows the positional parameters to be set too.
It should work with other POSIX shells too.
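A quick way to convince yourself what the piped script sees:
printf '%s\n' 'echo "first positional parameter: $1"' | bash -s -- -f
# prints: first positional parameter: -f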

Providing opts to piped bash script

I am trying to provide opts to a bash script when piping the script contents to bash for execution.
#!/bin/bash
SETUP_PACKAGES=""
while getopts ":u:" opt; do
case $opt in
p)
if [[ "$OPTARG" =~ "mysql" ]] ; then SETUP_PACKAGES="$SETUP_PACKAGES mysql-client libmysqlclient-dev"; fi
;;
# other parts omitted...
esac
done
Executing the script in a shell like ./script.sh -p mysql works. The aim is to store the script in a repository so I tried curl -L example.com/my/script | bash -p mysql. This however throws /usr/bin/mysql: /usr/bin/mysql: cannot execute binary file.
What do I need to do to achieve my goal?
You need the -s option so that bash reads the script from standard input while still letting you pass positional parameters (after --):
curl -L example.com/my/script | bash -s -- -p mysql
Note that the getopts spec in the question is inconsistent: it declares a u option (":u:") but the case statement only handles p.
Anyway, here is how to call it with curl and process substitution instead:
bash <(curl -L example.com/my/script) -p mysql
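One practical difference between the two forms: with the pipe, the script's standard input is the script text itself, while with process substitution it stays connected to your terminal, so interactive prompts still work. A rough sketch (the /tmp path is arbitrary):
printf '%s\n' 'echo "args: $*"; read -r -p "continue? " ans; echo "answer: $ans"' > /tmp/script
bash <(cat /tmp/script) -p mysql        # read prompts at the terminal as usual
cat /tmp/script | bash -s -- -p mysql   # stdin is the pipe, so read hits end-of-file immediately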

How to invoke bash, run commands inside the new shell, and then give control back to user?

This must either be really simple or really complex, but I couldn't find anything about it... I am trying to open a new bash instance, then run a few commands inside it, and give the control back to the user inside that same instance.
I tried:
$ bash -lic "some_command"
but this executes some_command inside the new instance, then closes it. I want it to stay open.
One more detail which might affect answers: if I can get this to work I will use it in my .bashrc as alias(es), so bonus points for an alias implementation!
bash --rcfile <(echo '. ~/.bashrc; some_command')
dispenses with the creation of temporary files. Related questions on other sites:
https://serverfault.com/questions/368054/run-an-interactive-bash-subshell-with-initial-commands-without-returning-to-the
https://unix.stackexchange.com/questions/123103/how-to-keep-bash-running-after-command-execution
This is a late answer, but I had the exact same problem and Google sent me to this page, so for completeness here is how I got around the problem.
As far as I can tell, bash does not have an option to do what the original poster wanted to do. The -c option will always return after the commands have been executed.
Broken solution: The simplest and obvious attempt around this is:
bash -c 'XXXX ; bash'
This partly works (albeit with an extra sub-shell layer). However, the problem is that while a sub-shell will inherit the exported environment variables, aliases and functions are not inherited. So this might work for some things but isn't a general solution.
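A minimal sketch of what does and does not make it into the bash -c sub-shell (the names are arbitrary; functions can be exported with export -f, aliases cannot):
greet() { echo "hi from a function"; }
alias ll='ls -l'
export MARKER=visible
bash -c 'echo "MARKER=$MARKER"; type greet; type ll'
# prints MARKER=visible, then reports that greet and ll are not found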
Better: The way around this is to dynamically create a startup file and call bash with this new initialization file, making sure that your new init file calls your regular ~/.bashrc if necessary.
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
echo "source ~/.bashrc" > "$TMPFILE"
echo "<other commands>" >> "$TMPFILE"
echo "rm -f $TMPFILE" >> "$TMPFILE"
# Start the new bash shell
bash --rcfile "$TMPFILE"
The nice thing is that the temporary init file will delete itself as soon as it is used, reducing the risk that it is not cleaned up correctly.
Note: I'm not sure if /etc/bashrc is usually called as part of a normal non-login shell. If so you might want to source /etc/bashrc as well as your ~/.bashrc.
You can pass --rcfile to Bash to cause it to read a file of your choice. This file will be read instead of your .bashrc. (If that's a problem, source ~/.bashrc from the other script.)
Edit: So a function to start a new shell with the stuff from ~/.more.sh would look something like:
more() { bash --rcfile ~/.more.sh ; }
... and in .more.sh you would have the commands you want to execute when the shell starts. (I suppose it would be elegant to avoid a separate startup file -- you cannot use standard input because then the shell will not be interactive, but you could create a startup file from a here document in a temporary location, then read it.)
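For completeness, a sketch of that here-document variant, so no separate ~/.more.sh has to be kept around (the startup commands shown are placeholders):
more() {
    local rcfile
    rcfile=$(mktemp) || return
    cat > "$rcfile" <<EOF
. ~/.bashrc
echo "extra startup commands go here"
rm -f -- "$rcfile"    # the rc file deletes itself once bash has read it
EOF
    bash --rcfile "$rcfile"
}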
bash -c '<some command> ; exec /bin/bash'
avoids the additional shell sub-layer: the exec replaces the bash -c shell instead of nesting another one inside it.
You can get the functionality you want by sourcing the script instead of running it, e.g.:
$ cat script
cmd1
cmd2
$ . script
$ # at this point cmd1 and cmd2 have been run inside this shell
Append to ~/.bashrc a section like this:
if [ "$subshell" = 'true' ]
then
# commands to execute only on a subshell
date
fi
alias sub='subshell=true bash'
Then you can start the subshell with sub.
The accepted answer is really helpful! Just to add that process substitution (i.e., <(COMMAND)) is not supported in some shells (e.g., dash).
In my case, I was trying to create a custom action (basically a one-line shell script) in Thunar file manager to start a shell and activate the selected Python virtual environment. My first attempt was:
urxvt -e bash --rcfile <(echo ". $HOME/.bashrc; . %f/bin/activate;")
where %f is the path to the virtual environment handled by Thunar.
I got an error (by running Thunar from command line):
/bin/sh: 1: Syntax error: "(" unexpected
Then I realized that my sh (essentially dash) does not support process substitution.
My solution was to invoke bash at the top level to interpret the process substitution, at the expense of an extra level of shell:
bash -c 'urxvt -e bash --rcfile <(echo "source $HOME/.bashrc; source %f/bin/activate;")'
Alternatively, I tried to use here-document for dash but with no success. Something like:
echo -e " <<EOF\n. $HOME/.bashrc; . %f/bin/activate;\nEOF\n" | xargs -0 urxvt -e bash --rcfile
In accordance with the answer by daveraja, here is a bash script which will serve the purpose.
Consider a situation where you are using the C shell and you want to execute a command without leaving the C-shell context/window, as follows.
Command to be executed: search recursively for the exact word 'Testing' in the current directory, only in *.h and *.c files:
grep -nrs --color -w --include="*.{h,c}" Testing ./
Solution 1: Enter into bash from C-shell and execute the command
bash
grep -nrs --color -w --include="*.{h,c}" Testing ./
exit
Solution 2: Write the intended command into a text file and execute it using bash
echo 'grep -nrs --color -w --include="*.{h,c}" Testing ./' > tmp_file.txt
bash tmp_file.txt
Solution 3: Run command on the same line using bash
bash -c 'grep -nrs --color -w --include="*.{h,c}" Testing ./'
Solution 4: Create a script (one time) and use it for all future commands
alias ebash './execute_command_on_bash.sh'
ebash grep -nrs --color -w --include="*.{h,c}" Testing ./
The script is as follows,
#!/bin/bash
# =========================================================================
# References:
# https://stackoverflow.com/a/13343457/5409274
# https://stackoverflow.com/a/26733366/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://www.linuxquestions.org/questions/other-%2Anix-55/how-can-i-run-a-command-on-another-shell-without-changing-the-current-shell-794580/
# https://www.tldp.org/LDP/abs/html/internalvariables.html
# https://stackoverflow.com/a/4277753/5409274
# =========================================================================
# Enable the following line to see the script commands
# getting printed along with their execution. This will help with debugging.
#set -o verbose
E_BADARGS=85
if [ ! -n "$1" ]
then
echo "Usage: `basename $0` grep -nrs --color -w --include=\"*.{h,c}\" Testing ."
echo "Usage: `basename $0` find . -name \"*.txt\""
exit $E_BADARGS
fi
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
#echo "echo Hello World...." >> $TMPFILE
#initialize the variable that will contain the whole argument string
argList=""
#iterate on each argument
for arg in "$#"
do
#if an argument contains a white space, enclose it in double quotes and append to the list
#otherwise simply append the argument to the list
if echo "$arg" | grep -q " "; then
argList="$argList \"$arg\""
else
argList="$argList $arg"
fi
done
#remove a possible leading space at the beginning of the list
argList=$(echo $argList | sed 's/^ *//')
# Echoing the command to be executed to tmp file
echo "$argList" >> $TMPFILE
# Note: This should be your last command
# Important last command which deletes the tmp file
last_command="rm -f $TMPFILE"
echo "$last_command" >> $TMPFILE
#echo "---------------------------------------------"
#echo "TMPFILE is $TMPFILE as follows"
#cat $TMPFILE
#echo "---------------------------------------------"
check_for_last_line=$(tail -n 1 $TMPFILE | grep -o "$last_command")
#echo $check_for_last_line
#if tail -n 1 $TMPFILE | grep -o "$last_command"
if [ "$check_for_last_line" == "$last_command" ]
then
#echo "Okay..."
bash $TMPFILE
exit 0
else
echo "Something is wrong"
echo "Last command in your tmp file should be removing itself"
echo "Aborting the process"
exit 1
fi
$ bash --init-file <(echo 'some_command')
$ bash --rcfile <(echo 'some_command')
In case you can't or don't want to use process substitution:
$ cat script
some_command
$ bash --init-file script
Another way:
$ bash -c 'some_command; exec bash'
$ sh -c 'some_command; exec sh'
sh-only way (dash, busybox):
$ ENV=script sh
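For the ENV variant, POSIX-style shells read the file named by ENV when they start an interactive session, so for example (the /tmp/init path is arbitrary):
$ printf '%s\n' 'echo "hello from the ENV file"' > /tmp/init
$ ENV=/tmp/init dash
hello from the ENV file
$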
Here is yet another (working) variant:
This opens a new gnome terminal, then in the new terminal it runs bash. The user's rc file is read first, then a command ls -la is sent for execution to the new shell before it turns interactive.
The last echo adds an extra newline that is needed to finish execution.
gnome-terminal -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
I also find it useful sometimes to decorate the terminal, e.g. with color, for better orientation.
gnome-terminal --profile green -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
