bash shell: Avoid alias interpreting $!

I have an alias created:
alias my_rsync = "rsync -av ${PATH_EXCLUDE_DEV} ${PATH_SYS_DEV}/` ${PATH_SYS_SANDBOX}/ && wait $! && cd - &>/dev/null\"
When I load this alias and inspect it with the command 'type my_rsync', I see that $! is gone because it has already been expanded.
Normally I escape with a backslash and it works well. For example:
alias my_rsync = "mysql ${DB_DATA_SYS_SANDBOX} -e 'SHOW TABLES' | grep -v 'Tables_in_${DB_NAME_SANDBOX}' | while read a; do mysql ${DB_DATA_SYS_SANDBOX} -e \"DROP TABLE \$a\";done"
Can you guys give me a hint? Thanks.

Use a function, not an alias, and you avoid this altogether.
my_rsync() {
  # BTW, this is horrible quoting; run your code through http://shellcheck.net/.
  # Also, all-caps variable names are bad form except for specific reserved classes.
  rsync -av ${PATH_EXCLUDE_DEV} ${PATH_SYS_DEV}/ ${PATH_SYS_SANDBOX}/ &>/dev/null
  cd -
}
...in this formulation, no expansions happen until the function is actually run.
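To illustrate when expansion happens, here is a minimal transcript (foo, a, and f are made-up names):
$ foo=1
$ alias a="echo $foo"     # $foo expands right now, at definition time
$ f() { echo "$foo"; }    # nothing expands until f is actually run
$ foo=2
$ a
1
$ f
2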
As for wait -- it only makes sense when you're running things in the background. The usage you have here doesn't start anything in the background, so the wait call has no purpose.
On the other hand, the following shows some wait calls that do have purpose:
rsync_args=( --exclude='/dev/*' --exclude='/sys/*' )
hosts=( foo.example.com bar.example.com )
my_rsync() {
  # declare local variables
  declare -a pids=( )  # array to hold PIDs
  declare host pid     # scalar variables to hold items being iterated over
  for host in "${hosts[@]}"; do
    rsync -av "${rsync_args[@]}" /sandbox "$host":/path & pids+=( "$!" )
  done
  for pid in "${pids[@]}"; do
    wait "$pid"
  done
}
This runs multiple rsyncs (one for each host) at the same time in the background, stores their PIDs in an array, and then iterates through that array when they're all running to let them complete.
Notably, it's the single & operator that causes the rsyncs to be run in the background. If they were separated from the following command with &&, ; or a newline instead, they would be run one at a time, and the value of $! would never be changed.
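For illustration, a minimal sketch of that distinction (sleep is just a stand-in for a long-running command like rsync):
sleep 2 &                                      # & backgrounds the job and sets $!
echo "background PID: $!"
sleep 0 && echo "foreground; \$! is still $!"  # && runs in the foreground; $! is untouched
wait                                           # block until the background sleep finishes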

If you don't want a variable to be interpreted, escape it or use single quotes.
Example:
$ alias x="false; echo $?"
$ x
0
$ alias x='false; echo $?'
$ x
1
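Escaping the $ inside double quotes achieves the same thing, while still letting other expansions happen at definition time:
$ alias x="false; echo \$?"
$ x
1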

Related

Commands executed with arguments in shell script is escaped with single quotes [duplicate]

I have managed to track down a weird problem in an init script I am working on. I have simplified the problem down to the following example:
> set -x # <--- Make Bash show the commands it runs
> cmd="echo \"hello this is a test\""
+ cmd='echo "hello this is a test"'
> $cmd
+ echo '"hello' this is a 'test"' # <--- Where have the single quotes come from?
"hello this is a test"
Why is bash inserting those extra single quotes into the executed command?
The extra quotes don't cause any problems in the above example, but they are really giving me a headache.
For the curious, the actual problem code is:
cmd="start-stop-daemon --start $DAEMON_OPTS \
--quiet \
--oknodo \
--background \
--make-pidfile \
$* \
--pidfile $CELERYD_PID_FILE
--exec /bin/su -- -c \"$CELERYD $CELERYD_OPTS\" - $CELERYD_USER"
Which produces this:
start-stop-daemon --start --chdir /home/continuous/ci --quiet --oknodo --make-pidfile --pidfile /var/run/celeryd.pid --exec /bin/su -- -c '"/home/continuous/ci/manage.py' celeryd -f /var/log/celeryd.log -l 'INFO"' - continuous
And therefore:
/bin/su: invalid option -- 'f'
Note: I am using the su command here as I need to ensure the user's virtualenv is setup before celeryd is run. --chuid will not provide this
Because when you try to execute your command with
$cmd
only one layer of expansion happens. $cmd contains echo "hello this is a test", which is expanded into 6 whitespace-separated tokens:
echo
"hello
this
is
a
test"
and that's what the set -x output is showing you: it's putting single quotes around the tokens that contain double quotes, in order to be clear about what the individual tokens are.
If you want $cmd to be expanded into a string which then has all the bash quoting rules applied again, try executing your command with:
bash -c "$cmd"
or (as @bitmask points out in the comments, and this is probably more efficient)
eval "$cmd"
instead of just
$cmd
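For instance, continuing the transcript above (a sketch; the exact set -x trace may vary slightly between bash versions):
> eval "$cmd"
+ eval 'echo "hello this is a test"'
++ echo 'hello this is a test'
hello this is a test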
Use Bash arrays to achieve the behavior you want, without resorting to the very dangerous (see below) eval and bash -c.
Using arrays:
declare -a CMD=(echo --test-arg \"Hello\ there\ friend\")
set -x
echo "${CMD[@]}"
"${CMD[@]}"
outputs:
+ echo echo --test-arg '"Hello there friend"'
echo --test-arg "Hello there friend"
+ echo --test-arg '"Hello there friend"'
--test-arg "Hello there friend"
Be careful to ensure that your array invocation is wrapped by double-quotes; otherwise, Bash tries to perform the same "bare-minimum safety" escaping of special characters:
declare -a CMD=(echo --test-arg \"Hello\ there\ friend\")
set -x
echo "${CMD[@]}"
${CMD[@]}
outputs:
+ echo echo --test-arg '"Hello there friend"'
echo --test-arg "Hello there friend"
+ echo --test-arg '"Hello' there 'friend"'
--test-arg "Hello there friend"
ASIDE: Why is eval dangerous?
eval is only safe if you can guarantee that every input passed to it will not unexpectedly change the way that the command under eval works.
Example: As a totally contrived example, let's say we have a script that runs as part of our automated code deployment process. The script sorts some input (in this case, three lines of hardcoded text), and outputs the sorted text to a file whose name is based on the current directory name. Similar to the original SO question posed here, we want to dynamically construct the --output= parameter passed to sort, but we must (must? not really) rely on eval because of Bash's auto-quoting "safety" feature.
echo $'3\n2\n1' | eval sort -n --output="$(pwd | sed 's:.*/::')".txt
Running this script in the directory /usr/local/deploy/project1/ results in a new file being created at /usr/local/deploy/project1/project1.txt.
Now, if a user were to create a project subdirectory named owned.txt; touch hahaha.txt; echo, the script would actually run the following series of commands:
echo $'3\n2\n1'
sort -n --output=owned.txt; touch hahaha.txt; echo .txt
As you can see, that's totally not what we want. But you may ask, in this contrived example, isn't it unlikely that a user could create a project directory named owned.txt; touch hahaha.txt; echo, and if they could, aren't we in trouble already?
Maybe, but what about a scenario where the script is parsing not the current directory name, but instead the name of a remote git source code repository branch? Unless you plan to be extremely diligent about restricting or sanitizing every user-controlled artifact whose name, identifier, or other data is used by your script, stay well clear of eval.
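For contrast, here is a sketch of the same pipeline with no eval at all; a quoted command substitution is expanded once and never re-parsed as shell syntax, so a hostile directory name simply becomes part of the output filename:
echo $'3\n2\n1' | sort -n --output="$(pwd | sed 's:.*/::').txt"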

Setting environment variables with background processes running in parallel

I have a file that takes 1 min to source. So within that file, I created functions and then ran them in parallel using &. The exported variables from the child processes are not available in the current environment. Is there a solution or trick to solve this issue? Thanks.
Sample:
#!/bin/bash
function getCNAME() {
curl ...... grep
export CNAME
}
function getBNAME() {
curl ...... grep
export BNAME
}
getCNAME &
getBNAME &
And then I have a main file that calls the source command on the code above and tries to use the variables BNAME and CNAME, but it is unable to do so. If I remove the &, it does have access to those variables, but sourcing the file takes a long time.
You can't use export in your subshell and expect the parent shell to have access to the resulting variable. Consider using process substitutions instead:
#!/bin/bash
# note that if you're sourcing this, as you should be, the shebang will be ignored.
# ...hopefully it's just there for your editor's syntax highlighting.
rc=0
orig_pipefail_setting=$(shopt -p -o pipefail)
shopt -s -o pipefail # make sure that if either curl _or_ grep fails, the entire pipeline fails too
# start both processes in the background, with their stdout on two different FDs
exec 4< <(curl ... | grep ... && printf '\0')
exec 5< <(curl ... | grep ... && printf '\0')
# read from those FDs into variables in the current shell
IFS= read -r -d '' BNAME <&4 || { (( rc |= $? )); echo "Error reading BNAME" >&2; }
IFS= read -r -d '' CNAME <&5 || { (( rc |= $? )); echo "Error reading CNAME" >&2; }
exec 4<&- 5<&- # close those file descriptors now that we're done with them
export BNAME CNAME # note that you probably don't actually need to export these
eval "$orig_pipefail_setting" # turn pipefail back off, if it wasn't on when we started
return "$rc" # ...return with an exit status reflecting whether we had any errors
That way file descriptors 4 and 5 will each be attached to a shell pipeline running curl and feeding its output to grep; both of them are started in the background before we try to read from either, so they're both running at the same time.
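Hypothetical usage from the main file, assuming the snippet above is saved as fetch_names.sh (a made-up name):
. ./fetch_names.sh && echo "BNAME=$BNAME CNAME=$CNAME"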
Are you sure the last two lines shouldn't be:
getCNAME
getBNAME
Edit - OP has fixed this, it used to read:
CNAME
BNAME
If you are sourcing a script (. /my/script), it is not a child process, and its variables will be available in the current shell. You don't even need export.
If you are executing a script normally, it is a child process, and you can't set variables in the parent shell.
The only method I'm aware of for transferring data to the parent shell is via a file.
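A minimal sketch of the file-based approach, with echo standing in for the asker's curl | grep pipelines:
tmpdir=$(mktemp -d)
getCNAME() { echo "cname-value" > "$tmpdir/cname"; }  # stand-in for curl | grep
getBNAME() { echo "bname-value" > "$tmpdir/bname"; }
getCNAME & getBNAME &
wait                        # both writers must finish before we read
CNAME=$(<"$tmpdir/cname")
BNAME=$(<"$tmpdir/bname")
rm -rf "$tmpdir"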
The variables should be available.
Check for bugs in your script:
Make sure you haven't used local for the variables in the functions.
Do echo "$CNAME" at the bottom of the sourced script, to test the functions are actually working at all.
EDIT
I did a little more investigation. Here is the problem: & puts the command/function in a subshell. That's why the variable is not available. In a sourced script, without &, it would be.
From man bash:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.
These are referred to as asynchronous commands.
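A quick demonstration of that last point:
x=1; { x=2; };   echo "$x"  # 2 -- a brace group runs in the current shell
x=1; ( x=2 );    echo "$x"  # 1 -- a subshell cannot change the parent
x=1; x=2 & wait; echo "$x"  # 1 -- & runs in a subshell too, hence the problem here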

Bash: ( commands ) | command

I stumbled upon Zenity, a command-line based GUI, today. I noticed there was some syntax of the form ( commands ) | command. Could anyone shed some light on what this is and where I can read more about it?
I found the below script within the docs
(
  echo "10" ; sleep 1
  echo "# Updating mail logs" ; sleep 1
  echo "50" ; sleep 1
  echo "This line will just be ignored" ; sleep 1
  echo "100" ; sleep 1
) |
zenity --progress \
  --title="Update System Logs" \
  --text="Scanning mail logs..." \
  --percentage=0
The parentheses are delimiting a subshell, which means the commands inside the parens are run in a separate process and interpreted by a separate instance of the bash interpreter. In this case, it appears they are using a subshell just to group together all the echo and sleep commands so that they can pipe the combined output of the entire group of commands through zenity, which makes sense given that the goal in this example is to simulate a progress bar.
You can read more about subshells here: http://tldp.org/LDP/abs/html/subshells.html
The parentheses create a subshell, with all the implications that it has for the current shell.
The subshell cannot change the environment of the parent shell; sometimes, you want a subshell so that you can, say, quickly cd to a different directory without affecting the working directory for the rest of the script.
The subshell has a single standard input and a single standard output stream; this is frequently a reason to start a subshell.
The parent shell waits while the subshell completes the commands (unless you run it in the background).
If it helps, think of ( foo; bar ) as a quick way to say sh -c 'foo; bar'.
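For example, the quick-cd case mentioned above:
pwd                 # e.g. /home/user
( cd /tmp && pwd )  # prints /tmp
pwd                 # still /home/user -- the parent's working directory is untouched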
A related piece of syntax is the brace, which runs a compound command in the current shell, not a subshell.
test -f file.rc || { echo "$0: file.rc not found -- aborting" >&2; exit 127; }
The exit in particular causes the current shell to exit with a failure exit code, while a subshell which exits does not directly affect the rest of the parent shell script.
(Weirdly, POSIX requires a statement terminator before the closing brace, but not before the closing parenthesis.)
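A short demonstration of the difference with exit (don't paste the brace version into an interactive shell -- it will close it):
( exit 3 ); echo "after subshell: $?"  # prints: after subshell: 3
{ exit 3; }                            # exits the current shell with status 3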

How to include nohup inside a bash script?

I have a large script called mandacalc which I want to always run with the nohup command. If I call it from the command line as:
nohup mandacalc &
everything runs swiftly. But if I try to include nohup inside my command, so I don't need to type it every time I execute it, I get an error message.
So far I tried these options:
nohup (
command1
....
commandn
exit 0
)
and also:
nohup bash -c "
command1
....
commandn
exit 0
" # and also with single quotes.
So far I only get error messages complaining about the implementation of the nohup command, or about other quotes used inside the script.
cheers.
Try putting this at the beginning of your script:
#!/bin/bash
case "$1" in
  -d|--daemon)
    $0 < /dev/null &> /dev/null & disown
    exit 0
    ;;
  *)
    ;;
esac
# do stuff here
If you now start your script with --daemon as an argument, it will restart itself detached from your current shell.
You can still run your script "in the foreground" by starting it without this option.
Just put trap '' HUP at the beginning of your script.
Also, if it creates child processes (someCommand &), you will have to change them to nohup someCommand & for this to work properly... I have been researching this for a long time and only the combination of these two (the trap and nohup) works on my specific script, where xterm closes too fast.
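A minimal sketch of that combination (some_child is a placeholder command):
#!/bin/bash
trap '' HUP         # ignore SIGHUP so a closing terminal can't kill the script
nohup some_child &  # per the above, children need their own nohup
wait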
Create an alias of the same name in your bash (or preferred shell) startup file:
alias mandacalc="nohup mandacalc &"
Why don't you just make a script containing nohup ./original_script ?
There is a nice answer here: http://compgroups.net/comp.unix.shell/can-a-script-nohup-itself/498135
#!/bin/bash
### make sure that the script is called with `nohup nice ...`
if [ "$1" != "calling_myself" ]
then
    # this script has *not* been called recursively by itself
    datestamp=$(date +%F | tr -d -)
    nohup_out=nohup-$datestamp.out
    nohup nice "$0" "calling_myself" "$@" > $nohup_out &
    sleep 1
    tail -f $nohup_out
    exit
else
    # this script has been called recursively by itself
    shift # remove the termination condition flag in $1
fi
### the rest of the script goes here
. . . . .
The best way to handle this is to hand nohup a single shell invocation that runs all the commands:
nohup bash -c 'command1; command2; ...' &
nohup expects one command, and this way you're able to execute multiple commands with one nohup. (Note that $( command1; command2 ) would not work here: command substitution runs the commands immediately and passes their output to nohup.)
