Preventing premature shell parameter expansion in chained commands - bash

I have a system that allows commands to be executed from a host on various external machines, from a bash shell script, using either ssh or 'sersh', which is similar to ssh but sends commands over a serial port. (The details of these commands don't matter.)
I'm trying to chain the commands together, from one external machine to a third machine. I'm having a hard time figuring out how to get the shell to expand parameters only on the final machine.
function do_cmd () {
    case $TRANSPORT in
        ssh)
            ssh -i ${SSH_KEY} ${LOGIN}@${IPADDR} "$@"
            ;;
        serial)
            sersh ${SER_LOGIN}@${SERIAL_DEV} "$@"
            ;;
        ssh2serial)
            ssh -i ${SSH_KEY} ${LOGIN}@${IPADDR} \
                "sersh ${SER_LOGIN}@${SERIAL_DEV} $@"
            ;;
        *)
            echo "Unknown transport $TRANSPORT"
            ;;
    esac
}
do_cmd "echo hello"
do_cmd "echo \"my pid is \$\$\""
do_cmd "cd /proc ; for pid in 1* ; do echo \$pid, ; done"
All three of these calls work correctly when TRANSPORT is 'ssh' or 'serial'. For the TRANSPORT 'ssh2serial', in the second call to do_cmd, the $$ is expanded prematurely (on the intermediate machine, not on the final machine). And for the third call to do_cmd, $pid ends up being expanded to the empty string before the loop executes on the final machine.
I thought about double-escaping the dollar signs, but the caller doesn't know how many levels of intermediate machines there are. Is there a way to prevent the parameter expansion on the intermediate machine, and only do it on the final machine?

[@MadPhysicist reminds me that I'd still have to escape the variable expansions. My answer is left below for reference, but @MadPhysicist is right: it probably fails to answer the question asked.]
Use eval.
Try this exercise, typing the following four commands at your shell's command line. I believe that the exercise is likely to answer your question.
$ X=55
$ A='echo $(( $X + $X ))'
$ echo "$A"
$ eval "$A"
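For reference, the output of those last two commands is:
$ echo "$A"
echo $(( $X + $X ))
$ eval "$A"
110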
Once the exercise has shown you what the shell's built-in command eval does, you can get the effect you want by using eval on the final machine in your chain.
[Please notice that the exercise says, A='echo $(( $X + $X ))'. It does not say, A=`echo $(( $X + $X ))`. Observe the way the quotation marks slant. You want ordinary single quotes here, not backticks.]
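For the original multi-hop question, a related approach (a sketch of mine, not part of the answer above) is to add exactly one extra layer of quoting per hop with bash's printf %q, so each intermediate shell strips one layer and only the final shell performs the expansion:
# Hypothetical ssh2serial branch using printf %q: the intermediate shell
# removes the quoting that %q added, and the final shell does the expansion.
ssh2serial)
    ssh -i ${SSH_KEY} ${LOGIN}@${IPADDR} \
        "sersh ${SER_LOGIN}@${SERIAL_DEV} $(printf '%q' "$*")"
    ;;
With this, do_cmd "echo \"my pid is \$\$\"" reaches the final machine as echo "my pid is $$", where $$ is finally expanded.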

Related

Unable to run command in function (shell script)

I have this function in my ~/.zshrc
async () {
    if ! [[ $# -gt 0 ]]; then
        echo "Not enough arguments!"
    fi
    local busus="$IFS"
    export IFS=" "
    echo "$* &"
    command "$* &"
    export IFS="$busus"
}
and an alias
alias word='async "libreoffice --writer"'
The echo "$* &" line is used only for debugging.
When I run word, libreoffice --writer & is shown on the screen (no extra spaces or newlines), but nothing happens.
I also tried executing command libreoffice --writer & and it worked perfectly.
(My current shell is zsh)
What is wrong?
Thanks
Usually (especially in bash), the problem is that people aren't using enough double-quotes; in this case, it's the opposite: you're using too many double-quotes. The basic problem is that the command name and each of the arguments to it must be a separate "word" (in shell syntax), but double-quoting something will (usually) make the shell treat it as all one word. Here's a quick demo:
% echo foo
foo
% "echo foo"
zsh: command not found: echo foo
Here, the double-quotes make the shell treat " foo" as part of the command name, rather than as a delimiter and an argument after the command name. Similarly, when you use "$* &", the double-quotes tell the shell to treat the entire thing (including even the ampersand) as a single long word (and pass it as an argument to command). (BTW, the command isn't needed, but isn't causing any harm either.)
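With the alias above, the function therefore effectively ends up running:
% command "libreoffice --writer &"
zsh: command not found: libreoffice --writer &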
The standard way to do this is to use "$@" instead -- here the $@ acts specially within double-quotes, making each argument into a separate word. In zsh, you could omit the double-quotes, but that can cause trouble in other shells so I recommend using them anyway.
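A quick demo of the difference, using printf to show the word boundaries:
% set -- libreoffice --writer
% printf '<%s>\n' "$*"
<libreoffice --writer>
% printf '<%s>\n' "$@"
<libreoffice>
<--writer>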
Also, don't mess with IFS. You don't need to, and it opens a can of worms that's best left closed. And if there are no arguments, you should return immediately, rather than continuing and trying to run an empty command.
But there's another problem: in the alias, you double-quote "libreoffice --writer", which is going to have pretty much the same effect again. So remove those double-quotes. But keep the single-quotes around the alias, so it'll be defined as a single alias.
So here's my proposed correction:
async () {
    if ! [[ $# -gt 0 ]]; then
        echo "Not enough arguments!"
        return 1  # Do not continue if there's no command to run!
    fi
    echo "$* &"  # Here quoting is appropriate: it's a single argument to echo
    "$@" &
}
alias word='async libreoffice --writer'
Using "$#" directly is more reliable:
async () { [ "$#" -gt 0 ] && "$#" & }
alias word='async libreoffice --writer'
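Usage is then simply:
% word    # starts LibreOffice Writer in the background; the prompt returns immediately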

What is the meaning of "${psql[@]}" in this script?

I came across a script that is supposed to set up postgis in a docker container, but it references this "${psql[@]}" command in several places:
#!/bin/sh
# Perform all actions as $POSTGRES_USER
export PGUSER="$POSTGRES_USER"
# Create the 'template_postgis' template db
"${psql[#]}" <<- 'EOSQL'
CREATE DATABASE template_postgis;
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';
EOSQL
I'm guessing it's supposed to use the psql command, but the command is always empty so it gives an error. Replacing it with psql makes the script run as expected. Is my guess correct?
Edit: In case it's important, the command is being run in a container based on postgres:11-alpine.
$psql is supposed to be an array containing the psql command and its arguments.
The script is apparently expected to be run from here, which does
psql=( psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --no-password )
and later sources the script in this loop:
for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)
            # https://github.com/docker-library/postgres/issues/450#issuecomment-393167936
            # https://github.com/docker-library/postgres/pull/452
            if [ -x "$f" ]; then
                echo "$0: running $f"
                "$f"
            else
                echo "$0: sourcing $f"
                . "$f"
            fi
            ;;
        *.sql)    echo "$0: running $f"; "${psql[@]}" -f "$f"; echo ;;
        *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${psql[@]}"; echo ;;
        *)        echo "$0: ignoring $f" ;;
    esac
    echo
done
See Setting an argument with bash for the reason to use an array rather than a string.
The #!/bin/sh and the [@] are incongruous. This is a bash-ism, where the psql variable is an array. This literal quote dollar-sign psql bracket at bracket quote is expanded into "psql" "array" "values" "each" "listed" "and" "quoted" "separately." It's the safer way, e.g., to accumulate arguments to a command where any of them might have spaces in them.
psql=(/foo/psql arg arg arg) is the best way to define the array you need there.
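For instance, printf makes the separate words visible (a small demo with made-up arguments):
psql=(/foo/psql --username "some user" --no-password)
printf '<%s>\n' "${psql[@]}"
# </foo/psql>
# <--username>
# <some user>
# <--no-password>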
It might look obscure, but it would work like so...
Let's say we have a bash array wc, which contains a command wc, and an argument -w, and we feed that a here document with some words:
wc=(wc -w)
"${wc[#]}" <<- words
one
two three
four
words
Since there are four words in the here document, the output is:
4
In the quoted code, there needs to be some prior point, (perhaps a calling script), that does something like:
psql=(psql -option1 -option2 arg1 arg2 ... )
As to why the programmer chose to invoke a command with an array, rather than just invoke the command, I can only guess... Maybe it's a crude sort of operator overloading to compensate for different *nix distros, (i.e. BSD vs. Linux), where the local variants of some necessary command might have different names from the same option, or even use different commands. So one might check for BSD or Linux or a given version, and reset psql accordingly.
The answer from @Barmar is correct.
The script was intended to be "sourced" and not "executed".
I faced the same problem and came to the same conclusion after reading that it had been reported here and fixed with "chmod":
https://github.com/postgis/docker-postgis/issues/119
Therefore, the fix is to change the permissions.
This can be done either in your git repository:
chmod -x initdb-postgis.sh
or by adding a line to your Dockerfile:
RUN chmod -x /docker-entrypoint-initdb.d/10_postgis.sh
I like to do both so that it is clear to others.
Note: if you are using git on Windows, the permission bits can be lost; therefore, the chmod in the Dockerfile is needed.

Store a command in a variable; implement without `eval`

This is almost the exact same question as in this post, except that I do not want to use eval.
Long story short, I want to execute the command echo aaa | grep a by first storing it in a string variable Command='echo aaa | grep a', and then running it without using eval.
In the post above, the selected answer used eval. That works for me too. What concerns me a lot is that there are plenty of warnings about eval below it, followed by some attempts to circumvent it. However, none of them solve my problem (essentially the OP's). I have commented under their attempts, but since that post has been around for a long time, I think it is better to post the question again with the restriction of not using eval.
Concrete Example
What I want is a shell script that runs my command when I am happy:
#!/bin/bash
# This script run-if.sh runs the command when I am happy
# Warning: the following script does not work (on purpose)
if [ "$1" == "I-am-happy" ]; then
    "$2"
fi
$ run-if.sh I-am-happy [insert-any-command]
Your sample usage can't ever work with an assignment, because assignments are scoped to the current process and its children. And since there's no reason to try to support assignments, things get much easier:
#!/bin/sh
if [ "$1" = "I-am-happy" ]; then
    shift; "$@"
fi
This can then use all the usual techniques to run shell pipelines, such as:
run-if-happy "$happiness" \
    sh -c 'echo "$1" | grep "$2"' _ "$untrustedStringOne" "$untrustedStringTwo"
Note that we're passing the execve() syscall an argv with six elements:
sh (the shell to run; change to bash etc if preferred)
-c (telling the shell that the following argument is the code for it to run)
echo "$1" | grep "$2" (the code for sh to parse)
_ (a constant which becomes $0)
...whatever the shell variable untrustedStringOne contains... (which becomes $1)
...whatever the shell variable untrustedStringTwo contains... (which becomes $2)
Note here that echo "$1" | grep "$2" is a constant string -- in single-quotes, with no parameter expansions or command substitutions -- and that untrusted values are passed into the slots that fill in $1 and $2, out-of-band from the code being evaluated; this is essential to have any kind of increase in security over what eval would give you.
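A quick hypothetical illustration of why passing values out-of-band matters:
untrusted='$(rm -rf /)'            # hostile input; treated as data, never parsed as code
sh -c 'echo "$1"' _ "$untrusted"   # prints the literal text: $(rm -rf /)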

trouble capturing output of a subshell that has been backgrounded

I'm attempting to make a "simple" parallel function in bash. The current problem is that when the line that captures the output is backgrounded, the output is lost. If that line is not backgrounded, the output is captured fine, but that of course defeats the purpose of the function.
#!/usr/bin/env bash
cluster="${1:-web100s}"
hosts=($(inventory.pl bash "$cluster" | sort -V))
cmds="${2:-uptime}"
parallel=10
cx=0
total=0
for host in "${hosts[@]}"; do
    output[$total]=$(echo -en "$host: ")
    echo "${output[$total]}"
    output[$total]+=$(ssh -o ConnectTimeout=5 "$host" "$cmds") &
    cx=$((cx + 1))
    total=$((total + 1))
    if [[ $cx -gt $parallel ]]; then
        wait >&/dev/null
        cx=0
    fi
done
echo -en "***** DONE *****\n Results\n"
for ((i=0; i<=$total; i++)); do
    echo "${output[$i]}"
done
That's because your command (the assignment) is run in a subshell, so this assignment can't influence the parent shell. This boils down to this:
a=something
a='hello senorsmile' &
echo "$a"
Can you guess what the output is? The output is, of course,
something
and not hello senorsmile. The only way for a subshell to communicate with its parent shell is some form of IPC (interprocess communication). I don't have a solution to propose; I only tried to explain why it fails.
If you think about it, it should make sense. What do you think of this?
a=$( echo a; sleep 1000000000; echo b ) &
The command immediately returns (after forking)... but the output is only going to be fully available in... over 31 years.
Assigning a shell variable in the background this way is effectively meaningless. Bash does have built-in co-processing, which should work for you:
http://www.gnu.org/software/bash/manual/bashref.html#Coprocesses
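Another workaround (a sketch of mine, not from the answer above: it swaps the in-memory output array for per-host temp files) is to let each background job write its output to a file and collect everything after wait:
tmpdir=$(mktemp -d)
for host in "${hosts[@]}"; do
    # each background job writes to its own file instead of a shell variable
    ssh -o ConnectTimeout=5 "$host" "$cmds" > "$tmpdir/$host" 2>&1 &
done
wait
for host in "${hosts[@]}"; do
    printf '%s: ' "$host"
    cat "$tmpdir/$host"
done
rm -rf "$tmpdir"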

Execute command inside for loop from variable in bash

Trying to run commands defined in variables inside a for loop:
somevar="Bit of text"
cmd1="command \"search '$somevar' here\""
cmd2="command \"search '$somevar' there\""
for cmd in cmd1 cmd2 ; do
    eval \$$cmd
    ssh server1 eval \$$cmd
done
I've included the variations I have to consider, such as the ssh inside the loop, since these are needed in my script. I think eval is the right direction, but the quotes inside the command get interpreted the wrong way.
Consider this broken example:
$ cmd1="touch \"file with spaces\""
$ $cmd1
Quoting is handled before $cmd1 is expanded, so instead of one file this will create three files named "file, with, and spaces" (the quote characters become part of the first and last file names). One can use eval $cmd1 to force quote removal after the expansion.
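For example:
$ cmd1="touch \"file with spaces\""
$ eval $cmd1    # quote removal now happens after the expansion: creates one file, named "file with spaces"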
Even though it uses eval, the line eval \$$cmd has that same quoting problem since \$$cmd expands to $cmd1, which is then evaluated by eval with the same behaviour as the broken example.
The argument to eval must be the actual command, not the expression $cmd1. This can be done using variable indirection: eval "${!cmd}".
When running this through SSH there is no need for the eval because the remote shell also performs quote removal.
So here is the fixed loop:
for cmd in cmd1 cmd2 ; do
    eval "${!cmd}"
    ssh server1 "${!cmd}"
done
An alternative to indirection is to iterate over the values of cmd1 and cmd2 instead of their names:
for cmd in "$cmd1" "$cmd2" ; do
eval "$cmd"
ssh server1 "$cmd"
done
I see two solutions, either you change your loop to:
for cmd in "$cmd1" "$cmd2" ; do
ssh server1 $cmd
done
or to:
for cmd in cmd1 cmd2 ; do
    ssh server1 ${!cmd}
done
Instead of eval \$$cmd you need to use:
res=$(eval "$cmd")
ssh server1 "$res"
