I have a bash script:
function run_cmd
{
CMD="$#"
echo ">>> $CMD"
exec $CMD
}
CLUSTERS_PKG="abc1, abc2"
# run command
run_cmd "package upload -c '$CLUSTERS_PKG'"
However, when I run this, I get a usage error from the package command. If I run the same command via copy+paste, it works fine.
It doesn't seem to like me passing the $CLUSTERS_PKG variable with spaces and quotes in it. How do I properly run "everything" that is passed into run_cmd without the shell clobbering things?
Be sure to read the link posted by Adrian Frühwirth. This answer is really just a short summary of the advice found there.
If you pass the argument like this:
run_cmd "package upload -c '$CLUSTERS_PKG'"
then you'll have to use eval in your function so that the single quotes are treated as quoting operators:
run_cmd () {
CMD="$#"
echo ">>> $CMD"
eval "$CMD"
}
If you pass the arguments like this:
run_cmd package upload -c "$CLUSTERS_PKG"
then you can use an array in your function (much safer than using eval):
run_cmd () {
CMD=( "$#" )
echo ">>> ${CMD[#]}"
"${CMD[#]}"
}
You don't really need to use exec; in fact, you probably don't want to, as it causes the current process to be replaced by $CMD: you won't return to the shell that called run_cmd after $CMD exits.
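To illustrate that point, here is a minimal throwaway script (not from the original question) showing how exec replaces the shell process:
#!/usr/bin/env bash
echo "before exec"
exec date
echo "after exec"   # never runs: the shell process was replaced by date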
You can use Bash arrays:
function run_cmd
{
arr=( "$#" )
echo ">>> ${arr[#]}"
exec "${arr[#]}"
}
I've been working through creating a script to move some files from a local machine to a remote server. As part of that process I have a function that can either be called directly or wrapped with 'declare -fp' and sent along to an ssh command. The code I have so far looks like this:
export REMOTE_HOST=myserver
export TMP=eyerep-files
doTest()
{
echo "Test moving files from $TMP with arg $1"
declare -A files=(["abc"]="123" ["xyz"]="789")
echo "Files: ${!files[#]}"
for key in "${!files[#]}"
do
echo "$key => ${files[$key]}"
done
}
moveTest()
{
echo "attempting move with wrapped function"
ssh -t "$REMOTE_HOST" "$(declare -fp doTest|envsubst); doTest ${1#Q}"
}
moveTest $2
If I run the script with something like
./myscript.sh test dev
I get the output
attempting move with wrapped function
Test moving files from eyerep-files with arg dev
Files: abc xyz
bash: line 7: => ${files[]}: bad substitution
It seems like the string expansion for the for loop is not working correctly. Is this expected behaviour? If so, is there an alternative way to loop through an array that would avoid this issue?
If you're confident that your remote account's default shell is bash, this might look like:
moveTest() {
ssh -t "$REMOTE_HOST" "$(declare -f doTest; declare -p $(compgen -e)); doTest ${1#Q}"
}
If you aren't, it might instead be:
moveTest() {
ssh -t "$REMOTE_HOST" 'exec bash -s' <<EOF
set -- ${@@Q}
$(declare -f doTest; declare -p $(compgen -e))
doTest \"\$#\"
EOF
}
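For reference, ${parameter@Q} (available in bash 4.4 and later) expands to a shell-quoted form of the value that can safely be re-parsed as shell input, which is what lets the argument survive the trip through ssh. A tiny sketch, with v as an arbitrary example variable:
v='two words; $dollar'
echo "${v@Q}"
# prints: 'two words; $dollar'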
I managed to find an answer here: https://unix.stackexchange.com/questions/294378/replacing-only-specific-variables-with-envsubst/294400
Since I'm exporting the global variables, I can get a list of them using compgen and use that list with envsubst to specify which variables I want to replace. My finished function ended up looking like:
moveTest()
{
echo "attempting move with wrapped function"
ssh -t "$REMOTE_HOST" "$(declare -fp doTest|envsubst "$(compgen -e | awk '$0="${"$0"}"') '${1}'"); doTest ${1#Q}"
}
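As a small illustration of how envsubst's SHELL-FORMAT argument limits which variables get replaced (GREETING is a made-up example variable; envsubst comes from GNU gettext):
export GREETING=hello
printf '$GREETING $HOME\n' | envsubst '$GREETING'
# prints: hello $HOME    (only the listed variable was substituted)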
I want to concatenate a command specified in a function with a string and execute it afterwards.
I will simplify my need with an example that executes ls -l -a:
#!/bin/bash
echo -e "specify command"
read command # ls
echo -e "specify argument"
read arg # -l
test () {
$command $arg
}
eval 'test -a'
Except that this doesn't work.
Use an array, like this:
args=()
read -r command
args+=( "$command" )
read -r arg
args+=( "$arg" )
"${args[#]}" -a
If you want a function, then you could do this:
run_with_extra_switch () {
"$#" -a
}
run_with_extra_switch "${args[#]}"
#!/bin/bash
echo -e "specify command"
read command # ls
echo -e "specify argument"
read arg # -l
# using variable
fun1 () {
line="$command $arg"
}
# call the function
fun1
# parameter expansion will expand to the command and execute
$line
# or using stdout (overhead)
fun2 () {
echo "$command $arg"
}
# process expansion will execute function in sub-shell and output will be expanded to a command and executed
$(fun2)
This will work for the given question; however, to understand how it works, look at shell expansion, and pay attention to the risk of executing arbitrary commands.
Before executing the command, you can prepend it with printf '<%s>\n', for example, to show what will be executed.
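For example, a quick probe of what word splitting produces (using the ls example from the question):
line="ls -l"
printf '<%s>\n' $line
# prints <ls> then <-l>: the two words the shell will execute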
Currently at work on the following version of Bash:
GNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu)
My current script:
#!/usr/bin/env bash
function main() {
local commands=$@
for command in ${commands[@]} ; do
echo "command arg: $command"
done
}
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
set -e
main $@
fi
In simple terms, this script will only run main if it is the script being called, similar to Python's if __name__ == '__main__' convention.
In the main function, I'm simply looping over all the command variables, but quote escaping isn't happening as expected:
$ tests/simple /bin/bash -c 'echo true'
command arg: /bin/bash
command arg: -c
command arg: echo
command arg: true
The last argument here should be parsed by Bash as a single argument; nevertheless, it is split into individual words.
What am I doing wrong? I want echo true to show up as a single argument.
You are getting the right output except for the 'echo true' part which is getting word split. You need to use double quotes in your code:
main "$#"
And in the function:
function main() {
local commands=("$#") # need () and double quotes here
for command in "${commands[#]}" ; do
echo "command arg: $command"
done
}
The function gets its own copy of $@ and hence you don't really need to make a local copy of it.
With these changes, we get this output:
command arg: /bin/bash
command arg: -c
command arg: echo true
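To see the word-splitting difference directly, here is a minimal sketch (show is a hypothetical helper, not part of the original script):
show() { printf '<%s>\n' "$@"; }
set -- /bin/bash -c 'echo true'
show $@    # unquoted: prints </bin/bash> <-c> <echo> <true>
show "$@"  # quoted:   prints </bin/bash> <-c> <echo true>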
In general, it is not good to store shell commands in a variable. See BashFAQ/050.
See also:
How to copy an array in Bash?
You'll likely want to do something more like this:
function main() {
while [ $# -gt 0 ]
do
echo "$1"
shift
done
}
main /bin/bash -c "echo true"
The key really is $#, a special parameter (not an environment variable) that the shell automatically sets to the number of command-line arguments, not counting the invocation name $0. If the function/script was called with the following command line:
$ main /bin/bash -c "echo true"
$# would have the value 3 for the arguments "/bin/bash", "-c", and "echo true". The last one counts as a single argument because it is enclosed in quotes.
The shift command "shifts" all command line arguments one position to the left.
The leftmost positional argument ($1) is lost; the name used to call the function (main) is not part of the positional parameters and is unaffected.
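A quick sketch of shift in action:
set -- a b c
echo "$1"   # prints a
shift
echo "$1"   # prints b: everything moved one position left, and 'a' is gone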
Quoting of $@ passed to main was your issue, but I thought I would mention that you also do not need to assign the value inside main in order to use it.
You could do the following:
main()
{
for command
do
echo "command arg: $command"
done
}
main "$#"
I need to differentiate two cases: ( …subshell… ) vs $( …command substitution… )
I already have the following function which differentiates between being run in either a command substitution or a subshell and being run directly in the script.
#!/bin/bash
set -e
function setMyPid() {
myPid="$(bash -c 'echo $PPID')"
}
function echoScriptRunWay() {
local myPid
setMyPid
if [[ $myPid == $$ ]]; then
echo "function run directly in the script"
else
echo "function run from subshell or substitution"
fi
}
echoScriptRunWay
echo "$(echoScriptRunWay)"
( echoScriptRunWay; )
Example output:
function run directly in the script
function run from subshell or substitution
function run from subshell or substitution
Desired output
But I want to update the code so it differentiates between command substitution and subshell. I want it to produce the output:
function run directly in the script
function run from substitution
function run from subshell
P.S. I need to differentiate these cases because Bash has different behavior for the built-in trap command when run in command substitution and in a subshell.
P.P.S. I also care about the echoScriptRunWay | cat case, but that is a separate question, which I have asked elsewhere.
I don't think one can reliably test if a command is run inside a command substitution.
You could test if stdout differs from the stdout of the main script, and if it does, boldly infer it might have been redirected. For example
samefd() {
# Test if the passed file descriptors share the same inode
perl -MPOSIX -e "exit 1 unless (fstat($1))[1] == (fstat($2))[1]"
}
exec {mainstdout}>&1
whereami() {
if ((BASHPID == $$))
then
echo "In parent shell."
elif samefd 1 $mainstdout
then
echo "In subshell."
else
echo "In command substitution (I guess so)."
fi
}
whereami
(whereami)
echo $(whereami)
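To see why the BASHPID == $$ test separates the parent shell from everything else, a quick sketch: BASHPID is the PID of the current bash process, while $$ stays the PID of the main shell.
echo "top level:    $$ $BASHPID"             # same value twice
( echo "subshell:     $$ $BASHPID" )         # BASHPID differs: new process
echo "substitution: $(echo "$$ $BASHPID")"   # BASHPID differs here too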
I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
cmnd=$1
$($cmnd)
if [ $? != 0 ]; then
printf "Error when executing command: '$command'"
exit $ERROR_CODE
fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
typeset cmnd="$*"
typeset ret_code
echo cmnd=$cmnd
eval $cmnd
ret_code=$?
if [ $ret_code != 0 ]; then
printf "Error: [%d] when executing command: '$cmnd'" $ret_code
exit $ret_code
fi
}
command="ls -l | grep p"
safeRunCommand "$command"
Now if you look at this code, the things that I changed are:
Use of typeset is not necessary, but it is good practice. It makes cmnd and ret_code local to safeRunCommand.
Use of ret_code is not necessary, but it is good practice to store the return code in some variable (and store it as soon as possible), so that you can use it later, as I did in printf "Error: [%d] when executing command: '$cmnd'" $ret_code.
Pass the command with quotes surrounding it, as in safeRunCommand "$command". If you don't, then cmnd will get only the value ls and not ls -l. This is even more important if your command contains pipes.
You can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending on how complex your command argument is.
eval is used so that a command containing pipes works correctly.
Note: remember that some commands, like grep, return 1 even though there isn't any error. If grep finds something, it returns 0; otherwise 1.
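For instance:
echo hello | grep -q hello; echo $?   # 0: a match was found
echo hello | grep -q bye; echo $?     # 1: no match (grep reserves 2 for real errors)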
I tested this with KornShell and Bash, and it worked fine. Let me know if you face issues running it.
Try
safeRunCommand() {
"$#"
if [ $? != 0 ]; then
printf "Error when executing command: '$1'"
exit $ERROR_CODE
fi
}
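Call it with the command and its arguments as separate words, for example:
safeRunCommand ls -l /tmp
Note that, unlike the eval-based version above, this form cannot run a pipeline such as ls -l | grep p as a single command: the | would be passed as a literal argument.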
It should be $cmnd instead of $($cmnd). It works fine with that change on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmnd=$1; $cmnd with "$@". And do not run your script as command="some cmd"; safeRunCommand command. Run it as safeRunCommand some cmd.
Also, when you have to debug your Bash scripts, execute them with the -x flag: bash -x s.sh.
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return, not exit, from your subroutine, so the calling block can test the success or failure of a particular command. That aside, you don't capture ERROR_CODE, so it is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh
command="/bin/date -u" #...Example Only
safeRunCommand() {
cmnd="$#" #...insure whitespace passed and preserved
$cmnd
ERROR_CODE=$? #...so we have it for the command we want
if [ ${ERROR_CODE} != 0 ]; then
printf "Error when executing command: '${command}'\n"
exit ${ERROR_CODE} #...consider 'return()' here
fi
}
safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash
# (assumes the ec function above has been defined or sourced)
ec -h 'ls -l | grep p' > /dev/null   # runs the command quietly and sets $ec in this shell
if [[ "$ec" != "0" ]]; then
echo "Error when executing command: 'grep p' [$ec]"
exit $ec
fi
You should also note that the exit code you see will be for the grep command being run, as it is the last command executed, not the ls.