I'm quite new to linux shell scripting and have a question:
1.) Why does the command
test1="leafpad" && coproc exec "$test1"
work in bash (command line, GNU bash 4.4.12 on a Debian-derived Linux), but the command
test1="coproc" && exec "$test1" leafpad
does not? Error message:
bash: exec: coproc: Not found.
whereas
coproc leafpad
does work as expected.
How must this command be quoted correctly to make it work? I've already tried
test1=`coproc` && exec "$test1" leafpad
test1='coproc' && exec "$test1" leafpad
test1="'coproc'" && exec "$test1" leafpad
test1=`coproc` && exec '$test1' leafpad
test1=`coproc` && exec `$test1` leafpad
test1="coproc" && exec $test1 leafpad
test1=`coproc` && exec $test1 leafpad
and some more variations, but none of them works.
2.) This was tested on the command line only. But what I really need is to do this within a script, so I'm sure some additional quoting or escaping of special characters will be required.
Background:
I have to execute a command containing many arguments, some of them replaced by variables. Think of something like yad with all its possible arguments spanning a couple of lines, but let's use an easier example:
my_codeset="437"
my_line="20"
my_filename="something.txt"
if [ $value == 0 ]; then my_tabwidth='--tab-width=10'; else my_tabwidth=""; fi # set tabs or not
leafpad --codeset="$my_codeset" "$my_tabwidth" --jump="$my_line" "$my_filename"
wherein the given variables are subject to change, depending on prior user interaction.
Now this complete command (which is about 6 lines of code in the original) needs to be executed in two variants:
once preceded by coproc, and once without it, depending on a conditional branch.
so what I want is:
if [ $something == 1 ]; then copr_this="coproc"; else copr_this=""; fi
exec '$copr_this' <mycommand, with all its quoted arguments>
instead of
if [ $something == 0 ]; then
lengthy command here
else
coproc lengthy command here, exactly repeated.
fi
I already tried to manage it the other way around, which was to put the complete lengthy command in a variable and execute it in a conditional branch:
my_command=`lengthy command with some arguments $arg1 $arg2 ...`
if...
exec "$my_command"
else
coproc exec "$my_command"
fi
which also stopped with the error message "not found". Different ways of quoting didn't solve it, but only produced different error messages. I didn't manage to find out the correct quoting for this task. How should this quoting read correctly?
For sure I could repeat the 6 lines of command in the code, but I'm quite sure this could be done more conveniently.
As stated in the beginning: the indirect command execution works on the command line (and within a script as well), as long as coproc isn't involved. I can't get it to work with coproc.
Any help and hints appreciated.
Update after first answer from @Socowi:
Thank you for your comprehensive and quick answer, Socowi. You are obviously right about coproc not being a command. So I understand now why my attempts had to fail. The exec command was added only during my experiments. I had started without it, but after having no success I thought it could help; it was merely an act of desperation. The backward quotes in the line my_command=`lengthy command with some arguments $arg1 $arg2 ...` were a typo; there should have been normal quotes, as you pointed out, since of course I intended to execute the command within the if. I'll probably go the way you directed me to, using a function { ... } within the script.
But having experimented on this question in the meantime, I came to an astonishing solution. Astonishing for me because of the difference between coproc, which is not a command, and leafpad, whose binary is one. It is clear that test1='coproc' && test2='leafpad' && "$test1 $test2" will fail with the error message bash: coproc leafpad: command not found., which is true. But now: why does test1='coproc' && test2='leafpad' && /bin/bash -c "$test1 $test2" do the job, starting leafpad and letting me enter further commands in bash in parallel, just as if I had entered leafpad & only? This time both the builtin (or keyword?) and the command are executed from variables, which was refused when I tried to enter them directly in the first bash instance. What is true for the first instance of bash should be true for the second also, or do I have a false perspective? Why does it work this way? Does the -c option do anything other than execute the command?
Quoting is not the problem here. There are two other problems:
Order of exec and coproc, and builtins vs. binaries
test1="leafpad" && coproc exec "$test1" is the same as coproc exec leafpad.
test1="coproc" && exec "$test1" leafpad is the same as exec coproc leafpad.
The order makes a difference: coproc exec vs. exec coproc. The latter does not work because exec replaces the current shell with the specified program. However, coproc is part of the shell itself (a keyword, not an external program). There is no coproc binary on your system; you can run it only from inside bash. Therefore exec fails.
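You can verify this with the type builtin (the leafpad path shown is just where it typically lives on a Debian system):
$ type coproc
coproc is a shell keyword
$ type leafpad
leafpad is /usr/bin/leafpad
Only the second one names an executable file that exec could actually load.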
Command Substitution vs. Strings
In your script ...
my_command=`lengthy command`
if ...; then
    exec "$my_command"
else
    coproc exec "$my_command"
fi
... you did not store lengthy command inside the variable, but you ran that command and stored its output (v=`cmd` is the same as v=$(cmd)) before the if. Then inside the if, you tried to execute the output of the command as another command.
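Here is the same mistake in miniature, using date as a stand-in for the lengthy command (your date output will differ, of course):
$ my_command=`date`   # runs date right now and stores its output
$ echo "$my_command"
Mon Jan  1 00:00:00 UTC 2024
$ "$my_command"       # tries to run that output as a command name
bash: Mon Jan  1 00:00:00 UTC 2024: command not found
This is exactly the kind of "not found" error you observed.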
To store the command as a string and execute it later you could use my_command="lengthy command"; $my_command (note the intentionally missing quotes). However, bash offers far better ways to store commands. Instead of strings use arrays or functions (an array sketch follows after the function example below). Here we use a function:
my_command() {
    exec lengthy command
}
if ...; then
    coproc my_command
else
    my_command
fi
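For completeness, a minimal sketch of the array variant, reusing the variable names from your question (the if condition stays pseudocode, as above):
my_command=(leafpad --codeset="$my_codeset" --jump="$my_line" "$my_filename")
if ...; then
    coproc "${my_command[@]}"
else
    "${my_command[@]}"
fi
"${my_command[@]}" expands to exactly the words you stored, one argument each, so arguments containing spaces survive intact.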
coproc exec?
That being said, I really wonder about the combination of coproc and exec. To me it seems coproc exec cmd ignores the exec part and is the same as coproc cmd. If exec acted normally here, the current shell would be replaced, you would lose the COPROC array and therefore wouldn't need the coproc. Either way, using both at the same time seems strange. Are you really sure you need that exec there? If so, I'd be happy to hear the reasons.
Related
Currently I have a script that does some extra processing, but ultimately calls the command the user passed (FYI, this is to run some commands in a docker container, but we'll call it foo.sh):
#!/usr/bin/env bash
# ...
runner "$@"
This works great (e.g. foo.sh echo HI), until the users wants to pass multiple commands to be run:
e.g.: foo.sh echo HI && echo BYE
&& is of course interpreted by the calling shell before it ever reaches the script's arguments.
Is there a workaround or means of escaping && that might work?
An idiom that often comes in handy for this kind of case:
cmds_q='true'
add_command() {
    local new_cmd
    printf -v new_cmd '%q ' "$@"
    cmds_q+=" && $new_cmd"
}
add_command echo HI
add_command echo BYE
runner bash -c "$cmds_q"
The big advantage here is that add_command can be called with arbitrary arguments (after the first one defining the command to run, of course) with no risk of those arguments being parsed as syntax / used in injection attacks, so long as the caller never directly modifies cmds_q.
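For illustration, here is roughly what cmds_q contains after two calls; note how %q escapes the argument with a space (the real string also carries a few extra blanks, which the shell ignores):
$ add_command echo 'HI THERE'
$ add_command echo BYE
$ echo "$cmds_q"
true && echo HI\ THERE && echo BYE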
Here is a simple test case script which behaves differently in zsh vs. bash when I run it with $ source test_script.sh from the command line. I don't know why there is a difference, given that my shebang clearly states I want bash to run my script, other than the fact that the which command is a built-in in zsh and a program in bash. (FYI: the shebang directory is where my bash program lives, which may not be the same as yours; I installed a new version using Homebrew.)
#!/usr/local/bin/bash
if [ "$(which ls)" ]; then
echo "ls command found"
else
echo "ls command not found"
fi
if [ "$(which foo)" ]; then
echo "foo command found"
else
echo "foo command not found"
I run this script with source ./test-script.sh from zsh and Bash.
Output in zsh:
ls command found
foo command found
Output in bash:
ls command found
foo command not found
My understanding is that by default test or [ ] (which are the same thing) evaluates a string to true if it's not empty/null. To illustrate:
zsh:
$ which foo
foo not found
bash:
$ which foo
$
Moreover if I redirect standard error in zsh like:
$ which foo 2> /dev/null
foo not found
zsh still seems to send foo not found to standard output, which is (I am guessing) why my test case passed for both under zsh: the expansion of "$(which xxx)" returned a string in both cases, e.g. /some/directory and foo not found. (Will zsh ALWAYS return a string?)
Lastly, if I remove the double quotes (e.g. $(which xxx)), zsh gives me an error. Here is the output:
ls command found
test_scritp.sh:27: condition expected not:
I am guessing zsh wanted me to use [ ! "$(which xxx)" ]. I don't understand why? It never gave that error when running in bash (and isn't this supposed to run in bash anyway?!).
Why isn't my script using bash? Why is something so trivial as this not working? I understand how to make it work fine in both using the -e option, but I simply want to understand why this is all happening. It's driving me bonkers.
There are two separate problems here.
First, the proper command to use is type, not which. Like you note, the command which is a zsh built-in, whereas in Bash, it will execute whatever which command happens to be on your system. There are many variants with different behaviors, which is why POSIX opted to introduce a replacement instead of trying to prescribe a particular behavior for which -- then there would be yet one more possible behavior, and no way to easily root out all the other legacy behaviors. (One early common problem was with a which command which would examine the csh environment, even if you actually used a different shell.)
Secondly, examining a command's string output is a serious antipattern, because strings differ between locales ("not found" vs. "nicht gefunden" vs. "ei löytynyt" vs. etc etc) and program versions -- the proper solution is to examine the command's exit code.
if type ls >/dev/null 2>&1; then
echo "ls command found"
else
echo "ls command not found"
fi
if type foo >/dev/null 2>&1; then
echo "foo command found"
else
echo "foo command not found"
fi
(A related antipattern is to examine $? explicitly. There is very rarely any need to do this, as it is done naturally and transparently by the shell's flow control statements, like if and while.)
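For example, instead of checking $? by hand:
grep -q root /etc/passwd
if [ $? -eq 0 ]; then echo "found"; fi
let if test the command directly:
if grep -q root /etc/passwd; then echo "found"; fi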
Regarding quoting, the shell performs whitespace tokenization and wildcard expansion on unquoted values, so if $string is command not found, the expression
[ $string ]
without quotes around the value evaluates to
[ command not found ]
which looks to the shell like the string "command" followed by some cruft which isn't syntactically valid.
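You can see both behaviors directly in bash (with a longer string you would get the familiar "too many arguments" instead):
$ string="command not found"
$ [ $string ]     # word-splits into: [ command not found ]
bash: [: not: binary operator expected
$ [ "$string" ] && echo "non-empty"
non-empty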
Lastly, as we uncovered in the chat session (linked from comments) the OP was confused about the precise meaning of source, and ended up running a Bash script in a separate process instead. (./test-script instead of source ./test-script). For the record, when you source a file, you cause your current shell to read and execute it; in this setting, the script's shebang line is simply a comment, and is completely ignored by the shell.
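A quick way to convince yourself (probe.sh is a made-up name; the version numbers will differ on your machine):
$ cat probe.sh
#!/usr/local/bin/bash
echo "bash=${BASH_VERSION-} zsh=${ZSH_VERSION-}"
$ ./probe.sh             # new process, the shebang selects bash
bash=4.4.12(1)-release zsh=
$ source ./probe.sh      # from zsh: runs in the current zsh, shebang ignored
bash= zsh=5.3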
I'm trying to pass an argument to a shell script via exec, within another shell script. However, I get an error that the script does not exist in the path - but that is not the case.
$ ./run_script.sh
$ blob has just been executed.
$ ./run_script.sh: line 8: /home/s37syed/blob.sh test: No such file or directory
For some reason it's treating the entire execution as one whole absolute path to a script - it isn't reading the string as an argument for blob.sh.
Here is the script that is being executed.
#!/bin/bash
#run_script.sh
blobPID="$(pgrep "blob.sh")"
if [[ -z "$blobPID" ]]
then
    echo "blob has just been executed."
    # execs as absolute path - carg not read at all
    ( exec "/home/s37syed/blob.sh test" )
    # this works fine, as expected
    #( exec "/home/s37syed/blob.sh" )
else
    echo "blob is currently running with pid $blobPID"
    ps $blobPID
fi
And the script being invoked by run_script.sh, not doing much, just emulating a long process/task:
#!/bin/bash
#blob.sh
i=0
carg="$1"
if [[ -z "$carg" ]]
then
    echo "nothing entered"
else
    echo "command line arg entered: $carg"
fi
while [ $i -lt 100000 ];
do
    echo "blob is currently running" >> test.txt
    let i=i+1
done
Here is the version of Bash I'm using:
$ bash --version
GNU bash, version 4.2.37(1)-release (x86_64-pc-linux-gnu)
Any advice/comments/help on why this is happening would be much appreciated!
Thanks in advance,
s37syed
Replace
exec "/home/s37syed/blob.sh test"
(which tries to execute a command named "/home/s37syed/blob.sh test" with no arguments)
by
exec /home/s37syed/blob.sh test
(which executes "/home/s37syed/blob.sh" with a single argument "test").
Aside from the quoting problem Cyrus pointed out, I'm pretty sure you don't want to use exec. What exec does is replace the current shell with the command being executed (rather than running the command as a subprocess, as it would without exec). Putting parentheses around it makes it execute that section in a subshell, thus effectively cancelling out the effect of exec.
As chepner said, you might be thinking of the eval command, which performs an extra parsing pass before executing the command. But eval is a huge bug magnet. It's incredibly easy to use eval in unsafe ways (see BashFAQ #48). If you need to construct a command, see BashFAQ #50 for better ways to do it.
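A minimal sketch of the BashFAQ #50 approach applied here: store the command in an array, one word per element, then expand it.
# the array keeps "test" as a separate word, so it arrives as $1 in blob.sh
cmd=(/home/s37syed/blob.sh test)
"${cmd[@]}"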
Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not the next. Thus, I cannot do the update by simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
    echo "$@" >> /tmp/cmd.log
    "$@"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Are you looking for set -x (or bash -x)? This writes every command to standard error before executing it.
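For example:
$ set -x
$ echo hello
+ echo hello
hello
The + prefix (controlled by PS4) marks the traced command.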
1. Use script and everything in your session will be archived.
2. Use -x to trace your script, e.g. run it as bash -x script_name args....
3. Use set -x in your current bash (your commands will be echoed with globs and variables substituted).
4. Combine 2 and 3 with 1.
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
# write a function definition into saved.txt; '{' and '}' are ordinary
# arguments to echo here, so they land literally in the file
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
# source the file to define fun123, then call it
. saved.txt
fun123
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
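For example, a common pattern (the file names and the ERROR string are hypothetical):
find . -name '*.log' -print0 | xargs -0 grep -l ERROR
find emits NUL-separated file names, and xargs packs as many of them as will fit onto each grep invocation.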
Let's imagine I have a bash script, where I call this:
bash -c "some_command"
do something with code of some_command here
Is it possible to obtain the exit code of some_command? I'm not executing some_command directly in the shell running the script because I don't want to alter its environment.
$? will contain the return code of some_command just as usual.
Of course it might also contain a code from bash, in case something went wrong before your command could even be executed (wrong filename, whatnot).
Here's an illustration of $? and the parenthesis subshell mentioned by Paggas and Matti:
$ (exit a); echo $?
-bash: exit: a: numeric argument required
255
$ (exit 33); echo $?
33
In the first case, the code is a Bash error and in the second case it's the exit code of exit.
You can use the $? variable; check out the bash documentation for this. It stores the exit status of the last command.
Also, you might want to check out the parenthesis-style command blocks of bash (e.g. comm1 && (comm2 || comm3) && comm4); they are always executed in a subshell, thus not altering the current environment, and are more powerful as well!
EDIT: For instance, when using ()-style blocks as compared to bash -c 'command', you don't have to worry about escaping any argument strings with spaces, or any other special shell syntax. You directly use the shell syntax, it's a normal part of the rest of the code.
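For example (the second pwd output depends on where you started; /home/user is just a placeholder):
$ (cd /tmp && pwd); pwd   # the cd only affects the subshell
/tmp
/home/user
$ (exit 7); echo $?
7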