Currently I have a script that does some extra processing, but ultimately calls the command the user passed (FYI, this is to run some commands in a docker container, but we'll call it foo.sh):
#!/usr/bin/env bash
# ...
runner "$#"
This works great (e.g. foo.sh echo HI), until the user wants to pass multiple commands to be run:
e.g.: foo.sh echo HI && echo BYE
&& is of course interpreted by the ambient shell before being passed into arguments.
Is there a workaround or means of escaping && that might work?
An idiom that often comes in handy for this kind of case:
cmds_q='true'

add_command() {
  local new_cmd
  printf -v new_cmd '%q ' "$@"   # quote each argument so it stays a literal word
  cmds_q+=" && $new_cmd"
}

add_command echo HI
add_command echo BYE

runner bash -c "$cmds_q"
The big advantage here is that add_command can be called with arbitrary arguments (after the first one defining the command to run, of course) with no risk of those arguments being parsed as syntax or used in injection attacks, as long as the caller never modifies cmds_q directly.
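For instance, a follow-on call like the one below (purely illustrative, and assuming the definitions above are already in place) stays safe even when an argument contains shell metacharacters:

# Hypothetical extra call: the argument contains spaces, a semicolon and a
# command substitution, but printf %q escapes all of it, so it is appended to
# cmds_q as one literal argument instead of being parsed as shell syntax.
add_command touch 'a file; $(dangerous) name'

echo "$cmds_q"           # inspect the accumulated command string
runner bash -c "$cmds_q"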
I'm quite new to Linux shell scripting and have a question:
1.) Why does the command
test1="leafpad" && coproc exec "$test1"
work in bash (command line, GNU bash 4.4.12 on a Debian-derived Linux), but the command
test1="coproc" && exec "$test1" leafpad
does not? Error message:
bash: exec: coproc: Not found.
whereas
coproc leafpad
does work as expected.
How must this command be quoted correctly to make it work? I've already tried
test1=`coproc` && exec "$test1" leafpad
test1='coproc' && exec "$test1" leafpad
test1="'coproc'" && exec "$test1" leafpad
test1=`coproc` && exec '$test1' leafpad
test1=`coproc` && exec `$test1` leafpad
test1="coproc" && exec $test1 leafpad
test1=`coproc` && exec $test1 leafpad
and some more variations, but none of them works.
2.) This was the test on the command line only. What I actually need is to do this within a script, so I suppose some additional quoting or escaping of special characters will be needed.
Background:
I have to execute a command, containing many arguments, some of them replaced by variables. Think of something like yad with all its possible arguments in a couple of lines, but let's create an easier example:
my_codeset="437"
my_line="20"
my_filename="somthing.txt"
if [ $value == 0 ]; then my_tabwidth='--tab-width=10'; else my_tabwidth=""; fi # set tabs or not
leafpad --codeset="$my_codeset" "$my_tabwidth" --jump="$my_line" "$my_filename"
where the given variables are subject to change, depending on prior user interaction.
Now this complete command (which is about 6 lines of code in the original) needs to be executed in two variants:
once prefixed with coproc, and once without, depending on a conditional branch.
so what I want is:
if [ $something == 1 ]; then copr_this="coproc"; else copr_this=""; fi
exec '$copr_this' <mycommand, with all its quoted arguments>
instead of
if [ $something == 0 ]; then
    lengthy command here
else
    coproc lengthy command here, exactly repeated.
fi
I already tried to manage it the other way around, which was to put the complete lengthy command in a variable and execute it in a conditional branch:
my_command=`lengthy command with some arguments $arg1 $arg2 ...`
if ...; then
    exec "$my_command"
else
    coproc exec "$my_command"
fi
which also stopped with a "not found" error message. Different ways of quoting didn't solve it, but only produced different error messages. I didn't manage to find the correct quoting for this task. How should this quoting read correctly?
For sure I could repeat the 6 lines of command in the code, but I'm quite sure this could be done more conveniently.
As stated at the beginning: the indirect command execution works on the command line (and within a script as well), as long as coproc isn't involved. I can't get it to work with coproc.
Any help and hints appreciated.
Update after first answer from @Socowi:
Thank you for your comprehensive and quick answer, Socowi. You are obviously right about coproc not being a command, so I understand now why my attempts had to fail. The exec command was added only during my experiments; I had started without it, but after having no success I thought it could help. It was merely an act of desperation. The backquotes in the line my_command=`lengthy command with some arguments $arg1 $arg2 ...` were a typo; there should have been normal quotes, as you pointed out, since I of course intended to execute the command inside the if. I'll probably head for the way you directed me to, using a function { ... } within the script.

But having experimented on this question in the meantime, I came to an astonishing solution: astonishing for me, because of the difference between coproc not being a command and leafpad, with its binary, being a command. So it is clear that test1='coproc' && test2='leafpad' && "$test1 $test2" will fail with the error message bash: coproc leafpad: command not found., which is true. But now: why does test1='coproc' && test2='leafpad' && /bin/bash -c "$test1 $test2" do the job, starting leafpad and letting me enter further commands in the first bash in parallel, just as if I had entered leafpad & only? This time both the builtin (or keyword?) and the command are executed from a variable, which was refused when I tried to enter it directly in the first bash instance. What is true for the first instance of bash should be true for the second also, or do I have a false perspective? Why does it work this way? Does the -c option do anything other than execute the command?
Quoting is not the problem here. There are two other problems:
Order of exec and coproc and builtins vs. binaries
test1="leafpad" && coproc exec "$test1" is the same as coproc exec leafpad.
test1="coproc" && exec "$test1" leafpad is the same as exec coproc leafpad.
The order makes a difference: coproc exec vs. exec coproc. The latter does not work because exec replaces the current shell with the specified program. However, coproc is not a separate program; it is part of bash itself (a shell keyword). There is no coproc binary on your system, and you can run it only from inside bash. Therefore exec fails.
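A quick way to check this from an interactive bash session (leafpad is just the example program from the question):

# type -P searches PATH for an executable file only:
type -P leafpad    # prints something like /usr/bin/leafpad
type -P coproc     # prints nothing: there is no coproc executable for exec to run

# bash itself knows coproc, but only as part of the shell's own syntax:
type coproc        # reports that coproc is part of the shell, not a program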
Command Substitution vs. Strings
In your script ...
my_command=`lengthy command`
if ...; then
    exec "$my_command"
else
    coproc exec "$my_command"
fi
... you did not store lengthy command inside the variable, but you ran that command and stored its output (v=`cmd` is the same as v=$(cmd)) before the if. Then inside the if, you tried to execute the output of the command as another command.
To store the command as a string and execute it later you could use my_command="lengthy command"; $my_command (note the intentionally missing quotes). However, bash offers far better ways to store commands. Instead of strings use arrays or functions. Here we use a function:
my_command() {
    exec lengthy command
}

if ...; then
    coproc my_command
else
    my_command
fi
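Since arrays were mentioned above as the other option, here is a minimal sketch of that variant, reusing the example variables from the question ($something is the questioner's placeholder condition):

# Store the command and its arguments as array elements, one word per element:
my_command=(leafpad --codeset="$my_codeset" --jump="$my_line" "$my_filename")

if [ "$something" == 1 ]; then
    coproc "${my_command[@]}"    # run it as a coprocess
else
    "${my_command[@]}"           # run it in the foreground
fi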
coproc exec?
That being said, I really wonder about the combination of coproc and exec. To me it seems coproc exec cmd ignores the exec part and is the same as coproc cmd. If exec acted normally here, the current shell would be replaced, you would lose the COPROC array and therefore wouldn't need the coproc. Either way, using both at the same time seems strange. Are you really sure you need that exec there? If so, I'd be happy to hear the reasons.
My understanding of shell is very minimal; I'm working on a small task and need some help.
I've been given a python script that can parse command line arguments. One of these arguments is called "-targetdir". When -targetdir is unspecified, it defaults to a /tmp/{USER} folder on the user's machine. I need to direct -targetdir to a specific filepath.
I effectively want to do something like this in my script:
set ${-targetdir, "filepath"}
So that the python script doesn't set a default value. Would anyone know how to do this? I also am not sure if I'm giving sufficient information, so please let me know if I'm being ambiguous.
I strongly suggest modifying the Python script to explicitly specify the desired default rather than engaging in this kind of hackery.
That said, some approaches:
Option A: Function Wrapper
Assuming that your Python script is called foobar, you can write a wrapper function like the following:
foobar() {
  local arg found=0
  for arg; do
    [[ $arg = -targetdir ]] && { found=1; break; }
  done
  if (( found )); then
    # call the real foobar command without any changes to its argument list
    command foobar "$@"
  else
    # call the real foobar, with "-targetdir filepath" added to its argument list
    command foobar -targetdir "filepath" "$@"
  fi
}
If put in a user's .bashrc, any invocation of foobar from the user's interactive shell (assuming they're using bash) will be replaced with the above wrapper. Note that this doesn't impact other shells; export -f foobar will cause other instances of bash to honor the wrapper, but that isn't guaranteed to extend to instances of sh, as used by system() invocations, Python's Popen(..., shell=True), and other places in the system.
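If you do want other bash instances to pick the wrapper up (with the limitations just described), export the function after defining it:

# Only bash children inherit exported functions; plain sh will not see this.
export -f foobar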
Option B: Shell Wrapper
Assume you rename the original foobar script to foobar.real. Then you can make foobar a wrapper, like the following:
#!/usr/bin/env bash
found=0
for arg; do
  [[ $arg = -targetdir ]] && { found=1; break; }
done
if (( found )); then
  exec foobar.real "$@"
else
  exec foobar.real -targetdir "filepath" "$@"
fi
Using exec makes the wrapper replace itself with foobar.real, so the wrapper does not remain in memory while the real script runs.
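A throwaway example (not part of the wrapper) that makes the effect of exec visible:

# The inner shell prints its PID, then replaces itself via exec;
# the replacement prints the same PID because no new process was created.
bash -c 'echo "before exec: $$"; exec sh -c "echo after exec: \$\$"'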
I'm trying to pass an argument to a shell script via exec, within another shell script. However, I get an error that the script does not exist in the path - but that is not the case.
$ ./run_script.sh
$ blob has just been executed.
$ ./run_script.sh: line 8: /home/s37syed/blob.sh test: No such file or directory
For some reason it's treating the entire execution as one whole absolute path to a script - it isn't reading the string as an argument for blob.sh.
Here is the script that is being executed.
#!/bin/bash
#run_script.sh

blobPID="$(pgrep "blob.sh")"

if [[ -z "$blobPID" ]]
then
    echo "blob has just been executed."
    # execs as absolute path - carg not read at all
    ( exec "/home/s37syed/blob.sh test" )
    # this works fine, as expected
    #( exec "/home/s37syed/blob.sh" )
else
    echo "blob is currently running with pid $blobPID"
    ps $blobPID
fi
And the script being invoked by run_script.sh, not doing much, just emulating a long process/task:
#!/bin/bash
#blob.sh

i=0
carg="$1"

if [[ -z "$carg" ]]
then
    echo "nothing entered"
else
    echo "command line arg entered: $carg"
fi

while [ $i -lt 100000 ];
do
    echo "blob is currently running" >> test.txt
    let i=i+1
done
Here is the version of Bash I'm using:
$ bash --version
GNU bash, version 4.2.37(1)-release (x86_64-pc-linux-gnu)
Any advice/comments/help on why this is happening would be much appreciated!
Thanks in advance,
s37syed
Replace
exec "/home/s37syed/blob.sh test"
(which tries to execute a command named "/home/s37syed/blob.sh test" with no arguments)
by
exec /home/s37syed/blob.sh test
(which executes "/home/s37syed/blob.sh" with a single argument "test").
Aside from the quoting problem Cyrus pointed out, I'm pretty sure you don't want to use exec. What exec does is replace the current shell with the command being executed (rather than running the command as a subprocess, as it would without exec). Putting parentheses around it makes it execute that section in a subshell, thus effectively cancelling out the effect of exec.
As chepner said, you might be thinking of the eval command, which performs an extra parsing pass before executing the command. But eval is a huge bug magnet. It's incredibly easy to use eval in unsafe ways (see BashFAQ #48). If you need to construct a command, see BashFAQ #50 for better ways to do it.
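In this case nothing needs to be built up at runtime at all, but if you ever do need to construct a command before running it, the usual safe tool is an array rather than eval. A small sketch using the path and argument from the question:

# Each element is one word of the command; nothing is ever re-parsed as syntax.
cmd=(/home/s37syed/blob.sh test)

# Expanding with "${cmd[@]}" runs blob.sh with exactly one argument, "test".
"${cmd[@]}"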
I want to inject a transparent wrapper command around each shell command in a makefile. Something like the time shell command. (However, not the time command; this is a completely different command.)
Is there a way to specify some sort of wrapper or decorator for each shell command that gmake will issue?
Kind of. You can tell make to use a different shell.
SHELL = myshell
where myshell is a wrapper like
#!/bin/sh
time /bin/sh "$0" "$#"
However, the usual way to do that is to prefix a variable to all command calls. While I can't see any show-stopper for the SHELL approach, the prefix approach has the advantage that it's more flexible (you can specify different prefixes for different commands, and override prefix values on the command line), and could be visibly faster.
# Set Q=@ to not display command names
TIME = time

foo:
	$(Q)$(TIME) foo_compiler
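One advantage mentioned above is that the prefix can be overridden per invocation from the command line, e.g. (variable names as in the snippet above):

# Run the build without timing:
make TIME=

# Or substitute a different wrapper for this one run only:
make TIME='strace -c'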
And here's a complete, working example of a shell wrapper:
#!/bin/bash
RESULTZ=/home/rbroger1/repos/knl/results

# make invokes this wrapper as "wrapper -c 'command'"; drop the -c
if [ "$1" == "-c" ] ; then
  shift
fi

strace -f -o `mktemp $RESULTZ/result_XXXXXXX` -e trace=open,stat64,execve,exit_group,chdir /bin/sh -c "$@" |
  awk '{ if (match($0, /Process PID=[0-9]+ runs in (64|32) bit/) == 0) { print $0 } }'
# EOF
I don't think there is a way to do what you want within GNU Make itself.
I have done things like modifying the PATH environment variable in the Makefile, so that a directory of links to my wrapper script (one link per binary I wanted wrapped) was found before the actual binaries. The script would then look at how it was called and exec the actual binary with the wrapping command.
i.e. exec time "$0" "$@"
These days I usually just update the targets in the Makefile itself. Keeping all your modifications to one file is usually better IMO than managing a directory of links.
Update
I defer to Gilles' answer. It's a better answer than mine.
The program that GNU make(1) uses to run commands is specified by the SHELL make variable. It will run each command as
$SHELL -c <command>
You cannot get make to not put the -c in, since that is required for most shells. -c is passed as the first argument ($1) and <command> is passed as a single argument string as the second argument ($2).
You can write your own shell wrapper that prepends the command that you want, taking into account the -c:
#!/bin/sh
eval time "$2"
That will cause time to be run in front of each command. You need eval since $2 will often not be a single command and can contain all sorts of shell metacharacters that need to be expanded or processed.
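To wire such a wrapper in, save it somewhere executable and point make's SHELL variable at it; for example (the path and file name here are made up):

# Assumes the two-line wrapper above was saved as /usr/local/bin/timeshell:
chmod +x /usr/local/bin/timeshell

# Command-line variable assignments override the makefile, so this works even
# without editing the Makefile itself:
make SHELL=/usr/local/bin/timeshell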
Here is a problem:
In my bash scripts I want to source several files with some checks, so I have:
if [ -r foo ] ; then
    source foo
else
    logger -t $0 -p crit "unable to source foo"
    exit 1
fi

if [ -r bar ] ; then
    source bar
else
    logger -t $0 -p crit "unable to source bar"
    exit 1
fi
# ... etc ...
Naively I tried to create a function that does this:
function safe_source() {
    if [ -r $1 ] ; then
        source $1
    else
        logger -t $0 -p crit "unable to source $1"
        exit 1
    fi
}

safe_source foo
safe_source bar
# ... etc ...
But there is a snag there.
If one of the files foo, bar, etc. has a global such as --
declare GLOBAL_VAR=42
-- it will effectively become:
function safe_source() {
    # ...
    declare GLOBAL_VAR=42
    # ...
}
thus a global variable becomes local.
The question:
An alias in bash seems too weak for this, so must I unroll the above function, and repeat myself, or is there a more elegant approach?
... and yes, I agree that Python, Perl, or Ruby would make my life easier, but when working with legacy systems, one doesn't always have the privilege of choosing the best tool.
It's a bit of a late answer, but declare now supports a -g option, which makes a variable global (when used inside a function). The same works in a sourced file.
If you need a global variable, use:
declare -g DATA="Hello World, meow!"
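A minimal sketch of the difference (function and variable names are made up for illustration):

f_local()  { declare    VAR1="set inside"; }   # plain declare: local to the function
f_global() { declare -g VAR2="set inside"; }   # -g: created at the global scope

f_local;  echo "VAR1=${VAR1-<unset>}"   # -> VAR1=<unset>
f_global; echo "VAR2=${VAR2-<unset>}"   # -> VAR2=set inside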
Yes, Bash's 'eval' command can make this work. 'eval' isn't very elegant, and it sometimes can be difficult to understand and debug code that uses it. I usually try to avoid it, but Bash often leaves you with no other choice (like the situation that prompted your question). You'll have to weigh the pros and cons of using 'eval' for yourself.
Some background on 'eval'
If you're not familiar with 'eval', it's a Bash built-in command that expects you to pass it a string as its parameter. 'eval' dynamically interprets and executes your string as a command in its own right, in the current shell context and scope. Here's a basic example of a common use (dynamic variable assignment):
$> a_var_name="color"
$> eval ${a_var_name}="blue"
$> echo -e "The color is ${color}."
The color is blue.
See the Advanced Bash Scripting Guide for more info and examples: http://tldp.org/LDP/abs/html/internal.html#EVALREF
Solving your 'source' problem
To make 'eval' handle your sourcing issue, you'd start by rewriting your function, 'safe_source()'. Instead of actually executing the command, 'safe_source()' should just PRINT the command as a string on STDOUT:
function safe_source() { echo eval " \
    if [ -r $1 ] ; then \
        source $1 ; \
    else \
        logger -t $0 -p crit \"unable to source $1\" ; \
        exit 1 ; \
    fi \
"; }
Also, you'll need to change your function invocations slightly, to actually execute the 'eval' command:
`safe_source foo`
`safe_source bar`
(Those are backticks/backquotes, BTW.)
How it works
In short:
We converted the function into a command-string emitter.
Our new function emits an 'eval' command invocation string.
Our new backticks call the new function in a subshell context, returning the 'eval' command string output by the function back up to the main script.
The main script executes the 'eval' command string, captured by the backticks, in the main script context.
The 'eval' command re-parses and executes that command string in the main script context, running the whole if-then-else block, including (if the file exists) executing the 'source' command.
It's kind of complex. Like I said, 'eval' is not exactly elegant. In particular, there are a couple of special things you should notice about the changes we made:
The entire IF-THEN-ELSE block has become one whole double-quoted string, with backslashes at the end of each line "hiding" the newlines.
Some of the shell special characters, like the double quote ('"'), have been backslash-escaped, while others ('$') have been left un-escaped.
'echo eval' has been prepended to the whole command string.
Extra semicolons have been appended to all of the lines where a command gets executed to terminate them, a role that the (now-hidden) newlines originally performed.
The function invocation has been wrapped in backticks.
Most of these changes are motivated by the fact that 'eval' won't handle newlines. It can only deal with multiple commands if we combine them into a single line delimited by semicolons, instead. The new function's line breaks are purely a formatting convenience for the human eye.
If any of this is unclear, run your script with Bash's '-x' (debug execution) flag turned on, and that should give you a better picture of exactly what's happening. For instance, in the function context, the function actually produces the 'eval' command string by executing this command:
echo eval ' if [ -r <INCL_FILE> ] ; then source <INCL_FILE> ; else logger -t <SCRIPT_NAME> -p crit "unable to source <INCL_FILE>" ; exit 1 ; fi '
Then, in the main context, the main script executes this:
eval if '[' -r <INCL_FILE> ']' ';' then source <INCL_FILE> ';' else logger -t <SCRIPT_NAME> -p crit '"unable' to source '<INCL_FILE>"' ';' exit 1 ';' fi
Finally, again in the main context, the eval command executes these two commands if the file exists:
'[' -r <INCL_FILE> ']'
source <INCL_FILE>
Good luck.
declare inside a function makes the variable local to that function. export affects the environment of child processes, not the current or parent environments.
You can set the values of your variables inside the functions and do the declare -r, declare -i or declare -ri after the fact.
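A small sketch of that "after the fact" pattern (names here are made up for illustration):

get_answer() {
    ANSWER=42          # plain assignment inside the function: stays global
}

get_answer
declare -ri ANSWER     # add the read-only and integer attributes at global scope
echo "$ANSWER"         # -> 42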