Passing multiple arguments to a UNIX shell script - bash

I have the following (bash) shell script, that I would ideally use to kill multiple processes by name.
#!/bin/bash
kill `ps -A | grep $* | awk '{ print $1 }'`
However, while this script works if one argument is passed:
end chrome
(the name of the script is end)
it does not work if more than one argument is passed:
$end chrome firefox
grep: firefox: No such file or directory
What is going on here?
I thought $* passed all the arguments to the shell script in sequence. I'm not mistyping anything in my input - and the programs I want to kill (chrome and firefox) are open.
Any help is appreciated.

Remember what grep does with multiple arguments - the first is the word to search for, and the remainder are the files to scan.
Also remember that $*, "$*", and $@ all lose track of white space in arguments, whereas the magical "$@" notation does not.
So, to deal with your case, you're going to need to modify the way you invoke grep. You either need to use grep -F (aka fgrep) with options for each argument, or you need to use grep -E (aka egrep) with alternation. In part, it depends on whether you might have to deal with arguments that themselves contain pipe symbols.
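For instance, a single-invocation sketch using grep -E alternation might look like this (a sketch only; it assumes none of the names contain regex metacharacters or pipe symbols):
# join the arguments with '|' inside the command substitution, then match whole words
pattern=$(IFS='|'; printf '%s' "$*")
kill $(ps -A | grep -E -w "$pattern" | awk '{print $1}')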
It is surprisingly tricky to do this reliably with a single invocation of grep; you might well be best off tolerating the overhead of running the pipeline multiple times:
for process in "$#"
do
kill $(ps -A | grep -w "$process" | awk '{print $1}')
done
If the overhead of running ps multiple times like that is too painful (it hurts me to write it - but I've not measured the cost), then you probably need to do something like:
case $# in
(0) echo "Usage: $(basename $0 .sh) procname [...]" >&2; exit 1;;
(1) kill $(ps -A | grep -w "$1" | awk '{print $1}');;
(*) tmp=${TMPDIR:-/tmp}/end.$$
    trap "rm -f $tmp.?; exit 1" 0 1 2 3 13 15
    ps -A > $tmp.1
    for process in "$@"
    do
        grep "$process" $tmp.1
    done |
    awk '{print $1}' |
    sort -u |
    xargs kill
    rm -f $tmp.1
    trap 0
    ;;
esac
The use of plain xargs is OK because it is dealing with a list of process IDs, and process IDs do not contain spaces or newlines. This keeps the simple code for the simple case; the complex case uses a temporary file to hold the output of ps and then scans it once per process name in the command line. The sort -u ensures that if some process happens to match more than one of your keywords (for example, a process running grep -E '(firefox|chrome)' would match both), it only gets sent one signal.
The trap lines etc ensure that the temporary file is cleaned up unless someone is excessively brutal to the command (the signals caught are HUP, INT, QUIT, PIPE and TERM, aka 1, 2, 3, 13 and 15; the zero catches the shell exiting for any reason). Any time a script creates a temporary file, you should have similar trapping around the use of the file so that it will be cleaned up if the process is terminated.
If you're feeling cautious and you have GNU Grep, you might add the -w option so that the names provided on the command line only match whole words.
All the above will work with almost any shell in the Bourne/Korn/POSIX/Bash family (you'd need to use backticks with strict Bourne shell in place of $(...), and the leading parenthesis on the conditions in the case are also not allowed with Bourne shell). However, in bash you can use an array to get things handled right.
n=0
unset args # Force args to be an empty array (it could be an env var on entry)
for i in "$@"
do
    args[$((n++))]="-e"
    args[$((n++))]="$i"
done
kill $(ps -A | fgrep "${args[@]}" | awk '{print $1}')
This carefully preserves spacing in the arguments and uses exact matches for the process names. It avoids temporary files. The code shown doesn't validate for zero arguments; that would have to be done beforehand. Or you could add a line args[0]='/collywobbles/' or something similar to provide a default - non-existent - command to search for.
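If you want that validation in code, a minimal guard before the loop might look like this (the usage message is just illustrative):
if [ $# -eq 0 ]; then
    echo "Usage: $(basename $0) procname [...]" >&2
    exit 1
fi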

To answer your question, what's going on is that $* expands to a parameter list, and so the second and later words look like files to grep(1).
To process them in sequence, you have to do something like:
for i in $*; do
echo $i
done
Usually, "$#" (with the quotes) is used in place of $* in cases like this.
See man sh, and check out killall(1), pkill(1), and pgrep(1) as well.

Look into pkill(1) instead, or killall(1) as @khachik comments.
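For example, the whole script can collapse to something like this sketch (it assumes pkill is available; -x asks for an exact name match):
#!/bin/sh
# send SIGTERM to every process whose name exactly matches one of the arguments
for name in "$@"
do
    pkill -x "$name"
done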

$* should rarely be used. I would generally recommend "$@" instead. Shell argument parsing is relatively complex and easy to get wrong. Usually the way you get it wrong is to end up having things evaluated that shouldn't be.
For example, if you typed this:
end '`rm foo`'
you would discover that if you had a file named 'foo' you don't anymore.
Here is a script that will do what you are asking to have done. It fails if any of the arguments contain '\n' or '\0' characters:
#!/bin/sh
kill $(ps -A | fgrep -e "$(for arg in "$@"; do echo "$arg"; done)" | awk '{ print $1; }')
I vastly prefer $(...) syntax for doing what backtick does. It's much clearer, and it's also less ambiguous when you nest things.
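A small illustration of the nesting point (the paths are only examples):
# with $(...) the inner substitution nests naturally
outer=$(basename "$(dirname /tmp/some/file)")
# with backticks the inner pair must be escaped, which is easy to misread
outer=`basename \`dirname /tmp/some/file\``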

Store a command in a variable; implement without `eval`

This is almost the exact same question as in this post, except that I do not want to use eval.
To keep it short: I want to execute the command echo aaa | grep a by first storing it in a string variable Command='echo aaa | grep a', and then running it without using eval.
In the post above, the selected answer used eval. That works for me too. What concerns me a lot is that there are plenty of warnings about eval below, followed by some attempts to circumvent it. However, none of them are able to solve my problem (essentially the OP's). I have commented below their attempts, but since it has been there for a long time, I suppose it is better to post the question again with the restriction of not using eval.
Concrete Example
What I want is a shell script that runs my command when I am happy:
#!/bin/bash
# This script run-this-if.sh runs the commands when I am happy
# Warning: the following script does not work (as written)
if [ "$1" == "I-am-happy" ]; then
    "$2"
fi
$ run-if.sh I-am-happy [insert-any-command]
Your sample usage can't ever work with an assignment, because assignments are scoped to the current process and its children. Because there's no reason to try to support assignments, things get suddenly far easier:
#!/bin/sh
if [ "$1" = "I-am-happy" ]; then
shift; "$#"
fi
This then can later use all the usual techniques to run shell pipelines, such as:
run-if-happy "$happiness" \
sh -c 'echo "$1" | grep "$2"' _ "$untrustedStringOne" "$untrustedStringTwo"
Note that we're passing the execve() syscall an argv with six elements:
sh (the shell to run; change to bash etc if preferred)
-c (telling the shell that the following argument is the code for it to run)
echo "$1" | grep "$2" (the code for sh to parse)
_ (a constant which becomes $0)
...whatever the shell variable untrustedStringOne contains... (which becomes $1)
...whatever the shell variable untrustedStringTwo contains... (which becomes $2)
Note here that echo "$1" | grep "$2" is a constant string -- in single-quotes, with no parameter expansions or command substitutions -- and that untrusted values are passed into the slots that fill in $1 and $2, out-of-band from the code being evaluated; this is essential to have any kind of increase in security over what eval would give you.
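As a quick illustration (the value is deliberately hostile, and purely hypothetical), the untrusted string is printed literally rather than executed:
untrusted='$(rm -rf ~); `reboot`'
run-if-happy I-am-happy sh -c 'printf "%s\n" "$1"' _ "$untrusted"
# prints: $(rm -rf ~); `reboot`   -- nothing inside the string is run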

Bash get the command that is piping into a script

Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's not guaranteed to be a string representing a pipeline of programs being used to generate stdin at all, even if your stdin really is a pipe connected to another program's stdout; it could be that that pipeline was generated by programs calling execve() and friends directly.
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@"    # generate a shell command representing our arguments
while IFS= read -r line; do
    printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@")                  # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running or accepting new commands (or, failing that, running myscript.sh) since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is run, first do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
"$(history | rev |
grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output, (minus the "&" associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter like this
./myscript.sh "$(ls -l | grep -i readme)"
I think that it is possible, have a look at this example:
#!/bin/bash
result=""
while IFS= read -r line; do
    result="${result}${line}"
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)

BASH: Assign special symbol to a variable

As a system administrator I have to work on different servers with different keyboard layouts, so every time I face a huge problem finding keys like '&' and '|'.
Is there any way I can assign these symbols to variables and use the variable whenever I need a symbol?
For example: Assume
echo "|" > filename
pipe = $(cat filename)
ps -ef somethinghere($(pipe)) grep java
should give me the running java processes. I tried everything I could but failed.
Please help.
The following function will do the job:
# read a number of arguments on the left-hand side; those actual arguments; then the RHS
pipe() {
    local nargs
    local -a firstCmd
    nargs=$1; shift
    firstCmd=( )
    for ((i=0; i<nargs; i++)); do
        firstCmd+=( "$1" ); shift
    done
    "${firstCmd[@]}" | "$@"
}
# basic use case
pipe 3 ps -ef somethinghere grep java
# or, for a pipeline with more than two components:
pipe 3 ps -ef somethinghere pipe 2 grep java tee log.txt
What's better is that unlike a solution using eval, it'll work even with more complicated values:
pipe 3 ps -ef 'something with spaces here' grep java
One could also write a version of this function that uses a sigil value:
pipe() {
    local sigil
    local -a firstCmd
    sigil=$1; shift
    firstCmd=( )
    while (( $# )); do
        if [[ $1 = "$sigil" ]]; then
            shift
            "${firstCmd[@]}" | pipe "$sigil" "$@"
            return
        else
            firstCmd+=( "$1" )
            shift
        fi
    done
    "${firstCmd[@]}"
}
In this case, you could even do:
sigil=$(uuidgen) # generate a random, per-session value
pipe "$sigil" ps -ef 'something with spaces here' "$sigil" grep java "$sigil" tee log.txt
If you really want to do that, here is a way:
pipe="|"
eval $(echo ps -ef $pipe grep java)
with cat:
eval $(echo ps -ef $(cat pipe.txt) grep java)
Note that using eval is discouraged and that this command will become problematic as soon as you need complex commands involving quotes, escape sequences, filenames and/or arguments with spaces, etc.
In my opinion, it would be better for you to familiarise yourself with how to change keyboard layouts on different linux systems (see loadkeys for example).
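For example (assuming a US layout is what you are used to):
# on a Linux virtual console (usually needs root)
loadkeys us
# the X11 equivalent
setxkbmap us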
Building on the same logic as the first solution proposed by CharlesDuffy, this should be equivalent:
pipe()
{
"${#:2:$1}" | "${#:$(($1+2))}"
}
Rather than using iteration to build an array with the first command and shift arguments until the remaining ones contain only the second command, this solution uses expansions.
"${#:2:$1}" expands $1 arguments, starting at position 2
"${#:$(($1+2))}" expands all arguments starting at position $1 + 2.
In both cases, the double quotes ensure arguments expand as one word per argument (no word splitting being performed).
If you find this too cryptic, feel free to avoid it, as readability (to the intended coder(s) who would some day have to maintain the code) is likely to trump any advantage.
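If it helps, here is a tiny demonstration of the slicing, using the same argument layout the function expects (the values are only illustrative):
set -- 3 ps -ef 'something with spaces here' grep java
printf '<%s> ' "${@:2:$1}"; echo          # <ps> <-ef> <something with spaces here>
printf '<%s> ' "${@:$(($1+2))}"; echo     # <grep> <java>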

Running commands in subdirectories with bash

I have a sequence of directories that I need to run various shell commands on and I've made a short script called dodirs.sh to simplify running the command in each directory:
#!/bin/bash
echo "Running in each directory: $@"
for d in ./*/; do
    (
        cd "$d"
        pwd
        eval "$@"
    )
done
This is fine for many simple commands, but some have trouble, such as:
grep "free energy TOTEN" OUTCAR | tail -1
which looks for a string in a file located in each directory.
It seems that the pipe and/or the quotes is the trouble since if I say:
dodirs.sh grep "free energy TOTEN" OUTCAR
I get a sensible (if waaaay too long) output along the lines of:
Running in each directory: grep free energy TOTEN OUTCAR
...
OUTCAR: free energy TOTEN = -888.53122906 eV
OUTCAR: free energy TOTEN = -888.53132396 eV
OUTCAR: free energy TOTEN = -888.531324 eV
...
I notice the result of the echo loses the quotes, so that is a bit odd. On the other hand, if I say:
dodirs.sh grep "free energy TOTEN" OUTCAR | tail -1
then I get the nonsensical:
...
grep: energy: No such file or directory
grep: TOTEN: No such file or directory
...
Notice the echo doesn't echo at all now and it is clearly misinterpreting the line.
Is there some way I have to escape characters, or package the parameters inside my dodirs.sh script?
And maybe someone knows of a better approach altogether?
Consider:
#!/bin/bash
# use printf %q to generate a command line identical to what we're actually doing
printf "Running in each directory: " >&2
printf '%q ' "$@" >&2
echo >&2
# use && -- we don't want to execute the command if cd into a given directory failed!
for d in ./*/; do
    (cd "$d" && echo "$PWD" >&2 && "$@")
done
This is much more predictable: It passes exact argument lists through, so for a general command you can just quote it naturally. (This is the exact same behavior as you get with find -exec or other tools which call execv*-family calls with a literal, passed-through argument list; thus, it means you get identical behavior to sudo, chpst, chroot, setsid, etc).
For a single command, invocation looks like what you'd expect:
dodirs grep "free energy TOTEN" OUTCAR
To execute shell directives, such as pipelines, explicitly execute a shell:
dodirs sh -c 'grep "free energy TOTEN" OUTCAR | tail -n 1'
# ^^ ^^
...or, if you're willing to let callers rely on implementation details (such as the fact that this is implemented with a shell, and exactly which shell it's implemented with), use eval:
dodirs eval 'grep "free energy TOTEN" OUTCAR | tail -n 1'
# ^^^^
This may be slightly more work, but it puts you in line with standard UNIX conventions, and avoids risking shell injection vulnerabilities if callers fail to quote their arguments to be eval-safe.
The quotes disappear because they aren't necessary once the shell identifies the words to pass to your script as arguments. Inside your script, $1 is grep, $2 is free energy TOTEN, etc.
You do need to escape the pipe (with a backslash \| or by quoting '|'), though, so that it also is passed as an argument to eval.
dodirs.sh grep "free energy TOTEN" OUTCAR \| tail -1

How do I set a bash script's positional arguments from stdin?

I have a bash script that I wish to read from a file to get its arguments set. Basically my script reads arguments positionally ($1, $2, $3, etc.)
while test $# -gt 0; do
    case $1 in
        -h | --help)
            echo "Help cruft"
            exit 0
            ;;
    esac
    shift
done
One of the options I was hoping to support is a config file that the arguments are read from (for simple and easy config), so I was hoping the set -- command would work (-- to override the arguments). However, since they are defined in a file I have to read it in and use xargs to pass them:
-c | --config)
    cat $2 | xargs set --
    continue
    ;;
The trouble is that xargs buggers up the -- so I don't know how to accomplish this.
Note: I realize I could use source config_file and have it set variable; might be the final option. I wanted to know if I could do it like above and simplify the documentation.
A simplified example script:
# foo.sh
echo "x y z" | xargs set --
echo $*
# Command line
$ bash foo.sh a b c
xargs: set: No such file or directory
a b c
xargs can't execute set because:
set is a shell built-in, not an external command. xargs only knows how to execute commands. (Some shell built-ins shadow commands with the same name, such as printf, true, and [. So xargs can execute those commands, but the semantics might not be identical to the built-in.)
Even if xargs could execute set, it would have no effect because xargs does not run inside of the shell's environment; every command executed by xargs is a separate process. So you will get no error if you do this:
echo a b c | xargs bash -c 'set -- "${@}"' _
But it also won't do anything useful. (Substitute set with echo and you'll see that it does actually invoke the command.)
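For instance, substituting echo shows that the child process really runs, while the calling shell's positional parameters are left untouched:
echo a b c | xargs bash -c 'echo "$@"' _
# prints: a b c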
How to read arguments from a file.
First, you need to answer the question: what does it mean to have arguments in a file? Are they individual whitespace-separated words with no mechanism to include whitespace in any argument? (That would also be required for xargs to work in its default mode, so it is not a totally unreasonable assumption, although it is almost certainly going to get you into trouble at some point.)
In that case you don't need xargs at all; you can just use command substitution:
set -- $(<file)
While that will work fine, this won't:
echo a b c | set -- $(</dev/stdin)
because the pipeline (created by the | operator) causes the processes on either side to be run in subshells, and consequently the set doesn't modify the current shell's environment variables.
A more robust solution
Suppose that each argument is in a single line in the file, which makes it possible to include whitespace in an argument, but not a newline. Then we could use the useful mapfile built-in to read the arguments into an array, and set the positional arguments from the array. (Or just use the array directly, but that would be a different question.)
mapfile -t args < file
set -- "${args[@]}"
Again, watch out for piping into mapfile; it won't work, for the same reason that it didn't work with set.
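If the arguments come from another command rather than from a file, process substitution sidesteps the subshell problem (a sketch; the printf stands in for whatever produces the arguments):
# mapfile runs in the current shell here, so the args array survives
mapfile -t args < <(printf '%s\n' "first arg" "second arg")
set -- "${args[@]}"
echo "$2"    # second arg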
