Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's not guaranteed to be any string representing a pipeline of programs generating your stdin at all, even if your stdin really is a pipe connected to another program's stdout; the pipeline could have been set up by programs calling pipe(), fork(), execve() and friends directly, with no shell command line involved.
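A quick way to see this for yourself (Linux-specific; the script name here is made up for illustration) is to have the downstream script report what its stdin actually is:

#!/usr/bin/env bash
# whatami.sh -- hypothetical demo: show what kind of object stdin is
readlink /proc/self/fd/0

Run as ls -l | grep -i readme | ./whatami.sh, it prints something like pipe:[123456] -- an anonymous pipe, with no record of the commands feeding it.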
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$#" # generate a shell command representing our arguments
while IFS= read -r line; do
printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$#") # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running or accepting new commands (or, failing that, running myscript.sh) since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script (it won't work with /bin/dash), and the calling shell needs a little prep work. Some time before the script is run, first do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
    "$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output (minus the "&"-associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
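That is, the test invocation becomes:

echo googa | ./myscript.sh & fg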
I think you should pass it as one string parameter, like this:
./myscript.sh "$(ls -l | grep -i readme)"
I think that it is possible, have a look at this example:
#!/bin/bash
result=""
while read line; do
result=$result"${line}"
done
echo $result
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)
Related
I am currently trying to read from files with shell. However, I ran into a syntax issue. My code is below:
while read -r line; do
    echo "$line"
done < <(tail -n +2 /pathToTheFile | cut -f5,6,7,8 | sort | uniq)
However, it returns the error syntax error near unexpected token `('.
I tried following How to use while read line with tail -n but still cannot see the error.
The tail command works properly.
Any help will be appreciated.
Process substitution isn't supported by the POSIX shell /bin/sh. It is a feature specific to bash (and other non-POSIX shells). Are you running this in /bin/bash?
Anyhow, the process substitution isn't needed here; you could simply use a pipe, like this:
tail -n +2 /pathToTheFile | cut -f5,6,7,8 | sort -u | while read -r line ; do
echo "${line}"
done
Your interpreter must be #!/bin/bash not #!/bin/sh and/or you must run the script with bash scriptname instead of sh scriptname.
Why?
POSIX shell doesn't provide process substitution: < <(...) is a bashism. So the error:
syntax error near unexpected token `('
is telling you that once the script gets to your done statement and attempts to find the file being redirected into the loop, it finds ( and chokes. (That also tells us you are invoking your script with a POSIX shell instead of bash -- and now you know why.)
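A quick way to see the difference (hypothetical session; the exact error wording varies between shells):

sh scriptname     # POSIX sh: syntax error near the '(' of < <(...)
bash scriptname   # bash: runs fine, process substitution is supported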
I am trying to run a script on multiple lists of files while also passing arguments in parallel. I have file_list1.dat, file_list2.dat, file_list3.dat. I would like to run script.sh which accepts 3 arguments: arg1, arg2, arg3.
For one run, I would do:
sh script.sh file_list1.dat $arg1 $arg2 $arg3
I would like to run this command in parallel for all the file lists.
My attempt:
Ncores=4
ls file_list*.dat | xargs -P "Ncores" -n 1 [sh script.sh [$arg1 $arg2 $arg3]]
This results in the error: invalid number for -P option. I think the order of this command is wrong.
My 2nd attempt:
echo $arg1 $arg2 $arg3 | xargs ls file_list*.dat | xargs -P "$Ncores" -n 1 sh script.sh
But this results in the error: xargs: ls: terminated by signal 13
Any ideas on what the proper syntax is for passing arguments to a bash script with xargs?
I'm not sure I understand exactly what you want to do. Is it to execute something like these commands, but in parallel?
sh script.sh $arg1 $arg2 $arg3 file_list1.dat
sh script.sh $arg1 $arg2 $arg3 file_list2.dat
sh script.sh $arg1 $arg2 $arg3 file_list3.dat
...etc
If that's right, this should work:
Ncores=4
printf '%s\0' file_list*.dat | xargs -0 -P "$Ncores" -n 1 sh script.sh "$arg1" "$arg2" "$arg3"
The two major problems in your version were that you were passing "Ncores" as a literal string (rather than using $Ncores to get the value of the variable), and that you had [ ] around the command and arguments (which just isn't any relevant piece of shell syntax). I also added double-quotes around all variable references (a generally good practice), and used printf '%s\0' (and xargs -0) instead of ls.
Why did I use printf instead of ls? Because ls isn't doing anything useful here that printf or echo or whatever couldn't do as well. You may think of ls as the tool for getting lists of filenames, but in this case the wildcard expression file_list*.dat gets expanded to a list of files before the command is run; all ls would do with them is look at each one, say "yep, that's a file" to itself, then print it. echo could do the same thing with less overhead. But with either ls or echo the output can be ambiguous if any filenames contain spaces, quotes, or other funny characters. Some versions of ls attempt to "fix" this by adding quotes or something around filenames with funny characters, but that might or might not match how xargs parses its input (if it happens at all).
But printf '%s\0' is unambiguous and predictable -- it prints each string (filename in this case) followed by a NULL character, and that's exactly what xargs -0 takes as input, so there's no opportunity for confusion or misparsing.
Well, ok, there is one edge case: if there aren't any matching files, the wildcard pattern will just get passed through literally, and it'll wind up trying to run the script with the unexpanded string "file_list*.dat" as an argument. If you want to avoid this, use shopt -s nullglob before this command (and shopt -u nullglob afterward, to get back to normal mode).
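One way to put that guard together (a sketch; it collects the matches into an array first, so nothing is run at all when no files match):

shopt -s nullglob
files=( file_list*.dat )
shopt -u nullglob
if (( ${#files[@]} )); then
    printf '%s\0' "${files[@]}" | xargs -0 -P "$Ncores" -n 1 sh script.sh "$arg1" "$arg2" "$arg3"
fi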
Oh, and one more thing: sh script.sh isn't the best way to run scripts. Give the script a proper shebang line at the beginning (#!/bin/sh if it uses only basic shell features, #!/bin/bash or #!/usr/bin/env bash if it uses any bashisms), and run it with ./script.sh.
In my program I need to know the maximum number of processes I can run, so I wrote a script. It works when I run it in a shell, but not when I call it from my program using system("./limit.sh"). I work in bash.
Here is my code:
#/bin/bash
LIMIT=\`ulimit -u\`
ACTIVE=\`ps -u | wc -l \`
echo $LIMIT > limit.txt
echo $ACTIVE >> limit.txt
Anyone can help?
Why The Original Fails
Command substitution syntax doesn't work if escaped. When you run:
LIMIT=\`ulimit -u\`
...what you're doing is running a command named
-u`
...with the environment variable named LIMIT containing the value
`ulimit
...and unless you actually have a command that starts with -u and contains a backtick in its name, this can be expected to fail.
This is because escaping the backticks turns characters which would otherwise be syntax into literals, and running a command with one or more var=value pairs preceding it treats those pairs as variables to export in the environment for the duration of that single command.
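You can reproduce the effect with a simpler (made-up) example:

FOO=\`echo hello\`
# word 1, FOO=`echo , becomes a temporary environment assignment;
# word 2, hello`   , is taken as the command name, so the shell reports
# something like: hello`: command not found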
Doing It Better
#!/bin/bash
limit=$(ulimit -u)
active=$(ps -u | wc -l)
printf '%s\n' "$limit" "$active" >limit.txt
Leave off the backticks.
Use modern $() command substitution syntax.
Avoid multiple redirections.
Avoid all-caps names for your own variables (all-caps names are conventionally used for variables with meaning to the OS or shell; lowercase names are safe for application use).
Doing It Right
#!/bin/bash
exec >limit.txt # open limit.txt as output for the rest of the script
ulimit -u # run ulimit -u, inheriting that FD for output
ps -u | wc -l # run your pipeline, likewise with output to the existing FD
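Assuming that version is saved as limit.sh (the name from the question) and made executable, a run might look like this:

./limit.sh
cat limit.txt   # line 1: value of ulimit -u; line 2: count from ps -u | wc -l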
You have a typo on the very first line: #/bin/bash should be #!/bin/bash - this is often known as a "shebang" line, for "hash" (#) + "bang" (!)
Without that syntax written correctly, the script is run through the system's default shell, which will see that line as just a comment.
As pointed out in comments, that also means only the standardised options are available to the built-in ulimit command, and -u isn't among them.
After the question In a shell script: echo shell commands as they are executed, I wonder how I can redirect the executed/echoed commands to a file (or a variable)?
I tried the usual stdout redirection, like ls $HOME > foo.txt, after setting the bash verbose mode, set -v, but only the output of ls was redirected.
PS: What I want is to have a function (call it "save_succ_cmdline()") that I could put in front of a (complex) command-line (e.g, save_succ_cmdline grep -m1 "model name" /proc/cpuinfo | sed 's/.*://' | cut -d" " -f -3) so that this function will save the given command-line if it succeeds.
Notice that the grep -m1 "model name" ... example above is just to give an example of a command-line with special characters (|,',"). What I expect from such a function "save_succ_cmdline()" is that the actual command (after the function name, grep -m1 "model name"...) is executed and the function verifies the exit code ([$? == 0]) to decide if the command-line can be save or not. If the actual command has succeeded, the function ("save_succ_cmdline") can save the command-line expression (with the pipes and everything else).
My idea is to use the bash -o verbose feature to capture and (temporarily) save the command line, but I have not been able to make it work.
Thanks in advance.
Your save_succ_cmdline function will only see the grep -m1 "model name" /proc/cpuinfo part of the command line as the shell will see the pipe itself.
That being said if you just want the grep part then this will do what you want.
save_succ_cmdline() {
"$#" && cmd="$#"
}
If you want the whole pipeline then you would need to quote the entire argument to save_succ_cmdline and use eval on "$#" (or similar) and I'm not sure you could make that work for arbitrary quoting.
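A minimal sketch of that eval-based variant (the whole pipeline is passed as one quoted string, quoting inside it is the caller's problem, and the saved_cmd name is just for illustration):

save_succ_cmdline() {
    eval "$1" && saved_cmd=$1   # run the string; remember it only if it succeeded
}

save_succ_cmdline 'grep -m1 "model name" /proc/cpuinfo | cut -d: -f2-'
printf 'saved: %s\n' "$saved_cmd"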
I have the following (bash) shell script, that I would ideally use to kill multiple processes by name.
#!/bin/bash
kill `ps -A | grep $* | awk '{ print $1 }'`
However, while this script works if one argument is passed:
end chrome
(the name of the script is end)
it does not work if more than one argument is passed:
$ end chrome firefox
grep: firefox: No such file or directory
What is going on here?
I thought the $* passes multiple arguments to the shell script in sequence. I'm not mistyping anything in my input - and the programs I want to kill (chrome and firefox) are open.
Any help is appreciated.
Remember what grep does with multiple arguments - the first is the word to search for, and the remainder are the files to scan.
Also remember that $*, "$*", and $@ all lose track of white space in arguments, whereas the magical "$@" notation does not.
So, to deal with your case, you're going to need to modify the way you invoke grep. You either need to use grep -F (aka fgrep) with options for each argument, or you need to use grep -E (aka egrep) with alternation. In part, it depends on whether you might have to deal with arguments that themselves contain pipe symbols.
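For example, the grep -E route might look like this (a sketch; it assumes no argument contains regex metacharacters such as | or ., and the pattern variable is just for illustration):

pattern=$(IFS='|'; echo "$*")                    # e.g. chrome|firefox
kill $(ps -A | grep -E "$pattern" | awk '{print $1}')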
It is surprisingly tricky to do this reliably with a single invocation of grep; you might well be best off tolerating the overhead of running the pipeline multiple times:
for process in "$#"
do
kill $(ps -A | grep -w "$process" | awk '{print $1}')
done
If the overhead of running ps multiple times like that is too painful (it hurts me to write it - but I've not measured the cost), then you probably do something like:
case $# in
(0) echo "Usage: $(basename $0 .sh) procname [...]" >&2; exit 1;;
(1) kill $(ps -A | grep -w "$1" | awk '{print $1}');;
(*) tmp=${TMPDIR:-/tmp}/end.$$
    trap "rm -f $tmp.?; exit 1" 0 1 2 3 13 15
    ps -A > $tmp.1
    for process in "$@"
    do
        grep "$process" $tmp.1
    done |
    awk '{print $1}' |
    sort -u |
    xargs kill
    rm -f $tmp.1
    trap 0
    ;;
esac
The use of plain xargs is OK because it is dealing with a list of process IDs, and process IDs do not contain spaces or newlines. This keeps the simple code for the simple case; the complex case uses a temporary file to hold the output of ps and then scans it once per process name in the command line. The sort -u ensures that if some process happens to match all your keywords (for example, grep -E '(firefox|chrome)' would match both), only one signal is sent.
The trap lines etc ensure that the temporary file is cleaned up unless someone is excessively brutal to the command (the signals caught are HUP, INT, QUIT, PIPE and TERM, aka 1, 2, 3, 13 and 15; the zero catches the shell exiting for any reason). Any time a script creates a temporary file, you should have similar trapping around the use of the file so that it will be cleaned up if the process is terminated.
If you're feeling cautious and you have GNU Grep, you might add the -w option so that the names provided on the command line only match whole words.
All the above will work with almost any shell in the Bourne/Korn/POSIX/Bash family (you'd need to use backticks with strict Bourne shell in place of $(...), and the leading parentheses on the conditions in the case are also not allowed with Bourne shell). However, you can use an array to get things handled right.
n=0
unset args # Force args to be an empty array (it could be an env var on entry)
for i in "$#"
do
args[$((n++))]="-e"
args[$((n++))]="$i"
done
kill $(ps -A | fgrep "${args[#]}" | awk '{print $1}')
This carefully preserves spacing in the arguments and uses exact matches for the process names. It avoids temporary files. The code shown doesn't validate for zero arguments; that would have to be done beforehand. Or you could add a line args[0]='/collywobbles/' or something similar to provide a default - non-existent - command to search for.
To answer your question, what's going on is that $* expands to a parameter list, and so the second and later words look like files to grep(1).
To process them in sequence, you have to do something like:
for i in $*; do
    echo $i
done
Usually, "$#" (with the quotes) is used in place of $* in cases like this.
See man sh, and check out killall(1), pkill(1), and pgrep(1) as well.
Look into pkill(1) instead, or killall(1) as @khachik comments.
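With pkill the whole script collapses to a loop over its arguments (a sketch; -x asks for an exact process-name match):

#!/bin/bash
for name in "$@"; do
    pkill -x "$name"
done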
$* should be rarely used. I would generally recommend "$#". Shell argument parsing is relatively complex and easy to get wrong. Usually the way you get it wrong is to end up having things evaluated that shouldn't be.
For example, if you typed this:
end '`rm foo`'
you would discover that if you had a file named 'foo' you don't anymore.
Here is a script that will do what you are asking to have done. It fails if any of the arguments contain '\n' or '\0' characters:
#!/bin/sh
kill $(ps -A | fgrep -e "$(for arg in "$@"; do echo "$arg"; done)" | awk '{ print $1; }')
I vastly prefer $(...) syntax for doing what backtick does. It's much clearer, and it's also less ambiguous when you nest things.
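For example, compare nesting the two forms (a contrived illustration):

outer=$(basename $(dirname /a/b/c))     # $() nests without any extra quoting
outer=`basename \`dirname /a/b/c\``     # backticks need backslash-escaping to nest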