bash: get the command that was used before the pipe symbol

For a half-finished script that already consumes the output of another program, I also need the name and the parameters of the program whose output is piped into my script.
So I run it like this:
yay something | ./myscript
Now I need to store "yay something" into a variable.
There is a way to get previously run commands, or the current one, by using set -o history -o histexpand and echo !! or echo $0, but that doesn't include what I wrote right before the pipe.
You might suggest passing the name of the program and its parameters to my script as arguments and then running it there, but I don't want that (pass a command as an argument to a bash script).

UPDATED SOLUTION (old below):
#!/bin/bash -i
#get processes
processes=$(> >(ps -f))
echo beginning:
echo "$processes"
#filter bin/bash -i
pac=$(echo "$processes" | sed '1,/bin\/bash -i/!d')
pac=$(echo "$pac" | tail -2 | head -1)
#kill
delete=$(echo $pac | grep -oP "(?<=$USER\s)\w+")
pac=$(echo "$pac" | grep -o -P '(?<=00:00:00).*(?=)')
echo "$delete"
kill -9 "$delete"
#print
echo " "
echo end:
echo "${pac:1}"
Note: when you use echo, man, or cat, $pac will be empty.
OLD Text:
Thanks to Charles for his enormous effort and his link that finally led me to processes=$(> >(ps -f)).
Here is a working example. You can use it with, e.g., vi test | ./testprocesses (or nano, or package helpers like yay or trizen, but it won't work with echo, man, or cat):
#!/bin/bash -i
#get processes
processes=$(> >(ps -f))
echo beginning:
echo $processes
#filter
pac=$(echo $processes | grep -o -P '(?<=CM).*(?=testprocesses)' | grep -o -P '(?<=D).*(?=testprocesses)' | grep -o -P "(?<=00:00:00).*(?=$USER)")
#kill
delete=$(echo $pac | grep -oP "(?<=$USER\s)\w+")
pac=$(echo $pac | grep -o -P '(?<=00:00:00).*(?=)')
kill -9 $delete
#print
echo " "
echo end:
echo $pac
The kill part is necessary to kill the vi instance; otherwise it will keep running and eventually interfere with future executions of the script.

Related

How to easily find out which part of a bash/zsh pipeline failed due to `set -o pipefail`?

I have a bash/zsh command with multiple pipes | that fails when using set -o pipefail. For simplicity assume the command is
set -o pipefail; echo "123456" | head -c2 | grep 5 | cat
How do I quickly find out which command is the first to fail and why? I know I can check the exit code, but that doesn't show which part of the pipeline failed first.
Is there something simpler than the rather verbose check of building up the pipeline one by one and checking for the first failing exit code?
Edit: I removed the contrived code example I made up as it confused people about my purpose of asking. The actual command that prompted this question was:
zstdcat metadata.tsv.zst | \
tsv-summarize -H --group-by Nextclade_pango --count | \
tsv-filter -H --ge 'count:2' | \
tsv-select -H -f1 >open_lineages.txt
In bash, use echo "${PIPESTATUS[@]}" right after the command to get the exit status of each component in a space-separated list:
#!/bin/bash
$ set -o pipefail; echo "123456" | head -c2 | grep 5 | cat
$ echo "${PIPESTATUS[@]}"
0 0 1 0
Beware, zsh users: you need to use the lowercase pipestatus instead:
#!/bin/zsh
$ set -o pipefail; echo "123456" | head -c2 | grep 5 | cat
$ echo $pipestatus
0 0 1 0
In fish you can also simply use echo $pipestatus for the same output.
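To answer the "which command failed first" part directly, here is a small sketch of my own (not part of the original answers) that scans PIPESTATUS for the first nonzero entry right after the pipeline:
#!/bin/bash
set -o pipefail
echo "123456" | head -c2 | grep 5 | cat
# Copy PIPESTATUS immediately; it is overwritten by the next command.
statuses=("${PIPESTATUS[@]}")
for i in "${!statuses[@]}"; do
    if (( statuses[i] != 0 )); then
        echo "pipeline stage $((i + 1)) failed with exit code ${statuses[i]}" >&2
        break
    fi
done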
${PIPESTATUS[@]} right after the command is the answer you were looking for. However, I want to comment on the first example. It's a good habit to anticipate errors, so instead of testing afterwards you should check the path before doing anything.
if [ -d "/nonexistent_directory" ]; then
    # here the pipe shouldn't fail to grep
    # ...unless there's something wrong with "foo"
    # ...but "grep" may fail if the pattern isn't found
    if ! ls -1 "/nonexistent_directory" | grep 'foo' ; then
        echo "The command 'grep foo' failed."
    # else
    #     echo "The pipeline succeeded."
    fi
else
    echo "The command 'ls /nonexistent_directory' failed."
fi
Whenever possible, avoid grepping ls output in scripts; it's fragile...
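As a sketch of a more robust alternative (my own illustration, not part of the original answer), a glob avoids parsing ls output entirely:
# Check for a file whose name contains "foo" without piping ls to grep.
found=0
for f in /nonexistent_directory/*foo*; do
    if [ -e "$f" ]; then
        found=1
        break
    fi
done
if [ "$found" -eq 1 ]; then
    echo "Found a file matching 'foo'."
else
    echo "No file matching 'foo' (or the directory does not exist)."
fi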

execute a string in a bash script containing multiple redirects

I am trying to write a bash script that simply acts as an emulator. It takes input from the user and executes the command, while appending the command along with its result to a file. I am unable to handle inputs that contain either a | or a >.
The only option I could find was splitting the command on | into an array and running the pieces individually. However, this does not allow > redirects.
Thanks in advance.
$cmd is a command taken as input from the user
I used the command
$cmd 2>&1 | tee -a $flname
but this does not work if there is a | or a > in $cmd
/bin/bash -c "$cmd 2>&1 | tee -a $flname" does not run/store the command either
Try this:
#!/bin/bash
read -r -p "Insert command to execute"$'\n' cmd
echo "Executing '$cmd'"
/bin/bash -c "$cmd"
# or eval "$cmd"
Example of execution:
$ ./script.sh
Insert command to execute
printf '1\n2\n3\n4\n' | grep '1\|3'
Executing 'printf '1\n2\n3\n4\n' | grep '1\|3''
1
3
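To also append the command and its output to a log file, as the question intends, here is a hedged sketch (assuming the log file name is held in $flname, as in the question):
#!/bin/bash
flname=session.log                       # assumed log file name
read -r -p "Insert command to execute"$'\n' cmd
echo "\$ $cmd" >> "$flname"              # record the command itself
# Run the command in a child bash so pipes and redirects inside $cmd work,
# then append its combined output to the log while still printing it.
/bin/bash -c "$cmd" 2>&1 | tee -a "$flname"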

Bash script not killing all PIDs in specified file or allowing partial names for input [duplicate]

This question already has answers here:
How to kill all processes with a given partial name? [closed]
(14 answers)
Closed 6 years ago.
Right now, my bash script works for single-PID processes and I must use an exact process name as input. It will not accept *firefox*, for example. Also, I run a bash script that opens multiple rsync processes, and I would like this script to kill all of those processes. But this script only works on processes with one PID.
Here is the script:
#!/bin/bash
createProcfile() {
    ps -eLf | grep -f process.tmp | grep -v 'grep' | awk '{print $2,$10}' | sort -u | egrep -o '[0-9]{4,}' > pid.tmp
    # pgrep "$(cat process.tmp)" > pid.tmp
}
PIDFile=pid.tmp
echo "Enter a process name"
read -r process
echo "$process" > process.tmp
# node_process_id=$(pidof "$process")
node_process_id=$(ps -eLf | grep $process | grep -v 'grep' | awk '{print $2,$10}' | sort -u | egrep -o '[0-9]{4,}')
if [[ -z "$node_process_id" ]]; then
echo "Please enter a valid process."
rm process.tmp
exit 0
fi
ps -eLf | grep $process | awk '{print $2,$10}' | sort -u | grep -v 'grep'
# pgrep "$(cat process.tmp)"
echo "Would you like to kill this process(es)? (y/n)"
read -r answer
if [[ "$answer" == y ]]; then
createProcfile
pkill -F "$PIDFile"
rm "$PIDFile"
sleep 1
createProcfile
node_process_id=$(pidof "$process")
if [[ -z $node_process_id ]]; then
echo "Process terminated successfully."
rm process.tmp
exit 0
else
echo "Process not terminated. Kill process manually."
ps -eLf | grep $process | awk '{print $2,$10}' | sort -u | grep -v 'grep'
# pgrep "$(cat process.tmp)"
rm "$PIDFile"
rm process.tmp
exit 0
fi
fi
I edited the script. Thanks to your comments, it works now and does the following:
Accepts a partial name as input
Kills more than one PID
Thank you!
pkill exists to solve your problem. It accepts a pattern to match against the process name, or the entire command line if -f is specified.
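For example (my own illustration of standard pkill usage, not taken from the original answer):
# Kill every process whose name matches "firefox"
pkill firefox
# Kill every process whose full command line matches "rsync",
# e.g. the rsync jobs started by another script
pkill -f rsync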
It will not accept *firefox*
Use the killall command. Example:
killall -r "process.*"
This will kill all processes whose names begin with process, followed by anything.
The manual says:
-r, --regexp
Interpret process name pattern as an extended regular expression.
Sidenote:
Note that we have to double-quote the regular expression to prevent file globbing. (Thanks @broslow for pointing this out.)
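A quick illustration of why the quoting matters (my own sketch, not from the original answer): if the unquoted pattern happens to match a file in the current directory, the shell expands it before killall ever sees it.
touch process.txt         # a file that matches the unquoted glob
killall -r process.*      # the shell may expand this to: killall -r process.txt
killall -r "process.*"    # quoted: killall receives the regex unchanged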

No command executed after performing kill command in shell script

Here is my shell script:
#!/bin/bash
PIDS=$(ps -e | grep $1 |grep -v grep| awk '{print $1}')
kill -s SIGINT $PIDS
echo "Done sendings signal"
I am passing the name of the process as command line argument.
The echo command is not getting executed, although the target processes actually receive the SIGINT signal and exit.
Any suggestions?
Update:
I changed the code to:
#!/bin/bash
PIDS=$(ps -e |grep $1 | grep -v grep | awk '{print $1}'|grep -v $$)
echo $PIDS
kill -s SIGINT $PIDS
echo "Done sendings signal"
echo "The current process is $$"
Now I am noticing a strange thing:
The script is working, but not as expected. Executing the following command on the command line, outside the script,
ps -e|grep process-name|grep -v grep|awk '{print $1}'|grep -v $$
gives the PID of process-name, but when I execute the same command inside the shell script, assign it to the variable PIDS, and then echo $PIDS, it shows one more PID in addition to the PID of process-name. Therefore, when the kill command executes, it reports that the process with the second PID doesn't exist. It does echo the remaining sentences in the terminal. Any clue?
There really are only a couple of possibilities. Assuming you're just running this from the command line, you should see the message ... unless, of course, what you're doing puts the PID of your shell process in PIDS, in which case the kill would kill the (sub) shell running your command before you hit the echo.
Suggestion: echo $PIDS before you call kill and see what's there. In fact, I'd be tempted to comment out the kill and try the command, just to see what happens.
#!/bin/bash
PIDS=$(ps -e | grep $1 |grep -v grep| awk '{print $1}')
echo $PIDS
# kill -s SIGINT $PIDS
echo "Done sendings signal"
Of course, you can always run the script with bash -x to see everything.
Your script works. The only reason I can see for the echo not being executed is that some value of $1 and the script file name combine so that your script's own PID is also gathered, making the script kill itself.
The PIDS line spawns processes running ps, grep, and another grep -- so you won't find the grep processes in PIDS, but what about the parent process itself?
Try:
#!/bin/bash
PIDS=$(ps -e | grep $1 |grep -v grep | awk '{print $1}' | grep -v "^$$\$" )
kill -s SIGINT $PIDS
echo "Done sendings signal"
or run the pipes one after the other with suitable safety greps.
Edit: it is evident that the "$1" selection is selecting too much. So I'd rewrite the script like this:
#!/bin/bash
# Gather the output of "ps -e". This will also gather the PIDs of this
# process and of ps process and its subshell.
PSS=$( ps -e )
# Extract PIDs, excluding this one PID and excluding a process called "ps".
# Don't need to expunge 'grep' since no grep was running when getting PSS.
PIDS=$( echo "$PSS" | grep -v "\<ps\>" | grep "$1" | awk '{print $1}' | grep -v "^$$\$" )
if [ -n "$PIDS" ]; then
kill -s SIGINT $PIDS
else
echo "No process found matching $1"
fi
echo "Done sending signal."
ps -e is identical to ps -A and selects all processes (cf. http://linux.die.net/man/1/ps), i.e. ps -e displays "information about other users' processes, including those without controlling terminals" (Mac OS X man page of ps). This means you will also kill the PID ($$) of your shell process, as already pointed out by Charlie Martin, because you will also grep a line of the ps -e output that looks like this:
67988 ttys000 0:00.00 /bin/bash ./killpids sleep
Just log the output of ps -e to a file to see that your script commits suicide:
./killpids sleep 2>err.log
#!/bin/bash
# cat killpids
echo $$
for n in {1..10}; do
    sleep 5000 &
done
sleep 1
unset PIDS
PIDS="$(ps -e | tee /dev/stderr | grep "$1" | grep -v grep | awk '{print $1}')"
#PIDS="$(ps -www -U $USER -o pid,uid,comm | tee /dev/stderr | grep "$1" | grep -v grep | awk '{print $1}')"
wc -l <<<"$PIDS"
#kill -s SIGINT $PIDS
echo kill -s TERM $PIDS
kill -s TERM $PIDS
echo "Done sendings signal"

Do a tail -F until matching a pattern

I want to do a tail -F on a file until matching a pattern. I found a way using awk, but IMHO my command is not really clean. The problem is that I need to do it in only one line, because of some limitations.
tail -n +0 -F /tmp/foo | \
awk -W interactive '{if ($1 == "EOF") exit; print} END {system("echo EOF >> /tmp/foo")}'
The tail will block until EOF appears in the file. It works pretty well. The END block is mandatory because awk's exit does not exit right away; it makes awk evaluate the END block before quitting. The END block hangs on a read call (because of tail), so the last thing I need to do is write another line to the file to force tail to exit.
Does someone know a better way to do that?
Use tail's --pid option and tail will stop when the shell dies. No need to add extra to the tailed file.
sh -c 'tail -n +0 --pid=$$ -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
Try this:
sh -c 'tail -n +0 -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
The whole command-line will exit as soon as the "EOF" string is seen in /tmp/foo.
There is one side-effect: the tail process will be left running (in the background) until anything is written to /tmp/foo.
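If that lingering tail is a concern, one way to clear it (my own sketch, reusing the trick from the question's awk version) is to append one more line to the file after the pipeline returns, so tail writes into the now-closed pipe and exits:
sh -c 'tail -n +0 -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
# Any extra write makes the leftover tail hit the closed pipe and terminate.
echo "wake up, tail" >> /tmp/foo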
I had no luck with this solution:
sh -c 'tail -n +0 -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
There is some issue related to buffering: if no more lines are appended to the file, sed will not read the input. So, with a little more research, I came up with this:
sed '/EOF/q' <(tail -n 0 -f /tmp/foo)
The script is in https://gist.github.com/2377029
This is something Tcl is quite good at. If the following is "tail_until.tcl",
#!/usr/bin/env tclsh
proc main {filename pattern} {
    set pipe [open "| tail -n +0 -F $filename"]
    set pid [pid $pipe]
    fileevent $pipe readable [list handler $pipe $pattern]
    vwait ::until_found
    catch {exec kill $pid}
}
proc handler {pipe pattern} {
    if {[gets $pipe line] == -1} {
        if {[eof $pipe]} {
            set ::until_found 1
        }
    } else {
        puts $line
        if {[string first $pattern $line] != -1} {
            set ::until_found 1
        }
    }
}
main {*}$argv
Then you'd do tail_until.tcl /tmp/foo EOF
Does this work for you?
tail -n +0 -F /tmp/foo | sed '/EOF/q'
I'm assuming that 'EOF' is the pattern you're looking for. The sed command quits when it finds it, which means that the tail should quit the next time it writes.
I suppose that there is an outside chance that tail would hang around if the pattern is found at about the end of the file, waiting for more output to appear in the file which will never appear. If that's really a concern, you could probably arrange to kill it - the pipeline as a whole will terminate when sed terminates (unless you're using a funny shell that decides that isn't the correct behaviour).
Grump about Bash
As feared, bash (on MacOS X, at least, but probably everywhere) is a shell that thinks it needs to hang around waiting for tail to finish even though sed quit. Sometimes - more often than I like - I prefer the behaviour of good old Bourne shell, which wasn't so clever and therefore guessed wrong less often than Bash does. dribbler is a program which dribbles out messages one per second ('1: Hello' etc. in the example), with the output going to standard output. In Bash, this command sequence hangs until I do 'echo pqr >> /tmp/foo' in a separate window.
date
{ timeout -t 2m dribbler -t -m Hello; echo EOF; } >/tmp/foo &
echo Hi
sleep 1 # Ensure /tmp/foo is created
tail -n +0 -F /tmp/foo | sed '/EOF/q'
date
Sadly, I don't immediately see an option to control this behaviour. I did find shopt lithist, but that's unrelated to this problem.
Hooray for Korn Shell
I note that when I run that script using Korn shell, it works as I'd expect - leaving a tail lurking around to be killed somehow. What works there is 'echo pqr >> /tmp/foo' after the second date command completes.
Here's an extended version of Jon's solution which uses sed instead of grep so that the output of tail goes to stdout:
sed -r '/EOF/q' <( exec tail -n +0 -f /tmp/foo ); kill $! 2> /dev/null
This works because sed gets created before tail so $! holds the PID of tail
The main advantage of this over the sh -c solutions is that killing a sh seems to print something to the output, such as 'Terminated', which is unwelcome.
sh -c 'tail -n +0 --pid=$$ -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
Here the main problem is with $$.
If you run the command as is, $$ is set not to sh but to the PID of the current shell where the command is run.
To make kill work you need to change kill $$ to kill \$$
After that you can safely get rid of --pid=$$ passed to the tail command.
Summarising, the following will work just fine:
/bin/sh -c 'tail -n 0 -f /tmp/foo | { sed "/EOF/ q" && kill \$$ ;}'
Optionally you can pass -n to sed to keep it quiet :)
To kill the dangling tail process as well, you may execute the tail command in a (Bash) process substitution context, which can later be killed as if it had been a backgrounded process. (Code taken from "How to read one line from 'tail -f' through a pipeline, and then terminate?".)
: > /tmp/foo
grep -m 1 EOF <( exec tail -f /tmp/foo ); kill $! 2> /dev/null
echo EOF > /tmp/foo # terminal window 2
As an alternative you could use a named pipe.
(
    : > /tmp/foo
    rm -f pidfifo
    mkfifo pidfifo
    sh -c '(tail -n +0 -f /tmp/foo & echo $! > pidfifo) |
           { sed "/EOF/ q" && kill $(cat pidfifo) && kill $$ ;}'
)
echo EOF > /tmp/foo # terminal window 2
Ready to use for Tomcat:
sh -c 'tail -f --pid=$$ catalina.out | { grep -i -m 1 "Server startup in" && kill $$ ;}'
For the above scenario:
sh -c 'tail -f --pid=$$ /tmp/foo | { grep -i -m 1 EOF && kill $$ ;}'
tail -f <filename> | grep -q "<pattern>"
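Applied to the question's example (a usage sketch, same mechanism as above): grep -q exits on the first match, and tail terminates the next time it writes to the closed pipe.
tail -f /tmp/foo | grep -q "EOF"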
