My makefile looks like:
nstop:
	@kill `cat ${APP_ROOT}/run/nginx.pid` ||:
But I still get output:
$ make nstop
cat: /run/nginx.pid: No such file or directory
/bin/sh: 1: kill: Usage: kill [-s sigspec | -signum | -sigspec] [pid | job]... or
kill -l [exitstatus]
How do I suppress the output from the command in the backticks?
I have resolved it by redirecting the error output to /dev/null:
nstop:
	@kill `cat ${APP_ROOT}/run/nginx.pid 2>/dev/null` 2>/dev/null ||:
But I think there should be a better solution.
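One cleaner option (a sketch; `${APP_ROOT}` is the variable from the question) is to test for the pid file before calling kill, so neither `cat` nor `kill` ever sees bad input and there is nothing to silence:

```shell
# A sketch: guard on the pid file instead of silencing errors. If the
# file is missing, neither cat nor kill runs, so nothing is printed.
# APP_ROOT defaults to a hypothetical path here just for the demo.
APP_ROOT=${APP_ROOT:-/tmp/app}
pidfile="$APP_ROOT/run/nginx.pid"
if [ -f "$pidfile" ]; then
    kill "$(cat "$pidfile")"
fi
```

In the makefile the same test goes on a single `@`-prefixed recipe line, with the shell `if` joined by backslash continuations.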
I keep getting the following message whenever I run my bash script:
kill: usage kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
This is the line it happens on, when I am trying to kill all instances of methserver:
kill $(ps aux | grep '[m]ethserver' | awk '{print $2}')
How do I fix it? I'd like to get rid of this annoying message!
Obviously you're giving it bad input or it wouldn't be outputting a usage message. In any case, there is a tool to do this for you.
pkill methserver
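If you do want to keep a pipeline along these lines, the usage message comes from calling `kill` with an empty argument list when nothing matched. A sketch of a guard (the process name is just the one from the question, and `pgrep` replaces the `ps | grep | awk` chain):

```shell
# kill prints its usage message when handed no arguments at all,
# so only call it when pgrep actually found matching processes.
pids=$(pgrep -x methserver || true)
if [ -n "$pids" ]; then
    kill $pids   # unquoted on purpose: pids are whitespace-separated words
fi
```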
I have this very simple bash code that should kill a list of tail -f processes on a remote server.
old_tailf_pids=`ssh root@$server "ps -ef | grep 'tail -f -n +1 /opt/wd' | grep root | grep -v grep | sed -e \"s#root *\([0-9]\+\) .*#\1#g\""`
echo $old_tailf_pids
echo "Killing old tailfs..."
ssh root@$server "kill -9 $old_tailf_pids"
Output:
4007 5281 5906 8265 8823 9918 10477 11587 12213 12753 13396 13976 14558 15985 16788 18128 18762 19412 20109 21393 28924 29487 31542 32155
Killing old tailfs...
bash: line 1: 5281: command not found
bash: line 2: 5906: command not found
bash: line 3: 8265: command not found
bash: line 4: 8823: command not found
bash: line 5: 9918: command not found
...
It seems like the SSH command killed only the first PID and then tried to 'run' the rest of the PIDs. Any idea why?
Thanks
As is evident from the comments below the question, the variable contains a newline after each process ID, so you can use xargs in the remote ssh command:
ssh root@$server "xargs kill -9 <<< \"$old_tailf_pids\""
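Stripping away the ssh part, the failure and the fix can be reproduced locally: when a newline-separated list is interpolated into a command string, every line after the first is parsed as a new command, while xargs flattens the list onto one command line. The PIDs below are made up, and `echo` stands in for `kill` so the sketch is safe to run:

```shell
pids="111
222
333"
# xargs collapses the newline-separated list into a single argument
# vector, so the command sees all PIDs at once instead of the shell
# treating lines two and three as separate commands.
printf '%s\n' "$pids" | xargs echo kill -9
```

This prints `kill -9 111 222 333` on a single line.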
I often run the command
squeue -u $USER | tee >(wc -l)
where squeue is a Slurm command to see how many jobs you are running. This gives me both the output from squeue and automatically tells how many lines are in it.
How can I watch this command?
watch -n.1 "squeue -u $USER | tee >(wc -l)" results in
Every 0.1s: squeue -u randoms | tee >(wc -l) Wed May 9 14:46:36 2018
sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `squeue -u randoms | tee >(wc -l)'
From the watch man page:
Note that command is given to "sh -c" which means that you may need to use extra quoting to get the desired effect.
sh -c also does not support process substitution, the >(...) syntax you're using here.
Fortunately, that syntax isn't actually needed for what you're doing:
watch -n.1 'out=$(squeue -u "$USER"); echo "$out"; { echo "$out" | wc -l; }'
...or, if you really want to use your original code even at a heavy performance penalty (starting not just one but two new shells every tenth of a second -- first sh, and then bash):
bash_cmd() { squeue -u "$USER" | tee >(wc -l); } # create a function
export -f bash_cmd # export function to the environment
watch -n.1 'bash -c bash_cmd' # call function from bash started from sh started by watch
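If the goal is simply "the output plus its line count", awk can produce both in one process, which sidesteps the process-substitution problem entirely. A sketch, with fixed input standing in for the squeue output:

```shell
# Print every input line, then append the line count, in one awk
# process -- no process substitution needed, so it works under the
# plain sh -c that watch uses:
#   watch -n.1 "squeue -u $USER | awk '{ print } END { print NR }'"
printf 'job1\njob2\n' | awk '{ print } END { print NR }'
```

Here the demo prints the two lines followed by `2`.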
I use this code to kill a process with a PID file:
Process.kill 15, File.read('/tmp/pidfile').to_i
But the following two examples never work when I try:
system "kill `cat /tmp/file.pid`"
or
`kill \`cat /tmp/pidfile\``
output is:
sh: 1: kill: Usage: kill [-s sigspec | -signum | -sigspec] [pid | job]... or
kill -l [exitstatus]
Is there a problem with the backtick? Because in bash this works perfectly:
kill `cat /tmp/file.pid`
The string is not being interpolated. This does not run a cat command:
system "kill `cat /tmp/file.pid`"
Instead, you could write this as:
system "kill #{`cat /tmp/file.pid`}"
However, I'm unclear why you'd choose to do this over your original (working) method.
How do I get the correct return code from a unix command line application after I've piped it through another command that succeeded?
In detail, here's the situation :
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}  # when only the tar command fails, $? is still 0
$ echo $?
0
What I'd like to see is:
$ tar -cEvhf - -I ${sh_tar_inputlist} 2>${sh_tar_error_file} | gzip -5 -c > ${sh_tar_file}
$ echo $?
1
Does anyone know how to accomplish this?
Use ${PIPESTATUS[0]} to get the exit status of the first command in the pipe.
For details, see http://tldp.org/LDP/abs/html/internalvariables.html#PIPESTATUSREF
See also http://cfajohnson.com/shell/cus-faq-2.html for other approaches if your shell does not support $PIPESTATUS.
Look at $PIPESTATUS which is an array variable holding exit statuses. So ${PIPESTATUS[0]} holds the exit status of the first command in the pipe, ${PIPESTATUS[1]} the exit status of the second command, and so on.
For example:
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}
$ echo ${PIPESTATUS[0]}
To print out all statuses use:
$ echo ${PIPESTATUS[@]}
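A quick way to see this in action, with bash invoked explicitly since the array is bash-specific:

```shell
# false exits 1 and true exits 0; PIPESTATUS records both, whereas
# $? alone would only report the last command's status (0).
bash -c 'false | true; echo "${PIPESTATUS[@]}"'
```

This prints `1 0`.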
Here is a general solution using only POSIX shell and no temporary files:
Starting from the pipeline:
foo | bar | baz
exec 4>&1
error_statuses=`((foo || echo "0:$?" >&3) |
(bar || echo "1:$?" >&3) |
(baz || echo "2:$?" >&3)) 3>&1 >&4`
exec 4>&-
$error_statuses contains the status codes of any failed processes, in random order, with indexes to tell which command emitted each status.
# if "bar" failed, output its status:
echo $error_statuses | grep '1:' | cut -d: -f2
# test if all commands succeeded:
test -z "$error_statuses"
# test if the last command succeeded:
echo $error_statuses | grep '2:' >/dev/null
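The same pattern can be instantiated with real commands to see it work: `true`, `false` and `cat` stand in for foo, bar and baz, so only the middle stage fails:

```shell
exec 4>&1
error_statuses=`( (true  || echo "0:$?" >&3) |
                  (false || echo "1:$?" >&3) |
                  (cat   || echo "2:$?" >&3)) 3>&1 >&4`
exec 4>&-
# Only the middle command failed, so only its index:status pair appears.
echo "failed stages: $error_statuses"
```

Here `$error_statuses` ends up as `1:1`: stage 1 (counting from 0) exited with status 1.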
As others have pointed out, some modern shells provide PIPESTATUS to get this info. In classic sh, it's a bit more difficult, and you need to use a fifo:
#!/bin/sh
trap 'rm -rf $TMPDIR' 0
TMPDIR=$( mktemp -d )
mkfifo ${FIFO=$TMPDIR/fifo}
cmd1 > $FIFO &
cmd2 < $FIFO
wait $!
echo The return value of cmd1 is $?
(Well, you don't need to use a fifo. You can have the commands early in the pipe echo a status variable and eval that in the main shell, redirecting file descriptors all over the place and basically bending over backwards to check things, but using a fifo is much, much easier.)
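A runnable instance of the fifo approach, with a subshell that exits 3 standing in for cmd1 and cat standing in for cmd2 (both names are placeholders):

```shell
# TMPDIR is consulted by mktemp itself, so use a different name here.
WORKDIR=$(mktemp -d)
trap 'rm -rf "$WORKDIR"' 0
FIFO=$WORKDIR/fifo
mkfifo "$FIFO"
( exit 3 ) > "$FIFO" &    # "cmd1": produces no output, exits 3
cat < "$FIFO"             # "cmd2": reads until the writer closes
wait $!                   # wait on the background "cmd1" specifically
status=$?
echo "cmd1 exited with $status"
```

`wait $!` reports the background command's status, so this prints `cmd1 exited with 3` even though cat (the last foreground command) succeeded.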