Telnet Process Continues after Bash Script - bash

I am running the following script
#! /bin/bash
HOSTLIST="192.168.0.5 192.168.22.1"
DELAY=3
stty echo
exec 4>&1
for HOST in $HOSTLIST ; do
telnet $HOST 135 | grep Connected & pid=$!
echo "Checking $HOST"
sleep $DELAY
kill -9 $pid &> /dev/null
done
However, when it finishes, the telnet connections are still being attempted in the background, which spams annoying "telnet: unable to connect" errors for the next few moments. I added the kill to stop this, but it still happens. Am I doing something wrong when killing the process?
Also, I have to use telnet; I can't use netcat or nmap.

The pid you are trying to kill is the pid of the grep, since $! is the pid of the most recently executed background command (for a background pipeline, that is its last element). If you hadn't thrown away stderr when trying to kill, the error output might have provided some clue...
BTW, kill -9 is a serious code smell. Any well-behaved process can be killed by at least one of -INT, -HUP, -TERM or -QUIT. You should never need kill -KILL; it's bad because it gives the process no opportunity to clean up its mess.
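Given that, one fix is to avoid the pipeline so that $! is the telnet pid itself. A minimal sketch, assuming it's acceptable to capture telnet's output in a temporary file and grep it after the kill:
#!/bin/bash
HOSTLIST="192.168.0.5 192.168.22.1"
DELAY=3
for HOST in $HOSTLIST ; do
    echo "Checking $HOST"
    OUT=$(mktemp)
    # No pipeline here, so $! really is the telnet pid
    telnet "$HOST" 135 > "$OUT" 2>/dev/null &
    pid=$!
    sleep "$DELAY"
    kill "$pid" 2>/dev/null
    grep Connected "$OUT"
    rm -f "$OUT"
done
A plain kill (SIGTERM) is enough here; telnet exits cleanly on it.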

Related

How to trap and exit all child processes

I am trying to get hollywood to run in a way that I can exit it with a normal Ctrl+C signal.
Currently I have to press Ctrl+C a bunch of times just to get stuck in the tmux instance that hollywood created. Looking at the source code, there is a trap command:
trap "pkill -f -9 lib/hollywood/ >/dev/null 2>&1; exit 0" INT
But apparently that is not enough. I've tried replacing it with several different ones, but none of them worked:
trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM EXIT
trap 'kill $(jobs -p)' EXIT
trap 'pkill -f -9 lib/hollywood/ >/dev/null 2>&1; kill -9 $(ps -eo pid,command | grep tmux | grep byobu | grep hollywood | sed -r "s/^[^0-9]*([0-9]+).*/\1/") >/dev/null 2>&1; exit 0' INT
trap "exit" INT TERM
trap "kill 0" EXIT
I've tried several answers from this question: How do I kill background processes / jobs when my shell script exits?
But none of those worked. (I still had to press Ctrl+C a bunch of times and then manually exit the tmux session.)
Is there a simple way to fix this? (I would prefer not to have to mess with the source code too much.)
They aren't background processes; they are running in different tmux panes, so tmux is the parent process, not the hollywood script. That's why most of the commands you list have no effect.
pkill should work if the pattern is right. Does the pkill work if you run it from outside tmux? It looks like each hollywood widget installs its own trap with the same pkill so it should kill them all if any one is killed.
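If in doubt about the pattern, something like this run from outside tmux shows what pkill -f would hit (the -a flag to print full command lines is an assumption about your pgrep build; it is present in Linux procps):
# List matching pids with their full command lines to verify the pattern
pgrep -af 'lib/hollywood/'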
Alternatively, if you are not running it in your own tmux server, you could simply make C-c kill the tmux window - change hollywood to do something like this when it creates the tmux session (around line 78):
$tmux_launcher bind -n C-c kill-window
If you have an existing tmux that you don't want to kill, this is harder because you want C-c for other purposes. You could change your C-c binding to something like:
bind -n C-c if -F '#{==:#{window_name},hollywood}' 'kill-window' 'send C-c'
But you might not want to do this unless you are using hollywood a lot.
It might be more useful to ask this in the hollywood issue tracker than here TBH.

Kill a background task on Mac

I have a task (the appium server) running in the background. I started it by running appium &. After running my tests, I need to kill appium. I tried running kill <pid_of_appium>, but the task is not killed immediately; I have to manually press the Enter key before it dies.
I initially thought this was a problem with appium alone, but several other background tasks behave the same way: they are only killed after I press Enter. How can I handle this in code, as I need to stop the background task programmatically with a shell command?
Be careful using kill -9: it can corrupt data and cause related problems. I found this script, which first tries to kill the process with signal -15 (SIGTERM) and only falls back to signal -9 (SIGKILL) as a last resort.
#!/bin/bash
# Get the PID of the process
PID=$(pgrep appium)
# Number of seconds to wait before using "kill -9"
WAIT_SECONDS=10
# Counter to keep count of how many seconds have passed
count=0
while kill $PID > /dev/null
do
    # Wait for one second
    sleep 1
    # Increment the second counter
    ((count++))
    # Has the process been killed? If so, exit the loop.
    if ! ps -p $PID > /dev/null ; then
        break
    fi
    # Have we exceeded $WAIT_SECONDS? If so, kill the process
    # with "kill -9" and exit the loop.
    if [ $count -gt $WAIT_SECONDS ]; then
        kill -9 $PID
        break
    fi
done
echo "Process has been killed after $count seconds."
If a task doesn't respond to a general kill command, you can try kill -9 instead. Adding the -9 causes the kill program to dispatch a much more ruthless assassin to carry out the deed than the normal version does.
Give pkill and pgrep a try:
pgrep, pkill -- find or signal processes by name
To find the process and print the PID you can use:
pgrep -l appium
To kill all the processes you can do:
pkill appium
In case you want to send a KILL signal (signal 9), you could do:
pkill -9 appium

Wait for last created process (daemon which forks) to end

I'm writing a wrapper script to use in inittab.
This script starts a daemon and waits for it to terminate.
Here's what I have currently:
#!/bin/bash
/usr/local/bin/mydaemon --lots_of_params_here
while kill -0 `echo $!` 2> /dev/null; do sleep 1; done;
The problem is with the second line; it just returns immediately. If I instead do:
while kill -0 `pgrep mydaemon` 2> /dev/null; do sleep 1; done;
It all works fine, but this isn't a good solution for me as I have other scripts with the prefix mydaemon.
What am I doing wrong?
EDIT:
The problem seems to be related to the daemon fork(). So, I always get the parent pid in $!. I'm looking for ways to solve this problem. Maybe I should use pid files and have mydaemon write its pid there.
You can work around your issue in the following way.
#!/bin/bash
/usr/local/bin/mydaemon --lots_of_params_here &
wait $!
The wait command will block until the process completes, then return.
If you want to wait only after running some other commands first, store the PID in another variable and use that:
#!/bin/bash
/usr/local/bin/mydaemon --lots_of_params_here &
mypid=$!
### Some other commands
wait $mypid
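If mydaemon really does fork, as the edit suggests, then waiting on $! cannot work: the parent you launched exits immediately. A minimal sketch of the pidfile approach from the edit, assuming mydaemon can be told to write its post-fork pid to a file (the --pidfile flag here is hypothetical; check your daemon's options):
#!/bin/bash
# Hypothetical flag: the daemon writes its post-fork pid to this file.
/usr/local/bin/mydaemon --lots_of_params_here --pidfile /var/run/mydaemon.pid
# Poll the pid the daemon recorded itself, not $! (the exited parent).
DPID=$(cat /var/run/mydaemon.pid)
while kill -0 "$DPID" 2> /dev/null; do sleep 1; done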

Bash script, pid of xterm process

I have a small problem. In a bash script I need to run an xterm that does something like this:
xterm -e "(time ./program.exe 127.0.0.1) 2> out.txt"
How can I get the pid of this process?
I need to wait for it to finish so I can write the output and merge it with another file.
Thanks so much to all!
Basically, you start the process in the background by adding & to the end of your command, get the last started pid with $!, and wait for the process to complete with wait. So, something like:
xterm -e "(time ./program.exe 127.0.0.1) 2> out.txt" &
pid=$!
wait $pid
should work.

Starting a process over ssh using bash and then killing it on sigint

I want to start a couple of jobs on different machines using ssh. If the user then interrupts the main script I want to shut down all the jobs gracefully.
Here is a short example of what I'm trying to do:
#!/bin/bash
trap "aborted" SIGINT SIGTERM
aborted() {
kill -SIGTERM $bash2_pid
exit
}
ssh -t remote_machine /foo/bar.sh &
bash2_pid=$!
wait
However, the bar.sh process is still running on the remote machine. If I run the same commands in a terminal window, the process on the remote host does get shut down.
Is there an easy way to make this happen when I run the bash script? Or do I need to make it log on to the remote machine, find the right process and kill it that way?
Edit:
Seems like I have to go with option B, killing the remote script through another ssh connection.
So now I want to know: how do I get the remote pid?
I've tried something along the lines of:
remote_pid=$(ssh remote_machine '{ /foo/bar.sh & } ; echo $!')
This doesn't work since it blocks.
How do I wait for a variable to print and then "release" a subprocess?
It would definitely be preferable to keep your cleanup managed by the ssh that starts the process rather than moving in for the kill with a second ssh session later on.
When ssh is attached to your terminal, it behaves quite well. However, detach it from your terminal and it becomes (as you've noticed) a pain to signal or manage remote processes with. You can shut down the link, but not the remote processes.
That leaves you with one option: Use the link as a way for the remote process to get notified that it needs to shut down. The cleanest way to do this is by using blocking I/O. Make the remote read input from ssh and when you want the process to shut down; send it some data so that the remote's reading operation unblocks and it can proceed with the cleanup:
command & read; kill $!
This is what we would want to run on the remote. We invoke the command that we want to run remotely; we then read a line of text (which blocks until one arrives), and when it does, we signal the command to terminate.
To send the signal from our local script to the remote, all we need to do now is send it a line of text. Unfortunately, Bash does not give you a lot of good options here. At least, not if you want to be compatible with bash < 4.0.
With bash 4 we can use co-processes:
coproc ssh user@host 'command & read; kill $!'
trap 'echo >&"${COPROC[1]}"' EXIT
...
Now, when the local script exits (don't trap on INT, TERM, etc.; just EXIT), it sends a newline to the file in the second element of the COPROC array. That file is a pipe connected to ssh's stdin, effectively routing our line to ssh. The remote command reads the line, ends the read and kills the command.
Before bash 4 things get a bit harder since we don't have co-processes. In that case, we need to do the piping ourselves:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
This should work in pretty much any bash version.
Try this:
ssh -tt host command </dev/null &
When you kill the local ssh process, the remote pty will close and SIGHUP will be sent to the remote process.
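Applied to the script from the question, that might look like the following sketch (remote_machine and /foo/bar.sh are from the question; the trap wiring is an assumption about how you want to hook it up):
#!/bin/bash
# -tt forces remote pty allocation even though stdin is not a terminal;
# killing the local ssh then closes the pty and the remote job gets SIGHUP.
ssh -tt remote_machine /foo/bar.sh < /dev/null &
ssh_pid=$!
trap 'kill $ssh_pid' INT TERM
wait $ssh_pid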
Referencing the answer by lhunath and https://unix.stackexchange.com/questions/71205/background-process-pipe-input, I came up with this script:
run.sh:
#!/bin/bash
log="log"
eval "$@" \&
PID=$!
echo "running" "$@" "in PID $PID" > $log
{ (cat <&3 3<&- >/dev/null; kill $PID; echo "killed" >> $log) & } 3<&0
trap "echo EXIT >> $log" EXIT
wait $PID
The difference being that this version kills the process when the connection is closed, but also returns the exit code of the command when it runs to completion.
$ ssh localhost ./run.sh true; echo $?; cat log
0
running true in PID 19247
EXIT
$ ssh localhost ./run.sh false; echo $?; cat log
1
running false in PID 19298
EXIT
$ ssh localhost ./run.sh sleep 99; echo $?; cat log
^C130
running sleep 99 in PID 20499
killed
EXIT
$ ssh localhost ./run.sh sleep 2; echo $?; cat log
0
running sleep 2 in PID 20556
EXIT
For a one-liner:
ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
For convenience:
HUP_KILL="& PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
ssh localhost "sleep 99 $HUP_KILL"
Note: kill 0 may be preferred to kill $PID depending on the behavior needed with regard to spawned child processes. You can also kill -HUP or kill -INT if you desire.
Update:
A secondary job control channel is better than reading from stdin.
ssh -n -R9002:localhost:8001 -L8001:localhost:9001 localhost ./test.sh sleep 2
Set job control mode and monitor the job control channel:
set -m
trap "kill %1 %2 %3" EXIT
(sleep infinity | netcat -l 127.0.0.1 9001) &
(netcat -d 127.0.0.1 9002; kill -INT $$) &
"$@" &
wait %3
Finally, here's another approach and a reference to a bug filed on openssh:
https://bugzilla.mindrot.org/show_bug.cgi?id=396#c14
This is the best way I have found to do this. You want something on the server side that attempts to read stdin and then kills the process group when that fails, but you also want a stdin on the client side that blocks until the server side process is done and will not leave lingering processes like <(sleep infinity) might.
ssh localhost "sleep 99 < <(cat; kill -INT 0)" <&1
It doesn't actually seem to redirect stdout anywhere but it does function as a blocking input and avoids capturing keystrokes.
The solution for bash 3.2:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
doesn't work. The ssh command does not appear in the ps list on the "client" machine until I echo something into the pipe; opening a FIFO for reading blocks until a writer opens it, so the redirection stalls ssh before it even starts. The process that then appears on the "server" machine is just the command itself, not the read/kill part.
Writing into the pipe a second time does not terminate the process.
So, summarizing: I need to write into the pipe once just for the command to start up, and writing again does not kill the remote command as expected.
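One common workaround for that blocking open, sketched here as an assumption rather than a verified fix, is to hold the FIFO open read-write on a spare file descriptor so the reading side never waits for a writer:
mkfifo /tmp/mysshcommand
# Opening a FIFO read-write never blocks, and keeping fd 3 open means
# ssh's read end sees a writer from the start, so ssh starts immediately.
exec 3<>/tmp/mysshcommand
ssh user@host 'command & read; kill $!' <&3 &
trap 'echo >&3; exec 3>&-; rm /tmp/mysshcommand' EXIT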
You may want to consider mounting the remote file system and running the script from the master box. For instance, if your kernel is compiled with fuse (you can check with the following):
/sbin/lsmod | grep -i fuse
You can then mount the remote file system with the following command:
sshfs user@remote_system: mount_point
Now just run your script on the file located in mount_point.
