Opening and closing a process using bash

Using Ubuntu, I would like to create a shell script (bash) for an Ubuntu server that will open an instance of firefox and then close that specific instance of the browser.
To open an instance of firefox, I can write:
firefox www.example.com
I have read that to search for all firefox instances, and to close them manually I can write:
ps aux | grep firefox
pidof firefox
kill #process#
But is there a way for me to search for the specific instance of firefox that I opened at the start?

You can use jobs to get the IDs of all running processes started from that shell (e.g. inside your script):
#!/bin/bash
firefox www.example.com &
PID=$(jobs -p)   # PID(s) of background jobs started from this shell
kill $PID
See help jobs for the options. Note that jobs lists all processes started from this shell, so if you follow this approach and want to kill multiple processes you might need to do some additional parsing of the output from jobs to find the correct process.
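For example, a minimal sketch of that parsing step, assuming firefox is the only match you care about (the ps/grep filter is illustrative, not part of the original answer):
#!/bin/bash
firefox www.example.com &
# kill only the background job whose command name matches firefox
for pid in $(jobs -p); do
    if ps -p "$pid" -o comm= | grep -q firefox; then
        kill "$pid"
    fi
done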

Start the process in the background, and remember its pid.
#!/bin/bash
firefox www.example.com &
declare -i PID=$!   # $! holds the PID of the most recent background process
# blah, blah, blah
kill ${PID}
If you're worried about firefox exiting and some other process being assigned ${PID} in the meantime, you could change the kill to something like the following to reduce the risk:
ps -p ${PID} | fgrep firefox && kill ${PID}
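If you want to confirm the process is gone, wait will reap the background child and report how it ended (a small sketch; wait is a bash builtin, so it must run in the same shell that started firefox):
kill ${PID}
wait ${PID} 2>/dev/null                 # returns once the child has exited
echo "firefox exited with status $?"    # 128+signal number if it was killed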

Related

sending ctrl-c in bash to perf command

The perf stat command in Linux runs until Ctrl-C is pressed. I am trying to use this command in a script to profile a loop. The recommended solution to simulate sending Ctrl-C is to issue a kill command with the -2 or -SIGINT flag.
However, this does not work for me. I am on RHEL.
The script more or less looks as follows:
for i in {1..12}
do
    pid=$1
    perf stat -e dTLB-loads -p $pid > perf.out &
    perf_pid=$!
    sleep 10
    kill -SIGINT $perf_pid
done
Even after the kill, the perf process is still active. All the Ctrl-C's take effect at the end, when the script finishes.
Reading the man page for perf, I came across the --control option, which seems to be the proper approach to profiling a portion of a running command.
However, this option is not available on RHEL.
I was able to find a workaround by using the -INT option for kill mentioned here. For some reason -2 or -SIGINT doesn't work on RHEL.
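For reference, here is the loop above with the workaround applied (only the signal spelling changes; whether -INT behaves differently from -SIGINT likely depends on the kill implementation in use, the shell builtin versus /bin/kill):
for i in {1..12}
do
    pid=$1
    perf stat -e dTLB-loads -p $pid > perf.out &
    perf_pid=$!
    sleep 10
    kill -INT $perf_pid   # -INT worked here where -2/-SIGINT did not
done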

How to run a shell script with the terminal closed, and stop the script at any time

What I usually do is pause my script, run it in the background and then disown it, like:
./script
^Z
bg
disown
However, I would like to be able to cancel my script at any time. If I have a script that runs indefinitely, I would like to be able to cancel it after a few hours or a day or whenever I feel like cancelling it.
Since you are having a bit of trouble following along, let's see if we can keep it simple for you. (This presumes you can write to /tmp; change as required.) Let's start your script in the background and create a PID file containing the PID of its process.
$ ./script & echo $! > /tmp/scriptPID
You can check the contents of /tmp/scriptPID
$ cat /tmp/scriptPID
######
Where ###### is the PID number of the running ./script process. You can further confirm with pidof script (which will return the same ######). You can use ps aux | grep script to view the number as well.
When you are ready to kill the ./script process, you simply pass the number (e.g. ######) to kill. You can do that directly with:
$ kill $(</tmp/scriptPID)
(or with the other methods listed in my comment)
You can add rm /tmp/scriptPID to remove the pid file after killing the process.
Look things over and let me know if you have any further questions.
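Putting the pieces together, a small stop helper might look like this (a sketch; it assumes the PID file created above and guards against a missing file):
#!/bin/bash
if [[ -f /tmp/scriptPID ]]; then
    kill "$(</tmp/scriptPID)" && rm /tmp/scriptPID
else
    echo "no PID file found; is the script running?" >&2
fi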

Stop script when gnome session ends

In Start Script when Gnome Starts Up it was asked how to automatically start a script on gnome login. But how do you automatically stop a long-running script on logout that was started on login? In my case there are two processes when I log in twice. Interestingly, the process started first does not reside under gnome-session anymore.
I would wrap the binary that gets executed in a simple bash script that saves the PID of the started process in a temporary file. If this file already exists, it skips the start of the application. Since the file is saved in the /tmp directory, everything gets deleted once you restart your computer.
#!/bin/bash
binary="git-cola"
temp_file="/tmp/my_${binary}_instance.pid"
if [[ -f ${temp_file} ]]
then
    echo "PID exists"
else
    # start the binary in the background and record its PID
    ${binary} &
    echo $! > ${temp_file}
fi
With a little more effort you can check whether the PID of the process is still running and restart it on login again (for example, if the process crashed or the other user closed it).
I actually don't use Gnome, so I can't tell you if there is a more elegant way to kill the process, like a logout hook. But once you have the PID of the process saved you can kill it with kill -9 PID. (See man kill for more gentle ways to end the process.)
This might not be a solution for stopping the process, but it does prevent it from starting twice.
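The stale-PID check mentioned above could look like this (a sketch reusing the same variables; kill -0 sends no signal, it only tests whether the process still exists):
if [[ -f ${temp_file} ]] && kill -0 "$(cat ${temp_file})" 2>/dev/null
then
    echo "already running"
else
    ${binary} &
    echo $! > ${temp_file}
fi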

Bash script that will survive disconnection, but not user break

I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I can solve the first part of it like this:
#!/bin/bash
cmd='
#commands here, avoiding single quotes...
'
nohup bash -c "$cmd" &
tail -f nohup.out
But pressing Ctrl+C obviously just kills the tail process, not the main body. Can I have both? Maybe using Screen?
I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I think this is exactly the answer to the question you formulated, this one without screen:
#!/bin/bash
cmd=$(cat <<EOF
# commands here
EOF
)
nohup bash -c "$cmd" &
# store the process id of the nohup process in a variable
CHPID=$!
# whenever ctrl-c is pressed, kill the nohup process before exiting
trap "kill -9 $CHPID" INT
tail -f nohup.out
Note however that nohup is not reliable. When the invoking user logs out, chances are that nohup also quits immediately. In that case disown works better:
bash -c "$cmd" &
CHPID=$!
disown
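One detail to watch: unlike nohup, this variant does not redirect output, so the tail -f nohup.out line from the first snippet has nothing to follow. A sketch with explicit redirection (the log file name is arbitrary):
bash -c "$cmd" > cmd.out 2>&1 &
CHPID=$!
disown
tail -f cmd.out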
This is probably the simplest form using screen:
screen -S SOMENAME script.sh
Then, if you get disconnected, on reconnection simply run:
screen -r SOMENAME
Ctrl+C should continue to work as expected.
Fact 1: When a terminal (xterm for example) gets closed, the shell is supposed to send a SIGHUP ("hangup") to any processes running in it. This harkens back to the days of analog modems, when a program needed to clean up after itself if mom happened to pick up the phone while you were online. The signal could be trapped, so that a special function could do the cleanup (close files, remove temporary junk, etc). The concept of "losing your connection" still exists even though we use sockets and SSH tunnels instead of analog modems. (Concepts don't change; all that changes is the technology we use to implement them.)
Fact 2: The effect of Ctrl-C depends on your terminal settings. Normally, it will send a SIGINT, but you can check by running stty -a in your shell and looking for "intr".
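For example (the exact output varies with your terminal configuration; ^C is the usual default):
$ stty -a | grep intr
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;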
You can use these facts to your advantage, using bash's trap command. For example try running this in a window, then press Ctrl-C and check the contents of /tmp/trapped. Then run it again, close the window, and again check the contents of /tmp/trapped:
#!/bin/bash
trap "echo 'one' > /tmp/trapped" 1
trap "echo 'two' > /tmp/trapped" 2
echo "Waiting..."
sleep 300000
For information on signals, you should be able to man signal (FreeBSD or OSX) or man 7 signal (Linux).
(For bonus points: See how I numbered my facts? Do you understand why?)
So ... to your question. To "survive" disconnection, you want to specify behaviour that will be run when your script traps SIGHUP.
(Bonus question #2: Now do you understand where nohup gets its name?)
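Putting both facts together, here is a sketch of a script that survives disconnection but still honours Ctrl-C (the message and exit code are illustrative; 130 is the conventional 128+SIGINT):
#!/bin/bash
trap '' HUP                                # ignore hangup, so the script survives disconnection
trap 'echo "interrupted"; exit 130' INT    # exit cleanly on Ctrl-C
echo "Working..."
sleep 300000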

How to make ssh to kill remote process when I interrupt ssh itself?

In a bash script I execute a command on a remote machine through ssh. If the user breaks the script by pressing Ctrl+C it only stops the script - not even the ssh client is stopped. Moreover, even if I kill the ssh client, the remote command is still running...
How can I make bash kill the local ssh client and the remote command invocation on Ctrl+C?
A simple script:
#!/bin/bash
ssh -n -x root@db-host 'mysqldump db' -r file.sql
Eventual I found a solution like that:
#!/bin/bash
ssh -t -x root@db-host 'mysqldump db' -r file.sql
So - I use '-t' instead of '-n'.
Removing '-n', or using a different user than root, does not help.
When your ssh session ends, your shell will get a SIGHUP. (hang-up signal). You need to make sure it sends that on to all processes started from it. For bash, try shopt -s huponexit; your_command. That may not work, because the man page says huponexit only works for interactive shells.
I remember running into this with users running jobs on my cluster, and the question of whether they had to use nohup or not (to get the opposite behaviour of what you want), but I can't find anything in the bash man page about whether child processes ignore SIGHUP by default. Hopefully huponexit will do the trick. (You could put that shopt in your .bashrc instead of on the command line, I think.)
Your ssh -t should work, though, since when the connection closes, reads from the terminal will get EOF or an error, and that makes most programs exit.
Do you know what the options you're passing to ssh do? I'm guessing not. The -n option redirects input from /dev/null, so the process you're running on the remote host probably isn't seeing SIGINT from Ctrl-C.
Now, let's talk about how bad an idea it is to allow remote root logins:
It's a really, really bad idea. Have a look at HOWTO: set up ssh keys for some suggestions on how to securely manage remote process execution over ssh. If you need to run something with privileges remotely, you'll probably want a solution that involves an ssh public key with an embedded command and a script that runs as root courtesy of sudo.
trap "some_command" SIGINT
will execute some_command locally when you press Ctrl+C . help trap will tell you about its other options.
Regarding the ssh issue, I don't know much about ssh. Maybe you can make it call ssh -n -x root@db-host 'killall mysqldump' instead of some_command to kill the remote command?
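Combining the trap with the remote cleanup might look like this (a sketch; it assumes key-based login so the ssh call inside the trap does not prompt for a password, and killall by name is a blunt instrument if several dumps run at once):
#!/bin/bash
# kill the remote dump if the local script is interrupted
trap 'ssh -n -x root@db-host "killall mysqldump"; exit 130' INT
ssh -n -x root@db-host 'mysqldump db' -r file.sql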
What if you don't want to require using "ssh -t" (for those as forgetful as I am)?
I stumbled upon a solution by looking at the parent PID, because CTRL/C in the initiating session results in the ssh-launched process on the remote host exiting, although its child process continues. By way of example, here's my script that lives on the remote server.
#!/bin/bash
Answer=(Alive Dead)
Index=0
while [ ${Index} -eq 0 ]; do
    if ! kill -0 ${PPID} 2> /dev/null ; then Index=1; fi
    echo "Parent PID ${PPID} is ${Answer[$Index]} at $(date +%Y%m%d%H%M%S%Z)" > ~/NowTime.txt
    sleep 1
done
I then invoke it with "ssh remote_server ./test_script.sh"
"watch cat ~/NowTime.txt" on the remote server shows the timestamp in the file increasing and declaring that the parent process is alive; once I hit CTRL/C in the launching process, the script on the remote server notes that its parent process has died, and the script exits.
