I have a shell script that starts a server. I ssh into my server and run the shell script. As soon as it starts, it logs everything to the console and the console never returns. The problem is that when I close my machine, the ssh connection is disconnected and the server I started is shut down. I guess I need to start the server and then return from the shell. Here is what I have so far:
#!/bin/bash
java -Xmx1G -Dhttp.port=8080 -Dconfig.file=MyProject/conf/application.conf -cp ".:MyProject/lib/*" play.core.server.NettyServer .
exit 0
Any suggestions on how to return after calling this shell script?
After you ssh to the server, just backgrounding your script (./myscript &) will not daemonize it. You must disconnect stdin, stdout, and stderr, and make it ignore the hangup signal (SIGHUP).
nohup ./myscript 0<&- &>/dev/null &
will do the job. Or, to capture all output:
nohup ./myscript 0<&- &> my.admin.log.file &
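Applied to the script in the question, a minimal sketch could look like this (start-server.sh, server.log, and server.pid are illustrative names; the PID capture is an extra so you can stop the server later):
# Detach stdin, capture all output, background, and record the PID.
nohup ./start-server.sh 0<&- &> server.log &
echo $! > server.pid
# Later, from another session:
# kill "$(cat server.pid)"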
To avoid the script terminating when the ssh session closes, use nohup ("no hangup") with output redirection to a log file:
nohup bash /path/to/startScript.sh > script.log 2>&1 &
You can redirect stdout and stderr to files, background and disown the process (or nohup it) and then exit the script.
However, the correct way to do this is to use some kind of process manager daemon like upstart.
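As a sketch of that approach, a minimal Upstart job for the server above might look like this (the job name is made up, and you would want absolute paths in practice):
# /etc/init/playserver.conf -- hypothetical Upstart job
description "Play NettyServer"
start on runlevel [2345]
stop on runlevel [016]
respawn
# Command taken from the question; absolute paths are safer here
exec java -Xmx1G -Dhttp.port=8080 -Dconfig.file=MyProject/conf/application.conf -cp ".:MyProject/lib/*" play.core.server.NettyServer .
You would then manage it with start playserver and stop playserver, and the respawn stanza restarts the server if it dies.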
Related
We have a script which does some processing and triggers a job in the background using nohup. When we schedule this script from Oracle OEM (or any scheduler), I see the following error and the status shows as failed, but the script actually finished without issue. How do I exit the script correctly when the background job is started with nohup?
Remote operation finished but process did not close its stdout/stderr
file: test.sh
#!/bin/bash
# do some processing
...
nohup ./start.sh 2000 &
# end of the script
By executing start.sh in this manner, you are allowing it to claim partial ownership of test.sh's output file descriptors (stdout/stderr). So whereas most bash scripts have their file descriptors closed for them by the operating system when they exit, test.sh's file descriptors cannot be closed, because start.sh still has a claim to them.
The solution is to not let start.sh claim the same output file descriptors as test.sh is using. If you don't care about its output, you can launch it like this:
nohup ./start.sh 2000 1>/dev/null 2>/dev/null &
which tells the new process to send both its stdout and stderr to /dev/null. If you do care about its output, then just capture it somewhere more meaningful:
nohup ./start.sh 2000 1>/path/to/stdout.txt 2>/path/to/stderr.txt &
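Putting it together, a corrected test.sh would look something like this (the log paths are placeholders):
#!/bin/bash
# do some processing
...
# Redirect the background job's output so it holds no claim on this
# script's stdout/stderr, and the scheduler sees a clean exit.
nohup ./start.sh 2000 1>/path/to/stdout.txt 2>/path/to/stderr.txt &
# end of the script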
The problem is that each command produces continuous output and does not let me execute any more commands. Each command can only be stopped by pressing CTRL+C or killing the session. Executing one command per terminal window works but is time-consuming and inefficient. This question relates to the VLC application, which outputs video status information until killed. Seeing the output from each command is not necessary.
As mentioned by Doon, & (to background the process) will do the trick.
However, the process will die if you close the terminal. If you want the backgrounded process not to die if you close the terminal it was started in, you can prefix the process with nohup.
Extended from example above:
nohup command1 > /dev/null 2>&1 &
nohup command2 > /dev/null 2>&1 &
...
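If you have many such commands, a loop keeps this manageable. A sketch for the VLC case, assuming hypothetical playlist files:
# Start one detached VLC instance per playlist, discarding the status output.
for playlist in work1.xspf work2.xspf work3.xspf; do
    nohup vlc "$playlist" > /dev/null 2>&1 &
done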
I've started ssh -N <somehost> & in a bash script (to create a tunnel). The connection persists after the script ends, and I can see with ps that the ssh process has detached.
I am currently killing the background job with kill $(jobs -p), but is there a better way to do that?
Do you manually end your script?
If so: try to catch the QUIT signal (or others) inside your script using the trap builtin command, then kill ssh.
Else: kill ssh at the end of your script.
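A minimal sketch of the trap approach (somehost stands in for your real host):
#!/bin/bash
# Open the tunnel in the background and remember its PID.
ssh -N somehost &
tunnel_pid=$!
# Ensure the tunnel is killed when the script exits, normally or via signal.
trap 'kill "$tunnel_pid" 2>/dev/null' EXIT INT QUIT TERM
# ... work that uses the tunnel ...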
I ssh to another server and run a shell script like this:
nohup ./script.sh 1>/dev/null 2>&1 &
Then I type exit to leave the server. However, it just hangs. The server is Solaris.
How can I exit properly without hanging?
Thanks.
I assume that this script is a long-running one. In that case, you need to detach the process from the terminal that closes when you terminate your ssh session.
Actually, you have already done most of the work by redirecting both stdout and stderr to /dev/null; however, you didn't do that for stdin.
I used the test case of:
ssh localhost
nohup sleep 10m &> /dev/null &
^D
# hangs
While
ssh localhost
nohup sleep 10m &> /dev/null < /dev/null &
^D
# exits
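Applied to the original command, the only change needed is the stdin redirection:
nohup ./script.sh 1>/dev/null 2>&1 0</dev/null &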
I second the recommendation to use the excellent GNU screen, which will do this service for you, among others.
Oh, and have you considered running the script directly and not within a shell? I.e.:
ssh user@host script.sh
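The redirections can also be folded into a one-shot remote invocation (user@host is a placeholder), so the ssh command returns immediately:
ssh user@host 'nohup ./script.sh > /dev/null 2>&1 < /dev/null &'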
If you're trying to leave a command running remotely after you close your SSH link, I strongly recommend you use screen and learn to detach the screen. That's much better than leaving background processes around; it also lets you reconnect and see what the process is up to.
Since you haven't provided us with script.sh, I don't think we can know for sure why the command is hanging.
You can use the escape sequence:
~.
Typed at the start of a line, this closes the ssh session.
sh -c ./script.sh &
I created a (very) simple AppleScript app to run Firefox in the background and exit. (The reason is I have different profiles for work and home.) My script is basically:
do shell script "/Applications/firefox.app/Contents/MacOS/firefox -no-remote -P 'Personal' &"
It works, but the script/app doesn't exit until I quit Firefox. How can I fix that?
You need to redirect stdout and stderr somewhere. The do shell script command knows that the pipes it set up for the program's stdout and stderr are still open, so it waits for them to be closed.
do shell script "/Applications/firefox.app/Contents/MacOS/firefox -no-remote -P 'Personal' > /dev/null 2>&1 &"