Multiple processes from one bash script [duplicate] - bash

I'm trying to use a shell script to start a command. I don't care if/when/how/why it finishes. I want the process to start and run, but I want to be able to get back to my shell immediately...

You can just run the script in the background:
$ myscript &
Note that this is different from putting the & inside your script, which probably won't do what you want.
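For illustration, a minimal sketch (myscript and some_cmd are placeholder names):
$ ./myscript &     # the whole script becomes a background job in your interactive shell
whereas, inside myscript itself:
some_cmd &         # backgrounds some_cmd relative to the script's own non-interactive shell;
                   # the script itself still runs in the foreground of whoever called it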

Everyone just forgot disown. So here is a summary:
& puts the job in the background:
it makes the job block on attempting to read input, and
it makes the shell not wait for its completion.
disown removes the process from the shell's job control, but it still leaves it connected to the terminal.
One of the results is that the shell won't send it a SIGHUP (if the shell receives a SIGHUP, it also sends a SIGHUP to the process, which normally causes the process to terminate).
And obviously, it can only be applied to background jobs (because you cannot enter it while a foreground job is running).
nohup disconnects the process from the terminal, redirects its output to nohup.out and shields it from SIGHUP:
the process won't receive any SIGHUP that is sent.
It's completely independent from job control and could in principle be used also for foreground jobs (although that's not very useful).
It is usually used with & (as a background job).
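A quick sketch of the three behaviors described above (long_task is a placeholder name):
$ long_task &             # background job; still in the shell's job table and tied to the terminal
$ long_task & disown      # background job, then removed from job control: no SIGHUP from the shell
$ nohup long_task &       # ignores SIGHUP; output goes to nohup.out unless redirected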

nohup cmd
doesn't hang up when you close the terminal. Output goes to nohup.out by default.
You can combine this with backgrounding,
nohup cmd &
and get rid of the output,
nohup cmd > /dev/null 2>&1 &
You can also disown an already-running command: type cmd, then Ctrl-Z, bg, disown.

Alternatively, after you got the program running, you can hit Ctrl-Z which stops your program and then type
bg
which puts your last stopped program in the background. (Useful if you started something without '&' and still want it in the background without restarting it.)
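For example, the full sequence for a program already running in the foreground (long_task is a placeholder; job-control output abridged):
$ long_task
^Z                  # Ctrl-Z stops the foreground job
[1]+  Stopped       long_task
$ bg                # resume it in the background
$ disown %1         # optional: also detach it from job control so it survives SIGHUP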

screen -m -d command starts the command in a detached session. You can use screen -r to attach to the started session. It is a wonderful tool, extremely useful also for remote sessions. Read more at man screen.
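For instance (long_task and mysession are placeholder names):
$ screen -dmS mysession long_task   # start long_task in a detached session named mysession
$ screen -ls                        # list running screen sessions
$ screen -r mysession               # reattach; press Ctrl-a then d to detach again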

Related

How to prevent nohup from "clogging" the command line?

I want to write a bash script that runs two commands in the background. I am using nohup for this:
nohup cmd1 &
nohup cmd2 &
However, only the 1st command runs in the background.
When I run nohup cmd1 & manually on the command line, I first type nohup cmd1 & and hit Enter; this starts the process.
But then I need to hit Enter again before I can type another command.
I think this is "clogging" up the command line, and is causing my bash script to get stuck at the first nohup ... & command.
Is there a way to prevent this?
Nothing is "clogged". The first command, running in the background, prints some output after your shell prints its next prompt. The shell is waiting for you to type a command, even though the cursor is no longer on the same line as the prompt. That extra Enter is an empty command, causing the shell to print another prompt. It's harmless but unnecessary.
Let me say something about nohup, because I'm not sure you're clear on what it is doing. In short, the nohup command is not necessary to run a process in the background. The ampersand at the end of the line is what does that.
nohup prevents the background process from receiving SIGHUP (hup as in "hang up") if you close the terminal where the starting shell runs. SIGHUP would effectively terminate the process.
If started with nohup, the process will not receive that signal and will continue running, owned by the init process (pid 1), when the terminal is closed.
Furthermore, the nohup command redirects the standard output of the controlled process to a file, meaning it will not appear on screen any more. By default this file is called nohup.out.
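If you'd rather keep the background output from ever landing after your prompt, redirect it explicitly; a minimal sketch using the cmd1/cmd2 names from the question:
nohup cmd1 > cmd1.log 2>&1 &   # stdout and stderr go to a log file instead of the terminal
nohup cmd2 > cmd2.log 2>&1 &   # nothing prints after the prompt, so no extra Enter is needed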

Running bash script does not return to terminal when using ampersand (&) to run a subprocess in the background

I have a script (let's call it parent.sh) that makes 2 calls to a second script (child.sh) that runs a java process. The child.sh scripts are run in the background by placing an & at the end of the line in parent.sh. However, when I run parent.sh, I need to press Ctrl+C to return to the terminal. What is the reason for this? Is it something to do with the fact that the child.sh processes are running under the parent.sh process, so parent.sh doesn't die until its children do?
parent.sh
#!/bin/bash
child.sh param1a param2a &
child.sh param1b param2b &
exit 0
child.sh
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#email.com
As you can see, I don't want to run the java process in the background, because I want to send a mail out when the process dies. Doing it as above works fine from a functional standpoint, but I would like to know how I can get it to return to the terminal after executing parent.sh.
What I ended up doing was to change parent.sh to the following:
#!/bin/bash
child.sh param1a param2a > startup.log &
child.sh param1b param2b > startup2.log &
exit 0
I would not have come to this solution without your suggestions and root cause analysis of the issue. Thanks!
And apologies for my inaccurate comment. (There was no input; I answered from memory, and I remembered incorrectly.)
The following link from the Linux Documentation Project suggests adding a wait after your mail command in child.sh:
http://tldp.org/LDP/abs/html/x9644.html
Summary of the above document
Within a script, running a command in the background with an ampersand (&)
may cause the script to hang until ENTER is hit. This seems to occur with
commands that write to stdout. It can be a major annoyance.
....
....
As Walter Brameld IV explains it:
As far as I can tell, such scripts don't actually hang. It just
seems that they do because the background command writes text to
the console after the prompt. The user gets the impression that
the prompt was never displayed. Here's the sequence of events:
1. Script launches background command.
2. Script exits.
3. Shell displays the prompt.
4. Background command continues running and writing text to the console.
5. Background command finishes.
6. User doesn't see a prompt at the bottom of the output, thinks the script is hanging.
If you change child.sh to look like the following you shouldn't experience this annoyance:
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#gmail.com
wait
Or as @SebastianStigler states in a comment to your question above:
Add a > /dev/null at the end of the line with mail. mail will otherwise try to start its interactive mode.
This will cause the mail command to write to /dev/null rather than stdout which should also stop this annoyance.
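Putting both suggestions together, child.sh might look like this (a sketch; the wait is the TLDP suggestion and the redirect is @SebastianStigler's):
#!/bin/bash
java com.test.Main
# redirect mail's stdout so it cannot write to the terminal or fall into interactive mode
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user@email.com > /dev/null
wait   # the TLDP-suggested wait, so the script does not appear to hang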
Hope this helps
The process was still linked to the controlling terminal because STDOUT needs somewhere to go. You solved that problem by redirecting to a file ( > startup.log ).
If you're not interested in the output, discard STDOUT completely ( >/dev/null ).
If you're not interested in errors, either, discard both ( &>/dev/null ).
If you want the processes to keep running even after you log out of your terminal, use nohup — that effectively disconnects them from what you are doing and leaves them to quietly run in the background until you reboot your machine (or otherwise kill them).
nohup child.sh param1a param2a &>/dev/null &

executing a script which runs even if i log off

So, I have a long-running script (on the order of a few days), say execute.sh, which I am planning to execute on a server on which I have a user account...
Now, I want to execute this script so that it keeps running even if I log off or disconnect from the server.
How do I do that?
Thanks
You have a couple of choices. The most basic would be to use nohup:
nohup ./execute.sh
nohup executes the command as a child process and makes it ignore SIGHUP, so it keeps running if that signal arrives. The signal's name means "hangup", and it is triggered when you close a terminal while a process is still attached to it.
The output of the process gets redirected to a file, by default nohup.out in the current directory.
You may also use bash's disown functionality. You can start a script in bash:
./execute.sh
Then press Ctrl+Z and then enter:
disown
The process will now run in the background, detached from the terminal. If you care about the script's output, you may redirect it to a logfile:
./execute.sh > execute.log 2>&1
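Both steps can also be combined up front, instead of stopping an already-running job; a minimal sketch:
./execute.sh > execute.log 2>&1 & disown    # start in the background and detach from job control immediately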
Another option would be to install screen on the remote machine, run the command in a screen session and detach from it. You'll find a lot of tutorials about this.
nohup (no hangup) it and run it in the background:
nohup execute.sh &
Output that normally would have gone to the screen (STDOUT) will go to a file called nohup.out.

how to send ssh job to background

I logged in to a remote server via ssh and started a php script. Apparently, it will take 17 hours to complete. Is there a way to break the connection but keep the script executing? I didn't make any output redirection, so I am seeing all the output.
Can you stop the process right now? If so, launch screen, start the process and detach screen using Ctrl-a then Ctrl-d. Use screen -r to retrieve the session later.
This should be available in most distros, failing that, a package will definitely be available for you.
Ctrl+Z
will pause it. Then type
bg
to send it to the background. Write down the PID of the process for later usage ;)
EDIT: I forgot, after that you have to execute
disown $PID
where $PID is the PID of your process; then the process will not be killed after you close the terminal.
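Put together, the whole sequence might look like this (a sketch; 12345 is a made-up PID):
^Z                 # pause the running foreground process
$ bg               # resume it in the background
$ jobs -l          # shows the job with its PID, e.g. 12345
$ disown 12345     # or disown %1; the process now survives closing the terminal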
You described that it's important to protect the script's continuation. Unfortunately I don't know whether you interact with the script; the script was written by you.
Continuation is protected by the 'screen' command: your connection may break, but screen preserves the pseudo-terminal, and you can reconnect to it later; see its man page.
If you don't need operator interaction with the script, you can simply put the script in the background at the start and log the complete output into a log file. Simply use the command:
nohup /where/is/your.script.php > output.log 2>&1 &
> output.log redirects output into the log file, 2>&1 redirects the error stream into standard output (effectively into the log file), and the final & puts the command into the background. Note that the nohup command detaches the process from the terminal group.
Now you can safely exit the ssh shell. Because your script is outside the terminal group, it won't be killed; it will be reparented from your shell process to the system init process. That is normal Unix-like system behavior. You can monitor the complete output using the command
tail -f output.log   # always breakable by ^C, it is only watching
Using this method you do not need the ^Z, bg etc. shell tricks for putting a command into the background.
Note that explicit redirection with nohup is preferred; otherwise nohup automatically redirects all output to a nohup.out file in the current directory.
You can use screen.

Why do unix background processes sometimes die when I exit my shell?

I wanted to know why I am seeing different behaviour of background processes in the Bash shell.
Case 1: Logged in to a Unix server using PuTTY (SSH)
By default it uses the csh shell
I changed to the bash shell
typed sleep 2000 &
pressed Enter
It gave me the job number. Then I killed my session by clicking the x in the PuTTY window.
I opened another session and tried to look up the process: the process had died.
Case 2: Logged in to a Unix server using PuTTY (SSH)
By default it uses the csh shell
I changed to the bash shell
vi mysleep.sh
put sleep 2000 & in it and saved mysleep.sh
./mysleep.sh
The difference here is that instead of executing the sleep command directly, I am storing the sleep command in a file and executing the file.
Then I killed my session by clicking the x in the PuTTY window.
I opened another session and tried to look up the process: the process is still there.
Not sure why this is happening. I thought I needed to do disown in bash to keep a process running even after logging out.
One difference I see is in the parent process id: in the second case, the parent process id of sleep 2000 becomes 1. It looks like as soon as the process for mysleep.sh died, the kernel reassigned the parent to process 1.
The difference here is indeed the intervening process.
When you close the terminal window, a HUP signal (related to "nohup" as an0nymo0usc0ward mentioned) is sent to the processes running in it. The default action on receiving HUP is to die - from the signal(3) manpage,
No  Name     Default Action      Description
1   SIGHUP   terminate process   terminal line hangup
In your first example, the sleep process directly receives this HUP signal and dies because it isn't set to do anything else. (Some processes catch HUP and use it to perform some action, e.g. reread some configuration files)
In the second example, the shell process running your shell script has already died, so the sleep process never gets the signal. In UNIX, every process must have a parent process due to the internals of how the wait(2) family of calls works and indeed processes in general. So when the parent process dies, the kernel gives it to init (pid 1, as you note) as a foster child.
Orphan process (on wikipedia) has some more information available about it, also see Zombie process for some additional technical details.
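You can watch the reparenting happen; a small sketch (Linux procps ps shown; the PID is made up, and on modern Linux the new parent may be a per-user subreaper rather than pid 1):
$ bash -c 'sleep 2000 &'           # the intermediate shell exits immediately
$ ps -o pid,ppid,comm -C sleep     # sleep has been reparented
  PID  PPID COMMAND
 4242     1 sleep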
Already running process?
^z
bg
disown %<jobid>
New process/script (on local machine's console)?
nohup script.sh &
New process/script (on remote machine's console)?
Depending on your need,
there are two options [ there will be more ;-) ]
ssh remotehost 'nohup /path/to/script.sh </dev/null > nohup.out 2>&1 &'
OR
use 'screen'
Try "nohup cmd args..."
Steven's answer is correct, but I'd like to highlight the tricky part here again:
=> Using a bash script that just executes sleep in the background
The effect of this is that the "script" exits almost immediately (since it has run all its commands). However, it did create a child process (sleep) during its lifetime. As a result:
The "script" cannot be the parent anymore, and sleep is orphaned to init (which shows nicely in a pstree)
The bash shell where you started the script from has no underlying jobs anymore
Note that all of this happens when you execute the script, and has nothing to do with any ssh logout/putty closing.
When you then finally close your putty session, bash receives a "SIGHUP", but doesn't forward it to any other process (since there are no jobs left)
In the other case, bash did still have a job left, which it then sent the SIGHUP to, causing it to end (as you noticed)
Hope this helps
