Send command to a background process - bash

I have a previously running process (process1.sh) that is running in the background with a PID of 1111 (or some other arbitrary number). How could I send something like command option1 option2 to that process with a PID of 1111?
I don't want to start a new process1.sh!

Named Pipes are your friend. See the article Linux Journal: Using Named Pipes (FIFOs) with Bash.
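A minimal sketch of the idea (assuming process1.sh reads its commands from stdin; the pipe path here is made up):
mkfifo /tmp/cmds                              # create the named pipe
./process1.sh < /tmp/cmds &                   # the long-running process reads commands from the pipe
echo "command option1 option2" > /tmp/cmds    # any other shell can now feed it commands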

Based on the answers:
Writing to stdin of background process
Accessing bash command line args $@ vs $*
Why my named pipe input command line just hangs when it is called?
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server.
This first script is run when the computer starts up. It starts the server and configures it to read my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
# Set up a pipe named `/tmp/srv-input` (removing any stale one first)
rm -f /tmp/srv-input
mkfifo /tmp/srv-input
# Keep at least one process with the FIFO open for writing, so the
# server never receives an EOF.
cat > /tmp/srv-input &
# The PID of this `cat` command is saved in the `/tmp/srv-input-cat-pid` file
# for a later kill.
#
# To send an EOF to your server, kill the `cat > /tmp/srv-input` process
# whose PID has been saved in the `/tmp/srv-input-cat-pid` file.
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`,
# and send all of its console output to the file `/home/user/Half-Life/my_logs.txt`.
#
# Replace `./hlds_run -console -game czero +port 27015` with your application's command.
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &
# Successful execution
exit 0
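Later, to make the server receive an EOF (e.g. for a clean shutdown), kill the cat writer whose PID was saved above:
kill "$(cat /tmp/srv-input-cat-pid)"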
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
half_life_folder="/home/jack/Steam/steamapps/common/Half-Life"
half_life_pid_tail_file_name=hlds_logs_tail_pid.txt
half_life_pid_tail="$(cat "$half_life_folder/$half_life_pid_tail_file_name")"
if ps -p "$half_life_pid_tail" > /dev/null
then
echo "$half_life_pid_tail is running"
else
echo "Starting the tailing..."
tail -n 2 -f "$half_life_folder/my_logs.txt" &
echo $! > "$half_life_folder/$half_life_pid_tail_file_name"
fi
echo "$#" > /tmp/srv-input
sleep 1
exit 0
Now every time I want to send a command to my server I just do on the terminal:
./send.sh mp_timelimit 30
This script lets me keep tailing the server's output on my current terminal: every time I send a command, it checks whether a tail process is already running in the background. If not, it starts one, so whenever the server writes output I can see it on the terminal I used to send the command, just like an application you run with the & operator appended.
You could also keep another terminal open just to listen to the server console. To do that, use the tail command with the -f flag to follow the server's console output:
tail -f /home/user/Half-Life/my_logs.txt

If you don't want to be limited to signals, your program must support one of the Inter Process Communication methods. See the corresponding Wikipedia article.
A simple method is to make it listen for commands on a Unix domain socket.
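For instance, a rough sketch with socat (assuming socat is installed; the socket path and the command are illustrative):
# Listener side: relay whatever arrives on the socket to stdout
# (a stand-in for your program's real command handler)
socat UNIX-LISTEN:/tmp/myprog.sock,fork - &
# Client side: send a command to the running listener
echo "command option1 option2" | socat - UNIX-CONNECT:/tmp/myprog.sock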

For how to send commands to a server via a named pipe (fifo) from the shell see here:
Redirecting input of application (java) but still allowing stdin in BASH
How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?

You can use bash's coproc command (available only in bash 4.0+) - it's like ksh's |&.
Check this for examples: http://wiki.bash-hackers.org/syntax/keywords/coproc
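A minimal sketch, using bc as the coprocess (any line-oriented command works the same way):
#!/bin/bash
# Start bc as a coprocess named BC (bash 4.0+)
coproc BC { bc; }
# Write a command to the coprocess's stdin...
echo "2 + 3" >&"${BC[1]}"
# ...and read its reply from the coprocess's stdout
read -r answer <&"${BC[0]}"
echo "$answer"    # prints 5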

You can't send new arguments to a running process.
But if you are implementing this process yourself, or it's a process that can take its arguments from a pipe, then the other answers would help.

Related

Run multiple commands simultaneously in bash in one line

I am looking for an alternative to something like ssh user@node1 uptime && ssh user@node2 uptime, where both SSH commands are run simultaneously. As each one blocks until its command returns, && and ; between them don't work.
My goal is to run infinite while loops on both nodes via SSH. So the first one would never return, and the second one would never be run. I would then like to save the output to a log file after terminating the loops with Ctrl+C, and read that file via Python.
Is there an easy solution to this?
Thanks in advance!
Capturing SSH output
On the one hand, you need to capture the ssh output/error and store it into a file so that you can process it afterwards with Python. To this purpose you can:
1- Store output and error directly into a file
ssh user@node cmd > session.log 2>&1
2- Show output/error in the console while storing it into a file (I would recommend this one)
ssh user@node cmd 2>&1 | tee session.log
Check this for further information about the tee command.
Running commands in parallel
On the other hand, you want to run both commands in parallel and block the current bash process. You can achieve this by:
1- Blocking the current bash process until its children are done.
cmd1 & cmd2 & wait
Check this for further information about the wait command.
2- Spawning the child processes and freeing the current bash process. Notice that the processes will be kept alive even after the main process ends.
nohup cmd1 & nohup cmd2 &
The whole thing
I would recommend combining both approaches using tee (so you can still see the ssh outputs on your terminal) and blocking the current process until everything is done (so that when you kill the main process all the processes are killed too).
ssh user@node1 uptime 2>&1 | tee session1.log & ssh user@node2 uptime 2>&1 | tee session2.log & wait
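As a complete sketch matching the question's infinite loops (the node names and the 5-second interval are illustrative; because the backgrounded ssh processes share the script's process group, Ctrl+C stops all of them):
#!/bin/bash
ssh user@node1 'while true; do uptime; sleep 5; done' 2>&1 | tee session1.log &
ssh user@node2 'while true; do uptime; sleep 5; done' 2>&1 | tee session2.log &
wait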

Scripts with nohup inside don't exit correctly

We have a script which does some processing and then triggers a job in the background using nohup. When we schedule this script from Oracle OEM (or any other scheduler), I see the following error and the status shows as failed, but the script actually finished without issue. How do I exit the script correctly when the background job is started with nohup?
Remote operation finished but process did not close its stdout/stderr
file: test.sh
#!/bin/bash
# do some processing
...
nohup ./start.sh 2000 &
# end of the script
By executing start.sh in this manner you are allowing it to claim partial ownership of test.sh's output file descriptors (stdout/stderr). So whereas when most bash scripts exit, their file descriptors are closed for them (by the operating system), test.sh's file descriptors cannot be closed because start.sh still has a claim to them.
The solution is to not let start.sh claim the same output file descriptors as test.sh is using. If you don't care about its output, you can launch it like this:
nohup ./start.sh 2000 1>/dev/null 2>/dev/null &
which tells the new process to send both its stdout and stderr to /dev/null. If you do care about its output, then just capture it somewhere more meaningful:
nohup ./start.sh 2000 1>/path/to/stdout.txt 2>/path/to/stderr.txt &
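So a complete test.sh along these lines should let the scheduler see a clean exit (the log paths are just examples):
#!/bin/bash
# do some processing
...
# detach the background job's output from this script's stdout/stderr
nohup ./start.sh 2000 1>/tmp/start.out 2>/tmp/start.err &
exit 0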

Running bash script does not return to terminal when using ampersand (&) to run a subprocess in the background

I have a script (let's call it parent.sh) that makes 2 calls to a second script (child.sh) that runs a java process. The child.sh scripts are run in the background by placing an & at the end of the line in parent.sh. However, when I run parent.sh, I need to press Ctrl+C to return to the terminal screen. What is the reason for this? Is it something to do with the fact that the child.sh processes are running under the parent.sh process, so parent.sh doesn't die until the children do?
parent.sh
#!/bin/bash
child.sh param1a param2a &
child.sh param1b param2b &
exit 0
child.sh
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#email.com
As you can see, I don't want to run the java process in the background because I want to send a mail out when the process dies. Doing it as above works fine from a functional standpoint, but I would like to know how I can get it to return to the terminal after executing parent.sh.
What I ended up doing was changing parent.sh to the following:
#!/bin/bash
child.sh param1a param2a > startup.log &
child.sh param1b param2b > startup2.log &
exit 0
I would not have come to this solution without your suggestions and root cause analysis of the issue. Thanks!
And apologies for my inaccurate comment. (There was no input, I answered from memory and I remembered incorrectly.)
The following link from the Linux Documentation Project suggests adding a wait after your mail command in child.sh:
http://tldp.org/LDP/abs/html/x9644.html
Summary of the above document
Within a script, running a command in the background with an ampersand (&) may cause the script to hang until ENTER is hit. This seems to occur with commands that write to stdout. It can be a major annoyance.
....
....
As Walter Brameld IV explains it:
As far as I can tell, such scripts don't actually hang. It just seems that they do because the background command writes text to the console after the prompt. The user gets the impression that the prompt was never displayed. Here's the sequence of events:
Script launches background command.
Script exits.
Shell displays the prompt.
Background command continues running and writing text to the console.
Background command finishes.
User doesn't see a prompt at the bottom of the output, thinks the script is hanging.
If you change child.sh to look like the following you shouldn't experience this annoyance:
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#gmail.com
wait
Or as @SebastianStigler states in a comment to your question above:
Add a > /dev/null at the end of the line with mail. mail will otherwise try to start its interactive mode.
This will cause the mail command to write to /dev/null rather than stdout which should also stop this annoyance.
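That is, a child.sh along these lines (keeping the question's placeholder address):
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user@email.com > /dev/null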
Hope this helps
The process was still linked to the controlling terminal because STDOUT needs somewhere to go. You solved that problem by redirecting to a file ( > startup.log ).
If you're not interested in the output, discard STDOUT completely ( >/dev/null ).
If you're not interested in errors, either, discard both ( &>/dev/null ).
If you want the processes to keep running even after you log out of your terminal, use nohup — that effectively disconnects them from what you are doing and leaves them to quietly run in the background until you reboot your machine (or otherwise kill them).
nohup child.sh param1a param2a &>/dev/null &

Bash: Set a process to die in start parameters with sighup

Is it possible to set a process to die in its start parameters?
What I'm trying to do is set a process to die either before it's started or when it's starting.
When I try to get a pid of an opened named pipe with cat > $pipe by adding an ampersand and $! after it, it spams the terminal with ">" symbols, so I was wondering if it were possible to start cat > $pipe with a parameter to die on a PID's SIGHUP.
How would I go about doing this? If I put anything after the cat > $pipe, it will not work correctly for me.
"get a pid of an opened named pipe"
A named pipe does not have a pid; only processes have pids (the clue is in the 'p').
However, that does not appear to have anything to do with the title of the question. By default, a process will die on a SIGHUP. However, a child process inherits the parent's signal mask, so if the parent ignored SIGHUP then that will be the case in the child too (not true for handlers). So you can force a die with (for example):
trap 'exit 128' SIGHUP
But how does that part of the question relate to named pipes? Are you trying to find which processes have the pipe open? You can iterate through /proc for that.
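A rough sketch of that scan (the pipe path is illustrative; reading other users' fd entries may require root):
# Print the PID of every process that has /tmp/srv-input open
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "/tmp/srv-input" ]; then
        pid=${fd#/proc/}
        echo "${pid%%/*}"
    fi
done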
EDIT after comments from the OP:
If you run cat > mypipe & then the cat will hang trying to access the keyboard - cat by default reads STDIN.
[1]+ Stopped cat > mypipe
So then you have to bring it into the foreground (fg) to enter data, normally terminated by <CTRL>+D. I am at a loss as to why you want to use cat in this way.
Anyway, if you run in background it is very easy to get a background job's pid:
# assuming it is job number 1
set $(jobs -l %1)
pid=$2
Maybe you could further investigate why you can't run the job in background, and show an example (use the script command to get a copy of your session in a file called typescript)
Have you tried putting it in parentheses? (cat > $pipe) &
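You can also capture the PID right after backgrounding it, since $! holds the PID of the most recent background job (assuming $pipe names an existing FIFO; note that with the parentheses, $! is the PID of the subshell rather than of cat itself):
(cat > "$pipe") &
writer_pid=$!    # PID of the backgrounded subshell
echo "$writer_pid" > /tmp/pipe-writer-pid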
