exec.Command hangs on Bash script containing nohup - bash

When I use Go's exec.Command() to execute a bash script that contains nohup, it hangs forever.
I don't know why ping and ifconfig behave differently here. I tried redirecting stdin (< /dev/null), stdout (> /dev/null), and stderr (2> /dev/null), and their combinations; some of them work and some don't.
When I use sh to execute the script directly, it exits immediately.
The Go code:
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("sh", "a.sh")
    out, err := cmd.Output() // Or cmd.CombinedOutput()
    fmt.Println(string(out), err)
}
The Bash script (a.sh):
#!/bin/bash
# hangs
#nohup ping localhost &
# does not hang
nohup ifconfig &

(Converting comments, with glitches fixed, to an answer)
The use of nohup here is mostly a red herring. The real problem is that ping never finishes. However, nohup has some extra weirdness, which you can see if you run, from an interactive terminal, these two sets of commands:
$ nohup echo foo
nohup: ignoring input and appending output to 'nohup.out'
$ cat nohup.out
foo
$
vs:
$ nohup echo foo </dev/null 2>&1 | cat
foo
$
Note how the first one printed a weird message, and then the output foo went to a file; the second did not, and the output foo showed up on the regular output stream. This is because POSIX says that nohup should do these redirections if appropriate.¹ When run with exec.Cmd and cmd.Output, the redirections are not performed.
At the OS level, on a Linux- or other Unix-like system, the exec code creates an OS pipe object by which the invoked command can send output back to the Go runtime. (There may be a separate pipe for its stderr output, or the two may both be directed to a single pipe, depending on how you run the command; see https://golang.org/src/os/exec/exec.go#L280.) This pipe winds up being passed to ping, so that ping can keep writing output there as long as it likes.
The shell itself exits, because the command nohup ping localhost & is backgrounded. However, ping still has write access to the pipe object, so the Go runtime continues calling the OS read code until the pipe is closed—which is never. If the pipe were ever closed, the Go runtime would receive EOF and call the wait system call to collect the shell's exit status, but that never happens.
Redirecting ping's output, such that the shell itself has the only write access to the pipe, should result in the pipe being closed as soon as the shell itself exits.
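For instance (a sketch of one possible fix, not the only one), redirecting inside a.sh keeps the pipe away from ping, so the command returns to Go immediately:

#!/bin/bash
# ping's output now goes to a file instead of the inherited pipe, so
# the Go side sees EOF as soon as the shell itself exits
nohup ping localhost >ping.out 2>&1 &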
(Some shells may have a builtin nohup that may behave weirdly, especially in the presence of redirection. This is true of some particularly ancient shells.)
¹ See https://pubs.opengroup.org/onlinepubs/9699919799/utilities/nohup.html for complete details. The Linux variant redirects stdin as well as stdout and stderr, if the input is a terminal and if the output and stderr are terminals. The FreeBSD variant redirects only stdout and/or stderr. The "is a terminal" test is based on the C-language isatty function, which does the same thing as https://godoc.org/golang.org/x/crypto/ssh/terminal#IsTerminal.
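As an aside, the same "is a terminal" test is available directly from the shell via test's -t operator, which asks the same question isatty does for a given file descriptor:

$ [ -t 1 ] && echo "stdout is a terminal" || echo "stdout is redirected"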

Related

How to daemonise a shell-script in FreeBSD (and macOS)

The way I normally start a long-running shell script is
% (nohup ./script.sh </dev/null >script.log 2>&1 & )
The redirections detach stdin and point stdout and stderr at a log file; the nohup stops HUP from reaching the process when the owning shell exits (I realise that the 2>&1 is somewhat redundant, since nohup does something like this anyway); and the backgrounding within the subshell is the double-fork trick: the ./script.sh process's parent has exited while it's still running, so it acquires the init process as its parent.
That doesn't completely work, however, because when I exit the shell from which I've invoked this (typically, of course, I'm doing this on a remote machine), it doesn't exit cleanly. I can do ^C to exit, and this is OK – the process does carry on in the background as intended. However I can't work out what is/isn't happening to require the ^C, and that's annoying me.
The actions above seem to tick most of the boxes in the unix FAQ (question 1.7), except that I'm not doing anything to detach this process from a controlling terminal, or to make it a session leader. The setsid(2) call exists on FreeBSD, but not the setsid command; nor, as far as I can see, is there an obvious substitute for that command. The same is true on macOS, of course.
So, the questions are:
Is there a differently-named caller of setsid on this platform, that I'm missing?
What, precisely, is happening when I exit the calling shell, that I'm killing with the ^C? Is there any way this could bite me?
Related questions (e.g. 1, 2) either answer a slightly different question, or assume the presence of the setsid command.
(This question has annoyed me for years, but because what I do here does, in fact, basically work, I've never before got around to investigating, getting stumped, and asking about it.)
In FreeBSD, out of the box you could use daemon(8) -- "run detached from the controlling terminal". Option -r could be useful:
     -r      Supervise and restart the program after a one-second delay if it
             has been terminated.
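A hypothetical invocation (a sketch only; the flags shown are from FreeBSD's daemon(8) man page, so check your release):

$ daemon -r -p /var/run/script.pid /path/to/script.sh

Here -r restarts the script if it dies, and -p writes the supervised process's PID to a file so you can stop it later.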
You could also try a supervisor; for example, immortal is available for both platforms:
pkg install immortal # FreeBSD
brew install immortal # macOS
To daemonize your script and log (stdout/stderr) you could use:
immortal /path/to/your/script.sh -l /tmp/script.log
Or for more options, you could create a my-service.yml for example:
cmd: /path/to/script
cwd: /your/path
env:
  DEBUG: 1
  ENVIRONMENT: production
log:
  file: /tmp/app.log
stderr:
  file: /tmp/app-error.log
And then run it with immortal -c my-service.yml
More examples can be found here: https://immortal.run/post/examples
If you just want to use nohup and save stdout & stderr into a file, you could add this to your script:
#!/bin/sh
exec 2>&1
...
Check more about exec 2>&1 in this answer: https://stackoverflow.com/a/13088401/1135424
And then simply call nohup /your/script.sh & and check the file nohup.out; from the man page:
FILES
     nohup.out          The output file of the nohup execution if standard
                        output is a terminal and if the current directory
                        is writable.
     $HOME/nohup.out    The output file of the nohup execution if standard
                        output is a terminal and if the current directory
                        is not writable.

The difference between "-D" and "&" in bash script

According to this docker tutorial
What's the difference between
./my_first_process -D
./my_main_process &
They both seem non-blocking to the bash script and to run in the background.
& tells the shell to put the command that precedes it into the background. -D is simply a flag that is passed to my_first_process and is interpreted by it; it has absolutely nothing whatsoever to do with the shell.
You will have to look into the documentation of my_first_process to see what -D does … it could mean anything. E.g. in npm, -D means "development", whereas in some other tools, it may mean "directory". In diff, it means "Output merged file to show `#ifdef NAME' diffs."
Some programs, by convention, take -D as an instruction to self-daemonize. Doing this looks something like the following:
Call fork(), and exit if it returns nonzero, so only the child survives (fork() returns 0 in the child and the child's PID in the parent).
Close stdin, stdout and stderr if they are attached to the console (ideally, replacing their file descriptors with handles on /dev/null, so writes don't trigger an error).
Call setsid() to create a new session.
Call fork() again, and again exit in the parent (the process where fork() returned nonzero).
That's a lot more work than a plain someprogram &! A program that has self-daemonized can no longer log to the terminal, and will no longer be impacted if the terminal itself closes. That's not true of a program that's just started in the background.
To get something similar to the same behavior from bash, correct code would be something like:
someprogram </dev/null >/dev/null 2>&1 & disown -h
...wherein disown -h tells the shell not to pass along a SIGHUP to that process. The external tool nohup is also commonly used for this purpose; by default it redirects stdout and stderr to a file called nohup.out if they're pointed at the TTY, which still achieves the end purpose of making sure they're not pointed at the terminal, and thus that writes to them don't start failing if the terminal goes away:
nohup someprogram >/dev/null &

How to immediately trap a signal to an interactive Bash shell?

I am trying to send a signal from one terminal A to another terminal B. Both run an interactive shell.
In terminal B, I trap the signal SIGUSR1 like so:
$ trap 'source ~/mycommand' SIGUSR1
Now in terminal A I send a signal like so:
$ kill -SIGUSR1 pidOfB
Unfortunately, nothing happens in B. If I want to have my command executed, I need to switch to B and either input a new command or press enter.
How can I avoid this drawback and immediately execute my command instead?
EDIT:
It's important to note that I want to interact directly with the interactive shell in terminal B from terminal A.
For this reason, every solution where the trap command would be executed in a subshell would not work for me...
Also, terminal B must stay interactive.
The shell may simply be stuck in a blocking read, waiting for command-line input; hitting enter causes the handler to execute before the entered command. If you instead run a command that bash can interrupt when a trapped signal arrives, such as wait:
$ sleep 60 & wait
then sending the signal causes wait to return immediately, followed by the output of the handler.
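A minimal sketch built around that behaviour (a script, not the interactive shell the question asks about, but it shows the difference):

#!/bin/bash
# Bash interrupts `wait`, unlike the interactive readline prompt, so
# the trap fires without any keypress.
trap 'echo got SIGUSR1; kill "$pid" 2>/dev/null' USR1
while true; do
    sleep 60 & pid=$!
    wait "$pid"
done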
Based on the answers and my numerous attempts to solve this, I don't think it's possible to catch a trapped signal immediately in an interactive bash terminal.
For it to trigger, there must be an interaction from the user.
This is because readline blocks until a newline is entered, and there is no way to interrupt that read.
My solution is to use dtach, a small program that emulates the detach feature of screen.
This program can run a fully interactive shell, and its latest version features a way to communicate with this shell (or whatever program you launch) via a custom socket.
To start a new dtach session running an interactive bash, in terminal B:
$ dtach -c /tmp/MySocket bash -i
(-c creates the session; -a only attaches to an existing one.)
Now from terminal A, we can send a message to the bash session in terminal B like so:
$ echo 'echo hello' | dtach -p /tmp/MySocket
In terminal B, we now see:
$ echo hello
hello
To expand on that, if I now do in terminal A:
$ trap 'echo "cd $(pwd)" | dtach -p /tmp/MySocket' DEBUG
I'll have the directory of the two terminals synced.
PS: I'd still like to know if there is a way to do this in pure bash.
I use a similar trap so that periodically I can (from a separate cron job) force all idle bash processes to do a 'history -a'. I found that if I trap SIGALRM instead of SIGUSR1, then the bash blocking read seems not to be a problem: the trap runs now, rather than next time one hits return. I tried SIGINT, but that caused an annoying "^C", followed by a new prompt line, to be displayed. I haven't yet found any drawbacks of using SIGALRM, but perhaps they will arise.
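A sketch of that setup (the pkill match is an assumption about your environment; note that a bash without the trap installed would be killed by SIGALRM, whose default action is termination):

# in each interactive shell's ~/.bashrc
trap 'history -a' ALRM
# from the cron job: signal all of this user's bash processes
pkill -ALRM -u "$LOGNAME" -x bash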
It may be buffering.
As a test, try installing a loop trigger. In window A:
{ trap 'ls' USR1; while sleep 1; do echo > /dev/null; done; } &
[1] 7316
in window B:
kill -usr1 7316
Back in window A, the ls fires when the loop does an echo.
Don't know if that will help, but it's something.

Bash command substitution ( $(...) ) forcing process to foreground

Summary: I have a bash script that runs a process in background, and is supposed to work as a normal command and inside a command substitution block such as $(...). The script itself spawns a process that forks to background. It can be reduced to this test case:
#!/bin/sh
echo something
sleep 5 &
Running this script in a shell will return immediately (and print "something"); running it inside $(...) will hang for 5 seconds, waiting for the backgrounded sleep to finish.
This applies to anything started inside the command substitution shell that spawns processes in the background, including, apparently, any children in that process tree. It seems to affect both bash and zsh; I haven't tried others.
Original question: I have a bash script that is supposed to print a value to stdout and also copy it to the X clipboard every time it runs.
#!/bin/sh
echo something
echo something | xclip -selection clipboard
This script (let's call it "something") is meant to be used to get this word (which is actually the output of another command) and be used in different ways such as:
$ something
something
$ xclip -o -selection clipboard
something
$ echo $(something)
^C
Prints to normal stdout, copies the output to the clipboard to be used in normal X applications, and should also be able to use the stdout with bash command substitution, to insert this word in the middle of any command.
However the bash command substitution seems to force xclip to stay alive in the foreground. xclip normally daemonizes itself, since the X clipboard requires that a client provide the clipboard contents, and its default behavior is to quit once the clipboard contents are replaced.
After having this issue with xclip I made the minimal test case at the beginning of this question, so it seems to apply to anything that daemonizes inside the $(...) shell.
Can anyone explain this behavior? Is there any way I can avoid it?
If you want the backgrounded process to not interfere with command substitution, you have to disconnect its stdout. This will return immediately:
$ cat bg.sh
#!/bin/sh
echo before
sleep 5 >/dev/null &
echo after
$ date; x=$(./bg.sh); date; echo "$x"
Sat Jun 1 13:02:26 EDT 2013
Sat Jun 1 13:02:26 EDT 2013
before
after
You will lose the ability to capture the backgrounded process's stdout, but if you're running it in the background you probably don't care. The bg.sh process can always write to a file on disk instead.
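Applying the same fix to the original xclip script (a sketch; only the redirection is new):

#!/bin/sh
echo something
# detach xclip's stdout and stderr from the command-substitution pipe,
# so $(...) sees EOF as soon as the script itself exits
echo something | xclip -selection clipboard >/dev/null 2>&1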

Send command to a background process

I have a previously running process (process1.sh) that is running in the background with a PID of 1111 (or some other arbitrary number). How could I send something like command option1 option2 to that process with a PID of 1111?
I don't want to start a new process1.sh!
Named Pipes are your friend. See the article Linux Journal: Using Named Pipes (FIFOs) with Bash.
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $@ vs $*
Why my named pipe input command line just hangs when it is called?
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server.
This first script is run when the computer starts up. It starts the server and configures it to read/receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
# Set up a pipe named `/tmp/srv-input`
rm -f /tmp/srv-input
mkfifo /tmp/srv-input
# At least one process must keep the fifo open for writing, so the
# server does not receive an EOF.
cat > /tmp/srv-input &
# The PID of this `cat` is saved in the file /tmp/srv-input-cat-pid
# for killing it later.
#
# To send an EOF to your server, kill the `cat > /tmp/srv-input` process,
# whose PID has been saved in the /tmp/srv-input-cat-pid file.
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
#
# Replace the `./hlds_run -console -game czero +port 27015` by your application command
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &
# Successful execution
exit 0
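(As the comments in the script note, you can later make the server read an EOF by killing the fifo-holding cat; a one-line sketch:)

kill "$(cat /tmp/srv-input-cat-pid)"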
This second script is just a wrapper that lets me easily send commands to my server:
send.sh
half_life_folder="/home/jack/Steam/steamapps/common/Half-Life"
half_life_pid_tail_file_name=hlds_logs_tail_pid.txt
half_life_pid_tail="$(cat $half_life_folder/$half_life_pid_tail_file_name)"
if ps -p $half_life_pid_tail > /dev/null
then
echo "$half_life_pid_tail is running"
else
echo "Starting the tailing..."
tail -2f $half_life_folder/my_logs.txt &
echo $! > $half_life_folder/$half_life_pid_tail_file_name
fi
echo "$#" > /tmp/srv-input
sleep 1
exit 0
Now every time I want to send a command to my server I just do on the terminal:
./send.sh mp_timelimit 30
This script lets me keep tailing the process on the current terminal: every time I send a command, it checks whether there is already a tail process running in the background. If not, it starts one, and every time the server writes output I can see it on the terminal I used to send the command, just like applications run with the & operator appended.
You could also keep another terminal open just to watch the server console. To do that, use the tail command with the -f flag to follow the server console output:
$ tail -f /home/user/Half-Life/my_logs.txt
If you don't want to be limited to signals, your program must support one of the Inter Process Communication methods. See the corresponding Wikipedia article.
A simple method is to make it listen for commands on a Unix domain socket.
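A hypothetical sketch using socat, assuming it is installed (the socket path and the echoed reply are made up):

# server side: read newline-delimited commands from a Unix domain socket
socat UNIX-LISTEN:/tmp/cmd.sock,fork SYSTEM:'while read -r c; do echo "got: $c"; done' &
# client side: send one command
echo 'command option1 option2' | socat - UNIX-CONNECT:/tmp/cmd.sock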
For how to send commands to a server via a named pipe (fifo) from the shell see here:
Redirecting input of application (java) but still allowing stdin in BASH
How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?
You can use bash's coproc command (available only in bash 4.0+); it's like ksh's |&.
Check this for examples: http://wiki.bash-hackers.org/syntax/keywords/coproc
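A minimal sketch of the idea (the coproc name WORKER is arbitrary):

# start a background shell connected to us by two pipes
coproc WORKER { bash; }
# write a command to its stdin, then read one reply line from its stdout
echo 'echo hello from worker' >&"${WORKER[1]}"
read -r reply <&"${WORKER[0]}"
echo "$reply"   # prints: hello from worker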
You can't send new arguments to a running process.
But if you are implementing this process yourself, or it's a process that can take arguments from a pipe, then the other answers would help.
