Cannot get access to the stdout of a background process (Ubuntu) - shell

When I start some background process in the shell, for example:
geth --maxpeers 0 --rpc &
It returns something like:
[1] 1859
...without any output stream from the process. I do not understand what this is. And how can I get the stdout of geth? The documentation says that the stdout of a background process is displayed in the shell by default.
My shell is running on a remote Ubuntu system.

The "&" directs the shell to run the command in the background. It uses the fork system call to create a sub-shell and run the job asynchronously.
The stdout and stderr should still be printed to the screen.
If you do not want to see any output on the screen, redirect both stdout and stderr to a file by:
geth --maxpeers 0 --rpc > logfile 2>&1 &
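If you instead want to keep the two streams apart for later inspection, you can send them to separate files (a small sketch; the file names are arbitrary):
geth --maxpeers 0 --rpc > out.log 2> err.log &
tail -f out.log    # follow stdout live; Ctrl-C stops tail, not geth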

Regarding the first part of your question:
...without any output stream from the process. I do not understand what this is.
This message comes from the shell itself (it is part of the command execution environment), not from your script: it is how the shell reports that it has backgrounded your script and is keeping track of the process so that the job can be paused and resumed.
If you look at man bash under JOB CONTROL, it explains what you are seeing in detail, e.g.
The shell associates a job with each pipeline. It keeps a table
of currently executing jobs, which may be listed with the jobs
command. When bash starts a job asynchronously (in the background),
it prints a line that looks like:
[1] 25647

I do not understand what this is. [1] 1859
It is the output of Bash's job-control feature, which enables managing background processes (jobs). It contains information about the job just started, printed to stderr:
1 is the job ID (which, prefixed with %, can be used with builtins such as kill and wait)
1859 is the PID (process ID) of the background process.
Read more in the JOB CONTROL section of man bash.
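For example (a minimal interactive sketch; sleep stands in for any long-running command, and the PID shown is illustrative):
$ sleep 100 &
[1] 1859
$ jobs           # list the shell's job table
[1]+  Running                 sleep 100 &
$ kill %1        # signal the job via its job ID (equivalent: kill 1859)
$ wait %1        # returns once the job has terminated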
how can I get the stdout of geth? The documentation says that the stdout of a background process is displayed in the shell by default.
Indeed, background jobs by default print their output to the current shell's stdout and stderr streams, but note that they do so asynchronously - that is, output from background jobs will appear as it is produced (potentially buffered), interleaved with output sent directly to the current shell, which can be disruptive.
You can apply redirections as usual to a background command in order to capture its output in file(s), as demonstrated in user3589054's helpful answer, but note that doing so will not silence the job-control message ([1] 1859 in the example above).
If you want to silence the job-control message on creation, use:
{ geth --maxpeers 0 --rpc & } 2>/dev/null
To silence the entire life cycle of a job, see this answer of mine.
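Combining the redirection above with the silencing technique, a sketch that captures geth's output in a file while suppressing the job-control message (the log file name is arbitrary):
{ geth --maxpeers 0 --rpc > geth.log 2>&1 & } 2>/dev/null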

Related

What does "&!" (ampersand and exclamation) mean in linux shell

I found that some people run a program in shell like this
exe > the.log 2>&1 &!
I understand the first part: it redirects stderr to stdout, and "&" runs the program in the background. But I don't know what "&!" means; what does the exclamation mark do?
Within zsh the command &! is a shortcut for disown, i.e. the program won't get killed upon exiting the invoking shell.
See man zshbuiltins
disown [ job ... ]
job ... &|
job ... &!
Remove the specified jobs from the job table; the shell will no longer report their status, and will not complain if you try to exit an interactive shell with them running or stopped. If no job is specified, disown the current job. If the jobs are currently stopped and the AUTO_CONTINUE option is not set, a warning is printed containing information about how to make them running after they have been disowned. If one of the latter two forms is used, the jobs will automatically be made running, independent of the setting of the AUTO_CONTINUE option.
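For illustration, the following zsh forms are equivalent (my-long-job is a placeholder):
% my-long-job &!    # background and disown in one step
% my-long-job &|    # same effect, alternative spelling
% my-long-job &     # or background first...
% disown            # ...then disown the current job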

How to disown bash process substitution

To redirect stderr for a command to syslog I use a helper like
with_logger my-tag command arg1 ...
where with_logger is
#!/bin/bash
syslog_tag="$1"    # first argument: the syslog tag
shift              # the rest: the command to run
# exec replaces this shell with the command; its stderr goes to the
# process substitution, where logger also execs to avoid an extra shell:
exec "$@" 2> >(exec /usr/bin/logger -t "$syslog_tag")
Here two exec calls and process substitution are used to avoid having a bash process wait for the command or the logger command to finish. However, this creates a zombie: when the logger process exits (after the command exits and closes its stderr), nobody waits for it. This results in the parent process receiving an unexpected signal about an unknown child process.
To solve this I suppose I have to somehow disown the >() process. Is there a way to do it?
Update to clarify the question
I need to invoke my wrapper script from another program, not from a bash script.
Update 2 - this was a wrong question
See the answer below.
I would just define a short shell function
to_logger () {
exec /usr/bin/logger -t "$1"
}
and call your code with the minimally longer
2> >(to_logger my-tag) command arg1 ...
This has several benefits:
The command can be any shell construct; you aren't passing the command as arguments to another command; you are just redirecting standard error of an arbitrary command.
You are spawning one fewer process to handle the logging.
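Putting the pieces together, a minimal sketch under the same assumptions (my-tag is an arbitrary syslog tag; some_command is a placeholder):
#!/bin/bash
to_logger () {
    exec /usr/bin/logger -t "$1"
}
# stderr of the command is fed to logger through the process substitution:
some_command arg1 arg2 2> >(to_logger my-tag)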
My question was wrong.
In my setup I use supervisord to control a few processes. As it has limited syslog support and does not allow using different tags when redirecting processes' stderr to syslog, I use the above shell script for that. While testing the script I noticed CRIT reaped unknown pid <number> messages in the log for supervisord itself. I assumed that this was bad and tried to fix it.
But it turned out the messages are not critical at all. In fact, supervisord was doing its job properly, and in its latest source the message was changed from CRIT to INFO. So there is nothing to answer here, as there are no issues with the script in question :)

Multiple processes from one bash script [duplicate]

I'm trying to use a shell script to start a command. I don't care if/when/how/why it finishes. I want the process to start and run, but I want to be able to get back to my shell immediately...
You can just run the script in the background:
$ myscript &
Note that this is different from putting the & inside your script, which probably won't do what you want.
Everyone just forgot disown. So here is a summary:
& puts the job in the background.
Makes it block on attempting to read input, and
Makes the shell not wait for its completion.
disown removes the process from the shell's job control, but still leaves it connected to the terminal. One consequence is that the shell won't send it a SIGHUP (if the shell receives a SIGHUP, it also sends a SIGHUP to the process, which normally causes the process to terminate). Obviously, it can only be applied to background jobs (because you cannot enter it while a foreground job is running).
nohup disconnects the process from the terminal, redirects its output to nohup.out, and shields it from SIGHUP. The process won't receive any SIGHUP that is sent. It is completely independent of job control and could in principle also be used for foreground jobs (although that's not very useful). It is usually combined with & (to run as a background job).
nohup cmd
doesn't hangup when you close the terminal. output by default goes to nohup.out
You can combine this with backgrounding,
nohup cmd &
and get rid of the output,
nohup cmd > /dev/null 2>&1 &
You can also disown a command after the fact: type cmd, then Ctrl-Z, bg, disown.
Alternatively, after you have the program running, you can hit Ctrl-Z, which stops your program, and then type
bg
which puts your last stopped program in the background. (Useful if you started something without '&' and still want it in the background without restarting it.)
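Put together, the whole sequence looks roughly like this (sleep stands in for your program):
$ sleep 1000          # accidentally started in the foreground
^Z
[1]+  Stopped                 sleep 1000
$ bg                  # resume it in the background
[1]+ sleep 1000 &
$ disown              # drop it from the shell's job table
$ exit                # the process keeps running after the shell exits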
screen -m -d <command> starts the command in a detached session. You can use screen -r to attach to the started session later. It is a wonderful tool, extremely useful also for remote sessions. Read more in man screen.
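For example (./long_task.sh is a placeholder):
$ screen -m -d ./long_task.sh   # start the command in a detached session
$ screen -ls                    # list running screen sessions
$ screen -r                     # reattach; detach again with Ctrl-A, then D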

Is there a way to make bash job control quiet?

Bash is quite verbose when running jobs in the background:
$ echo toto&
toto
[1] 15922
[1]+ Done echo toto
Since I'm trying to run jobs in parallel and use the output, I'd like to find a way to silence bash. Is there a way to remove this superfluous output?
You can use parentheses to run a background command in a subshell, and that will silence the job control messages. For example:
(sleep 10 & )
Note: The following applies to interactive Bash sessions. In scripts, job-control messages are never printed.
There are 2 basic scenarios for silencing Bash's job-control messages:
Launch-and-forget:
CodeGnome's helpful answer suggests enclosing the background command in a simple subshell - e.g., (sleep 10 &) - which effectively silences job-control messages - both on job creation and on job termination.
This has an important side effect:
By using the control operator & inside the subshell, you lose control of the background job - jobs won't list it, and neither %% (the job spec (ID) of the most recently launched job) nor $! (the PID of the (last) process launched as part of the most recent job) will reflect it.[1]
For launch-and-forget scenarios, this is not a problem:
You just fire off the background job,
and you let it finish on its own (and you trust that it runs correctly).
[1] Conceivably, you could go looking for the process yourself, by searching running processes for ones matching its command line, but that is cumbersome and not easy to make robust.
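A quick illustration of that loss of control:
$ (sleep 10 &)       # quiet launch...
$ jobs               # ...but the job table does not know about it
$ echo "$!"          # and $! does not reflect its PID
                     # (it still holds the PID of whatever was launched before, if anything)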
Launch-and-control-later:
If you want to remain in control of the job, so that you can later:
kill it, if need be, or
synchronously wait (at some later point) for its completion,
a different approach is needed:
Silencing the creation job-control messages is handled below, but in order to silence the termination job-control messages categorically, you must turn the job-control shell option OFF:
set +m (set -m turns it back on)
Caveat: This is a global setting that has a number of important side effects, notably:
Stdin for background commands is then /dev/null rather than the current shell's.
The keyboard shortcuts for suspending (Ctrl-Z) and delay-suspending (Ctrl-Y) a foreground command are disabled.
For the full story, see man bash and (case-insensitively) search for occurrences of "job control".
To silence the creation job-control messages, enclose the background command in a group command and redirect the latter's stderr output to /dev/null:
{ sleep 5 & } 2>/dev/null
The following example shows how to quietly launch a background job while retaining control of the job in principle.
$ set +m; { sleep 5 & } 2>/dev/null # turn job-control option off and launch quietly
$ jobs # shows the job just launched; it will complete quietly due to set +m
If you do not want to turn off the job-control option (set +m), the only way to silence the termination job-control message is to either kill the job or wait for it:
Caveat: There are two edge cases where this technique still produces output:
If the background command tries to read from stdin right away.
If the background command terminates right away.
To launch the job quietly (as above, but without set +m):
$ { sleep 5 & } 2>/dev/null
To wait for it quietly:
$ wait %% 2>/dev/null # use of %% is optional here
To kill it quietly:
{ kill %% && wait; } 2>/dev/null
The additional wait is necessary to make the termination job-control message that is normally displayed asynchronously by Bash (at the time of actual process termination, shortly after the kill) a synchronous output from wait, which then allows silencing.
But, as stated, if the job completes by itself, a job-control message will still be displayed.
Wrap it in a dummy script:
quiet.sh:
#!/bin/bash
"$@" &    # run the arguments as a single command, in the background
then call it, passing your command to it as an argument:
./quiet.sh echo toto
You may need to play with quotes depending on your input.
Interactively, no. It will always display job status. You can influence when the status is shown using set -b.
There's nothing preventing you from using the output of your commands (via pipes, or storing it variables, etc). The job status is sent to the controlling terminal by the shell and doesn't mix with other I/O. If you're doing something complex with jobs, the solution is to write a separate script.
The job messages are only really a problem if you have, say, functions in your bashrc that use job control and need direct access to your interactive environment. Unfortunately, there's nothing you can do about it.
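To illustrate the set -b option mentioned above (it reports terminated background jobs immediately, rather than waiting for the next prompt; the PID and timing are illustrative):
$ set -b
$ sleep 2 &
[1] 28473
$ # about two seconds later, the Done message appears on its own,
$ # without waiting for you to press Enter:
[1]+  Done                    sleep 2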
One solution (in bash anyway) is to route all the output to /dev/null:
echo 'hello world' > /dev/null &
The above will not give you any output other than the job-control message with the id of the background process.
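If you also want to suppress the job-control message itself, this combines with the subshell technique shown above:
( echo 'hello world' > /dev/null & )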

how to send ssh job to background

I logged in to a remote server via ssh and started a php script. Apparently it will take 17 hours to complete. Is there a way to break the connection but keep the script executing? I didn't set up any output redirection, so I am seeing all the output.
Can you stop the process right now? If so, launch screen, start the process, and detach screen using Ctrl-A then Ctrl-D. Use screen -r to retrieve the session later.
This should be available in most distros; failing that, a package will definitely be available for you.
Ctrl-Z
will pause it. Then type
bg
to send it to the background. Write down the PID of the process for later usage ;)
EDIT: I forgot, you also have to execute
disown $PID
where $PID is the PID of your process. After that, the process will not be killed when you close the terminal.
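So the full recovery sequence looks roughly like this (the script name and PID are illustrative):
$ php long_script.php      # running in the foreground
^Z
[1]+  Stopped                 php long_script.php
$ bg                       # continue it in the background
$ jobs -p                  # print the job's PID, e.g.:
4711
$ disown 4711              # or equivalently: disown %1
$ exit                     # safe to log out now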
You wrote that it is important to keep the script running. Unfortunately I don't know whether you need to interact with the script, or whether the script is your own.
Continuation is what the 'screen' command protects: your connection may break, but screen preserves the pseudo-terminal, and you can reconnect to it later; see its man page.
If you don't need to interact with the script, you can simply put it in the background at the start and log the complete output to a log file. Simply use the command:
nohup /where/is/your.script.php >output.log 2>&1 &
>output.log redirects output into the log file, 2>&1 sends the error stream into the output stream (and thus into the log file), and the final & puts the command into the background. Note that nohup detaches the process from the terminal group.
You can now safely exit the ssh shell. Because your script is out of the terminal group, it won't be killed; it will be reparented from your shell process to the system init process. That is normal behavior on Unix-like systems. You can monitor the complete output with:
tail -f output.log   # always interruptible with ^C; it is only watching
Using this method you do not need the ^Z, bg, etc. shell tricks to put the command in the background.
Note that using explicit redirection with nohup is preferred; otherwise nohup will automatically redirect all output to a nohup.out file in the current directory.
You can use screen.
