Simply forking and redirecting the output of a command to /dev/null - bash

I frequently execute from a shell (in my case Bash) commands that I want to fork immediately and whose output I want to ignore. So frequently in fact that I created a script (silent) to do it:
#!/bin/bash
$@ &> /dev/null &
I can then run, e.g.
silent inkscape myfile.svg
and my terminal will not be polluted by the debug output of the process I just forked.
I have two questions:
Is there an "official" way of doing this, i.e. something shorter but equivalent to &> /dev/null &?
If not, is there a way I can make tab-completion work after my silent command as if it weren't there? To give an example, after I've typed silent inksc, I'd like bash to auto-complete my command to silent inkscape when I press [tab].

aside: you probably want to exec "$@" &> /dev/null & in your silent script, to cause it to discard the sub-shell, and the quotes around "$@" will keep spaces from getting in the way.
As for #2: complete -F _command silent should do something like what you want. (I call my version of that script launch and have complete -F _command launch in my .bash_profile.)
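Putting the two suggestions together, a minimal sketch of the silent script plus the matching completion setup might look like this (the script name follows the question; the rest is just one way to do it):
#!/bin/bash
# silent: fork a command immediately and discard all of its output.
# "$@" forwards every argument intact; exec replaces the wrapper's subshell
# so no extra bash process lingers.
exec "$@" &> /dev/null &
and in ~/.bash_profile:
complete -F _command silent   # complete command names after "silent"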

It looks like nohup does more or less what you want. The tab-completion problem is because bash thinks that you are trying to complete a filename as an argument to the script, whereas its completion rules know that nohup takes a command as its first argument.
Nohup redirects stdout and stderr to nohup.out and will also leave the command running if your shell exits.
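For the example from the question that would be something like the following sketch (redirecting to /dev/null so that nohup doesn't create a nohup.out file, since stdout is then no longer a terminal):
nohup inkscape myfile.svg > /dev/null 2>&1 &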

Here's a little script I use for launching interactive (and chatty) X apps from e.g. an xterm
#!/bin/bash
exe="$1"
shift
"$exe" "$#" 2>/tmp/$$."$exe".err 1>&2 & disown $!
No output, won't die if the terminal exits, but in case something goes wrong there's a log of all output in /tmp
If you don't want the log just use /dev/null instead.
It will also work from a function if you're script-allergic.
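For example, a function version of the same idea (a sketch; the function name and log location are illustrative) could live in ~/.bashrc:
launch() {
    local exe="$1"; shift
    # log both streams to a per-invocation file in /tmp and detach from the shell
    "$exe" "$@" > "/tmp/$$.$exe.err" 2>&1 & disown $!
}
Usage is the same, e.g. launch inkscape myfile.svg.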

Perhaps you could 'rebind' the Tab key? There is an example on Super User that does something similar with the Enter key. Is this the right idea?

Related

bash hangs when exec > > is called and an additional bash script is executed with output to stdin [duplicate]

I have a shell script which writes all output to a logfile and the terminal. This part works fine, but if I execute the script, a new shell prompt only appears if I press Enter. Why is that and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string "output" comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
your-code-here
} | tee logfile
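A complete script using this pattern might look like the following sketch (the PIPESTATUS line is an optional extra, not part of the original suggestion, kept here to preserve the script's own exit status instead of tee's):
#!/bin/bash
{
    echo "output"
    # ... the rest of your script ...
} | tee logfile
exit "${PIPESTATUS[0]}"   # exit status of the { ... } group, not of tee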
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
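For instance (a sketch; yourscript is a placeholder, and the transcript lands in the default file named typescript):
$ script          # start recording everything written to the terminal
$ ./yourscript    # run the script inside the recorded session
$ exit            # stop recording; then inspect ./typescript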
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3].*) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
  retval=$?
  exec >&$orig_stdout  # Copy your original stdout back to FD 1, overwriting the pipe to tee
  exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
  wait "$tee_pid"      # Now, wait until tee exits
  exit "$retval"       # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

How to make exec command in shell script run in different process?

I'm working with this GUI application, GTKWave, that has a startup script, gtkwave, which I've added to my system path. At the very end of said script, there is this line:
$EXEC "$bundle_contents/MacOS/$name-bin" $* $EXTRA_ARGS
where $EXEC=exec
My problem is that given how exec works, my terminal is "hijacked" by the GUI process, meaning I can't run any more commands until I close the window, which is not ideal.
I currently got around this by putting this function in my .bash_profile:
function run-gtkwave ( ) { nohup gtkwave $1 &>/dev/null & }
While this works, it makes it hard to pass arguments to gtkwave. Ideally, I could do a fork right before that exec command, but I'm not sure how to do that in bash. I know that the & character should do that, but
$EXEC "$bundle_contents/MacOS/$name-bin" $* $EXTRA_ARGS &
doesn't cut it, and neither does
"$bundle_contents/MacOS/$name-bin" $* $EXTRA_ARGS &
or
("$bundle_contents/MacOS/$name-bin" $* $EXTRA_ARGS) &
How do I modify this line to run the application in its own process?
You can make your run-gtkwave function pass arguments to gtkwave by changing the definition to:
function run-gtkwave { nohup gtkwave "$@" &>/dev/null & }
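With "$@" in place, all arguments are forwarded to gtkwave, e.g. (a sketch; the file names here are only illustrative):
run-gtkwave dump.vcd mysession.gtkw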
Since using nohup and output redirection stops the terminal being "hijacked", you might be able to fix the problem in gtkwave by doing the nohup redirection and the output redirection there. (If standard input is a terminal, nohup redirects it from /dev/null.) One possibility is:
"$bundle_contents/MacOS/$name-bin" $* $EXTRA_ARGS </dev/null &>/dev/null &
(I've omitted the $EXEC because I can't see any point in using exec and & together.)

Why isn't this command returning to shell after &?

In Ubuntu 14.04, I created the following bash script:
flock -nx "$1" xdg-open "$1" &
The idea is to lock the file specified in $1 (flock), then open it in my usual editor (xdg-open), and finally return to the prompt, so I can open other files in sequence (&).
However, the & isn't working as expected. I need to press Enter to make the shell prompt appear again. In simpler constructs, such as
gedit test.txt &
it works as it should, returning the prompt immediately. I think it has to do with the existence of two commands in the first line. What am I doing wrong, please?
EDIT
The prompt is actually there, but it is somehow "hidden". If I issue the command
sudo ./edit error.php
it replies with
Warning: unknown mime-type for "error.php" -- using "application/octet-stream"
Error: no "view" mailcap rules found for type "application/octet-stream"
Opening "error.php" with Geany (application/x-php)
__
The errors above are not related to the question. But instead of __ I see nothing. I know the prompt is there because I can issue other commands, like ls, and they work. But the question remains: WHY is the prompt hidden? And how can I make it show normally?
Why isn't this command returning to shell after &?
It is.
You're running a command in the background. The shell prints a new prompt as soon as the command is launched, without waiting for it to finish.
According to your latest comment, the background command is printing some message to your screen. A simple example of the same thing:
$ echo hello &
$ hello
The cursor is left at the beginning of the line after the $ hello.
As far as the shell is concerned, it's printed a prompt and is waiting for a new command. It doesn't know or care that a background process has messed up your display.
One solution is to redirect the command's output to somewhere other than your screen, either to a file or to /dev/null. If it's an error message, you'll probably have to redirect both stdout and stderr.
flock -nx "$1" xdg-open "$1" >/dev/null 2>&1 &
(This assumes you don't care about the content of the message.)
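If you do care about the message, an alternative (a sketch; the log file name is illustrative) is to keep it in a file instead of discarding it:
flock -nx "$1" xdg-open "$1" >>~/.xdg-open.log 2>&1 &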
Another option, pointed out in a comment by alvits, is to sleep for a second or so after executing the command, so the message appears followed by the next shell prompt. The sleep command is executed in the foreground, delaying the printing of the next prompt. A simple example:
$ echo hello & sleep 1
hello
[1] + Done echo hello
$
or for your example:
flock -nx "$1" xdg-open "$1" & sleep 1
This assumes that the error message is printed within the first second. That's probably a valid assumption for your example, but it might not be in general.
I don't think the command is doing what you think it does.
Have you tried running it twice to see whether the lock cannot be obtained the second time?
Well, if you do, you will see that it doesn't fail, because xdg-open forks to exec the editor. Also, if it did fail, you would expect some indication.
You should use something like this
flock -nx "$1" -c "gedit '$1' &" || { echo "ERROR"; exit 1; }

Can I change the name of `nohup.out`?

When I run nohup some_command &, the output goes to nohup.out; man nohup says to look at info nohup which in turn says:
If standard output is a terminal, the command's standard output is appended to the file 'nohup.out'; if that cannot be written to, it is appended to the file '$HOME/nohup.out'; and if that cannot be written to, the command is not run.
But if I already have one command running under nohup with output going to nohup.out and I want to run another nohup command, can I redirect the output to nohup2.out?
nohup some_command &> nohup2.out &
and voila.
Older syntax for Bash version < 4:
nohup some_command > nohup2.out 2>&1 &
For some reason, the above answer did not work for me; I did not return to the command prompt after running it as I expected with the trailing &. Instead, I simply tried with
nohup some_command > nohup2.out&
and it works just as I want it to. Leaving this here in case someone else is in the same situation. Running Bash 4.3.8 for reference.
The methods above will truncate your output file each time you run the nohup command. To append output to a file of your choosing, use >> with nohup:
nohup your_command >> filename.out &
This command will append all output to your file without removing the old data.
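Along the same lines, if you would rather keep stdout and stderr in separate files, the redirections can be split, e.g. (a sketch; the file names are illustrative):
nohup your_command >> stdout.log 2>> stderr.log &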
Since file handles point to inodes (which are stored independently of file names) on Linux/Unix systems, you can rename the default nohup.out to any other filename at any time after starting nohup something &. So one could also do the following:
$ nohup something&
$ mv nohup.out nohup2.out
$ nohup something2&
Now something adds lines to nohup2.out and something2 to nohup.out.
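A quick way to confirm this (a sketch) is to watch the renamed file; output from the still-running first job keeps arriving in it, because the open file descriptor follows the inode, not the name:
$ tail -f nohup2.out   # lines written by "something" keep appearing here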
my start.sh file:
#!/bin/bash
nohup forever -c php artisan your:command >>storage/logs/yourcommand.log 2>&1 &
There is only one important thing: the first command must be nohup, the second command must be forever, the -c parameter belongs to forever, and the "2>&1 &" part is for nohup. After running this line you can log out of your terminal, log back in, and run "forever restartall", voila... You can restart, and you can be sure that if the script halts, forever will restart it.
I <3 forever

starting remote script via ssh containing nohup

I want to start a script remotely via ssh like this:
ssh user@remote.org -t 'cd my/dir && ./myscript data my@email.com'
The script does various things which work fine until it comes to a line with nohup:
nohup time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 &
It is supposed to start the program myprog, pipe its output to my.log, and send an email with some data files created by myprog as attachments and the log as the body. However, when the script reaches this line, ssh outputs:
Connection to remote.org closed.
What is the problem here?
Thanks for any help
Your command runs a pipeline of processes in the background, so the calling script will exit straight away (or very soon afterwards). This will cause ssh to close the connection. That in turn will cause a SIGHUP to be sent to any process attached to the terminal that the -t option caused to be created.
Your time ./myprog process is protected by a nohup, so it should carry on running. But your mutt isn't, and that is likely to be the issue here. I suggest you change your command line to:
nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 " &
so the entire pipeline gets protected. (If that doesn't fix it, it may be necessary to do something with file descriptors - for instance, mutt may have other issues with the terminal not being around - or the quoting may need tweaking depending on the parameters - but give that a try for now...)
This answer may be helpful. In summary, to achieve the desired effect, you have to do the following things:
Redirect all I/O on the remote nohup'ed command
Tell your local SSH command to exit as soon as it's done starting the remote process(es).
Quoting the answer I already mentioned, in turn quoting wikipedia:
Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
UPDATE
I've just had success with this pattern:
ssh -f user@host 'sh -c "( (nohup command-to-nohup 2>&1 >output.file </dev/null) & )"'
I managed to solve this for a use case where I need to start backgrounded scripts remotely via ssh, using a technique similar to the other answers here, but in a way I feel is simpler and cleaner (at least, it makes my code shorter and -- I believe -- better-looking): explicitly closing all three streams using the stream-close redirection syntax, as discussed at the following locations:
https://unix.stackexchange.com/questions/131801/closing-a-file-descriptor-vs
https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21
http://www.tldp.org/LDP/abs/html/io-redirection.html#CFD
https://www.gnu.org/software/bash/manual/html_node/Redirections.html
Rather than the more widely used but (IMHO) hackier "redirect to/from /dev/null", this results in the deceptively simple:
nohup script.sh >&- 2>&- <&-&
2>&1 works just as well as 2>&-, but I feel the latter is ever-so-slightly more clear. ;) Most people might have a space preceding the final "background job" ampersand, but since it is not required (as the ampersand itself functions like a semicolon in normal usage), I prefer to omit it. :)
