Why isn't this command returning to shell after &? - bash

In Ubuntu 14.04, I created the following bash script:
flock -nx "$1" xdg-open "$1" &
The idea is to lock the file specified in $1 (flock), then open it in my usual editor (xdg-open), and finally return to the prompt so I can open other files in sequence (&).
However, the & isn't working as expected. I need to press Enter to make the shell prompt appear again. In simpler constructs, such as
gedit test.txt &
it works as it should, returning the prompt immediately. I think it has to do with the existence of two commands in the first line. What am I doing wrong, please?
EDIT
The prompt is actually there, but it is somehow "hidden". If I issue the command
sudo ./edit error.php
it replies with
Warning: unknown mime-type for "error.php" -- using "application/octet-stream"
Error: no "view" mailcap rules found for type "application/octet-stream"
Opening "error.php" with Geany (application/x-php)
__
The errors above are not related to the question. But instead of __ I see nothing. I know the prompt is there because I can issue other commands, like ls, and they work. But the question remains: WHY is the prompt hidden? And how can I make it show normally?

Why isn't this command returning to shell after &?
It is.
You're running a command in the background. The shell prints a new prompt as soon as the command is launched, without waiting for it to finish.
According to your latest comment, the background command is printing some message to your screen. A simple example of the same thing:
$ echo hello &
$ hello
The cursor is left at the beginning of the line after the $ hello.
As far as the shell is concerned, it has printed a prompt and is waiting for a new command. It doesn't know or care that a background process has messed up your display.
One solution is to redirect the command's output to somewhere other than your screen, either to a file or to /dev/null. If it's an error message, you'll probably have to redirect both stdout and stderr.
flock -nx "$1" xdg-open "$1" >/dev/null 2>&1 &
(This assumes you don't care about the content of the message.)
Another option, pointed out in a comment by alvits, is to sleep for a second or so after executing the command, so the message appears followed by the next shell prompt. The sleep command is executed in the foreground, delaying the printing of the next prompt. A simple example:
$ echo hello & sleep 1
hello
[1] + Done echo hello
$
or for your example:
flock -nx "$1" xdg-open "$1" & sleep 1
This assumes that the error message is printed in the first second. That's probably a valid assumption for your example, but it might not be in general.
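Putting that together, a minimal sketch of the asker's edit script with the redirection applied (only the redirection is new; everything else is taken from the question):
#!/bin/bash
# Lock the file, hand it to xdg-open in the background, and silence the
# job's output so nothing lands on top of the next prompt.
flock -nx "$1" xdg-open "$1" >/dev/null 2>&1 &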

I don't think the command is doing what you think it does.
Have you tried running it twice, to check that the lock cannot be obtained the second time?
Well, if you do, you will see that it doesn't fail, because xdg-open just forks off the editor and exits, so the lock is released almost immediately. Also, if it did fail, you would expect some indication of that.
You should use something like this:
flock -nx "$1" -c "gedit '$1' &" || { echo "ERROR"; exit 1; }

Related

bash hangs when exec > > is called and an additional bash script is executed with output to stdin [duplicate]

I have a shell script which writes all output to a logfile
and to the terminal; that part works fine. But if I execute the script,
a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt, it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
your-code-here
} | tee logfile
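Applied to the script from the question, that looks something like this (the logfile name is kept from the question):
#!/bin/bash
# Everything inside the braces goes through tee, which now runs in the
# foreground, so the shell waits for it before printing the next prompt.
{
    echo "output"
    # ... rest of the script ...
} | tee logfile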
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
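For instance, one way to capture such a session (typescript is script's default output file, and yourscript is a placeholder for the script under test):
script                  # start recording everything written to the terminal into ./typescript
./yourscript            # run the script that shows the odd prompt behaviour
exit                    # stop recording
cat -v typescript       # wade through the raw output; control characters show up as ^[ sequences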
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() {                  # define a function we'll call during shutdown
  retval=$?
  exec >&$orig_stdout        # Copy your original stdout back to FD 1, overwriting the pipe to tee
  exec 2>&$orig_stderr       # If something overwrites stderr to also go through tee, fix that too
  wait "$tee_pid"            # Now, wait until tee exits
  exit "$retval"             # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

Send echo command to an external xTerm

I have a bash script, and I want to be able to keep a log in an xterm, and be able to send echo to it anytime.
How would I do this?
Check the GPG_TTY variable in your xterm session. It should have a value similar to
GPG_TTY=/dev/pts/2
This method should be available for terminals that support GNU Pinentry.
Another option to determine the current terminal name is to use
readlink /proc/self/fd/0
The last method applies only to Linux.
Now, if your bash script runs a command such as
echo "Hello, world!" > /dev/pts/2
This line should appear on the xterm screen.
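Putting those pieces together, a small sketch (the tty path /dev/pts/2 is only an example; substitute whatever your xterm actually reports):
# In the xterm that should act as the log window:
readlink /proc/self/fd/0            # prints something like /dev/pts/2
# In the bash script, write log lines straight to that terminal:
log_tty=/dev/pts/2
echo "Hello, world!" > "$log_tty"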
I managed to make a console by running an xterm with a while loop that clears the screen, reads the contents of the log file, pauses for a second, then loops again. Here is the command:
xterm -T Console -e "while true; do clear && cat ${0}-LOG.txt && sleep 1; done"
Then to send something to the console:
echo -e "\e[91;1mTest" >> ${0}-LOG.txt
And the console will update each second.

bash show output only during runtime

I am trying to write a script that displays its output to the terminal only while it's running, much like the 'less' or 'ssh' commands.
When I launch said script, it would take over the whole terminal, print what it needs to print, and then when I exit the script, I would return to my terminal where the only record that my script has run will be the line that shows the command itself. When I scroll up, I don't want to see what my script output.
[snoopdougg@machine /home/snoopdougg/logs]$ ls
alog.log blog.log clog.log myScript.sh
[snoopdougg@machine /home/snoopdougg/logs]$ whoami
snoopdougg
[snoopdougg@machine /home/snoopdougg/logs]$ ./myScript.sh
[snoopdougg@machine /home/snoopdougg/logs]$
(Like nothing ever happened... but myScript.sh would have print things to the terminal while it was running).
How can I do this?
You're talking about the alternate screen, which you can access with a pair of terminfo capabilities, smcup and rmcup. Put this in a script and run it for a small demo:
tput smcup
echo hello
sleep 5
tput rmcup
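If the script can be interrupted (Ctrl-C, errors), it is worth making sure rmcup still runs so the terminal gets restored; a small sketch using a trap:
#!/bin/bash
tput smcup                  # switch to the alternate screen
trap 'tput rmcup' EXIT      # switch back when the script exits, including on Ctrl-C or an error
echo hello
sleep 5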
Use screen:
screen ./myScript.sh

Unix redirection issue: </dev/null 1>&- 2>&- &

Unix redirection:
Recently I faced an issue where one of our scripts was using the command below to run in the background. The issue was that the script was executing twice when it was started.
For example:
In the script I put an echo "Hello" to print to the log file. When the script executed, I saw in the log file that it was printed twice at the same time. Can anyone tell me what is causing the script to execute twice?
nohup <runScript> </dev/null 1>&- 2>&- &
The original version of your question was slightly confusing. The subject line asks about (with command and argument inferred):
somecmd arg1 </dev/null 1>&- 2>&- &
The body of the question appeared to ask about:
nohup &- 2>&- &
which could reasonably be inferred to mean:
nohup somecmd arg1 &- 2>&- &
The edited version of your question is also confusing, though the change was just to indent the code fragment. The notation <runscript> is ill-chosen when you are asking about I/O redirections. I'm guessing that what you wrote as <runscript> is equivalent to me writing somecmd, rather than redirecting standard input from runscript followed by an ill-formed output redirection. However, the revised
code does at least match the subject line:
nohup runScript </dev/null 1>&- 2>&- &
So, I'll ignore the &- notation (a previous version of this answer did not).
Notation </dev/null 1>&- 2>&- &
The first command line redirects standard input from /dev/null, and closes both standard output and standard error and executes the command in background. Redirecting from /dev/null is good; closing standard output and standard error is not so good — programs are entitled to have those three file descriptors open, and that can be done by redirecting to /dev/null too:
somecmd arg1 </dev/null >/dev/null 2>&1 &
or:
somecmd arg1 </dev/null >/dev/null 2>/dev/null &
There is not much difference between these two.
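To see why closing the descriptors is riskier, a quick demonstration in a throwaway subshell (the exact error wording varies by shell):
( exec >&-; echo hello )          # stdout closed: "echo: write error: Bad file descriptor"
( exec >/dev/null; echo hello )   # stdout redirected: succeeds, the output is simply discarded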
Double running
There is nothing in any of the code that would account for the script being run twice, or the output appearing in a log file twice. Since you have not shown the script that was run, we cannot deduce any cause from that. On the whole, the charge would be 'operator error' — you managed to run the command twice. If you want us to look into that, you'll have to provide a reproducible script that:
Shows the script to be run.
Empties the log file.
Runs the script once with your chosen notation.
Shows that the log file contains two entries.
Without such a reproducible script, there's nothing anyone can do to help you.
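A minimal reproduction skeleton along those lines might look like this (runScript and runScript.log are placeholders for your actual script and log file):
cat runScript                                     # 1. show the script being tested
: > runScript.log                                 # 2. empty the log file
nohup ./runScript </dev/null 1>&- 2>&- &          # 3. run it exactly once, with your chosen notation
wait                                              #    let it finish
cat runScript.log                                 # 4. show whether the log now holds two entries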

Simply forking and redirecting the output of a command to /dev/null

I frequently execute from a shell (in my case Bash) commands that I want to fork immediately and whose output I want to ignore. So frequently in fact that I created a script (silent) to do it:
#!/bin/bash
$@ &> /dev/null &
I can then run, e.g.
silent inkscape myfile.svg
and my terminal will not be polluted by the debug output of the process I just forked.
I have two questions:
Is there an "official" way of doing this?, i.e. something shorter but equivalent to &> /dev/null & ?
If not, is there a way I can make tab-completion work after my silent command as if it weren't there ? To give an example, after I've typed silent inksc, I'd like bash to auto-complete my command to silent inkscape when I press [tab].
Aside: you probably want to exec "$@" &> /dev/null & in your silent script, to cause it to discard the sub-shell, and the quotes around "$@" will keep spaces from getting in the way.
As for #2: complete -F _command silent should do something like what you want. (I call my version of that script launch and have complete -F launch in my .bash_profile)
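Putting the two suggestions together, a sketch of the silent script plus the completion setup (the ~/.bashrc location is just an assumption about where you keep such settings, and _command needs the bash-completion helpers loaded):
#!/bin/bash
# silent: run a command in the background, discarding all of its output.
# exec replaces the forked subshell with the command itself, saving one process.
exec "$@" &>/dev/null &

# And, in ~/.bashrc (not in the script), let tab completion treat the first
# argument to silent as a command name:
#   complete -F _command silent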
It looks like nohup does more or less what you want. The tab-completion problem is because bash thinks that you are trying to complete a filename as an argument to the script, whereas its completion rules know that nohup takes a command as its first argument.
Nohup redirects stdout and stderr to nohup.out and will also leave the command running if your shell exits.
Here's a little script I use for launching interactive (and chatty) X apps from e.g. an xterm
#!/bin/bash
exe="$1"
shift
"$exe" "$#" 2>/tmp/$$."$exe".err 1>&2 & disown $!
No output, won't die if the terminal exits, but in case something goes wrong there's a log of all output in /tmp
If you don't want the log just use /dev/null instead.
It will also work from a function if you're script-allergic.
Perhaps you could 'rebind' the Tab key? There is an example on Super User that does this with the Enter key. Is this the right idea?
