Bash shell read error: 0: Resource temporarily unavailable - bash

When writing a bash script, sometimes you run a command that opens another program such as npm, composer, etc., but at the same time you need to use read in order to prompt the user.
Inevitably you hit this kind of error:
read: read error: 0: Resource temporarily unavailable
After doing some research, there seems to be a solution: redirect the stdin of the programs that manipulate your bash script's stdin so that they read from /dev/null.
Something like:
npm install </dev/null
Other research has shown it has something to do with stdin being set to some sort of non-blocking state and not being reset after the program finishes.
The question: is there some foolproof, elegant way of reading user-prompted input without being affected by programs that manipulate stdin, and without having to hunt down which programs need their stdin redirected to /dev/null? You may even need to use the stdin of those programs!

Usually it is important to know what input the invoked program expects and from where, so it is not a problem to redirect stdin from /dev/null for those that shouldn't be getting any.
Still, it is possible to do it for the shell itself and all invoked programs. Simply move stdin to another file descriptor and open /dev/null in its place. Like this:
exec 3<&0 0</dev/null
The above duplicates the stdin file descriptor (0) as file descriptor 3 and then opens /dev/null in its place.
After this, any invoked command attempting to read stdin will be reading from /dev/null. Programs that should read the original stdin should redirect from file descriptor 3. Like this:
read -r var 0<&3
The < redirection operator defaults to destination file descriptor 0 when the number is omitted, so the above two commands could be written as:
exec 3<&0 </dev/null
read -r var <&3
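Put together in script form, the pattern looks like this (a minimal sketch; npm install stands in for any command that disturbs stdin):
#!/bin/bash
# Park the real stdin on fd 3; everything else now reads /dev/null.
exec 3<&0 </dev/null

npm install        # reads EOF from /dev/null, cannot touch the real stdin

# Prompt the user through the saved descriptor.
read -r -p "Continue? [y/N] " answer <&3
echo "You said: $answer"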

When this happens, run bash from within your bash shell, then exit it (thus returning to the original bash shell). I found a mention of this trick in https://github.com/fish-shell/fish-shell/issues/176 and it worked for me; it seems bash restores the stdin state. Example:
bash> do something that exhibits the STDIN problem
bash> bash
bash> exit
bash> repeat something: STDIN problem fixed

I had a similar issue, but the command I was running did need a real stdin; /dev/null wasn't good enough. Instead, I was able to do:
TTY=$(/usr/bin/tty)
cmd-using-stdin < "$TTY"
read -r var
or combined with spbnick's answer:
TTY=$(/usr/bin/tty)
exec 3<&0 < "$TTY"
cmd-using-stdin
read -r var 0<&3
which leaves a clean stdin on fd 3 for you to read, while fd 0 becomes a fresh stream from the terminal for the command.

I had the same problem. I solved it by reading directly from the tty, redirecting stdin:
read -p "Play both [y]? " -n 1 -r </dev/tty
instead of simply:
read -p "Play both [y]? " -n 1 -r
In my case, the use of exec 3<&0 ... didn't work.

Clearly this is caused by programs that exit but leave stdin in non-blocking mode ("Resource temporarily unavailable" is EAGAIN).
Here is another solution (easiest to script?):
perl -MFcntl -e 'fcntl STDIN, F_SETFL, fcntl(STDIN, F_GETFL, 0) & ~O_NONBLOCK'
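For instance, run it right after the offending program (some-misbehaving-command is a placeholder here):
some-misbehaving-command     # exits, leaving fd 0 in O_NONBLOCK mode
perl -MFcntl -e 'fcntl STDIN, F_SETFL, fcntl(STDIN, F_GETFL, 0) & ~O_NONBLOCK'
read -r var                  # stdin is blocking again, so read succeeds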

The answers here which suggest using redirection are good. Fortunately, Bash's read should soon no longer need such fixes. The author of Readline, Chet Ramey, has already written a patch: http://gnu-bash.2382.n7.nabble.com/read-may-fail-due-to-nonblocking-stdin-td18519.html
However, this problem is more general than just the read command in Bash. Many programs presume stdin is blocking (e.g., mimeopen) and some programs leave stdin non-blocking after they exit (e.g., cec-client). Bash has no builtin way to turn off non-blocking input, so, in those situations, you can use Python from the command line:
$ python3 -c $'import os\nos.set_blocking(0, True)'
You can also have Python print the previous state so that it may be changed only temporarily:
$ o=$(python3 -c $'import os\nprint(os.get_blocking(0))\nos.set_blocking(0, True)')
$ somecommandthatreadsstdin
$ python3 -c $'import os\nos.set_blocking(0, '$o')'
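If you need this in several places, you could wrap the Python call in a small helper (a sketch; the function name is made up):
# Restore blocking mode on stdin (fd 0) via Python.
restore_blocking_stdin() {
    python3 -c 'import os; os.set_blocking(0, True)'
}

# Usage: call it after any program suspected of leaving stdin non-blocking.
some-command-that-breaks-stdin
restore_blocking_stdin
read -r var    # stdin is blocking again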

Related

Shell script exec with less-than sign

Can someone explain to me what this line would do in a shell script?
exec 3<&0 </dev/null
I tried googling, but couldn't home in on the details. I believe 3 is a new file descriptor and 0 is stdin, but I'm not sure what the /dev/null at the end does, or the purpose of exec, or the "<" signs.
exec without a command argument changes the I/O redirection for the rest of the script.
3<&0 duplicates the current stdin descriptor to file descriptor 3.
</dev/null redirects stdin to /dev/null, which is a special device that contains nothing (reading it returns EOF immediately, writing to it discards the data).
The purpose of all this is to redirect standard input to the null device, but save it on FD 3 so that it can be reverted later. So somewhere later in the script you should see:
exec <&3 3<&-
This duplicates FD 3 back to stdin, and then closes FD 3.
Redirection syntax is described in the Redirections section of the Bash Manual.
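A small self-contained sketch of the save/restore pair in action (head serves as an arbitrary stdin-reading command):
#!/bin/bash
exec 3<&0 </dev/null    # save stdin on fd 3, read /dev/null instead

head -c 4               # reads nothing: /dev/null returns EOF immediately

exec <&3 3<&-           # restore stdin, close fd 3
head -c 4               # reads from the original stdin again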

Bash: assigning the output of 'times' builtin to a variable

In a bash script I would like to assign the output of the times builtin to an array variable, but I found no better way than
tempnam=/tmp/aaa_$$_$RANDOM
times > ${tempnam}
mapfile -t times_a < ${tempnam}
I write the output to a temp file and read it back into the array times_a, because pipelines or $(times) would execute in a subshell and return the wrong values.
Any better solution without the temp file?
The fundamental problem that you need to solve is how to get both the execution of times and the variable assignment to happen in the same shell, without a temporary file. Almost every method Bash provides of piping the output of one thing to another, or capturing the output of a command, has one side running in a subshell.
Here is one way you can do it without a temporary file, but I'll warn you, it's not pretty, it's not portable to other shells, and it requires at least Bash 4:
coproc co { cat; }; times 1>&${co[1]}; eval "exec ${co[1]}>&-"; mapfile -tu ${co[0]} times_a
I'll break this down for you:
coproc co { cat; }
This creates a coprocess: a process which runs in the background, but you are given pipes to talk to its standard input and standard output, namely the FDs ${co[0]} (standard out of cat) and ${co[1]} (standard in of cat). The commands are executed in a subshell, so we can't achieve either of our goals in there (running times or reading into a variable), but we can use cat to simply pass its input through to its output, and then use that pipe to talk to times and mapfile in the current shell.
times >&${co[1]};
Run times, redirecting its standard out to the standard in of the cat command.
eval "exec ${co[1]}>&-"
Close the input end of the cat command. If we don't do this, cat will continue waiting for input, keeping its output open, and mapfile will continue waiting for that, causing your shell to hang. exec, when passed no commands, simply applies its redirections to the current shell; redirecting an FD to - closes it. We need to use eval because Bash seems to have trouble with exec ${co[1]}>&-, interpreting the FD as the command instead of part of the redirection; using eval allows the variable to be substituted first and then executed.
mapfile -tu ${co[0]} times_a
Finally we actually read the data from the standard out of the coprocess. We've managed to run both the times and the mapfile command in this shell, and used no temporary files, though we did use a temporary process as a pipeline between the two commands.
Note that this has a subtle race. If you execute these commands one by one, instead of all as one command, the last one fails: when you close cat's standard input, it exits, causing the coprocess to exit and its FDs to be closed. It appears that when executed all on one line, mapfile is executed quickly enough that the coprocess is still open when it runs, and thus it can read from the pipe; but I may be getting lucky. I haven't figured out a good way around this.
All told, it's much simpler just to write out the temp file. I would use mktemp to generate a filename, and if you're in a script, add a trap to ensure that you clean up your tempfile before exiting:
tempnam=$(mktemp)
trap "rm '$tempnam'" EXIT
times > ${tempnam}
mapfile -t times_a < ${tempnam}
Brian's answer got me very interested in this problem, leading to this solution which has no race condition:
coproc cat;
times >&${COPROC[1]};
{
    exec {COPROC[1]}>&-;
    mapfile -t times_a;
} <&${COPROC[0]};
This is quite similar in underlying structure to Brian's solution, but there are some key differences that ensure no funny-business occurs due to timing issues. As Brian stated, his solution typically works because the bash interpreter starts running the mapfile command before the coprocess' file descriptors have been fully closed and cleaned up, so any unexpected delay before or during mapfile would break it.
Essentially, the coprocess' stdout file descriptor gets closed a moment after we close the coprocess' stdin file descriptor. We need a way to preserve the coprocess' stdout.
In the man pages for pipe, we find:
If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process.
Thus, we must preserve at least one file descriptor to the coprocess' stdout. This is easy enough to accomplish with Redirections. We can do something like exec 3<&${COPROC[0]}- to move the coprocess' stdout file descriptor to a newly created fd 3.[1] When the coprocess terminates, we will still have a file descriptor for its stdout and be able to read from it.
Now, we can do the following:
Create a coprocess that does nothing but cat.
coproc cat;
Redirect times's stdout to the coprocess' stdin.
times >&${COPROC[1]};
Get a temporary copy of the coprocess' stdout file descriptor.
Close the coprocess' stdin file descriptor. (If using a version before Bash-4.3, you can use the eval trick Brian used.)
exec 3<&${COPROC[0]} {COPROC[1]}>&-;
Read from our temporary file descriptor into a variable.
mapfile -tu 3 times_a;
Close our temporary file descriptor (not necessary, but good practice).
exec 3<&-;
And we're done! However, we still have some opportunities to restructure to make things neater. Thanks to the nature of redirection syntax, this code:
coproc cat;
times >&${COPROC[1]};
exec 3<&${COPROC[0]} {COPROC[1]}>&-;
mapfile -tu 3 times_a;
exec 3<&-;
behaves identically to this code:
coproc cat;
times >&${COPROC[1]};
{
    exec {COPROC[1]}>&-;
    mapfile -tu 3 times_a;
} 3<&${COPROC[0]};
From here, we can remove the temporary file descriptor altogether, leading to our solution:
Bash 4.3+:
coproc cat; times >&${COPROC[1]}; { exec {COPROC[1]}>&-; mapfile -t times_a; } <&${COPROC[0]}
Bash 4.0+:
coproc cat; times >&${COPROC[1]}; { eval "exec ${COPROC[1]}>&-"; mapfile -t times_a; } <&${COPROC[0]}
[1] We do not need to close the original file descriptor here, and can just duplicate it, as the original gets closed when the stdin descriptor gets closed.
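To verify the result after running either one-liner (times prints two lines, one for the shell itself and one for its children, so times_a should end up with two elements):
declare -p times_a
# e.g. declare -a times_a=([0]="0m0.004s 0m0.000s" [1]="0m0.028s 0m0.012s")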
Wow, good question.
An improvement would be to use mktemp, so that you are not relying on randomness to keep your file unique.
TMPFILE=$(mktemp aaa_XXXXXXXXXX)
times > "$TMPFILE"
mapfile -t times_a < "$TMPFILE"
rm "$TMPFILE"
Also, I use for instead of mapfile (because I don't have mapfile).
a=0; for var in $(cat "$TMPFILE"); do ((a++)); TIMES_A[$a]=$var; done
But, yeah, I do not see how you can do it without files or named pipes.
One possibility to do something similar would be
times > >(other_command)
It's an output redirection combined with a process substitution. This way times is executed in the current shell and its output is redirected to a new subprocess. However, doing this with mapfile doesn't make much sense, as mapfile wouldn't be executed in the same shell, and that's probably not what you want. The situation is a bit tricky, as neither of the two builtin calls can be executed in a subshell.

Why won't Bash wait for read when used with Curl?

I wrote a Bash script to configure Git. It uses the read builtin, but when I do:
bash < <(curl -s https://raw.github.com/gist/419201/gitconfig.bash)
It doesn't wait for me to enter input. How do I get it to wait?
I tested it without the < as jcomeau_ictx suggested, and it worked:
bash <(curl -s https://raw.github.com/gist/419201/gitconfig.bash | head -n 3)
Note: I used head -n 3 to stop execution after the read.
You may try to read directly from the controlling terminal /dev/tty to re-enable user input in case stdin is already redirected, i.e. file descriptor 0 is not opened on a terminal.
You may even use the -t option of the test command to handle such a situation programmatically (see help test or man test); a sketch follows after the code below.
read git_name < /dev/tty # per-command I/O redirection
#read git_name < /dev/console # alternative
exec 0</dev/tty # script-wide I/O redirection
read git_name
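A minimal sketch combining the two ideas, assuming the prompt from the question:
if [ -t 0 ]; then
    read -r git_name             # stdin is a terminal; read normally
else
    read -r git_name < /dev/tty  # stdin is redirected; ask the controlling terminal
fi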
In order to use stdin, you'd need to fetch the file, say to /tmp, then bash /tmp/gitconfig.bash. The way you're doing it now, you're redirecting stdin, and Unix doesn't have a separate file descriptor for command input like VMS does.
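That approach would look roughly like this (the gist URL is taken from the question):
curl -s https://raw.github.com/gist/419201/gitconfig.bash -o /tmp/gitconfig.bash
bash /tmp/gitconfig.bash    # stdin stays connected to the terminal, so read works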

Keep a file open forever within a bash script

I need a way to make a process keep a certain file open forever. Here's an example of what I have so far:
sleep 1000 > myfile &
It works for a thousand seconds, but I really don't want to make some complicated sleep/loop statement. This post suggested that cat is equivalent to an infinite sleep, so I tried this:
cat > myfile &
It almost looks like a mistake, doesn't it? It seemed to work from the command line, but in a script the file connection did not stay open. Any other ideas?
Rather than using a background process, you can also just use bash to open one of its file descriptors:
exec 5>myfile
(The special use of exec here allows changing the current file descriptor redirections; see man bash for details.) This will open file descriptor 5 on "myfile" (use >> if you don't want to empty the file).
You can later close the file again with:
exec 5>&-
(One possible downside of this is that the FD gets inherited by every program that the shell runs in the meantime. Mostly this is harmless - e.g. your greps and seds will generally ignore the extra FD - but it could be annoying in some cases, especially if you spawn any processes that stay around, because they will then keep the FD open.)
Note: If you are using a newer version of bash (>4.1) you can use a slightly different syntax:
exec {fd}>myfile
This allocates a new file descriptor, and puts it in the variable fd. This can help ensure that scripts don't accidentally overwrite each other's file descriptors. To close the file later, use
exec {fd}>&-
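For example, a script can write through the saved descriptor between the open and the close (a minimal sketch):
exec {fd}>myfile             # open; bash picks a free descriptor, e.g. 10
echo "log entry" >&"$fd"     # write through it as often as needed
exec {fd}>&-                 # close it again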
The reason that cat > myfile & works is that it redirects standard output into a file, and cat then waits on standard input.
If you launch it with an ampersand (in the background), it won't get ANY input, including end-of-file, which means it will wait forever and print nothing to the output file.
You can get an equivalent effect, except WITHOUT dependency on standard input (the latter is what makes it not work in your script), with this command:
tail -f /dev/null > myfile &
On the issue of cat > myfile & working in a terminal but not as part of a script: in a non-interactive shell, the stdin of a backgrounded command (&) gets implicitly redirected from /dev/null.
So cat > myfile & in a script actually gets translated into cat </dev/null > myfile &, which terminates cat immediately.
See the POSIX standard on the Shell Command Language & Asynchronous Lists:
The standard input for an asynchronous list, before any explicit redirections are performed, shall be considered to be assigned to a file that has the same properties as /dev/null. If it is an interactive shell, this need not happen. In all cases, explicit redirection of standard input shall override this activity.
# some tests
sh -c 'sleep 10 & lsof -p ${!}'
sh -c 'sleep 10 0<&0 & lsof -p ${!}'
sh -ic 'sleep 10 & lsof -p ${!}'
# in a script
- cat > myfile &
+ cat 0<&0 > myfile &
tail -f myfile
This 'follows' the file, and outputs any changes to the file. If you don't want to see the output of tail, redirect output to /dev/null or something:
tail -f myfile > /dev/null
You may want to use the --retry option, depending on your specific case. See man tail for more information.

Is there a command-line shortcut for ">/dev/null 2>&1"

It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
Generic shell is the best, but other shells would be interesting to know about too, especially bash or dash.
>& /dev/null
You can write a function for this:
function nullify() {
"$#" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought of sharing it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
In bash, zsh, and dash:
$ program >&- 2>&-
It may also appear to work in other shells; &- closes the descriptor, so subsequent writes fail with a "bad file descriptor" error.
Note that this solution closes the file descriptors rather than redirecting them to /dev/null, which could potentially cause programs to abort.
Most shells support aliases. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that redirecting output is often not really necessary. Many Unix and Linux programs accept a "silent flag", usually -n or -q, that suppresses all output, leaving only the exit status to indicate success or failure.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
do_something
fi
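As a further simplification (my own note, not from the original answer), the if statement can test grep's exit status directly, dropping $? altogether:
if grep -q foo bar.txt; then
    do_something
fi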
Edit: the (:) or |: based solutions might cause an error because : doesn't read stdin. Though it might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # Should actually cause error or abortion
For all shells:
$ program 2>&1 >/dev/null
OR
$ program 2>&1|: # Should actually cause error or abortion
$ program 2>&1 > >(:) does not work for dash because it refuses to apply process substitution to the right of a file redirection.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance: /proc/self/fd/6. This file is used as input file for the command your-command.
Note: A process trying to write to a closed file descriptor will get an EBADF (bad file descriptor) error, which is more likely to cause abortion than trying to write to | true. The latter would cause an EPIPE (broken pipe) error; see Charles Duffy's comment.
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
    printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
    eval "$eval_str"
done
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
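For instance, with the list above, the loop defines for each entry a function roughly equivalent to:
okular() { command okular "$@" &>/dev/null; }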
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
It seems to me that the most portable solution, and best answer, would be a macro on your terminal (PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AHK (Google it; it's open source) in two tiny lines of code that translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it rewrites it as "ugly.sh >/dev/null 2>&1" or what not.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keyboard presses, but can't be triggered that easily.
