Keep a file open forever within a bash script

I need a way to make a process keep a certain file open forever. Here's an example of what I have so far:
sleep 1000 > myfile &
It works for a thousand seconds, but I really don't want to write some complicated sleep/loop construct. This post suggested that cat is effectively an infinite sleep. So I tried this:
cat > myfile &
It almost looks like a mistake, doesn't it? It seemed to work from the command line, but in a script the file did not stay open. Any other ideas?

Rather than using a background process, you can also just use bash to open one of its file descriptors:
exec 5>myfile
(The special use of exec here changes the current shell's file descriptor redirections - see man bash for details.) This opens file descriptor 5 on "myfile" (use >> instead if you don't want to truncate the file).
You can later close the file again with:
exec 5>&-
(One possible downside is that the FD is inherited by every program the shell runs in the meantime. Mostly this is harmless - e.g. your greps and seds will generally ignore the extra FD - but it can be annoying in some cases, especially if you spawn long-lived processes, because they will then keep the FD open.)
Note: if you are using bash 4.1 or newer, you can use a slightly different syntax:
exec {fd}>myfile
This allocates a new file descriptor and stores its number in the variable fd. This helps ensure that scripts don't accidentally clobber each other's file descriptors. To close the file later, use:
exec {fd}>&-
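For example, a minimal sketch of writing through such a descriptor (the sample text is purely illustrative):
exec {fd}>myfile               # open myfile; the descriptor number lands in $fd
echo "first line" >&"$fd"      # write through the saved descriptor
echo "second line" >&"$fd"
exec {fd}>&-                   # close it when done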

The reason cat > myfile & works is that it redirects standard input into a file.
If you launch it with an ampersand (in the background), it won't get ANY input, including end-of-file, which means it will wait forever and write nothing to the output file.
You can get an equivalent effect, except WITHOUT the dependency on standard input (that dependency is what makes it fail in your script), with this command:
tail -f /dev/null > myfile &
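A minimal sketch of using this in a script (the variable name holder_pid is just illustrative):
tail -f /dev/null > myfile &   # holds myfile open indefinitely
holder_pid=$!
# ... do work while the file stays open ...
kill "$holder_pid"             # release the file when finished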

On the cat > myfile & issue behaving differently in a terminal versus in a script: in a non-interactive shell, the stdin of a command backgrounded with & is implicitly redirected from /dev/null.
So cat > myfile & in a script effectively becomes cat </dev/null > myfile &, which terminates cat immediately on end-of-file.
See the POSIX standard on the Shell Command Language, Asynchronous Lists:
    The standard input for an asynchronous list, before any explicit redirections are performed, shall be considered to be assigned to a file that has the same properties as /dev/null. If it is an interactive shell, this need not happen. In all cases, explicit redirection of standard input shall override this activity.
# some tests
sh -c 'sleep 10 & lsof -p ${!}'        # non-interactive: stdin is /dev/null
sh -c 'sleep 10 0<&0 & lsof -p ${!}'   # explicit stdin redirection overrides that
sh -ic 'sleep 10 & lsof -p ${!}'       # interactive (-i): stdin stays on the terminal
# in a script: an explicit stdin redirection (0<&0) overrides the implicit /dev/null
- cat > myfile &
+ cat 0<&0 > myfile &

tail -f myfile
This 'follows' the file and outputs any changes made to it. If you don't want to see tail's output, redirect it to /dev/null or similar:
tail -f myfile > /dev/null
You may want to use the --retry option, depending on your specific case. See man tail for more information.
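For example, with GNU tail, --retry keeps trying to open the file if it is initially inaccessible (a sketch assuming the GNU implementation):
tail --retry -f myfile > /dev/null &   # keeps retrying until myfile becomes readable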

Related

Program has no output only when run by a shell script enforcing a timeout

First, here is the sequence of commands I want to run:
echo <some_input> | <some_command> > temp.dat & pid=$!
sleep 5
kill -INT "$pid"
The above works perfectly fine when I run it line by line from a bash shell, and the contents of the temp.dat file are exactly what I want. But when I put the same set of commands in a bash script, I get nothing in the temp.dat file.
Now, I'll explain why I'm writing the commands this way:
<some_command> asks for input, which is why I pipe <some_input> into it.
I want the output of that command in a separate file, which is why I redirect the output.
I want to kill the command by sending it SIGINT after some time.
I've tried forcing an interactive shell by putting #!/bin/bash -i on the first line of the script, but it didn't work.
Any alternate method to achieve the same results will be appreciated.
Update: <some_command> also invokes a Python script, but I don't think that should make it behave differently.
Update 2: the Python script turned out to be the sole cause of the different behavior.
One likely cause here is that your Python process may not be flushing stdout within the allowed five seconds of runtime.
export PYTHONUNBUFFERED=1
...will cause output to be written promptly, rather than waiting for process exit, file close, or the buffered content to reach a level sufficient to justify the overhead of a flush.
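A minimal sketch of applying this to the original pipeline (<some_input> and <some_command> are placeholders from the question):
export PYTHONUNBUFFERED=1            # the Python child inherits this and flushes stdout promptly
echo <some_input> | <some_command> > temp.dat & pid=$!
sleep 5
kill -INT "$pid"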
Will this work for you?
read -p "Input Data : " inputdata ; echo $inputdata > temp.data ; sleep 5; exit
Or, written out:
#!/usr/bin/env bash
read -p "Input Data : " inputdata
echo "$inputdata" > temp.data
sleep 5
should work as a script
And adapted to suit your case:
#!/usr/bin/env bash
read -p "Input Data : " inputdata
<code you write, e.g. echo "$inputdata"> > temp.data
sleep 5

How does this redirection after a here-document work?

ftp -v -n <<! > /tmp/ftp$$ 2>&1
open $TARGET_HOST
user $TARGET_USER $TARGET_PWORD
binary
cd $TARGET_PUT_DIR
put $RESULTS_OUT_DIR/$FILE $FILE
bye
!
I understand that <<! is a "here-document" passing the commands to ftp until it reaches the delimiter "!", but I can't seem to wrap my head around this redirection:
> /tmp/ftp$$ 2>&1
Could someone please explain what is happening here?
First, the heredoc could be listed last without affecting what happens. Heredocs are traditionally written last, but the <<NAME can actually appear anywhere within the command. The order of << relative to the two > redirections doesn't matter, since the former changes stdin and the latter change stdout and stderr.
It'd be clearer if it were written:
ftp -v -n > /tmp/ftp$$ 2>&1 <<!
...
!
Second, to explain the output redirections:
> /tmp/ftp$$ redirects stdout to a file named /tmp/ftp1234, where 1234 is the PID of the current shell process. It's an ad hoc way of making a temporary file with a relatively unique name. If the shell script were run several times in parallel each copy would write to a different temp file.
2>&1 redirects stderr (fd 2) to stdout (fd 1). In other words, it sends error messages to the same file /tmp/ftp$$.
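As an aside, a more robust modern alternative to the /tmp/ftp$$ pattern is mktemp; a hedged sketch of the same idea (session body as shown above):
tmpfile=$(mktemp /tmp/ftp.XXXXXX)   # unique temp file; safer than PID-based names
ftp -v -n > "$tmpfile" 2>&1 <<!
...
!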

Explain the bash command "exec > >(tee $LOG_FILE) 2>&1"

My intent was to have all the output of my bash script displayed on the console and logged to a file.
Here is my script that works as expected.
#!/bin/bash
LOG_FILE="test_log.log"
touch $LOG_FILE
# output to console and to logfile
exec > >(tee $LOG_FILE) 2>&1
echo "Starting command ls"
ls -al
echo "End of script"
However I do not understand why it works that way.
I expected exec >>(tee $LOG_FILE) 2>&1 to work, but it fails, although exec >>$LOG_FILE 2>&1 does work.
I could not find the reason for the construction exec > >(command) in the bash manual or in Advanced Bash Scripting. Can you explain the logic behind it?
The >(tee $LOG_FILE) is an example of process substitution; you can find it under that name in the Bash manual and the Advanced Bash-Scripting Guide.
Using the syntax <(program) for capturing output and >(program) for feeding input, we can pass data one record at a time. It is more powerful than command substitution (backticks, or $( )) because it substitutes a filename, not text. Therefore, anywhere a file is normally specified we can substitute a program's standard output or input (although process substitution on input is not all that common).
This is particularly useful where a program does not use standard streams for what you want.
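A classic illustration is diff, which expects filenames rather than streams; process substitution supplies one where each filename is required (a.txt and b.txt are just placeholder names):
diff <(sort a.txt) <(sort b.txt)   # compare two files' sorted contents without temp files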
Note that in your example you are missing a space: exec >>(tee $LOG_FILE) 2>&1 is wrong (you will get a syntax error). Rather,
exec > >(tee $LOG_FILE) 2>&1
is correct, that space is critical.
So, the exec > part changes file descriptor 1 (the default), also known as stdout or standard output, to refer to "whatever comes next" - in this case the process substitution, although normally it would be a filename.
2>&1 redirects file descriptor 2 (stderr or standard error) to refer to the same place as file descriptor 1 (stdout). Important: if you omit the &, you end up with a file called 1 rather than a successful redirection.
Once you have called the exec line above, then you have changed the current process's standard output, so output from the commands which follow go to that tee process instead of to regular stdout.

Bash shell read error: 0: Resource temporarily unavailable

When writing a bash script, sometimes you run a command which opens up another program, such as npm, composer, etc. But at the same time you need to use read in order to prompt the user.
Inevitably you hit this kind of error:
read: read error: 0: Resource temporarily unavailable
After doing some research, there seems to be a solution: redirect the STDIN of those programs which manipulate the STDIN of your bash script from /dev/null.
Something like:
npm install </dev/null
Other research suggests it has something to do with STDIN being put into some blocking/non-blocking state that isn't reset after the program finishes.
The question: is there some foolproof, elegant way of reading user input without being affected by programs that manipulate STDIN, and without having to hunt down which programs need their STDIN redirected from /dev/null? You may even need to use the STDIN of those programs!
Usually it is important to know what input the invoked program expects and from where, so it is not a problem to redirect stdin from /dev/null for those that shouldn't be getting any.
Still, it is possible to do it for the shell itself and all invoked programs. Simply move stdin to another file descriptor and open /dev/null in its place. Like this:
exec 3<&0 0</dev/null
The above duplicates stdin file descriptor (0) under file descriptor 3 and then opens /dev/null to replace it.
After this any invoked command attempting to read stdin will be reading from /dev/null. Programs that should read original stdin should have redirection from file descriptor 3. Like this:
read -r var 0<&3
The < redirection operator assumes destination file descriptor 0 when the number is omitted, so the above two commands could be written as:
exec 3<&0 </dev/null
read -r var <&3
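Putting it together, a minimal sketch of a script protected this way (npm install stands in for any stdin-hungry command, as in the question):
#!/usr/bin/env bash
exec 3<&0 </dev/null    # park the real stdin on fd 3; everything else reads /dev/null
npm install             # reads /dev/null and cannot disturb the real stdin
read -r answer <&3      # prompt the user from the original stdin
echo "You said: $answer"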
When this happens, run bash from within your bash shell, then exit it (thus returning to the original bash shell). I found a mention of this trick in https://github.com/fish-shell/fish-shell/issues/176 and it worked for me; it seems bash restores the STDIN state. Example:
bash> do something that exhibits the STDIN problem
bash> bash
bash> exit
bash> repeat something: STDIN problem fixed
I had a similar issue, but the command I was running did need a real STDIN; /dev/null wasn't good enough. Instead, I was able to do:
TTY=$(/usr/bin/tty)
cmd-using-stdin < "$TTY"
read -r var
or combined with spbnick's answer:
TTY=$(/usr/bin/tty)
exec 3<&0 < "$TTY"
cmd-using-stdin
read -r var 0<&3
which leaves the original STDIN on fd 3 for you to read, while fd 0 becomes a fresh stream from the terminal for the command.
I had the same problem. I solved it by reading directly from the tty, redirecting stdin like this:
read -p "Play both [y]? " -n 1 -r </dev/tty
instead of simply:
read -p "Play both [y]? " -n 1 -r
In my case, the use of exec 3<&0 ... didn't work.
Clearly ("resource temporarily unavailable" is EAGAIN), this is caused by programs that exit but leave STDIN in non-blocking mode.
Here is another solution (easiest to script?):
perl -MFcntl -e 'fcntl STDIN, F_SETFL, fcntl(STDIN, F_GETFL, 0) & ~O_NONBLOCK'
The answers here which suggest using redirection are good. Fortunately, Bash's read should soon no longer need such fixes. The author of Readline, Chet Ramey, has already written a patch: http://gnu-bash.2382.n7.nabble.com/read-may-fail-due-to-nonblocking-stdin-td18519.html
However, this problem is more general than just the read command in Bash. Many programs presume stdin is blocking (e.g., mimeopen) and some programs leave stdin non-blocking after they exit (e.g., cec-client). Bash has no builtin way to turn off non-blocking input, so, in those situations, you can use Python from the command line:
$ python3 -c $'import os\nos.set_blocking(0, True)'
You can also have Python print the previous state so that it may be changed only temporarily:
$ o=$(python3 -c $'import os\nprint(os.get_blocking(0))\nos.set_blocking(0, True)')
$ somecommandthatreadsstdin
$ python3 -c $'import os\nos.set_blocking(0, '$o')'

Bash script: write execution time to a file

I need to write the time taken to execute this command in a txt file:
time ./program.exe
How can I do this in a bash script?
I tried with >> time.txt, but that doesn't work (the output does not go to the file; it still goes to the screen).
Getting time in bash to write to a file is hard work, because time is a shell keyword, not an external command. (On Mac OS X, there's an external command, /usr/bin/time, that does a similar job but with a different output format and less recalcitrance.)
You need to use:
(time ./program.exe) 2> time.txt
It writes to standard error (hence the 2> notation). However, if you don't use the sub-shell (the parentheses), it doesn't work; the output still comes to the screen.
Alternatively, and without a sub-shell, you can use:
{ time ./program.exe; } 2> time.txt
Note the space after the open brace and the semi-colon; both are necessary on a single line. The braces must appear where a command could appear, and must be standalone symbols. (If you struggle hard enough, you'll come up with ...;}|something or ...;}2>&1. Both of these identify the brace as a standalone symbol, though. If you try ...;}xyz, the shell will (probably) fail to find a command called }xyz, though.)
I need to run more command in more terminal. If I do this:
xterm -xrm '*hold: true' -e (time ./Program.exe) >> time.exe & sleep 2
it doesn't work and tells me Syntax error: "(" unexpected. How do I fix this?
You would need to do something like:
xterm -xrm '*hold: true' -e sh -c "(time ./Program.exe) 2> time.txt & sleep 2"
The key change is to run the shell with the script coming from the argument to the -c option; you can replace sh with /bin/bash or an equivalent name. That should get around any 'Syntax error' issues. I'm not quite sure what triggers that error, though, so there may be a simpler and better way to deal with it. It's also conceivable that xterm's -e option only takes a single string argument, in which case, I suppose you'd use:
xterm -xrm '*hold: true' -e 'sh -c "(time ./Program.exe) 2> time.txt & sleep 2"'
You can read man bash and man xterm as well as I can.
I'm not sure why you run the timed program in background mode, but that's your problem, not mine. Similarly, the sleep 2 is not obviously necessary if the hold: true keeps the terminal open.
time_elapsed=$( { time ./program.exe; } 2>&1 | grep real | awk '{print $NF}' )
echo "$time_elapsed" > file.txt
This should give you the exact time consumed, written to the desired file.
You can also redirect time's output straight to a file using 2> file.txt, as explained in another reply.
It's not easy to redirect the output of the bash builtin time.
One solution is to use the external time program:
/bin/time --append -o time.txt ./program.exe
(on most systems it's a GNU program, so use info time rather than man to get its documentation).
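With GNU time you can also control the format; for example, to record only the elapsed wall-clock seconds (a sketch assuming the GNU implementation):
/usr/bin/time -f '%e' -o time.txt ./program.exe   # %e = elapsed real time in seconds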
Just enclose the command to time in a { .. }:
{ time ./program.exe; } 2>&1
Of course, the output of builtin time goes to stderr, thus the needed redirection 2>&1.
Then it may appear tricky to capture that output; using a second { .. } makes the command easier to read. This works:
{ { time ./program.exe; } 2>&1; } >> time.txt # This works.
However, the correct construct simply has the redirections in the reverse order, like this:
{ time ./program.exe; } >> time.txt 2>&1; # Correct.
To suppress any possible output from the command itself, redirect its output to /dev/null, like this:
{ time ./program.exe >/dev/null 2>&1; } >> time.txt 2>&1 # Better.
And, as now there is only output on stderr, we could simply capture just it:
{ time ./program.exe >/dev/null 2>&1; } 2>> time.txt # Best.
The output from ./program.exe should be redirected, or it may well end up inside time.txt.
