Bash: show output only during runtime

I am trying to write a script that displays its output to the terminal only while it's running, much like the 'less' or 'ssh' commands.
When I launch the script, it should take over the whole terminal, print what it needs to print, and then, when I exit, return me to my terminal, where the only record of the run is the command line itself. When I scroll up, I don't want to see my script's output.
[snoopdougg@machine /home/snoopdougg/logs]$ ls
alog.log blog.log clog.log myScript.sh
[snoopdougg@machine /home/snoopdougg/logs]$ whoami
snoopdougg
[snoopdougg@machine /home/snoopdougg/logs]$ ./myScript.sh
[snoopdougg@machine /home/snoopdougg/logs]$
(Like nothing ever happened... but myScript.sh would have printed things to the terminal while it was running.)
How can I do this?

You're talking about the alternate screen, which you can access with a pair of terminfo capabilities, smcup and rmcup. Put this in a script and run it for a small demo:
tput smcup
echo hello
sleep 5
tput rmcup
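To use this in a real script, it helps to switch back even when the script is interrupted. A minimal sketch, assuming a terminal whose terminfo entry provides smcup/rmcup:

#!/bin/bash
tput smcup                 # switch to the alternate screen
trap 'tput rmcup' EXIT     # switch back on any exit, including Ctrl-C
echo "this output disappears when the script exits"
sleep 5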

Use screen:
screen ./myScript.sh

Related

Bash script not ending because of the background processes

I have a bash script as given below: It runs the python script with different arguments, each one as a background process (note that I have used '&')
#!/bin/bash
declare -a arr=("arg1" "arg2" "arg3")
for i in "${arr[@]}"
do
echo "$i"
python3 test.py $i &
echo "hi"
done
exit
The test.py file is as shown below:
import sys
print('Argument List:', str(sys.argv))
I tried to run the bash script with the command ./bash_script_test.sh.
The output is also right, but the script doesn't seem to end. Also, the Python output appears after a new prompt line. Refer below for the output.
arg1
hi
arg2
hi
arg3
hi
[root@csit-openstack1 risav]# Argument List: ['test.py', 'arg2']
Argument List: ['test.py', 'arg3']
Argument List: ['test.py', 'arg1']
Why is a new command line coming up, and why is the shell script not exiting? Is it because of the use of &? If yes, can somebody explain?
Take a cup of red paint and a cup of green paint and pour them into the same bucket: the result is a brown mess. The same happens with your terminal.
You have foreground and background processes, and both write at the same time to the same terminal. The result is a mess. Background processes should write to log files instead.
Replace the line
python3 test.py $i &
with
python3 test.py "$i" > "$i.log" &
to give each background process its own log file.
If you want to merge the different sources, use a tool like syslog.
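For instance, a hedged sketch using the logger utility to tag each background process's output and hand it to syslog (where the messages end up, e.g. /var/log/syslog or the journal, depends on your system):

python3 test.py "$i" 2>&1 | logger -t "test-$i" &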
BTW: the script is ending. The last thing it does in the loop is print "hi", and your output shows "hi" three times. The new prompt appears right away; the background processes simply print their output after it.
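If you additionally want the prompt to come back only after every background job has finished, add a wait before exiting. A sketch of the script with that change:

#!/bin/bash
declare -a arr=("arg1" "arg2" "arg3")
for i in "${arr[@]}"
do
    echo "$i"
    python3 test.py "$i" > "$i.log" &
    echo "hi"
done
wait    # block until all background python3 processes have exited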

Echo a command back to the bash shell prompt?

I'm trying to implement a simple command-line util that lets users select from a set of commands and then echoes the chosen command string back to the shell. I don't want the shell to execute the command; I want it to appear on the prompt line so the user can verify or change it before pressing the return key.
I have no idea where to start. Echoing the command to stdout is easy with a log or println kind of thing, but that goes to the stdout of the current process. Ideally, the stdout of that process would become the input of the shell's prompt line, not a pipe into a new shell or a command execution. Is this possible?
e.g.
$ help # user asks for help
1. you can do this
2. you can do that
? 1 # user chooses 1, help echoes back a string to the parent shell $$
$ this-command --flags # simply ends up on prompt line, but doesn't exec
Is this possible without a hook in the terminal ui or tty?
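One approach that avoids tty trickery is to run the chooser in the current shell (e.g. as a sourced function) and use bash's read -e -i to pre-fill an editable line. A minimal sketch, with hypothetical menu entries:

# source this file, then run 'choose'
choose() {
    local cmd
    select cmd in "this-command --flags" "that-command --other"; do
        break
    done
    read -e -p "$ " -i "$cmd" cmd    # pre-filled; user edits, then presses return
    [[ -n $cmd ]] && eval "$cmd"     # run only what the user confirmed
}

It is not the real shell prompt, but it behaves like one for this purpose.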

bash hangs when exec > >(tee) is called and an additional bash script is executed with output to stdout [duplicate]

I have a shell script which writes all output to a logfile
and to the terminal. This part works fine, but after I execute the script,
a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I test this, there always is a new shell prompt; it's just that sometimes the string "output" comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way I see to solve this is to put your whole script inside a group command:
{
your-code-here
} | tee logfile
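Applied to the script in the question, that looks like:

#!/bin/bash
{
    echo "output"
    # ... rest of the script ...
} | tee logfile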
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
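If you would rather keep stderr than lose it, a hedged variant is to give it its own file (errlog is an assumed name):

script -qc 'yourScript 2>errlog' /dev/null > logfile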
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # back up the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

Why isn't this command returning to shell after &?

In Ubuntu 14.04, I created the following bash script:
flock -nx "$1" xdg-open "$1" &
The idea is to lock the file specified in $1 (flock), then open it in my usual editor (xdg-open), and finally return to prompt, so I can open other files in sequence (&).
However, the & isn't working as expected. I need to press Enter to make the shell prompt appear again. In simpler constructs, such as
gedit test.txt &
it works as it should, returning the prompt immediately. I think it has to do with the existence of two commands in the first line. What am I doing wrong, please?
EDIT
The prompt is actually there, but it is somehow "hidden". If I issue the command
sudo ./edit error.php
it replies with
Warning: unknown mime-type for "error.php" -- using "application/octet-stream"
Error: no "view" mailcap rules found for type "application/octet-stream"
Opening "error.php" with Geany (application/x-php)
__
The errors above are not related to the question. But instead of __ I see nothing. I know the prompt is there because I can issue other commands, like ls, and they work. But the question remains: why is the prompt hidden, and how can I make it show normally?
Why isn't this command returning to shell after &?
It is.
You're running a command in the background. The shell prints a new prompt as soon as the command is launched, without waiting for it to finish.
According to your latest comment, the background command is printing some message to your screen. A simple example of the same thing:
$ echo hello &
$ hello
The cursor is left at the beginning of the line after the $ hello.
As far as the shell is concerned, it's printed a prompt and is waiting for a new command. It doesn't know or care that a background process has messed up your display.
One solution is to redirect the command's output to somewhere other than your screen, either to a file or to /dev/null. If it's an error message, you'll probably have to redirect both stdout and stderr.
flock -nx "$1" xdg-open "$1" >/dev/null 2>&1 &
(This assumes you don't care about the content of the message.)
Another option, pointed out in a comment by alvits, is to sleep for a second or so after executing the command, so the message appears followed by the next shell prompt. The sleep command is executed in the foreground, delaying the printing of the next prompt. A simple example:
$ echo hello & sleep 1
hello
[1] + Done echo hello
$
or for your example:
flock -nx "$1" xdg-open "$1" & sleep 1
This assumes that the error message is printed within the first second. That's probably a valid assumption for your example, but it might not hold in general.
I don't think the command is doing what you think it does.
Have you tried running it twice, to see whether the lock can be obtained the second time?
If you do, you will see that it doesn't fail, because xdg-open forks to exec the editor. Also, if it did fail, you would expect some indication.
You should use something like this:
flock -nx "$1" -c "gedit '$1' &" || { echo "ERROR"; exit 1; }
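To see the lock being held, a quick hedged test, assuming the script is saved as edit as in the question:

$ ./edit error.php    # gedit opens and holds the lock
$ ./edit error.php    # run again while gedit is still open
ERROR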

Simply forking and redirecting the output of a command to /dev/null

I frequently execute from a shell (in my case Bash) commands that I want to fork immediately and whose output I want to ignore. So frequently in fact that I created a script (silent) to do it:
#!/bin/bash
$@ &> /dev/null &
I can then run, e.g.
silent inkscape myfile.svg
and my terminal will not be polluted by the debug output of the process I just forked.
I have two questions:
Is there an "official" way of doing this, i.e. something shorter but equivalent to &> /dev/null & ?
If not, is there a way I can make tab-completion work after my silent command as if it weren't there ? To give an example, after I've typed silent inksc, I'd like bash to auto-complete my command to silent inkscape when I press [tab].
Aside: you probably want exec "$@" &> /dev/null & in your silent script, so the sub-shell is discarded, and the quotes around "$@" will keep spaces from getting in the way.
As for #2: complete -F _command silent should do something like what you want. (I call my version of that script launch and have complete -F _command launch in my .bash_profile.)
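Concretely, assuming the bash-completion package is installed (it provides the _command helper used by completions for nohup and similar wrappers), put this in your ~/.bashrc:

complete -F _command silent    # complete arguments to 'silent' as a command line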
It looks like nohup does more or less what you want. The tab-completion problem is because bash thinks that you are trying to complete a filename as an argument to the script, whereas its completion rules know that nohup takes a command as its first argument.
nohup redirects stdout and stderr to nohup.out and will also leave the command running if your shell exits.
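For the inkscape example, a hedged equivalent using nohup, with the output silenced instead of going to nohup.out:

nohup inkscape myfile.svg >/dev/null 2>&1 &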
Here's a little script I use for launching interactive (and chatty) X apps from e.g. an xterm
#!/bin/bash
exe="$1"
shift
"$exe" "$#" 2>/tmp/$$."$exe".err 1>&2 & disown $!
No output, it won't die if the terminal exits, and in case something goes wrong there's a log of all output in /tmp.
If you don't want the log, just use /dev/null instead.
It will also work as a function, if you're script-allergic.
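The function form would look something like this sketch (launchx is a hypothetical name; put it in ~/.bashrc):

launchx() {
    local exe="$1"; shift
    "$exe" "$@" 2>"/tmp/$$.$exe.err" 1>&2 & disown $!
}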
Perhaps you could 'rebind' the Tab key? A Super User answer shows an example of doing this with the Enter key. Is this the right idea?
