How to not show my program's output on the terminal - bash

I use GROMACS, and I'm thinking about how I can make my script faster.
This is my script:
#!/bin/bash
number=1
for var1 in {1095000..1100000}
do
gmx hbond -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1 <<EOF
8 45
EOF
number=$((number+1))
done
My script makes 5000 files. It brings up the GROMACS program on the screen and asks me to choose two numbers (that is why I use <<EOF 8 45 EOF, to answer this automatically for every file).
But I heard that printing things on the screen takes time, so how do I stop GROMACS from showing its output in the terminal?
I tried to use '>' but I still see most of the program's output (I no longer see the part where I should choose the numbers):
#!/bin/bash
number=1
for var1 in {1095000..1100000}
do
gmx hbond >/dev/null -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1 >/dev/null <<EOF
8 45
EOF
>/dev/null
number=$((number+1))
done

Every process has 3 standard file descriptors by default:
Standard input, a.k.a. stdin
Standard output, a.k.a. stdout
Standard error, a.k.a. stderr
Redirecting with >/dev/null sends standard output to /dev/null; it is strictly equivalent to 1>/dev/null.
However, the program may also write to standard error, in which case you may want to add 2>/dev/null to suppress stderr messages:
gmx hbond >/dev/null 2>/dev/null -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1
In Bash you may use &> to redirect both stdout and stderr at the same time:
gmx hbond &>/dev/null -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1
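Putting this together with the loop from the question (a sketch; the heredoc still supplies the 8 45 answer to the group-selection prompt, and only the screen output is discarded):
#!/bin/bash
number=1
for var1 in {1095000..1100000}
do
# &>/dev/null throws away both stdout and stderr; stdin still comes from the heredoc
gmx hbond -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1 &>/dev/null <<EOF
8 45
EOF
number=$((number+1))
done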


xargs output buffering -P parallel

I have a bash function that I call in parallel using xargs -P, like so:
echo ${list} | xargs -n 1 -P 24 -I# bash -l -c 'myAwesomeShellFunction #'
Everything works fine, but the output is messed up for obvious reasons (no buffering).
I'm trying to figure out a way to buffer the output effectively. I was thinking I could use awk, but I'm not good enough to write such a script and I can't find anything worthwhile on Google. Can someone help me write this "output buffer" in sed or awk? Nothing fancy, just accumulate the output and spit it out after the process terminates. I don't care about the order in which the shell functions execute, I just need their output buffered... Something like:
echo ${list} | xargs -n 1 -P 24 -I# bash -l -c 'myAwesomeShellFunction # | sed -u ""'
P.S. I tried to use stdbuf as per https://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe, but it did not work; I specified buffering on o and e, but the output is still unbuffered:
echo ${list} | xargs -n 1 -P 24 -I# stdbuf -i0 -oL -eL bash -l -c 'myAwesomeShellFunction #'
Here's my first attempt; it only captures the first line of output:
$ bash -c "echo stuff;sleep 3; echo more stuff" | awk '{while (( getline line) > 0 )print "got ",$line;}'
$ got stuff
This isn't quite atomic if your output is longer than a page (4kb typically), but for most cases it'll do:
xargs -P 24 bash -c 'for arg; do printf "%s\n" "$(myAwesomeShellFunction "$arg")"; done' _
The magic here is the command substitution: $(...) creates a subshell (a fork()ed-off copy of your shell), runs the code ... in it, and then reads that in to be substituted into the relevant position in the outer script.
Note that we don't need -n 1 (if you're dealing with a large number of arguments -- for a small number it may improve parallelization), since we're iterating over as many arguments as each of your 24 parallel bash instances is passed.
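A minimal, self-contained demonstration of that effect (slowFunc is a hypothetical stand-in for myAwesomeShellFunction; -n 1 is kept here only to force three parallel invocations):
# slowFunc emits its two lines a second apart, yet each invocation's output
# arrives grouped, because the command substitution collects it all before
# printf writes it out (subject to the atomicity caveat above).
slowFunc() { echo "start $1"; sleep 1; echo "end $1"; }
export -f slowFunc
printf '%s\n' a b c | xargs -n 1 -P 3 bash -c 'printf "%s\n" "$(slowFunc "$1")"' _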
If you want to make it truly atomic, you can do that with a lockfile:
# generate a lockfile, arrange for it to be deleted when this shell exits
lockfile=$(mktemp -t lock.XXXXXX); export lockfile
trap 'rm -f "$lockfile"' 0
xargs -P 24 bash -c '
    for arg; do
        {
            output=$(myAwesomeShellFunction "$arg")
            flock -x 99
            printf "%s\n" "$output"
        } 99>"$lockfile"
    done
' _

How to redirect stdin to a FIFO with bash

I'm trying to redirect stdin to a FIFO with bash. This way, I will be able to use this stdin in another part of the script.
However, it doesn't seem to work the way I want.
script.bash
#!/bin/bash
rm /tmp/in -f
mkfifo /tmp/in
cat >/tmp/in &
# I want to be able to reuse /tmp/in from an other process, for example :
xfce4-terminal --hide-menubar --title myotherterm --fullscreen -x bash -i -c "less /tmp/in"
Here I would expect, when I run ls | ./script.bash, to see the output of ls, but it doesn't work (e.g. the script exits without outputting anything).
What am I misunderstanding?
I am pretty sure that less needs the additional -f flag when reading from a pipe:
test_pipe is not a regular file (use -f to see it)
If that does not help, I would also recommend swapping the order of the last two lines of your script:
#!/bin/bash
rm /tmp/in -f
mkfifo /tmp/in
xfce4-terminal --hide-menubar --title myotherterm --fullscreen -x bash -i -c "less -f /tmp/in" &
cat /dev/stdin >/tmp/in
In general, I avoid the use of /dev/stdin, because I get a lot of surprises about what exactly /dev/stdin is, especially when using redirects.
However, what I think you're seeing is that less finishes before your terminal has completely started. When less ends, so will the terminal, and you won't get any output.
As an example:
xterm -e ls
will also not really display a terminal.
A solution might be tail -f, as in, for example,
#!/bin/bash
rm -f /tmp/in
mkfifo /tmp/in
xterm -e "tail -f /tmp/in" &
while :; do
    date > /tmp/in
    sleep 1
done
because the tail -f remains alive.
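A hedged sketch combining both points for the original use case (the xfce4-terminal invocation is taken from the question; tail -f keeps a reader on the FIFO while the script copies its own stdin into it):
#!/bin/bash
rm -f /tmp/in
mkfifo /tmp/in
# tail -f keeps reading, so the terminal stays open after the input is exhausted
xfce4-terminal --hide-menubar --title myotherterm --fullscreen -x bash -c "tail -f /tmp/in" &
# copy this script's stdin into the FIFO, e.g. when run as: ls | ./script.bash
cat >/tmp/in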

Copy *unbuffered* stdout to file from within bash script itself

I want to copy stdout to a log file from within a bash script, meaning I don't want to call the script with output piped to tee, I want the script itself to handle it. I've successfully used this answer to accomplish this, using the following code:
#!/bin/bash
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"
This works, but has the downside of output being buffered until the script is completed, as is also discussed in the linked answer. In the above example, both "hello" and "world" will show up in the log only after the 10 seconds have passed.
I am aware of the stdbuf command, and if running the script with
stdbuf -oL ./myscript.sh
then stdout is indeed continuously printed both to the file and the terminal.
However, I'd like this to be handled from within the script as well. Is there any way to combine these two solutions? I'd rather not resort to a wrapper script that simply calls the original script enclosed with "stdbuf -oL".
You can use a workaround and make the script execute itself with stdbuf, if a special argument is present:
#!/bin/bash
if [[ "$1" != __BUFFERED__ ]]; then
prog="$0"
stdbuf -oL "$prog" __BUFFERED__ "$#"
else
shift #discard __BUFFERED__
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 1
echo "world"
fi
This will mostly work:
if you run the script with ./test, it shows unbuffered output: [] hello\n[] world.
if you run the script with ./test 123 456, it shows [123] hello\n[123] world, as you want.
it won't work, however, if you run it with bash test: $0 is set to test, which is not your script. Fixing this is not in the scope of this question, though.
The delay in your first solution is caused by sed, not by tee. Try this instead:
#!/bin/bash
exec 6>&1 2>&1>&>(tee -a myscript.log)
To "undo" the tee effect:
exec 1>&6 2>&6 6>&-
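If you still want the [${1}] prefix from the original script, a sketch assuming GNU sed (whose -u switch disables output buffering) keeps the sed stage without the delay:
#!/bin/bash
# sed -u flushes each line immediately instead of block-buffering, so lines
# reach tee (and the terminal) as soon as they are produced
exec > >(sed -u "s/^/[${1}] /" | tee -a myscript.log) 2>&1
echo "hello"
sleep 10
echo "world"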

Best way to use Unix domain socket in bash script

I'm working on a simple bash script daemon that uses Unix domain sockets. I have a loop like this:
#!/bin/bash
while true
do
    rm /var/run/mysock.sock
    command=`nc -Ul /var/run/mysock.sock`
    echo $command > /tmp/command
done
I'm echoing the command out to /tmp/command just for debugging purposes.
Is this the best way to do this?
Looks like I'm late to the party. Anyway, here is the suggestion I employ successfully for one-shot messages with a response:
INPUT=$(mktemp -u)
mkfifo -m 600 "$INPUT"
OUTPUT=$(mktemp -u)
mkfifo -m 600 "$OUTPUT"
(cat "$INPUT" | nc -U "$SKT_PATH" > "$OUTPUT") &
NCPID=$!
exec 4>"$INPUT"
exec 5<"$OUTPUT"
echo "$POST_LINE" >&4
read -u 5 -r RESPONSE;
echo "Response: '$RESPONSE'"
Here I use two FIFOs to talk to nc(1) and fetch its response.
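A hedged follow-up for the snippet above: once the exchange is done you would typically close the helper descriptors, reap the background nc, and remove the temporary FIFOs (names as defined above):
exec 4>&- 5<&-            # close the write and read ends held by the shell
wait "$NCPID"             # let the background nc pipeline finish
rm -f "$INPUT" "$OUTPUT"  # the FIFOs are no longer needed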
You can also use a single FIFO bidirectionally:
mkfifo communicate_pipe
exec 3<> communicate_pipe
cat communicate_pipe - | python socket.py 127.0.0.1:8002 | while read line; do
    cmd="./something.sh '${line}' > communicate_pipe";
    eval $cmd;
done

How can I detect if my shell script is running through a pipe?

How do I detect from within a shell script if its standard output is being sent to a terminal or if it's piped to another process?
A case in point: I'd like to add escape codes to colorize output, but only when running interactively, not when piped, similar to what ls --color does.
In a pure POSIX shell,
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
returns "terminal", because the output is sent to your terminal, whereas
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
returns "not a terminal", because the output of the parenthetic element is piped to cat.
The -t flag is described in man pages as
-t fd True if file descriptor fd is open and refers to a terminal.
... where fd can be one of the usual file descriptor assignments:
0: standard input
1: standard output
2: standard error
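As a quick sketch of the questioner's use case (tput is used here merely as one convenient way to produce the color escapes): emit color only when stdout is a terminal, so piped output stays clean.
#!/bin/bash
# use color only when stdout is a terminal, as ls --color=auto does
if [ -t 1 ]; then
    red=$(tput setaf 1)
    reset=$(tput sgr0)
else
    red='' reset=''
fi
printf '%sERROR:%s something went wrong\n' "$red" "$reset"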
There is no foolproof way to determine if STDIN, STDOUT, or STDERR are being piped to/from your script, primarily because of programs like ssh.
Things that "normally" work
For example, the following bash solution works correctly in an interactive shell:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
But they don't always work
However, when executing this command as a non-TTY ssh command, the STD streams always look like they are being piped. To demonstrate this, I'm using STDIN because it's easier:
# CORRECT: Forced-tty mode correctly reports '1', which represents
# no pipe.
ssh -t localhost '[[ -p /dev/stdin ]]; echo ${?}'
# CORRECT: Issuing a piped command in forced-tty mode correctly
# reports '0', which represents a pipe.
ssh -t localhost 'echo hi | [[ -p /dev/stdin ]]; echo ${?}'
# INCORRECT: Non-tty mode reports '0', which represents a pipe,
# even though one isn't specified here.
ssh -T localhost '[[ -p /dev/stdin ]]; echo ${?}'
Why it matters
This is a pretty big deal, because it implies that there is no way for a bash script to tell whether a non-tty ssh command is being piped or not. Note that this unfortunate behavior was introduced when recent versions of ssh started using pipes for non-TTY STDIO. Prior versions used sockets, which COULD be differentiated from within bash by using [[ -S ]].
When it matters
This limitation normally causes problems when you want to write a bash script that has behavior similar to a compiled utility, such as cat. For example, cat allows the following flexible behavior in handling various input sources simultaneously, and is smart enough to determine whether it is receiving piped input regardless of whether non-TTY or forced-TTY ssh is being used:
ssh -t localhost 'echo piped | cat - <( echo substituted )'
ssh -T localhost 'echo piped | cat - <( echo substituted )'
You can only do something like that if you can reliably determine if pipes are involved or not. Otherwise, executing a command that reads STDIN when no input is available from either pipes or redirection will result in the script hanging and waiting for STDIN input.
Other things that don't work
In trying to solve this problem, I've looked at several techniques that fail to solve the problem, including ones that involve:
examining SSH environment variables
using stat on /dev/stdin file descriptors
examining interactive mode via [[ "${-}" =~ 'i' ]]
examining tty status via tty and tty -s
examining ssh status via [[ "$(ps -o comm= -p $PPID)" =~ 'sshd' ]]
Note that if you are using an OS that supports the /proc virtual filesystem, you might have luck following the symbolic links for STDIO to determine whether a pipe is being used or not. However, /proc is not a cross-platform, POSIX-compatible solution.
I'm extremely interested in solving this problem, so please let me know if you think of any other technique that might work, preferably POSIX-based solutions that work on both Linux and BSD.
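For what it's worth, a hedged, Linux-only sketch of the /proc idea mentioned above (not POSIX; it relies on the /proc/PID/fd symlinks pointing at targets such as pipe:[inode]):
# readlink on the script's own stdin descriptor; on Linux the link target is
# "pipe:[...]" for pipes, "socket:[...]" for sockets, or a device/file path otherwise
case "$(readlink /proc/$$/fd/0)" in
    pipe:*)   echo "stdin is a pipe" ;;
    socket:*) echo "stdin is a socket" ;;
    *)        echo "stdin is a terminal, a file, or something else" ;;
esac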
The test command (a builtin in Bash) has an option to check whether a file descriptor is a tty.
if [ -t 1 ]; then
    echo "Standard output is a tty"
fi
See "man test" or "man bash" and search for "-t".
You don't mention which shell you are using, but in Bash, you can do this:
#!/bin/bash
if [[ -t 1 ]]; then
    echo "stdout is a terminal"
else
    echo "stdout is not a terminal"
fi
On Solaris, the suggestion from Dejay Clayton mostly works; the -p test does not respond as desired.
The file bash_redir_test.sh looks like this:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
On Linux, it works great:
:$ ./bash_redir_test.sh
STDOUT is attached to TTY
:$ ./bash_redir_test.sh | xargs echo
STDOUT is attached to a pipe
:$ rm bash_redir_test.log
:$ ./bash_redir_test.sh >> bash_redir_test.log
:$ tail bash_redir_test.log
STDOUT is attached to a redirection
On Solaris:
:# ./bash_redir_test.sh
STDOUT is attached to TTY
:# ./bash_redir_test.sh | xargs echo
STDOUT is attached to a redirection
:# rm bash_redir_test.log
bash_redir_test.log: No such file or directory
:# ./bash_redir_test.sh >> bash_redir_test.log
:# tail bash_redir_test.log
STDOUT is attached to a redirection
:#
The following code (tested only in Linux Bash 4.4) should not be considered portable nor recommended, but for the sake of completeness here it is:
ls /proc/$$/fdinfo/* >/dev/null 2>&1 || grep -q 'flags: 00$' /proc/$$/fdinfo/0 && echo "pipe detected"
I don't know why, but it seems that file descriptor "3" is somehow created when a Bash function has standard input piped.
