Use read builtin command to read from parent stdin while in a subshell - bash

I have a script that launches a subshell/background command to read input and then does more work:
#!/bin/bash
(
while true; do
read -u 0 -r -e -p "test_rl> " line || break
echo "line: ${line}"
done
) &
sleep 3600 # more work
With the above I don't even get a prompt. If I exec 3>&0 prior to launching the subshell and then read from descriptor 3 (-u 3) then I at least get the prompt, but the read command still doesn't get any input that I type.
How do I get the read builtin to read correctly from the terminal (parent's stdin file descriptor)?

How do I get the read builtin to read correctly from the terminal (parent's stdin file descriptor)?
You might want to try this (using the parent's file descriptors):
#!/bin/bash
(
while true; do
read -u 0 -r -e -p "test_rl> " line || break
echo "line: ${line}"
done
)<&0 >&1 &
sleep 3600 # more work
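This works because, when job control is not active (as in a script), bash redirects a background command's standard input from /dev/null unless the command carries an explicit stdin redirection; the <&0 keeps the parent's stdin attached. An alternative sketch (my addition, not from the original answer) reads from the controlling terminal via /dev/tty instead:
#!/bin/bash
(
while true; do
    # /dev/tty is the controlling terminal, regardless of where stdin currently points
    read -r -e -p "test_rl> " line < /dev/tty || break
    echo "line: ${line}"
done
) &
sleep 3600 # more work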

Related

How to monitor the stdout of a command with a timer?

I'd like to know when an application hasn't printed a line to stdout for N seconds.
Here is a reproducible example:
#!/bin/bash
dmesg -w | {
while IFS= read -t 3 -r line
do
echo "$line"
done
echo "NO NEW LINE"
}
echo "END"
I can see NO NEW LINE, but the pipe doesn't stop and bash doesn't continue. END is never displayed.
How to exit from the braces' code?
Source: https://unix.stackexchange.com/questions/117501/in-bash-script-how-to-capture-stdout-line-by-line
How to exit from the braces' code?
Not all commands exit when they can't write to their output or when they receive SIGPIPE, and they will not exit until they actually notice the failed write. Instead, run the command in the background. If the intention is not to wait on the process, in bash you can just use process substitution:
{
while IFS= read -t 3 -r line; do
printf "%s\n" "$line"
done
echo "end"
} < <(dmesg -w)
You could also use a coprocess. Or just run the command in the background with a pipe and kill it when you're done with it.
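For that last variant, here is a rough sketch (my addition, not from the original answer; the FIFO path is arbitrary):
#!/bin/bash
fifo=$(mktemp -u)           # pick an unused name for the FIFO
mkfifo "$fifo"
dmesg -w > "$fifo" &        # run the producer in the background
producer=$!

while IFS= read -t 3 -r line; do
    printf '%s\n' "$line"
done < "$fifo"
echo "NO NEW LINE"

kill "$producer"            # stop the producer once we are done with it
rm -f "$fifo"
echo "END"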

Shell script can read file line by line but not perform actions for each line

I'm trying to run this command across multiple machines
sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@x.x.x.x "mkdir test"
The IPs are stored in the following .txt file
$ cat ips.txt
10.0.2.15
10.0.2.5
I created a bash script that reads this file line by line. If I run it with an echo:
#!/bin/bash
input="ips.txt"
while IFS= read -r line
do
echo "$line"
#sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@$line "mkdir test"
done < "$input"
It prints every line:
$ ./variables.sh
10.0.2.15
10.0.2.5
This makes me understand that the script is working as intended. However, when I replace the echo line with the command I want to run for each line:
#!/bin/bash
input="ips.txt"
while IFS= read -r line
do
#echo "$line"
sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@$line "mkdir test"
done < "$input"
It only performs the action for the first IP in the file, then stops. Why?
Managed to solve this by using a for instead of a while. Script ended up looking like this:
for file in $(cat ips.txt)
do
sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@$file "mkdir test"
done
While your for loop is a workaround that works, it is not the explanation.
You can find the explanation here: ssh breaks out of while-loop in bash
In short:
The while loop keeps reading from the file descriptor given in the loop header ($input in your case).
ssh (or sshpass) also reads data from stdin, which here has been redirected from $input. That is the point that hides the problem: we didn't expect ssh to read the data.
You can see the same strange behavior with commands like ffmpeg or mplayer inside a while loop. They read from the keyboard (stdin) while running, so they consume everything left on the file descriptor.
Another good and funny example:
#!/bin/bash
{
echo first
for ((i=0; i < 16384; i++)); do echo "testing"; done
echo "second"
} > test_file
while IFS= read -r line
do
echo "Read $line"
cat | uptime > /dev/null
done < test_file
In the first part we write the first line: first, then 16384 lines of testing, then the last line: second.
Those 16384 "testing" lines add up to a 128 KB buffer.
In the second part, the command "cat | uptime" consumes exactly that 128 KB buffer, so our script prints only:
Read first
Read second
As a solution you can, as you did, use a "for" loop.
Or use "ssh -n".
Or play with file descriptors; you can find an example in the link I gave, and in the sketches below.
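Sketches of those two alternatives (my additions based on the answer's hints; the ssh command line is the one from the question):
#!/bin/bash
input="ips.txt"

# Option 1: ssh -n redirects ssh's own stdin from /dev/null,
# so it can no longer swallow the rest of ips.txt.
while IFS= read -r line; do
    sshpass -p 'nico' ssh -n -o 'StrictHostKeyChecking=no' nico@"$line" "mkdir test"
done < "$input"

# Option 2: feed the loop through file descriptor 3,
# leaving stdin untouched for ssh.
while IFS= read -r -u 3 line; do
    sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@"$line" "mkdir test"
done 3< "$input"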

Read from .txt file containing executable commands to be executed w/ output of commands executed sent to another file

When I run my script, the .txt file is read and the executable commands are assigned to $eggs. To execute the commands and redirect their output to a file I use echo $eggs>>eggsfile.txt, but when I cat the file I just see all the commands, not the output of executing them.
echo "Hi, $USER"
cd ~/mydata
echo -e "Please enter the name of commands file:/s"
read fname
if [ -z "$fname" ]
then
exit
fi
terminal=`tty`
exec < $fname #exec destroys current shell, opens up new shell process where FD0 (STDIN) input comes from file fname
count=1
while read line
do
echo $count.$line #count line numbers
count=`expr $count + 1`; eggs="${line#[[:digit:]]*}";
touch ~/mydata/eggsfile.txt; echo $eggs>>eggsfile.txt; echo "Reading eggsfile contents: $(cat eggsfile.txt)"
done
exec < $terminal
If you just want to execute the commands and log the command name before each one, you can use 'sh -x'. You will get '+ command' before each command's output.
sh -x commands
+ pwd
/home/user
+ date
Sat Apr 4 21:15:03 IDT 2020
If you want to build your own (custom formatting, etc.), you will have to force execution of each command. Something like:
cd ~/mydata
count=0
while read -r line; do
count=$((count+1))
echo "$count.$line"
eggs="${line#[[:digit:]]*}"
echo "$eggs" >> eggsfile.txt
# Execute the line.
($line) >> eggsfile.txt
done < "$fname"
Note that this approach uses local redirection for the while loop, avoiding having to revert the input back to the terminal.
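One caveat (my note, not part of the original answer): executing ($line) relies on plain word splitting, so command lines containing pipes, quotes or variable expansions will not run as written. A hedged variant is to hand each line to a shell:
# Run the stripped command line through a shell so pipes, quotes, etc. are honored,
# and capture stderr in the log file too.
bash -c "$eggs" >> eggsfile.txt 2>&1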

Safe shell redirection when command not found

Let's say we have a text file named text (it doesn't matter what it contains) in the current directory. When I run this command (in Ubuntu 14.04, bash version 4.3.11):
nocommand > text # make sure nocommand doesn't exist on your system
It reports a 'command not found' error and it erases the text file! I wonder whether I can avoid clobbering the file when the command doesn't exist.
I tried set -o noclobber, but the same problem happens if I run:
nocommand >| text # make sure nocommand doesn't exist on your system
It seems that bash sets up the redirection before looking for the command to run. Can anyone give me some advice on how to avoid this?
Actually, the shell first looks at the redirection and creates the file. It evaluates the command after that.
Thus what happens exactly is: because it's a > redirection, the shell first truncates the file to empty, then evaluates the command, which does not exist; that produces an error message on stderr and nothing on stdout. The (empty) stdout is then what ends up in the file, so the file stays empty.
I agree with Nitesh that you simply need to check if the command exists first, but according to this thread, you should avoid using which. I think a good starting point would be to check at the beginning of your script that you can run all the required functions (see the thread, 3 solutions), and abort the script otherwise.
Write to a temporary file first, and only move it into place over the desired file if the command succeeds.
nocommand > tmp.txt && mv tmp.txt text
This avoids errors not only when nocommand doesn't exist, but also when an existing command exits before it can finish writing its output, so you don't overwrite text with incomplete data.
With a little more work, you can clean up the temp file in the event of an error.
{ nocommand > tmp.txt || { rm tmp.txt; false; } } && mv tmp.txt text
The inner command group ensures that the exit status of the outer command group is non-zero so that even if the rm succeeds, the mv command is not triggered.
A simpler command that carries the slight risk of removing the temp file when nocommand succeeds but the mv fails is
nocommand > tmp.txt && mv tmp.txt text || rm tmp.txt
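A variation of the same idea (my sketch, not from the original answer) uses mktemp so the temporary name cannot collide with an existing file, and keeps it in the current directory so the final mv stays on the same filesystem:
tmp=$(mktemp text.XXXXXX) || exit 1    # unique temp file next to the target
if nocommand > "$tmp"; then
    mv "$tmp" text                     # replace text only on success
else
    rm -f "$tmp"                       # clean up on failure
fi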
This would write to the file only if the pipe delivers at least a single character:
nocommand | (
IFS= read -d '' -n 1 || exit
exec >myfile
[[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
exec cat
)
Or using a function:
function protected_write {
IFS= read -d '' -n 1 || exit
exec >"$1"
[[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
exec cat
}
nocommand | protected_write myfile
Note that if the lastpipe option is enabled, you'll have to run it in a subshell:
nocommand | ( protected_write myfile )
Optionally, you can also make the function run in a subshell by default:
function protected_write {
(
IFS= read -d '' -n 1 || exit
exec >"$1"
[[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
exec cat
)
}
( ) creates a subshell. A subshell is a fork and runs in a separate process space. In x | y, y also runs in a subshell by default, unless the lastpipe option (try shopt lastpipe) is enabled.
IFS= read -d '' -n 1 waits for a single character (see help read) and returns a zero exit code when it reads one, which skips the exit.
exec >"$1" redirects stdout to the file. This makes everything that prints to stdout print to the file instead.
Anything read other than \x00 is stored in REPLY; that is why we printf '\x00' when REPLY has a null (empty) value.
exec cat replaces the subshell's process with cat, which sends everything it receives to the file and finishes the remaining work. See help exec.
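To illustrate the behavior described above (my example, assuming the function from this answer is already defined and lastpipe is off):
echo "keep me" > myfile
nocommand | protected_write myfile            # nothing arrives on the pipe, so myfile still contains "keep me"
printf 'new data\n' | protected_write myfile  # at least one character arrives, so myfile now holds "new data"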
If you do:
set -o noclobber
then
invalidcmd > myfile
if myfile exists in the current directory, then you will get:
-bash: myfile: cannot overwrite existing file
Check using "which" command
#!/usr/bin/env bash
command_name="npm2" # Add your command here
command=`which $command_name`
if [ -z "$command" ]; then #if command exists go ahead with your logic
echo "Command not found"
else # Else fallback
echo "$command"
fi
Hope this helps
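As noted earlier in this thread, which is often discouraged; here is a variant sketch (my addition) that uses the builtin command -v instead:
#!/usr/bin/env bash
command_name="npm2" # Add your command here
if command -v "$command_name" > /dev/null 2>&1; then
    echo "$command_name found at: $(command -v "$command_name")"
else
    echo "Command not found"
fi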

create read/write environment using named pipes

I am using RedHat EL 4. I am using Bash 3.00.15.
I am writing SystemVerilog and I want to emulate stdin and stdout. I can only use files, since normal stdin and stdout are not supported in that environment. I would like to use named pipes to emulate stdin and stdout.
I understand how to create to_sv and from_sv FIFOs using mkfifo, and how to open and use them in SystemVerilog.
By using "cat > to_sv" I can send strings to the SystemVerilog simulation, but whatever I type is also echoed in the shell.
I would like, if possible, a single shell that acts almost like a UART terminal: whatever I type goes directly out to "to_sv", and whatever is written to "from_sv" gets printed out.
If I am going about this completely wrong, then by all means suggest the correct way! Thank you so much,
Nachum Kanovsky
Edit: You can write to one named pipe and read from another in the same terminal. You can also prevent keystrokes from being echoed to the terminal using stty -echo.
mkfifo /tmp/from
mkfifo /tmp/to
stty -echo
cat /tmp/from & cat > /tmp/to
With this command, everything you type goes to /tmp/to without being echoed, and everything written to /tmp/from will be displayed.
Update: I have found a way to send every input character to /tmp/to one at a time. Instead of cat > /tmp/to, use this command:
while IFS= read -n1 c;
do
if [ -z "$c" ]; then
printf "\n" >> /tmp/to;
fi;
printf "%s" "$c" >> /tmp/to;
done
You probably want to use exec as in:
exec > to_sv
exec < from_sv
See sections 19.1. and 19.2. in the Advanced Bash-Scripting Guide - I/O Redirection
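A minimal sketch of how those two exec lines might be used together (my addition, assuming the to_sv and from_sv FIFOs already exist):
#!/bin/bash
exec > to_sv                      # plain echo/printf now goes to the simulator
exec < from_sv                    # plain read now reads from the simulator
echo "hello simulator"            # written to to_sv
IFS= read -r reply                # one line taken from from_sv
printf 'got: %s\n' "$reply" >&2   # report on stderr, since stdout is the pipe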
Instead of cat /tmp/from & you may use tail -f /tmp/from & (at least here on Mac OS X 10.6.7 this prevented a deadlock if I echo more than once to /tmp/from).
Based on Lynch's code:
# terminal window 1
(
rm -f /tmp/from /tmp/to
mkfifo /tmp/from
mkfifo /tmp/to
stty -echo
#cat -u /tmp/from &
tail -f /tmp/from &
bgpid=$!
trap "kill -TERM ${bgpid}; stty echo; exit" 1 2 3 13 15
while IFS= read -n1 c;
do
if [ -z "$c" ]; then
printf "\n" >> /tmp/to
fi;
printf "%s" "$c" >> /tmp/to
done
)
# terminal window 2
(
tail -f /tmp/to &
bgpid=$!
trap "kill -TERM ${bgpid}; stty echo; exit" 1 2 3 13 15
wait
)
# terminal window 3
echo "hello from /tmp/from" > /tmp/from
