I have a bash script that should run as follows: Read a line from a (large) file, process the line, display the results, wait for a run signal from the user, start over. Right now it looks like
while read newline
do
    # process newline
    # display newline
    read go_ahead
done < my_input_file.txt
The processing and displaying look sort of OK, but it's hard to tell because -- here's my problem -- the script won't stop to read the go_ahead variable. I am guessing this is because it thinks it's supposed to read newline and go_ahead from my_input_file.txt? In any event, can someone tell me how to fix this?
The easiest way to do this might be to use a separate file descriptor for your input data. Something like this:
#!/usr/bin/env bash
exec 3< /path/to/inputfile.txt           # open the data file on fd 3
while read -r -u 3 newline; do
    processed=$(tr '[:lower:]' '[:upper:]' <<<"$newline")   # example processing: uppercase
    printf '%s\n' "$processed"
    read -r go_ahead                     # reads from stdin (the terminal), not fd 3
done
#!/bin/bash
exec 3>&1                                # fd 3 now points at the terminal (assuming stdout is a tty)
while read -r line; do
    echo "$line"
    read -u 3 -p 'continue(y/n)? ' yn    # read the user's answer from fd 3
    [[ $yn == n ]] && break
done < "$1"
exit 0
The -u arg to read lets you specify another descriptor.
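Alternatively, you can leave the data on stdin and read the user's response directly from the terminal device; a minimal sketch, assuming the script runs from an interactive terminal:
while read -r newline; do
    # process and display "$newline" here
    read -r -p 'Press Enter to continue: ' go_ahead < /dev/tty
done < my_input_file.txt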
Related
I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems to work only on the very first line of the target file and stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
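For example, a sketch of both variants (hosts.txt, uptime, and some_command are placeholders):
while read -r host; do
    ssh -n "$host" uptime        # -n: ssh takes its stdin from /dev/null
    some_command < /dev/null     # generic fallback for commands without such a flag
done < hosts.txt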
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: adding the -u argument to read, and putting a file descriptor number on the < $FILENAME redirection.
In Bash, the default file descriptors (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in Bash.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
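A sketch of the nested case, with hypothetical outer.txt and inner.txt input files:
while read -u 9 outer; do
    while read -u 8 inner; do
        echo "$outer -> $inner"
    done 8< inner.txt            # inner loop reads from fd 8
done 9< outer.txt                # outer loop reads from fd 9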
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
    ((count++))
    echo "$count $line"
    sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
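To see the backslash difference for yourself:
printf '%s\n' 'a\tb' | { read line; echo "$line"; }     # prints: atb  (backslash consumed)
printf '%s\n' 'a\tb' | { read -r line; echo "$line"; }  # prints: a\tb (input preserved verbatim)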
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when a heredoc supplies the commands and the output is piped to another program.
So using /dev/null as ssh's stdin is preferred:
#!/bin/bash
while read -r ONELINE ; do
    ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
    if [ ${PIPESTATUS[0]} -ne 0 ] ; then
        echo "aborting loop"
        exit ${PIPESTATUS[0]}
    fi
done < input_list.txt
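For reference, PIPESTATUS is a Bash array holding the exit status of each command in the most recently executed foreground pipeline:
false | true
echo "${PIPESTATUS[@]}"          # prints: 1 0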
This was happening to me because I had set -e, and a grep in the loop was returning no output (grep exits with a non-zero status when it finds no match, and set -e then aborts the script).
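If you hit the same problem, the usual workaround is to neutralize grep's non-zero status; a sketch with a hypothetical pattern and input file:
set -e
while read -r line; do
    match=$(grep -o 'pattern' <<<"$line" || true)   # '|| true' keeps set -e from aborting on no match
    echo "matched: $match"
done < input.txt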
I have a bash script that uses process substitution (< <(...)) to continuously display the output of a long-running process.
while read -r LINE
do
    something
done < <( unbuffer my_long_running_script)
I would like to ask for user input in between.
while read -r LINE
do
    something
    ASK_THE_USER
done < <( unbuffer my_long_running_script)
I tried with a basic select or read, but the output of my_long_running_script keeps coming in and is taken as the reply (and is therefore invalid).
How can I solve this?
A simple example:
#!/usr/bin/env bash
while read -u 3 -r line; do
    echo "$line" # something
    read -p "Continue?" -r response
    [[ $response == 'y' ]] || break
done 3< <(unbuffer my_long_running_script)
Sending the process substitution (<(...)) as input to custom file descriptor 3 (3<) leaves stdin free to read from the terminal.
-u 3 makes read read from that descriptor.
Consider this very simple bash script:
#!/bin/bash
cat > /tmp/file
It redirects whatever you pipe into it to a file. e.g.
echo "hello" | script.sh
and "hello" will be in the file /tmp/file. This works... but it seems like there should be a native bash way of doing this without using "cat". But I can't figure it out.
NOTE:
It must be in a script. I want the script to operate on the file contents afterwards.
It must be in a file; the steps afterward in my case involve a tool that only reads from a file.
I already have a pretty good way of doing this - it's just that it seems like a hack. Is there a native way? Like "/tmp/file < 0" or "0> /tmp/file". I thought bash would have a native syntax to do this...
You could simply do
cp /dev/stdin myfile.txt
Terminate your input with Ctrl+D and, voilà! You have your file, created with the text from stdin.
echo "$(</dev/stdin)" > /tmp/file
Terminate your input with Enter followed by Ctrl+D.
I don't think there is a builtin that reads from stdin until EOF, but you can do this:
#!/bin/bash
exec > /tmp/file
while IFS= read -r line; do
    printf '%s\n' "$line"
done
Another way of doing it using pure Bash:
#!/bin/bash
IFS= read -t 0.01 -r -d '' indata
[[ -n $indata ]] && printf "%s" "$indata" >/tmp/file
IFS= and -d '' cause all of the stdin data to be read into the variable indata.
The reason for using -t 0.01: when this script is called with no input pipe, read times out after a negligible 0.01-second delay. If any data is available on input, it is read into the indata variable and then redirected to /tmp/file.
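Assuming the snippet above is saved as a hypothetical capture.sh, usage would look like:
echo "hello" | ./capture.sh && cat /tmp/file   # /tmp/file now contains "hello"
./capture.sh                                   # no pipe: read times out and nothing is written
Note that a producer slower than the 0.01-second timeout would be missed.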
Another option: dd of=/tmp/myfile.txt
Note: This is not a built-in, however, it might help other people looking for a simple solution.
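Usage mirrors the cat version, since dd reads stdin when no if= is given; status=none (GNU dd) suppresses the transfer statistics:
echo "hello" | dd of=/tmp/myfile.txt status=none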
Why don't you just
GENERATE INPUT | (
    # do whatever you like to the input here
)
But sometimes, especially when you want the input to complete first and then operate on the modified output, you should still use a temporary file:
TMPFILE="/tmp/fileA-$$"
GENERATE INPUT | (
    # modify input
) > "$TMPFILE"
(
    # do something with the input from TMPFILE
) < "$TMPFILE"
rm "$TMPFILE"
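A slightly safer variant of the same pattern uses mktemp and a trap, so the temporary file gets an unpredictable name and is removed even on early exit (a sketch, keeping the GENERATE INPUT placeholder):
TMPFILE=$(mktemp) || exit 1
trap 'rm -f "$TMPFILE"' EXIT
GENERATE INPUT | (
    # modify input
) > "$TMPFILE"
(
    # do something with the input from TMPFILE
) < "$TMPFILE"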
If you don't want the program to end after reaching EOF, this might be helpful.
#!/bin/bash
exec < <(tail -F /tmp/a)
cat -
I am trying to run a file that contains a sequence of commands/scripts to run with arguments, like:
ls /etc/
cat /etc/hosts
script.sh some parameters
...
This seems to work fine, but in some cases the while loop will end prematurely. This seems to happen only when the script it is executing contains SSH/SCP at the end. The code to read the file:
while IFS= read -r line
do
    # Cut command and parameters
    IFS=', ' read -a parameters <<< "$line"
    cmd="${parameters[0]}"
    unset parameters[0]
    runScriptAndCheckError "$cmd" "${parameters[@]}"
done < "$SCRIPT_FILENAME"
When using set -x:
+ checkError 0 'ERROR: script.sh failed'
+ '[' 0 -ne 0 ']'
+ IFS=
+ read -r line
It looks like there is no more input, although there are still lines in the file. If I comment out runScriptAndCheckError "$cmd" "${parameters[@]}" then it does print a lot more lines.
I am not sure what is wrong with this code. It'd be really helpful if someone could please help.
If runScriptAndCheckError also reads from standard input, it will read lines from $SCRIPT_FILENAME. Have the read command in the while loop use a different file descriptor:
while IFS= read -r line <&3; do
...
done 3< "$SCRIPT_FILENAME"
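Applied to the loop from the question, a sketch:
while IFS= read -r line <&3; do
    IFS=', ' read -r -a parameters <<< "$line"
    cmd="${parameters[0]}"
    unset 'parameters[0]'
    runScriptAndCheckError "$cmd" "${parameters[@]}"
done 3< "$SCRIPT_FILENAME"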
Stealing Input
The key is, as @chepner states, that the statements inside:
while read line; do
    <command>
done
will interfere with the loop if they attempt to read from stdin.
If you don't want a command inside the loop to read from stdin, prevent it from doing so as follows:
while read line; do
    cmd < /dev/null
done
Now your read loop will not be missing any input.
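A self-contained demonstration of the effect and the fix:
printf '1\n2\n3\n' | while read -r line; do
    echo "got: $line"
    cat > /dev/null              # steals the remaining input; only "got: 1" prints
done

printf '1\n2\n3\n' | while read -r line; do
    echo "got: $line"
    cat < /dev/null > /dev/null  # stdin redirected away; all three lines print
done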