We have a ksh script which reads piped input in a 'while read line' loop. At the same time we're reading user confirmation input with 'read < /dev/tty', similar to the following sketch:
cat interestingdata | while read line ; do
    x=$(dostuff $line)
    if [[ $x -ne 0 ]] ; then
        read y < /dev/tty
        $(domorestuff $y)
    fi
    echo "done optional stuff"
done
All works fine for processing the lines of 'interestingdata', and for most of the reads from /dev/tty. However, on the first two iterations of the while loop, the first string + newline are ignored.
By this, I mean the user types something and presses enter, and the script doesn't progress to echo "done optional stuff". Instead, the user has to type something else and press enter again, and only then does the script proceed.
This happens only for the first two iterations of the while loop, and then everything works perfectly. Any ideas how I can fix this? I have no idea what else I can do here!
Running linux kernel 2.6.9-55.9.vm2.ELsmp with ksh93 if that helps.
It sounds like either "dostuff" or "domorestuff" is sometimes reading from stdin.
Try replacing "dostuff" with "dostuff < /dev/null" and "domorestuff" with "domorestuff < /dev/null" and see if the behavior changes.
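For example (a sketch keeping the structure of the loop from the question; dostuff and domorestuff are the question's own commands):

cat interestingdata | while read line ; do
    x=$(dostuff $line < /dev/null)
    if [[ $x -ne 0 ]] ; then
        read y < /dev/tty
        $(domorestuff $y < /dev/null)
    fi
    echo "done optional stuff"
done

If the behaviour changes, one of those commands is consuming the input you expected read to see.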
Related
I am creating a script (myscript.sh) in BASH that reads from STDIN, typically a stream of data that comes from cat or from a file, and outputs that stream of data (amazing!), like this:
$cat myfile.txt
hello world!
$cat myfile.txt | myscript.sh
hello world!
$myscript.sh myfile.txt
hello world!
But I also would like the following behaviour: if I call the script without arguments I'd like it to output a brief help:
$myscript.sh
I am the help: I just print what you say.
== THE PROBLEM ==
The problem is that I am capturing the stream of data like this:
if [[ $# -eq 0 ]]; then
    stream=$(cat <&0)
elif [[ -n "$stream" ]]; then
    echo "I am the help: I just print what you say."
else
    echo "Unknown error."
fi
And when I call the script with no arguments like this:
$myscript.sh
It SHOULD print the "help" part, but it just keeps waiting for a stream of data at line 2 of the code above...
Is there any way to tell bash that if nothing comes from STDIN it should just break and continue executing?
Thanks in advance.
There's always a standard input stream; if no arguments are given and input isn't redirected, standard input is the terminal.
If you want to treat that specially, use test -t to test if standard input is connected to a terminal.
if [[ $# -eq 0 && -t 0 ]]; then
    echo "I am the help: I just print what you say."
else
    stream=$(cat -- "$@")
fi
There's no need to test $#. Just pass your arguments to cat; if it gets filenames it will read from them, otherwise it will read from standard input.
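If the script's only job is to echo the data back (as in the question), the capture into $stream can be dropped entirely; a minimal sketch of that:

#!/usr/bin/env bash
# Print the help only when run interactively with no arguments.
if [[ $# -eq 0 && -t 0 ]]; then
    echo "I am the help: I just print what you say."
    exit 0
fi
# cat reads the named files, or standard input when no filenames are given.
cat -- "$@"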
I agree with Barmar's solution.
However, it might be better to entirely avoid a situation where your program's behaviour depends on whether the input file descriptor is a terminal (there are situations where a terminal is mimicked even though there is none; in such a case, your script would just produce the help string).
You could instead introduce a special - argument to explicitly request reading from stdin. This results in simpler option handling and uniform behaviour of your script, no matter what the environment is.
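A minimal sketch of that convention (the behaviour is as described above; the exact messages are just illustrative):

#!/usr/bin/env bash
# With no arguments at all, always print the help, terminal or not.
if [[ $# -eq 0 ]]; then
    echo "I am the help: I just print what you say."
    exit 0
fi
for arg in "$@"; do
    if [[ $arg == - ]]; then
        cat            # '-' explicitly requests standard input
    else
        cat -- "$arg"  # anything else is treated as a filename
    fi
done

Invoked as cat myfile.txt | myscript.sh - it reads the pipe; called with no arguments it always prints the help.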
The first answer is to help yourself: try running the script with bash -x myscript.sh. It will print a lot of information to help you.
In your specific case, the condition $# -eq 0 was flipped. As per the requirement, you want to print the help message when NO ARGUMENTS ARE PROVIDED:
if [[ $# -eq 0 ]] ; then
    echo "I am the help: I just print what you say."
    exit 0
fi

# Rest of your script: read data from a file, etc.
cat -- "$@"
Assuming this approach is taken, if you want to process standard input rather than a file, simply pass '-' as the parameter: cat foobar.txt | myscript.sh -
The goal here is to display the result directly on STDOUT, i.e. on the terminal, when I press Ctrl+D. I tried many things to find a solution.
So when I execute the program like ./myprog.sh, it waits for input, so we can write:
bob
cookie
And when we press Ctrl+D, I want this result:
bob
cookie
boy
dog
My code is:
while :
do
    read INPUT_STRING || break
    case $INPUT_STRING in
        bob)
            echo "boy"
            ;;
        alicia)
            echo "girl"
            ;;
        cookie)
            echo "dog"
            ;;
        bye)
            break
            echo " "
            ;;
        *)
            echo "unknown"
            ;;
    esac
done
How can I display the content after writing many different things in my terminal?
Let's start with the basics: what is Ctrl+D? This key combination represents end-of-file. It is not a signal that you can catch, but it is something related to files. So if you want actions to happen when you press Ctrl+D, you are actually saying:
I want to create a file over /dev/stdin, read it into memory and then perform various actions on it.
An example in awk would be:
awk '{a[NR]=$0}END{for(i=1;i<=NR;++i) print i,a[i]}' -
Here, it reads the full "file" into memory, and when you press Ctrl+D you tell awk that EOF is reached; it then executes the END block, which prints the line number and then the line. The <hyphen> at the end of the script is shorthand for /dev/stdin.
Now, if you want to do something like that in a shell, let's say bash to keep it simple, you can do something like this:
1. Read the full input into memory by storing it in an array or a temporary file. The reading is finished when Ctrl+D is pressed.
2. Process the input afterwards.
This looks like this:
#!/usr/bin/env bash
# declare an array, to store stuff in
declare -a myArray
# read the full file into the array
# This while loop terminates when pressing CTRL-D
i=1
while read -r line; do
    myArray[i]="${line}"
    ((i++))
done < /dev/stdin

# Process the array
for ((j=1;j<i;++j)); do
    # perform your actions here on myArray[j]
    echo "$j" "${myArray[j]}"
done
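For what it's worth, bash's built-in mapfile can do the reading step in one line; a sketch with the same overall behaviour as the loop above:

#!/usr/bin/env bash
# mapfile (bash 4+) reads stdin into an array until EOF (Ctrl+D at a terminal)
mapfile -t myArray
# Process the array afterwards
for line in "${myArray[@]}"; do
    # perform your actions here on "$line"
    echo "$line"
done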
EDIT: Corrected process/thread terminology
My shell script has a foreground process that reads user input and a background process that prints messages. I would like to print these messages on the line above the input prompt rather than interrupting the input. Here's a canned example:
sleep 5 && echo -e "\nINFO: Helpful Status Update!" &
echo -n "> "
read input
When I execute it and type "input" a bunch of times, I get something like this:
> input input input inp
INFO: Helpful Status Update!
ut input
But I would like to see something like this:
INFO: Helpful Status Update!
> input input input input input
The solution need not be portable (I'm using bash on linux), though I would like to avoid ncurses if possible.
EDIT: According to @Nick, previous lines are inaccessible for historical reasons. However, my situation only requires modifying the current line. Here's a proof of concept:
# Make named pipe
mkfifo pipe

# Spawn background process
while true; do
    sleep 2
    echo -en "\033[1K\rINFO: Helpful Status Update!\n> `cat pipe`"
done &

# Start foreground user input
echo -n "> "
pid=-1
collected=""
IFS=""
while true; do
    read -n 1 c
    collected="$collected$c"
    # Named pipes block writes, so must do background process
    echo -n "$collected" >> pipe &
    # Kill last loop's (potentially) still-blocking pipe write
    if kill -0 $pid &> /dev/null; then
        kill $pid &> /dev/null
    fi
    pid=$!
done
This produces mostly the correct behavior, but lacks CLI niceties like backspace and arrow navigation. These could be hacked in, but I'm still having trouble believing that a standard approach hasn't already been developed.
The original ANSI escape codes still work in a bash terminal on Linux (and macOS), so you can use \033[F, where \033 is the ESCape character. You can enter it in the terminal by typing Ctrl+V followed by the Escape key; you should see ^[ appear. Then type [F. If you test the following script:
echo "original line 1"
echo "^[[Fupdated line 1"
echo "line 2"
echo "line 3"
You should see output:
updated line 1
line 2
line 3
EDIT:
I forgot to add that using this in your script will cause the cursor to return to the beginning of the line, so further input will overwrite what you have typed already. You could use control-R on the keyboard to cause bash to re-type the current line and return the cursor to the end of the line.
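If you would rather not embed a literal ESC character in the script, the same sequence can be emitted with printf; a sketch of the example above (\033[K additionally clears the remainder of the old line so no stray characters are left behind):

echo "original line 1"
# \033[F moves the cursor to the start of the previous line,
# \033[K erases from the cursor to the end of that line
printf '\033[F\033[Kupdated line 1\n'
echo "line 2"
echo "line 3"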
I have been facing a very peculiar issue with shell scripts.
Here is the scenario
Script1 (spawns in background)--> Script2
Script2 has the following code
function check_log()
{
    logfile=$1
    tail -5f ${logfile} | while read line
    do
        echo $line
        if echo $line | grep "${triggerword}"; then
            echo "Logout completion detected"
            start_leaks_detection
            triggerwordfound=true
            echo "Leaks detection complete"
        fi
        if $triggerwordfound; then
            echo "Trigger word found and processing complete. Exiting"
            break
        fi
    done
    echo "Outside loop"
    exit 0
}
check_log "/tmp/somefile.log" "Logout detected"
Now the break in the while loop does not help here. I can see "Logout completion detected" as well as "Leaks detection complete" being echoed on stdout, but not the string "Outside loop".
I am assuming this has something to do with tail -f creating a subshell. What I want to do is exit that subshell and also exit Script2 to get control back to Script1.
Can someone please shed some light on how to do this?
Instead of piping into your while loop, use this format instead:
while read line
do
# put loop body here
done < <(tail -5f ${logfile})
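Applied to the function from the question, that would look roughly like this (a sketch; process substitution needs bash or ksh93, start_leaks_detection is the question's own function, and triggerword=$2 is added here because the question's call passes the phrase as the second argument). Because the loop now runs in the current shell, break and the variable assignment behave as expected and "Outside loop" is reached:

function check_log()
{
    logfile=$1
    triggerword=$2
    triggerwordfound=false
    while read line
    do
        echo "$line"
        if echo "$line" | grep "${triggerword}"; then
            echo "Logout completion detected"
            start_leaks_detection
            triggerwordfound=true
            echo "Leaks detection complete"
        fi
        if $triggerwordfound; then
            echo "Trigger word found and processing complete. Exiting"
            break
        fi
    done < <(tail -5f "${logfile}")
    echo "Outside loop"
    exit 0
}

check_log "/tmp/somefile.log" "Logout detected"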
Try this, although it's not quite the same (it doesn't skip the beginning of the log file at startup):
triggerwordfound=
while [ -z "$triggerwordfound" ]; do
    while read line; do
        echo $line
        if echo $line | grep "${triggerword}"; then
            echo "Logout completion detected"
            start_leaks_detection
            triggerwordfound=true
            echo "Leaks detection complete"
        fi
    done
done < "$logfile"
echo "Outside loop"
The double loop effectively does the same thing as tail -f.
Your function works in a sense, but you won't notice that it does so until another line is written to the file after the trigger word has been found. That's because tail -5 -f can usually write all of the last five lines of the file to the pipe in one write() call and continue to write new lines all in one call, so it won't be sent a SIGPIPE signal until it tries to write to the pipe after the while loop has exited.
So, if your file grows regularly then there shouldn't be a problem, but if it's more common for your file to stop growing just after the trigger word is written to it, then your watcher script will also hang until any new output is written to the file.
I.e. SIGPIPE is not sent immediately when a pipe is closed, even if there's un-read data buffered in it, but only when a subsequent write() on the pipe is attempted.
This can be demonstrated very simply. This command will not exit (provided the tail of the file is less than a pipe-sized buffer) until you either interrupt it manually, or you write one more byte to the file:
tail -f some_large_file | read one
However if you force tail to make multiple writes to the pipe and make sure the reader exits before the final write, then everything will work as expected:
tail -c 1000000 some_large_file | read one
Unfortunately it's not always easy to discover the size of a pipe buffer on a given system, nor is it always possible to only start reading the file when there's already more than a pipe buffer's worth of data in the file, and the trigger word is already in the file and at least a pipe buffer's size bytes from the end of the file.
Unfortunately tail -F (which is what you should probably use instead of -f) doesn't also try writing zero bytes every 5 seconds, or else that would maybe solve your problem in a more efficient manner.
Also, if you're going to stick with using tail, then -1 is probably sufficient, at least for detecting any future event.
BTW, here's a slightly improved implementation, still using tail, since I think that's probably your best option. (You could always add a periodic marker line to the log with cron or similar; most syslogd implementations have a built-in mark feature too. That would guarantee that your function returns within the period of the marker.)
check_log ()
{
    tail -1 -F "$1" | while read line; do
        case "$line" in
        *"${2:-SOMETHING_IMPOSSIBLE_THAT_CANNOT_MATCH}"*)
            echo "Found trigger word"
            break
            ;;
        esac
    done
}
Replace the echo statement with whatever processing you need to do when the trigger phrase is read.
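For the question's scenario that could look roughly like this (start_leaks_detection and the log path are taken from the question; the caveat above about tail only noticing the broken pipe on its next write still applies):

check_log ()
{
    tail -1 -F "$1" | while read line; do
        case "$line" in
        *"${2:-SOMETHING_IMPOSSIBLE_THAT_CANNOT_MATCH}"*)
            echo "Logout completion detected"
            start_leaks_detection
            echo "Leaks detection complete"
            break
            ;;
        esac
    done
}

check_log "/tmp/somefile.log" "Logout detected"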
So this is probably an easy question, but I am not much of a bash programmer and I haven't been able to figure this out.
We have a closed source program that calls a subprogram which runs until it exits, at which point the program will call the subprogram again. This repeats indefinitely.
Unfortunately the main program will sometimes spontaneously (and repeatedly) fail to call the subprogram after a random period of time. The eventual solution is to contact the original developers to get support, but in the meantime we need a quick hotfix for the issue.
I'm trying to write a bash script that will monitor the output of the program and when it sees a specific string, it will restart the machine (the program will run again automatically on boot). The bash script needs to pass all standard output through to the screen up until it sees the specific string. The program also needs to continue to handle user input.
I have tried the following with limited success:
./program1 | ./watcher.sh
watcher.sh is basically just the following:
while read line; do
    echo $line
    if [ $line == "some string" ]
    then
        # the reboot script works fine
        ./reboot.sh
    fi
done
This seems to work OK, but leading whitespace is stripped on the echo statement, and the echo output hangs in the middle until subprogram exits, at which point the rest of the output is printed to the screen. Is there a better way to accomplish what I need to do?
Thanks in advance.
I would do something along the lines of:
stdbuf -o0 ./program1 | grep --line-buffered "some string" | (read && reboot)
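That one-liner reboots on the first match but does not echo program1's output; if you also need the pass-through behaviour of the original watcher, a sketch along the same lines (stdbuf -oL line-buffers program1's output; "some string" and ./reboot.sh are from the question):

stdbuf -oL ./program1 | while IFS= read -r line; do
    printf '%s\n' "$line"   # pass the output through, preserving leading whitespace
    if [[ $line == *"some string"* ]]; then
        ./reboot.sh
    fi
done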
You need to quote your $line variable, i.e. "$line", for all references (except the read line bit).
Your program1 is probably the source of the 'paused' data: it needs to flush its output buffer. You probably don't have control of that, so:
a. Check whether your system has the unbuffer command available. If so, try unbuffer cmd1 | watcher. You may have to experiment with which command you wrap unbuffer around; maybe you will have to do cmd1 | unbuffer watcher.
b. OR you can try wrapping watcher as a process group (I think that is the right terminology), i.e.
./program1 | { ./watcher.sh ; printf "\n" ; }
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
Use read's $REPLY variable; also, I'd suggest using printf instead of echo:
while read; do
    printf "%s\n" "$REPLY"
    # '[[' is Bash, quotes are not necessary
    # use '[ "$REPLY" == "some string" ]' if in another shell
    if [[ $REPLY == "some string" ]]
    then
        # the reboot script works fine
        ./reboot.sh
    fi
done
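A note on the leading-whitespace problem from the question: when read is given no variable name, the whole line lands in $REPLY unmodified, so the whitespace survives; adding -r also stops backslashes from being interpreted. The same loop with that flag:

while read -r; do
    printf "%s\n" "$REPLY"
    if [[ $REPLY == "some string" ]]
    then
        # the reboot script works fine
        ./reboot.sh
    fi
done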