Reading input in bash in an infinite loop and reacting to it - bash

So I want to wait 1 second for some input (options that I will implement later), or I want the program to print something (which I will also implement later). I have run into a problem, though, when trying to read that one character for the function. Here is my code:
while true
do read $var -t 1
case $var in
("h")
help
;;
esac
done
If I try to echo after the case, the program does wait for one second; the problem is that it doesn't recognise my h input. How would I fix that?

I've modified your sample slightly so that it works. There was an error in the read statement: use read var instead of read $var. This corrected sample will now also recognise the h input.
As for your question why it doesn't wait the full second (which was, by the way, hard to determine, so I increased the timeout a bit): the timeout is interrupted as soon as you enter something. It is, as the parameter name says, a timeout for the user input, so if the user inputs something, the remaining wait is cancelled.
#!/bin/bash
while true
do
echo 'wait for input ...'
read -t 10 var
echo 'got input ...'
case $var in
h)
echo 'help'
;;
esac
done
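To keep the original one-second behaviour, you can also branch on read's exit status: in bash, read returns a status greater than 128 when the timeout expires, which lets you tell a timeout apart from real input or EOF. A minimal sketch along those lines (the help body and the q-to-quit key are illustrative, not from the question):

```shell
#!/usr/bin/env bash
# Poll for a single keystroke once per second.
help() { echo 'help text'; }   # stand-in for the asker's help function

handle_key() {
  case $1 in
    h) help ;;
    q) return 1 ;;   # tell the caller to stop the loop
  esac
}

while true; do
  if read -r -t 1 -n 1 var; then
    handle_key "$var" || break       # got a key within the second
  elif (( $? > 128 )); then
    echo 'tick'                      # timed out: periodic output goes here
  else
    break                            # EOF on stdin
  fi
done
```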

Related

How can I display a result directly on STDOUT with a shell program?

The goal here is to display the result directly on STDOUT, i.e. on the terminal, when I press Ctrl+D. I have tried many things to find a solution.
When I execute the program like ./myprog.sh, it waits for input, so we can write:
bob
cookie
And when we press Ctrl+D, I want this result:
bob
cookie
boy
dog
My code is:
while :
do
read INPUT_STRING || break
case $INPUT_STRING in
bob)
echo "boy"
;;
alicia)
echo "girl"
;;
cookie)
echo "dog"
;;
bye)
break
echo " "
;;
*)
echo "unknown"
;;
esac
done
How can I display the content after writing several different things in my terminal?
Let's start with the basics: what is Ctrl+D? This key combination signals end-of-file. It is not a signal that you can catch; it is something related to files and streams. So if you want actions to happen when you press Ctrl+D, you are actually saying:
I want to treat /dev/stdin as a file, read it into memory, and then perform various actions on it.
An example in awk would be:
awk '{a[NR]=$0}END{for(i=1;i<=NR;++i) print i,a[i]}' -
Here, awk reads the full "file" into memory; when you press Ctrl+D, you tell awk that EOF is reached, and it executes the END block, which prints each line number and then the line. The hyphen (-) at the end of the command is shorthand for /dev/stdin.
Now, if you want to do something like that in a shell, let's say bash to keep it simple, you can do the following:
1. Read the full input into memory, storing it in an array or a temporary file; the reading is finished when you press Ctrl+D.
2. Process the input afterwards.
This looks like this:
#!/usr/bin/env bash
# declare an array, to store stuff in
declare -a myArray
# read the full file into the array
# This while loop terminates when pressing CTRL-D
i=1
while read -r line; do
myArray[i]="${line}"
((i++))
done < /dev/stdin
# Process the array
for ((j=1;j<i;++j)); do
# perform your actions here on myArray[j]
echo "$j" "${myArray[j]}"
done
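On bash 4+ you can also let the mapfile builtin (also named readarray) do the slurping instead of the manual while/read loop. This shorter equivalent is a sketch, not part of the original answer:

```shell
#!/usr/bin/env bash
# Slurp stdin into an array until EOF (Ctrl+D), then post-process.
mapfile -t myArray            # one element per line, trailing newlines stripped
for j in "${!myArray[@]}"; do
  echo "$((j + 1)) ${myArray[j]}"   # mapfile is 0-based; print 1-based numbers
done
```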

Waiting for a file to be created in Bash

I need to create a bash script to wait for a file to be created. The script will use sleep command inside a while loop to periodically check on a file every 10 seconds. Print out a message while waiting. Display the content of the file once the file is created. Below is what I have tried to implement and it obviously does not work. At this point, I'm not entirely sure how to proceed.
#!/bin/bash
let file=$1
while '( -f ! /tmp/$1)'
do
sleep 10
echo "still waiting"
done
echo "Content of the file $1:"
The problem here is with the test, not the sleep (as the original question hypothesized). The smallest possible fix might look as follows:
while ! test -f "/tmp/$1"; do
sleep 10
echo "Still waiting"
done
Keep in mind the syntax for a while loop:
while: while COMMANDS; do COMMANDS; done
Expand and execute COMMANDS as long as the final command in the
`while' COMMANDS has an exit status of zero.
That is to say, the first argument given to while, expanding the loop, is a command; it needs to follow the same syntax rules as any other shell command.
-f is valid as an argument to test -- a command which is also accessible under the name [, requiring a ] as the last argument when used in that name -- but it's not valid as a command in and of itself -- and when passed as part of a string, it's not even a shell word that could be parsed as an individual command name or argument.
When you run '( -f ! /tmp/$1)' as a command, inside quotes, the shell looks for an actual command with exactly that name (including the spaces). You probably don't have a file named '/usr/bin/( -f ! /tmp/$1)' or any other command by that name in your PATH, so it will always fail -- exiting the while loop immediately.
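You can check both halves of that claim directly; the temporary file here is just for illustration:

```shell
#!/usr/bin/env bash
tmp=$(mktemp)                  # a file that definitely exists

test -f "$tmp" && echo "test says it exists"
[ -f "$tmp" ]  && echo "[ agrees: same command under another name"

rm -f "$tmp"
[ -f "$tmp" ] || echo "[ now reports it is gone"

# The whole quoted string is looked up as one command name and is not found:
rc=0; '( -f ! /tmp/x )' 2>/dev/null || rc=$?
echo "quoted string exit status: $rc"   # 127: command not found
```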
By the way -- if you're willing to make your code OS-specific, there are approaches other than using sleep to wait for a file to exist. Consider, for instance, inotifywait, from the inotify-tools package:
while ! test -f "/tmp/$1"; do
echo "waiting for a change to the contents of /tmp" >&2
inotifywait --timeout 10 --event create /tmp >/dev/null || {
(( $? == 2 )) && continue ## inotify exit status 2 means timeout expired
echo "unable to sleep with inotifywait; doing unconditional 10-second loop" >&2
sleep 10
}
done
The benefit of an inotify-based interface is that it returns immediately upon a filesystem change, and doesn't incur polling overhead (which can be particularly significant if it prevents a system from sleeping).
By the way, some practice notes:
- Quoting expansions in filenames (i.e. "/tmp/$1") prevents names with spaces or wildcards from being split into multiple distinct arguments.
- Using >&2 on echo commands meant for human consumption keeps stdout available for programmatic consumption.
- let is used for math, not general-purpose assignments. If you want to use "$file", nothing wrong with that -- but the assignment should just be file=$1, with no preceding let.
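Putting those notes together, a corrected version of the original script could look like the sketch below (wrapped in a function here only so the pieces are easy to reuse; it is still a plain 10-second poll):

```shell
#!/bin/bash
wait_for_file() {
  local file=/tmp/$1            # plain assignment; no 'let'
  while ! test -f "$file"; do
    echo "still waiting for $file" >&2   # progress messages go to stderr
    sleep 10
  done
  echo "Content of the file $1:"
  cat "$file"
}

[ $# -ge 1 ] && wait_for_file "$1"
```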

Using tail in a subshell in conjunction with while/break does not exit the loop

I have been facing a very peculiar issue with shell scripts.
Here is the scenario
Script1 (spawns in background)--> Script2
Script2 has the following code
function check_log()
{
logfile=$1
tail -5f ${logfile} | while read line
do
echo $line
if echo $line|grep "${triggerword}";then
echo "Logout completion detected"
start_leaks_detection
triggerwordfound=true
echo "Leaks detection complete"
fi
if $triggerwordfound;then
echo "Trigger word found and processing complete.Exiting"
break
fi
done
echo "Outside loop"
exit 0
}
check_log "/tmp/somefile.log" "Logout detected"
Now the break in the while loop does not help here. I can see "Logout completion detected" as well as "Leaks detection complete" echoed on stdout, but not the string "Outside loop".
I am assuming this has to do something with tail -f creating a subshell. What I want to do is, exit that subshell as well as exit Script2 to get control back to Script1.
Can someone please shed some light on how to do this?
Instead of piping into your while loop, use this format instead:
while read line
do
# put loop body here
done < <(tail -5f "$logfile")
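Applied to the function from the question, that might look like the following sketch (the messages are kept from the original; the grep test is simplified to a pattern match, and the call to the asker's start_leaks_detection would go inside the if):

```shell
#!/usr/bin/env bash
check_log() {
  local logfile=$1 triggerword=$2 line
  # The while loop now runs in the current shell, so 'break' really
  # leaves it; only tail runs as a separate process.
  while read -r line; do
    echo "$line"
    if [[ $line == *"$triggerword"* ]]; then
      echo "Logout completion detected"
      echo "Trigger word found and processing complete. Exiting"
      break
    fi
  done < <(tail -n 5 -f "$logfile")
  echo "Outside loop"
}

# usage: check_log /tmp/somefile.log "Logout detected"
```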
Try this, although it's not quite the same (it doesn't skip the beginning of the log file at startup):
triggerwordfound=
while [ -z "$triggerwordfound" ]; do
while read line; do
echo $line
if echo $line|grep "${triggerword}";then
echo "Logout completion detected"
start_leaks_detection
triggerwordfound=true
echo "Leaks detection complete"
fi
done
done < "$logfile"
echo "Outside loop"
The double loop effectively does the same thing as tail -f.
Your function works in a sense, but you won't notice that it does so until another line is written to the file after the trigger word has been found. That's because tail -5 -f can usually write all of the last five lines of the file to the pipe in one write() call and continue to write new lines all in one call, so it won't be sent a SIGPIPE signal until it tries to write to the pipe after the while loop has exited.
So, if your file grows regularly then there shouldn't be a problem, but if it's more common for your file to stop growing just after the trigger word is written to it, then your watcher script will also hang until any new output is written to the file.
I.e. SIGPIPE is not sent immediately when a pipe is closed, even if there's un-read data buffered in it, but only when a subsequent write() on the pipe is attempted.
This can be demonstrated very simply. This command will not exit (provided the tail of the file is less than a pipe-sized buffer) until you either interrupt it manually, or you write one more byte to the file:
tail -f some_large_file | read one
However if you force tail to make multiple writes to the pipe and make sure the reader exits before the final write, then everything will work as expected:
tail -c 1000000 some_large_file | read one
Unfortunately it's not always easy to discover the size of a pipe buffer on a given system, nor is it always possible to only start reading the file when there's already more than a pipe buffer's worth of data in the file, and the trigger word is already in the file and at least a pipe buffer's size bytes from the end of the file.
Unfortunately tail -F (which is what you should probably use instead of -f) doesn't also try writing zero bytes every 5 seconds, or else that would maybe solve your problem in a more efficient manner.
Also, if you're going to stick with using tail, then -1 is probably sufficient, at least for detecting any future event.
BTW, here's a slightly improved implementation, still using tail since I think that's probably your best option (you could always add a periodic marker line to the log with cron or similar (most syslogd implementations have a built-in mark feature too) to guarantee that your function will return within the period of the marker):
check_log ()
{
tail -1 -F "$1" | while read line; do
case "$line" in
*"${2:-SOMETHING_IMPOSSIBLE_THAT_CANNOT_MATCH}"*)
echo "Found trigger word"
break
;;
esac
done
}
Replace the echo statement with whatever processing you need to do when the trigger phrase is read.

ksh ignores exactly two newlines when reading from /dev/tty

We have a ksh script doing a 'while read line' with input piped into it. At the same time we're reading user confirmation input with 'read < /dev/tty', similar to the following sketch:
cat interestingdata | while read line ; do
x=$(dostuff $line)
if [[ x -ne 0 ]] ; then
read y < /dev/tty
$(domorestuff $y)
fi
echo "done optional stuff"
done
All works fine for processing the lines of 'interestingdata', and for most of the reads from /dev/tty. However, on the first two iterations of the while loop, the first string + newline are ignored.
By this, I mean the user types something and presses enter, and the script doesn't progress to echo "done optional stuff". Instead, the user has to type something else and press enter again, and only then does the script proceed.
This happens only for the first two iterations of the while loop, and then everything works perfectly. Any ideas how I can fix this? I have no idea what else I can do here!
Running linux kernel 2.6.9-55.9.vm2.ELsmp with ksh93 if that helps.
It sounds like either "dostuff" or "domorestuff" is sometimes reading from stdin.
Try replacing "dostuff" with "dostuff < /dev/null" and "domorestuff" with "domorestuff < /dev/null" and see if the behavior changes.
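A minimal sketch of that change (dostuff and domorestuff are placeholders from the question, stubbed out here; the stub always returns 0 so this demo never actually blocks on /dev/tty):

```shell
#!/usr/bin/env bash
# Stubs standing in for the real commands:
dostuff() { echo 0; }              # always 0 here, so /dev/tty is never read
domorestuff() { echo "more: $1"; }

printf 'hello\nworld\n' > interestingdata   # sample input for the demo

while read -r line; do
  x=$(dostuff "$line" < /dev/null)     # can no longer swallow the piped lines
  if [[ $x -ne 0 ]]; then
    read -r y < /dev/tty
    domorestuff "$y" < /dev/null       # nor eat the /dev/tty answer
  fi
  echo "done optional stuff"
done < interestingdata

rm -f interestingdata
```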

Bash Script Monitor Program for Specific Output

So this is probably an easy question, but I am not much of a bash programmer and I haven't been able to figure this out.
We have a closed source program that calls a subprogram which runs until it exits, at which point the program will call the subprogram again. This repeats indefinitely.
Unfortunately the main program will sometimes spontaneously (and repeatedly) fail to call the subprogram after a random period of time. The eventual solution is to contact the original developers to get support, but in the meantime we need a quick hotfix for the issue.
I'm trying to write a bash script that will monitor the output of the program and when it sees a specific string, it will restart the machine (the program will run again automatically on boot). The bash script needs to pass all standard output through to the screen up until it sees the specific string. The program also needs to continue to handle user input.
I have tried the following with limited success:
./program1 | ./watcher.sh
watcher.sh is basically just the following:
while read line; do
echo $line
if [$line == "some string"]
then
#the reboot script works fine
./reboot.sh
fi
done
This seems to work OK, but leading whitespace is stripped on the echo statement, and the echo output hangs in the middle until subprogram exits, at which point the rest of the output is printed to the screen. Is there a better way to accomplish what I need to do?
Thanks in advance.
I would do something along the lines of:
stdbuf -o0 ./program1 | grep --line-buffered "some string" | (read && reboot)
You need to quote your $line variable, i.e. "$line", for all references (except the read line bit).
Your program1 is probably the source of the 'paused' data; it needs to flush its output buffer, and you probably don't have control over that, so:
a. Check whether your system has the unbuffer command available. If so, try unbuffer cmd1 | watcher. You may have to experiment with which command you wrap in unbuffer; maybe you will have to do cmd1 | unbuffer watcher.
b. Or you can try wrapping watcher in a command group (I think that is the right terminology), i.e.:
./program1 | { ./watcher.sh ; printf "\n" ; }
I hope this helps.
Use read's $REPLY variable; also, I'd suggest using printf instead of echo:
while read; do
printf "%s\n" "$REPLY"
# '[[' is Bash, quotes are not necessary
# use '[ "$REPLY" == "some string" ]' if in another shell
if [[ $REPLY == "some string" ]]
then
#the reboot script works fine
./reboot.sh
fi
done
