In Bash the only way to get user input seems to be the read builtin, which pauses the script while it waits. Is there any way to receive command-line input (ended by the Enter key) without pausing the script? From what I've seen there may be a way to do it with $1..?
read -t 0 can be used to probe for pending input if your process is structured as a loop:
#!/bin/bash
a='\|/-'
spin()
{
    sleep 0.3
    a="${a:1}${a:0:1}"                  # rotate the pattern one character to the left
    echo -n $'\e'7$'\r'"${a:1:1}"$'\e'8 # save cursor, draw at start of line, restore cursor
}
echo 'try these /|\- , dbpq , |)>)|(<( , =>-<'
echo -n " enter a pattern to spin:"
while true
do
    spin
    if read -t 0   # succeeds if a line is waiting, without consuming it
    then
        read a
        echo -n " using $a enter a new pattern:"
    fi
done
Otherwise you could run one command in the background while prompting for input in the foreground, etc.
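For instance, a minimal sketch of that idea (long_running_task is a placeholder, not a real command):
long_running_task &                        # do the work in the background
bgpid=$!
read -r -p " enter a pattern to spin:" a   # only this prompt blocks the foreground
wait "$bgpid"                              # collect the background job's exit status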
## bash script
foo(){
    sleep 0.0001
    read -s -t 0.0002 var   ## grab the pipeline input, if there is any
    if [ -n "$var" ];then
        echo "has pipe input"
    fi
    if [ $# -gt 0 ];then
        echo "has argument input"
    fi
    read -s -t 30 var1 #1   # expected to wait up to 30 seconds, but actually does not
    echo "failed"
    read -s -t 30 var2 #2
    echo "failed again"
}
echo "ok1"| foo "ok"
## output:
has pipe input
has argument input
failed
failed again
If foo has pipeline input, the read commands at #1 and #2 return immediately, without waiting for the TIMEOUT.
In my real script there are three needs:
1. The function must accept pipeline input and arguments at the same time (I want my function to be able to take its configuration from pipeline input, which I think will be nice. For example:)
foo(){
local config
sleep 0.0001
read -t 0.0002 config
eval "$config"
}
Then I can pass configuration like this:
foo para1 para2 <<EOF
G_TIMEOUT=300
G_ROW_SPACE=2
G_LINE_NUM=10
EOF
2. Inside the function I need to read user input from the keyboard (I need to interact with the user via read).
3. Waiting for user input should have a timeout (I want to implement a screensaver: if there is no user action for TIMEOUT seconds, a screensaver script is called; after any keypress the screensaver returns and we wait for user input again).
Is there a way to redirect the pipeline input to fd 3 after I have read it, then close fd 3 to break the pipe, and then reopen fd 0 on the real standard input (the keyboard) and wait for user input?
It doesn't wait for input because it has reached the end of the pipe.
echo "ok1" | ... writes a single line to the pipe, then closes it. The first read in the function reads ok1 into var. All other read calls return immediately because there is no more data to read and no chance of more data appearing later because the write end of the pipe has already been closed.
If you want the pipe to stay open, you have to do something like
{ echo ok1; sleep 40; echo ok2; } | foo
Because function foo has pipeline input, the input fd of the child process is redirected to the pipeline automatically; redirecting standard input back to the keyboard (/dev/tty) after reading the pipeline input solves the problem.
Thanks to the people who gave me that clue: it is not a problem with the read command, it is actually an fd problem.
foo(){
    local config input
    sleep 0.0001
    read -t 0.0002 config       ## first line of the pipeline input, if any
    if [ -n "$config" ];then
        config+=$'\n'$(cat -)   ## append the rest of the pipeline input
    fi
    echo "$config"
    exec 3</dev/tty             ## open the terminal on fd 3
    read -t 10 -u 3 input       ## read from the keyboard via fd 3
    echo "success!"
}
A better approach:
foo(){
    local config input
    sleep 0.0001
    read -t 0.0002 config
    if [ -n "$config" ];then
        config+=$'\n'$(cat -)   ## append the rest of the pipeline input
    fi
    exec 0<&-        ## close the current pipeline input
    exec 0</dev/tty  ## reopen fd 0 on standard input (the keyboard)
    read -t 10 input ## read now waits for keyboard input :) good!
    echo "success!"
}
Furthermore, if I could detect whether the current input is a pipe or the terminal, I would not need the read config trick to test for pipeline input. But how can that be done? [ -t 0 ] is a good idea.
An even better approach:
foo(){
    local config input
    if [ ! -t 0 ];then      ## stdin is not a terminal, so there is pipeline input
        config=$(cat -)     ## slurp all of the pipeline input
        exec 0<&-           ## close the current pipeline input
        exec 0</dev/tty     ## reopen fd 0 on standard input (the keyboard)
    fi
    read -t 10 input        ## read waits for keyboard input :) great!
    echo "success!"
}
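For illustration, the function can then be driven either way (these invocations are my assumption of the intended use, not from the original post):
foo <<EOF
G_TIMEOUT=300
EOF
## [ -t 0 ] is false for the heredoc, so config is slurped and stdin is rebound to the tty
foo
## run from an interactive terminal: [ -t 0 ] is true and read waits on the keyboard directly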
As an addition to melpomene's answer, you can see this when executing the following lines:
$ echo foo | { read -t 10 var1; echo $?; read -t 10 var2; echo $?; }
0
1
$ read -t 1 var; echo $?
142
These lines print the return codes of read, and the manual states:
The return code is zero, unless end-of-file is encountered, read times out (in which case the return code is greater than 128), or an invalid file descriptor is supplied as the argument to -u.
source: man bash
From this we see that the second read in the first command fails because EOF is reached.
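Since a timeout yields a status greater than 128 (142 is presumably 128 plus SIGALRM's signal number 14) while EOF yields 1, the two cases can be told apart; a small sketch:
read -t 5 -r var
rc=$?
if (( rc > 128 )); then
    echo "read timed out"
elif (( rc != 0 )); then
    echo "end of file (or read error)"
else
    echo "got: $var"
fi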
I have a bash script that sequentially runs some Perl scripts which are read from a file. These scripts require a press of Enter to continue.
Strangely, when I run the script it never waits for the input but just continues. I assume something in the bash script is interpreted as an Enter or some other keypress and makes the Perl scripts continue.
I'm sure there is a solution out there but I don't really know what to look for.
My bash script has this while loop, which iterates through the list of Perl scripts (listed in seqfile):
while read zeile; do
    if [[ ${datei:0:1} != 'p' ]]; then
        datei=${zeile:8}
    else
        datei=$zeile
    fi
    case ${zeile: -3} in
        ".pl")
            perl "$datei" #Here it just goes on...
            #echo "Test 1"
            #echo "Test 2"
            ;;
        ".pm")
            echo "$datei is a Perl Module"
            ;;
        *)
            echo "Something else"
            ;;
    esac
done <<< "$seqfile"
Notice the two commented lines with echo "Test 1/2"; I wanted to know how they are displayed.
They are printed one under the other, as if Enter had been pressed:
Test 1
Test 2
The output of the Perl scripts is correct; I just have to figure out a way to force the input to be read from the user and not from the script.
Have the perl script redirect input from /dev/tty.
Proof of concept:
while read line ; do
    export line
    perl -e 'print "Enter $ENV{line}: ";$y=<STDIN>;print "$ENV{line} is $y\n"' </dev/tty
done <<EOF
foo
bar
EOF
Program output (user input shown after each prompt):
Enter foo: 123
foo is 123
Enter bar: 456
bar is 456
@mob's answer is interesting, but I'd like to propose an alternative solution for your use case that also works if your overall bash script is run with a specific input redirection (i.e. its stdin is not /dev/tty).
Minimal working example:
script.perl
#!/usr/bin/env perl
use strict;
use warnings;
{
local( $| ) = ( 1 );
print "Press ENTER to continue: ";
my $resp = <STDIN>;
}
print "OK\n";
script.bash
#!/bin/bash
exec 3<&0 # duplicate STDIN onto fd 3 as a backup
while read line; do
echo "$line"
perl "script.perl" <&3 # redirect fd 3 to perl's input
done <<EOF
First
Second
EOF
exec 3<&- # close fd 3
So this works both when running ./script.bash in a terminal and with, for example, yes | ./script.bash.
For more info on redirections, see e.g. this article or this cheat sheet.
Hope this helps.
I have a bash loop that looks like this:
something(){
echo "something"
}
while true; do
something
sleep 10s
done | otherCommand
When the loop is in the sleep state, I want to be able to run a function from the terminal that skips the sleep step and goes on to the next iteration of the loop.
For example, if the script has been sleeping for 5 seconds, I want the command to stop the sleep, go on to the next iteration of the loop, run something, and start sleeping again.
This is not foolproof, but may be robust enough:
To interrupt the sleep command, run:
pkill -f 'sleep 10s'
Note that the script running the sleep command prints something like the following to stderr when sleep is killed: <script-path>: line <line-number>: <pid> Terminated: 15 sleep 10s. Curiously, you cannot suppress this message with sleep 10s 2>/dev/null; to suppress it, you have to apply 2>/dev/null either to the while loop as a whole or to the script invocation as a whole.
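For example, applied to the loop from the question, the redirection would be placed like this (sketch):
while true; do
    something
    sleep 10s
done 2>/dev/null | otherCommand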
As @miken32 points out in the comments, the pkill command above tries to kill all commands whose invocation command line matches sleep 10s; unless you're running as root, though, killing will only succeed for matches among your own processes, due to lack of permission to kill other users' processes.
To be safer, you can explicitly restrict matches to your own processes:
pkill -u "$(id -u)" -f 'sleep 10s'
Truly safe use, however, requires that you capture your running script's PID (process ID), save it to a file, and use the PID from that file with pkill's -P option, which limits matches to child processes of the specified PID; see this answer (thanks, @miken32).
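A sketch of that PID-file approach (the file path is an assumption):
## inside the looping script, once at startup:
echo $$ > /tmp/myloop.pid
## from another terminal, kill only the sleep that is a child of that script:
pkill -P "$(cat /tmp/myloop.pid)" -f 'sleep 10s'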
If you want to skip the sleep, use something like a file you can touch:
if [ ! -f /tmp/skipsleep ]; then
sleep 10
fi
When you want to interrupt the sleep command, kill it!
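Combining the flag check with the loop from the question, one possible sketch (removing the flag so only one sleep is skipped):
while true; do
    something
    if [ ! -f /tmp/skipsleep ]; then
        sleep 10
    else
        rm -f /tmp/skipsleep   # consume the flag so later iterations sleep again
    fi
done | otherCommand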
You could read from a named pipe with read -t 10 instead of sleeping (props for the named pipe go to "that other guy"):
something()
{
    echo "something"
}
somethingelse()
{
    echo "something else"
}
mkfifo ~/.myfifo
while cat ~/.myfifo; do true; done |
while true
do
    something
    read -t 10 && somethingelse
done
Now whenever another script writes to the fifo with echo > ~/.myfifo, the loop will skip its current wait and continue to the next iteration.
This way, different users or different scripts waiting ten seconds will not interfere with each other.
The solution below works if the script runs in the foreground and you are waiting at the terminal. You would, of course, need a valid loop exit condition. You can also check the value entered and act on it differently: in this case, pressing any key except 'j' iterates the loop, while pressing 'j' pipes the output of somethingelse to awk.
something()
{
    echo "something"
}
somethingelse()
{
    echo "something else"
}
while true; do
    something | awk '{print "piping something: " $0 }'
    read -t 3 -s -n 1 answer
    if [ $? -eq 0 ]; then
        echo "you didn't want to wait!"
    fi
    if [ "$answer" = "j" ]; then
        somethingelse | awk '{print "piping something else: " $0 }'
    fi
done
If it's interactive, then just use read -t <seconds> instead of sleep <seconds>.
You can then just press Enter to skip the timeout.
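Applied to the loop from the question, that substitution looks like this (sketch):
while true; do
    something
    read -t 10   # waits up to 10 seconds; pressing Enter continues immediately
done | otherCommand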
This may be old, but a very simple, elegant solution is to use a variable that starts out nonexistent, empty, or 0, checking whether it has a value before iterating; the && || chain then works like a ternary operator:
for ((;;)); do
    # skip the first iteration
    (( i )) && something || i=1
    # skip the first 5 iterations
    (( n >= 5 )) && something ||
        ((n++))
done
I am writing a bash script that must interact, interactively, with an existing Perl program. Unfortunately I cannot touch the existing Perl program, nor can I use expect.
Currently the script works along the lines of this stackoverflow answer: Is it possible to make a bash shell script interact with another command line program?
The problem is (read: seems to be) that the Perl program does not always print a <newline> before asking for input. This means that bash's while ... read on the named pipe does not "get" (read: display) the Perl program's output, because it keeps waiting for more. At least that is how I understand it.
So basically the perl program is waiting for input but the user does not know because nothing is on the screen.
So what I do in the bash script is roughly:
#!/bin/bash
mkfifo "$readpipe"
mkfifo "$writepipe"
[call perl program] < "$writepipe" &> "$readpipe" &
exec {FDW}>"$writepipe"
exec {FDR}<"$readpipe"
...
while IFS= read -r L
do
    echo "$L"
done < "$readpipe"
That works, unless the perl program does something like:
print "\n";
print "Choose action:\n";
print "[A]: Action A [B]: Action B\n";
print " [C]: cancel\n";
print " ? ";
print "[C] ";
local $SIG{INT} = 'IGNORE';
$userin = <STDIN> || ''; chomp $userin;
print "\n";
Then the bash script only "sees"
Choose action:
[A]: Action A [B]: Action B
[C]: cancel
but not the
? [C]
This is not the most problematic case, but the one that is easiest to describe.
Is there a way to make sure the ? [C] is printed as well? (I played around with cat <$readpipe & but that did not really work.)
Or is there a better approach all together (given the limitation that I cannot modify the perl program nor can I use expect)?
Use read -N1.
Let's try the following example: interact with a program that prints a prompt (not ended by a newline); our side must send some command and then receive the echo of the command sent. That is, the total output of the child process is:
$ cat example
prompt> command1
prompt> command2
The script could be:
#!/bin/bash
#
cat example | while IFS=$'\0' read -N1 c; do
case "$c" in
">")
echo "received prompt: $buf"
# here, sent some command
buf=""
;;
*)
if [ "$c" == $'\n' ]; then
echo "received command: $buf"
# here, process the command echo
buf=""
else
buf="$buf$c"
fi
;;
esac
done
That produces the following output:
received prompt: prompt
received command: command1
received prompt: prompt
received command: command2
This second example is closer to the original question:
$ cat example
Choose action:
[A]: Action A [B]: Action B
[C]: cancel
? [C]
The script is now:
#!/bin/bash
#
while IFS=$'\0' read -N1 c; do
case "$c" in
'?')
echo "*** received prompt after: $buf$c ***"
echo '*** send C as option ***'
buf=""
;;
*)
buf="$buf$c"
;;
esac
done < example
echo "*** final buffer is: $buf ***"
and the result is:
*** received prompt after:
Choose action:[A]: Action A [B]: Action B
[C]: cancel
? ***
*** send C as option ***
*** final buffer is: [C]
***
Bash: I want to run a command and pipe the results through some filter, but if the command fails, I want to return the command's error value, not the boring return value of the filter:
E.g.:
if !(cool_command | output_filter); then handle_the_error; fi
Or:
set -e
cool_command | output_filter
In either case it's the return value of cool_command that I care about -- for the 'if' condition in the first case, or to exit the script in the second case.
Is there some clean idiom for doing this?
Use the PIPESTATUS builtin variable.
From man bash:
PIPESTATUS
An array variable (see Arrays below) containing a list of exit status values from the processes in the most-recently-executed foreground pipeline (which may contain only a single command).
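For the question's example, a minimal sketch using it (capture the value right away, since the next command overwrites PIPESTATUS):
cool_command | output_filter
rc=${PIPESTATUS[0]}
if [ "$rc" -ne 0 ]; then
    handle_the_error
fi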
If you didn't need to display the error output of the command, you could do something like
if ! echo | mysql $dbcreds mysql; then
error "Could not connect to MySQL. Did you forget to add '--db-user=' or '--db-password='?"
die "Check your credentials or ensure server is running with /etc/init.d/mysqld status"
fi
In the example, error and die are functions defined elsewhere in the script. $dbcreds is also defined elsewhere, built from command line options. If the command generates no error, nothing is returned; if an error occurs, this particular command returns text describing it.
Correct me if I'm wrong, but I get the impression you're really looking to do something a little more convoluted than
[ `id -u` -eq '0' ] || die "Must be run as root!"
where you actually grab the user ID prior to the if statement, and then perform the test. Doing it this way, you could then display the result if you choose. This would be
uid=$(id -u)   ## note: UID itself is a readonly variable in bash, so use another name
if [ "$uid" -eq 0 ]; then
    echo "User is root"
else
    echo "User is not root"
    exit 1 ## set an exit code higher than 0 if you're exiting because of an error
fi
The following script uses a fifo to filter the output in a separate process. This has the following advantages over the other answers. First, it is not bash specific. In particular it does not rely on PIPESTATUS. Second, output is not stalled until the command has completed.
$ cat >test_filter.sh <<'EOF'
#!/bin/sh
cmd()
{
    echo "$1"
    echo "$2" >&2
    return $3
}
filter()
{
    while read line
    do
        echo "... $line"
    done
}
tmpdir=$(mktemp -d)
fifo="$tmpdir"/out
mkfifo "$fifo"
filter <"$fifo" &
pid=$!
cmd a b 10 >"$fifo" 2>&1
ret=$?
wait $pid
echo exit code: $ret
rm -f "$fifo"
rmdir "$tmpdir"
EOF
$ sh ./test_filter.sh
... a
... b
exit code: 10