I am writing a bash script that should interact (interactively) with an existing (perl) program. Unfortunately I cannot touch the existing perl program nor can I use expect.
Currently the script works along the lines of this Stack Overflow answer: Is it possible to make a bash shell script interact with another command line program?
The problem is (read: seems to be) that the perl program does not always send a <newline> before asking for input. This means that bash's while ... read on the named pipe does not "get" (read: display) the perl program's output because it keeps waiting for more. At least that is how I understand it.
So basically the perl program is waiting for input but the user does not know because nothing is on the screen.
So what I do in the bash script is roughly this:
#!/bin/bash
mkfifo "$readpipe"
mkfifo "$writepipe"
[call perl program] < "$writepipe" &> "$readpipe" &
exec {FDW}>"$writepipe"
exec {FDR}<"$readpipe"
...
while IFS= read -r L
do
    echo "$L"
done < "$readpipe"
That works, unless the perl program is doing something like
print "\n";
print "Choose action:\n";
print "[A]: Action A [B]: Action B\n";
print " [C]: cancel\n";
print " ? ";
print "[C] ";
local $SIG{INT} = 'IGNORE';
$userin = <STDIN> || ''; chomp $userin;
print "\n";
Then the bash script only "sees"
Choose action:
[A]: Action A [B]: Action B
[C]: cancel
but not the
? [C]
This is not the most problematic case, but the one that is easiest to describe.
Is there a way to make sure the ? [C] is printed as well (I played around with cat <$readpipe & but that did not really work)?
Or is there a better approach all together (given the limitation that I cannot modify the perl program nor can I use expect)?
Use read -N1.
Let's try with the following example: we interact with a program that sends a prompt (not ended by a newline), our side must send some command, and then receive the echo of the command it sent. That is, the total output of the child process is:
$ cat example
prompt> command1
prompt> command2
The script could be:
#!/bin/bash
#
cat example | while IFS=$'\0' read -N1 c; do
    case "$c" in
    ">")
        echo "received prompt: $buf"
        # here, send some command
        buf=""
        ;;
    *)
        if [ "$c" == $'\n' ]; then
            echo "received command: $buf"
            # here, process the command echo
            buf=""
        else
            buf="$buf$c"
        fi
        ;;
    esac
done
which produces the following output:
received prompt: prompt
received command: command1
received prompt: prompt
received command: command2
This second example is closer to the original question:
$ cat example
Choose action:
[A]: Action A [B]: Action B
[C]: cancel
? [C]
The script is now:
#!/bin/bash
#
while IFS=$'\0' read -N1 c; do
    case "$c" in
    '?')
        echo "*** received prompt after: $buf$c ***"
        echo '*** send C as option ***'
        buf=""
        ;;
    *)
        buf="$buf$c"
        ;;
    esac
done < example
echo "*** final buffer is: $buf ***"
and the result is:
*** received prompt after:
Choose action:[A]: Action A [B]: Action B
[C]: cancel
? ***
*** send C as option ***
*** final buffer is: [C]
***
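Applied to the FIFO setup from the question, a rough, untested sketch could look like this (it assumes the readpipe/writepipe names and the perl invocation placeholder from the question, and uses "?" as the prompt marker just like the second example above):
#!/bin/bash
readpipe=/tmp/readpipe.$$
writepipe=/tmp/writepipe.$$
mkfifo "$readpipe" "$writepipe"
[call perl program] < "$writepipe" &> "$readpipe" &
exec {FDW}>"$writepipe"
exec {FDR}<"$readpipe"

buf=""
while IFS= read -u "$FDR" -N1 c; do
    printf '%s' "$c"              # echo everything as it arrives, newline or not
    buf="$buf$c"
    if [ "$c" = "?" ]; then       # prompt detected even without a trailing newline
        read -r answer            # ask the real user on the terminal
        printf '%s\n' "$answer" >&"$FDW"
        buf=""
    fi
done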
I have a Bash script that sequentially runs some Perl scripts which are read from a file. These scripts require a press of Enter to continue.
Strangely, when I run the script it never waits for the input but just continues. I assume something in the Bash script is interpreted as an Enter or some other key press and makes the Perl scripts continue.
I'm sure there is a solution out there but I don't really know what to look for.
My Bash script has this while loop which iterates through the list of Perl scripts (listed in seqfile):
while read zeile; do
    if [[ ${datei:0:1} != 'p' ]]; then
        datei=${zeile:8}
    else
        datei=$zeile
    fi
    case ${zeile: -3} in
    ".pl")
        perl "$datei" #Here it just goes on...
        #echo "Test 1"
        #echo "Test 2"
        ;;
    ".pm")
        echo "$datei is a Perl Module"
        ;;
    *)
        echo "Something else"
        ;;
    esac
done <<< "$seqfile"
Notice the two commented lines with echo "Test 1/2". I wanted to know how they are displayed.
They are printed one under the other, as if Enter had been pressed:
Test 1
Test 2
The output of the Perl scripts is correct; I just have to figure out a way to force the input to be read from the user and not from the script.
Have the perl script redirect input from /dev/tty.
Proof of concept:
while read line ; do
    export line
    perl -e 'print "Enter $ENV{line}: ";$y=<STDIN>;print "$ENV{line} is $y\n"' </dev/tty
done <<EOF
foo
bar
EOF
Program output (the user's input is shown after each prompt):
Enter foo: 123
foo is 123
Enter bar: 456
bar is 456
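Applied to the loop from the question, this amounts to one extra redirection on the perl call, roughly:
".pl")
    perl "$datei" < /dev/tty   # read the keyboard, not the remainder of seqfile
    ;;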
@mob's answer is interesting, but I'd like to propose an alternative solution for your use case that will also work if your overall bash script is run with a specific input redirection (i.e. not /dev/tty).
Minimal working example:
script.perl
#!/usr/bin/env perl
use strict;
use warnings;
{
local( $| ) = ( 1 );
print "Press ENTER to continue: ";
my $resp = <STDIN>;
}
print "OK\n";
script.bash
#!/bin/bash
exec 3>&0 # backup STDIN to fd 3
while read line; do
    echo "$line"
    perl "script.perl" <&3 # redirect fd 3 to perl's input
done <<EOF
First
Second
EOF
exec 3>&- # close fd 3
So this will work with both: ./script.bash in a terminal and yes | ./script.bash for example...
For more info on redirections, see e.g. this article or this cheat sheet.
Hope this helps.
I am trying to run, from a shell script, a C++ program that prints some output (using std::cout), and I would like to see it in the console while the program is running.
I tried some things like this:
RES=`./program`
RES=$(./program)
But all I can do is display the result at the end with echo $RES...
How can I display the output in the console at run time AND capture it in the variable RES?
TTY=$(tty);
SAVED_OUTPUT=$(echo "my dummy c++ program" | tee ${TTY});
echo ${SAVED_OUTPUT};
prints
my dummy c++ program
my dummy c++ program
First we save off the name of the current terminal (because tty doesn't work in a pipeline).
TTY=$(tty)
Then we "T" the output (a letter T looks like one stream in at the bottom, 2 out at the top, and comes from the same "plumbing" metaphor as "pipe"), which copies it to the filename given; in this case the "file" is really a special device representing our terminal.
echo "my dummy c++ program" | tee ${TTY}
RES=( $(./program) )
echo "${RES[@]}"
You can try it this way.
You can use a temporary file:
./program | tee temp
RES=$(< temp)
rm temp
You can generate a temporary file with a unique name by using mktemp.
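A sketch of the same approach with mktemp, so the file name is unique and cleaned up afterwards:
temp=$(mktemp)
./program | tee "$temp"
RES=$(< "$temp")
rm -f "$temp"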
res=$(sed -n 'p;' <<< "$(printf '%s\n' '*' $'hello\t\tworld'; sleep 5; echo "post-streaming content")" & wait)
echo "$res"
#output
*
hello world
post-streaming content
[sky@kvm35066 tmp]$ cat test.sh
#!/bin/bash
echo $BASH_VERSION
res=$(printf '%s\n' '*' $'hello\t\tworld'; sleep 5; echo "post-streaming content")
echo "$res"
[sky@kvm35066 tmp]$ bash test.sh
4.1.2(1)-release
*
hello world
post-streaming content
I think the result is correct, and it is what you want.
* the wording of the question is terrible, sorry!
I have some bash functions I created:
test() { echo "hello world"; }
test2() { echo "hello world"; }
Then in my .bashrc I source the file that has the above functions: . ~/my_bash_scripts/testFile
In the terminal I can run test and get hello world.
Is there a way for me to add a parent command that holds all my functions together? For example personal test, personal test2.
Similar to every other gem out there, I downloaded a Twitter one. All its methods are prefixed with the letter t, as in t status to write a status, instead of just status.
You are asking about writing a command-line program. Just a simple one here:
#!/usr/bin/env bash
if [[ $# -eq 0 ]]; then
    echo "no command specified"
    exit
elif [[ $# -gt 1 ]]; then
    echo "only one argument expected"
    exit
fi

case "$1" in
    test)
        echo "hello, this is test1"
        ;;
    test2)
        echo "hello, this is test2"
        ;;
    *)
        echo "unknown command: $1"
        ;;
esac
Then save it and make it executable by running chmod +x script.sh, and in your .bashrc file, add alias personal="/fullpath/to/the/script.sh".
This is just a very basic and simple example using bash; of course you can use any language you like, e.g. Python, Ruby, Node, etc.
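For example, with the alias in place a session could look like this (the exact output depends on what you wire into the case statement):
$ personal test
hello, this is test1
$ personal test3
unknown command: test3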
Use arguments to determine final outputs.
You can use "$#" for number of arguments.
For example,
if [ $# -ne 2 ]; then
    # TODO: print usage
    exit 1
fi
The above code exits if the number of arguments is not equal to 2.
So the bash program below
echo $#
invoked as
thatscript foo bar baz quux
will output 4.
Finally, you can combine the argument words to determine what to print to stdout.
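Putting both ideas together, a minimal sketch (the script name, argument meanings, and messages are just placeholders):
#!/usr/bin/env bash
# require exactly two argument words, then use them to build the output
if [ $# -ne 2 ]; then
    echo "usage: $0 <group> <command>" >&2
    exit 1
fi
echo "group: $1, command: $2"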
If you want to flag some functions as your personal functions; no, there is no explicit way to do that, and essentially, all shell functions belong to yourself (although some may be defined by your distro maintainer or system administrator as system-wide defaults).
What you could do is collect the output from declare -F at the very top of your personal shell startup file; any function not in that list is your personal function.
SYSFNS=$(declare -F | awk '{ a[++i] = $3 }
END { for (n=1; n<=i; n++) printf "%s%s", (n>1? ":" : ""), a[n] }')
This generates a variable SYSFNS which contains a colon-separated list of system-declared functions.
With that defined, you can check out which functions are yours:
myfns () {
    local fun
    declare -F |
    while read -r _ _ fun; do
        case :$SYSFNS: in *:"$fun":*) continue;; esac
        echo "$fun"
    done
}
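For example, assuming SYSFNS was captured at the top of your startup file before your own definitions (so myfns itself also counts as one of yours), a quick check might look like:
$ . ~/my_bash_scripts/testFile   # defines test and test2
$ myfns
myfns
test
test2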
In Bash the only way to get (user) input seems to be to use the read builtin, which pauses the rest of the script. Is there any way to receive command line input (ending with the Enter key) without pausing the script? From what I've seen there may be a way to do it with $1...?
read -t0 can be used to probe for input if your process is structured as a loop
#!/bin/bash
a='\|/-'
spin()
{
    sleep 0.3
    a="${a:1}${a:0:1}"
    echo -n $'\e'7$'\r'"${a:1:1}"$'\e'8
}
echo 'try these /|\- , dbpq , |)>)|(<( , =>-<'
echo -n " enter a pattern to spin:"
while true
do
    spin
    if read -t0
    then
        read a
        echo -n " using $a enter a new pattern:"
    fi
done
Otherwise, you could run one command in the background while prompting for input in the foreground, as sketched below.
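A rough sketch of that background variant (the long-running function is just a stand-in for the real job):
#!/bin/bash
long_running_work() { sleep 10; }   # stand-in for the real background job
long_running_work &
job=$!
read -r -p "enter something while the job runs: " answer
echo "you typed: $answer"
wait "$job"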
Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2 or in parallel (so it will be printed at the very beginning either way), record stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I have this done using progress-meter "Doing things..." "do-things arg1 arg2"; and evaluating the second argument inside, which is clumsy; I don't like it and believe there is a better solution.
The problem with the pipe syntax is that I don't know how I can get do-things' exit status from inside the pipeline; $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution like progress-meter "Doing things..." <(do-things arg1 arg2); would be fine, but in this case I also don't know how I can get the exit status of do-things.
I'll be happy to hear if there is some other neat syntax to achieve the same task without escaping the command to be executed as in my example.
I greatly hope for the community's help.
UPD1: As the question seems not to be clear enough, I'll paraphrase it:
I want a bash function that can be fed a command which will execute in parallel to the function; the bash function will receive its stdout+stderr, wait for completion, and get its exit status.
Example implementation using eval:
progress_meter() {
    local cmd="$2";
    local output;
    local errcode;
    echo -n -e "$1";
    output=$( {
        eval "${cmd}";
    } 2>&1; );
    errcode=$?;
    if (( errcode )); then {
        echo '[FAIL]';
        echo "Output was: ${output}"
    } else {
        echo '[ OK ]';
    }; fi;
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress meter
prompt="$1"
shift
echo "$prompt"
"$@" # this just executes a command made up of
     # arguments 2, 3, ... of the script
     # the real script should actually read its input,
     # display progress meter etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
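Combining that argument-passing idea with the [ OK ]/[FAIL] reporting from the question, an eval-free sketch (not the answerer's exact code, and assuming do-things exists as in the question) might look like:
progress_meter() {
    local prompt=$1 output
    shift
    printf '%s' "$prompt"
    if output=$("$@" 2>&1); then      # run the command built from the remaining arguments
        echo ' [ OK ]'
    else
        echo ' [FAIL]'
        printf 'Output was: %s\n' "$output"
    fi
}

progress_meter "Doing things..." do-things arg1 arg2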
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
If set, the return value of a pipeline is the
value of the last (rightmost) command to exit
with a non-zero status, or zero if all commands
in the pipeline exit successfully. This option
is disabled by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard Pattern: one process writes on the blackboard, another reads it):
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file (using a command line switch if you like, so as not to hardcode the name of the blackboard file) and report the exit status of the program whose progress it is tracking; see the sketch after this list.
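A rough sketch of that blackboard idea (passing the status file as an extra argument to progress-meter is hypothetical; adapt it to however you pass options):
# writer side: record do-things' exit status on the "blackboard"
status_file=$(mktemp)
{ do-things arg1 arg2; echo "$?" > "$status_file"; } 2>&1 | progress-meter "Doing things..." "$status_file"
# reader side (inside progress-meter, after its stdin closes):
#   status=$(< "$status_file")
#   [ "$status" -eq 0 ] && echo '[ OK ]' || echo '[FAIL]'
rm -f "$status_file"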