An interactive program running in an xterm can send control codes to the xterm, which cause the xterm to respond by sending back an answer. I would like to read the answer. There are several of these control codes, so for the sake of this discussion, let's stick with the control sequence ESC Z, which causes the xterm to send back its terminal id. For instance, when I type in my shell (I'm using Zsh, but as far as I can see, this applies to bash as well)
echo -e '\eZ'
I see in my terminal buffer the string
63;1;2;4;6;9;15;22;29c
and hear a beep (because the answer sent from xterm also contains non-printable characters, in particular it also contains an ESC). My goal is to somehow read this string 63;1;2;4;6;9;15;22;29c into a shell variable. I mainly use Zsh, but a solution in bash would be welcome as well (or in any other scripting language, such as Ruby, Perl or Python).
My first attempt was pretty straightforward:
#!/bin/zsh
echo -e '\eZ'
read -rs -k 25
echo $REPLY | xxd
This works, because I found out (with a little bit of trial and error) that the answerback string in this particular example on my particular xterm has a length of 25 characters. In the general case, of course, I don't know the exact length in advance, so I wanted a more flexible solution. The idea would be to read one character at a time, until nothing is left, and I wrote the following program to test my idea:
#!/bin/zsh
str=''
echo -e '\eZ'
while :
do
read -rs -t -k
if [[ -z $REPLY ]]
then
echo '(all read)'
break
fi
str="${str}$REPLY"
done
echo $str | xxd
However, this doesn't read anything. REPLY is always empty.
I also tried the variations read -rs -t -k 1 (same effect) and read -rs -k 1 (hangs forever). Even read -rs -k 25 does not work anymore, so I guess the culprit is the while loop, not the read command. However, I do need a loop if I want to read the answerback string one character at a time.
Could someone explain, why my approach failed, and how I could solve my problem?
You could make the read-statement read directly from the terminal, e.g.,
read -rs -t -k < $(tty)
to work around redirection of the shell.
Since I hadn't got any new findings for this question here, I have crossposted the problem at Unix/Linux Forums and received an answer for Zsh, and an improved answer for bash, which I would like to summarize here:
Assuming that tty contains the string denoting my terminal (following here the advice given by @ThomasDickey in his answer), the Zsh command
read -rs -t 0.2 -k 1 <$tty # zsh
reads the next single character into the variable REPLY. It is important to note that -t must be given a timeout value, and that the absence of a further character cannot be deduced from an empty REPLY, but only from the exit code of the read command.
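As a self-contained illustration of this point (a pipe stands in for the real terminal here, and the data is a made-up answerback), it is the exit status of read, not the content of REPLY, that ends the loop:

```shell
#!/usr/bin/env bash
# Sketch: read one character at a time until read fails (timeout or EOF).
# In the real script the redirection would be <$tty instead of a pipe.
answer=''
while IFS= read -rs -t 0.2 -n 1 ch; do
  answer="${answer}${ch}"
done < <(printf '63;1;2c')
printf '%s\n' "$answer"
```

The loop exits as soon as read returns non-zero, which also covers the case where a character read is a legitimate empty-looking byte.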
As for bash, there is an easier solution than the one I've posted in my comment: With
read -rs -t 0.2 -d "" <$tty # bash
the whole answerback string is read at once into REPLY; no loop is necessary.
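The same one-shot behavior can be demonstrated without a terminal (a pipe stands in for the tty, with made-up answerback data): -d "" tells read to consume input up to a NUL byte, so everything received before the timeout or EOF lands in one variable, even though read itself exits non-zero:

```shell
#!/usr/bin/env bash
# Sketch: read the whole reply in one go. read fails at EOF/timeout,
# but the variable still holds everything that was received.
IFS= read -rs -t 0.2 -d '' reply < <(printf '63;1;2;4c')
printf '%s\n' "$reply"
```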
Putting the answers here together, here is the finished product:
#!/usr/bin/env bash
tty=/dev/tty
cat >$tty                      # forward the query from stdin to the terminal
read -rs -t 0.2 -d "" <$tty   # silently read the whole answerback (0.2 s timeout)
printf '%s' "$REPLY" | xxd -r -p   # decode the hex-encoded payload of the reply
$ printf '\eP+q544e\e\\' | escape_code_answer_read.bash
TNxterm-kitty
I apologize in advance - I don't fully understand the ideas behind what I'm asking well enough to understand why it's not working (I don't know what I need to learn). I searched stack exchange for answers first - I found some information that seemed possibly relevant, but didn't explain the concepts well enough that I understood how to build a working solution. I've been scouring google but haven't found any information that describes exactly what's going on in such a way that I understand. Any direction to background concepts that may help me understand what's going on would be greatly appreciated.
Is it possible to get user input in a bash script that was executed from a pipe?
For example:
wget -q -O - http://myscript.sh | bash
And in the script:
read -p "Do some action (y/n): " __response
if [[ "$__response" =~ ^[Yy]$ ]]; then
echo "Performing some action ..."
fi
As I understand it, this doesn't work because read attempts to read the input from stdin and the bash script is currently "executing through that pipe" (I'm sure there is a more technically accurate way to describe what is occurring, but I don't know how).
I found a solution that recommended using:
read -t 1 __response </dev/tty
However, this does not work either.
Any light shed on the concepts I need to understand to make this work, or explanations of why it is not working or solutions would be greatly appreciated.
The tty solution works. Test it with this code, for example:
$ date | { read -p "Echo date? " r </dev/tty ; [ "$r" = "y" ] && cat || echo OK ; }
Echo date? y
Sat Apr 12 10:51:16 PDT 2014
$ date | { read -p "Echo date? " r </dev/tty ; [ "$r" = "y" ] && cat || echo OK ; }
Echo date? n
OK
The prompt from read appears on the terminal and read waits for a response before deciding to echo the date or not.
What I wrote above differs from the line below in two key aspects:
read -t 1 __response </dev/tty
First, the option -t 1 gives read a timeout of one second. Secondly, this command does not provide a prompt. The combination of these two probably means that, even though read was briefly asking for input, you didn't know it.
The main reason why this is not working is, as the OP validly indicated,
The | (pipe) which is used sends the standard output of the first command as standard input to the second command. In this case, the first command is
wget -q -O - http://myscript.sh
which passes a downloaded script via the pipe to its interpreter bash
The read statement in the script uses the same standard input to obtain its value.
So this is where it collapses because read is not awaiting input from you but takes it from its own script. Example:
$ cat - <<EOF | bash
> set -x
> read p
> somecommand
> echo \$p
> EOF
+ read p
+ echo somecommand
somecommand
In this example, I used a here-document which is piped to bash. The script enables debugging using set -x to show what is happening. As you see, somecommand is never executed but actually read by read and stored in the variable p which is then outputted by echo (note, the $ has been escaped to avoid the substitution in the here-document).
So how can we get this to work then?
First off, never pipe to an interpreter such as {ba,k,z,c,tc,}sh. It is ugly and should be avoided, even though it feels like the natural thing to do. The better approach is to use one of the interpreter's options:
bash -c string: If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
$ bash -c "$(command you want to pipe)"
This also works for zsh, csh, tcsh, ksh, sh and probably a lot of others.
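A self-contained sketch of the difference (a here-string stands in for the keyboard, and the script text is a hypothetical stand-in for a downloaded script): when the script travels in the -c argument instead of through stdin, stdin stays free for read:

```shell
#!/usr/bin/env bash
# With -c, the script text is an argument, not stdin,
# so read inside the script can consume stdin normally.
script='read answer; echo "you said: $answer"'
bash -c "$script" <<< "yes"
```

Had the same script been piped into bash, read would have consumed the next line of the script itself, as shown in the here-document example above.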
I'm currently trying to build a shell script that sends broadcast UDP packets. My problem is that my echo is outputting the arguments instead, and I have no idea why. Here's my script:
#!/bin/bash
# Script
var1="\xdd\x02\x00\x13\x00\x00\x00\x10\x46\x44\x30\x30\x37\x33\x45\x31\x39\x39\x45\x43\x31\x42\x39\x34\x00"
var2="\xdd\x00\x0a\x\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x02"
echo -ne $var1 | socat - UDP4-DATAGRAM:255.255.255.255:5050,broadcast
echo -ne $var2 | socat - UDP4-DATAGRAM:255.255.255.255:5050,broadcast
Using Wireshark I can see the script is printing -ne as characters and also is not converting each \xHH to the corresponding ASCII character.
Thanks!
I figured my problem out. It turns out I was running my script with sh ./script.sh instead of bash ./script.sh
echo implementations are hopelessly inconsistent about whether they take command options (like -ne) or simply treat them as part of the string to print, and/or whether they interpret escape sequences in the strings to print. It sounds like you're seeing a difference between bash's builtin version of echo vs. (I'm guessing) the version in /bin/echo. I've also seen it vary even between different versions of bash.
If you want consistent behavior for anything nontrivial, use printf instead of echo. It's slightly more complicated to use it correctly, but IMO worth it because your scripts won't randomly break because echo changed for whatever reason. The tricky thing about printf is that the first argument is special -- it's a format string in which all escape sequences are interpreted, and any % sequences tell it how to add in the rest of the arguments. Also, it doesn't add a linefeed at the end unless you specifically tell it to. In this case, you can just give it the hex codes in the format string:
printf "$var1" | socat - UDP4-DATAGRAM:255.255.255.255:5050,broadcast
printf "$var2" | socat - UDP4-DATAGRAM:255.255.255.255:5050,broadcast
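To see the portability point concretely, compare a POSIX-guaranteed octal escape with the \xHH form used above (the latter works in bash's printf but is an extension, not guaranteed by every /bin/sh):

```shell
#!/usr/bin/env bash
# POSIX printf guarantees \NNN octal escapes in the format string;
# bash's printf additionally accepts \xHH hex escapes.
printf '\110\151\n'   # octal escapes for "Hi"
printf '\x48\x69\n'   # hex escapes, a bash extension
```

Both lines print "Hi" under bash, but only the first is safe under a strictly POSIX sh, which is one more reason the sh-vs-bash distinction bit the original poster.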
I'm currently using the following to capture everything that goes to the terminal and throw it into a log file
exec 4<&1 5<&2 1>&2>&>(tee -a $LOG_FILE)
However, I don't want color escape codes/clutter going into the log file, so I have something like this that sorta works:
exec 4<&1 5<&2 1>&2>&>(
while read -u 0; do
#to terminal
echo "$REPLY"
#to log file (color removed)
echo "$REPLY" | sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' >> $LOG_FILE
done
unset REPLY #tidy
)
except read waits for carriage return which isn't ideal for some portions of the script (e.g. echo -n "..." or printf without \n).
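The sed expression itself can be checked in isolation (it relies on GNU sed for -r and the \x1B escape; the sample colored string is made up):

```shell
#!/usr/bin/env bash
# Strip ANSI color codes from a sample string.
# Note: [mK] matches the final byte of the sequence; the | inside the
# brackets in the question's version is a literal character, not alternation.
printf '\033[0;31mred\033[0m text\n' |
  sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mK]//g'
```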
Follow-up to Jonathan Leffler's answer:
Given the example script test.sh:
#!/bin/bash
LOG_FILE="./test.log"
echo -n >$LOG_FILE
exec 4<&1 5<&2 1>&2>&>(tee -a >(sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' > $LOG_FILE))
##### ##### #####
# Main
echo "starting execution"
printf "\n\n"
echo "color test:"
echo -e "\033[0;31mhello \033[0;32mworld\033[0m!"
printf "\n\n"
echo -e "\033[0;36mEnvironment:\033[0m\n foo: cat\n bar: dog\n your wife: hot\n fix: A/C"
echo -n "Before we get started. Is the above information correct? "
read YES
echo -e "\n[READ] $YES" >> $LOG_FILE
YES=$(echo "$YES" | sed 's/^\s*//;s/\s*$//')
test ! "$(echo "$YES" | grep -iE '^y(es)?$')" && echo -e "\nExiting... :(" && exit
printf "\n\n"
#...some hundreds of lines of code later...
echo "Done!"
##### ##### #####
# End
exec 1<&4 4>&- 2<&5 5>&-
echo "Log File: $LOG_FILE"
The output to the terminal is as expected and there is no color escape codes/clutter in the log file as desired. However upon examining test.log, I do not see the [READ] ... (see line 21 of test.sh).
The log file [of my actual bash script] contains the line Log File: ... at the end of it even after closing the 4 and 5 fds. I was able to resolve the issue by putting a sleep 1 before the second exec - I assume there's a race condition or fd shenanigans to blame for it. Unfortunately for you guys, I am not able to reproduce this issue with test.sh but I'd be interested in any speculation anyone may have.
Consider using the pee program discussed in Is it possible to distribute stdin over parallel processes. It would allow you to send the log data through your sed script, while continuing to send the colours to the actual output.
One major advantage of this is that it would remove the 'execute sed once per line of log output'; that is really diabolical for performance (in terms of number of processes executed, if nothing else).
I know it's not a perfect solution, but cat -v will convert non-visible chars like \x1B into a visible form like ^[[1;34m. The output will be messy, but at least it will be ASCII text.
I used to do stuff like this by setting TERM=dumb before running my command. That pretty much removed any control characters except for tab, CR, and LF. I have no idea if this works for your situation, but it's worth a try. The drawback is that you won't see color encodings on your terminal either, since it's a dumb terminal.
You can also try either vis or cat (especially the -v parameter) and see if these do something for you. You'd simply put them in your pipeline like this:
exec 4<&1 5<&2 1>&2>&>(tee -a >(cat -v > $LOG_FILE))
By the way, almost all terminal programs have an option to capture the input, and most clean it up for you. What platform are you on, and what type of terminal program are you using?
You could attempt to use the -n option for read. It reads n characters instead of waiting for a newline. You could set it to one. This would increase the number of iterations the code runs, but it would not wait for newlines.
From the man:
-n NCHARS read returns after reading NCHARS characters rather than waiting for a complete line of input.
Note: I have not tested this
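A quick check of the idea (bash, reading from a pipe rather than a terminal, with made-up input): read -n 1 returns after each character, newline or not:

```shell
#!/usr/bin/env bash
# Read one character per iteration; note the input has no trailing newline,
# yet every character is delivered. read fails only at EOF.
printf 'abc' | while IFS= read -r -n 1 ch; do
  printf '[%s]' "$ch"
done
printf '\n'
```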
You can use ANSIFilter to strip or transform console output with ANSI escape sequences.
See http://www.andre-simon.de/zip/download.html#ansifilter
Might not screen -L or the script commands be viable options instead of this exec loop?
So this is probably an easy question, but I am not much of a bash programmer and I haven't been able to figure this out.
We have a closed source program that calls a subprogram which runs until it exits, at which point the program will call the subprogram again. This repeats indefinitely.
Unfortunately the main program will sometimes spontaneously (and repeatedly) fail to call the subprogram after a random period of time. The eventual solution is to contact the original developers to get support, but in the meantime we need a quick hotfix for the issue.
I'm trying to write a bash script that will monitor the output of the program and when it sees a specific string, it will restart the machine (the program will run again automatically on boot). The bash script needs to pass all standard output through to the screen up until it sees the specific string. The program also needs to continue to handle user input.
I have tried the following with limited success:
./program1 | ./watcher.sh
watcher.sh is basically just the following:
while read line; do
echo $line
if [$line == "some string"]
then
#the reboot script works fine
./reboot.sh
fi
done
This seems to work OK, but leading whitespace is stripped on the echo statement, and the echo output hangs in the middle until the subprogram exits, at which point the rest of the output is printed to the screen. Is there a better way to accomplish what I need to do?
Thanks in advance.
I would do something along the lines of:
stdbuf -o0 ./program1 | grep --line-buffered "some string" | (read && reboot)
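The same pattern can be tried safely by substituting a harmless command for reboot and a printf for the real program:

```shell
#!/usr/bin/env bash
# Same structure as the one-liner above, with `echo MATCHED` standing in
# for the reboot: grep emits matching lines unbuffered, and the subshell
# fires as soon as the first match arrives.
printf 'foo\nsome string here\nbar\n' |
  grep --line-buffered "some string" |
  (read && echo "MATCHED")
```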
You need to quote your $line variable, i.e. "$line", for all references (except in the read line bit).
Your program1 is probably the source of the 'paused' data. It needs to flush its output buffer. You probably don't have control of that, so
a. Check if your system has the unbuffer command available. If so, try unbuffer cmd1 | watcher. You may have to experiment with which command you wrap unbuffer around; maybe you will have to do cmd1 | unbuffer watcher.
b. OR you can try wrapping watcher in a command group, i.e.
./program1 | { ./watcher.sh ; printf "\n" ; }
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
Use read's $REPLY variable; also I'd suggest using printf instead of echo:
while read; do
printf "%s\n" "$REPLY"
# '[[' is Bash, quotes are not necessary
# use '[ "$REPLY" == "some string" ]' if in another shell
if [[ $REPLY == "some string" ]]
then
#the reboot script works fine
./reboot.sh
fi
done
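One point worth verifying (with a made-up indented line): when read is given no variable name, bash assigns the line to REPLY without trimming, so the leading whitespace the question complained about survives:

```shell
#!/usr/bin/env bash
# With no NAME argument, read puts the raw line (whitespace intact) in REPLY.
printf '   indented line\n' | while read; do
  printf '%s\n' "$REPLY"
done
```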