WSL2 interop issue causes premature exit from read loop in shell script - bash

Summary
Using a read loop that runs a Windows executable in a WSL shell script causes the loop to exit after the first iteration.
Details
I've been quite baffled by what appears to be an interoperability issue with running Windows executables from a shell script in WSL2. The following while loop should print 3 lines, but it only prints "line 1". It has been tested on Ubuntu 20.04 in dash, bash, and zsh.
while read -r line; do
    powershell.exe /C "echo \"${line}\""
done << EOF
line 1
line 2
line 3
EOF
This issue also occurs when reading lines from a file instead of a heredoc, even if that file has Windows line endings. Note that if powershell were changed to /bin/bash or any other native executable, this would print 3 lines. Likewise, powershell could be replaced with any Windows executable (cmd.exe, explorer.exe, etc.) and it would still only run the first iteration. This appears to be a problem with read specifically, since the following loop works fine.
for line in "line 1" "line 2" "line 3"
do
    powershell.exe /C "echo \"${line}\""
done
Work-around
Thanks to this post I have discovered a work-around: pipe through a dummy command, e.g. echo "" | cmd.exe /C "echo \"${line}\"". A note about this fix is that only piping seems to work. Redirecting the output or running it through another layer of bash does not: /bin/bash -c "cmd.exe /C \"echo ${line}\"". I am partially posting this for improved visibility for anyone hitting this issue in the future, but I am still curious if anyone has any insight as to why this issue exists (perhaps due to line endings?). Thank you!
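Applied to the original loop, the work-around looks like this (same heredoc, with the dummy pipe added):
while read -r line; do
    # The dummy pipe gives the Windows executable its own stdin, so it
    # no longer consumes the heredoc lines meant for read.
    echo "" | powershell.exe /C "echo \"${line}\""
done << EOF
line 1
line 2
line 3
EOF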

Short Answer:
A slightly improved solution over echo "" | is to redirect stdin from /dev/null. This avoids potential newline issues from the echo, but there are other solutions as well:
while read -r line; do
    powershell.exe /C "echo \"${line}\"" < /dev/null
done << EOF
line 1
line 2
line 3
EOF
Explanation:
Well, you already had a solution, but what you really wanted was the explanation.
Jetchisel and MarkPlotnick are on the right track in the comments. This appears to be the same root cause (and solution) as in this question about ssh. To replicate your example with ssh (assuming a key in ssh-agent so that no password prompt is generated):
while read -r line; do
    ssh hostname echo ${line}
done << EOF
line 1
line 2
line 3
EOF
You will see the same results as with PowerShell: only "line 1" displays.
In both cases, the first line goes to the read statement, but the subsequent lines remain on stdin and are consumed by powershell.exe (or ssh) itself.
You can see this "proven" in PowerShell through a slight modification to your script:
while read -r line; do
    powershell.exe -c "echo \"--- ${line} ---\"; \$input"
done << EOF
line 1
line 2
line 3
EOF
Results in:
--- line 1 ---
line 2
line 3
The follow-up question is, IMHO, why bash doesn't have this issue. The answer is that PowerShell seems to always consume whatever stdin is available at the time of invocation and adds it to the $input magic variable. Bash, on the other hand, does not consume the additional stdin until explicitly asked:
while read -r line; do
    bash -c "echo --- \"${line}\" ---; cat /dev/stdin"
done << EOF
line 1
line 2
line 3
EOF
Generates the same results as the previous PowerShell example:
--- line 1 ---
line 2
line 3
Ultimately, the main solution with PowerShell is to force a second stdin redirection which is consumed instead of your desired input. echo "" | can do this, but be careful:
while read -r line; do
    echo "" | powershell.exe -c "echo \"--- ${line} ---\"; \$input"
done << EOF
line 1
line 2
line 3
EOF
Results in:
--- line 1 ---
--- line 2 ---
--- line 3 ---
< /dev/null doesn't have this issue, but you could also handle it with echo -n "" | instead.
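For completeness, the same demonstration loop with the newline-free echo; since echo -n "" writes nothing at all, $input stays empty:
while read -r line; do
    echo -n "" | powershell.exe -c "echo \"--- ${line} ---\"; \$input"
done << EOF
line 1
line 2
line 3
EOF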

Related

In bash print to line above terminal output

EDIT: Corrected process/thread terminology
My shell script has a foreground process that reads user input and a background process that prints messages. I would like to print these messages on the line above the input prompt rather than interrupting the input. Here's a canned example:
sleep 5 && echo -e "\nINFO: Helpful Status Update!" &
echo -n "> "
read input
When I execute it and type "input" a bunch of times, I get something like this:
> input input input inp
INFO: Helpful Status Update!
ut input
But I would like to see something like this:
INFO: Helpful Status Update!
> input input input input input
The solution need not be portable (I'm using bash on linux), though I would like to avoid ncurses if possible.
EDIT: According to @Nick, previous lines are inaccessible for historical reasons. However, my situation only requires modifying the current line. Here's a proof of concept:
# Make named pipe
mkfifo pipe
# Spawn background process
while true; do
    sleep 2
    echo -en "\033[1K\rINFO: Helpful Status Update!\n> `cat pipe`"
done &
# Start foreground user input
echo -n "> "
pid=-1
collected=""
IFS=""
while true; do
    read -n 1 c
    collected="$collected$c"
    # Named pipes block writes, so must do background process
    echo -n "$collected" >> pipe &
    # Kill last loop's (potentially) still-blocking pipe write
    if kill -0 $pid &> /dev/null; then
        kill $pid &> /dev/null
    fi
    pid=$!
done
This produces mostly the correct behavior, but lacks CLI niceties like backspace and arrow navigation. These could be hacked in, but I'm still having trouble believing that a standard approach hasn't already been developed.
The original ANSI codes still work in bash terminal on Linux (and MacOS), so you can use \033[F where \033 is the ESCape character. You can generate this in bash terminal by control-V followed by the ESCape character. You should see ^[ appear. Then type [F. If you test the following script:
echo "original line 1"
echo "^[[Fupdated line 1"
echo "line 2"
echo "line 3"
You should see output:
updated line 1
line 2
line 3
EDIT:
I forgot to add that using this in your script will cause the cursor to return to the beginning of the line, so further input will overwrite what you have typed already. You could use control-R on the keyboard to cause bash to re-type the current line and return the cursor to the end of the line.

Error when using exec vi

#!/bin/bash
if [ $# -ne 1 ]
then
    echo "USAGE:vitest filename"
else
    FILENAME=$1
    exec vi $FILENAME <<EOF
i
Line 1.
Line 2.
^[
ZZ
EOF
fi
exit 0
I'm trying to input "Line 1." and "Line 2." with exec vi using the here doc and commands.
When running the script it gives me the following:
Vim(?):Warning: Input is not from a terminal
Vim: Error reading input, exiting...
Press ENTER or type command to continueVim: Finished.
Vim: Error reading input, exiting...
Vim: Finished.
You want to start vi in ex mode, with a few minor changes to the script.
vi -e "$FILENAME" <<EOF
i
Line 1.
Line 2.
.
wq
EOF
exec is almost certainly unnecessary, especially since you have an exit command following vi. exec is used to replace the current script with the given command; it is not needed simply to execute a command.
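To illustrate the difference (a minimal sketch, not from the question):
#!/bin/bash
echo "before exec"
exec ls /tmp          # the shell process is replaced by ls here
echo "after exec"     # never runs; the script ceased to exist at the exec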
A brief history of UNIX text editors:
ed was the original editor, designed to work with a teletype rather than a video terminal.
ex was an extended version of ed.
vi was an editor that provided ex with a full-screen visual mode, in contrast with the line-oriented interface employed by ed and ex.
As suggested, ed:
ed file << END
1i
line1
line2
.
wq
END
The "dot" line means "end of input".
It can be written less legibly as a one-liner
printf "%s\n" 1i "line1" "line2" . wq | ed file
Use cat to concatenate the files:
$ cat file1.txt file2.txt | tee file3.txt
Line 1
Line 2
aaaa
bbbb
cccc
Using sed
If I understand correctly, you want to add two lines to the beginning of a file. In that case, as per Cyrus' suggestion, run:
#!/bin/bash
if [ $# -ne 1 ]
then
    echo "USAGE:vitest filename"
    exit 1
fi
sed -i.bak '1 s/^/line1\nline2\n/' "$1"
Notes:
When a shell variable is used, it should be in double-quotes unless you want word splitting and pathname expansion to be performed. This is important for file names, for example, as it is now common for them to contain whitespace (see the example after these notes).
It is best practice to use lower or mixed case names for shell variables. The system uses upper case names for its variables and you don't want to overwrite one of them accidentally.
In the check for the argument, the if statement should include an exit to prevent the rest of the script from being run in the case that no argument was provided. In the above, we added exit 1 which sets the exit code to 1 to signal an error.
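For example, with a file name containing a space (a made-up illustration):
$ touch "my file"
$ filename="my file"
$ ls $filename        # unquoted: word splitting yields two arguments
ls: cannot access 'my': No such file or directory
ls: cannot access 'file': No such file or directory
$ ls "$filename"      # quoted: one argument, as intended
my file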
Using vi
Let's start with this test file:
$ cat File
some line
Now, let's run vi and see what is in File afterward:
$ vi -s <(echo $'iline1\nline2\n\eZZ') File
$ cat File
line1
line2
some line
The above requires bash or similar.
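If process substitution isn't available, a rough equivalent (a sketch, untested) is to put the same normal-mode commands in a temporary file and pass it to vi with -s:
printf 'iline1\nline2\n\033ZZ' > vi.cmds   # \033 is the ESC that ends insert mode
vi -s vi.cmds File
rm vi.cmds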

bash - getting line by line - error caused by pipe?

I was trying to read the file servers.txt and ping every line in it.
It contains a server on each line.
#!/bin/bash
clear
output="pingtest.txt"
for line in < cat "servers.txt"
do
    ping $line >> "$output" 2>&1
done
But the script simply does not work because of the '<' on line 4.
What am I doing wrong?
A for loop loops over what are essentially positional parameters; it does not read from standard input. read reads from standard input.
You want
while read line; do
    …
done < "servers.txt"
This is the very first BASH FAQ.
If you would like to stick with a for loop you could try this:
#!/bin/bash
clear
output="pingtest.txt"
for line in `cat servers.txt`
do
    ping -c 1 -W 1 ${line} >> ${output} 2>&1
done
Also, you will want to provide a count and timeout for the ping command:
-c is the number of times to ping the server
-W is how long to wait for the server to respond

bash tee remove color

I'm currently using the following to capture everything that goes to the terminal and throw it into a log file
exec 4<&1 5<&2 1>&2>&>(tee -a $LOG_FILE)
However, I don't want color escape codes/clutter going into the log file, so I have something like this that sorta works:
exec 4<&1 5<&2 1>&2>&>(
    while read -u 0; do
        # to terminal
        echo "$REPLY"
        # to log file (color removed)
        echo "$REPLY" | sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' >> $LOG_FILE
    done
    unset REPLY # tidy
)
except read waits for a full line, which isn't ideal for some portions of the script (e.g. echo -n "..." or printf without \n).
Follow-up to Jonathan Leffler's answer:
Given the example script test.sh:
#!/bin/bash
LOG_FILE="./test.log"
echo -n >$LOG_FILE
exec 4<&1 5<&2 1>&2>&>(tee -a >(sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' > $LOG_FILE))
##### ##### #####
# Main
echo "starting execution"
printf "\n\n"
echo "color test:"
echo -e "\033[0;31mhello \033[0;32mworld\033[0m!"
printf "\n\n"
echo -e "\033[0;36mEnvironment:\033[0m\n foo: cat\n bar: dog\n your wife: hot\n fix: A/C"
echo -n "Before we get started. Is the above information correct? "
read YES
echo -e "\n[READ] $YES" >> $LOG_FILE
YES=$(echo "$YES" | sed 's/^\s*//;s/\s*$//')
test ! "$(echo "$YES" | grep -iE '^y(es)?$')" && echo -e "\nExiting... :(" && exit
printf "\n\n"
#...some hundreds of lines of code later...
echo "Done!"
##### ##### #####
# End
exec 1<&4 4>&- 2<&5 5>&-
echo "Log File: $LOG_FILE"
The output to the terminal is as expected and there are no color escape codes/clutter in the log file, as desired. However, upon examining test.log, I do not see the [READ] ... (see line 21 of test.sh).
The log file [of my actual bash script] contains the line Log File: ... at the end of it even after closing the 4 and 5 fds. I was able to resolve the issue by putting a sleep 1 before the second exec - I assume there's a race condition or fd shenanigans to blame for it. Unfortunately for you guys, I am not able to reproduce this issue with test.sh but I'd be interested in any speculation anyone may have.
Consider using the pee program discussed in Is it possible to distribute stdin over parallel processes. It would allow you to send the log data through your sed script, while continuing to send the colours to the actual output.
One major advantage of this is that it would remove the 'execute sed once per line of log output'; that is really diabolical for performance (in terms of number of processes executed, if nothing else).
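A sketch of what that could look like, assuming pee from moreutils (each argument runs as a shell command over a copy of stdin; test.sh and test.log are from the example above):
./test.sh 2>&1 | pee cat "sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mK]//g' >> test.log"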
I know it's not a perfect solution, but cat -v will convert non-visible characters like \x1B into a visible form like ^[[1;34m. The output will be messy, but at least it will be ASCII text.
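For example:
$ printf '\033[1;34mblue\033[0m\n' | cat -v
^[[1;34mblue^[[0m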
I used to do stuff like this by setting TERM=dumb before running my command. That pretty much removes any control characters except for tab, CR, and LF. I have no idea if this works for your situation, but it's worth a try. The problem is that you won't see color on your terminal either, since it's a dumb terminal.
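For example (a sketch; this only helps if the programs involved consult TERM before emitting color):
TERM=dumb ./test.sh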
You can also try either vis or cat (especially the -v parameter) and see if these do something for you. You'd simply put them in your pipeline like this:
exec 4<&1 5<&2 1>&2>&>(tee -a >(cat -v > $LOG_FILE))
By the way, almost all terminal programs have an option to capture the input, and most clean it up for you. What platform are you on, and what type of terminal program are you using?
You could attempt to use the -n option for read. It reads n characters instead of waiting for a new line. You could set it to one. This would increase the number of iterations the code runs, but it would not wait for newlines.
From the man:
-n NCHARS read returns after reading NCHARS characters rather than waiting for a complete line of input.
Note: I have not tested this
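Untested as well, but a sketch of how that might look in the loop above; note that read -n 1 returns an empty result for the newline delimiter, which has to be put back:
exec 4<&1 5<&2 1>&2>&>(
    while IFS= read -r -n 1 c; do
        [ -z "$c" ] && c=$'\n'      # empty c means read consumed a newline
        echo -n "$c"                # to terminal immediately, no waiting for a full line
        echo -n "$c" >> $LOG_FILE   # a real version would still need to buffer and strip escapes
    done
)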
You can use ANSIFilter to strip or transform console output with ANSI escape sequences.
See http://www.andre-simon.de/zip/download.html#ansifilter
Might not screen -L or the script commands be viable options instead of this exec loop?

Echoing the last command run in Bash?

I am trying to echo the last command run inside a bash script. I found a way to do it with some history, tail, head, and sed, which works fine when commands occupy a specific line in my script from a parser standpoint. However, under some circumstances I don't get the expected output, for instance when the command is inserted inside a case statement:
The script:
#!/bin/bash
set -o history
date
last=$(echo `history |tail -n2 |head -n1` | sed 's/[0-9]* //')
echo "last command is [$last]"
case "1" in
    "1")
        date
        last=$(echo `history |tail -n2 |head -n1` | sed 's/[0-9]* //')
        echo "last command is [$last]"
        ;;
esac
The output:
Tue May 24 12:36:04 CEST 2011
last command is [date]
Tue May 24 12:36:04 CEST 2011
last command is [echo "last command is [$last]"]
[Q] Can someone help me find a way to echo the last run command regardless of how/where this command is called within the bash script?
My answer
Despite the much appreciated contributions from my fellow SO'ers, I opted for writing a run function - which runs all its parameters as a single command and displays the command and its error code when it fails - with the following benefits:
- I only need to prepend the commands I want to check with run, which keeps them on one line and doesn't affect the conciseness of my script
- Whenever the script fails on one of these commands, the last output line of my script is a message that clearly displays which command failed along with its exit code, which makes debugging easier
Example script:
#!/bin/bash
die() { echo >&2 -e "\nERROR: $@\n"; exit 1; }
run() { "$@"; code=$?; [ $code -ne 0 ] && die "command [$*] failed with error code $code"; }
case "1" in
    "1")
        run ls /opt
        run ls /wrong-dir
        ;;
esac
The output:
$ ./test.sh
apacheds google iptables
ls: cannot access /wrong-dir: No such file or directory
ERROR: command [ls /wrong-dir] failed with error code 2
I tested various commands with multiple arguments, bash variables as arguments, quoted arguments... and the run function didn't break them. The only issue I've found so far is running an echo that breaks, but I don't plan to check my echos anyway.
Bash has built-in features to access the last command executed. But that's the last whole command (e.g. the whole case command), not individual simple commands like you originally requested.
!:0 = the name of the command executed.
!:1 = the first parameter of the previous command
!:4 = the fourth parameter of the previous command
!:* = all of the parameters of the previous command
!^ = the first parameter of the previous command (same as !:1)
!$ = the final parameter of the previous command
!:-3 = all parameters in range 0-3 (inclusive)
!:2-5 = all parameters in range 2-5 (inclusive)
!! = the previous command line
etc.
So, the simplest answer to the question is, in fact:
echo !!
...alternatively:
echo "Last command run was ["!:0"] with arguments ["!:*"]"
Try it yourself!
echo this is a test
echo !!
In a script, history expansion is turned off by default; you need to enable it with
set -o history -o histexpand
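A minimal sketch of a script using this; the !! expands to the previous command line, so the second echo prints "echo this is a test":
#!/bin/bash
set -o history -o histexpand
echo "this is a test"
echo !!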
The command history is an interactive feature. Only complete commands are entered in the history. For example, the case construct is entered as a whole, when the shell has finished parsing it. Neither looking up the history with the history built-in nor printing it through shell expansion (!:p) does what you seem to want, which is to print invocations of simple commands.
The DEBUG trap lets you execute a command right before any simple command execution. A string version of the command to execute (with words separated by spaces) is available in the BASH_COMMAND variable.
trap 'previous_command=$this_command; this_command=$BASH_COMMAND' DEBUG
…
echo "last command is $previous_command"
Note that previous_command will change every time you run a command, so save it to a variable in order to use it. If you want to know the previous command's return status as well, save both in a single command.
cmd=$previous_command ret=$?
if [ $ret -ne 0 ]; then echo "$cmd failed with error code $ret"; fi
Furthermore, if you only want to abort on a failed command, use set -e to make your script exit on the first failed command. You can display the last command from the EXIT trap.
set -e
trap 'echo "exit $? due to $previous_command"' EXIT
Note that if you're trying to trace your script to see what it's doing, forget all this and use set -x.
After reading the answer from Gilles, I decided to see if the $BASH_COMMAND var was also available (and the desired value) in an EXIT trap - and it is!
So, the following bash script works as expected:
#!/bin/bash
exit_trap () {
    local lc="$BASH_COMMAND" rc=$?
    echo "Command [$lc] exited with code [$rc]"
}
trap exit_trap EXIT
set -e
echo "foo"
false 12345
echo "bar"
The output is
foo
Command [false 12345] exited with code [1]
bar is never printed because set -e causes bash to exit the script when a command fails, and the false command always fails (by definition). The 12345 passed to false is just there to show that the arguments to the failed command are captured as well (the false command ignores any arguments passed to it).
I was able to achieve this by using set -x in the main script (which makes the script print out every command that is executed) and writing a wrapper script which just shows the last line of output generated by set -x.
This is the main script:
#!/bin/bash
set -x
echo some command here
echo last command
And this is the wrapper script:
#!/bin/sh
./test.sh 2>&1 | grep '^\+' | tail -n 1 | sed -e 's/^\+ //'
Running the wrapper script produces this as output:
echo last command
history | tail -2 | head -1 | cut -c8-
tail -2 returns the last two command lines from history
head -1 returns just first line
cut -c8- returns just the command line, removing the history event number and leading spaces.
There is a race condition between the last-command ($_) and last-error ($?) variables. If you try to store one of them in a variable of your own, both will already hold new values caused by the assignment itself; in that case the last-command variable has no useful value at all.
Here is what I did to store (nearly) both pieces of information in my own variables, so my bash script can determine whether there was any error AND set the title to the last run command:
# This construct is needed because of a race condition when trying to obtain
# both the last command and the last error. With this, the information about
# the last error is implied by the corresponding case while the command is
# retrieved.
if [[ "${?}" == 0 && "${_}" != "" ]] ; then
    # Last command MUST be retrieved first.
    LASTCOMMAND="${_}" ;
    RETURNSTATUS='✓' ;
elif [[ "${?}" == 0 && "${_}" == "" ]] ; then
    LASTCOMMAND='unknown' ;
    RETURNSTATUS='✓' ;
elif [[ "${?}" != 0 && "${_}" != "" ]] ; then
    # Last command MUST be retrieved first.
    LASTCOMMAND="${_}" ;
    RETURNSTATUS='✗' ;
    # Fixme: "$?" not changing state until command executed.
elif [[ "${?}" != 0 && "${_}" == "" ]] ; then
    LASTCOMMAND='unknown' ;
    RETURNSTATUS='✗' ;
    # Fixme: "$?" not changing state until command executed.
fi
This script will retain the information about whether an error occurred and will obtain the last run command. Because of the race condition, I cannot store the actual value. Besides, most commands don't even care about error numbers; they just return something different from '0'. You'll notice that if you use the errno extension of bash.
It should be possible with something like an "internal" script for bash, like a bash extension, but I'm not familiar with anything like that and it wouldn't be portable anyway.
CORRECTION
I didn't think it was possible to retrieve both variables at the same time. Although I like the style of the code, I assumed it would be interpreted as two commands. This was wrong, so my answer boils down to:
# Because of a race condition, both MUST be retrieved at the same time.
declare RETURNSTATUS="${?}" LASTCOMMAND="${_}" ;
if [[ "${RETURNSTATUS}" == 0 ]] ; then
    declare RETURNSYMBOL='✓' ;
else
    declare RETURNSYMBOL='✗' ;
fi
Although my post might not get any positive rating, I finally solved my problem myself. And this seems appropriate regarding the initial post. :)
