Variable value inconsistency on BASH and CSH - shell

May God never give you the bane of working on Solaris.
Now, I am trying to run this simple shell script:
#!/bin/sh
input="a
b
c"
data="123"
while read eachline
do
data="$data$eachline"
done << EOF
$(echo "$input")
EOF
echo "$data"
exit 0
On RHEL (bash) I get the expected output, i.e. "123abc"; on Solaris, however, I get just "123".
After a fair bit of googling, I realized that Solaris runs the code inside the while loop in a forked subprocess, so the change to $data is not visible outside the loop.
Any way to make this code work on both platforms would be greatly appreciated.
And oh yes, using a temp file for the redirection would not be a very elegant solution :| .

Do you have a bash executable on the Solaris box? I note you're referring to bash on RHEL, but your shebang is #!/bin/sh (i.e. the vanilla Bourne shell).
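If bash is installed, a quick way to find it (the paths below are just the usual Solaris locations, not a guarantee for your box):
which bash || ls /usr/bin/bash /usr/local/bin/bash 2>/dev/null
If one of those exists, you could point the shebang at it, e.g. #!/usr/bin/bash, and your original script should then behave as it does on RHEL.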

How about this (please note backticks):
#!/bin/sh
input="a
b
c"
data="123"
z=`while read eachline
do
data="$data$eachline"
done << EOF
$(echo "$input")
EOF
echo $data
`
echo $z
exit 0

Your script works fine on Mac OS X 10.5, which is certified against the Single UNIX Specification, version 3.
The behaviour of while in /bin/sh on Solaris is very strange. Can you provide the link where you found the forking issue?
A possible solution may be to use another shell, e.g., /bin/ksh.
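For instance, assuming ksh lives at /bin/ksh on your box (myscript.sh below is just a placeholder name), either change the first line of the script to
#!/bin/ksh
or run it explicitly without editing it:
/bin/ksh ./myscript.sh
In ksh the loop fed by the here-document should run in the current shell, so $data should keep its value after the loop.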
Edit
The examples provided by your links don't show an issue with while but rather the normal behaviour of any shell. The basic construct in all of them is:
first_command | second_command_updating_variable
The fact that second_command_updating_variable is a while loop is not important. What is important is that the second command of the pipe is executed in a subshell and cannot modify variables of its parent shell.
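To make that concrete, here is a minimal demonstration of the general rule (just an illustration of how bash and similar shells behave, not the OP's script):
data="123"
printf 'a\nb\nc\n' | while read eachline
do
data="$data$eachline"   # runs in the subshell created for the pipeline
done
echo "$data"            # prints 123: the appended values were lost
(ksh and zsh are notable exceptions: they run the last element of a pipeline in the current shell, so there the appended values survive.)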

The code below solved my problem. I guess the key was to echo the variable from within the subshell and, of course, not to use the here-document.
I have tested this code on CSH/BASH/KSH.
#!/bin/sh
input="a
b
c"
printf "$input" | {
data="123"
while read eachline
do
data="$data$eachline"
done
echo "$data"
}
exit 0
My heartfelt thanks to all who participated in this discussion.

Why execute a subshell in the here document?
while read eachline
do
data="$data$eachline"
done << EOF
$(echo "$input")
EOF
You could just interpolate:
while read eachline
do
data="$data$eachline"
done << EOF
$input
EOF
Or echo:
echo "$input" | while read eachline
do
data="$data$eachline"
done
Actually, when I tested these, they all worked on Solaris just fine for me, so I'm not sure why you had trouble.

Related

trouble capturing output of a subshell that has been backgrounded

I'm attempting to make a "simple" parallel function in bash. The current problem is that when the line that captures the output is backgrounded, the output is lost. If that line is not backgrounded, the output is captured fine, but that of course defeats the purpose of the function.
#!/usr/bin/env bash
cluster="${1:-web100s}"
hosts=($(inventory.pl bash "$cluster" | sort -V))
cmds="${2:-uptime}"
parallel=10
cx=0
total=0
for host in "${hosts[@]}"; do
output[$total]=$(echo -en "$host: ")
echo "${output[$total]}"
output[$total]+=$(ssh -o ConnectTimeout=5 "$host" "$cmds") &
cx=$((cx + 1))
total=$((total + 1))
if [[ $cx -gt $parallel ]]; then
wait >&/dev/null
cx=0
fi
done
echo -en "***** DONE *****\n Results\n"
for ((i = 0; i < total; i++)); do
echo "${output[$i]}"
done
That's because your command (the assignment) is run in a subshell, so the assignment can't influence the parent shell. It boils down to this:
a=something
a='hello senorsmile' &
echo "$a"
Can you guess what the output is? It is, of course,
something
and not hello senorsmile. The only way for the subshell to communicate with the parent shell is to use some form of IPC (interprocess communication). I don't have a solution to propose; I only tried to explain why it fails.
If you think of it, it should make sense. What do you think of this?
a=$( echo a; sleep 1000000000; echo b ) &
The command immediately returns (after forking)... but the output is only going to be fully available in... over 31 years.
Assigning a shell variable in the background this way is effectively meaningless. Bash does have built-in co-processing, which should work for you:
http://www.gnu.org/software/bash/manual/bashref.html#Coprocesses
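A minimal sketch of the coproc idea (bash 4+ only; this is just an illustration of the mechanism, not a drop-in replacement for the ssh loop above, and uptime merely stands in for the remote command):
#!/usr/bin/env bash
# Start a background command whose stdout is connected back to this shell.
coproc WORKER { uptime; }
# Read its output in the *current* shell, so the variable survives.
read -r result <&"${WORKER[0]}"
echo "captured: $result"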

Using read -p in a bash script that was executed from pipe

I apologize in advance - I don't fully understand the ideas behind what I'm asking well enough to understand why it's not working (I don't know what I need to learn). I searched stack exchange for answers first - I found some information that seemed possibly relevant, but didn't explain the concepts well enough that I understood how to build a working solution. I've been scouring google but haven't found any information that describes exactly what's going on in such a way that I understand. Any direction to background concepts that may help me understand what's going on would be greatly appreciated.
Is it possible to get user input in a bash script that was executed from a pipe?
For example:
wget -q -O - http://myscript.sh | bash
And in the script:
read -p "Do some action (y/n): " __response
if [[ "$__response" =~ ^[Yy]$ ]]; then
echo "Performing some action ..."
fi
As I understand it, this doesn't work because read attempts to read the input from stdin and the bash script is currently "executing through that pipe" (I'm sure there is a more technically accurate way to describe what is occurring, but I don't know it).
I found a solution that recommended using:
read -t 1 __response </dev/tty
However, this does not work either.
Any light shed on the concepts I need to understand to make this work, or explanations of why it is not working or solutions would be greatly appreciated.
The tty solution works. Test it with this code, for example:
$ date | { read -p "Echo date? " r </dev/tty ; [ "$r" = "y" ] && cat || echo OK ; }
Echo date? y
Sat Apr 12 10:51:16 PDT 2014
$ date | { read -p "Echo date? " r </dev/tty ; [ "$r" = "y" ] && cat || echo OK ; }
Echo date? n
OK
The prompt from read appears on the terminal and read waits for a response before deciding to echo the date or not.
What I wrote above differs from the line below in two key aspects:
read -t 1 __response </dev/tty
First, the option -t 1 gives read a timeout of one second. Second, this command does not provide a prompt. The combination of the two probably means that, even though read was briefly asking for input, you didn't know it.
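Putting the pieces together, the line from the question should behave as expected once it has a prompt and no timeout, for example:
read -p "Do some action (y/n): " __response </dev/tty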
The main reason this is not working is, as the OP rightly indicated:
The | (pipe) sends the standard output of the first command to the standard input of the second command. In this case, the first command is
wget -q -O - http://myscript.sh
which passes the downloaded script via the pipe to its interpreter, bash.
The read statement in the script uses the same standard input to obtain its value.
So this is where it collapses: read is not awaiting input from you but takes it from its own script. Example:
$ cat - <<EOF | bash
> set -x
> read p
> somecommand
> echo \$p
> EOF
+ read p
+ echo somecommand
somecommand
In this example, I used a here-document that is piped to bash. The script enables debugging with set -x to show what is happening. As you can see, somecommand is never executed; instead it is read by read and stored in the variable p, which is then output by echo (note that the $ has been escaped to avoid substitution in the here-document).
So how can we get this to work then?
First off, never pipe to an interpreter such as {ba,k,z,c,tc,}sh. It is ugly and should be avoided, even though it feels like the natural thing to do. The better approach is to use one of the interpreter's options:
bash -c string: If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
$ bash -c "$(command you want to pipe)"
This also works for zsh, csh, tcsh, ksh, sh and probably a lot of others.
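Applied to the command in the question, that would look roughly like this (the same URL as above, shown only as an illustration):
bash -c "$(wget -q -O - http://myscript.sh)"
Because the script is now passed as an argument rather than on standard input, stdin stays connected to the terminal and read -p can prompt normally.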

How to use the read command in Bash?

When I try to use the read command in Bash like this:
echo hello | read str
echo $str
Nothing is echoed, while I think str should contain the string hello. Can anybody please help me understand this behavior?
The read command in your script is fine. However, you execute it in a pipeline, which means it runs in a subshell; therefore, the variables it reads into are not visible in the parent shell. You can either
move the rest of the script in the subshell, too:
echo hello | { read str
echo $str
}
or use command substitution to get the value of the variable out of the subshell
str=$(echo hello)
echo $str
or a slightly more complicated example (Grabbing the 2nd element of ls)
str=$(ls | { read a; read a; echo $a; })
echo $str
Other bash alternatives that do not involve a subshell:
read str <<END # here-doc
hello
END
read str <<< "hello" # here-string
read str < <(echo hello) # process substitution
Typical usage might look like:
i=0
echo -e "hello1\nhello2\nhello3" | while read str ; do
echo "$((++i)): $str"
done
and output
1: hello1
2: hello2
3: hello3
The value disappears since the read command is run in a separate subshell: Bash FAQ 24
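As an additional note (not from the FAQ above, just another option): bash 4.2 and later have a lastpipe option that runs the last element of a pipeline in the current shell, which makes the original one-liner behave as expected inside a script (lastpipe only takes effect when job control is off, the default for non-interactive shells):
#!/bin/bash
shopt -s lastpipe        # bash 4.2+, non-interactive shell
echo hello | read str    # read now runs in the current shell
echo "$str"              # prints: hello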
To put in my two cents here: on KSH, reading into a variable this way will work, because according to the IBM AIX documentation, KSH's read does affect the current shell environment:
The setting of shell variables by the read command affects the current shell execution environment.
This just resulted in me spending a good few minutes figuring out why a one-liner ending with read that I've used a zillion times before on AIX didn't work on Linux... it's because KSH saves to the current environment and BASH doesn't!
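In other words, the original example works in ksh and prints nothing in bash, purely because of where each shell runs the last part of the pipeline:
echo hello | read str
echo "$str"    # ksh: hello    bash: (empty)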
I really only use read with "while" and a do loop:
echo "This is NOT a test." | while read -r a b c theRest; do
echo "$a" "$b" "$theRest"; done
This is a test.
For what it's worth, I have seen the recommendation to always use -r with the read command in bash.
You don't need echo to use read
read -p "Guess a Number" NUMBER
Another alternative altogether is to use the printf function.
printf -v str 'hello'
Moreover, this construct, combined with the use of single quotes where appropriate, helps to avoid the multi-escape problems of subshells and other forms of interpolative quoting.
Do you need the pipe?
echo -ne "$MENU"
read NUMBER

Bash Script Monitor Program for Specific Output

So this is probably an easy question, but I am not much of a bash programmer and I haven't been able to figure this out.
We have a closed source program that calls a subprogram which runs until it exits, at which point the program will call the subprogram again. This repeats indefinitely.
Unfortunately the main program will sometimes spontaneously (and repeatedly) fail to call the subprogram after a random period of time. The eventual solution is to contact the original developers to get support, but in the meantime we need a quick hotfix for the issue.
I'm trying to write a bash script that will monitor the output of the program and when it sees a specific string, it will restart the machine (the program will run again automatically on boot). The bash script needs to pass all standard output through to the screen up until it sees the specific string. The program also needs to continue to handle user input.
I have tried the following with limited success:
./program1 | ./watcher.sh
watcher.sh is basically just the following:
while read line; do
echo $line
if [$line == "some string"]
then
#the reboot script works fine
./reboot.sh
fi
done
This seems to work OK, but leading whitespace is stripped in the echo statement, and the echoed output hangs in the middle until the subprogram exits, at which point the rest of the output is printed to the screen. Is there a better way to accomplish what I need to do?
Thanks in advance.
I would do something along the lines of:
stdbuf -o0 ./program1 | grep --line-buffered "some string" | (read && reboot)
You need to quote your $line variable, i.e. "$line", for all references (except in the read line bit).
Your program1 is probably the source of the 'paused' data; it needs to flush its output buffer. You probably don't have control of that, so:
a. Check whether your system has the unbuffer command available. If so, try unbuffer cmd1 | watcher. You may have to experiment with which command you wrap unbuffer around; maybe you will have to do cmd1 | unbuffer watcher.
b. OR you can try wrapping watcher as a group command (I think that is the right terminology), i.e.
./program1 | { ./watcher.sh ; printf "\n" ; }
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
Use read's $REPLY variable; also, I'd suggest using printf instead of echo:
while read; do
printf "%s\n" "$REPLY"
# '[[' is Bash, quotes are not necessary
# use '[ "$REPLY" == "some string" ]' if in another shell
if [[ $REPLY == "some string" ]]
then
#the reboot script works fine
./reboot.sh
fi
done
