Can someone please explain the difference between these two while loops:
while read test; do
echo $test
done <<< "$(seq 5)"
-
while read test; do
echo $test
done < <(seq 5)
while read test; do
echo $test
done <<< "$(seq 5)"
Execute seq 5, collecting the result into a temporary variable. Then execute the while loop, feeding it the collected result.
while read test; do
echo $test
done < <(seq 5)
Set up a subshell to execute seq 5 and connect its stdout to stdin. Then start the while loop. When it finishes, restore stdin.
What's the difference? For seq 5, practically nothing; however, it can still be made visible by changing seq 5 to seq 5; echo done generating sequence >&2. Then you can see that in the first case, the entire seq execution finishes before the while loop starts, while in the second case they execute in parallel.
$ while read n; do echo $n > /dev/stderr; done \
> <<<"$(seq 5; echo done generating sequence >&2)"
done generating sequence
1
2
3
4
5
$ while read n; do echo $n > /dev/stderr; done \
> < <(seq 5; echo done generating sequence >&2)
1
2
done generating sequence
3
4
5
If it were seq 10000000, the difference would be much clearer. The <<<"$(...)" form would use a lot more memory, since it must hold the entire output as a temporary string.
Based on what I perceive, the only difference is that process substitution presents a named pipe, e.g. /dev/fd/63, as a file for input, whereas <<< "" sends the input internally, as if reading from a buffer. Of course, if the command reading the input is in another process, such as a subshell or another binary, the data is sent to it through a pipe either way. In environments where process substitution is not possible, such as Cygwin, here documents or here strings combined with command substitutions are more helpful.
If you run echo <(:), you can see how process substitution differs in concept from other string inputs: it expands to a file name such as /dev/fd/63.
Process substitution is more about representing a file, while here strings are more about sending in a buffer for input.
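A quick way to see both behaviors side by side (a minimal sketch; the exact /dev/fd number varies by system):

```shell
#!/usr/bin/env bash
# Process substitution expands to a file name that can be opened:
echo <(seq 3)            # prints a path such as /dev/fd/63
# Because it behaves like a file, redirecting from it works:
wc -l < <(seq 3)         # counts 3 lines
# A here string instead feeds the data directly on stdin:
wc -l <<< "$(seq 3)"     # also counts 3 lines
```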
The following short script prints a random ten-digit binary number:
#!/usr/bin/zsh
point=''
for i in `seq 1 10`
do
echo $RANDOM >/dev/null
point=$point`if [ $RANDOM -gt 16383 ]; then echo 0; else echo 1; fi`
done
echo $point
However, if I remove the apparently useless echo $RANDOM >/dev/null line, the script always prints either 1111111111 or 0000000000.
Why?
Subshells (as created by backticks, or their modern replacement $()) execute in a different context from the parent shell -- meaning that when they exit, all process-local state changes -- including the random number generator's state -- are thrown away.
Reading from $RANDOM inside the parent shell updates the RNG's state, which is why the echo $RANDOM >/dev/null has an effect.
That said, don't do that. Do something like this, which has no subshells at all:
point=
for ((i=0; i<10; i++)); do
point+=$(( (RANDOM > 16383) ? 0 : 1 ))
done
If you test this generating more than 10 digits -- try, say, 1000, or 10000 -- you'll also find that it performs far better than the original did.
From the man page:
The values of RANDOM form an intentionally-repeatable pseudo-random sequence; subshells that reference
RANDOM will result in identical pseudo-random values unless the value of RANDOM is referenced or seeded
in the parent shell in between subshell invocations.
The "useless" call to echo provides the reference that allows the subshell induced by the command substitution to produce a different value each time.
As other answers have stated, Zsh $RANDOM doesn't behave the same as bash and repeats the same sequence if invoked within a subshell. That behavior can be seen with this simple example:
$ # bash: different numbers; zsh: all the same number
$ for i in {0..99}; do echo $(echo $RANDOM); done
22865
22865
...
22865
What the other answers don't cover is how to get around this behavior in Zsh. While echo $RANDOM >/dev/null helps if you know $RANDOM was invoked, it doesn't help much if it's tucked away in a function you're writing. That puts an extra burden on the caller to do this weird echo:
function rolldice {
local i times=${1:-1} sides=${2:-6}
for i in $(seq $times); do
echo $(( 1 + $RANDOM % $sides ))
done
}
# all 4 players rolled the exact same thing!
for i in {1..4}; do echo $(rolldice 10); done
# seriously 🙄...
for i in {1..4}; do echo $(rolldice 10); echo $RANDOM >/dev/null; done
While it's more expensive (slower), you could get around this by running $RANDOM within a new Zsh session to get results similar to Bash:
function rolldice {
local i times=${1:-1} sides=${2:-6}
for i in $(seq $times); do
echo $(( 1 + $(zsh -c 'echo $RANDOM') % $sides ))
done
}
# now all 4 players rolled different results!
for i in {1..4}; do echo $(rolldice 10); done
There are other, better ways than $RANDOM to get a uniformly random number in Zsh, but this works in a pinch. One popular alternative is random=$(od -vAn -N4 -t u4 < /dev/urandom); echo $random.
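As a sketch of that od-based alternative (the function name rolldie is made up for illustration), wrapped up so it behaves the same inside subshells in both bash and zsh:

```shell
#!/usr/bin/env bash
# Draw 4 bytes from the kernel RNG as an unsigned 32-bit integer,
# then reduce it to a die roll. No shell RNG state is involved,
# so subshells don't repeat values the way zsh's $RANDOM does.
rolldie() {
  local raw
  raw=$(od -vAn -N4 -t u4 < /dev/urandom | tr -d ' ')
  echo $(( 1 + raw % 6 ))   # plain modulo; has a tiny bias toward low values
}
rolldie
```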
I'm new to programming in general and my teacher is starting me out with simple bash scripts.
This is what I did, but it's not working.
seq 1 3 | while read a; do
echo 123
done
What am I doing wrong?
As seen in the comments, what you want is to print each value produced by seq, one per iteration.
Thus, use echo "$a" (good to quote!):
seq 1 3 | while read a; do
echo "$a"
done
By the way, for safety it is good to use IFS= (clearing the internal field separator) and -r for the read, to prevent surprises with whitespace and backslashes. Also, you can avoid the pipe by feeding the input via process substitution: < <(seq ...). Finally, seq 1 3 is the same as seq 3, since 1 is the default starting point. All together:
while IFS= read -r a
do
echo "$a"
done < <(seq 3)
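To see what IFS= and -r actually guard against, compare a bare read with the safe form (a small sketch; the brackets mark where whitespace survives):

```shell
#!/usr/bin/env bash
# The input line has leading spaces and a literal backslash:
printf '  indented\\n line\n' | while read a; do
  echo "[$a]"    # → [indentedn line]: spaces stripped, backslash eaten
done
printf '  indented\\n line\n' | while IFS= read -r a; do
  echo "[$a]"    # → [  indented\n line]: preserved verbatim
done
```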
The echo command is the problem. The example should look like
seq 1 3 | while read a; do
echo "$a"
done
Rather than printing the literal string 123 on every iteration, you should print the individual sequence values read via the pipe from seq.
Bash allows the use of: cat <(echo "$FILECONTENT")
Bash also allows: while read i; do echo $i; done </etc/passwd
To combine the previous two, this can be used: echo $FILECONTENT | while read i; do echo $i; done
The problem with the last one is that it creates a subshell, and after the while loop ends, the variable i can no longer be accessed.
My question is:
How can I achieve something like this: while read i; do echo $i; done <(echo "$FILECONTENT"), or in other words: how can I be sure that i survives the while loop?
Please note that I am aware of enclosing the while statement in {}, but this does not solve the problem (imagine that you want to use the while loop in a function and return the i variable)
The correct notation for Process Substitution is:
while read i; do echo $i; done < <(echo "$FILECONTENT")
The last value of i assigned in the loop is then available when the loop terminates.
An alternative is:
echo $FILECONTENT |
{
while read i; do echo $i; done
...do other things using $i here...
}
The braces are an I/O grouping operation and do not themselves create a subshell. In this context they are part of a pipeline, and it is the pipeline that runs them in a subshell: the cause is the |, not the { ... }. You mention this in the question. AFAIK, you can do a return from within these inside a function.
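A small demonstration of that distinction (a sketch; i is just a scratch variable):

```shell
#!/usr/bin/env bash
# A brace group on its own runs in the current shell,
# so the assignment survives:
i=0; { i=5; }; echo "$i"                      # prints 5
# The same group as part of a pipeline runs in a subshell,
# so the assignment is lost:
i=0; echo x | { read -r _; i=5; }; echo "$i"  # prints 0
```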
Bash also provides the shopt builtin and one of its many options is:
lastpipe
If set, and job control is not active, the shell runs the last command of a pipeline not executed in the background in the current shell environment.
Thus, using something like this in a script makes the modified sum available after the loop:
FILECONTENT="12 Name
13 Number
14 Information"
shopt -s lastpipe # Comment this out to see the alternative behaviour
sum=0
echo "$FILECONTENT" |
while read number name; do ((sum+=$number)); done
echo $sum
Doing this at the command line usually runs foul of the 'job control is not active' condition (that is, at an interactive command line, job control is active). Testing this without using a script failed.
Also, as noted by Gareth Rees in his answer, you can sometimes use a here string:
while read i; do echo $i; done <<< "$FILECONTENT"
This doesn't require shopt; you may be able to save a process using it.
Jonathan Leffler explains how to do what you want using process substitution, but another possibility is to use a here string:
while read i; do echo "$i"; done <<<"$FILECONTENT"
This saves a process.
This bash function makes $NUM duplicates of the jpg files in the current directory:
function makeDups() {
    NUM=$1
    echo "Making $NUM duplicates for $(ls -1 *.jpg | wc -l) files"
    ls -1 *.jpg | sort | while read -r f
    do
        COUNT=0
        while [ "$COUNT" -lt "$NUM" ]
        do
            cp "$f" "${f//sm/${COUNT}sm}"
            ((COUNT++))
        done
    done
}
I wonder if it is possible to write a "for i in {n..k}" loop with a variable.
For example;
for i in {1..5}; do
echo $i
done
This outputs
1
2
3
4
5
On the other hand,
var=5
for i in {1..$var}; do
echo $i
done
prints
{1..5}
How can I make second code run as same as first one?
p.s. I know there is lots of way to create a loop by using a variable but I wanted to ask specifically about this syntax.
It is not possible to use variables in the {N..M} syntax. Instead, what you can do is use seq:
$ var=5
$ for i in $(seq 1 $var) ; do echo "$i"; done
1
2
3
4
5
Or...
$ start=3
$ end=8
$ for i in $(seq $start $end) ; do echo $i; done
3
4
5
6
7
8
While seq is fine, it can cause problems if the value of $var is very large, since the entire list of values has to be generated and the resulting command line may become too long. bash also has a C-style for loop which doesn't generate the list explicitly:
for ((i=1; i<=$var; i++)); do
echo "$i"
done
(This applies to constant sequences as well, since {1..10000000} would also generate a very large list which could overflow the command line.)
You can use eval for this:
$ num=5
$ for i in $(eval echo {1..$num}); do echo $i; done
1
2
3
4
5
Please read drawbacks of eval before using.
According to the bash(1) man page, when I run the following:
set -e
x=2
echo Start $x
while [ $((x--)) -gt 0 ]; do echo Loop $x; done | cat
echo End $x
The output will be:
Start 2
Loop 1
Loop 0
End 2
After the loop (which runs in a subshell), the variable x is reset to 2. But if I remove the pipe, x will be updated:
Start 2
Loop 1
Loop 0
End -1
I need to change the x but, I need the pipe too.
Any idea how to get around this problem?
bash always (at least as of 4.2) runs all non-rightmost parts of a pipeline in subshells; since the while loop here is not the rightmost element, its changes to x are lost. If the value of x needs to change in the calling shell, you must rewrite your code to avoid the pipeline.
One horrible-looking example:
# If you commit to one bash feature, may as well commit to them all:
# Arithmetic compound: (( x-- > 0 ))
# Process substitution: > >( cat )
while (( x-- > 0 )); do echo Loop $x; done > >( cat )
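Putting that rewrite back into the original script (a sketch; note that cat's output is asynchronous, so its lines may interleave with the final echo):

```shell
#!/usr/bin/env bash
x=2
echo "Start $x"
# The loop now runs in the current shell; its output still
# flows through cat via the process substitution.
while (( x-- > 0 )); do echo "Loop $x"; done > >(cat)
echo "End $x"   # prints End -1: the change to x survives
```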