How do I iterate through each line of a text file with Bash?
With this script:
echo "Start!"
for p in (peptides.txt)
do
echo "${p}"
done
I get this output on the screen:
Start!
./runPep.sh: line 3: syntax error near unexpected token `('
./runPep.sh: line 3: `for p in (peptides.txt)'
(Later I want to do something more complicated with $p than just output to the screen.)
The environment variable SHELL is (from env):
SHELL=/bin/bash
/bin/bash --version output:
GNU bash, version 3.1.17(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.
cat /proc/version output:
Linux version 2.6.18.2-34-default (geeko@buildhost) (gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)) #1 SMP Mon Nov 27 11:46:27 UTC 2006
The file peptides.txt contains:
RKEKNVQ
IPKKLLQK
QYFHQLEKMNVK
IPKKLLQK
GDLSTALEVAIDCYEK
QYFHQLEKMNVKIPENIYR
RKEKNVQ
VLAKHGKLQDAIN
ILGFMK
LEDVALQILL
One way to do it is:
while read p; do
echo "$p"
done <peptides.txt
As pointed out in the comments, this has the side effects of trimming leading and trailing whitespace, interpreting backslash sequences, and skipping the last line if it's missing a terminating linefeed. If these are concerns, you can do:
while IFS="" read -r p || [ -n "$p" ]
do
printf '%s\n' "$p"
done < peptides.txt
If, exceptionally, the loop body may also read from standard input, you can open the file on a different file descriptor:
while read -u 10 p; do
...
done 10<peptides.txt
Here, 10 is just an arbitrary number (different from 0, 1, 2).
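For instance, here is a minimal sketch of why the separate descriptor matters; the prompt text is made up for illustration:
while read -u 10 p; do
    read -r -p "Process $p? [y/n] " answer    # this read still uses the terminal's standard input
    [ "$answer" = "y" ] && echo "processing $p"
done 10<peptides.txt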
cat peptides.txt | while read line
do
# do something with $line here
done
and the one-liner variant:
cat peptides.txt | while read line; do something_with_$line_here; done
These options will skip the last line of the file if there is no trailing line feed.
You can avoid this as follows:
cat peptides.txt | while read line || [[ -n $line ]];
do
# do something with $line here
done
Option 1a: While loop: Single line at a time: Input redirection
#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do
echo "$p"
done < "$filename"
Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).
#!/bin/bash
filename='peptides.txt'
exec 4<"$filename"
echo Start
while read -u4 p ; do
echo "$p"
done
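If you open a descriptor with exec like this, you may also want to close it once the loop is done; a small optional addition:
exec 4<&-    # close file descriptor 4 again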
This is no better than the other answers, but it is one more way to get the job done, for files without spaces in the lines (see comments). I find that I often need one-liners to dig through lists in text files without the extra step of using separate script files.
for word in $(cat peptides.txt); do echo $word; done
This format allows me to put it all on one command line. Change the "echo $word" portion to whatever you want, and you can issue multiple commands separated by semicolons. The following example uses the file's contents as arguments to two other scripts you may have written.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done
Or, if you intend to use this like a stream editor (learn sed), you can dump the output to another file as follows.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done > outfile.txt
I've used these as written above because my text files are created with one word per line (see comments). If you have spaces that you don't want splitting your words/lines, it gets a little uglier, but the same command still works as follows:
OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh $line; cmd_b.py $line; done > outfile.txt; IFS=$OLDIFS
This just tells the shell to split on newlines only, not spaces, then returns the environment back to what it was previously. At this point, you may want to consider putting it all into a shell script rather than squeezing it all into a single line, though.
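For reference, a rough sketch of the same thing as a standalone script; cmd_a.sh and cmd_b.py are the same hypothetical scripts as above:
#!/bin/bash
# split on newlines only, run the two commands per line, then restore IFS
OLDIFS=$IFS
IFS=$'\n'
for line in $(cat peptides.txt); do
    cmd_a.sh "$line"
    cmd_b.py "$line"
done > outfile.txt
IFS=$OLDIFS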
Best of luck!
A few more things not covered by other answers:
Reading from a delimited file
# ':' is the delimiter here, and there are three fields on each line in the file
# IFS set below is restricted to the context of `read`, it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
# process the fields
# if the line has less than three fields, the missing fields will be set to an empty string
# if the line has more than three fields, `field3` will get all the values, including the third field plus the delimiter(s)
done < input.txt
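A minimal usage sketch with made-up sample data, just to show where the fields land:
printf '%s\n' 'alice:x:1001' 'bob:x:1002' > input.txt
while IFS=: read -r field1 field2 field3; do
    echo "name=$field1 uid=$field3"
done < input.txt
# prints: name=alice uid=1001
#         name=bob uid=1002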
Reading from the output of another command, using process substitution
while read -r line; do
# process the line
done < <(command ...)
This approach is better than command ... | while read -r line; do ... because the while loop here runs in the current shell rather than a subshell as in the case of the latter. See the related post A variable modified inside a while loop is not remembered.
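A small sketch of the difference; the counter survives only in the process-substitution form:
count=0
while read -r line; do
    count=$((count + 1))
done < <(printf '%s\n' a b c)
echo "$count"    # 3

count=0
printf '%s\n' a b c | while read -r line; do
    count=$((count + 1))
done
echo "$count"    # still 0, because the loop ran in a subshell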
Reading from a null delimited input, for example find ... -print0
while read -r -d '' line; do
# logic
# use a second 'read ... <<< "$line"' if we need to tokenize the line
done < <(find /path/to/dir -print0)
Related read: BashFAQ/020 - How can I find and safely handle file names containing newlines, spaces or both?
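A usage sketch that collects the null-delimited results into an array (the directory path is a placeholder):
files=()
while IFS= read -r -d '' f; do
    files+=("$f")
done < <(find /path/to/dir -type f -print0)
echo "found ${#files[@]} files"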
Reading from more than one file at a time
while read -u 3 -r line1 && read -u 4 -r line2; do
# process the lines
# note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt
Based on @chepner's answer here:
-u is a bash extension. For POSIX compatibility, each call would look something like read -r X <&3.
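A short usage sketch of the two-file loop, with made-up input files:
printf '%s\n' a b c > input1.txt
printf '%s\n' 1 2 3 > input2.txt
while read -u 3 -r line1 && read -u 4 -r line2; do
    echo "$line1 -> $line2"
done 3< input1.txt 4< input2.txt
# prints: a -> 1, b -> 2, c -> 3 (one pair per line)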
Reading a whole file into an array (Bash versions earlier than 4)
while read -r line; do
my_array+=("$line")
done < my_file
If the file ends with an incomplete line (newline missing at the end), then:
while read -r line || [[ $line ]]; do
my_array+=("$line")
done < my_file
Reading a whole file into an array (Bash version 4.x and later)
readarray -t my_array < my_file
or
mapfile -t my_array < my_file
And then
for line in "${my_array[#]}"; do
# process the lines
done
More about the shell builtins read and readarray commands - GNU
More about IFS - Wikipedia
BashFAQ/001 - How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Related posts:
Creating an array from a text file in Bash
What is the difference between three approaches to reading a file that has just one line?
Bash while read loop extremely slow compared to cat, why?
Use a while loop, like this:
while IFS= read -r line; do
echo "$line"
done <file
Notes:
If you don't set the IFS properly, you will lose indentation.
You should almost always use the -r option with read.
Don't read lines with for
If you don't want your read to break on the last line when a trailing newline is missing, use:
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
echo "$line"
done < "$1"
Then run the script with file name as parameter.
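For example, if the script above is saved as readlines.sh (the name is just for illustration):
chmod +x readlines.sh
./readlines.sh peptides.txt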
Suppose you have this file:
$ cat /tmp/test.txt
Line 1
 Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space 
Line 6 has no ending CR
There are four elements that will alter the meaning of the file output read by many Bash solutions:
The blank line 4;
Leading or trailing spaces on two lines;
Maintaining the meaning of individual lines (i.e., each line is a record);
Line 6 is not terminated with a CR.
If you want to read the text file line by line, including blank lines and terminating lines without a CR, you must use a while loop and you must have an alternate test for the final line.
Here are the methods that may change the file (in comparison to what cat returns):
1) Lose the last line and leading and trailing spaces:
$ while read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
(If you do while IFS= read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt instead, you preserve the leading and trailing spaces but still lose the last line if it is not terminated with CR)
2) Using command substitution with cat reads the entire file in one gulp and loses the meaning of individual lines:
$ for p in "$(cat /tmp/test.txt)"; do printf "%s\n" "'$p'"; done
'Line 1
Line 2 has leading space
Line 3 followed by blank line
Line 5 (follows a blank line) and has trailing space
Line 6 has no ending CR'
(If you remove the " from $(cat /tmp/test.txt) you read the file word by word rather than one gulp. Also probably not what is intended...)
The most robust and simplest way to read a file line-by-line and preserve all spacing is:
$ while IFS= read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
' Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space '
'Line 6 has no ending CR'
If you want to strip leading and trailing spaces, remove the IFS= part:
$ while read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
'Line 6 has no ending CR'
(A text file without a terminating \n, while fairly common, is considered broken under POSIX. If you can count on the trailing \n you do not need || [[ -n $line ]] in the while loop.)
More at the BASH FAQ
I like to use xargs instead of while. xargs is powerful and command-line friendly.
cat peptides.txt | xargs -I % sh -c "echo %"
With xargs, you can also add verbosity with -t and confirmation prompts with -p.
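For example, reusing the pattern above: -t echoes each command before running it, and -p asks for confirmation first:
cat peptides.txt | xargs -t -I % sh -c "echo %"
cat peptides.txt | xargs -p -I % sh -c "echo %"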
This might be the simplest answer, and maybe it doesn't work in all cases, but it is working great for me:
while read line;do echo "$line";done<peptides.txt
If you need to enclose each line in quotes because of spaces:
while read line;do echo \"$line\";done<peptides.txt
Ahhh, this is pretty much the same as the most-upvoted answer, but it's all on one line.
#!/bin/bash
#
# Change the file name from "test" to desired input file
# (The comments in bash are prefixed with #'s)
for x in $(cat test.txt)
do
echo $x
done
Here is my real-life example of how to loop over the lines of another program's output, check for substrings, drop double quotes from a variable, and use that variable outside of the loop. I guess quite a few people end up asking these questions sooner or later.
##Parse FPS from first video stream, drop quotes from fps variable
## streams.stream.0.codec_type="video"
## streams.stream.0.r_frame_rate="24000/1001"
## streams.stream.0.avg_frame_rate="24000/1001"
FPS=unknown
while read -r line; do
if [[ $FPS == "unknown" ]] && [[ $line == *".codec_type=\"video\""* ]]; then
echo ParseFPS $line
FPS=parse
fi
if [[ $FPS == "parse" ]] && [[ $line == *".r_frame_rate="* ]]; then
echo ParseFPS $line
FPS=${line##*=}
FPS="${FPS%\"}"
FPS="${FPS#\"}"
fi
done <<< "$(ffprobe -v quiet -print_format flat -show_format -show_streams -i "$input")"
if [ "$FPS" == "unknown" ] || [ "$FPS" == "parse" ]; then
echo ParseFPS Unknown frame rate
fi
echo Found $FPS
Declaring the variable outside of the loop, setting its value inside, and using it after the loop requires the done <<< "$(...)" syntax. The application needs to be run within the context of the current console. Quotes around the command keep the newlines of the output stream.
The loop matches for substrings, then reads the name=value pair, splits off everything after the last = character, drops the first quote, drops the last quote, and we have a clean value to use elsewhere.
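A tiny sketch of just the quote-stripping steps, using one of the sample lines from the comment above:
line='streams.stream.0.r_frame_rate="24000/1001"'
val=${line##*=}     # everything after the last '=' -> "24000/1001" (still quoted)
val="${val%\"}"     # drop the trailing quote
val="${val#\"}"     # drop the leading quote
echo "$val"         # 24000/1001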
This is coming rather late, but with the thought that it may help someone, I am adding the answer. Also, this may not be the best way. The head command can be used with the -n argument to read n lines from the start of a file, and likewise the tail command can be used to read from the bottom. Now, to fetch the nth line from the file, we head n lines and pipe the data to tail to keep only 1 line from the piped data.
TOTAL_LINES=`wc -l $USER_FILE | cut -d " " -f1 `
echo $TOTAL_LINES # To validate total lines in the file
for (( i=1 ; i <= $TOTAL_LINES; i++ ))
do
LINE=`head -n$i $USER_FILE | tail -n1`
echo $LINE
done
@Peter: This could work out for you:
echo "Start!";for p in $(cat ./pep); do
echo $p
done
This would return the output:
Start!
RKEKNVQ
IPKKLLQK
QYFHQLEKMNVK
IPKKLLQK
GDLSTALEVAIDCYEK
QYFHQLEKMNVKIPENIYR
RKEKNVQ
VLAKHGKLQDAIN
ILGFMK
LEDVALQILL
Another way to go about using xargs
<file_name xargs -I {} echo {}
echo can be replaced with other commands or piped further.
for p in `cat peptides.txt`
do
echo "${p}"
done
Sometimes I want to create lots of whitespace at once (slightly different from printing a specific character). I attempted to do this using a for loop, but I am only printing \n once with this implementation. Furthermore, the literal '\n' characters are printed instead of blank lines. What is a better way to do this?
for i in {1....100}
> do
> echo "\n"
> done
To print 5 blank lines:
yes '' | sed 5q
To print N blank lines:
yes '' | sed ${N}q
Brace expansion expects two dots, not any other number:
$ echo {1....5}
{1....5}
$ echo {1..5}
1 2 3 4 5
That's why your loop was executed just once.
If you want echo to interpret your escape sequences, you need to call it with echo -e.
echo outputs a newline anyway, so echo -e "\n" prints two newlines. To prevent echo from printing a newline you have to use echo -n, and echo -ne "\n" is the same as just echo.
You can print repeating characters, in this case a newline, like this:
printf '\n%.0s' {1..100}
As Jerry commented, you made a syntax error.
This seems to work:
for i in {1..100}
do
echo "\n"
done
Try either of the following
for i in {1..100}; do echo ; done
Or
for i in {1..100}
do
echo
done
This worked for Ubuntu 16 LTS and Mac OS X.
You can abuse printf for that:
printf '\n%.s' {1..100}
This prints \n followed by a zero-length string 100 times (so effectively, \n 100 times).
One caveat of using brace expansion though is that you cannot use variables in it, unless you eval it:
count=100
eval "printf '\n%.s' {1..$count}"
To avoid the eval you can use a loop; it's slightly slower, but that shouldn't matter unless you need thousands of them:
count=100
for ((i=0; i<$count; i++)); do printf '\n'; done
NB: If you use the eval method, make sure you trust your count, else it's easy to inject commands into the shell:
$ count='1}; /bin/echo "Hello World ! " # '
$ eval "printf '\n%.s' {1..$count}"
Hello World !
One easy fix in Bash is to declare your variable as an integer (e.g. declare -i count). Any attempt to assign something other than a number will then fail. It may still be possible to trigger a denial of service by passing very large values for the brace expansion, which may cause bash to run out of memory.
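A minimal sketch of that mitigation, assuming the usual bash behaviour of rejecting non-numeric assignments to integer variables:
declare -i count     # count may now only hold integer values
count=100
count='1}; /bin/echo pwned #'    # fails with an arithmetic syntax error instead of being stored
eval "printf '\n%.s' {1..$count}"    # prints 100 newlines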
How do I pass a list to for in bash?
I tried
echo "some
different
lines
" | for i ; do
echo do something with $i;
done
but that doesn't work. I also tried to find an explanation with man, but there is no man page for for.
EDIT:
I know I could use while instead, but I think I once saw a solution with for where they didn't define the variable but could still use it inside the loop.
for iterates over a list of words, like this:
for i in word1 word2 word3; do echo "$i"; done
use a while read loop to iterate over lines:
echo "some
different
lines" | while read -r line; do echo "$line"; done
Here is some useful reading on reading lines in bash.
This might work but I don't recommend it:
echo "some
different
lines
" | for i in $(cat) ; do
...
done
$(cat) will expand everything on stdin but if one of the lines of the echo contains spaces, for will think that's two words. So it might eventually break.
If you want to process a list of words in a loop, this is better:
a=($(echo "some
different
lines
"))
for i in "${a[#]}"; do
...
done
Explanation: a=(...) declares an array. $(cmd...) expands to the output of the command. It's still vulnerable to whitespace, but if you quote properly, this can be fixed.
"${a[#]}" expands to a correctly quoted list of elements in the array.
Note: for is a built-in command, which is why there is no man page for it. Use help for (in bash) instead.
This seems to work:
for x in $(echo a b c); do
echo $x
done
This is not a pipe, but quite similar:
args="some
different
lines";
set -- $args;
for i ; do
echo $i;
done
because for defaults to "$@" if no in list is given.
maybe you can shorten this a bit somehow?
#!/usr/local/bin/bash
out=`grep apache README`
echo $out;
Usually grep shows each match on a separate line when run on the command line. However, in the above script, the newlines separating the matches disappear. Does anyone know how the newlines can be preserved?
You're not losing the newlines in the assignment but in the echo. You can see this clearly if you do:
echo "${out}"
You'll see a similar effect with the following script:
x="Hello,
I
am
a
string
with
newlines"
echo "====="
echo ${x}
echo "====="
echo "${x}"
echo "====="
which outputs:
=====
Hello, I am a string with newlines
=====
Hello,
I
am
a
string
with
newlines
=====
And, irrelevant to your question but I'd like to mention it anyway, I prefer to use the $() construct rather than backticks, just for the added benefit of being able to nest commands. So your script line becomes:
out=$(grep apache README)
Now that may not look any different (and it isn't) but it makes possible more complex commands like:
lines_with_nine=$(grep $(expr 7 + 2) inputfile)
Put $out in quotes:
#!/usr/local/bin/bash
out=`grep apache README`
echo "$out";
Quoting variables in bash preserves the whitespace.
For instance:
#!/bin/bash
var1="A B C D"
echo $var1 # A B C D
echo "$var1" # A B C D
Since newlines are whitespace, they get "removed".
Combining other answers into a one-liner:
echo "$(grep apache README)"
Here is my code
michal@argon:~$ cat test.txt
    1
    2
    3
    4
    5
michal@argon:~$ cat test.txt | while read line;do echo $line;done > new.txt
michal@argon:~$ cat new.txt
1
2
3
4
5
I don't know why the command echo $line filtered out the space characters; I want test.txt and new.txt to be exactly the same.
Please help, thanks.
Several issues with your code:
$parameter outside of " ". Don't do that.
read uses $IFS to split the input line into words; you have to disable this.
UUOC (useless use of cat).
To summarize:
while IFS= read line ; do echo "$line" ; done < test.txt > new.txt
While the provided answers solve your task at hand, they do not explain why bash and echo "forgot" to print the spaces you have in your string. Let's first make a small example to show the problem. I simply run the commands in my shell; no real script is needed for this one:
mogul@linuxine:~$ echo      something
something
mogul@linuxine:~$ echo something
something
Two echo commands that both print something right at the beginning of the line, even though the first one had plenty of space between echo and something. And now with quoting:
mogul#linuxine:~$ echo " something"
something
Notice, here echo printed the leading spaces before something
If we stuff the string into a variable, it works exactly the same:
mogul@linuxine:~$ str="    something"
mogul@linuxine:~$ echo $str
something
mogul@linuxine:~$ echo "$str"
    something
Why?
Because the shell, bash in your case, removes the space between arguments to commands before passing them on to the subprocess. By quoting the strings, we tell bash that we mean this literally and it should not mess with our strings.
This knowledge will become quite valuable if you are going to handle files with funny names, like "this is a file with blanks in its name.txt"
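A tiny sketch with the example name from above:
f="this is a file with blanks in its name.txt"
touch "$f"     # quoted: creates exactly one file
ls -l "$f"
# touch $f     # unquoted: would create nine separate files instead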
Try this --
$ oldIFS="$IFS"; IFS=""; while read line;do echo $line >> new.txt ;done < test.txt; IFS="$oldIFS"
$ cat new.txt
1
2
3
4
5