More efficient way to loop through lines in shell - bash

I've come to learn that looping through lines in bash with
while read line; do stuff; done <file
is not the most efficient way to do it. https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice
What is a more time/resource efficient method?

Here's a timed example using Bash and awk. I have 1 million records in a file:
$ wc -l 1M
1000000 1M
Counting its records with bash, using while read:
$ time while read -r line ; do ((i++)) ; done < 1M ; echo $i
real 0m12.440s
user 0m11.548s
sys 0m0.884s
1000000
Using let "i++" took 15.627 secs (real), and NOPing with do : ; done took 10.466 secs. Using awk:
$ time awk '{i++}END{print i}' 1M
1000000
real 0m0.128s
user 0m0.128s
sys 0m0.000s
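(Side note, my addition: if all you actually need is the line count, wc -l is the purpose-built tool, as already used above to size the file; the while/awk versions here just stand in for real per-line work.)
time wc -l < 1M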

As others have said, it depends on what you're doing.
The reason it's inefficient is that everything runs in its own process. Depending on what you are doing, that may or may not be a big deal.
If what you want to do in the loop is run another process anyway, you won't gain much by eliminating the loop itself. If you can do what you need without a loop at all, that is where the gain is.

awk? Perl? C(++)? Of course it depends on whether you're interested in CPU time or programmer time, and the latter depends on what the programmer is used to using.
The top answer to the question you linked to pretty much explains that the biggest problem is spawning external processes for simple text processing tasks. E.g. running an instance of awk or a pipeline of sed and cut for each single line just to get a part of the string is silly.
If you want to stay in shell, use the string processing parameter expansions (${var#word}, ${var:n:m}, ${var/search/replace} etc.) and other shell features as much as you can. If you see yourself running a set of commands for each input line, it's time to rethink the structure of the script. Most of the text processing commands can process a whole file with one execution, so use that.
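For reference, a quick illustration of those expansions (my own examples, not from the original answer):
var="2014-01-27,07:20:38,data"
echo "${var#*,}"          # strip through the first comma  -> 07:20:38,data
echo "${var:0:10}"        # substring: offset 0, length 10 -> 2014-01-27
echo "${var/data/DATA}"   # replace the first match        -> 2014-01-27,07:20:38,DATA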
A trivial/silly example:
while read -r line; do
    x=$(echo "$line" | awk '{print $2}')
    somecmd "$x"
done < file
would be better as
awk < file '{print $2}' | while read -r x ; do somecmd "$x" ; done
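If the goal is only to extract the second whitespace-separated field, here is a sketch that stays entirely in the shell (my addition, no awk at all), letting read's word splitting do the work:
while read -r _ x _; do somecmd "$x"; done < file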

Choose between awk or perl; both are efficient.


Fastest way to print a certain portion of a file using bash commands

Currently I am using sed to print the required portion of the file. For example, I used the command below:
sed -n 89001,89009p file.xyz
However, it is pretty slow as the file size is increasing (my file is currently 6.8 GB). I have tried to follow this link and used the command
sed -n '89001,89009{p;q}' file.xyz
But this command only prints the 89001st line. Kindly help me.
The syntax is a little bit different:
sed -n '89001,89009p;89009q' file.xyz
UPDATE:
Since there is also an answer with awk, I made a small comparison, and as I thought, sed is a little bit faster:
$ wc -l large-file
100000000 large-file
$ du -h large-file
954M large-file
$ time sed -n '890000,890010p;890010q' large-file > /dev/null
real 0m0.141s
user 0m0.068s
sys 0m0.000s
$ time awk 'NR>=890000{print} NR==890010{exit}' large-file > /dev/null
real 0m0.433s
user 0m0.208s
sys 0m0.008s
UPDATE2:
There is a faster way with awk, as posted by @EdMorton, but still not as fast as sed:
$ time awk 'NR>=890000{print; if (NR==890010) exit}' large-file > /dev/null
real 0m0.252s
user 0m0.172s
sys 0m0.008s
UPDATE3:
This is the fastest way I was able to find (head and tail):
$ time head -890010 large-file| tail -10 > /dev/null
real 0m0.085s
user 0m0.024s
sys 0m0.016s
awk 'NR>=89001{print; if (NR==89009) exit}' file.xyz
Dawid Grabowski's helpful answer is the way to go (with sed[1]; Ed Morton's helpful answer is a viable awk alternative; a tail+head combination will typically be the fastest[2]).
As for why your approach didn't work:
A two-address expression such as 89001,89009 selects an inclusive range of lines, bounded by the start and end address (line numbers, in this case).
The associated function list, {p;q;}, is then executed for each line in the selected range.
Thus, line # 89001 is the 1st line that causes the function list to be executed: right after printing (p) the line, function q is executed - which quits execution right away, without processing any further lines.
To prevent premature quitting, Dawid's answer therefore separates the aspect of printing (p) all lines in the range from quitting (q) processing, using two commands separated with ;:
89001,89009p prints all lines in the range
89009q quits processing when the range's end point is reached
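A tiny demonstration of the difference, using seq as a stand-in input (my own example):
seq 10 | sed -n '4,6{p;q}'   # prints only 4: q runs on the first line of the range
seq 10 | sed -n '4,6p;6q'    # prints 4 5 6, then quits at line 6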
[1] A slightly less repetitive reformulation that should perform equally well ($ represents the last line, which is never reached due to the 2nd command):
sed -n '89001,$ p; 89009 q'
[2] A better reformulation of the head + tail solution from Dawid's answer is
tail -n +89001 file | head -n 9, because it caps the number of bytes that are not of interest yet are still sent through the pipe at the pipe-buffer size (a typical pipe-buffer size is 64 KB).
With GNU utilities (Linux), this is the fastest solution, but on OSX with stock utilities (BSD), the sed solution is fastest.
Easier to read in awk; performance should be similar to sed:
awk 'NR>=89001{print} NR==89009{exit}' file.xyz
You can also replace {print} with just a semicolon, since printing is awk's default action.
Another way to do it is using a combination of head and tail:
$ time head -890010 large-file| tail -10 > /dev/null
real 0m0.085s
user 0m0.024s
sys 0m0.016s
This is faster than sed and awk.
sed has to scan from the beginning of the file to find the Nth line. To make things faster, divide the large file into fixed-size intervals of lines recorded in an index file, then use dd to skip the early portion of the big file before feeding the rest to sed.
Build the index file using:
#!/bin/bash
# Build an index of byte offsets, one entry per INTERVAL lines.
INTERVAL=1000
LARGE_FILE="big-many-GB-file"
INDEX_FILE="index"
LASTSTONE=123   # any value != MILESTONE, so the loop starts
MILESTONE=0
echo $MILESTONE > $INDEX_FILE
while [ $MILESTONE != $LASTSTONE ]; do
    LASTSTONE=$MILESTONE
    # byte length of the next INTERVAL lines, starting at offset LASTSTONE
    MILESTONE=$(dd if="$LARGE_FILE" bs=1 skip=$LASTSTONE 2>/dev/null | head -n$INTERVAL | wc -c)
    MILESTONE=$(($LASTSTONE+$MILESTONE))
    echo $MILESTONE >> $INDEX_FILE
done
exit
Then search for a line using: ./this_script.sh 89001
#!/bin/bash
# Look up line number $1 via the index, then let sed handle only the tail end.
INTERVAL=1000
LARGE_FILE="big-many-GB-file"
INDEX_FILE="index"
LN=$(($1-1))
# byte offset of the start of the INTERVAL-sized block containing the line
OFFSET=$(head -n$((1+($LN/$INTERVAL))) $INDEX_FILE | tail -n1)
# line number relative to that block
LN=$(($LN-(($LN/$INTERVAL)*$INTERVAL)))
LN=$(($LN+1))
dd if="$LARGE_FILE" bs=1 skip=$OFFSET 2>/dev/null | sed -n "$LN"p

What can I do to speed up this bash script?

The code I have goes through a file and multiplies all the numbers in the first column by a number. The code works, but I think it's somewhat slow. It takes 26.676s (wall time) to go through a file with 2302 lines in it. I'm using a 2.7 GHz Intel Core i5 processor. Here is the code.
#!/bin/bash
i=2
sed -n 1p data.txt > data_diff.txt #outputs the header (x y)
while [ $i -lt 2303 ]; do
    NUM=`sed -n "$i"p data.txt | awk '{print $1}'`
    SEC=`sed -n "$i"p data.txt | awk '{print $2}'`
    NNUM=$(bc <<< "$NUM*0.000123981")
    echo $NNUM $SEC >> data_diff.txt
    let i=$i+1
done
Honestly, the biggest speedup you can get will come from using a single language that can do the whole task itself. This is mostly because your script invokes 5 extra processes for each line, and invoking extra processes is slow, but also text processing in bash is really not that well optimized.
I'd recommend awk, given that you have it available:
awk '{ print $1*0.000123981, $2 }'
I'm sure you can improve this to skip the header line and print it without modification.
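For instance, a sketch of that improvement (my own, assuming the header is the first line of data.txt):
awk 'NR==1 { print; next } { print $1*0.000123981, $2 }' data.txt > data_diff.txt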
You can also do this sort of thing with Perl, Python, C, Fortran, and many other languages, though it's unlikely to make much difference for such a simple calculation.
Your script runs 4603 separate sed processes, 4602 separate awk processes, and 2301 separate bc processes. If echo were not a built-in then it would also run 2301 echo processes. Starting a process has relatively large overhead. Not so large that you would ordinarily notice it, but you are running over 11000 short processes. The wall time consumption doesn't seem unreasonable for that.
MOREOVER, each sed that you run processes the whole input file anew, selecting from it just one line. This is horribly inefficient.
The solution is to reduce the number of processes you are running, and especially to perform only a single run through the whole input file. A fairly easy way to do that would be to convert to an awk script, possibly with a bash wrapper. That might look something like this:
#!/bin/bash
awk '
NR==1 { print; next }
NR>=2303 { exit }
{ print $1 * 0.000123981, $2 }
' data.txt > data_diff.txt
Note that the line beginning with NR>=2303 artificially stops processing the input file when it reaches the 2303rd line, as your original script does; you could omit that line of the script altogether to let it simply process all the lines, however many there are.
Note, too, that that uses awk's built-in FP arithmetic instead of running bc. If you actually need the arbitrary-precision arithmetic of bc then I'm sure you can figure out how to modify the script to get that.
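If you do need bc, here is one hedged sketch (my own, not part of this answer) that keeps bc's arbitrary precision but starts it only once for the whole file, at the cost of a couple of extra passes over data.txt:
sed -n 1p data.txt > data_diff.txt                        # copy the header unchanged
tail -n +2 data.txt | awk '{ print $1 "*0.000123981" }' \
    | bc \
    | paste -d' ' - <(tail -n +2 data.txt | awk '{ print $2 }') \
    >> data_diff.txt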
As an example of how to speed up the bash script (without implying that this is the right solution)
#!/bin/bash
{
    IFS= read -r header
    echo "$header"
    # You can drop the third name "rest" if your input file
    # only has two columns.
    while read -r num sec rest; do
        nnum=$( bc <<< "$num * 0.000123981" )
        echo "$nnum $sec"
    done
} < data.txt > data_diff.txt
Now you only have one extra call to bc per data line, necessitated by the fact that bash doesn't do floating-point arithmetic. The right answer is to use a single call to a program that can do floating-point arithmetic, as pointed out by David Z.

Doubts about bash script efficiency

I have to accomplish a relatively simple task; basically I have an enormous number of files with the following format
"2014-01-27","07:20:38","data","data","data"
Basically I would like to extract the first 2 fields, convert them into a unix epoch date, add 6 hours to it (due to the timezone difference), and replace the first 2 original columns with the resulting milliseconds (unix epoch since 1970-01-01, converted to milliseconds).
I have written a script that works fine; the issue is that it is very, very slow. I need to run this over 150 files with a total line count of more than 5,000,000, and I was wondering if you had any advice about how I could make it faster. Here it is:
#!/bin/bash
function format()
{
    while read line; do
        entire_date=$(echo ${line} | cut -d"," -f1-2);
        trimmed_date=$(echo ${entire_date} | sed 's/"//g;s/,/ /g');
        seconds=$(date -d "${trimmed_date} + 6 hours" +%s);
        millis=$((${seconds} * 1000));
        echo ${line} | sed "s/$entire_date/\"$millis\"/g" >> "output"
    done < $*
}
format $*
You are spawning a significant number of processes for each input line. At a quick glance, probably half of those could easily be factored away, but I would definitely recommend a switch to Perl or Python instead.
perl -MDate::Parse -pe 'die "$0:$ARGV:$.: Unexpected input $_"
unless s/(?<=^")([^"]+)","([^"]+)(?=")/ (str2time("$1 $2")+6*3600)*1000 /e'
I'd like to recommend Text::CSV but I do not have it installed here, and if you have requirements to not touch the fields after the second at all, it might not be what you need anyway. This is quick and dirty but probably also much simpler than a "proper" CSV solution.
The real meat is the str2time function from Date::Parse, which I imagine will be a lot quicker than repeatedly calling date (ISTR it does some memoization internally so it can do nearby dates quickly). The regex replaces the first two fields with the output; note the /e flag which allows Perl code to be evaluated in the replacement part. The (?<=^") and (?=") zero-width assertions require these matches to be present but do not include them in the substitution operation. (I originally substituted the enclosing double quotes, but with this change, they are retained, as apparently you want to keep them.)
Change the die to a warn if you want the script to continue in spite of errors (maybe redirect standard error to a file then!)
I have tried to avoid external commands (except date) to gain time. Tests show that it is 4 times faster than your code. (Okay, tripleee's perl solution is 40 times faster than mine!)
#! /bin/bash
function format()
{
    while IFS=, read date0 date1 datas; do
        date0="${date0//\"/}"
        date1="${date1//\"/}"
        seconds=$(date -d "$date0 $date1 + 6 hours" +%s)
        echo "\"${seconds}000\",$datas"
    done
}
output="output.txt"
# Process each file in argument
for file ; do
    format < "$file"
done >| "$output"
exit 0
Using the mktime function in awk (tested), it is faster than perl:
awk '{t=$2 " " $4;gsub(/[-:]/," ",t);printf "\"%s\",%s\n",(mktime(t)+6*3600)*1000,substr($0,25)}' FS=\" OFS=\" file
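For readability, here is my reformatting of the same program across several lines (behavior unchanged; the unused OFS assignment is dropped, and note that mktime is a GNU awk function):
awk -v FS='"' '{
    t = $2 " " $4                  # e.g. "2014-01-27 07:20:38"
    gsub(/[-:]/, " ", t)           # mktime wants "YYYY MM DD HH MM SS"
    printf "\"%s\",%s\n", (mktime(t) + 6*3600) * 1000, substr($0, 25)
}' file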
Here is the test result.
$ wc -l file
1244 file
$ time awk '{t=$2 " " $4;gsub(/[-:]/," ",t);printf "\"%s\",%s\n",(mktime(t)+6*3600)*1000,substr($0,25)}' FS=\" OFS=\" file > /dev/null
real 0m0.172s
user 0m0.140s
sys 0m0.046s
$ time perl -MDate::Parse -pe 'die "$0:$ARGV:$.: Unexpected input $_"
unless s/(?<=^")([^"]+)","([^"]+)(?=")/ (str2time("$1 $2")+6*3600)*1000 /e' file > /dev/null
real 0m0.328s
user 0m0.218s
sys 0m0.124s

Fastest way to print a single line in a file

I have to fetch one specific line out of a big file (1,500,000 lines), multiple times in a loop over multiple files. I was asking myself what would be the best option (in terms of performance).
There are many ways to do this; I mainly use these 2:
cat ${file} | head -1
or
cat ${file} | sed -n '1p'
I could not find an answer to this: do they both fetch only the first line, or does one of the two (or both) first read the whole file and then fetch row 1?
Drop the useless use of cat and do:
$ sed -n '1{p;q}' file
This will quit the sed script after the line has been printed.
Benchmarking script:
#!/bin/bash
TIMEFORMAT='%3R'
n=25
heading=('head -1 file' 'sed -n 1p file' "sed -n '1{p;q}' file" 'read line < file && echo $line')

# files up to a hundred million lines (if you're on a slow machine, decrease!!)
for (( j=1; j<=100000000; j=j*10 ))
do
    echo "Lines in file: $j"
    # create file containing j lines
    seq 1 $j > file
    # initial read of file
    cat file > /dev/null

    for comm in {0..3}
    do
        avg=0
        echo
        echo ${heading[$comm]}
        for (( i=1; i<=$n; i++ ))
        do
            case $comm in
                0)
                    t=$( { time head -1 file > /dev/null; } 2>&1);;
                1)
                    t=$( { time sed -n 1p file > /dev/null; } 2>&1);;
                2)
                    t=$( { time sed -n '1{p;q}' file > /dev/null; } 2>&1);;
                3)
                    t=$( { time read line < file && echo $line > /dev/null; } 2>&1);;
            esac
            avg=$avg+$t
        done
        echo "scale=3;($avg)/$n" | bc
    done
done
Just save as benchmark.sh and run bash benchmark.sh.
Results:
head -1 file
.001
sed -n 1p file
.048
sed -n '1{p;q}' file
.002
read line < file && echo $line
0
Results from a file with 1,000,000 lines.
So the times for sed -n 1p will grow linearly with the length of the file, but the timings for the other variations will be constant (and negligible), as they all quit after reading the first line. (Note: timings differ from the original post because they were taken on a faster Linux box.)
If you are really just getting the very first line and reading hundreds of files, then consider shell builtins instead of external commands: use read, which is a shell builtin for bash and ksh. This eliminates the overhead of process creation with awk, sed, head, etc.
The other issue is doing timed performance analysis on I/O. The first time you open and then read a file, its data is probably not cached in memory. However, if you run a second command on the same file, the data as well as the inode have been cached, so the timed results may be faster, pretty much regardless of the command you use. Plus, inodes can stay cached practically forever (they do on Solaris, for example, or anyway for several days).
For example, linux caches everything and the kitchen sink, which is a good performance attribute. But it makes benchmarking problematic if you are not aware of the issue.
All of this caching effect "interference" is both OS and hardware dependent.
So: pick one file and read it with a command; now it is cached. Run the same test command several dozen times; this samples the effect of the command and child-process creation, not your I/O hardware.
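If you instead want to measure cold-cache I/O, the usual Linux approach (my addition, and it requires root) is to drop the page cache between runs:
sync                                          # flush dirty pages first
echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
time head -1 file > /dev/null                 # this read now really hits the disk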
This is sed vs read for 10 iterations of getting the first line of the same file, after reading the file once:
sed: sed '1{p;q}' uopgenl20121216.lis
real 0m0.917s
user 0m0.258s
sys 0m0.492s
read: read foo < uopgenl20121216.lis ; export foo; echo "$foo"
real 0m0.017s
user 0m0.000s
sys 0m0.015s
This is clearly contrived, but does show the difference between builtin performance vs using a command.
If you want to print only 1 line (say the 20th one) from a large file you could also do:
head -20 filename | tail -1
I did a "basic" test with bash and it seems to perform better than the sed -n '1{p;q}' solution above.
The test takes a large file and prints a line from somewhere in the middle (at line 10000000), repeated 100 times, each time selecting the next line. So it selects lines 10000000, 10000001, 10000002, ... and so on up to 10000099.
$ wc -l english
36374448 english
$ time for i in {0..99}; do j=$((i+10000000)); sed -n $j'{p;q}' english >/dev/null; done;
real 1m27.207s
user 1m20.712s
sys 0m6.284s
vs.
$ time for i in {0..99}; do j=$((i+10000000)); head -$j english | tail -1 >/dev/null; done;
real 1m3.796s
user 0m59.356s
sys 0m32.376s
For printing a line out of multiple files
$ wc -l english*
36374448 english
17797377 english.1024MB
3461885 english.200MB
57633710 total
$ time for i in english*; do sed -n '10000000{p;q}' $i >/dev/null; done;
real 0m2.059s
user 0m1.904s
sys 0m0.144s
$ time for i in english*; do head -10000000 $i | tail -1 >/dev/null; done;
real 0m1.535s
user 0m1.420s
sys 0m0.788s
How about avoiding pipes?
Both sed and head support the filename as an argument, so you avoid the extra cat. I didn't measure it, but head should be faster on large files since it stops after N lines (whereas sed goes through all of them, even if it doesn't print them, unless you specify the quit option as suggested above).
Examples:
sed -n '1{p;q}' /path/to/file
head -n 1 /path/to/file
Again, I didn't test the efficiency.
I have done extensive testing, and found that, if you want every line of a file:
while IFS=$'\n' read LINE; do
echo "$LINE"
done < your_input.txt
is much, much faster than any other (Bash-based) method out there. All other methods (like sed) read the file each time, at least up to the matching line. If the file is 4 lines long, you will get: 1 -> 1,2 -> 1,2,3 -> 1,2,3,4 = 10 reads, whereas the while loop just maintains a position cursor (based on IFS) and so does only 4 reads in total.
On a file with ~15k lines, the difference is phenomenal: ~25-28 seconds (sed-based, extracting a specific line each time) versus ~0-1 seconds (while...read-based, reading through the file once).
The above example also shows how to set IFS to a newline (with thanks to Peter from the comments), which will hopefully fix some of the other issues sometimes seen when using while ... read ... in Bash.
For the sake of completeness you can also use the basic linux command cut:
cut -d $'\n' -f <linenumber> <filename>

How expensive is a bash function call really?

Just out of interest, are there any sources on how expensive function calls in Bash really are? I expect them to be several times slower than executing the code within them directly, but I can't seem to find anything about this.
I don't really agree that performance should not be a worry when programming in bash. It's actually a very good question to ask.
Here's a possible benchmark, comparing the builtin true and the command true, the full path of which is /bin/true on my machine.
On my machine:
$ time for i in {0..1000}; do true; done
real 0m0.004s
user 0m0.004s
sys 0m0.000s
$ time for i in {0..1000}; do /bin/true; done
real 0m2.660s
user 0m2.880s
sys 0m2.344s
Amazing! That's about 2 to 3 ms wasted by just forking a process (on my machine)!
So next time you have some large text file to process, you'll avoid the mistake of long chains of piped cats, greps, awks, cuts, trs, seds, heads, tails, you-name-its. Besides, UNIX pipes are also very slow (will that be your next question?).
Imagine you have a 1000-line file, and for each line you run one cat, then a grep, then a sed and then an awk (no, don't laugh, you can see even worse by going through the posts on this site!): you're already wasting (on my machine) at least 2 x 4 x 1000 ms = 8000 ms = 8 s just forking stupid and useless processes.
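To come back to the literal question (the cost of the function call itself, not of forking), a minimal sketch of how to measure it directly, comparing a no-op function against the bare builtin (my own benchmark, not from the original answer):
f() { :; }                                # a function wrapping the no-op builtin
time for i in {1..100000}; do f; done     # builtin plus function-call overhead
time for i in {1..100000}; do :; done     # builtin only, for comparison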
To answer your comment about pipes...
Subshells
Subshells are very slow:
$ time for i in {1..1000}; do (true); done
real 0m2.465s
user 0m2.812s
sys 0m2.140s
Amazing! over 2ms per subshell (on my machine).
Pipes
Pipes are also very slow (this should be obvious, given that they involve subshells):
$ time for i in {1..1000}; do true | true; done
real 0m4.769s
user 0m5.652s
sys 0m4.240s
Amazing! over 4ms per pipe (on my machine), so that's 2ms for just the pipe after subtracting the time for the subshell.
Redirection
$ time for i in {1..1000}; do true > file; done
real 0m0.014s
user 0m0.008s
sys 0m0.008s
So that's pretty fast.
Ok, you probably also want to see it in action with creation of a file:
$ rm file*; time for i in {1..1000}; do true > file$i; done
real 0m0.030s
user 0m0.008s
sys 0m0.016s
Still decently fast.
Pipes vs redirections:
In your comment, you mention:
sed '' filein > filetmp; sed '' filetmp > fileout
vs
sed '' filein | sed '' > fileout
(Of course, the best thing would be to use a single instance of sed (it's usually possible), but that doesn't answer the question.)
Let's check that out:
A funny way:
$ rm file*
$ > file
$ time for i in {1..1000}; do sed '' file | sed '' > file$i; done
real 0m5.842s
user 0m4.752s
sys 0m5.388s
$ rm file*
$ > file
$ time for i in {1..1000}; do sed '' file > filetmp$i; sed '' filetmp$i > file$i; done
real 0m6.723s
user 0m4.812s
sys 0m5.800s
So it seems faster to use a pipe rather than using a temporary file (for sed). In fact, this could have been understood without typing the lines: in a pipe, as soon as the first sed spits out something, the second sed starts processing the data. In the second case, the first sed does its job, and then the second sed does its job.
So our experiment is not a good way of determining whether pipes are better than redirections.
How about process substitutions?
$ rm file*
$ > file
$ time for i in {1..1000}; do sed '' > file$i < <(sed '' file); done
real 0m7.899s
user 0m1.572s
sys 0m3.712s
Wow, that's slow! Hey, but observe the user and system CPU usage: much less than the other two possibilities (if someone can explain that...)
