Calculate average execution time of a program using Bash

To get the execution time of any executable, say a.out, I can simply write time ./a.out. This will output a real time, user time and system time.
Is it possible to write a bash script that runs the program numerous times and calculates and outputs the average real execution time?

You could write a loop, collect the output of the time command, and pipe it to awk to compute the average:
avg_time() {
    #
    # usage: avg_time n command ...
    #
    n=$1; shift
    (($# > 0)) || return                    # bail if no command given
    for ((i = 0; i < n; i++)); do
        { time -p "$@" &>/dev/null; } 2>&1  # ignore the output of the command
                                            # but collect time's output in stdout
    done | awk '
        /real/ { real = real + $2; nr++ }
        /user/ { user = user + $2; nu++ }
        /sys/  { sys  = sys  + $2; ns++ }
        END {
            if (nr>0) printf("real %f\n", real/nr);
            if (nu>0) printf("user %f\n", user/nu);
            if (ns>0) printf("sys %f\n",  sys/ns)
        }'
}
Example:
avg_time 5 sleep 1
would give you
real 1.000000
user 0.000000
sys 0.000000
This can easily be enhanced to:
sleep for a given amount of time between executions (a sketch of this follows below)
sleep for a random time (within a certain range) between executions
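A minimal sketch of the first enhancement (the function name, the pause argument, and trimming the awk part down to just the real average are choices made here, not part of the answer above):
avg_time_with_pause() {
    # usage: avg_time_with_pause n pause_seconds command ...
    local n=$1 pause=$2; shift 2
    (($# > 0)) || return
    for ((i = 0; i < n; i++)); do
        { time -p "$@" &>/dev/null; } 2>&1   # only time's output reaches the pipe
        sleep "$pause"                       # the pause sits outside { time ...; }, so it is not measured
        # for a random pause instead: sleep "$((RANDOM % 3)).$((RANDOM % 10))"
    done | awk '
        /real/ { real += $2; nr++ }
        END    { if (nr > 0) printf("real %f\n", real / nr) }'
}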
Meaning of time -p from man time:
-p
When in the POSIX locale, use the precise traditional format
"real %f\nuser %f\nsys %f\n"
(with numbers in seconds) where the number of decimals in the
output for %f is unspecified but is sufficient to express the
clock tick accuracy, and at least one.
You may want to check out this command-line benchmarking tool as well:
sharkdp/hyperfine
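For reference, a couple of typical hyperfine invocations (using its documented --warmup and --runs flags; a.out stands in for your program):
hyperfine --warmup 3 --runs 20 './a.out'   # discard 3 warm-up runs, then time 20 runs and report statistics
hyperfine 'sleep 0.3' 'sleep 0.5'          # benchmark and compare two commands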

Total execution time vs sum of single execution time
Careful! Dividing a sum of N rounded execution times is imprecise!
Instead, we could divide the total execution time of N iterations by N:
avg_time_alt() {
    local -i n=$1 i
    local foo real sys user
    shift
    (($# > 0)) || return
    { read foo real; read foo user; read foo sys ;} < <(
        { time -p for((i=n;i--;)){ "$@" &>/dev/null ;} ;} 2>&1
    )
    printf "real: %.5f\nuser: %.5f\nsys : %.5f\n" $(
        bc -l <<<"$real/$n;$user/$n;$sys/$n;" )
}
Note: this uses bc instead of awk to compute the average. For the demo, we also create a temporary bc script to serve as a sample workload:
printf >/tmp/test-pi.bc "scale=%d;\npi=4*a(1);\nquit\n" 60
This computes π to 60 decimal places, then exits quietly. (You can adapt the number of decimals to your host.)
Demo:
avg_time_alt 1000 sleep .001
real: 0.00195
user: 0.00008
sys : 0.00016
avg_time_alt 1000 bc -ql /tmp/test-pi.bc
real: 0.00172
user: 0.00120
sys : 0.00058
Where codeforester's function would answer:
avg_time 1000 sleep .001
real 0.000000
user 0.000000
sys 0.000000
avg_time 1000 bc -ql /tmp/test-pi.bc
real 0.000000
user 0.000000
sys 0.000000
Alternative, inspired by choroba's answer, using Linux's /proc
Ok, you could consider:
avgByProc() {
    local foo start end n=$1 e=$1 values times
    shift
    export n
    {
        read foo
        read foo
        read foo foo start foo
    } < /proc/timer_list
    mapfile values < <(
        for((;n--;)){ "$@" &>/dev/null ;}
        read -a endstat < /proc/self/stat
        {
            read foo
            read foo
            read foo foo end foo
        } </proc/timer_list
        printf -v times "%s/100/$e;" "${endstat[@]:13:4}"
        bc -l <<<"$((end-start))/10^9/$e;$times"
    )
    printf -v fmt "%-7s: %%.5f\\n" real utime stime cutime cstime
    printf "$fmt" ${values[@]}
}
This is based on /proc:
man 5 proc | grep [su]time\\\|timer.list | sed 's/^/> /'
(14) utime %lu
(15) stime %lu
(16) cutime %ld
(17) cstime %ld
/proc/timer_list (since Linux 2.6.21)
Then now:
avgByProc 1000 sleep .001
real : 0.00242
utime : 0.00015
stime : 0.00021
cutime : 0.00082
cstime : 0.00020
Where utime and stime represent the user time and system time of bash itself, and cutime and cstime represent the child user time and child system time, which are the most interesting.
Note: in this case, the (sleep) command won't use a lot of resources.
avgByProc 1000 bc -ql /tmp/test-pi.bc
real : 0.00175
utime : 0.00015
stime : 0.00025
cutime : 0.00108
cstime : 0.00032
This becomes clearer...
Of course, since timer_list and self/stat are accessed successively but not atomically, differences may appear between real (nanosecond-based) and c?[su]time (tick-based, i.e. 1/100th of a second)!

From bashoneliners
adapted to transform (,) to (.) for i18n support
hardcoded to 10, adapt as needed
returns only the "real" value, the one you most likely want
Oneliner
for i in {1..10}; do time $@; done 2>&1 | grep ^real | sed s/,/./ | sed -e s/.*m// | awk '{sum += $1} END {print sum / NR}'
I made a "fuller" version
outputs the results of every execution so you know the right thing is executed
shows every run time, so you can glance for outliers
But really, if you need advanced stuff just use hyperfine.
GREEN='\033[0;32m'
PURPLE='\033[0;35m'
RESET='\033[0m'
# example: perfFull sleep 0.001
# https://serverfault.com/questions/175376/redirect-output-of-time-command-in-unix-into-a-variable-in-bash
perfFull() {
    TIMEFORMAT=%R # `time` outputs only a number, not 3 lines
    export LC_NUMERIC="en_US.UTF-8" # `time` outputs `0.100` instead of local format, like `0,100`
    times=10

    echo -e -n "\nWARMING UP ${PURPLE}$@${RESET}"
    $@ # execute passed parameters

    echo -e -n "RUNNING ${PURPLE}$times times${RESET}"
    exec 3>&1 4>&2 # redirects subshell streams
    durations=()
    for _ in `seq $times`; {
        durations+=(`{ time $@ 1>&3 2>&4; } 2>&1`) # passes stdout through so only `time` is captured
    }
    exec 3>&- 4>&- # reset subshell streams

    printf '%s\n' "${durations[@]}"

    total=0
    for duration in "${durations[@]}"; {
        total=$(bc <<< "scale=3;$total + $duration")
    }
    average=($(bc <<< "scale=3;$total/$times"))
    echo -e "${GREEN}$average average${RESET}"
}

It's probably easier to record the start and end time of the execution and divide the difference by the number of executions.
#!/bin/bash
times=10
start=$(date +%s)
for ((i=0; i < times; i++)) ; do
    run_your_executable_here
done
end=$(date +%s)
bc -l <<< "($end - $start) / $times"
I used bc to calculate the average, as bash doesn't support floating point arithmetic.
To get more precision, you can switch to nanoseconds:
start=$(date +%s.%N)
and similarly for $end.
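Put together, the nanosecond variant looks like this (note that %N requires GNU date; run_your_executable_here is still a placeholder):
#!/bin/bash
times=10
start=$(date +%s.%N)
for ((i=0; i < times; i++)) ; do
    run_your_executable_here
done
end=$(date +%s.%N)
bc -l <<< "($end - $start) / $times"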

Related

Show average file output speed once for loop is complete, for benchmarking purposes?

Sorry for being unclear, my fellow mates.
So, to elaborate and possibly answer my own question: while Distro1Analysis.txt is being written to, calculate the output speed in kB/s, and when output is done, average the output speed and print it to the screen.
The second part, its own question really, is quite simple. I'm not a computer scientist or advanced programmer, but I am certain there's a relatively easy way to improve the overall execution speed of the script, which means asking what the speed culprit is: how the script was written, the chosen programs, the mix of programs (i.e., is it faster to use 3 instances of the same program as opposed to one instance of 3 different programs)? For instance, could recursion be used, and how?
I was originally going to ask how to benchmark the speed of a program to run one command, but it seemed simpler to use an overarching (global) benchmark, hence the question. But any help you can provide would be useful.
Rdepends Version
ps -A &>> Distro1Analysis.txt && sudo service --status-all &>> Distro1Analysis.txt && \
for z in $(dpkg -l | awk '/^[hi]i/{print $2}' | grep -v '^lib'); do \
printf "\n$z:" && \
aptitude show $z | grep -E 'Uncompressed Size' && \
result=$(apt-rdepends 2>/dev/null $z | grep -v "Depends")
final=$(apt show 2>/dev/null $result | grep -E "Package|Installed-Size" | sed "/APT/d;s/Installed-Size: //");
if [[ (${#final} -le 700) ]]; then echo $final; else :; fi done &>> Distro1Analysis.txt
Depends Version
ps -A &>> Distro1Analysis.txt && sudo service --status-all &>> Distro1Analysis.txt && \
for z in $(dpkg -l | awk '/^[hi]i/{print $2}' | grep -v '^lib'); do \
printf "\n$z:" && \
aptitude show $z | grep -E 'Uncompressed Size' && \
printf "\n" && \
apt show 2>/dev/null $(aptitude search '!~i?reverse-depends("^'$z'$")' -F "%p" | \
sed 's/:i386$//') | grep -E 'Package|Installed-Size' | sed '/APT/d;s/^.*Package:/\t&/;N;s/\n/ /'; done &>> Distro1Analysis.txt
calculate output speed in kb/s and when output is done then average
output speed and print to screen
Here's an answer that's basically:
Starting your script to run in the background.
Checking the size of its output file every two seconds with du -b.
Run the following bash script like so: $ bash scriptoutmon.sh subscript.sh Distro1Analysis.txt 12 10 2
scriptoutmon.sh usage:
$1 : Path to the subscript to run
$2 : Path to output file to monitor
$3 : How long to run scriptoutmon.sh script in seconds.
$4 : How long to run the subscript ($1) in seconds
$5 : Tick length for displayed updates in seconds.
scriptoutmon.sh:
#!/bin/bash
# Date: 2020-04-13T23:03Z
# Author: Steven Baltakatei Sandoval
# License: GPLv3+ https://www.gnu.org/licenses/gpl-3.0.en.html
# Description: Runs subscript and measures change in file size of a specified file.
# Usage: scriptoutmon.sh [ path to subscript ] [ path to subscript output file ] [ script TTL (s) ] [ subscript TTL (s) ] [ tick size (s) ]
# References:
# [1]: Adrian Pronk (2013-02-22). "Floating point results in Bash integer division". https://stackoverflow.com/a/15015920
# [2]: chronitis (2012-11-15). "bc: set number of digits after decimal point". https://askubuntu.com/a/217575
# [3]: ypnos (2020-02-12). "Differences of size in du -hs and du -b". https://stackoverflow.com/a/60196741
# == Function Definitions ==
echoerr() { echo "$@" 1>&2; } # display message via stderr
getSize() { echo $(du -b "$1" | awk '{print $1}'); } # output file size in bytes. See [3].
# == Initialize settings ==
SUBSCRIPT_PATH="$1" # path to subscript to run
SUBSCRIPT_OUTPUT_PATH="$2" # path to output file generated by subscript
SCRIPT_TTL="$3" # set script time-to-live in seconds
SUBSCRIPT_TTL="$4" # set subscript time-to-live in seconds
TICK_SIZE="$5" # update tick size (in seconds)
# == Perform work ==
timeout $SUBSCRIPT_TTL bash "$SUBSCRIPT_PATH" & # run subscript for SUBSCRIPT_TTL seconds.
# note: SUBSCRIPT_OUTPUT_PATH should be path of output file generated by subscript.sh .
if [ -f $SUBSCRIPT_OUTPUT_PATH ]; then SUBSCRIPT_OUTPUT_INITIAL_SIZE=$(getSize "$SUBSCRIPT_OUTPUT_PATH"); else SUBSCRIPT_OUTPUT_INITIAL_SIZE="0"; fi # save initial size if file exists.
echoerr "Running $(basename "$SUBSCRIPT_PATH") and then monitoring rate of file size changes to $(basename "$SUBSCRIPT_OUTPUT_PATH")." # explain displayed output
# Calc and display subscript output file size changes
while [ $SECONDS -lt $SCRIPT_TTL ]; do # loop while script age (in seconds) less than SCRIPT_TTL.
    if [ $SECONDS -ge $TICK_SIZE ]; then # if after first tick
        OUTPUT_PREVIOUS_SIZE="$OUTPUT_CURRENT_SIZE" ; # save size previous tick
        OUTPUT_CURRENT_SIZE=$(getSize "$SUBSCRIPT_OUTPUT_PATH") ; # save size current tick
        BYTES_WRITTEN=$(( $OUTPUT_CURRENT_SIZE - $OUTPUT_PREVIOUS_SIZE )) ; # calc size difference between current and previous ticks.
        WRITE_SPEED_BYTES_PER_SECOND=$(($BYTES_WRITTEN / $TICK_SIZE)) ; # calc write speed in bytes per second
        WRITE_SPEED_KILOBYTES_PER_SECOND=$( echo "scale=3; $WRITE_SPEED_BYTES_PER_SECOND / 1000" | bc -l ) ; # calc write speed in kilobytes per second. See [1], [2].
        echo "File size change rate (KB/sec):"$WRITE_SPEED_KILOBYTES_PER_SECOND ;
    else # if first tick
        OUTPUT_CURRENT_SIZE=$(getSize "$SUBSCRIPT_OUTPUT_PATH") # save size current tick (initial)
    fi
    sleep "$TICK_SIZE"; # wait a tick
done
SUBSCRIPT_OUTPUT_FINAL_SIZE=$(getSize "$SUBSCRIPT_OUTPUT_PATH") # save final size
# == Display results ==
SUBSCRIPT_OUTPUT_TOTAL_CHANGE_BYTES=$(( $SUBSCRIPT_OUTPUT_FINAL_SIZE - $SUBSCRIPT_OUTPUT_INITIAL_SIZE )) # calc total size change in bytes
SUBSCRIPT_OUTPUT_TOTAL_CHANGE_KILOBYTES=$( echo "scale=3; $SUBSCRIPT_OUTPUT_TOTAL_CHANGE_BYTES / 1000" | bc -l ) # calc total size change in kilobytes. See [1], [2].
echoerr "$SUBSCRIPT_OUTPUT_TOTAL_CHANGE_KILOBYTES kilobytes added to $SUBSCRIPT_OUTPUT_PATH size in $SUBSCRIPT_TTL seconds."
exit 0;
You should get output like this:
baltakatei@debianwork:/tmp$ bash scriptoutmon.sh subscript.sh Distro1Analysis.txt 12 10 2
Running subscript.sh and then monitoring rate of file size changes to Distro1Analysis.txt.
File size change rate (KB/sec):6.302
File size change rate (KB/sec):.351
File size change rate (KB/sec):.376
File size change rate (KB/sec):.345
File size change rate (KB/sec):.335
15.419 kilobytes added to Distro1Analysis.txt size in 10 seconds.
baltakatei@debianwork:/tmp$
Increase $3 and $4 to monitor the script longer (perhaps to let it finish its work).
The second part, its own question really
I'd suggest making it a separate question.

How to extract the third largest file size in Unix [duplicate]

Is there a "canonical" way of doing that? I've been using head -n | tail -1 which does the trick, but I've been wondering if there's a Bash tool that specifically extracts a line (or a range of lines) from a file.
By "canonical" I mean a program whose main function is doing that.
head and pipe with tail will be slow for a huge file. I would suggest sed like this:
sed 'NUMq;d' file
Where NUM is the number of the line you want to print; so, for example, sed '10q;d' file to print the 10th line of file.
Explanation:
NUMq will quit immediately when the line number is NUM.
d will delete the line instead of printing it; this is inhibited on the last line because the q causes the rest of the script to be skipped when quitting.
If you have NUM in a variable, you will want to use double quotes instead of single:
sed "${NUM}q;d" file
sed -n '2p' < file.txt
will print 2nd line
sed -n '2011p' < file.txt
2011th line
sed -n '10,33p' < file.txt
line 10 up to line 33
sed -n '1p;3p' < file.txt
1st and 3rd lines
and so on...
For adding lines with sed, you can check this:
sed: insert a line in a certain position
I have a unique situation where I can benchmark the solutions proposed on this page, and so I'm writing this answer as a consolidation of the proposed solutions with included run times for each.
Set Up
I have a 3.261 gigabyte ASCII text data file with one key-value pair per row. The file contains 3,339,550,320 rows in total and defies opening in any editor I have tried, including my go-to Vim. I need to subset this file in order to investigate some of the values that I've discovered only start around row ~500,000,000.
Because the file has so many rows:
I need to extract only a subset of the rows to do anything useful with the data.
Reading through every row leading up to the values I care about is going to take a long time.
If the solution reads past the rows I care about and continues reading the rest of the file it will waste time reading almost 3 billion irrelevant rows and take 6x longer than necessary.
My best-case-scenario is a solution that extracts only a single line from the file without reading any of the other rows in the file, but I can't think of how I would accomplish this in Bash.
For the purposes of my sanity I'm not going to be trying to read the full 500,000,000 lines I'd need for my own problem. Instead I'll be trying to extract row 50,000,000 out of 3,339,550,320 (which means reading the full file will take 60x longer than necessary).
I will be using the time built-in to benchmark each command.
Baseline
First let's see how the head + tail solution performs:
$ time head -50000000 myfile.ascii | tail -1
pgm_icnt = 0
real 1m15.321s
The baseline for row 50 million is 00:01:15.321; if I'd gone straight for row 500 million it'd probably be ~12.5 minutes.
cut
I'm dubious of this one, but it's worth a shot:
$ time cut -f50000000 -d$'\n' myfile.ascii
pgm_icnt = 0
real 5m12.156s
This one took 00:05:12.156 to run, which is much slower than the baseline! I'm not sure whether it read through the entire file or just up to line 50 million before stopping, but regardless this doesn't seem like a viable solution to the problem.
AWK
I only ran the solution with the exit because I wasn't going to wait for the full file to run:
$ time awk 'NR == 50000000 {print; exit}' myfile.ascii
pgm_icnt = 0
real 1m16.583s
This code ran in 00:01:16.583, which is only ~1 second slower, but still not an improvement on the baseline. At this rate if the exit command had been excluded it would have probably taken around ~76 minutes to read the entire file!
Perl
I ran the existing Perl solution as well:
$ time perl -wnl -e '$.== 50000000 && print && exit;' myfile.ascii
pgm_icnt = 0
real 1m13.146s
This code ran in 00:01:13.146, which is ~2 seconds faster than the baseline. If I'd run it on the full 500,000,000 it would probably take ~12 minutes.
sed
The top answer on the board, here's my result:
$ time sed "50000000q;d" myfile.ascii
pgm_icnt = 0
real 1m12.705s
This code ran in 00:01:12.705, which is 3 seconds faster than the baseline, and ~0.4 seconds faster than Perl. If I'd run it on the full 500,000,000 rows it would have probably taken ~12 minutes.
mapfile
I have bash 3.1 and therefore cannot test the mapfile solution.
Conclusion
It looks like, for the most part, it's difficult to improve upon the head tail solution. At best the sed solution provides a ~3% increase in efficiency.
(percentages calculated with the formula % = (runtime/baseline - 1) * 100)
Row 50,000,000
00:01:12.705 (-00:00:02.616 = -3.47%) sed
00:01:13.146 (-00:00:02.175 = -2.89%) perl
00:01:15.321 (+00:00:00.000 = +0.00%) head|tail
00:01:16.583 (+00:00:01.262 = +1.68%) awk
00:05:12.156 (+00:03:56.835 = +314.43%) cut
Row 500,000,000
00:12:07.050 (-00:00:26.160) sed
00:12:11.460 (-00:00:21.750) perl
00:12:33.210 (+00:00:00.000) head|tail
00:12:45.830 (+00:00:12.620) awk
00:52:01.560 (+00:40:31.650) cut
Row 3,338,559,320
01:20:54.599 (-00:03:05.327) sed
01:21:24.045 (-00:02:25.227) perl
01:23:49.273 (+00:00:00.000) head|tail
01:25:13.548 (+00:02:35.735) awk
05:47:23.026 (+04:24:26.246) cut
With awk it is pretty fast:
awk 'NR == num_line' file
When this is true, the default behaviour of awk is performed: {print $0}.
Alternative versions
If your file happens to be huge, you'd better exit after reading the required line. This way you save CPU time. See the time comparison at the end of the answer.
awk 'NR == num_line {print; exit}' file
If you want to give the line number from a bash variable you can use:
awk 'NR == n' n=$num file
awk -v n=$num 'NR == n' file # equivalent
See how much time is saved by using exit, especially if the line happens to be in the first part of the file:
# Let's create a 10M lines file
for ((i=0; i<100000; i++)); do echo "bla bla"; done > 100Klines
for ((i=0; i<100; i++)); do cat 100Klines; done > 10Mlines
$ time awk 'NR == 1234567 {print}' 10Mlines
bla bla
real 0m1.303s
user 0m1.246s
sys 0m0.042s
$ time awk 'NR == 1234567 {print; exit}' 10Mlines
bla bla
real 0m0.198s
user 0m0.178s
sys 0m0.013s
So the difference is 0.198s vs 1.303s, around 6x faster.
According to my tests, in terms of performance and readability my recommendation is:
tail -n+N | head -1
N is the line number that you want. For example, tail -n+7 input.txt | head -1 will print the 7th line of the file.
tail -n+N will print everything starting from line N, and head -1 will make it stop after one line.
The alternative head -N | tail -1 is perhaps slightly more readable. For example, this will print the 7th line:
head -7 input.txt | tail -1
When it comes to performance, there is not much difference for smaller sizes, but it will be outperformed by the tail | head (from above) when the files become huge.
The top-voted sed 'NUMq;d' is interesting to know, but I would argue that it will be understood by fewer people out of the box than the head/tail solution and it is also slower than tail/head.
In my tests, both tails/heads versions outperformed sed 'NUMq;d' consistently. That is in line with the other benchmarks that were posted. It is hard to find a case where tails/heads was really bad. It is also not surprising, as these are operations that you would expect to be heavily optimized in a modern Unix system.
To get an idea about the performance differences, these are the number that I get for a huge file (9.3G):
tail -n+N | head -1: 3.7 sec
head -N | tail -1: 4.6 sec
sed Nq;d: 18.8 sec
Results may differ, but the performance of head | tail and tail | head is, in general, comparable for smaller inputs, and sed is always slower by a significant factor (around 5x or so).
To reproduce my benchmark, you can try the following, but be warned that it will create a 9.3G file in the current working directory:
#!/bin/bash
readonly file=tmp-input.txt
readonly size=1000000000
readonly pos=500000000
readonly retries=3
seq 1 $size > $file
echo "*** head -N | tail -1 ***"
for i in $(seq 1 $retries) ; do
time head "-$pos" $file | tail -1
done
echo "-------------------------"
echo
echo "*** tail -n+N | head -1 ***"
echo
seq 1 $size > $file
ls -alhg $file
for i in $(seq 1 $retries) ; do
time tail -n+$pos $file | head -1
done
echo "-------------------------"
echo
echo "*** sed Nq;d ***"
echo
seq 1 $size > $file
ls -alhg $file
for i in $(seq 1 $retries) ; do
time sed $pos'q;d' $file
done
/bin/rm $file
Here is the output of a run on my machine (ThinkPad X1 Carbon with an SSD and 16G of memory). I assume in the final run everything will come from the cache, not from disk:
*** head -N | tail -1 ***
500000000
real 0m9,800s
user 0m7,328s
sys 0m4,081s
500000000
real 0m4,231s
user 0m5,415s
sys 0m2,789s
500000000
real 0m4,636s
user 0m5,935s
sys 0m2,684s
-------------------------
*** tail -n+N | head -1 ***
-rw-r--r-- 1 phil 9,3G Jan 19 19:49 tmp-input.txt
500000000
real 0m6,452s
user 0m3,367s
sys 0m1,498s
500000000
real 0m3,890s
user 0m2,921s
sys 0m0,952s
500000000
real 0m3,763s
user 0m3,004s
sys 0m0,760s
-------------------------
*** sed Nq;d ***
-rw-r--r-- 1 phil 9,3G Jan 19 19:50 tmp-input.txt
500000000
real 0m23,675s
user 0m21,557s
sys 0m1,523s
500000000
real 0m20,328s
user 0m18,971s
sys 0m1,308s
500000000
real 0m19,835s
user 0m18,830s
sys 0m1,004s
Wow, all the possibilities!
Try this:
sed -n "${lineNum}p" $file
or one of these depending upon your version of Awk:
awk -vlineNum=$lineNum 'NR == lineNum {print $0}' $file
awk -v lineNum=4 '{if (NR == lineNum) {print $0}}' $file
awk '{if (NR == lineNum) {print $0}}' lineNum=$lineNum $file
(You may have to try the nawk or gawk command).
Is there a tool that only does the print that particular line? Not one of the standard tools. However, sed is probably the closest and simplest to use.
Save two keystrokes, and print the Nth line without using brackets:
sed -n Np <fileName>
^ ^
\ \___ 'p' for printing
\______ '-n' for not printing by default
For example, to print 100th line:
sed -n 100p foo.txt
This question being tagged Bash, here's the Bash (≥4) way of doing: use mapfile with the -s (skip) and -n (count) option.
If you need to get the 42nd line of a file file:
mapfile -s 41 -n 1 ary < file
At this point, you'll have an array ary whose fields contain the lines of file (including the trailing newline), where we have skipped the first 41 lines (-s 41), and stopped after reading one line (-n 1). So that's really the 42nd line. To print it out:
printf '%s' "${ary[0]}"
If you need a range of lines, say the range 42–666 (inclusive), and say you don't want to do the math yourself, and print them on stdout:
mapfile -s $((42-1)) -n $((666-42+1)) ary < file
printf '%s' "${ary[#]}"
If you need to process these lines too, it's not really convenient to store the trailing newline. In this case use the -t option (trim):
mapfile -t -s $((42-1)) -n $((666-42+1)) ary < file
# do stuff
printf '%s\n' "${ary[#]}"
You can have a function do that for you:
print_file_range() {
# $1-$2 is the range of file $3 to be printed to stdout
local ary
mapfile -s $(($1-1)) -n $(($2-$1+1)) ary < "$3"
printf '%s' "${ary[#]}"
}
No external commands, only Bash builtins!
You may also use sed's print and quit:
sed -n '10{p;q;}' file # print line 10
You can also use Perl for this:
perl -wnl -e '$.== NUM && print && exit;' some.file
As a followup to CaffeineConnoisseur's very helpful benchmarking answer... I was curious as to how fast the 'mapfile' method was compared to others (as that wasn't tested), so I tried a quick-and-dirty speed comparison myself as I do have bash 4 handy. Threw in a test of the "tail | head" method (rather than head | tail) mentioned in one of the comments on the top answer while I was at it, as folks are singing its praises. I don't have anything nearly the size of the testfile used; the best I could find on short notice was a 14M pedigree file (long lines that are whitespace-separated, just under 12000 lines).
Short version: mapfile appears faster than the cut method, but slower than everything else, so I'd call it a dud. tail | head, OTOH, looks like it could be the fastest, although with a file this size the difference is not all that substantial compared to sed.
$ time head -11000 [filename] | tail -1
[output redacted]
real 0m0.117s
$ time cut -f11000 -d$'\n' [filename]
[output redacted]
real 0m1.081s
$ time awk 'NR == 11000 {print; exit}' [filename]
[output redacted]
real 0m0.058s
$ time perl -wnl -e '$.== 11000 && print && exit;' [filename]
[output redacted]
real 0m0.085s
$ time sed "11000q;d" [filename]
[output redacted]
real 0m0.031s
$ time (mapfile -s 11000 -n 1 ary < [filename]; echo ${ary[0]})
[output redacted]
real 0m0.309s
$ time tail -n+11000 [filename] | head -n1
[output redacted]
real 0m0.028s
Hope this helps!
The fastest solution for big files is always tail|head, provided that the two distances:
from the start of the file to the starting line; let's call it S
from the last line to the end of the file; call it E
are known. Then, we could use this:
mycount="$E"; (( E > S )) && mycount="+$S"
howmany="$(( endline - startline + 1 ))"
tail -n "$mycount"| head -n "$howmany"
howmany is just the count of lines required.
Some more detail in https://unix.stackexchange.com/a/216614/79743
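One way to read those distances so the snippet works, wrapped in a sketch: take S as the starting line number and E as the number of lines from the starting line through the end of the file. The function name and the wc -l call are choices made here (wc -l costs a full pass, so this only pays off if the line count is already known or reused):
print_range() {
    # usage: print_range startline endline file
    local startline=$1 endline=$2 file=$3
    local total S E mycount howmany
    total=$(wc -l < "$file")
    S=$startline
    E=$(( total - startline + 1 ))
    howmany=$(( endline - startline + 1 ))
    mycount="$E"; (( E > S )) && mycount="+$S"
    tail -n "$mycount" "$file" | head -n "$howmany"
}
# e.g. print_range 42 666 file.txt prints lines 42 through 666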
All the above answers directly answer the question. But here's a less direct solution but a potentially more important idea, to provoke thought.
Since line lengths are arbitrary, all the bytes of the file before the nth line need to be read. If you have a huge file or need to repeat this task many times, and this process is time-consuming, then you should seriously think about whether you should be storing your data in a different way in the first place.
The real solution is to have an index, e.g. at the start of the file, indicating the positions where the lines begin. You could use a database format, or just add a table at the start of the file. Alternatively create a separate index file to accompany your large text file.
e.g. you might create a list of character positions for newlines:
awk 'BEGIN{c=0;print(c)}{c+=length()+1;print(c+1)}' file.txt > file.idx
then read with tail, which actually seeks directly to the appropriate point in the file!
e.g. to get line 1000:
tail -c +$(awk 'NR==1000' file.idx) file.txt | head -1
This may not work with 2-byte / multibyte characters, since awk is "character-aware" but tail is not.
I haven't tested this against a large file.
Also see this answer.
Alternatively - split your file into smaller files!
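A minimal sketch of that idea with split (the chunk size, the chunk_ prefix, and the helper name are arbitrary choices, not from the answer above):
split -l 10000000 huge.txt chunk_          # chunk_aa, chunk_ab, ... with 10,000,000 lines each

nth_after_split() {
    # usage: nth_after_split N prefix lines_per_chunk
    local n=$1 prefix=$2 per=$3
    local idx=$(( (n - 1) / per ))          # 0-based index of the chunk holding line n
    local off=$(( (n - 1) % per + 1 ))      # line number inside that chunk
    local chunk
    chunk=$(printf '%s\n' "$prefix"* | sed "$((idx + 1))q;d")   # split suffixes sort in order
    sed "${off}q;d" "$chunk"
}
# e.g. nth_after_split 50000000 chunk_ 10000000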
If you have multiple lines delimited by \n (normally a newline), you can use cut as well:
echo "$data" | cut -f2 -d$'\n'
You will get the 2nd line from the file. -f3 gives you the 3rd line.
Using what others mentioned, I wanted this to be a quick & dandy function in my bash shell.
Create a file: ~/.functions
Add to it the contents:
getline() {
line=$1
sed $line'q;d' $2
}
Then add this to your ~/.bash_profile:
source ~/.functions
Now when you open a new bash window, you can just call the function as so:
getline 441 myfile.txt
Lots of good answers already. I personally go with awk. For convenience, if you use bash, just add the below to your ~/.bash_profile. And, the next time you log in (or if you source your .bash_profile after this update), you will have a new nifty "nth" function available to pipe your files through.
Execute this or put it in your ~/.bash_profile (if using bash) and reopen bash (or execute source ~/.bash_profile)
# print just the nth piped in line
nth () { awk -vlnum=${1} 'NR==lnum {print; exit}'; }
Then, to use it, simply pipe through it. E.g.,:
$ yes line | cat -n | nth 5
5 line
To print nth line using sed with a variable as line number:
a=4
sed -e $a'q;d' file
Here the -e flag is for adding a script to the commands to be executed.
After taking a look at the top answer and the benchmark, I've implemented a tiny helper function:
function nth {
    if (( ${#} < 1 || ${#} > 2 )); then
        echo -e "usage: $0 \e[4mline\e[0m [\e[4mfile\e[0m]"
        return 1
    fi
    if (( ${#} > 1 )); then
        sed "$1q;d" $2
    else
        sed "$1q;d"
    fi
}
Basically you can use it in two fashions:
nth 42 myfile.txt
do_stuff | nth 42
This is not a bash solution, but I found that the top choices didn't satisfy my needs; e.g.,
sed 'NUMq;d' file
was fast enough, but was hanging for hours and did not report any progress. I suggest compiling this C++ program and using it to find the row you want. You can compile it with g++ main.cpp, where main.cpp is the file with the content below. I got a.out and executed it with ./a.out
#include <iostream>
#include <string>
#include <fstream>
using namespace std;

int main() {
    string filename;
    cout << "Enter filename ";
    cin >> filename;

    int needed_row_number;
    cout << "Enter row number ";
    cin >> needed_row_number;

    int progress_line_count;
    cout << "Enter at which every number of rows to monitor progress ";
    cin >> progress_line_count;

    char ch;
    int row_counter = 1;
    fstream fin(filename, fstream::in);
    while (fin >> noskipws >> ch) {
        int ch_int = (int) ch;
        if (row_counter == needed_row_number) {
            cout << ch;
        }
        if (ch_int == 10) {
            if (row_counter == needed_row_number) {
                return 0;
            }
            row_counter++;
            if (row_counter % progress_line_count == 0) {
                cout << "Progress: line " << row_counter << endl;
            }
        }
    }
    return 0;
}
To get an nth line (single line)
If you want something that you can later customize without having to deal with bash, you can compile this C program and drop the binary in your custom binaries directory. This assumes that you know how to edit the .bashrc file accordingly (only if you want to edit your PATH variable); if you don't know, this is a helpful link.
To run this code use (assuming you named the binary "line").
line [target line] [target file]
example
line 2 somefile.txt
The code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(int argc, char* argv[]){

    if(argc != 3){
        fprintf(stderr, "line needs a line number and a file name");
        exit(0);
    }

    int lineNumber = atoi(argv[1]);
    int counter = 0;
    char *fileName = argv[2];

    FILE *fileReader = fopen(fileName, "r");
    if(fileReader == NULL){
        fprintf(stderr, "Failed to open file");
        exit(0);
    }

    size_t lineSize = 0;
    char* line = NULL;

    while(counter < lineNumber - 1){        /* skip the first lineNumber-1 lines */
        getline(&line, &lineSize, fileReader);
        counter++;
    }

    getline(&line, &lineSize, fileReader);  /* read the target line */
    printf("%s\n", line);

    free(line);
    fclose(fileReader);
    return 0;
}
EDIT: removed the fseek and replaced it with a while loop
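For completeness, a typical build and run might look like this (assuming the source above was saved as line.c; the name is an arbitrary choice):
gcc -o line line.c
./line 2 somefile.txt    # prints the 2nd line of somefile.txt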
I've put some of the above answers into a short bash script that you can put into a file called get.sh and link to /usr/local/bin/get (or whatever other name you prefer).
#!/bin/bash
if [ "${1}" == "" ]; then
    echo "error: blank line number";
    exit 1
fi
re='^[0-9]+$'
if ! [[ $1 =~ $re ]] ; then
    echo "error: line number arg not a number";
    exit 1
fi
if [ "${2}" == "" ]; then
    echo "error: blank file name";
    exit 1
fi
sed "${1}q;d" $2;
exit 0
Ensure it's executable with
$ chmod +x get
Link it to make it available on the PATH with
$ ln -s get.sh /usr/local/bin/get
UPDATE 1: found a much faster method in awk
Just 5.353 secs to obtain a row above the 133.6 millionth:
rownum='133668997'; ( time ( pvE0 < ~/master_primelist_18a.txt |
LC_ALL=C mawk2 -F'^$' -v \_="${rownum}" -- '!_{exit}!--_' ) )
in0: 5.45GiB 0:00:05 [1.02GiB/s] [1.02GiB/s] [======> ] 71%
( pvE 0.1 in0 < ~/master_primelist_18a.txt |
LC_ALL=C mawk2 -F'^$' -v -- ; ) 5.01s user
1.21s system 116% cpu 5.353 total
77.37219=195591955519519519559551=0x296B0FA7D668C4A64F7F=
===============================================
I'd like to contest the notion that perl is faster than awk:
So while my test file doesn't have nearly as many rows, it's also twice the size, at 7.58 GB.
I even gave perl some built-in advantages, like hard-coding the row number and going second, thus gaining any potential speedups from the OS caching mechanism, if any.
f="$( grealpath -ePq ~/master_primelist_18a.txt )"
rownum='133668997'
fg;fg; pv < "${f}" | gwc -lcm
echo; sleep 2;
echo;
( time ( pv -i 0.1 -cN in0 < "${f}" |
LC_ALL=C mawk2 '_{exit}_=NR==+__' FS='^$' __="${rownum}"
) ) | mawk 'BEGIN { print } END { print _ } NR'
sleep 2
( time ( pv -i 0.1 -cN in0 < "${f}" |
LC_ALL=C perl -wnl -e '$.== 133668997 && print && exit;'
) ) | mawk 'BEGIN { print } END { print _ } NR' ;
fg: no current job
fg: no current job
7.58GiB 0:00:28 [ 275MiB/s] [============>] 100%
148,110,134 8,134,435,629 8,134,435,629 <<<< rows, chars, and bytes
count as reported by gnu-wc
in0: 5.45GiB 0:00:07 [ 701MiB/s] [=> ] 71%
( pv -i 0.1 -cN in0 < "${f}" | LC_ALL=C mawk2 '_{exit}_=NR==+__' FS='^$' ; )
6.22s user 2.56s system 110% cpu 7.966 total
77.37219=195591955519519519559551=0x296B0FA7D668C4A64F7F=
in0: 5.45GiB 0:00:17 [ 328MiB/s] [=> ] 71%
( pv -i 0.1 -cN in0 < "${f}" | LC_ALL=C perl -wnl -e ; )
14.22s user 3.31s system 103% cpu 17.014 total
77.37219=195591955519519519559551=0x296B0FA7D668C4A64F7F=
I can re-run the test with perl 5.36 or even perl 6 if you think it's going to make a difference (I haven't installed either), but with a gap of
7.966 secs (mawk2) vs. 17.014 secs (perl 5.34)
between the two, with the latter more than double the former, it seems clear which one is indeed meaningfully faster to fetch a single row way deep in ASCII files.
This is perl 5, version 34, subversion 0 (v5.34.0) built for darwin-thread-multi-2level
Copyright 1987-2021, Larry Wall
mawk 1.9.9.6, 21 Aug 2016, Copyright Michael D. Brennan

Calculate CPU per process

I'm trying to write a script that gives back the CPU usage (in %) for a specific process. I need to use /proc/PID/stat because ps aux is not present on the embedded system.
I tried this:
#!/usr/bin/env bash
PID=$1
PREV_TIME=0
PREV_TOTAL=0
while true; do
    TOTAL=$(grep '^cpu ' /proc/stat | awk '{sum=$2+$3+$4+$5+$6+$7+$8+$9+$10; print sum}')
    sfile=`cat /proc/$PID/stat`
    PROC_U_TIME=$(echo $sfile|awk '{print $14}')
    PROC_S_TIME=$(echo $sfile|awk '{print $15}')
    PROC_CU_TIME=$(echo $sfile|awk '{print $16}')
    PROC_CS_TIME=$(echo $sfile|awk '{print $17}')
    let "PROC_TIME=$PROC_U_TIME+$PROC_CU_TIME+$PROC_S_TIME+$PROC_CS_TIME"
    CALC="scale=2 ;(($PROC_TIME-$PREV_TIME)/($TOTAL-$PREV_TOTAL)) *100"
    USER=`bc <<< $CALC`
    PREV_TIME="$PROC_TIME"
    PREV_TOTAL="$TOTAL"
    echo $USER
    sleep 1
done
But it doesn't give the correct value when I compare it to top. Does anyone know where I made a mistake?
Thanks
Under a normal invocation of top (no arguments), the %CPU column is the proportion of ticks used by the process against the total ticks provided by one CPU, over a period of time.
From the top.c source, the %CPU field is calculated as:
float u = (float)p->pcpu * Frame_tscale;
where pcpu for a process is the elapsed user time + system time since the last display:
hist_new[Frame_maxtask].tics = tics = (this->utime + this->stime);
...
if(ptr) tics -= ptr->tics;
...
// we're just saving elapsed tics, to be converted into %cpu if
// this task wins it's displayable screen row lottery... */
this->pcpu = tics;
and:
et = (timev.tv_sec - oldtimev.tv_sec)
+ (float)(timev.tv_usec - oldtimev.tv_usec) / 1000000.0;
Frame_tscale = 100.0f / ((float)Hertz * (float)et * (Rc.mode_irixps ? 1 : Cpu_tot));
Hertz is 100 ticks/second on most systems (grep 'define HZ' /usr/include/asm*/param.h), et is the elapsed time in seconds since the last displayed frame, and Cpu_tot is the number of CPUs (but the 1 is what's used by default).
So, the equation on a system using 100 ticks per second for a process over T seconds is:
(curr_utime + curr_stime - (last_utime + last_stime)) / (100 * T) * 100
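For example (numbers invented purely for illustration): if a process accumulates 45 ticks of utime + stime over a 3-second window on a 100-tick/second kernel, that works out to 45 / (100 * 3) * 100 = 15% CPU, measured against a single CPU, just like top's default display.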
The script becomes:
#!/bin/bash
PID=$1
SLEEP_TIME=3 # seconds
HZ=100       # ticks/second
prev_ticks=0
while true; do
    sfile=$(cat /proc/$PID/stat)
    utime=$(awk '{print $14}' <<< "$sfile")
    stime=$(awk '{print $15}' <<< "$sfile")
    ticks=$(($utime + $stime))
    pcpu=$(bc <<< "scale=4 ; ($ticks - $prev_ticks) / ($HZ * $SLEEP_TIME) * 100")
    prev_ticks="$ticks"
    echo $pcpu
    sleep $SLEEP_TIME
done
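A quick way to try it (the file name is an arbitrary choice, not from the answer):
chmod +x cpu_usage.sh       # assuming you saved the script as cpu_usage.sh
./cpu_usage.sh 1234         # prints the %CPU of PID 1234 every 3 seconds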
The key differences between this approach and that of your original script is that top is computing its CPU time percentages against 1 CPU, whereas you were attempting to do so against the aggregate total for all CPUs. It's also true that you can compute the exact aggregate ticks over a period of time by doing Hertz * time * n_cpus, and that it may not necessarily be the case that the numbers in /proc/stat will sum correctly:
$ grep 'define HZ' /usr/include/asm*/param.h
/usr/include/asm-generic/param.h:#define HZ 100
$ grep ^processor /proc/cpuinfo | wc -l
16
$ t1=$(awk '/^cpu /{sum=$2+$3+$4+$5+$6+$7+$8+$9+$10; print sum}' /proc/stat) ; sleep 1 ; t2=$(awk '/^cpu /{sum=$2+$3+$4+$5+$6+$7+$8+$9+$10; print sum}' /proc/stat) ; echo $(($t2 - $t1))
1602

Is there a shell command to delay a buffer?

I am looking for a shell command X such as, when I execute:
command_a | X 5000 | command_b
the stdout of command_a is written in stdin of command_b (at least) 5 seconds later.
A kind of delaying buffer.
As far as I know, buffer/mbuffer can write at a constant rate (a fixed number of bytes per second). Instead, I would like a constant delay in time (t=0 is when X reads a chunk of command_a's output; at t=5000 it must write this chunk to command_b).
[edit] I've implemented it: https://github.com/rom1v/delay
I know you said you're looking for a shell command, but what about using a subshell to your advantage? Something like:
command_a | (sleep 5; command_b)
So to grep a file cat-ed through (I know, I know, bad use of cat, but just an example):
cat filename | (sleep 5; grep pattern)
A more complete example:
$ cat testfile
The
quick
brown
fox
$ cat testfile | (sleep 5; grep brown)
# A 5-second sleep occurs here
brown
Or even, as Michale Kropat recommends, a group command with sleep would also work (and is arguably more correct). Like so:
$ cat testfile | { sleep 5; grep brown; }
Note: don't forget the semicolon after your command (here, the grep brown), as it is necessary!
As it seemed such a command did not exist, I implemented it in C:
https://github.com/rom1v/delay
delay [-b <dtbufsize>] <delay>
Something like this?
#!/bin/bash
while read -r line
do
    sleep 5
    echo "$line"
done
Save the file as "slowboy", then do
chmod +x slowboy
and run as
command_a | ./slowboy | command_b
This might work
time_buffered () {
    delay=$1
    while read line; do
        printf "%d %s\n" "$(date +%s)" "$line"
    done | while read ts line; do
        now=$(date +%s)
        if (( now - ts < delay )); then
            sleep $(( delay - (now - ts) ))
        fi
        printf "%s\n" "$line"
    done
}
commandA | time_buffered 5 | commandB
The first loop tags each line of its input with a timestamp and immediately feeds it to the second loop. The second loop checks the timestamp of each line, and will sleep if necessary until $delay seconds after it was first read before outputting the line.
Your question intrigued me, and I decided to come back and play with it. Here is a basic implementation in Perl. It's probably not portable (ioctl), tested on Linux only.
The basic idea is:
read available input every X microseconds
store each input chunk in a hash, with current timestamp as key
also push current timestamp on a queue (array)
lookup oldest timestamps on queue and write + discard data from the hash if delayed long enough
repeat
Max buffer size
There is a max size for stored data. If reached, additional data will not be read until space becomes available after writing.
Performance
It is probably not fast enough for your requirements (several Mb/s). My max throughput was 639 Kb/s, see below.
Testing
# Measure max throughput:
$ pv < /dev/zero | ./buffer_delay.pl > /dev/null
# Interactive manual test, use two terminal windows:
$ mkfifo data_fifo
terminal-one $ cat > data_fifo
terminal-two $ ./buffer_delay.pl < data_fifo
# now type in terminal-one and see it appear delayed in terminal-two.
# It will be line-buffered because of the terminals, not a limitation
# of buffer_delay.pl
buffer_delay.pl
#!/usr/bin/perl

use strict;
use warnings;

use IO::Select;
use Time::HiRes qw(gettimeofday usleep);

require 'sys/ioctl.ph';

$|++;

my $delay_usec      = 3 * 1000000;       # (3s) delay in microseconds
my $buffer_size_max = 10 * 1024 * 1024;  # (10 Mb) max bytes our buffer is allowed to contain.
                                         # When buffer is full, incoming data will not be read
                                         # until space becomes available after writing
my $read_frequency  = 10;                # Approximate read frequency in Hz (will not be exact)

my %buffer;           # the data we are delaying, saved in chunks by timestamp
my @timestamps;       # keys to %buffer, used as a queue
my $buffer_size = 0;  # num bytes currently in %buffer, compare to $buffer_size_max

my $time_slice = 1000000 / $read_frequency;  # microseconds, min time for each discrete read-step
my $sel = IO::Select->new([\*STDIN]);
my $overflow_unread = 0;  # Num bytes waiting when $buffer_size_max is reached

while (1) {
    my $now = sprintf "%d%06d", gettimeofday;  # timestamp, used to label incoming chunks

    # input available?
    if ($overflow_unread || $sel->can_read($time_slice / 1000000)) {

        # how much?
        my $available_bytes;
        if ($overflow_unread) {
            $available_bytes = $overflow_unread;
        }
        else {
            $available_bytes = pack("L", 0);
            ioctl (STDIN, FIONREAD(), $available_bytes);
            $available_bytes = unpack("L", $available_bytes);
        }

        # will it fit?
        my $remaining_space = $buffer_size_max - $buffer_size;
        my $try_to_read_bytes = $available_bytes;
        if ($try_to_read_bytes > $remaining_space) {
            $try_to_read_bytes = $remaining_space;
        }

        # read input
        if ($try_to_read_bytes > 0) {
            my $input_data;
            my $num_read = read (STDIN, $input_data, $try_to_read_bytes);
            die "read error: $!" unless defined $num_read;
            exit if $num_read == 0;            # EOF

            $buffer{$now} = $input_data;       # save input
            push @timestamps, $now;            # save the timestamp
            $buffer_size += length $input_data;
            if ($overflow_unread) {
                $overflow_unread -= length $input_data;
            }
            elsif (length $input_data < $available_bytes) {
                $overflow_unread = $available_bytes - length $input_data;
            }
        }
    }

    # write + delete any data old enough
    my $then = $now - $delay_usec;  # when data is old enough
    while (scalar @timestamps && $timestamps[0] < $then) {
        my $ts = shift @timestamps;
        print $buffer{$ts} if defined $buffer{$ts};
        $buffer_size -= length $buffer{$ts};
        die "Serious problem\n" unless $buffer_size >= 0;
        delete $buffer{$ts};
    }

    # usleep any remaining time up to $time_slice
    my $time_left = (sprintf "%d%06d", gettimeofday) - $now;
    usleep ($time_slice - $time_left) if $time_slice > $time_left;
}
Feel free to post comments and suggestions below!

Bash tool to get nth line from a file

Is there a "canonical" way of doing that? I've been using head -n | tail -1 which does the trick, but I've been wondering if there's a Bash tool that specifically extracts a line (or a range of lines) from a file.
By "canonical" I mean a program whose main function is doing that.
head and pipe with tail will be slow for a huge file. I would suggest sed like this:
sed 'NUMq;d' file
Where NUM is the number of the line you want to print; so, for example, sed '10q;d' file to print the 10th line of file.
Explanation:
NUMq will quit immediately when the line number is NUM.
d will delete the line instead of printing it; this is inhibited on the last line because the q causes the rest of the script to be skipped when quitting.
If you have NUM in a variable, you will want to use double quotes instead of single:
sed "${NUM}q;d" file
sed -n '2p' < file.txt
will print 2nd line
sed -n '2011p' < file.txt
2011th line
sed -n '10,33p' < file.txt
line 10 up to line 33
sed -n '1p;3p' < file.txt
1st and 3th line
and so on...
For adding lines with sed, you can check this:
sed: insert a line in a certain position
I have a unique situation where I can benchmark the solutions proposed on this page, and so I'm writing this answer as a consolidation of the proposed solutions with included run times for each.
Set Up
I have a 3.261 gigabyte ASCII text data file with one key-value pair per row. The file contains 3,339,550,320 rows in total and defies opening in any editor I have tried, including my go-to Vim. I need to subset this file in order to investigate some of the values that I've discovered only start around row ~500,000,000.
Because the file has so many rows:
I need to extract only a subset of the rows to do anything useful with the data.
Reading through every row leading up to the values I care about is going to take a long time.
If the solution reads past the rows I care about and continues reading the rest of the file it will waste time reading almost 3 billion irrelevant rows and take 6x longer than necessary.
My best-case-scenario is a solution that extracts only a single line from the file without reading any of the other rows in the file, but I can't think of how I would accomplish this in Bash.
For the purposes of my sanity I'm not going to be trying to read the full 500,000,000 lines I'd need for my own problem. Instead I'll be trying to extract row 50,000,000 out of 3,339,550,320 (which means reading the full file will take 60x longer than necessary).
I will be using the time built-in to benchmark each command.
Baseline
First let's see how the head tail solution:
$ time head -50000000 myfile.ascii | tail -1
pgm_icnt = 0
real 1m15.321s
The baseline for row 50 million is 00:01:15.321, if I'd gone straight for row 500 million it'd probably be ~12.5 minutes.
cut
I'm dubious of this one, but it's worth a shot:
$ time cut -f50000000 -d$'\n' myfile.ascii
pgm_icnt = 0
real 5m12.156s
This one took 00:05:12.156 to run, which is much slower than the baseline! I'm not sure whether it read through the entire file or just up to line 50 million before stopping, but regardless this doesn't seem like a viable solution to the problem.
AWK
I only ran the solution with the exit because I wasn't going to wait for the full file to run:
$ time awk 'NR == 50000000 {print; exit}' myfile.ascii
pgm_icnt = 0
real 1m16.583s
This code ran in 00:01:16.583, which is only ~1 second slower, but still not an improvement on the baseline. At this rate if the exit command had been excluded it would have probably taken around ~76 minutes to read the entire file!
Perl
I ran the existing Perl solution as well:
$ time perl -wnl -e '$.== 50000000 && print && exit;' myfile.ascii
pgm_icnt = 0
real 1m13.146s
This code ran in 00:01:13.146, which is ~2 seconds faster than the baseline. If I'd run it on the full 500,000,000 it would probably take ~12 minutes.
sed
The top answer on the board, here's my result:
$ time sed "50000000q;d" myfile.ascii
pgm_icnt = 0
real 1m12.705s
This code ran in 00:01:12.705, which is 3 seconds faster than the baseline, and ~0.4 seconds faster than Perl. If I'd run it on the full 500,000,000 rows it would have probably taken ~12 minutes.
mapfile
I have bash 3.1 and therefore cannot test the mapfile solution.
Conclusion
It looks like, for the most part, it's difficult to improve upon the head tail solution. At best the sed solution provides a ~3% increase in efficiency.
(percentages calculated with the formula % = (runtime/baseline - 1) * 100)
Row 50,000,000
00:01:12.705 (-00:00:02.616 = -3.47%) sed
00:01:13.146 (-00:00:02.175 = -2.89%) perl
00:01:15.321 (+00:00:00.000 = +0.00%) head|tail
00:01:16.583 (+00:00:01.262 = +1.68%) awk
00:05:12.156 (+00:03:56.835 = +314.43%) cut
Row 500,000,000
00:12:07.050 (-00:00:26.160) sed
00:12:11.460 (-00:00:21.750) perl
00:12:33.210 (+00:00:00.000) head|tail
00:12:45.830 (+00:00:12.620) awk
00:52:01.560 (+00:40:31.650) cut
Row 3,338,559,320
01:20:54.599 (-00:03:05.327) sed
01:21:24.045 (-00:02:25.227) perl
01:23:49.273 (+00:00:00.000) head|tail
01:25:13.548 (+00:02:35.735) awk
05:47:23.026 (+04:24:26.246) cut
With awk it is pretty fast:
awk 'NR == num_line' file
When this is true, the default behaviour of awk is performed: {print $0}.
Alternative versions
If your file happens to be huge, you'd better exit after reading the required line. This way you save CPU time See time comparison at the end of the answer.
awk 'NR == num_line {print; exit}' file
If you want to give the line number from a bash variable you can use:
awk 'NR == n' n=$num file
awk -v n=$num 'NR == n' file # equivalent
See how much time is saved by using exit, specially if the line happens to be in the first part of the file:
# Let's create a 10M lines file
for ((i=0; i<100000; i++)); do echo "bla bla"; done > 100Klines
for ((i=0; i<100; i++)); do cat 100Klines; done > 10Mlines
$ time awk 'NR == 1234567 {print}' 10Mlines
bla bla
real 0m1.303s
user 0m1.246s
sys 0m0.042s
$ time awk 'NR == 1234567 {print; exit}' 10Mlines
bla bla
real 0m0.198s
user 0m0.178s
sys 0m0.013s
So the difference is 0.198s vs 1.303s, around 6x times faster.
According to my tests, in terms of performance and readability my recommendation is:
tail -n+N | head -1
N is the line number that you want. For example, tail -n+7 input.txt | head -1 will print the 7th line of the file.
tail -n+N will print everything starting from line N, and head -1 will make it stop after one line.
The alternative head -N | tail -1 is perhaps slightly more readable. For example, this will print the 7th line:
head -7 input.txt | tail -1
When it comes to performance, there is not much difference for smaller sizes, but it will be outperformed by the tail | head (from above) when the files become huge.
The top-voted sed 'NUMq;d' is interesting to know, but I would argue that it will be understood by fewer people out of the box than the head/tail solution and it is also slower than tail/head.
In my tests, both tails/heads versions outperformed sed 'NUMq;d' consistently. That is in line with the other benchmarks that were posted. It is hard to find a case where tails/heads was really bad. It is also not surprising, as these are operations that you would expect to be heavily optimized in a modern Unix system.
To get an idea about the performance differences, these are the number that I get for a huge file (9.3G):
tail -n+N | head -1: 3.7 sec
head -N | tail -1: 4.6 sec
sed Nq;d: 18.8 sec
Results may differ, but the performance head | tail and tail | head is, in general, comparable for smaller inputs, and sed is always slower by a significant factor (around 5x or so).
To reproduce my benchmark, you can try the following, but be warned that it will create a 9.3G file in the current working directory:
#!/bin/bash
readonly file=tmp-input.txt
readonly size=1000000000
readonly pos=500000000
readonly retries=3
seq 1 $size > $file
echo "*** head -N | tail -1 ***"
for i in $(seq 1 $retries) ; do
time head "-$pos" $file | tail -1
done
echo "-------------------------"
echo
echo "*** tail -n+N | head -1 ***"
echo
seq 1 $size > $file
ls -alhg $file
for i in $(seq 1 $retries) ; do
time tail -n+$pos $file | head -1
done
echo "-------------------------"
echo
echo "*** sed Nq;d ***"
echo
seq 1 $size > $file
ls -alhg $file
for i in $(seq 1 $retries) ; do
time sed $pos'q;d' $file
done
/bin/rm $file
Here is the output of a run on my machine (ThinkPad X1 Carbon with an SSD and 16G of memory). I assume in the final run everything will come from the cache, not from disk:
*** head -N | tail -1 ***
500000000
real 0m9,800s
user 0m7,328s
sys 0m4,081s
500000000
real 0m4,231s
user 0m5,415s
sys 0m2,789s
500000000
real 0m4,636s
user 0m5,935s
sys 0m2,684s
-------------------------
*** tail -n+N | head -1 ***
-rw-r--r-- 1 phil 9,3G Jan 19 19:49 tmp-input.txt
500000000
real 0m6,452s
user 0m3,367s
sys 0m1,498s
500000000
real 0m3,890s
user 0m2,921s
sys 0m0,952s
500000000
real 0m3,763s
user 0m3,004s
sys 0m0,760s
-------------------------
*** sed Nq;d ***
-rw-r--r-- 1 phil 9,3G Jan 19 19:50 tmp-input.txt
500000000
real 0m23,675s
user 0m21,557s
sys 0m1,523s
500000000
real 0m20,328s
user 0m18,971s
sys 0m1,308s
500000000
real 0m19,835s
user 0m18,830s
sys 0m1,004s
Wow, all the possibilities!
Try this:
sed -n "${lineNum}p" $file
or one of these depending upon your version of Awk:
awk -vlineNum=$lineNum 'NR == lineNum {print $0}' $file
awk -v lineNum=4 '{if (NR == lineNum) {print $0}}' $file
awk '{if (NR == lineNum) {print $0}}' lineNum=$lineNum $file
(You may have to try the nawk or gawk command).
Is there a tool that only does the print that particular line? Not one of the standard tools. However, sed is probably the closest and simplest to use.
Save two keystrokes, print Nth line without using bracket:
sed -n Np <fileName>
^ ^
\ \___ 'p' for printing
\______ '-n' for not printing by default
For example, to print 100th line:
sed -n 100p foo.txt
This question being tagged Bash, here's the Bash (≥4) way of doing: use mapfile with the -s (skip) and -n (count) option.
If you need to get the 42nd line of a file file:
mapfile -s 41 -n 1 ary < file
At this point, you'll have an array ary the fields of which containing the lines of file (including the trailing newline), where we have skipped the first 41 lines (-s 41), and stopped after reading one line (-n 1). So that's really the 42nd line. To print it out:
printf '%s' "${ary[0]}"
If you need a range of lines, say the range 42–666 (inclusive), and say you don't want to do the math yourself, and print them on stdout:
mapfile -s $((42-1)) -n $((666-42+1)) ary < file
printf '%s' "${ary[#]}"
If you need to process these lines too, it's not really convenient to store the trailing newline. In this case use the -t option (trim):
mapfile -t -s $((42-1)) -n $((666-42+1)) ary < file
# do stuff
printf '%s\n' "${ary[#]}"
You can have a function do that for you:
print_file_range() {
# $1-$2 is the range of file $3 to be printed to stdout
local ary
mapfile -s $(($1-1)) -n $(($2-$1+1)) ary < "$3"
printf '%s' "${ary[#]}"
}
No external commands, only Bash builtins!
You may also used sed print and quit:
sed -n '10{p;q;}' file # print line 10
You can also use Perl for this:
perl -wnl -e '$.== NUM && print && exit;' some.file
As a followup to CaffeineConnoisseur's very helpful benchmarking answer... I was curious as to how fast the 'mapfile' method was compared to others (as that wasn't tested), so I tried a quick-and-dirty speed comparison myself as I do have bash 4 handy. Threw in a test of the "tail | head" method (rather than head | tail) mentioned in one of the comments on the top answer while I was at it, as folks are singing its praises. I don't have anything nearly the size of the testfile used; the best I could find on short notice was a 14M pedigree file (long lines that are whitespace-separated, just under 12000 lines).
Short version: mapfile appears faster than the cut method, but slower than everything else, so I'd call it a dud. tail | head, OTOH, looks like it could be the fastest, although with a file this size the difference is not all that substantial compared to sed.
$ time head -11000 [filename] | tail -1
[output redacted]
real 0m0.117s
$ time cut -f11000 -d$'\n' [filename]
[output redacted]
real 0m1.081s
$ time awk 'NR == 11000 {print; exit}' [filename]
[output redacted]
real 0m0.058s
$ time perl -wnl -e '$.== 11000 && print && exit;' [filename]
[output redacted]
real 0m0.085s
$ time sed "11000q;d" [filename]
[output redacted]
real 0m0.031s
$ time (mapfile -s 11000 -n 1 ary < [filename]; echo ${ary[0]})
[output redacted]
real 0m0.309s
$ time tail -n+11000 [filename] | head -n1
[output redacted]
real 0m0.028s
Hope this helps!
The fastest solution for big files is always tail|head, provided that the two distances:
from the start of the file to the starting line. Lets call it S
the distance from the last line to the end of the file. Be it E
are known. Then, we could use this:
mycount="$E"; (( E > S )) && mycount="+$S"
howmany="$(( endline - startline + 1 ))"
tail -n "$mycount"| head -n "$howmany"
howmany is just the count of lines required.
Some more detail in https://unix.stackexchange.com/a/216614/79743
All the above answers directly answer the question. But here's a less direct solution but a potentially more important idea, to provoke thought.
Since line lengths are arbitrary, all the bytes of the file before the nth line need to be read. If you have a huge file or need to repeat this task many times, and this process is time-consuming, then you should seriously think about whether you should be storing your data in a different way in the first place.
The real solution is to have an index, e.g. at the start of the file, indicating the positions where the lines begin. You could use a database format, or just add a table at the start of the file. Alternatively create a separate index file to accompany your large text file.
e.g. you might create a list of character positions for newlines:
awk 'BEGIN{c=0;print(c)}{c+=length()+1;print(c+1)}' file.txt > file.idx
then read with tail, which actually seeks directly to the appropriate point in the file!
e.g. to get line 1000:
tail -c +$(awk 'NR == 1000 { print; exit }' file.idx) file.txt | head -1
This may not work with 2-byte / multibyte characters, since awk is "character-aware" but tail is not.
I haven't tested this against a large file.
Also see this answer.
Alternatively - split your file into smaller files!
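Wrapping the index lookup above in a small helper (a sketch; the function name and the .idx naming convention are mine, and the multibyte caveat above still applies):
line_via_index() {
    # usage: line_via_index n file
    # expects file.idx to contain byte offsets built as shown above
    local n=$1 file=$2
    local offset
    offset=$(awk -v lnum="$n" 'NR == lnum { print; exit }' "$file.idx")
    tail -c "+$offset" "$file" | head -1
}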
If your data consists of multiple lines delimited by \n (newline), you can use cut as well:
echo "$data" | cut -f2 -d$'\n'
You will get the 2nd line from that data; -f3 gives you the 3rd line.
Using what others mentioned, I wanted this to be a quick & dandy function in my bash shell.
Create a file: ~/.functions
Add to it the contents:
getline() {
local line=$1
sed "${line}q;d" "$2"
}
Then add this to your ~/.bash_profile:
source ~/.functions
Now when you open a new bash window, you can just call the function as so:
getline 441 myfile.txt
Lots of good answers already. I personally go with awk. For convenience, if you use bash, just add the below to your ~/.bash_profile. And, the next time you log in (or if you source your .bash_profile after this update), you will have a new nifty "nth" function available to pipe your files through.
Execute this or put it in your ~/.bash_profile (if using bash) and reopen bash (or execute source ~/.bash_profile)
# print just the nth piped in line
nth () { awk -vlnum=${1} 'NR==lnum {print; exit}'; }
Then, to use it, simply pipe through it. E.g.,:
$ yes line | cat -n | nth 5
5 line
To print nth line using sed with a variable as line number:
a=4
sed -e $a'q;d' file
Here the -e flag adds the sed script to the commands to be executed.
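The same command with the variable inside double quotes may be easier to read and avoids the quote splicing; file is a placeholder:
a=4
# the whole expression expands to the single sed script "4q;d"
sed "${a}q;d" file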
After taking a look at the top answer and the benchmark, I've implemented a tiny helper function:
function nth {
if (( ${#} < 1 || ${#} > 2 )); then
echo -e "usage: $0 \e[4mline\e[0m [\e[4mfile\e[0m]"
return 1
fi
if (( ${#} > 1 )); then
sed "$1q;d" $2
else
sed "$1q;d"
fi
}
Basically you can use it in two fashions:
nth 42 myfile.txt
do_stuff | nth 42
This is not a bash solution, but I found that the top choices didn't satisfy my needs; e.g.,
sed 'NUMq;d' file
was supposed to be fast, but on my file it ran for hours without reporting any progress. I suggest compiling this C++ program and using it to find the row you want. You can compile it with g++ main.cpp, where main.cpp is the file with the content below. I got a.out and executed it with ./a.out
#include <iostream>
#include <string>
#include <fstream>
using namespace std;
int main() {
string filename;
cout << "Enter filename ";
cin >> filename;
int needed_row_number;
cout << "Enter row number ";
cin >> needed_row_number;
int progress_line_count;
cout << "Enter at which every number of rows to monitor progress ";
cin >> progress_line_count;
char ch;
int row_counter = 1;
fstream fin(filename, fstream::in);
while (fin >> noskipws >> ch) {
int ch_int = (int) ch;
if (row_counter == needed_row_number) {
cout << ch;
}
if (ch_int == 10) {
if (row_counter == needed_row_number) {
return 0;
}
row_counter++;
if (row_counter % progress_line_count == 0) {
cout << "Progress: line " << row_counter << endl;
}
}
}
return 0;
}
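One way to build and drive it non-interactively (the binary name, the test file, and the numbers are placeholders; the program reads its three answers from stdin):
# compile with optimizations, then answer the three prompts via a pipe
g++ -O2 -o nthline main.cpp
printf '%s\n' bigfile.txt 11000 100000 | ./nthline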
To get an nth line (single line)
If you want something that you can later customize without having to deal with bash, you can compile this C program and drop the binary in your custom binaries directory. This assumes that you know how to edit your .bashrc file accordingly (only if you want to edit your PATH variable); if you don't, this is a helpful link.
To run this code use (assuming you named the binary "line").
line [target line] [target file]
example
line 2 somefile.txt
The code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main(int argc, char* argv[]){
if(argc != 3){
fprintf(stderr, "line needs a line number and a file name");
exit(0);
}
int lineNumber = atoi(argv[1]);
int counter = 0;
char *fileName = argv[2];
FILE *fileReader = fopen(fileName, "r");
if(fileReader == NULL){
fprintf(stderr, "Failed to open file");
exit(0);
}
size_t lineSize = 0;
char* line = NULL;
while(counter < lineNumber - 1){
getline(&line, &lineSize, fileReader);
counter++;
}
getline(&line, &lineSize, fileReader);
printf("%s", line); /* getline keeps the trailing newline */
fclose(fileReader);
return 0;
}
EDIT: removed the fseek and replaced it with a while loop
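A sketch of building and installing it (the source file name and install directory are placeholders; adjust to wherever your PATH points):
gcc -O2 -o line line.c      # compile; getline() needs a POSIX/glibc system
cp line ~/bin/              # drop the binary somewhere on your PATH
line 2 somefile.txt         # print the 2nd line of somefile.txt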
I've put some of the above answers into a short bash script that you can put into a file called get.sh and link to /usr/local/bin/get (or whatever other name you prefer).
#!/bin/bash
if [ "${1}" == "" ]; then
echo "error: blank line number";
exit 1
fi
re='^[0-9]+$'
if ! [[ $1 =~ $re ]] ; then
echo "error: line number arg not a number";
exit 1
fi
if [ "${2}" == "" ]; then
echo "error: blank file name";
exit 1
fi
sed "${1}q;d" $2;
exit 0
Ensure it's executable with
$ chmod +x get.sh
Link it to make it available on the PATH with
$ ln -s "$PWD/get.sh" /usr/local/bin/get
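After that, a quick usage check (somefile.txt is a placeholder):
get 2000 somefile.txt   # prints line 2000 of somefile.txt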
UPDATE 1: found a much faster method in awk,
just 5.353 secs to obtain a row beyond the 133.6-millionth line:
rownum='133668997'; ( time ( pvE0 < ~/master_primelist_18a.txt |
LC_ALL=C mawk2 -F'^$' -v \_="${rownum}" -- '!_{exit}!--_' ) )
in0: 5.45GiB 0:00:05 [1.02GiB/s] [1.02GiB/s] [======> ] 71%
( pvE 0.1 in0 < ~/master_primelist_18a.txt |
LC_ALL=C mawk2 -F'^$' -v -- ; ) 5.01s user
1.21s system 116% cpu 5.353 total
77.37219=195591955519519519559551=0x296B0FA7D668C4A64F7F=
===============================================
I'd like to contest the notion that perl is faster than awk:
while my test file doesn't have nearly as many rows, it's also twice the size, at 7.58 GB.
I even gave perl some built-in advantages, like hard-coding the row number and going second, so it gains any potential speedup from the OS caching mechanism, if any.
f="$( grealpath -ePq ~/master_primelist_18a.txt )"
rownum='133668997'
fg;fg; pv < "${f}" | gwc -lcm
echo; sleep 2;
echo;
( time ( pv -i 0.1 -cN in0 < "${f}" |
LC_ALL=C mawk2 '_{exit}_=NR==+__' FS='^$' __="${rownum}"
) ) | mawk 'BEGIN { print } END { print _ } NR'
sleep 2
( time ( pv -i 0.1 -cN in0 < "${f}" |
LC_ALL=C perl -wnl -e '$.== 133668997 && print && exit;'
) ) | mawk 'BEGIN { print } END { print _ } NR' ;
fg: no current job
fg: no current job
7.58GiB 0:00:28 [ 275MiB/s] [============>] 100%
148,110,134 8,134,435,629 8,134,435,629 <<<< rows, chars, and bytes
count as reported by gnu-wc
in0: 5.45GiB 0:00:07 [ 701MiB/s] [=> ] 71%
( pv -i 0.1 -cN in0 < "${f}" | LC_ALL=C mawk2 '_{exit}_=NR==+__' FS='^$' ; )
6.22s user 2.56s system 110% cpu 7.966 total
77.37219=195591955519519519559551=0x296B0FA7D668C4A64F7F=
in0: 5.45GiB 0:00:17 [ 328MiB/s] [=> ] 71%
( pv -i 0.1 -cN in0 < "${f}" | LC_ALL=C perl -wnl -e ; )
14.22s user 3.31s system 103% cpu 17.014 total
77.37219=195591955519519519559551=0x296B0FA7D668C4A64F7F=
I can re-run the test with perl 5.36 or even perl 6 if you think it would make a difference (I haven't installed either), but with a gap of
7.966 secs (mawk2) vs. 17.014 secs (perl 5.34)
between the two, with the latter more than double the former, it seems clear which one is meaningfully faster at fetching a single row deep inside an ASCII file.
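For readers without mawk2, a plainer restatement of the same idea in standard awk, reusing the $f and $rownum variables from the transcript above; it prints the requested row and stops reading immediately:
# exit right after the match instead of scanning the rest of the file
LC_ALL=C awk -v lnum="$rownum" 'NR == lnum { print; exit }' "$f"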
This is perl 5, version 34, subversion 0 (v5.34.0) built for darwin-thread-multi-2level
Copyright 1987-2021, Larry Wall
mawk 1.9.9.6, 21 Aug 2016, Copyright Michael D. Brennan

Resources