Use Bash scripting to select columns and rows with specific names - bash
I'm working with a very large text file (4GB) and I want to make a smaller file with only the data I need in it. It is a tab-delimited file with row and column headers. I basically want to select a subset of the data by column and/or row name.
colname_1 colname_2 colname_3 colname_4
row_1 1 2 3 5
row_2 4 6 9 1
row_3 2 3 4 2
I'm planning to have a file with a list of the columns I want.
colname_1 colname_3
I'm a newbie to bash scripting and I really don't know how to do this. I saw other examples, but they all knew what column number they wanted in advance and I don't. Sorry if this is a repeat question; I tried to search.
I would want the result to be
colname_1 colname_3
row_1 1 3
row_2 4 9
row_3 2 4
Bash works best as "glue" between standard command-line utilities. You can write loops which read each line in a massive file, but it's painfully slow because bash is not optimized for speed. So let's see how to use a few standard utilities -- grep, tr, cut and paste -- to achieve this goal.
For simplicity, let's put the desired column headings into a file, one per line. (You can always convert a tab-separated line of column headings to this format; we're going to do just that with the data file's column headings. But one thing at a time.)
$ printf '%s\n' colname_{1,3} > columns
$ cat columns
colname_1
colname_3
An important feature of the printf command-line utility is that it repeats its format until it runs out of arguments.
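For instance, a single '%s\n' format applied to three arguments yields three lines:
$ printf '%s\n' one two three
one
two
three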
Now, we want to know which column in the data file each of these column headings corresponds to. We could try to write this as a loop in awk or even in bash, but if we convert the header line of the data file into a file with one header per line, we can use grep to tell us, by using the -n option (which prefixes the output with the line number of the match).
Since the column headers are tab-separated, we can turn them into separate lines just by converting tabs to newlines with tr:
$ head -n1 giga.dat | tr '\t' '\n'

colname_1
colname_2
colname_3
colname_4
Note the blank line at the beginning. That's important, because colname_1 actually corresponds to column 2, since the row headers are in column 1.
So let's look up the column names. Here, we will use several grep options:
-F The pattern argument consists of several patterns, one per line, which are interpreted as ordinary strings instead of regexes.
-x The pattern must match the complete line.
-n The output should be prefixed by the line number of the match.
If we have GNU grep, we could also use -f columns to read the patterns from the file named columns. Or, since we're using bash, we could use the bashism "$(<columns)" to insert the contents of the file as a single argument to grep. But for now, we'll stay POSIX compliant:
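(For reference, the GNU grep variant of that lookup reads the patterns straight from the file and produces the same output:
$ head -n1 giga.dat | tr '\t' '\n' | grep -Fxn -f columns
2:colname_1
4:colname_3
)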
$ head -n1 giga.dat | tr '\t' '\n' | grep -Fxn "$(cat columns)"
2:colname_1
4:colname_3
OK, that's pretty close. We just need to get rid of everything other than the line number; comma-separate the numbers, and put a 1 at the beginning.
$ { echo 1
> grep -Fxn "$(<columns)" < <(head -n1 giga.dat | tr '\t' '\n')
> } | cut -f1 -d: | paste -sd,
1,2,4
cut -f1 Select field 1. The argument could be a comma-separated list, as in cut -f1,2,4.
cut -d: Use : instead of tab as a field separator ("delimiter")
paste -s Concatenate the lines of a single file instead of corresponding lines of several files
paste -d, Use a comma instead of tab as a field separator.
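To see the cut and paste steps in isolation, here is a quick sanity check that feeds them the grep output from above by hand:
$ printf '%s\n' 2:colname_1 4:colname_3 | cut -f1 -d: | paste -sd,
2,4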
So now we have the argument we need to pass to cut in order to select the desired columns:
$ cut -f"$({ echo 1
> head -n1 giga.dat | tr '\t' '\n' | grep -Fxn -f columns
> } | cut -f1 -d: | paste -sd,)" giga.dat
colname_1 colname_3
row_1 1 3
row_2 4 9
row_3 2 4
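If you need to do this more than once, the same pipeline can be wrapped in a short script. This is only a sketch under the same assumptions as above (tab-separated data, column headings listed one per line in the second file); the script name and argument order are made up for the example:

#!/bin/sh
## usage: sh select_cols.sh DATAFILE COLUMNSFILE
## DATAFILE     tab-separated data with row and column headers
## COLUMNSFILE  desired column headings, one per line
datafile=$1
colfile=$2

## map the wanted headings to field numbers, prepending 1 for the row labels
fields=$({ echo 1
           head -n1 "$datafile" | tr '\t' '\n' | grep -Fxn "$(cat "$colfile")"
         } | cut -f1 -d: | paste -sd,)

## select those fields from the whole file
cut -f"$fields" "$datafile"

Invoked as, e.g., sh select_cols.sh giga.dat columns, it should print the same four-line result shown above.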
You can do this by keeping track of the array indexes of the data-file columns whose names match the names in your column-list file. Once you have found those indexes, you simply read the data file (beginning at the second line) and output the row label plus the data found at each of the saved indexes.
There are probably several ways to approach this and the following assumes the data in each column does not contain any whitespace. The use of arrays presumes bash (or other advanced shell supporting arrays) and not POSIX shell.
The script takes two file names as input. The first is your original data file. The second is your column list file. An approach could be:
#!/bin/bash

declare -a cols    ## array holding original columns from original data file
declare -a csel    ## array holding columns to select (from file 2)
declare -a cpos    ## array holding array indexes of matching columns

cols=( $(head -n 1 "$1") )    ## fill cols from 1st line of data file
csel=( $(< "$2") )            ## read select columns from file 2

## fill column position array
for ((i = 0; i < ${#csel[@]}; i++)); do
    for ((j = 0; j < ${#cols[@]}; j++)); do
        [ "${csel[i]}" = "${cols[j]}" ] && cpos+=( $j )
    done
done

printf " "
for ((i = 0; i < ${#csel[@]}; i++)); do    ## output header row
    printf " %s" "${csel[i]}"
done
printf "\n"                                ## output newline

unset cols                                 ## unset cols to reuse in reading lines below

while read -r line; do                     ## read each data line in data file
    cols=( $line )                         ## separate into cols array
    printf "%s" "${cols[0]}"               ## output row label
    for ((j = 0; j < ${#cpos[@]}; j++)); do
        [ "$j" -eq "0" ] && {              ## handle format for first column
            printf "%5s" "${cols[$((${cpos[j]}+1))]}"
            continue
        }
        ## output remaining columns
        printf "%13s" "${cols[$((${cpos[j]}+1))]}"
    done
    printf "\n"
done < <( tail -n+2 "$1" )
Using your example data as follows:
Data File
$ cat dat/col+data.txt
colname_1 colname_2 colname_3 colname_4
row_1 1 2 3 5
row_2 4 6 9 1
row_3 2 3 4 2
Column Select File
$ cat dat/col.txt
colname_1 colname_3
Example Use/Output
$ bash colnum.sh dat/col+data.txt dat/col.txt
colname_1 colname_3
row_1 1 3
row_2 4 9
row_3 2 4
Give it a try and let me know if you have any questions. Note, bash isn't known for its blinding speed when handling large files, but as long as the column list isn't horrendously long, the script should be reasonably fast.
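For a 4GB file it may also be worth noting that the same idea translates directly to awk, which avoids the per-line cost of a bash read loop entirely. The following is only a rough sketch, not a drop-in replacement: it assumes whitespace-separated fields, a row label in the first column of every data line, and no embedded whitespace in any value.

awk '
NR == FNR { for (i = 1; i <= NF; i++) want[$i]; next }  ## file 1: remember wanted column names
FNR == 1  {                                             ## file 2: header line
    out = ""
    for (i = 1; i <= NF; i++)
        if ($i in want) { keep[++n] = i + 1; out = out "\t" $i }
    print out
    next
}
{                                                       ## file 2: data lines
    out = $1
    for (i = 1; i <= n; i++) out = out "\t" $(keep[i])
    print out
}' dat/col.txt dat/col+data.txt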
Related
How to select a specific percentage of lines?
Good morning! I have a file.csv with 140 lines and 26 columns. I need to sort the lines according to the values in column 23. Here is an example:

Controller1,NA,ASHEBORO,ASH,B,,3674,4572,1814,3674,4572,1814,1859,#NAME?,0,124.45%,49.39%,19%,1,,"Big Risk, No Spare disk",45.04%,4.35%,12.63%,160,464,,,,,,0,1,1,1,0,410,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,
Controller2,EU,FR,URG,D,,0,0,0,0,0,0,0,#NAME?,0,#DIV/0!,#DIV/0!,#DIV/0!,1,,#N/A,0.00%,0.00%,#DIV/0!,NO STATS,-1088,,,,,,#N/A,#N/A,#N/A,#N/A,0,#N/A,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,
Controller3,EU,FR,URG,D,,0,0,0,0,0,0,0,#NAME?,0,#DIV/0!,#DIV/0!,#DIV/0!,1,,#N/A,0.00%,0.00%,#DIV/0!,NO STATS,-2159,,,,,,#N/A,#N/A,#N/A,#N/A,0,#N/A,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,
Controller4,NA,STARR,STA,D,,4430,6440,3736,4430,6440,3736,693,#NAME?,0,145.38%,84.35%,18%,1,,No more Data disk,65.17%,19.18%,-2.18%,849,-96,,,,,,0,2,1,2,2,547,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,

To select the lines according to the values of column 23, I do this:

awk -F "%*," '$23 > 4' myfikle.csv

The result:

Controller1,NA,ASHEBORO,ASH,B,,3674,4572,1814,3674,4572,1814,1859,#NAME?,0,124.45%,49.39%,19%,1,,"Big Risk, No Spare disk",45.04%,4.35%,12.63%,160,464,,,,,,0,1,1,1,0,410,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,
Controller4,NA,STARR,STA,D,,4430,6440,3736,4430,6440,3736,693,#NAME?,0,145.38%,84.35%,18%,1,,No more Data disk,65.17%,19.18%,-2.18%,849,-96,,,,,,0,2,1,2,2,547,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,

In my example I use the value of 4% in column 23, the goal being to retrieve all the rows whose value in column 23 has increased significantly. The problem is that I can't rely on the 4% value, because it is only representative of the current table, so I have to find another way to retrieve the rows that have a high value in column 23. I want to sort the Controllers in descending order according to the percentage in column 23, and then process the first 10% of the sorted lines to make sure I have the controllers with a large percentage. The goal is to be able to vary the percentage according to the number of lines in the table. Do you have any tips for that? Thanks! :)
I could have sworn that this question was a duplicate, but so far I couldn't find a similar question. Whether your file is sorted or not does not really matter. From any file you can extract the first NUMBER lines with head -n NUMBER. There is no built-in way to specify the number as a percentage, but you can compute how many lines PERCENT% of your file amounts to.

percentualHead() {
    percent="$1"
    file="$2"
    linesTotal="$(wc -l < "$file")"
    (( lines = linesTotal * percent / 100 ))
    head -n "$lines" "$file"
}

or, shorter but less readable,

percentualHead() {
    head -n "$(( "$(wc -l < "$2")" * "$1" / 100 ))" "$2"
}

Calling percentualHead 10 yourFile will print the first 10% of lines from yourFile to stdout. Note that percentualHead only works with files, because the file has to be read twice. It does not work with FIFOs, <(), or pipes.
If you want to use standard tools, you'll need to read the file twice. But if you're content to use perl, you can simply do:

perl -e 'my @sorted = sort <>; print @sorted[0..$#sorted * .10]' input-file
Here is one for GNU awk to get the top p% from the file, but the lines are output in their order of appearance:

$ awk -F, -v p=0.5 '          # 50 % of top $23 records
NR==FNR {                     # first run
    a[NR]=$23                 # hash percentages to a, NR as key
    next
}
FNR==1 {                      # second run, at beginning
    n=asorti(a,a,"@val_num_desc")   # sort percentages to descending order
    for(i=1;i<=n*p;i++)       # get only the top p %
        b[a[i]]               # hash their NRs to b
}
(FNR in b)                    # top p % BUT not in order
' file file | cut -d, -f 23   # file processed twice, cut 23rd for demo
45.04%
19.18%

Commenting this in a bit.
while loops in parallel with input from a split file
I am stuck on this. I have a while-read loop within my code that is taking very long, and I would like to run it on many processors. I'd like to split the input file and run 14 loops (because I have 14 threads), one for each split file, in parallel. The thing is that I don't know how to tell each while loop which file to get and work with. For example, in a regular while-read loop I would code:

while read line
do
    <some code>
done < input_file    # or a variable...

But in this case I would like to split the above input file into 14 files and run 14 while loops in parallel, one for each split file. I tried:

split -n 14 input_file
find . -name "xa*" | \
parallel -j 14 | \
while read line
do
    <lot of stuff>
done

and also tried

split -n 14 input_file
function loop {
    while read line
    do
        <lot of stuff>
    done
}
export -f loop
parallel -j 14 ::: loop

But neither way was I able to tell which file would be the input to the loop, so that parallel would understand "take each of those xa* files and place them into individual loops in parallel".

An example of the input file (a list of strings):

AEYS01000010.10484.12283
CVJT01000011.50.2173
KF625180.1.1799
KT949922.1.1791
LOBZ01000025.54942.57580

EDIT

This is the code. The output is a table (741100 lines) with some statistics regarding DNA sequence alignments already made. The loop takes an input_file (no broken lines, varies from 500 to ~45000 lines, 800Kb) with DNA sequence accessions, reads it line by line and looks up the corresponding full taxonomy for each accession in a databank (~45000 lines). Then it does a few sums/divisions. The output is a .tsv and looks like this (an example for sequence "KF625180.1.1799"):

Rate of taxonomies for this sequence in %: KF625180.1.1799 D_6__Bacillus_atrophaeus
Taxonomy %aligned number_ocurrences_in_the_alignment num_ocurrences_in_databank %alingment/databank
D_6__Bacillus_atrophaeus 50% 1 20 5%
D_6__Bacillus_amyloliquefaciens 50% 1 154 0.649351%

$ head input_file
AEYS01000010.10484.12283
CVJT01000011.50.217
KF625180.1.1799
KT949922.1.1791
LOBZ01000025.54942.57580

Two additional files are also used inside the loop. They are not the loop input.

1) a file called alnout_file that only serves for finding how many hits (or alignments) a given sequence had against the databank. It was also previously made outside this loop. It can vary in the number of lines from hundreds to thousands. Only columns 1 and 2 matter here. Column 1 is the name of the sequence and column 2 is the name of all sequences it matched in the databank. It looks like this:

$ head alnout_file
KF625180.1.1799 KF625180.1.1799 100.0 431 0 0 1 431 1 431 -1 0
KF625180.1.1799 KP143082.1.1457 99.3 431 1 2 1 431 1 429 -1 0
KP143082.1.1457 KF625180.1.1799 99.3 431 1 2 1 429 1 431 -1 0

2) a databank .tsv file containing ~45000 taxonomies corresponding to the DNA sequences. Each taxonomy is on one line:

$ head taxonomy.file.tsv
KP143082.1.1457 D_0__Bacteria;D_1__Firmicutes;D_2__Bacilli;D_3__Bacillales;D_4__Bacillaceae;D_5__Bacillus;D_6__Bacillus_amyloliquefaciens
KF625180.1.1799 D_0__Bacteria;D_1__Firmicutes;D_2__Bacilli;D_3__Bacillales;D_4__Bacillaceae;D_5__Bacillus;D_6__Bacillus_atrophaeus

So, given sequence KF625180.1.1799: I previously aligned it against a databank containing ~45000 other DNA sequences and got an output which has all the accessions of the sequences that it matched. What the loop does is find the taxonomies for all those sequences and calculate the "statistics" I mentioned previously. The code does it for all the DNA-sequence accessions I have.
TAXONOMY=path/taxonomy.file.tsv
while read line
do
    #find hits
    hits=$(grep $line alnout_file | cut -f 2)
    completename=$(grep $line $TAXONOMY | sed 's/D_0.*D_4/D_4/g')
    printf "\nRate of taxonomies for this sequence in %%:\t$completename\n"
    printf "Taxonomy\t%aligned\tnumber_ocurrences_in_the_alignment\tnum_ocurrences_in_databank\t%alingment/databank\n"

    #find hits and calculate the frequence (%) of the taxonomy in the alignment output
    # ex.: Bacillus_subtilis 33
    freqHits=$(grep "${hits[@]}" $TAXONOMY | \
        cut -f 2 | \
        awk '{a[$0]++} END {for (i in a) {print i, "\t", a[i]/NR*100, "\t", a[i]}}' | \
        sed -e 's/D_0.*D_5/D_5/g' -e 's#\s\t\s#\t#g' | \
        sort -k2 -hr)

    # print frequence of each taxonomy in the databank
    freqBank=$(while read line; do grep -c "$line" $TAXONOMY; done < <(echo "$freqHits" | cut -f 1))

    #print cols with taxonomy and calculations
    paste <(printf %s "$freqHits") <(printf %s "$freqBank") | awk '{print $1,"\t",$2"%","\t",$3,"\t",$4,"\t",$3/$4*100"%"}'
done < input_file

It is a lot of greps and parsing, so it takes about ~12h running on one processor to do it for all the 45000 DNA sequence accessions. So I would like to split input_file and do it on all the processors I have (14), because that would reduce the time spent on it. Thank you all for being so patient with me =)
You are looking for --pipe. In this case you can even use the optimized --pipepart (version >20160621):

export TAXONOMY=path/taxonomy.file.tsv
doit() {
    while read line
    do
        #find hits
        hits=$(grep $line alnout_file | cut -f 2)
        completename=$(grep $line $TAXONOMY | sed 's/D_0.*D_4/D_4/g')
        printf "\nRate of taxonomies for this sequence in %%:\t$completename\n"
        printf "Taxonomy\t%aligned\tnumber_ocurrences_in_the_alignment\tnum_ocurrences_in_databank\t%alingment/databank\n"

        #find hits and calculate the frequence (%) of the taxonomy in the alignment output
        # ex.: Bacillus_subtilis 33
        freqHits=$(grep "${hits[@]}" $TAXONOMY | \
            cut -f 2 | \
            awk '{a[$0]++} END {for (i in a) {print i, "\t", a[i]/NR*100, "\t", a[i]}}' | \
            sed -e 's/D_0.*D_5/D_5/g' -e 's#\s\t\s#\t#g' | \
            sort -k2 -hr)

        # print frequence of each taxonomy in the databank
        freqBank=$(while read line; do grep -c "$line" $TAXONOMY; done < <(echo "$freqHits" | cut -f 1))

        #print cols with taxonomy and calculations
        paste <(printf %s "$freqHits") <(printf %s "$freqBank") | awk '{print $1,"\t",$2"%","\t",$3,"\t",$4,"\t",$3/$4*100"%"}'
    done
}
export -f doit
parallel -a input_file --pipepart doit

This will chop input_file into 10*ncpu blocks (where ncpu is the number of CPU threads), pass each block to doit, and run ncpu jobs in parallel.

That said, I think your real problem is spawning too many programs: if you rewrite doit in Perl or Python, I expect you will see a major speedup.
As an alternative I threw together a quick test.

#! /bin/env bash
mkfifo PIPELINE                 # create a single queue
cat "$1" > PIPELINE &           # supply it with records
{   declare -i cnt=0 max=14
    while (( ++cnt <= max ))    # spawn loop creates worker jobs
    do  printf -v fn "%02d" $cnt
        while read -r line      # each work loop reads common stdin...
        do  echo "$fn:[$line]"
            sleep 1
        done >$fn.log 2>&1 &    # these run in background in parallel
    done                        # this one exits
} < PIPELINE                    # *all* read from the same queue
wait
cat [0-9][0-9].log

Doesn't need split, but does need a mkfifo. Obviously, change the code inside the internal loop.
This answers what you asked, namely how to process in parallel the 14 files you get from running split. However, I don't think it is the best way of doing whatever it is that you are trying to do - but we would need some answers from you for that.

So, let's make a million line file and split it into 14 parts:

seq 1000000 > 1M
split -n 14 1M part-

That gives me 14 files called part-aa through part-an. Now your question is how to process those 14 parts in parallel - (read the last line first):

#!/bin/bash

# This function will be called for each of the 14 files
DoOne(){
   # Pick up parameters
   job=$1
   file=$2
   # Count lines in specified file
   lines=$(wc -l < "$file")
   echo "Job No: $job, file: $file, lines: $lines"
}

# Make the function above known to processes spawned by GNU Parallel
export -f DoOne

# Run 14 parallel instances of "DoOne" passing job number and filename to each
parallel -k -j 14 DoOne {#} {} ::: part-??

Sample Output

Job No: 1, file: part-aa, lines: 83861
Job No: 2, file: part-ab, lines: 72600
Job No: 3, file: part-ac, lines: 70295
Job No: 4, file: part-ad, lines: 70295
Job No: 5, file: part-ae, lines: 70294
Job No: 6, file: part-af, lines: 70295
Job No: 7, file: part-ag, lines: 70295
Job No: 8, file: part-ah, lines: 70294
Job No: 9, file: part-ai, lines: 70295
Job No: 10, file: part-aj, lines: 70295
Job No: 11, file: part-ak, lines: 70295
Job No: 12, file: part-al, lines: 70294
Job No: 13, file: part-am, lines: 70295
Job No: 14, file: part-an, lines: 70297

You would omit the -k argument to GNU Parallel normally - I only added it so the output comes in order.
I think that using a bunch of grep and awk commands is the wrong approach here - you would be miles better off using Perl, or awk. As you have not provided any sample files I generated some using this code:

#!/bin/bash
for a in {A..Z} {0..9} ; do
   for b in {A..Z} {0..9} ; do
      for c in {A..Z} {0..9} ; do
         echo "${a}${b}${c}"
      done
   done
done > a
# Now make file "b" which has the same stuff but shuffled into a different order
gshuf < a > b

Note that there are 26 letters in the alphabet, so if I add the digits 0..9 to the letters of the alphabet, I get 36 alphanumeric digits and if I nest 3 loops of that I get 36^3 or 46,656 lines which matches your file sizes roughly.

File a now looks like this:

AAA
AAB
AAC
AAD
AAE
AAF

File b looks like this:

UKM
L50
AOC
79U
K6S
6PO
12I
XEV
WJN

Now I want to loop through a finding the corresponding line in b. First, I use your approach:

time while read thing ; do grep $thing b > /dev/null ; done < a

That takes 9 mins 35 seconds. If I now exit grep on the first match, on average I will find it in the middle, which means the time will be halved since I won't continue to needlessly read b after I find what I want.

time while read thing ; do grep -m1 $thing b > /dev/null ; done < a

That improves the time down to 4 mins 30 seconds. If I now use awk to read the contents of b into an associative array (a.k.a. hash) and then read the elements of a and find them in b like this:

time awk 'FNR==NR{a[$1]=$1; next} {print a[$1]}' b a > /dev/null

That now runs in 0.07 seconds. Hopefully you get the idea of what I am driving at. I expect Perl would do this in the same time and also provide more expressive facilities for the maths in the middle of your loop too.
I hope this small script helps you out:

function process {
    while read line; do
        echo "$line"
    done < $1
}

function loop {
    file=$1
    chunks=$2
    dir=`mktemp -d`
    cd $dir
    split -n l/$chunks $file
    for i in *; do
        process "$i" &
    done
    rm -rf $dir
}

loop /tmp/foo 14

It runs the process loop on the specified file with the specified number of chunks (without splitting lines) in parallel (using & to put each invocation in the background). I hope it gets you started.
This can do the job for you. I am not familiar with parallel; instead I use native bash, spawning background processes with &:

function loop () {
  while IFS= read -r -d $'\n'
  do
    # YOUR BIG STUFF
  done < "${1}"
}

arr_files=(./xa*)

for i in "${arr_files[@]}"
do
  loop "${i}" &
done
wait
Renumbering numbers in a text file based on a unique mapping
I have a big txt file with 2 columns and more than 2 million rows. Every value represents an id and there may be duplicates. There are about 100k unique ids.

1342342345345 34523453452343
0209239498238 29349203492342
2349234023443 99203900992344
2349234023443 182834349348
2923000444 9902342349234

I want to identify each id and re-number all of them starting from 1. It should re-number duplicates also using the same new id. If possible, it should be done using bash. The output could be something like:

123 485934
34 44834
167 34564
167 2345
2 34564
Doing this in pure bash will be really slow. I'd recommend:

tr -s '[:blank:]' '\n' <file |
sort -un |
awk '
    NR == FNR {id[$1] = FNR; next}
    {for (i=1; i<=NF; i++) {$i = id[$i]}; print}
' - file

4 8
3 7
5 9
5 2
1 6
With bash and sort:

#!/bin/bash

shopt -s lastpipe
declare -A hash   # declare associative array
index=1

# read file and fill associative array
while read -r a b; do
  echo "$a"
  echo "$b"
done < file | sort -nu | while read -r x; do
  hash[$x]="$((index++))"
done

# read file and print values from associative array
while read -r a b; do
  echo "${hash[$a]} ${hash[$b]}"
done < file

Output:

4 8
3 7
5 9
5 2
1 6

See: man bash and man sort
Pure Bash, with a single read of the file:

declare -A hash
index=1

while read -r a b; do
    [[ ${hash[$a]} ]] || hash[$a]=$((index++)) # assign index only if not set already
    [[ ${hash[$b]} ]] || hash[$b]=$((index++)) # assign index only if not set already
    printf '%s %s\n' "${hash[$a]}" "${hash[$b]}"
done < file > file.indexed

Notes:
the index is assigned in the order read (not based on sorting)
we make a single pass through the file (not two as in other solutions)
Bash's read is slower than awk; however, if the same logic is implemented in Perl or Python, it will be much faster
this solution is more CPU bound because of the hash lookups

Output:

1 2
3 4
5 6
5 7
8 9
Just keep a monotonic counter and a table of seen numbers; when you see a new id, give it the value of the counter and increment:

awk '!a[$1]{a[$1]=++N} {$1=a[$1]} !a[$2]{a[$2]=++N} {$2=a[$2]} 1' input
awk 'NR==FNR { ids[$1] = ++c; next }
     { print ids[$1], ids[$2] }
' <( { cut -d' ' -f1 renum.in; cut -d' ' -f2 renum.in; } | sort -nu ) renum.in

Join the two columns into one, then sort that into numerical order (-n) and make it unique (-u), before using awk to turn this sequence into an array of mappings from old to new ids. Then, for each line of input, swap the ids and print.
Sorting and printing a file in bash UNIX
I have a file with a bunch of paths that look like so:

7 /usr/file1564
7 /usr/file2212
6 /usr/file3542

I am trying to use sort to pull out and print the path(s) with the most occurrences. Here is what I have so far:

cat temp | sort | uniq -c | sort -rk1 > temp

I am unsure how to only print the highest occurrences. I also want my output to be printed like this:

7 1564
7 2212

7 being the total number of occurrences and the other numbers being the file numbers at the end of the name. I am rather new to bash scripting so any help would be greatly appreciated!
To emit only the first line of output (with the highest number, since you're doing a reverse numeric sort immediately prior), pipe through head -n1.

To remove all content which is not either a number or whitespace, pipe through tr -cd '0-9[:space:]'.

To filter for only the values with the highest number, allowing there to be more than one:

{
  read firstnum name && printf '%s\t%s\n' "$firstnum" "$name"
  while read -r num name; do
    [[ $num = $firstnum ]] || break
    printf '%s\t%s\n' "$num" "$name"
  done
} < temp
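Putting the first two suggestions together on the sample data might look like this (just a sketch; with ties on the count, which of the two 7-lines comes out first depends on sort's tie-breaking, and it keeps only a single line rather than every tie):

$ sort -rnk1 temp | head -n1 | tr -cd '0-9[:space:]'
7 2212

head -n1 keeps just the top line after the reverse numeric sort, and tr -cd deletes every character that is not a digit or whitespace, turning /usr/file2212 into 2212.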
If you want to avoid sort and you are allowed to use awk, then you can do this:

awk '{ if ($1 > maxcnt) { s = $1 " " substr($2,10,4); maxcnt = $1 }
       else if ($1 == maxcnt) { s = s "\n" $1 " " substr($2,10,4) } }
     END { print s }' temp
Getting specific lines of a file
I have this file with 25 million rows. I want to get 10 million specific lines from this file; I have the indices of these lines in another file. How can I do it efficiently?
Assuming that the list of lines is in a file list-of-lines and the data is in data-file, and that the numbers in list-of-lines are in ascending order, then you could write:

current=0
while read wanted
do
    while ((current < wanted))
    do
        if read -u 3 line
        then ((current++))
        else break 2
        fi
    done
    echo "$line"
done < list-of-lines 3< data-file

This uses the Bash extension that allows you to specify which file descriptor read should read from (read -u 3 to read from file descriptor 3). The list of line numbers to be printed is read from standard input; the data file is read from file descriptor 3. This makes one pass through each of the two files, which is within a constant factor of optimal.

If list-of-lines is not sorted, replace the last line with the following, which uses the Bash extension called process substitution:

done < <(sort -n list-of-lines) 3< data-file
Assume that the file containing the line indices is called "no.txt" and the data file is "input.txt".

awk '{printf "%08d\n", $1}' no.txt > no.1.txt
nl -n rz -w 8 input.txt | join - no.1.txt | cut -d " " -f1 --complement > output.txt

output.txt will have the wanted lines. I am not sure if this is efficient enough, but it seems to be faster than this script (https://stackoverflow.com/a/22926494/3264368) in my environment.

Some explanations:

The 1st command preprocesses the indices file so that the numbers are right-adjusted with leading zeroes and width 8 (since the number of rows in input.txt is known to be 25M).
The 2nd command prints the rows with line numbers in exactly the same format as the preprocessed index file, then joins the two to get the wanted rows (and cuts away the line numbers).
Since you said the file with the lines you're looking for is sorted, you can loop through the two files in awk:

awk 'BEGIN{getline nl < "line_numbers.txt"} NR == nl {print; getline nl < "line_numbers.txt"}' big_file.txt

This will read each line in each file precisely once.
Say your index file is index.txt and your data file is data.txt; then you can do it using sed as follows:

#!/bin/bash
while read line_no
do
    sed ''$line_no'q;d' data.txt
done < index.txt
You could run a loop that reads from the 25-million-line file and, when the loop counter reaches a line number that you want, write that line out. EX (assuming br is a reader over the data file and indice is a wanted line number):

String line = "";
int count = 0;
while ((line = br.readLine()) != null) {
    if (count == indice) {
        System.out.println(line); // or write to a file
    }
    count++;
}