I have an indeterminate list of file names that I would like to output to the user in a script. I don't mind if it's a paragraph or in columns (like the output of ls. How does ls manage it?). In fact, I only have the following requirements:
file names need to stay on the same line (yes, that even means files with a space in their name. If someone is dumb enough to use a newline in a filename, though, they deserve what they get.)
If the output is formatted as a paragraph, I'd like to see it indented on the left and right to separate it from other text. Sort of like the way apt-get upgrade handles the list of packages to install.
I would love not to write my own function for this - at least not a complicated one. There are so many text-formatting utilities in Linux!
The utility should be available in the default Ubuntu install.
It should handle relatively large input, just in case. Something like 2000 characters or so?
It seems like a simple proposition, but I can't seem to get it to work. The column command is out simply because it can't handle large chunks of data. fmt and fold both don't care about delimiters. printf looks like it would work... if I wrote a script for it.
Is there a more flexible tool I've overlooked, or a simple way to do this?
Here is a simple formatter that, it seems to me, is good enough:
% ls | awk '
NR==1 {for(i=1;i<9;i++)printf "----+----%d", i; print ""
line=" " $0;l=2+length($0);next}
{if(l+1+length($0)>80){
print line; line = " " $0 ; l = 2+length($0) ; next}
{l=l+length($0)+1; line=line " " $0}}'
----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
3inarow.py 5s.py a.csv a.not1.pdf a.pdf as de.1 asde.1 asdef.txt asde.py a.sh
a.tex auto a.wk bizarre.py board.py cc2012xyz2_5_5dp.csv cc2012xyz2_5_5dp.html
cc.py col.pdf col.sh col.sh~ col.tex com.py data data.a datazip datidisk
datizip.py dd.py doc1.pdf doc1.tex doc2 doc2.pdf doc2.tex doc3.pdf doc3.tex
e.awk Exit file file1 file2 geomedian.py group_by_1st group_by_1st.2
group_by_1st.mawk integers its.py join.py light.py listluatexfonts mask.py
mat.rix my_data nostop.py numerize.py pepp.py pepp.pyc pi.pdf pippo muore
pippo.py pi.py pi.tex pizza.py plocol.py points.csv points.py puddu puffo
%
I had to simulate input using ls because you didn't show how your list of files is accessed. The window width is arbitrary as well, but it's easy to provide a value via a -v width=... option of awk.
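For instance, here is a sketch of the same loop taking the width from the terminal (ruler line omitted, and with an END block added so the final partial line is flushed; using tput cols here is my assumption about where the width would come from):
ls | awk -v width="$(tput cols)" '
NR==1 { line = " " $0; l = 2+length($0); next }
l+1+length($0) > width { print line; line = " " $0; l = 2+length($0); next }
{ l = l+length($0)+1; line = line " " $0 }
END { if (NR) print line }'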
Edit
I added a header line (not requested) to my awk script because I wanted to test the effectiveness of the (very simple) algorithm.
Addendum
I'd like to stress that the simple formatter above doesn't split file names containing spaces across lines, as the following example shows:
% ls -d1 b*
bia nconodi
bianconodi.pdf
bianconodi.ppt
bin
b.txt
% ls | awk '
NR==1 {for(i=1;i<9;i++)printf "----+----%d", i; print ""
line=" " $0;l=2+length($0);next}
{if(l+1+length($0)>80){
print line; line = " " $0 ; l = 2+length($0) ; next}
{l=l+length($0)+1; line=line " " $0}}'
----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
04_Tower.pdf 2plots.py 2.txt a.csv aiuole asdefff a.txt a.txt~ auto
bia nconodi bianconodi.pdf bianconodi.ppt bin Borsa Ferna.jpg b.txt
...
%
As you can see, in the first line there is enough space to print bia but not enough for the real filename bia nconodi, which is therefore printed on the second line.
Addendum 2
This is the formatter the OP eventually went with:
local margin=4
local max=10
echo -e "$filenames" | awk -v width=$(tput cols) -v margin=$margin '
NR==1 {
for (i=0; i<margin; i++) {
line = line " "
}
line = line $0;
l = margin + margin + length($0);
next;
}
{
if (l+1+length($0) > width) {
print line;
line = ""
for (i=0; i<margin; i++) line=line " "
line = line $0 ;
l = margin + margin + length($0) ;
next;
}
{
l = l + length($0) + 1;
line = line " " $0;
}
}
END {
print line;
}'
Perhaps you're looking for /usr/bin/fold?
printf '%s ' * | fold -s -w 77 | sed -e 's/^/ /'
Replace the * with your list, of course; if your files are in an array (they should be; storing filenames in scalar variables is lossy), that'd be "${your_array[@]}".
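For instance, a minimal sketch with the names in an array (the array name is an assumption):
files=( * )                          # or however you build the list
printf '%s ' "${files[@]}" | fold -s -w 77 | sed -e 's/^/ /'
The -s makes fold break at blanks, so a name is only ever split mid-line if it itself contains a blank.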
If you have your filenames in a variable, this will create 3 columns; you can change -3 to whatever number of columns you want:
echo "$var" | pr -3 -t
or if you need to get them from the filesystem:
find . -printf "%f\n" 2>/dev/null | pr -3 -t
From what you stated in the comments, I think this may be what you are looking for. The find command prints each file or directory name followed by a newline; you can filter the names further by piping through grep or sed before pr. The pr command is for print: -3 asks for 3 columns and -t omits headers and trailers. Adjust it to your preferences.
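If the default 72-column page is too narrow, GNU pr can also be told the terminal width; a minor variation (using tput cols is an assumption):
find . -printf "%f\n" 2>/dev/null | pr -3 -t -w "$(tput cols)"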
Related
I have a giant file (6 GB) which is a CSV, and the rows look like so:
"87687","institute Polytechnic, Brazil"
"342424","university of India, India"
"24343","univefrsity columbia, Bogata, Colombia"
and I would like to remove all punctuation and lowercase the second column, yielding:
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
what would be the most efficient way to do this on the terminal?
Tried:
cat TEXTFILE | tr -d '[:punct:]' > OUTFILE
problem: the result is not in lowercase, and tr seems to act on both columns, not just the second.
Using a real CSV parser in Perl is the robust/reliable way, and it needs just one process.
Since the file is processed line by line, its 6 GB size should not be an issue.
#!/usr/bin/perl
use strict; use warnings; # harness
use Text::CSV; # load the needed module (install it)
use feature qw/say/; # say = print("...\n")
# create an instance of a new CSV parser
my $csv = Text::CSV->new({ auto_diag => 1 });
# open a File Handle or exit with error
open my $fh, "<:encoding(utf8)", "file.csv" or die "file.csv: $!";
while (my $row = $csv->getline ($fh)) { # parse line by line
$_ = $row->[1]; # parse only column 2
s/[\s[:punct:]]//g; # removes both space(s) and punct(s)
$_ = lc $_; # Lower Case current value $_
$row->[1] = qq/"$_"/; # edit changes and (re)"quote"
say join ",", @$row; # display the whole current row
}
close $fh; # close the File Handle
Output
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
Install the needed module with:
cpan Text::CSV
Here's an approach using xsv and process substitution:
paste -d, \
<(xsv select 1 infile.csv) \
<(xsv select 2 infile.csv | sed 's/[[:blank:][:punct:]]*//g;s/.*/\L&/')
The sed command first removes all blanks and punctuation, then lowercases the entire match.
This also works when the first field contains blanks and commas, and retains quoting where required.
Using sed
$ sed -E ':a;s/([^,]*,)([^ ,]*)[ ,]([[:alpha:]]+)/\1\L\2\3/;ta' input_file
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia
I suggest using this awk solution, which should work with any version of awk:
awk 'BEGIN{FS=OFS="\",\""} {
gsub(/[^[:alnum:]"]+/, "", $2); $2 = tolower($2)} 1' file
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
Details:
We make "," the input and output field separator in the BEGIN block
gsub(/[^[:alnum:]"]+/, "", $2): Strip all non-alphanumeric characters except "
$2 = tolower($2): Lowercase second column
One GNU awk (for gensub()) idea, using " as the field separator so that the second CSV column lands in $4:
awk '
BEGIN { FS=OFS="\"" }
{ $4=gensub(/[^[:alnum:]]/,"","g",tolower($4)) }
1' file
This generates:
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
Another sed approach -
sed -E 's/ +//g; s/([^"]),/\1/g; s/"([^"]*)"/"\L\1"/g' file
I don't like how that leaves no flexibility, though: it makes you rewrite the logic if you find something else you want to remove.
Another in awk -
awk -F'[", ]+' '
{ printf "\"%s\",\"", $2;
for(c=3;c<=NF;c++) printf "%s", tolower($c);
print "\"";
}' file
This approach lets you define and add any additional offending characters into the field delimiters without editing your logic.
$: pat=$"[\"',_;:!@#\$%)(* -]+"
$: echo "$pat"
["',_;:!##$%)(* -]+
$: cat file
"87687","institute 'Polytechnic, Brazil"
"342424","university; of-India, India"
"24343","univefrsity )columbia, Bogata, Colombia"
$: awk -F"$pat" '{printf "\"%s\",\"", $2; for(c=3;c<=NF;c++) printf "%s", tolower($c); print "\"" }' file
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
Another way, using Ruby. I edited the data to show that only the second field is modified.
% ruby -r 'csv' -e 'f = open("file");
CSV.parse(f) do |i|
puts "\"" + i[0] + "\",\"" + i[1].downcase.gsub(/[ ,]/,"") + "\"" end'
"8768, 7","institutepolytechnicbrazil"
"342 424","universityofindiaindia"
"243 43","univefrsitycolumbiabogatacolombia"
Using FastCSV gives a huge speedup
gem install fastcsv
% ruby -r 'fastcsv' -e 'f = open("file");
FastCSV.raw_parse(f) do |i|
puts "\"" + i[0] + "\",\"" + i[1].downcase.gsub(/[ ,]/,"") + "\"" end'
"8768, 7","institutepolytechnicbrazil"
"342 424","universityofindiaindia"
"243 43","univefrsitycolumbiabogatacolombia"
Data
% cat file
"8768, 7","institute Polytechnic, Brazil"
"342 424","university of India, India"
"243 43","univefrsity columbia, Bogata, Colombia"
With your shown samples and attempts, please try the following GNU awk code, which uses its match function. The regex (^"[^"]*",")([^"]*)(".*)$ creates 3 capturing groups whose values are stored in arr; they are fetched later in the program to meet the OP's requirement.
awk '
match($0,/(^"[^"]*",")([^"]*)(".*)$/,arr){
gsub(/[^[:alnum:]]+/,"",arr[2])
print arr[1] tolower(arr[2]) arr[3]
}
' Input_file
This might work for you (GNU sed):
sed -E s'/("[^"]*",)/\1\n/;h;s/.*\n//;s/[[:punct:] ]//g;s/.*/"\L&"/;H;g;s/\n.*\n//' file
Divide and rule.
Partition the line into two fields, make a copy, process the second field (removing punctuation and spaces, lowercasing and re-quoting it), and then re-assemble the fields.
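Written one command per line, the steps are:
s/("[^"]*",)/\1\n/    # insert a newline after the first field
h                     # copy the two-line pattern space to the hold space
s/.*\n//              # keep only the second field
s/[[:punct:] ]//g     # strip punctuation and spaces (the quotes included)
s/.*/"\L&"/           # lowercase what is left and re-quote it
H                     # append the cleaned field to the hold space
g                     # fetch back: field1 \n original field2 \n cleaned field2
s/\n.*\n//            # cut out the original second field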
An alternative, perhaps?
sed -E ':a;s/^("[^"]*",".*)[^[:alpha:]"](.*)/\L\1\2/;ta' file
Here is a way to do so in PHP.
Note: PHP will not output double quotes unless the first column needs them. The second column will never need double quotes: after processing, it contains no spaces or special characters.
<?php
$max_line_length = 100;
if (($fp = fopen("file.csv", "r")) !== FALSE) {
while (($data = fgetcsv($fp, $max_line_length, ",")) !== FALSE) {
$data[1] = strtolower(preg_replace('/[\s[:punct:]]/', '', $data[1]));
fputcsv(STDOUT, $data, ',', '"');
}
fclose($fp);
}
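Saved as, say, clean.php (the file name is an assumption), it can be run from the CLI, with the cleaned CSV on standard output:
php clean.php > cleaned.csv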
I have two files File01 and File02.
File01, looks like this:
BU24DRAFT_430534
BU24DRAFT_488391
BU24DRAFT_488386
BU24DRAFT_417707
BU24DRAFT_417704
BU24DRAFT_488335
BU24DRAFT_429509
BU24DRAFT_210092
BU24DRAFT_229465
BU24DRAFT_498094
BU24DRAFT_416051
BU24DRAFT_482795
BU24DRAFT_4305
BU24DRAFT_10621
BU24DRAFT_4883
File02, looks like this:
XP_033390445.1_uncharacterized_protein_BU24DRAFT_430534_Aaosphaeria_arxii_CBS_175.79
XP_033390442.1_uncharacterized_protein_BU24DRAFT_488391_Aaosphaeria_arxii_CBS_175.79
XP_033390437.1_uncharacterized_protein_BU24DRAFT_488386_Aaosphaeria_arxii_CBS_175.79
XP_033390400.1_uncharacterized_protein_BU24DRAFT_417707_Aaosphaeria_arxii_CBS_175.79
XP_033390397.1_uncharacterized_protein_BU24DRAFT_417704_Aaosphaeria_arxii_CBS_175.79
XP_033390371.1_uncharacterized_protein_BU24DRAFT_488335_Aaosphaeria_arxii_CBS_175.79
XP_033376581.1_uncharacterized_protein_BU24DRAFT_429509_Aaosphaeria_arxii_CBS_175.79
XP_033376580.1_uncharacterized_protein_BU24DRAFT_210092_Aaosphaeria_arxii_CBS_175.79
XP_033376578.1_uncharacterized_protein_BU24DRAFT_229465,_partial_Aaosphaeria_arxii_CBS_175.79
XP_033376577.1_uncharacterized_protein_BU24DRAFT_498094,_partial_Aaosphaeria_arxii_CBS_175.79
XP_033376576.1_uncharacterized_protein_BU24DRAFT_416051,_partial_Aaosphaeria_arxii_CBS_175.79
XP_033376575.1_uncharacterized_protein_BU24DRAFT_482795,_partial_Aaosphaeria_arxii_CBS_175.79
Using the string from File01, via grep, I would like to identify the lines in File02 that match and with this information generate a file that would look like this:
XP_033390442.1_uncharacterized_protein_BU24DRAFT_488391_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_488391
XP_033390437.1_uncharacterized_protein_BU24DRAFT_488386_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_488386
XP_033390400.1_uncharacterized_protein_BU24DRAFT_417707_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_417707
XP_033390397.1_uncharacterized_protein_BU24DRAFT_417704_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_417704
XP_033390371.1_uncharacterized_protein_BU24DRAFT_488335_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_488335
XP_033376581.1_uncharacterized_protein_BU24DRAFT_429509_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_429509
XP_033376580.1_uncharacterized_protein_BU24DRAFT_210092_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_210092
XP_033376578.1_uncharacterized_protein_BU24DRAFT_229465,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_229465
XP_033376577.1_uncharacterized_protein_BU24DRAFT_498094,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_498094
XP_033376576.1_uncharacterized_protein_BU24DRAFT_416051,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_416051
XP_033376575.1_uncharacterized_protein_BU24DRAFT_482795,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_482795
I tried generating such file using the following code:
while read r;do CMD01=$(echo $r);CMD02=$(grep $r File01); echo "$CMD02 $CMD01";done < File02 | awk '(NR>1) && ($2 > 2 ) '
The problem I run into is that I obtain extra matching lines:
XP_033390445.1_uncharacterized_protein_BU24DRAFT_430534_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_4305
XP_033390371.1_uncharacterized_protein_BU24DRAFT_488335_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_4883
Here, for example, the string BU24DRAFT_4305 wrongly matches the line XP_033390445.1_uncharacterized_protein_BU24DRAFT_430534_Aaosphaeria_arxii_CBS_175.79.
This result is incorrect: a string from File01 must match only lines in File02 that contain it in full.
Any ideas that could help me will be appreciated.
For the updated sample input and full-matching requirement and assuming you never have any regexp metacharacters in file1 and that the matching strings in file2 are never at the start or end of the line:
$ awk 'NR==FNR{strs[$0]; next} {for (str in strs) if ($0 ~ ("[^[:alnum:]]"str"[^[:alnum:]]")) print $0, str}' file1 file2
XP_033390445.1_uncharacterized_protein_BU24DRAFT_430534_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_430534
XP_033390442.1_uncharacterized_protein_BU24DRAFT_488391_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_488391
XP_033390437.1_uncharacterized_protein_BU24DRAFT_488386_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_488386
XP_033390400.1_uncharacterized_protein_BU24DRAFT_417707_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_417707
XP_033390397.1_uncharacterized_protein_BU24DRAFT_417704_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_417704
XP_033390371.1_uncharacterized_protein_BU24DRAFT_488335_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_488335
XP_033376581.1_uncharacterized_protein_BU24DRAFT_429509_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_429509
XP_033376580.1_uncharacterized_protein_BU24DRAFT_210092_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_210092
XP_033376578.1_uncharacterized_protein_BU24DRAFT_229465,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_229465
XP_033376577.1_uncharacterized_protein_BU24DRAFT_498094,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_498094
XP_033376576.1_uncharacterized_protein_BU24DRAFT_416051,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_416051
XP_033376575.1_uncharacterized_protein_BU24DRAFT_482795,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_482795
Original answer doing partial matching:
The correct approach is 1 call to awk:
$ awk 'NR==FNR{strs[$0]; next} {for (str in strs) if (index($0,str)) print $0, str}' file1 file2
XP_033376575.1_uncharacterized_protein_BU24DRAFT_482795,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_482795
XP_033376576.1_uncharacterized_protein_BU24DRAFT_416051,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_416051
XP_033376577.1_uncharacterized_protein_BU24DRAFT_498094,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_498094
XP_033376578.1_uncharacterized_protein_BU24DRAFT_229465,_partial_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_229465
XP_033376580.1_uncharacterized_protein_BU24DRAFT_210092_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_210092
XP_033376581.1_uncharacterized_protein_BU24DRAFT_429509_Aaosphaeria_arxii_CBS_175.79 BU24DRAFT_429509
See https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice and https://mywiki.wooledge.org/Quotes for some of the issues with the script in your question.
So, it looks like yours mostly works. A lot of what you are doing here is unnecessary. Here is your script broken into multiple lines for readability:
while read r; do
CMD01=$(echo $r)
CMD02=$(grep $r zztest01)
echo "$CMD02 $CMD01"
done < <(head zztest) | awk '(NR>1) && ($2 > 2 ) '
First, CMD01=$(echo $r): This is really the same (or intended to be) as CMD01="$r", so it's kind of useless.
Then, < <(head zztest): You are using head to output the contents of the file. This actually works just as well with a simple redirection like this: < zztest.
Last, | awk '(NR>1) && ($2 > 2 ) ': This appears to just be some sort of logic on whether we are going to print anything or not.
Here is a simplified version:
while read r; do
CMD02=$(grep "$r" zztest01) && echo "$CMD02 $r"
done < zztest
Explanation
CMD02=$(grep "$r" zztest01) && echo "$CMD02 $r": The main part of this is really two commands separated by &&. This means: execute the second command only if the first one succeeded. grep returns a "failure" code if it does not find what it is looking for, so if grep does not find a match, echo will not run.
The output of grep will be stored in the variable $CMD02. Then, you will echo that along with $r for each match.
If you really want to keep this on one line like the original:
while read r; do CMD02=$(grep "$r" zztest01) && echo "$CMD02 $r"; done < zztest
Update
If you want to avoid partial matches as Ed asked, you can change the grep to this: grep "$r[^0-9]" zztest01. This avoids a match if there is a trailing digit after the initial match string (which is really an assumption, given the sample).
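Combined with the one-line version above, that gives:
while read r; do CMD02=$(grep "$r[^0-9]" zztest01) && echo "$CMD02 $r"; done < zztest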
While not explicit in the question, it seems that each pattern should match only a single line in the input file (File02).
Based on this observation, it is possible to improve the performance of the solution from Ed Morton:
awk '
NR==FNR{strs[$0]; next}
{ for (str in strs) if (index($0,str)) { print $0, str ; delete strs[str]; next } }
' file1 file2
For large files with many patterns, it can reduce runtime by a factor of 4.
I have a file named list.txt containing (supplier,product) pairs, and I must show the number of products from every supplier along with their names, using the Linux terminal.
Sample input:
stationery:paper
grocery:apples
grocery:pears
dairy:milk
stationery:pen
dairy:cheese
stationery:rubber
And the result should be something like:
stationery: 3
stationery: paper pen rubber
grocery: 2
grocery: apples pears
dairy: 2
dairy: milk cheese
Save the input to a file, and remove the empty lines. Then use GNU datamash:
datamash -s -t ':' groupby 1 count 2 unique 2 < file
Output:
dairy:2:cheese,milk
grocery:2:apples,pears
stationery:3:paper,pen,rubber
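If you need the exact two-line-per-supplier layout from the question, the datamash output can be reshaped with a short awk pass (a sketch; suppliers and items come out in datamash's sorted order, so the ordering may differ from the question's):
datamash -s -t ':' groupby 1 count 2 unique 2 < file |
awk -F: '{ gsub(/,/, " ", $3); print $1 ": " $2; print $1 ": " $3 }'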
The following pipeline should do the job:
< your_input_file sort -t: -k1,1r | sed -E -n ':a;$p;N;s/([^:]*): *(.*)\n\1:/\1: \2 /;ta;P;D' | awk -F' ' '{ print $1, NF-1; print $0 }'
where
sort sorts the lines according to what's before the colon, to ease the subsequent processing
the cryptic sed joins the lines with common supplier
awk counts the items for each supplier and prints everything appropriately.
Doing it with awk only, as suggested by KamilCuk in a comment, would be a much easier job; doing it with sed only would be (for me) a nightmare. Using both is maybe silly, but I enjoyed doing it.
If you need a detailed explanation, please comment, and I'll find time to provide one.
Here's the sed script written one command per line:
:a
$p
N
s/([^:]*): *(.*)\n\1:/\1: \2 /
ta
P
D
and here's how it works:
:a is just a label where we can jump back through a test or branch command;
$p is the print command applied only to the address $ (the last line); note that all other commands are applied to every line, since no address is specified;
N reads one more line and appends it to the current pattern space, putting a \newline in between; this creates a multiline in the pattern space;
s/([^:]*): *(.*)\n\1:/\1: \2 / captures what's before the first colon on the line, ([^:]*), as well as what follows it, (.*), getting rid of excess spaces, *; the backreference \1 after the \newline makes the substitution succeed only when the next line has the same supplier, in which case the two lines are merged;
ta tests if the previous s command was successful, and, if this is the case, transfers the control to the line labelled by a (i.e. go to step 1);
P prints the leading part of the multiline up to and including the embedded \newline;
D deletes the leading part of the multiline up to and including the embedded \newline.
This should be close to the awk-only code I was referring to:
< os awk -F: '{ count[$1] += 1; items[$1] = items[$1] " " $2 } END { for (supp in items) print supp": " count[supp], "\n"supp":" items[supp]}'
The awk script is more readable if written on several lines:
awk -F: '{ # for each line
# we use the word before the : as the key of an associative array
count[$1] += 1 # increment the count for the given supplier
items[$1] = items[$1] " " $2 # concatenate the current item to the previous ones
}
END { # after processing the whole file
for (supp in items) # iterate on the suppliers and print the result
print supp": " count[supp], "\n"supp":" items[supp]
}'
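With the sample input this prints (the supplier order may differ, since for (supp in items) iterates in an unspecified order):
stationery: 3
stationery: paper pen rubber
grocery: 2
grocery: apples pears
dairy: 2
dairy: milk cheese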
I have a file called Type1.txt, that looks like this:
$ cat Type1.txt
ID.580.G3C0
TTTTTTTTTTT
ID.580.G3C8
ATTATATC-AAA
ID.580.GXC16
ATTATTTC-ACG-TTTTTCCTA
ID.694.G9C3
ATTATATC-ACG-AAATCCTA
ID.694.G9C3
etc...
I want to write a bash script to count the instances of each ID and export it into another file that provides a summary, something like this:
ID.580 = 3
ID.694 = 1
etc...
So far the script is messy and unusable.
For the above I have the following:
#!/bin/bash
for Count in `grep -c "ID.580" Type1.txt`; do
echo $Count=ID.580
done > Result.txt # Allows counting only that single ID.
I have over a thousand ID.XXX entries, making this code unusable since it's not practical to add an individual ID.XXX for each search. Thank you for the help!
Shell
The code below uses the standard UNIX utilities and does not assume that the second part of the ID is exactly 3 characters: it will handle ID.1.123123123 and ID.1234.123123, properly keeping only the first two dot-delimited fields (the ID.<number> prefix).
grep '^ID\.[0-9]' Type1.txt | cut -d . -f 1-2 | sort \
| uniq -c | awk '{ print $2" = "$1 }'
grep filters only lines beginning with ID. followed by 1 digit (at least)
cut uses . as the field delimiter and outputs only fields 1 and 2, thus removing everything after and including the second . on the line
sort sorts the lines for uniq to work
uniq -c replaces each run of identical lines with a single line prefixed with a count
the awk part reverses these fields and prints them separated with =.
If the first part of the ID can contain letters too, change the [0-9] at the end of the regular expression to [0-9A-Z], for example.
The pipeline outputs
ID.580 = 3
ID.694 = 2
Python
As Python is popular among biologists, you might want to hone your Python skills instead:
from collections import Counter
counter = Counter()
with open('Type1.txt') as f:
for line in f:
if line.startswith('ID.'):
top_id = '.'.join(line.split('.', 2)[:2])
counter[top_id] += 1
for top_id, count in sorted(counter.items()):
print("%s = %d" % (top_id, count))
The results are exactly identical.
grep '^ID.[0-9][0-9][0-9]' input_file | cut -c1-6 | sort | uniq -c
works?
TL;DR
Given your particular corpus and grouping strategy, there's more than one way to get the results you need. Here are two alternative solutions, one in awk, and one in Ruby.
GNU awk
One way is to use GNU awk to perform the following steps:
match just the ID lines
split matching input lines into fields
select and print the fields you need
sort the lines in the filtered result
count the adjacent duplicates
perform any specialized formatting on the result
For example:
$ awk '/^ID/ {split($0, a, "."); print a[1] "." a[2]}' /tmp/foo |
sort | uniq --count | awk '{print $2 " = " $1}'
ID.580 = 3
ID.694 = 2
With the corpus you provided in your question, this takes an average of 8 ms on my system. A larger corpus will take longer, of course, but unless you have a really huge data set this should be fast enough for most purposes.
Ruby
Ruby offers what I consider a more elegant solution, but is in fact slower. The idea here is to store the relevant portion of your IDs as hash keys, and increment a counter each time you encounter a given ID. For example, consider this Ruby one-liner:
$ ruby -ne 'BEGIN { id = Hash.new(0) }
id[$&] += 1 if /\AID\.\d+/
END { id.each_pair do |k,v| puts "#{k} = #{v}" end }' /tmp/foo
ID.580 = 3
ID.694 = 2
This solution takes around 45 ms to process the same corpus, so I wouldn't recommend it over the awk pipeline just for transforming output. The main advantage to doing it this way is that you have an actual data structure (e.g. a Hash object) that you could manipulate in a more full-featured program.
Here is an awk one-liner:
$ awk -F. '$1=="ID"{a[$2,$3]++}END{for (i in a) {split(i,ind,SUBSEP); r[ind[1]]++}for (i in r) print "ID."i" = "r[i]}' file
ID.694 = 1
ID.580 = 3
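Written out over several lines, the same script reads:
awk -F. '
$1 == "ID" { a[$2,$3]++ }            # remember each distinct (id, code) pair
END {
    for (i in a) {                   # one iteration per distinct pair
        split(i, ind, SUBSEP)
        r[ind[1]]++                  # count distinct codes per id
    }
    for (i in r) print "ID." i " = " r[i]
}' file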
And here is a pure bash solution:
#!/bin/bash
while IFS=. read -r pre id code rest
do
[[ $pre == ID ]] || continue
[[ ${a[$id]} =~ \."$code"\. ]] || {
a[$id]="${a[$id]}.$code."
((count[$id]++));
}
done < file
for i in "${!count[@]}"
do
echo "ID.$i = ${count[$i]}"
done
$ ./script.sh
ID.580 = 3
ID.694 = 1
awk might work too...
awk '/ID.580/{x++}END{print x}' test.txt
You can put this in a for loop
for i in ID.580 ID.694
do
awk '/'$i'/{x++}END{print x}' test.txt
done
I know -A, -B, and -C can be used to show context around a grep match.
My question is, how to show different context on different keyword?
For example, how do I show -A 5 for cat, -B 4 for dog, and -C 1 for monkey:
egrep -A3 "cat|dog|monkey" <file>
// this just shows 3 trailing lines for each keyword.
I don't think there's any way to do it with a single grep call, but you could run it through grep once for each keyword and concatenate the output:
var=$(grep -n -A 5 cat file)$'\n'$(grep -n -B 4 dog file)$'\n'$(grep -n -C 1 monkey file)
var=$(sort -un <(echo "$var"))
Now echo "$var" will produce the same output as you would have gotten from your single command, plus line numbers and context indicators (the : prefix indicates a line that matched the pattern exactly, and the - prefix indicates a line included because of the -A, -B, and/or -C options).
The reason I included the line numbers is to preserve the order of the results you would have seen had you managed to do this in one statement. If you like them, great; if not, you can use the following line to cut them out:
var=$(cut -d: -f2- <(echo "$var") | cut -d- -f2-)
This passes it through once to cut the exact-match lines' prefixes, then again to cut the context lines' prefixes.
Pretty? No. But it works.
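Put together, the whole thing reads:
var=$(grep -n -A 5 cat file)$'\n'$(grep -n -B 4 dog file)$'\n'$(grep -n -C 1 monkey file)
var=$(sort -un <(echo "$var"))
var=$(cut -d: -f2- <(echo "$var") | cut -d- -f2-)   # optional: strip the prefixes
echo "$var"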
I'm afraid grep won't do that. You'll have to use a different tool. Perhaps write your own program.
Something like this would do it:
awk '
BEGIN{ ARGV[ARGC++] = ARGV[1] }
function prtB(nr) { for (i=FNR-nr; i<FNR; i++) print a[i] }
function prtA(nr) { for (i=FNR+1; i<=FNR+nr; i++) print a[i] }
NR==FNR{ a[NR]; next }
/cat/ { print; prtA(5) }
/dog/ { prtB(4); print }
/monkey/ { prtB(1); print; prtA(1) }
' file
Check the math on the loops in the functions. You didn't say how you'd want to handle lines that contain both monkey AND dog, for example.
EDIT: Here's an untested solution that would print the maximum context around any match, lets you specify the contexts on the command line, and won't use as much memory as the cheap-and-cheerful solution above:
awk -v cxts="cat:0:5\ndog:4:0\nmonkey:1:1" '
BEGIN{
ARGV[ARGC++] = ARGV[1]
numCxts = split(cxts,cxtsA,RS)
for (i=1;i<=numCxts;i++) {
regex = cxtsA[i]
n = split(regex,rangeA,/:/)
sub(/:[^:]+:[^:]+$/,"",regex)
endA[regex] = rangeA[n]
startA[regex] = rangeA[n-1]
regexA[regex]
}
}
NR==FNR{
for (regex in regexA) {
if ($0 ~ regex) {
start = NR - startA[regex]
end = NR + endA[regex]
for (i=start; i<=end; i++) {
prt[i]
}
}
}
next
}
FNR in prt
' file
Separate the searched-for patterns in the cxts variable with whatever your RS value is (newline by default).