Adding values across lines when the first field matches - bash

I have a CSV like:
1015,5
1015,4
1035,17
1035,11
1009,1
1009,4
1026,9
1004,5
1004,5
1009,1
I am looking for a way to sum the second number whenever the first number matches:
1015,9
1035,28
1009,6
1026,9
1004,10

Try this:
awk 'BEGIN{FS=OFS=","}{a[$1]+=$2}END{for(i in a){print i,a[i]}}' file
This is the awk snippet that every shell coder should know off the top of their head.
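For readability, here is the same program in expanded form (identical behavior):
awk '
BEGIN { FS = OFS = "," }            # fields are comma-separated on input and output
{ a[$1] += $2 }                     # accumulate column 2 under the column 1 key
END { for (i in a) print i, a[i] }  # print each key with its total
' file
Note that for (i in a) does not guarantee any particular output order; pipe the result through sort if the order matters.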

Related

extracting text blocks from input file with awk or sed and save each block in a separate output file

I am trying to use awk to extract text blocks (first field/column only, but multiple lines; the number of lines varies between blocks) based on separators (# and --). These columns represent sequence IDs.
Using awk I am able to separate the blocks and print the first column, but I cannot redirect these text blocks to separate output files.
Code:
awk '/#/,/--/{print $1}' OTU_test.txt
Ideally, I would like to save each block (excluding the separators) to a file named after some text found in the first line of the block (e.g. MEMB.nem.6; MEMB.nem. is constant, but the number changes).
Example of input file
#OTU_MEMB.nem.6
EF494252.1.2070 6750.0 D_0__Eukaryota;D_1__Opisthokonta;D_2__Nucletmycea;D_3__Fungi;D_7__Dothideomycetes;D_8__Capnodiales;D_9__uncultured fungus 1.000
FJ235519.1.1436 5957.0 D_0__Eukaryota;D_1__Opisthokonta;D_2__Nucletmycea;D_3__Fungi;D_7__Dothideomycetes;D_8__Capnodiales;D_9__uncultured fungus 1.000
New.ReferenceOTU9219 5418.0 D_0__Eukaryota;D_1__Opisthokonta;D_2__Nucletmycea;D_3__Fungi 1.000
GQ120120.1.1635 471.0 D_0__Eukaryota;D_1__Opisthokonta;D_2__Nucletmycea;D_3__Fungi;D_7__Dothideomycetes;D_8__Capnodiales;D_9__uncultured fungus 0.990
--
#OTU_MEMB.nem.163
New.CleanUp.ReferenceOTU59580 12355.0 D_0__Eukaryota;D_1__Opisthokonta;D_2__Holozoa;D_3__Metazoa (Animalia);D_7__Chromadorea;D_8__Monhysterida 0.700
New.ReferenceOTU11809 1312.0 D_0__Eukaryota;D_1__Opisthokonta;D_2__Holozoa;D_3__Metazoa (Animalia);D_7__Chromadorea;D_8__Monhysterida 0.770
--
#OTU_MEMB.nem.35
New.CleanUp.ReferenceOTU120578 12116.0 D_0__Eukaryota;D_1__Opisthokonta;D_2__Holozoa;D_3__Metazoa (Animalia);D_7__Chromadorea;D_8__Desmoscolecida;D_9__Desmoscolex sp. DeCoSp2 0.780
Expected output files (first column only, no separators):
MEMB.nem.6.txt
EF494252.1.2070
FJ235519.1.1436
New.ReferenceOTU9219
GQ120120.1.1635
MEMB.nem.163.txt
New.CleanUp.ReferenceOTU59580
New.ReferenceOTU11809
MEMB.nem.35.txt
New.CleanUp.ReferenceOTU120578
I have searched a lot, but so far I have been unsuccessful. I would be very happy if someone could advise me.
Thanks,
Tiago
awk '
sub(/^#OTU_/, "") {      # header line: strip the #OTU_ prefix, ...
    close(out)           # ... close the previous output file, ...
    out = $0 ".txt"      # ... and use the remainder as the new file name
    next
}
!/^--/ {                 # every non-separator line:
    print $1 > out       # write its first column to the current output file
}
' file

bash: identifying the first value in one list that also exists in another list

I have been trying to come up with a nice way in BASH to find the first entry in list A that also exists in list B, where A and B are in separate files.
A B
1024dbeb 8e450d71
7e474d46 8e450d71
1126daeb 1124dae9
7e474d46 7e474d46
1124dae9 3217a53b
In the example above, 7e474d46 is the first entry in A also appearing in B, so I would return 7e474d46.
Note: A can be millions of entries, and B can be around 300.
awk is your friend.
awk 'NR==FNR{a[$1]++;next}{if(a[$1]>=1){print $1;exit}}' file2 file1
7e474d46
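The NR==FNR idiom does the work here: NR equals FNR only while awk is reading the first file argument (file2, i.e. list B). A lightly expanded sketch of the same logic, using $1 in b instead of a counter:
awk '
NR == FNR { b[$1]; next }     # first file (list B): remember every value
$1 in b   { print $1; exit }  # second file (list A): print the first hit and stop
' file2 file1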
Note: check the previous version of this answer too, which assumed that the values were listed in a single file as two columns. This one was written after you clarified in a comment that the values are fed as two files.
A few points are still unclear, e.g. what should happen if a number in list A occurs two or more times (in your example, 7e474d46 itself occurs twice). Assuming you need all the line numbers of list A that are present in list B, the following will do that:
awk '{col1[$1]=col1[$1]?col1[$1]","FNR:FNR;col2[$2];} END{for(i in col1){if(i in col2){print col1[i],i}}}' Input_file
Or (non-one-liner form of the above solution):
awk '{
    col1[$1] = col1[$1] ? col1[$1] "," FNR : FNR;
    col2[$2];
}
END {
    for (i in col1) {
        if (i in col2) {
            print col1[i], i
        }
    }
}' Input_file
The above code produces the following output:
3,5 7e474d46
6 1124dae9
This creates array col1, indexed by the first field, and array col2, indexed by the second field. col1's value accumulates the line numbers (FNR) at which each first-column value occurs, comma-separated. In the END section we traverse col1 and check whether each index is also present in col2; if it is, we print col1's value (the line numbers) and the index itself.
If you have GNU grep, you can try this:
grep -m 1 -f B A
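By default grep treats each line of B as a substring regex, so an entry could match inside a longer value in A. If the lists hold exact values, anchoring them as fixed whole-line strings is safer:
grep -m 1 -F -x -f B A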

Find lines that have partial matches

So I have a text file that contains a large number of lines. Each line is one long string with no spacing; however, each line contains several pieces of information. The program knows how to differentiate the important information in each line. It identifies the first 4 numbers/letters of the line as corresponding to a specific instrument. Here is a small example portion of the text file.
1002IPU3...
POIPIPU2...
1435IPU1...
1812IPU3...
BFTOIPD3...
1435IPD2...
As you can see, there are two lines that contain 1435 within this text file, which corresponds to a specific instrument. However, these lines are not identical. The program I'm using cannot do its calculation if there are duplicates of the same station (i.e., there are two 1435* stations). I need a way to search through my text files and identify any duplicates of the partial strings that represent the stations, so that I can delete one or both of the duplicates. If a BASH script could output the numbers of the lines containing the duplicates and what the duplicate lines say, that would be appreciated. I think there might be an easy way to do this, but I haven't been able to find any examples. Your help is appreciated.
If all you want to do is detect if there are duplicates (not necessarily count or eliminate them), this would be a good starting point:
awk '{ if (++seen[substr($0, 1, 4)] > 1) printf "Duplicates found : %s\n",$0 }' inputfile.txt
For that matter, it's a good starting point for counting or eliminating, too; it'll just take a bit more work. For example:
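A minimal sketch for eliminating: keep only the first line seen for each 4-character prefix and write the result to a new file:
awk '!seen[substr($0, 1, 4)]++' inputfile.txt > deduplicated.txt
A line is printed only the first time its prefix appears; every later line with the same prefix is dropped.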
If you want the count of duplicates:
awk '{a[substr($0,1,4)]++} END {for (i in a) {if(a[i]>1) print i": "a[i]}}' test.in
1435: 2
or:
{
    a[substr($0,1,4)]++                  # put prefixes to array and count them
}
END {                                    # in the end
    for (i in a) {                       # go thru all indexes
        if (a[i] > 1) print i": "a[i]    # and print out the duplicate prefixes and their counts
    }
}
Slightly roundabout, but this should work:
cut -c 1-4 file.txt | sort -u > list
while read -r i; do
    echo -n "$i "
    grep -c "^$i" file.txt   # this tells you how many occurrences of each 'station'
done < list
Then you can do whatever you want with the ones that occur more than once.
Or use the following Python script (Python 2.7 syntax):
#!/usr/bin/python
file_name = "device.txt"
f1 = open(file_name, 'r')
device = {}
line_count = 0
for line in f1:
    line_count += 1
    if device.has_key(line[:4]):
        device[line[:4]] = device[line[:4]] + "," + str(line_count)
    else:
        device[line[:4]] = str(line_count)
f1.close()
print device
Here the script reads each line, taking the first 4 characters as the device name, and builds a dictionary whose keys are device names and whose values are the line numbers where each device name is found.
The output would be:
{'POIP': '2', '1435': '3,6', '1002': '1', '1812': '4', 'BFTO': '5'}
This might help you out!

Conditional search and replace in Shell

I have a string-processing requirement: I want to take the line at line number n, edit it (replace #2 with #3), and then insert the edited string at line number n+1.
Here is what my input file looks like:
x/a y/a z/a
x/a#2 y/a#2 z/a#2
x/b y/b z/b
x/b#2 y/b#2 z/b#2
Expected output is below; notice the third line with #3. This is what I am expecting.
x/a y/a z/a
x/a#2 y/a#2 z/a#2
x/a#3 y/a#3 z/a#3
x/b y/b z/b
x/b#2 y/b#2 z/b#2
What I have tried:
I have a basic understanding of sed, so I was able to search and replace a string using:
sed '/a#2/ s/a#2/a#3/' -i $file
However, I am not able to figure out a way to insert the edited line after the line it was picked up from.
Any help will be appreciated.
TIA
You can simply print the line you want to edit, before you edit it:
sed '/a#2/{ p; s/a#2/a#3/g; }'
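Here p prints the pattern space (the still-unmodified line) before the substitution rewrites it, so the edited copy lands on the following line. Once the output looks right, the same script can edit the file in place (assuming GNU sed for -i):
sed -i '/a#2/{ p; s/a#2/a#3/g; }' "$file"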

SPLIT file by Script (bash, cpp) - numbers in columns

I have files with several columns of numbers (floats). I need to split these files according to the value in one of the columns (the column can be chosen). This means: when my file has columns
a b c
and the value of c fulfils 0.05 <= c <= 0.1, then create a file named after c and copy into it the whole lines that fulfil the c-condition...
Is this possible? I can do something small with bash or awk, and something with C++ as well.
I have searched for some solutions, but so far I can only sort the data and read the first number of each line. I don't know how to go further.
Thank you,
Jane
As you mentioned awk, the basic rule in awk is 'match a line (either by default or with a regexp, condition or line number)' AND 'do something because you found a match'.
awk uses values like $1, $2, $3 to indicate which column in the current line of data it is looking at. $0 refers to the whole line. So ...
awk '
BEGIN {
    afile = "afile.txt"    # (afile/bfile are set up the same way for the other columns)
    bfile = "bfile.txt"
    cfile = "cfile.txt"
}
{
    # test c value between .05 and .1
    if ($3 >= 0.05 && $3 <= 0.1) print $0 > cfile
}' inputData
Note that I am testing the value of the third column (c in your example). You can use $2 to test the b column, etc.
If you don't know about the sort of condition test I have included ($3 >= 0.05 && $3 <= 0.1), you'll have some learning ahead of you.
Questions in the form of 1. I have this input, 2. I want this output. 3. (but) I'm getting this output, 4. with this code .... {code here} .... have a much better chance of getting a reasonable response in a reasonable amount of time ;-)
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
If I understand your requirements correctly:
awk '{print > $3}' file ...
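One caveat: awk keeps every output file open, so if the third column takes many distinct values, a non-GNU awk may hit its open-file limit. A sketch that closes each file as it goes (reopening in append mode) avoids that:
awk '{ out = $3; print >> out; close(out) }' file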
