I am very new to this kind of work, so please bear with me :) I am trying to calculate means over ranges of patterns. E.g. I have two tab-delimited files:
The file coverage.txt contains two columns. The first column indicates the position and the second the value assigned to that position. There are ca. 4*10^6 positions.
coverage.txt
1 10
2 30
3 5
4 10
The second file, "patterns.txt", contains three columns: 1. the name of the pattern, 2. the starting position of the pattern, and 3. the end position of the pattern. The pattern ranges do not overlap. There are ca. 3000 patterns.
patterns.txt
rpoB 1 2
gyrA 3 4
Now I want to calculate the mean of the values assigned to the positions of the different patterns and write the output to a new file containing the first column of patterns.txt as an identifier.
output.txt
rpoB 20
gyrA 7.5
I think this can be accomplished using awk but I do not know where to start. Your help would be greatly appreciated!
With four million positions, it might be time to reach for a more substantial programming language than shell/awk, but you can do it in a single pass with something like this:
awk '{
    if (FILENAME ~ "patterns.txt") {
        min[$1] = $2
        max[$1] = $3
    } else {
        for (pat in min) {
            if ($1 >= min[pat] && $1 <= max[pat]) {
                total[pat] += $2
                count[pat] += 1
            }
        }
    }
}
END {
    for (pat in total) {
        print pat, total[pat]/count[pat]
    }
}' patterns.txt coverage.txt
This omits any patterns that have no data in the coverage file; you can change the loop in the END block to iterate over everything in the patterns file instead and output 0 for the ones that didn't show up.
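A sketch of that variant: the same single-pass program, but the END loop iterates over min (which holds every pattern read from patterns.txt) and prints 0 when a pattern never matched any coverage position:

```shell
awk 'FILENAME == "patterns.txt" { min[$1] = $2; max[$1] = $3; next }
{
    for (pat in min)
        if ($1 >= min[pat] && $1 <= max[pat]) {
            total[pat] += $2
            count[pat]++
        }
}
END {
    for (pat in min)    # every pattern, whether covered or not
        print pat, (count[pat] ? total[pat] / count[pat] : 0)
}' patterns.txt coverage.txt
```

Note that `for (pat in min)` visits patterns in an unspecified order; pipe the output through sort if you need a stable ordering.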
I'm fairly new to linux/bash shell and I'm really having trouble printing two values (the highest and lowest) from a particular column in a text file. The file is formatted like this:
Geoff Audi 2:22:35.227
Bob Mercedes 1:24:22.338
Derek Jaguar 1:19:77.693
Dave Ferrari 1:08:22.921
As you can see the final column is a timing, I'm trying to use awk to print out the highest and lowest timing in the column. I'm really stumped, I've tried:
awk '{print sort -n < $NF}' timings.txt
However, that didn't seem to sort anything; I just received an output of:
1
0
1
0
...
Repeating over and over; it went on for longer, but I didn't want to paste a massive wall of it when you get the point after the first couple of iterations.
My desired output would be:
Min: 1:08:22.921
Max: 2:22:35.227
After question clarifications: if the time field always has the same number of digits in the same places, e.g. h:mm:ss.ss, the solution can be drastically simplified. Namely, we no longer need to convert the time to seconds to compare it; we can use a simple string/lexicographical comparison:
$ awk 'NR==1 {m=M=$3} {$3<m&&m=$3; $3>M&&M=$3} END {printf("min: %s\nmax: %s",m,M)}' file
min: 1:08:22.921
max: 2:22:35.227
The logic is the same as in the (previous) script below, just using a simpler, string-based comparison for ordering values (determining min/max). We can do that since we know all timings conform to the same format: if a < b lexicographically (for example "1:22:33" < "1:23:00"), we know a is "smaller" than b. (If values are not consistently formatted, lexicographical comparison alone can't order them, e.g. "12:00:00" < "3:00:00".)
So, on the first value read (first record, NR==1), we set the initial min/max to the timing read (in the 3rd field). For each record we test whether the current value is smaller than the current min, and if it is, we set a new min. Similarly for the max. We use short-circuiting instead of if to make the expressions shorter ($3<m && m=$3 is equivalent to if ($3<m) m=$3). In the END we simply print the result.
Here's a general awk solution that accepts time strings with a variable number of digits for hours/minutes/seconds per record:
$ awk '{split($3,t,":"); s=t[3]+60*(t[2]+60*t[1]); if (s<min||NR==1) {min=s;min_t=$3}; if (s>max||NR==1) {max=s;max_t=$3}} END{print "min:",min_t; print "max:",max_t}' file
min: 1:22:35.227
max: 10:22:35.228
Or, in a more readable form:
#!/usr/bin/awk -f
{
    split($3, t, ":")
    s = t[3] + 60 * (t[2] + 60 * t[1])
    if (s < min || NR == 1) {
        min = s
        min_t = $3
    }
    if (s > max || NR == 1) {
        max = s
        max_t = $3
    }
}
END {
    print "min:", min_t
    print "max:", max_t
}
For each line, we convert the time components (hours, minutes, seconds) in the third field to seconds, which we can then simply compare as numbers. As we iterate, we track the current min and max values, printing them in the END block. The initial values for min and max are taken from the first line (NR==1).
Given your statements that the time field is actually a duration and the hours component is always a single digit, this is all you need:
$ awk 'NR==1{min=max=$3} {min=(min<$3?min:$3); max=(max>$3?max:$3)} END{print "Min:", min ORS "Max:", max}' file
Min: 1:08:22.921
Max: 2:22:35.227
You don't want to run sort inside of awk (even with the proper syntax).
Try this:
sort -k3,3 timings.txt | sed -n '1p; $p'
where
sort orders the lines by the 3rd column; because every timing shares the fixed h:mm:ss.sss format, a plain lexicographic sort of that field is enough (a numeric sort would compare only the hours before the first :)
sed prints the first and last line of the sorted output
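To produce the labelled Min:/Max: output the question asks for, the sorted stream can be post-processed with a little awk. A sketch; like the approach above, it relies on every time sharing the fixed h:mm:ss.sss format, so a plain string sort of the 3rd field orders the times correctly:

```shell
sort -k3,3 timings.txt |
    awk 'NR == 1 { print "Min: " $NF } { last = $NF } END { print "Max: " last }'
```

The first line of the sorted output holds the minimum, the last line the maximum; awk labels them as it streams through.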
So I have a text file that contains a large number of lines. Each line is one long string with no spacing; however, each line contains several pieces of information. The program knows how to differentiate the important information in each line: it identifies the first 4 numbers/letters of the line as coinciding with a specific instrument. Here is a small example portion of the text file.
example text file
1002IPU3...
POIPIPU2...
1435IPU1...
1812IPU3...
BFTOIPD3...
1435IPD2...
As you can see, there are two lines containing 1435 within this text file, which coincides with a specific instrument; however, these lines are not identical. The program I'm using cannot do its calculation if there are duplicates of the same station (i.e., two 1435* stations). I need a way to search through my text files and identify any duplicates of the partial strings that represent the stations, so that I can delete one or both of the duplicates. If a Bash script could output the line numbers of the duplicates and what the duplicate lines say, that would be appreciated. I think there might be an easy way to do this, but I haven't been able to find any examples. Your help is appreciated.
If all you want to do is detect if there are duplicates (not necessarily count or eliminate them), this would be a good starting point:
awk '{ if (++seen[substr($0, 1, 4)] > 1) printf "Duplicates found : %s\n",$0 }' inputfile.txt
For that matter, it's a good starting point for counting or eliminating, too; it'll just take a bit more work...
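For the eliminating case, the same seen array works as a filter. The classic awk idiom below keeps only the first line for each 4-character prefix (a sketch, using the same inputfile.txt):

```shell
awk '!seen[substr($0, 1, 4)]++' inputfile.txt
```

The condition is true only the first time a prefix appears (seen[...] is still 0 before the post-increment, so !0 is true), and awk's default action prints the line.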
If you want the count of duplicates:
awk '{a[substr($0,1,4)]++} END {for (i in a) {if(a[i]>1) print i": "a[i]}}' test.in
1435: 2
or:
{
    a[substr($0,1,4)]++               # put prefixes into array and count them
}
END {                                 # in the end
    for (i in a) {                    # go thru all indexes
        if (a[i] > 1) print i": "a[i] # and print out the duplicate prefixes and their counts
    }
}
Slightly roundabout, but this should work:
cut -c 1-4 file.txt | sort -u > list
for i in $(cat list)
do
    echo -n "$i "
    grep -c "^$i" file.txt   # this tells you how many occurrences of each 'station'
done
Then you can do whatever you want with the ones that occur more than once.
Use the following Python script (Python 2.7 syntax):
#!/usr/bin/python
file_name = "device.txt"
f1 = open(file_name, 'r')
device = {}
line_count = 0
for line in f1:
    line_count += 1
    if device.has_key(line[:4]):
        device[line[:4]] = device[line[:4]] + "," + str(line_count)
    else:
        device[line[:4]] = str(line_count)
f1.close()
print device
Here the script reads each line; the first 4 characters of each line are taken as the device name, and it builds a dictionary whose keys are device names and whose values are the line numbers where that device name is found.
The output would be:
{'POIP': '2', '1435': '3,6', '1002': '1', '1812': '4', 'BFTO': '5'}
This might help you out!
I have a file with two columns. The first column is a string, the second a positive number. If the first field (string) has no duplicate in the file (i.e., the first field is unique), I want to copy that line to (let's say) result.txt. If the first field does have a duplicate in the file, then I want to subtract the second fields (numbers) of those duplicated lines. By the way, the file will have at most one duplicate per name, no more than that. I want to save that in result.txt as well. So, the output file will have all lines with unique values of the first field, plus lines in which the first field is a duplicated name and the second is the value subtracted from those duplicates. The files are not sorted. Here is an example:
INPUT FILE:
hello 7
something 8
hey 9
hello 8
something 12
nathanforyou 23
OUTPUT FILE that I need (result.txt):
hello 1
something 4
hey 9
nathanforyou 23
I can't have negative numbers in the resulting file, so I have to subtract the smaller number from the bigger one. What have I tried so far? All kinds of sort (I figured out how to find non-duplicate lines and put them in a separate file, but choked on the duplicate subtraction), arrays in awk (I saved all lines in an array and wrote a "for" clause... the problem is that I don't know how to get the second field from an array element that is a whole line), etc. By the way, the problem is more complicated than I described (I have four fields, the first two are the same, and so on), but in the end it comes to this.
$ cat tst.awk
{ val[$1,++cnt[$1]] = $2 }
END {
    for (name in cnt) {
        if ( cnt[name] == 1 ) {
            print name, val[name,1]
        }
        else {
            val1 = val[name,1]
            val2 = val[name,2]
            print name, (val1 > val2 ? val1 - val2 : val2 - val1)
        }
    }
}
$ awk -f tst.awk file
hey 9
hello 1
nathanforyou 23
something 4
In its basic form: I am given a text file with state vote results from the 2012 Presidential Election, and I need to write a one-line shell script in Unix to determine which candidate won. The file has various fields, one of which is CandidateName and another TotalVotes. Each record in the file is the result from one precinct within the state, so there are many records for any given CandidateName. What I'd like to do is sort the data by CandidateName and then sum TotalVotes for each unique CandidateName (so the sum starts at a unique CandidateName and ends before the next unique CandidateName).
No need for sorting; awk and its associative arrays will do. For convenience, say the data file format is:
precinct1:candidate name1:732
precinct1:candidate2 name:1435
precinct2:candidate name1:9920
precinct2:candidate2 name:1238
Thus you need to create totals of field 3 based on field 2 with : as the delimiter.
awk -F: '{sum[$2] += $3} END { for (name in sum) { print name " = " sum[name] } }' data.file
Some versions of awk can sort internally; others can't. I'd use the sort program to process the results:
sort -t= -k2nb
(field separator is the = sign; the sort is on field 2, which is a numeric field, possibly with leading blanks).
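Putting the two steps together, a sketch that prints only the winning candidate (data.file in the format shown above; head -n1 takes the top line of a descending numeric sort):

```shell
awk -F: '{ sum[$2] += $3 } END { for (name in sum) print name " = " sum[name] }' data.file |
    sort -t= -k2,2nrb | head -n 1
```

The r flag reverses the numeric sort so the largest total comes first, and b skips the blank that awk prints after the = sign.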
Not quite one line, but this will work:
$ cat votes.txt
Colorado Obama 50
Colorado Romney 20
Colorado Gingrich 30
Florida Obama 60
Florida Romney 20
Florida Gingrich 30
script
while read loc can num
do
    if ! [ "${!can}" ]
    then
        cans+=($can)
    fi
    (( $can += num ))
done < votes.txt

for can in ${cans[*]}
do
    echo $can ${!can}
done
output
Obama 110
Romney 40
Gingrich 60
I am trying to resolve locations given as lat and long in one file to a couple of named fields in another file.
I have one file that looks like this:
f1--f2--f3--------f4-------- f5---
R 20175155 41273951N078593973W 18012
R 20175156 41274168N078593975W 18000
R 20175157 41274387N078593976W 17999
R 20175158 41274603N078593977W 18024
R 20175159 41274823N078593978W 18087
Each character is in a specific place, so I need to define fields based on character positions:
f1 char 18-21; f2 char 22-25; f3 char 26-35; f4 char 36-45; f5 char 62-66.
I have another, much larger CSV file whose fields 11, 12, and 13 correspond to f3, f4, and f5:
awk -F',' '{print $11, $12, $13}'
41.46703821 -078.98476926 519.21
41.46763555 -078.98477791 524.13
41.46824123 -078.98479015 526.67
41.46884129 -078.98480615 528.66
41.46943371 -078.98478482 530.50
I need to find the closest match to file 1 field 1 && 2 in file 2 field 11 && 12;
When the closest match is found I need to insert field 1, 2, 3, 4, 5 from file 1 into file 2 field 16, 17, 18, 19, 20.
As you can see the format is slightly different. File 1 breaks down like this..
File 1
f3-------f4--------
DDMMSSdd DDDMMSSdd
41273951N078593973W
File 2
f11-------- f12---------
DD dddddddd DDD dddddddd
41.46703821 -078.98476926
N means f3 is a positive number, W means f4 is a negative number.
I changed file 1 with sed, a ridiculous one-liner that works great (better way???):
cat $file1 | sed 's/.\{17\}//' \
    | sed 's/\(.\{4\}\)\(.\{4\}\)\(.\{9\}\)\(.\)\(.\{9\}\)\(.\)\(.\{16\}\)\(.\{5\}\)/\1,\2,\3,\4,\5,\6,\8/' \
    | sed 's/\(.\{10\}\)\(.\{3\}\)\(.\{2\}\)\(.\{2\}\)\(.\{2\}\)\(.\{3\}\)\(.\{3\}\)\(.\{2\}\)\(.*\)/\1\2,\3,\4.\5\6\7,\8\9/' \
    | sed 's/\(.\{31\}\)\(.\{2\}\)\(.*\)/\1,\2.\3/'
2017,5155, 41,27,39.51,N,078,59,39.73,W,18012
2017,5156, 41,27,41.68,N,078,59,39.75,W,18000
2017,5157, 41,27,43.87,N,078,59,39.76,W,17999
2017,5158, 41,27,46.03,N,078,59,39.77,W,18024
2017,5159, 41,27,48.23,N,078,59,39.78,W,18087
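On the "better way???" question: fixed-column extraction like this is often clearer with awk's substr(). A sketch using the character positions stated above (f1 18-21, f2 22-25, f3 26-35, f4 36-45, f5 62-66); whether those offsets exactly match the real file is an assumption:

```shell
# file1 is the fixed-width input file from the question
awk '{ printf "%s,%s,%s,%s,%s\n",
       substr($0, 18, 4), substr($0, 22, 4),
       substr($0, 26, 10), substr($0, 36, 10),
       substr($0, 62, 5) }' file1
```

substr($0, start, length) slices each line by position, so a single command replaces the chained substitutions.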
Now I have to convert the formats. (RESOLVED this, see below. The problem was that the numbers were rounded off too far; I need at least six decimal places.)
awk -F',' '{
    for (i = 1; i <= NF; i++) {
        if (i <= 2)                    printf ($i",")
        else if (i == 3 && $6 == "S")  printf("-"$3+($4/60)+($5/3600)",")
        else if (i == 3 && $6 == "N")  printf($3+($4/60)+($5/3600)",")
        else if (i == 7 && $10 == "W") printf("-"$7+($8/60)+($9/3600)",")
        else if (i == 7 && $10 == "E") printf($7+($8/60)+($9/3600)",")
        if (i == 11)                   printf ($i"\n")
    }
}'
2017,5155,41.461,-78.9944,18012
2017,5156,41.4616,-78.9944,18000
2017,5157,41.4622,-78.9944,17999
2017,5158,41.4628,-78.9944,18024
2017,5159,41.4634,-78.9944,18087
That's where I'm at.
RESOLVED THIS
*I need to get the number format to have at least 6 decimal places from this formula.*
printf($3+($4/60)+($5/3600))
Added "%.8f"
printf("%.8f", $3+($4/60)+($5/3600))
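With that fix folded in, the whole conversion can be written around a small awk function. A sketch: the sign convention for S/W follows the question, and the comma-separated intermediate format (year, id, D, M, S, hemisphere, D, M, S, hemisphere, elevation) is the one produced by the sed one-liner above; the file name is an assumption:

```shell
awk -F, '
function dec(d, m, s, hemi,    v) {   # DMS -> decimal degrees
    v = d + m/60 + s/3600
    return (hemi == "S" || hemi == "W") ? -v : v
}
{ printf "%s,%s,%.8f,%.8f,%s\n", $1, $2,
         dec($3, $4, $5, $6), dec($7, $8, $9, $10), $11 }' file1.csv
```

The extra v parameter is the usual awk trick for declaring a function-local variable.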
Next issue will be to match the fields file 1 f3 and f4 to the closest match in file 2 f11 and f12.
Any ideas?
Then I will need to calculate the distance between the fields.
In Excel the formula would be like this:
=ATAN2(COS(lat1)*SIN(lat2)-SIN(lat1)*COS(lat2)*COS(lon2-lon1), SIN(lon2-lon1)*COS(lat2))
What could I use for that calculation?
UPDATE ---
I am looking at short distances for the matching locations, so I was thinking of applying something simple like Pythagoras' theorem for the nearest match, maybe even using fewer decimal places. It should be many times faster.
Maybe something like this:
x = (lon2-lon1) * Math.cos((lat1+lat2)/2);
y = (lat2-lat1);
d = Math.sqrt(x*x + y*y) * R;
Then I could do the heavy calculations required for greater accuracy after the final file is updated.
Thanks
You can't do the distance calculation after you perform the closest match: "closest" is defined by comparing the distance values. Awk can evaluate the formula you want (it looks like a great-circle distance?). Take a look at this chapter to see what you need.
The big problem is finding the nearest match. Write an awk script that takes a single line of file 1 and outputs the lines of file 2 with an extra column: the distance between the pair of points according to your distance formula. If you sort that output numerically (sort -n), your closest match is at the top. Then you need a script that loops over each line in file 1, calls the awk script, uses head -n1 to pull out the closest match, and outputs it in the format you want.
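That loop might be sketched like this, using the equirectangular approximation from the question's update. The file names, and the assumption that lat/lon live in comma-separated fields 3/4 of the converted file 1 and fields 11/12 of file 2, are taken from the question but not guaranteed:

```shell
#!/bin/sh
# For every point in file1.csv, print the nearest row of file2.csv,
# prefixed with the approximate distance in km.
while IFS=, read -r f1 f2 lat lon elev; do
    awk -F, -v la="$lat" -v lo="$lon" '
        BEGIN { pi = atan2(0, -1); R = 6371 }
        {
            # equirectangular approximation (fine for short distances)
            x = ($12 - lo) * (pi / 180) * cos((la + $11) * pi / 360)
            y = ($11 - la) * (pi / 180)
            print sqrt(x * x + y * y) * R "," $0
        }' file2.csv | sort -t, -k1,1n | head -n 1
done < file1.csv
```

This is O(n*m) over the two files, which may be slow for a very large file 2; as the answer notes, an indexed approach in Python would scale better.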
This is all possible in bash and awk, but it would be a much simpler script in Python. Depends on which you prefer.