Combining summary statistics from multiple input files in Bash

I want to generate summary statistics for "Mary" based on data in multiple files.
input1.txt looks like
Jose 88518 95 75 95 62 100 78 68
Alex 97502 84 79 80 73 88 95 79 85 93
Mary 98765 80 75 100 51 83 75 99 50 75 89 94
...
input2.txt looks like
Jack 32954 100 98 95 100 93 100 99 98 100 100
Mary 98765 85 83 96 77 81 84 98 75 87
Lisa 83746 100 100 100 100 99 100 98 100 100 100
...
Running the following one-liner code in Bash for input1.txt:
awk '/Mary/{for(n=3;n<=NF;n++) print $n}' input1.txt | Rscript -e 'summary(as.numeric(readLines("stdin")))'
The results are:
Min. 1st Qu. Median Mean 3rd Qu. Max.
50.00 75.00 80.00 79.18 91.50 100.00
Running the following code for input2.txt:
awk '/Mary/{for(n=3;n<=NF;n++) print $n}' input2.txt | Rscript -e 'summary(as.numeric(readLines("stdin")))'
The results are:
Min. 1st Qu. Median Mean 3rd Qu. Max.
75.00 81.00 84.00 85.11 87.00 98.00
How can I write a one-liner to combine Mary's stats from each data file into one report, similar to the following?
Min. 1st Qu. Median Mean 3rd Qu. Max.
50.00 75.00 80.00 79.18 91.50 100.00
75.00 81.00 84.00 85.11 87.00 98.00

I think you need to use a bash for loop (let the glob expand directly rather than parsing ls):
for file in input*.txt; do awk '/Mary/{for(n=3;n<=NF;n++) print $n}' "$file" | Rscript -e 'summary(as.numeric(readLines("stdin")))'; done
You will probably end up with the header repeated once per file; since we have no visibility into how the headers are created, it is hard to suggest how to suppress them.
Min. 1st Qu. Median Mean 3rd Qu. Max.
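If the repeated header from R is a nuisance, an awk-only alternative avoids R entirely. This is only a sketch, not a full replacement for R's summary(): quartiles and median are omitted, and only min, mean, and max are computed, one labelled line per file:

```shell
# awk-only sketch: prints one summary line per input file for "Mary".
# Quartiles/median omitted; only min, mean, and max are computed.
for file in input*.txt; do
  awk '$1 == "Mary" {
         min = max = $3; sum = cnt = 0
         for (n = 3; n <= NF; n++) {
           sum += $n; cnt++
           if ($n < min) min = $n
           if ($n > max) max = $n
         }
         printf "%s: Min=%s Mean=%.2f Max=%s\n", FILENAME, min, sum/cnt, max
       }' "$file"
done
```

With the sample files above this prints `input1.txt: Min=50 Mean=79.18 Max=100` and `input2.txt: Min=75 Mean=85.11 Max=98`, matching R's Min/Mean/Max values.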

Related

Can't generate any alignments in MCScanX

I'm trying to find collinearity between a group of genes from two different species using MCScanX, but I can't figure out what I'm doing wrong. I've checked both input files (.gff and .blast) countless times, and they seem to be in line with what the manual says.
For the first species, I downloaded the gff file from figshare. I already had the fasta file containing only the proteins of interest (also from figshare), so the gene ids matched. Then I downloaded both the gff and the protein fasta file from the coffee genome hub. I used the coffee protein fasta file as the reference in rBLAST to align the first species' genes against it. After blasting (and keeping only the five best alignments with e-values greater than 1e-10), I filtered both gff files so they only contained genes that matched those in the blast file, and then concatenated them. So the final files look like this:
View (test.blast) #just imagine they're tab separated values
sp1.id1 sp2.id1 44.186 43 20 1 369 411 206 244 0.013 37.4
sp1.id1 sp2.id2 25.203 123 80 4 301 413 542 662 0.00029 43.5
sp1.id1 sp2.id3 27.843 255 130 15 97 333 458 676 1.75e-05 47.8
sp1.id1 sp2.id4 26.667 105 65 3 301 396 329 430 0.004 39.7
sp1.id1 sp2.id5 27.103 107 71 3 301 402 356 460 0.000217 43.5
sp1.id2 sp2.id6 27.368 95 58 2 40 132 54 139 0.41 32
sp1.id2 sp2.id7 27.5 120 82 3 23 138 770 888 0.042 35
sp1.id2 sp2.id8 38.596 57 35 0 21 77 126 182 0.000217 42
sp1.id2 sp2.id9 36.17 94 56 2 39 129 633 725 1.01e-05 46.6
sp1.id2 sp2.id10 37.288 59 34 2 75 133 345 400 0.000105 43.1
sp1.id3 sp2.id11 33.846 65 42 1 449 512 360 424 0.038 37.4
sp1.id3 sp2.id12 40 50 16 2 676 725 672 707 6.7 30
sp1.id3 sp2.id13 31.707 41 25 1 370 410 113 150 2.3 30.4
sp1.id3 sp2.id14 31.081 74 45 1 483 550 1 74 3.3 30
sp1.id3 sp2.id15 35.938 64 39 1 377 438 150 213 0.000185 43.5
View (test.gff) #just imagine they're tab separated values
ex0 sp2.id1 78543527 78548673
ex0 sp2.id2 97152108 97154783
ex1 sp2.id3 16555894 16557150
ex2 sp2.id4 3166320 3168862
ex3 sp2.id5 7206652 7209129
ex4 sp2.id6 5079355 5084496
ex5 sp2.id7 27162800 27167939
ex6 sp2.id8 5584698 5589330
ex6 sp2.id9 7085405 7087405
ex7 sp2.id10 1105021 1109131
ex8 sp2.id11 24426286 24430072
ex9 sp2.id12 2734060 2737246
ex9 sp2.id13 179361 183499
ex10 sp2.id14 893983 899296
ex11 sp2.id15 23731978 23733073
ts1 sp1.id1 5444897 5448367
ts2 sp1.id2 28930274 28935578
ts3 sp1.id3 10716894 10721909
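As an aside, the e-value filtering step described above can be sketched in shell. This is a hedged sketch, not the asker's actual rBLAST code; it assumes BLAST tabular (outfmt 6) output, where field 11 is the e-value. One thing worth double-checking in the description: significant hits have *small* e-values, so the usual filter keeps e-values *below* a cutoff like 1e-10, whereas keeping those "greater than 1e-10" would discard the good alignments:

```shell
# Hedged sketch (not the asker's rBLAST code): keep the five best hits per
# query with a significant e-value. Field 11 of BLAST outfmt 6 is the
# e-value; smaller is more significant, so we keep $11 <= 1e-10.
sort -k1,1 -k11,11g test.blast |
    awk -F'\t' '$11 <= 1e-10 && ++seen[$1] <= 5' > test.filtered.blast
```

The `-g` sort key handles exponent notation like `1e-15`, and `seen[$1]` counts passing hits per query so at most five survive.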
So I moved both files into the test folder inside the MCScanX directory and ran MCScanX (on Ubuntu 20.04.5 LTS, under WSL) with:
../MCScanX ./test
I've also tried
../MCScanX -b 2 ./test
(since "-b 2" is the parameter for inter-species patterns of syntenic blocks)
but all I ever get is
255 matches imported (17 discarded)
85 pairwise comparisons
0 alignments generated
What am I missing?
I should be getting a test.synteny file that, as per the manual's example, looks like this:
## Alignment 0: score=9171.0 e_value=0 N=187 at1&at1 plus
0- 0: AT1G17240 AT1G72300 0
0- 1: AT1G17290 AT1G72330 0
...
0-185: AT1G22330 AT1G78260 1e-63
0-186: AT1G22340 AT1G78270 3e-174
##Alignment 1: score=5084.0 e_value=5.6e-251 N=106 at1&at1 plus

Delete repeated rows keeping one closer to another file using awk

I have two files
$cat file1.txt
0105 20 20 95 50
0106 20 20 95 50
0110 20 20 88 60
0110 20 20 88 65
0115 20 20 82 70
0115 20 20 82 70
0115 20 20 82 75
If you look at file1.txt, there are repeated values in column 1: 0110 and 0115.
So I would like to keep only one row per repeated value, chosen by the column-5 value that is closest to the corresponding value in a reference file (file2.txt). Here "closest" means equal to or nearest the value in file2.txt. I don't want to change any value in file1.txt, just select one row.
$cat file2.txt
0105 20 20 95 50
0106 20 20 95 50
0107 20 20 95 52
0110 20 20 88 65 34
0112 20 20 82 80 23
0113 20 20 82 85 32
0114 20 20 82 70 23
0115 20 20 82 72
0118 20 20 87 79
0120 20 20 83 79
So if we compare the two files, we must keep 0110 20 20 88 65, as its column-5 entry (65) is closest to that in the reference file (65 in file2.txt), and delete the other repeated rows. Similarly we must keep 0115 20 20 82 70, because 70 is closest to 72, and delete the other two rows starting with 0115.
Desired output:
0105 20 20 95 50
0106 20 20 95 50
0110 20 20 88 65
0115 20 20 82 70
I am trying with the following script, but I am not getting my desired result.
awk 'FNR==NR { a[$5]; next } $5 in a ' file1.txt file2.txt > test.txt
awk '{a[NR]=$1""$2} a[NR]!=a[NR-1]{print}' test.txt
My Fortran-style algorithm is:
# check each entry in column 1 of file1.txt against the next rows
# to see whether they are the same or not
for i = 1, nrows do        # i is the ith row
    if a[i,1] != a[i+1,1] then
        print the whole row as it is
    else
        # find the row b in file2.txt whose column 1 equals a[i,1],
        # compare its column 5 (b[5]) with column 5 of every file1.txt
        # row sharing that key, and take the differences to find the
        # closest one; e.g. if 3 rows share the same key, select the
        # row i for which diff(b[5], a[i,5]) is minimum, i = 1, 2, 3
    end if
end do
awk '
# Load file2.txt up front: map column 1 to its whole line.
BEGIN {
    while ((getline line < "file2.txt") > 0) {
        split(line, f)
        file2[f[1]] = line
    }
}
{
    # The first row seen for this key becomes the provisional winner.
    if (!($1 in result)) result[$1] = $0
    split(result[$1], a)
    split(file2[$1], f)
    # Keep whichever row has column 5 closest to the reference value.
    if (abs(f[5] - $5) < abs(f[5] - a[5])) result[$1] = $0
}
END {
    for (i in result) print result[i]
}
function abs(n) {
    return (n < 0 ? -n : n)
}' file1.txt | sort
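For comparison, the same selection also fits in a single awk invocation reading both files as arguments, with file2.txt first so the reference map is built before file1.txt is scanned. This is a sketch along the same lines, not from the original answer:

```shell
# Sketch: build ref[] from file2.txt (NR==FNR is true only for the first
# file), then for each file1.txt row keep the one whose column 5 is
# closest to the reference value for its key.
awk 'NR == FNR { ref[$1] = $5; next }
     { d = ($5 > ref[$1]) ? $5 - ref[$1] : ref[$1] - $5 }
     !($1 in best) || d < bd[$1] { best[$1] = $0; bd[$1] = d }
     END { for (k in best) print best[k] }' file2.txt file1.txt | sort
```

The trailing `sort` restores line order, since awk's `for (k in best)` iterates in unspecified order.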

How to replace a list of numbers in one column with random numbers from another column in a Bash environment

I have a tab file with two columns like that
5 6 14 22 23 25 27 84 85 88 89 94 95 98 100 6 94
6 8 17 20 193 205 209 284 294 295 299 304 305 307 406 205 284 307 406
2 10 13 40 47 58 2 13 40 87
and the desired output should be
5 6 14 22 23 25 27 84 85 88 89 94 95 98 100 14 27
6 8 17 20 193 205 209 284 294 295 299 304 305 307 406 6 209 299 305
2 10 13 23 40 47 58 87 10 23 40 58
I would like to replace the numbers in the 2nd column with random numbers drawn from the 1st column, producing a 2nd column with the same count of numbers. I mean, e.g., if there are four numbers in the 2nd column for row x, the output must have four random numbers from that row's 1st column, and so on...
I'm trying to create two arrays with awk and split, and replace every number in the 2nd column with numbers from the 1st column, but not in a random way. I have seen the rand() function but I don't know exactly how to join these two things in a script. Is it possible to do this in a Bash environment, or are there better ways to do it? Thanks in advance.
awk to the rescue!
$ awk -F'\t' '
function shuf(a, n) {            # Fisher-Yates shuffle
    for (i = 1; i < n; i++) {
        j = i + int(rand() * (n + 1 - i))
        t = a[i]; a[i] = a[j]; a[j] = t
    }
}
function join(a, n,   x, s) {    # x and s are local temporaries
    for (i = 1; i <= n; i++) { x = x s a[i]; s = " " }
    return x
}
BEGIN { srand() }
{
    an = split($1, a, " ")
    shuf(a, an)
    bn = split($2, b, " ")
    delete m; delete c; j = 0
    for (i = 1; i <= bn; i++) m[b[i]]
    # pull elements from a up to the required sample size,
    # not intersecting with the previous sample set
    for (i = 1; i <= an && j < bn; i++) if (!(a[i] in m)) c[++j] = a[i]
    cn = asort(c)                # asort requires GNU awk (gawk)
    print $1 FS join(c, cn)
}' file
5 6 14 22 23 25 27 84 85 88 89 94 95 98 100 85 94
6 8 17 20 193 205 209 284 294 295 299 304 305 307 406 20 205 294 295
2 10 13 23 40 47 58 87 10 13 47 87
Shuffle the input array (standard Fisher-Yates algorithm), then sample the required number of elements; the additional requirement is that the sample must not intersect the existing sample set. The helper map m holds the existing sample set and is used for the in tests. The rest should be easy to read.
Assuming that there is a tab delimiting the two columns, and each column is a space-delimited list:
awk 'BEGIN { srand() }
{
    n = split($1, a, " ")
    m = split($2, b, " ")
    printf "%s\t", $1
    for (i = 1; i <= m; i++)
        printf "%d%c", a[int(rand() * n) + 1], (i == m) ? "\n" : " "
}' FS='\t' input
Try this:
# This can be an external file of course
# Note COL1 and COL2 are separated by a hard TAB
cat <<EOF > d1.txt
5 6 14 22 23 25 27 84 85 88 89 94 95 98 100 6 94
6 8 17 20 193 205 209 284 294 295 299 304 305 307 406 205 284 307 406
2 10 13 40 47 58 2 13 40 87
EOF
# Loop to read each line; note we convert the TAB to ':', though we could have used IFS
cat d1.txt | sed 's/\t/:/' | while read LINE
do
    # Get the 1st column data
    COL1=$( echo ${LINE} | cut -d':' -f1 )
    # Get col1 number of items
    NUM_COL1=$( echo ${COL1} | wc -w )
    # Get col2 number of items
    NUM_COL2=$( echo ${LINE} | cut -d':' -f2 | wc -w )
    # Now split col1 items into an array
    read -r -a COL1_NUMS <<< "${COL1}"
    COL2=" "
    # This loop runs once for each COL2 item
    COUNT=0
    while [ ${COUNT} -lt ${NUM_COL2} ]
    do
        # Generate a random number to use as the random index for COL1
        COL1_IDX=${RANDOM}
        let "COL1_IDX %= ${NUM_COL1}"
        NEW_NUM=${COL1_NUMS[${COL1_IDX}]}
        # Check for a duplicate (-w so that e.g. 2 does not match inside 22)
        DUP_FOUND=$( echo "${COL2}" | grep -w ${NEW_NUM} )
        if [ -z "${DUP_FOUND}" ]
        then
            # Not a duplicate: increment the loop counter and do the next one
            let "COUNT = COUNT + 1"
            # Add the random COL1 item to COL2
            COL2="${COL2} ${NEW_NUM}"
        fi
    done
    # Sort COL2
    COL2=$( echo ${COL2} | tr ' ' '\012' | sort -n | tr '\012' ' ' )
    # Print
    echo ${COL1} :: ${COL2}
done
Output:
5 6 14 22 23 25 27 84 85 88 89 94 95 98 100 :: 88 95
6 8 17 20 193 205 209 284 294 295 299 304 305 307 406 :: 20 299 304 305
2 10 13 40 47 58 :: 2 10 40 58

convert comma separated list in text file into columns in bash

I've managed to extract data (from an html page) that goes into a table, and I've isolated the columns of said table into a text file that contains the lines below:
[30,30,32,35,34,43,52,68,88,97,105,107,107,105,101,93,88,80,69,55],
[28,6,6,50,58,56,64,87,99,110,116,119,120,117,114,113,103,82,6,47],
[-7,,,43,71,30,23,28,13,13,10,11,12,11,13,22,17,3,,-15,-20,,38,71],
[0,,,3,5,1.5,1,1.5,0.5,0.5,0,0.5,0.5,0.5,0.5,1,0.5,0,-0.5,-0.5,2.5]
Each bracketed list of numbers represents a column. What I'd like to do is turn these lists into actual columns that I can work with in different data formats. I'd also like to be sure to include the blank parts of these lists too (i.e., "[,,,]").
This is basically what I'm trying to accomplish:
30 28 -7 0
30 6
32 6
35 50 43 3
34 58 71 5
43 56 30 1.5
52 64 23 1
. . . .
. . . .
. . . .
I'm parsing data from a web page, and ultimately planning to make the process as automated as possible so I can easily work with the data after I output it to a nice format.
Anyone know how to do this, have any suggestions, or thoughts on scripting this?
Since you have your lists in Python, just do it in Python (Python 3 print):
l = [["30", "30", "32"], ["28", "6", "6"], ["-7", "", ""], ["0", "", ""]]
for i in zip(*l):
    print("\t".join(i))
produces
30 28 -7 0
30 6
32 6
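If the lists live in a text file rather than already being Python objects, parsing is one extra step. A sketch (Python 3; the raw lines from the question are embedded here so the snippet is self-contained, but they could just as well come from `open("somefile")`):

```python
import io
import itertools

# The raw text from the question, as it would sit in a file.
raw = io.StringIO(
    "[30,30,32,35,34,43,52,68,88,97,105,107,107,105,101,93,88,80,69,55],\n"
    "[28,6,6,50,58,56,64,87,99,110,116,119,120,117,114,113,103,82,6,47],\n"
    "[-7,,,43,71,30,23,28,13,13,10,11,12,11,13,22,17,3,,-15,-20,,38,71],\n"
    "[0,,,3,5,1.5,1,1.5,0.5,0.5,0,0.5,0.5,0.5,0.5,1,0.5,0,-0.5,-0.5,2.5]\n"
)

# Strip the trailing comma and brackets from each line, split on commas
# (keeping empty entries), then transpose; zip_longest pads short rows
# with empty strings so ragged columns still line up.
rows = [line.strip().rstrip(",").strip("[]").split(",")
        for line in raw if line.strip()]

for col in itertools.zip_longest(*rows, fillvalue=""):
    print("\t".join(col))
```

Unlike plain `zip`, `zip_longest` keeps going to the length of the longest list, which matters here because the four lists have different lengths.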
awk based solution:
awk -F, '{gsub(/\[|\]/, ""); for (i=1; i<=NF; i++) a[i]=a[i] ? a[i] OFS $i: $i}
END {for (i=1; i<=NF; i++) print a[i]}' file
30 28 -7 0
30 6
32 6
35 50 43 3
34 58 71 5
43 56 30 1.5
52 64 23 1
..........
..........
Another solution, but it works only for a file with four lines:
$ paste \
<(sed -n '1{s,\[,,g;s,\],,g;s|,|\n|g;p}' t) \
<(sed -n '2{s,\[,,g;s,\],,g;s|,|\n|g;p}' t) \
<(sed -n '3{s,\[,,g;s,\],,g;s|,|\n|g;p}' t) \
<(sed -n '4{s,\[,,g;s,\],,g;s|,|\n|g;p}' t)
30 28 -7 0
30 6
32 6
35 50 43 3
34 58 71 5
43 56 30 1.5
52 64 23 1
68 87 28 1.5
88 99 13 0.5
97 110 13 0.5
105 116 10 0
107 119 11 0.5
107 120 12 0.5
105 117 11 0.5
101 114 13 0.5
93 113 22 1
88 103 17 0.5
80 82 3 0
69 6 -0.5
55 47 -15 -0.5
-20 2.5
38
71
Updated: or another version with preprocessing:
$ sed 's|\[||;s|\][,]\?||' t >t2
$ paste \
<(sed -n '1{s|,|\n|g;p}' t2) \
<(sed -n '2{s|,|\n|g;p}' t2) \
<(sed -n '3{s|,|\n|g;p}' t2) \
<(sed -n '4{s|,|\n|g;p}' t2)
If a file named data contains the data given in the problem (exactly as defined above), then the following bash command line will produce the output requested:
$ sed -e 's/\[//' -e 's/\]//' -e 's/,/ /g' <data | rs -T
Example:
$ cat data
[30,30,32,35,34,43,52,68,88,97,105,107,107,105,101,93,88,80,69,55],
[28,6,6,50,58,56,64,87,99,110,116,119,120,117,114,113,103,82,6,47],
[-7,,,43,71,30,23,28,13,13,10,11,12,11,13,22,17,3,,-15,-20,,38,71],
[0,,,3,5,1.5,1,1.5,0.5,0.5,0,0.5,0.5,0.5,0.5,1,0.5,0,-0.5,-0.5,2.5]
$ sed -e 's/\[//' -e 's/\]//' -e 's/,/ /g' <data | rs -T
30 28 -7 0
30 6 43 3
32 6 71 5
35 50 30 1.5
34 58 23 1
43 56 28 1.5
52 64 13 0.5
68 87 13 0.5
88 99 10 0
97 110 11 0.5
105 116 12 0.5
107 119 11 0.5
107 120 13 0.5
105 117 22 1
101 114 17 0.5
93 113 3 0
88 103 -15 -0.5
80 82 -20 -0.5
69 6 38 2.5
55 47 71

bash script to sort

I have this file I created:
Kuala Lumpur 78 56
Seoul 86 66
Karachi 95 75
Tokyo 85 60
Lahore 85 75
Manila 90 85
On the command line I can sort it no problem using sort -t with a tab delimiter, but now I'm trying to write a script to read this in and print out different sorts. If I read it into an array and split on whitespace, the "Kuala Lumpur" line is thrown off, and then so is the sort. What do I do about that space? I don't want to take it out or replace it with a comma, but if I have to I will.
#!/bin/bash
cat asiapac-temps | sort -t' ' -k 1,1d
echo ""
cat asiapac-temps | sort -t' ' -k 2,2n
echo ""
cat asiapac-temps | sort -t' ' -k 3
This is what I'm using now. I was trying to do it a different way so as not to call sort over and over.
The output is:
By city:
Karachi 95 75
Kuala Lumpur 78 56
Lahore 85 75
Manila 90 85
Seoul 86 66
Tokyo 85 60
by high temp (col2)
Kuala Lumpur 78 56
Lahore 85 75
Tokyo 85 60
Seoul 86 66
Manila 90 85
Karachi 95 75
by low temp (col3)
Kuala Lumpur 78 56
Tokyo 85 60
Seoul 86 66
Karachi 95 75
Lahore 85 75
Manila 90 85
Since feature requests to allow marking a comment as an answer remain declined, I am copying the solution from the comments here.
You can't sort anything once and output 3 different results. Any time you write a loop in shell you've probably got the wrong approach (shell is primarily an environment from which to call tools, not a programming language). Just calling sort each time you want to produce sorted output will almost certainly be simpler and more efficient than any approach you can come up with involving array indexing. – Ed Morton
If your question is "how do I input the tab character from the command line": sort treats any run of blanks (spaces or tabs) as a field separator by default, which is exactly why the space in "Kuala Lumpur" throws the keys off; in bash you can pass a literal tab explicitly with sort -t$'\t'.
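Putting the comment's advice together with an explicit tab separator, a minimal sketch of the script (assuming the file really is tab-delimited, as the question states):

```shell
# Sketch (bash): sort splits fields on any run of blanks by default, so
# the space inside "Kuala Lumpur" would split the city across two fields.
# Passing the tab explicitly with -t$'\t' keeps each city as one field.
sort -t$'\t' -k1,1d asiapac-temps    # by city
echo ""
sort -t$'\t' -k2,2n asiapac-temps    # by high temp (col 2)
echo ""
sort -t$'\t' -k3,3n asiapac-temps    # by low temp (col 3)
```

With a tab-delimited file, "Kuala Lumpur 78 56" sorts first by high temp, as in the expected output above; sort is called once per ordering, which is simpler and usually faster than any array-indexing workaround.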
