Check if string exist in non-consecutive lines in a given column - bash

I have files with the following format:
ATOM 8962 CA VAL W 8 8.647 81.467 25.656 1.00115.78 C
ATOM 8963 C VAL W 8 10.053 80.963 25.506 1.00114.60 C
ATOM 8964 O VAL W 8 10.636 80.422 26.442 1.00114.53 O
ATOM 8965 CB VAL W 8 7.643 80.389 25.325 1.00115.67 C
ATOM 8966 CG1 VAL W 8 6.476 80.508 26.249 1.00115.54 C
ATOM 8967 CG2 VAL W 8 7.174 80.526 23.886 1.00115.26 C
ATOM 4440 O TYR S 89 4.530 166.005 -14.543 1.00 95.76 O
ATOM 4441 CB TYR S 89 2.847 168.812 -13.864 1.00 96.31 C
ATOM 4442 CG TYR S 89 3.887 169.413 -14.756 1.00 98.43 C
ATOM 4443 CD1 TYR S 89 3.515 170.073 -15.932 1.00100.05 C
ATOM 4444 CD2 TYR S 89 5.251 169.308 -14.451 1.00100.50 C
ATOM 4445 CE1 TYR S 89 4.464 170.642 -16.779 1.00100.70 C
ATOM 4446 CE2 TYR S 89 6.219 169.868 -15.298 1.00101.40 C
ATOM 4447 CZ TYR S 89 5.811 170.535 -16.464 1.00100.46 C
ATOM 4448 OH TYR S 89 6.736 171.094 -17.321 1.00100.20 O
ATOM 4449 N LEU S 90 3.944 166.393 -12.414 1.00 94.95 N
ATOM 4450 CA LEU S 90 5.079 165.622 -11.914 1.00 94.44 C
ATOM 5151 N LEU W 8 -66.068 209.785 -11.037 1.00117.44 N
ATOM 5152 CA LEU W 8 -64.800 210.035 -10.384 1.00116.52 C
ATOM 5153 C LEU W 8 -64.177 208.641 -10.198 1.00116.71 C
ATOM 5154 O LEU W 8 -64.513 207.944 -9.241 1.00116.99 O
ATOM 5155 CB LEU W 8 -65.086 210.682 -9.033 1.00115.76 C
ATOM 5156 CG LEU W 8 -64.274 211.829 -8.478 1.00113.89 C
ATOM 5157 CD1 LEU W 8 -64.528 211.857 -7.006 1.00111.94 C
ATOM 5158 CD2 LEU W 8 -62.828 211.612 -8.739 1.00112.96 C
In principle, column 5 (W in this case, which represents the chain ID) should be identical only within consecutive chunks. However, in files with too many chains there are not enough letters in the alphabet to assign a unique ID per chain, so duplicates may occur.
I would like to check whether or not this is the case. In other words, I would like to know if a given chain ID (A-Z, always in the 5th column) is present in non-consecutive chunks. I do not mind if it changes from W to S; I want to know if two chunks share the same chain ID, i.e. whether W or S reappears at some point. Strictly speaking, this is only a problem if the chunks also share the 1st and 6th columns, but I do not want to complicate things too much.
I do not want to print the lines, just the name of the file in which the issue occurs and the chain ID (in this case W), so that I can fix it. In fact, I already know how to solve the problem, but I need to identify the problematic files in order to focus on those and not repair files that are already sane.
SOLUTION (thanks to all for your help and namely to sehe):
for pdb in *.pdb ; do
    hit=$(awk '$1 == "ATOM"' "$pdb" | cut -c22-23 | uniq | sort | uniq -dc)
    [ -n "$hit" ] && echo "$pdb = $hit"
done

For this particular sample:
cut -c22-23 t | uniq | sort | uniq -dc
Will output
2 W
(characters 22-23 contain 2 separate runs of the letter 'W')

untested:
awk '
    seen[$5] && $5 != current {
        print "found non-consecutive chain on line " NR
        exit
    }
    { current = $5; seen[$5] = 1 }
' filename

Here you go; this awk script is tested and takes into account not just 'W':
{
    if (ln[$5] && ln[$5] + 1 != NR) {
        print "dup " $5 " at line " NR
    }
    ln[$5] = NR
}
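Combining the two scripts above into something that checks a whole directory and reports the file name together with the offending chain ID (which is what the question actually asks for), a sketch along the same lines, assuming whitespace-separated fields as in the sample:

```shell
# Scan many PDB files at once and report any chain ID (field 5 of ATOM
# records) that starts a second, non-adjacent run of lines.
awk '
    FNR == 1 { split("", seen); prev = "" }   # reset state per input file
    $1 == "ATOM" {
        if ($5 != prev && seen[$5])
            print FILENAME ": duplicate chain " $5
        seen[$5] = 1
        prev = $5
    }
' *.pdb
```

Unlike the cut -c22-23 pipeline, this keys on the whitespace-separated field, so it also works when the columns are not perfectly aligned.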

Related

Find a column from a file in another file based on value and order

I have two files and would like to find out which parts of file 1 occur in the same order/sequence in file 2, based on one of multiple columns (col4). The files are sorted by an identifier in col1 (from 1 to n), but the identifier is not the same between the files. The column in file 1 always occurs as one block in file 2.
file1:
x 1
x 2
x 3
file2:
y 5
y 1
y 2
y 3
y 6
output:
y 1
y 2
y 3
Another thing to take into consideration is, that the entries in the column to be filtered on are not unique.
I already tried
awk 'FNR==NR{ a[$2]=$2;next } ($2 in a)' file1 file2 > output
but it only works if you have unique identifiers.
To clarify it with real life data: I would like to extract the rows where I have the same order based on column 4.
File1:
ATOM 13 O ALA A 2 37.353 35.331 -19.903 1.00 71.02 O
ATOM 18 O TRP A 3 38.607 32.133 -18.273 1.00 69.13 O
File2:
ATOM 1 N MET A 1 42.218 38.990 -18.511 1.00 64.21 N
ATOM 10 CA ALA A 2 38.451 37.475 -20.033 1.00 71.02 C
ATOM 13 O ALA A 2 37.353 35.331 -19.903 1.00 71.02 O
ATOM 18 O TRP A 3 38.607 32.133 -18.273 1.00 69.13 O
ATOM 29 CA ILE A 4 38.644 33.633 -15.907 1.00 72.47 C
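A possible two-pass awk sketch in the spirit of the other answers here. It uses field 2 as the key to match the toy example (switch $2 to $4 for the real-life data) and assumes, as stated in the question, that the file1 block occurs exactly once in file2:

```shell
# Pass 1: record file1's key sequence. Pass 2: buffer file2 lines while
# they match that sequence in order; print the buffer once it is complete.
awk '
    NR == FNR { want[++n] = $2; next }
    $2 == want[i+1] {
        buf[++i] = $0
        if (i == n) { for (j = 1; j <= n; j++) print buf[j]; exit }
        next
    }
    { i = 0 }                                 # mismatch: restart the search
' file1 file2
```

Because duplicate keys only advance the match while they agree with file1's order, this does not need unique identifiers; one caveat is that a partial match aborted mid-block is not re-scanned, which is safe only under the one-block assumption above.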

How do I line number a set of columns taking into account the strings values of the first column (UNIX shell)

Can someone help me? I would like to number a tabulated file in UNIX depending on its columns. However, the last column of some rows contains the same letters, in the same number, but in a different order; such rows must be considered the same if the preceding columns also match. In summary, the input is something like
rs758613821 574290 insertion_inframe P 285 AAAP
rs758613821 574290 insertion_inframe P 285 APAA
rs758613821 574290 insertion_inframe P 285 APLA
rs1367252071 574290 deletion_inframe CADDL 134 F
rs538 3246 frameshift_variant F 97 FGLYP
rs538 3246 frameshift_variant F 97 PYFLG
And the output should be
1 rs758613821 574290 insertion_inframe P 285 AAAP
1 rs758613821 574290 insertion_inframe P 285 APAA
2 rs758613821 574290 insertion_inframe P 285 APLA
3 rs1367252071 574290 deletion_inframe CADDL 134 F
4 rs538 3246 frameshift_variant F 97 FGLYP
4 rs538 3246 frameshift_variant F 97 PYFLG
and so on...
So far I have tried the following code:
awk 'BEGIN { FS = OFS = "\t" }
function intern(sym) {
    if (sym in table)
        return table[sym]
    return table[sym] = ++counter
}
{ print intern($1"\t"$2"\t"$3"\t"$4"\t"$5"\t"$6), $0 }' "input" > "output"
Nevertheless, this does not solve the problem of the last column: rows whose last column has the same letters and length, though in a different order, should get the same number. Is it possible to do this in a UNIX environment? I think maybe the substr function or something similar would help, but I am not sure what the proper code would be. Thanks in advance for the support and help!
Not sure this is what you want to do but give it a try
$ awk 'function canon(f) {
    n = split(f, a, "")         # split into individual characters (GNU awk)
    asort(a)                    # sort them (GNU awk)
    c = ""
    for (i = 1; i <= n; i++) c = c a[i]
    return c
}
{ key = canon($NF) }
!(key in keys) { keys[key] = ++ctr }
{ print keys[key], $0 }' file
1 rs758613821 574290 insertion_inframe P 285 AAAP
1 rs758613821 574290 insertion_inframe P 285 APAA
2 rs758613821 574290 insertion_inframe P 285 APLA
3 rs1367252071 574290 deletion_inframe CADDL 134 F
4 rs538 3246 frameshift_variant F 97 FGLYP
4 rs538 3246 frameshift_variant F 97 PYFLG
Convert the last field into canonical form and count the unique instances.
To use the full record as the key, do this instead:
...
{ line = $0
  $NF = canon($NF)
  key = $0 }
!(key in keys) { keys[key] = ++ctr }
{ print keys[key], line }' file
Copy the line, replace the last field with its canonical form, use the updated line as the key, count unique instances, and print the count plus the original line.
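Note that asort() is available in GNU awk only. If gawk is not an option, the character sort can be done by hand; a portable sketch of the same idea, replacing asort() with an explicit insertion sort:

```shell
# Canonicalise the last field by sorting its characters, then number
# the unique canonical values. Works in any POSIX awk (no asort needed).
awk 'function canon(f,   a, n, i, j, t, c) {
    n = length(f)
    for (i = 1; i <= n; i++) a[i] = substr(f, i, 1)
    for (i = 2; i <= n; i++) {                # insertion sort on characters
        t = a[i]
        for (j = i - 1; j >= 1 && a[j] > t; j--) a[j+1] = a[j]
        a[j+1] = t
    }
    c = ""
    for (i = 1; i <= n; i++) c = c a[i]
    return c
}
{ key = canon($NF) }
!(key in keys) { keys[key] = ++ctr }
{ print keys[key], $0 }' file
```

The extra names after f in the function header are the usual awk idiom for declaring local variables.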

conditional replacement in a file based on a column

I have a file with several columns that looks like this:
MARKER EA NEA N_x EA_y NEA_y N_y
rs1000000 G A 231410.0 G A 118230.0
rs10000010 T C 322079.0 C T 118230.0
rs10000017 C T 233146.0 C T 118230.0
rs10000023 G T 233860.0 T G 118230.0
rs10000027 C G 72852.4 C G 118230.0
rs10000029 T C 179950.0 NA NA NA
rs1000002 C T 233932.0 C T 118230.0
I want to replace values in columns EA and NEA with values from EA_y and NEA_y, but if EA_y and NEA_y are NA then I want to keep values in EA and NEA.
I can do it in R using ifelse, but I would like to learn how to do it with awk or similar.
Note: the file has approximately 3 million rows
Using awk you can do this easily:
awk '$5 != "NA" && $6 != "NA" {$2=$5; $3=$6} 1' file | column -t
MARKER EA_y NEA_y N_x EA_y NEA_y N_y
rs1000000 G A 231410.0 G A 118230.0
rs10000010 T C 322079.0 T C 118230.0
rs10000017 C T 233146.0 C T 118230.0
rs10000023 G T 233860.0 G T 118230.0
rs10000027 C G 72852.4 C G 118230.0
rs10000029 T C 179950.0 NA NA NA
rs1000002 C T 233932.0 C T 118230.0
I used column -t for tabular formatting of output.
Since fields 5, 6, 7 are always set to "NA" at the same time, you can use:
awk -v OFS="\t" 'NR>1&&$7!="NA"{$2=$5;$3=$6}1' file
If you want to process several files, avoid looping over the output of the ls command; it is better to use find, which gives you more control over what the paths look like.
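For example, the same replacement can be driven by find over a whole tree (a sketch; the *.txt pattern is an assumption, and the results are simply concatenated on stdout):

```shell
# Apply the NA-aware replacement to every .txt file found under the
# current directory; awk receives the files in batches via -exec ... +.
find . -type f -name '*.txt' -exec \
    awk '$5 != "NA" && $6 != "NA" { $2 = $5; $3 = $6 } 1' {} +
```

Redirect the output, or add a FILENAME-based print inside the awk program, if per-file results are needed.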

Use awk to print first column, second column and one column every four after that

I have a text file and I want to extract the first column, the second column, and one column every four after that. Also, I want to get rid of row #2. How can I do that using awk or similar?
An example:
I have a text file as such:
A B C D E F G H I J .. N (header 1)
A B C D E F G H I J .. N (header 2)
A B C D E F G H I J .. N (row 1)
A B C D E F G H I J .. N (row 2)
A B C D E F G H I J .. N (row n)
I'm trying to get it as
A B F J .. N (header 1)
A B F J .. N (row 1)
A B F J .. N (row 2)
A B F J .. N (row n)
Thanks.
P.s. I've tried playing with the solutions mentioned in How to print every 4th column up to nth column and from (n+1)th column to last using awk? but the solution doesn't work for me
$ awk 'NR!=2{out=$1; for (i=2;i<=NF;i+=4) out = out OFS $i; print out}' file
A B F J 1)
A B F J 1)
A B F J 2)
A B F J n)
The above output is messy because of the ...s and comments in your sample input, which make it untestable. Always post actual, testable input/output, not a description or other abstract representation of it. And don't repeat the same data on every line: that makes it hard to map the output fields to the input, and so harder to understand your requirements. This would have been a more useful example:
$ cat file
101 102 103 104 105 106 107 108 109 110 111
201 202 203 204 205 206 207 208 209 210 211
301 302 303 304 305 306 307 308 309 310 311
401 402 403 404 405 406 407 408 409 410 411
$ awk 'NR!=2{out=$1; for (i=2;i<=NF;i+=4) out = out OFS $i; print out}' file
101 102 106 110
301 302 306 310
401 402 406 410

BioPython: Residues size differ from position

I'm currently working with a data set of PDBs and I'm interested in the sizes of the residues (the number of atoms per residue). I realized that the number of atoms, len(residue.child_list), differs between residues in different proteins even though they are the same residue. For example: residue 'LEU' has 8 atoms in one protein but 19 in another!
My guess is an error in the PDB or in the PDBParser(); nevertheless, the differences are huge!
For example in the case of the molecule 3OQ2:
r = model['B'][88]  # residue at chain B, position 88
r1 = model['B'][15]  # residue at chain B, position 15
In [287]: r.resname
Out[287]: 'VAL'
In [288]: r1.resname
Out[288]: 'VAL'
But
In [274]: len(r.child_list)
Out[274]: 16
In [276]: len(r1.child_list)
Out[276]: 7
So even within a single molecule there's difference in the number of atoms. I'd like to know if this is normal biologically, or if there's something wrong. Thank you.
I just looked at the PDB provided, and the difference is simply that the first VAL (88) has atomic coordinates for hydrogens (H) whereas the other VAL (15) does not.
ATOM 2962 N VAL B 88 33.193 42.159 23.916 1.00 11.01 N
ANISOU 2962 N VAL B 88 1516 955 1712 56 -227 -128 N
ATOM 2963 CA VAL B 88 33.755 43.168 24.800 1.00 12.28 C
ANISOU 2963 CA VAL B 88 1782 1585 1298 356 -14 286 C
ATOM 2964 C VAL B 88 35.255 43.284 24.530 1.00 12.91 C
ANISOU 2964 C VAL B 88 1661 1672 1573 -249 0 -435 C
ATOM 2965 O VAL B 88 35.961 42.283 24.451 1.00 14.78 O
ANISOU 2965 O VAL B 88 1897 1264 2453 30 -293 21 O
ATOM 2966 CB VAL B 88 33.524 42.841 26.286 1.00 12.81 C
ANISOU 2966 CB VAL B 88 1768 1352 1747 -50 -221 -304 C
ATOM 2967 CG1 VAL B 88 34.166 43.892 27.160 1.00 16.03 C
ANISOU 2967 CG1 VAL B 88 2292 1980 1819 -147 73 -8 C
ATOM 2968 CG2 VAL B 88 32.020 42.727 26.586 1.00 17.67 C
ANISOU 2968 CG2 VAL B 88 2210 2728 1774 -363 -401 83 C
ATOM 2969 H VAL B 88 33.642 41.425 23.899 1.00 13.21 H
ATOM 2970 HA VAL B 88 33.340 44.035 24.608 1.00 14.73 H
ATOM 2971 HB VAL B 88 33.941 41.979 26.492 1.00 15.37 H
ATOM 2972 HG11 VAL B 88 34.011 43.670 28.081 1.00 19.23 H
ATOM 2973 HG12 VAL B 88 35.110 43.912 26.983 1.00 19.23 H
ATOM 2974 HG13 VAL B 88 33.777 44.746 26.959 1.00 19.23 H
ATOM 2975 HG21 VAL B 88 31.902 42.523 27.516 1.00 21.20 H
ATOM 2976 HG22 VAL B 88 31.596 43.562 26.377 1.00 21.20 H
ATOM 2977 HG23 VAL B 88 31.647 42.026 26.047 1.00 21.20 H
I would filter out these atoms for every residue in the analysis; then you should almost always get the same number of atoms. As someone mentioned, the other thing you have to consider is what Biopython calls 'disordered residues': residues with more than one alternative location for their atoms in the crystal lattice (an 'altloc'). Sorting this out should solve your problem.
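If you would rather clean the file before parsing it, the hydrogens can also be stripped on the command line. A sketch assuming the element symbol is the last whitespace-separated field of each ATOM record, as in the excerpt above (in strict fixed-width PDB files the element is more reliably taken from character columns 77-78):

```shell
# Drop ATOM records whose element symbol (last field) is H;
# keep everything else (ANISOU, HETATM, headers) untouched.
awk '!($1 == "ATOM" && $NF == "H")' input.pdb > input_noH.pdb
```

After this, len(residue.child_list) should agree between the two VAL residues.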
Let me know if you need help with filtering out these atoms.
Fábio
