conditional replacement in a file based on a column - bash

I have a file with several columns that looks like this:
MARKER EA NEA N_x EA_y NEA_y N_y
rs1000000 G A 231410.0 G A 118230.0
rs10000010 T C 322079.0 C T 118230.0
rs10000017 C T 233146.0 C T 118230.0
rs10000023 G T 233860.0 T G 118230.0
rs10000027 C G 72852.4 C G 118230.0
rs10000029 T C 179950.0 NA NA NA
rs1000002 C T 233932.0 C T 118230.0
I want to replace the values in columns EA and NEA with the values from EA_y and NEA_y, but if EA_y and NEA_y are NA I want to keep the original EA and NEA values.
I can do it in R with ifelse, but I would like to learn how to do it with awk or something similar.
Note: the file has approximately 3 million rows.

Using awk you can do this easily:
awk '$5 != "NA" && $6 != "NA" {$2=$5; $3=$6} 1' file | column -t
MARKER EA_y NEA_y N_x EA_y NEA_y N_y
rs1000000 G A 231410.0 G A 118230.0
rs10000010 C T 322079.0 C T 118230.0
rs10000017 C T 233146.0 C T 118230.0
rs10000023 T G 233860.0 T G 118230.0
rs10000027 C G 72852.4 C G 118230.0
rs10000029 T C 179950.0 NA NA NA
rs1000002 C T 233932.0 C T 118230.0
I used column -t for tabular formatting of the output. Note that the header line also matches the condition, so EA and NEA are overwritten with EA_y and NEA_y there; the next answer shows how to skip the header.

Since fields 5, 6 and 7 are always set to "NA" at the same time, you can test field 7 alone (NR>1 skips the header line):
awk -v OFS="\t" 'NR>1&&$7!="NA"{$2=$5;$3=$6}1' file
If you want to process several files, avoid looping over the output of the ls command; it is better to use find, which gives you more control over how the paths look.
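For example, a minimal sketch, assuming the files are named *.txt and live under a hypothetical data/ directory:
find data/ -type f -name '*.txt' \
    -exec awk -v OFS="\t" 'NR>1 && $7!="NA"{$2=$5;$3=$6}1' {} \;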

Related

Random sampling in bash

I have an ensemble with a large number of samples in it (say 100 different samples at different times in one ensemble). My ensemble looks like this:
20
-166.26604715
C -6.8775736572 0.7377700983 -1.2173950464
C -6.3769524449 2.0225374370 -1.4858792908
C -5.9530432940 -0.2309614983 -0.7933107594
C 0.924046 0.593909 0.306394
C 0.578941 0.740133 0.786926
C 0.43637 0.332195 0.77888
C 0.100887 0.785084 0.835159
C 0.761209 0.496077 0.426298
C 0.945798 0.821802 0.709269
C 0.157828 0.119752 0.909685
C 0.868084 0.449256 0.705432
C 0.399686 0.645049 0.696163
C 0.300211 0.591664 0.956569
C 0.156318 0.796877 0.132388
C 0.548236 0.984306 0.823073
C 0.422985 0.964365 0.793915
C 0.173531 0.568816 0.93252
C 0.205224 0.0199054 0.84918
C 0.726009 0.758101 0.197576
C 0.924046 0.593909 0.306394
20
-166.45321715
C -6.8775736572 0.7377700983 -1.2173950464
C -6.3769524449 2.0225374370 -1.4858792908
C -5.9530432940 -0.2309614983 -0.7933107594
C 0.924046 0.593909 0.306394
C 0.578941 0.740133 0.786926
C 0.43637 0.332195 0.77888
C 0.100887 0.785084 0.835159
C 0.761209 0.496077 0.426298
C 0.945798 0.821802 0.709269
C 0.157828 0.119752 0.909685
C 0.868084 0.449256 0.705432
C 0.399686 0.645049 0.696163
C 0.300211 0.591664 0.956569
C 0.156318 0.796877 0.132388
C 0.548236 0.984306 0.823073
C 0.422985 0.964365 0.793915
C 0.173531 0.568816 0.93252
C 0.205224 0.0199054 0.84918
C 0.726009 0.758101 0.197576
C 0.924046 0.593909 0.306394
20
-166.41234567
..
..
continues
The first line is the number of atoms, so \s+20 is my repeating pattern and it repeats every 22 lines; the second line is the energy, and from the third line on come the spatial coordinates (x, y, z). I want to randomly sample out, for example, just 4 samples (out of 100 in this example, so 4*22 = 88 lines). Each of the sampled frames should keep the same data structure as shown above (2 header lines + 20 coordinate lines). I could use random number generators in Python, but because I am using bash for the rest of the code I would like to see if there is a way to do it in bash. Thanks in advance!
Your sample file is not really suitable for testing, so I created this one and changed 106 to 20 to keep it small:
20
-166.26604715
C -6.8775736572 0.7377700983 -1.2173950464
C -6.3769524449 2.0225374370 -1.4858792908
C -5.9530432940 -0.2309614983 -0.7933107594
C 0.924046 0.593909 0.306394
C 0.578941 0.740133 0.786926
C 0.43637 0.332195 0.77888
C 0.100887 0.785084 0.835159
C 0.761209 0.496077 0.426298
C 0.945798 0.821802 0.709269
C 0.157828 0.119752 0.909685
C 0.868084 0.449256 0.705432
C 0.399686 0.645049 0.696163
C 0.300211 0.591664 0.956569
C 0.156318 0.796877 0.132388
C 0.548236 0.984306 0.823073
C 0.422985 0.964365 0.793915
C 0.173531 0.568816 0.93252
C 0.205224 0.0199054 0.84918
C 0.726009 0.758101 0.197576
C 0.924046 0.593909 0.306394
So, the goal is to create a random sample of size N from the records on lines 3 to 22 (2 header lines + 20 records).
$ awk -v s=4 'NR==1 {n=$1}
NR<3;
NR>2 && NR<=n+2 {print | "shuf -n"s}' file
20
-166.26604715
C 0.945798 0.821802 0.709269
C 0.548236 0.984306 0.823073
C 0.157828 0.119752 0.909685
C 0.422985 0.964365 0.793915
Here I picked a sample size of 4. The script reads the number of records from the first line, prints the first two lines, and samples the requested number of records from the rest.
Note that this is sampling without replacement, meaning the same record cannot be picked more than once; usually that is what is desired.
You may want to print the new number of records on top, but that's an easy change, left as an exercise (see the sketch below).
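For instance, one hedged way of doing that, replacing the original atom count with the sample size s, could be:
$ awk -v s=4 'NR==1 {n=$1; print s; next}
              NR==2;
              NR>2 && NR<=n+2 {print | "shuf -n" s}' file
The sampled lines go through the shuf pipe, so they are still printed after the two header lines, exactly as in the original script.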
UPDATE
For multiple data sets with the same structure (actually the number of records doesn't even have to be the same) you need these modifications:
$ awk -v s=4 'BEGIN {cmd="shuf -n"s; n=-2}
r==n+2 {n=$1; close(cmd)}
{r=(NR-1)%(n+2)+1}
r<=2;
r>2 && r<=n+2 {print | cmd }' file.3
20
-166.26604715
C 0.422985 0.964365 0.793915
C 0.205224 0.0199054 0.84918
C 0.399686 0.645049 0.696163
C 0.726009 0.758101 0.197576
20
-166.26604715
C 0.43637 0.332195 0.77888
C 0.761209 0.496077 0.426298
C -6.3769524449 2.0225374370 -1.4858792908
C 0.205224 0.0199054 0.84918
20
-166.26604715
C 0.156318 0.796877 0.132388
C 0.157828 0.119752 0.909685
C -6.8775736572 0.7377700983 -1.2173950464
C -5.9530432940 -0.2309614983 -0.7933107594
r is the relative position index within each data set, and some special handling is required for line 1 (hence n=-2). The command also needs to be closed after each data set to flush its buffers. Otherwise the logic is essentially the same, with NR replaced by r.
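If, instead, the goal is to sample whole frames (2 header lines + 20 atom lines each) rather than atoms within one frame, here is a minimal bash sketch, assuming GNU shuf, a constant frame length of 22 lines and a hypothetical file name ensemble.xyz:
frames=$(( $(wc -l < ensemble.xyz) / 22 ))   # number of 22-line frames
for f in $(shuf -i 1-"$frames" -n 4); do     # pick 4 distinct frame indices
    start=$(( (f - 1) * 22 + 1 ))
    sed -n "${start},$(( start + 21 ))p" ensemble.xyz
done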

Processing data swapped over files BASH

First, I would like to apologize for my extremely basic knowledge of coding; I hope I will be able to express my issue correctly. Do not hesitate to ask for further clarification or anything else...
I'm having trouble post-processing some data...
My goal is to recombine data that were swapped.
EDIT: here is a .rar archive containing my test example that works and the one that I am trying to make work... (do not be put off by the time it takes to process the data)
https://drive.google.com/file/d/1AEPUc8haT5_Z3LR3jnZZlpyfxhdDwwo6/view?usp=sharing
EDIT 2: Here is what I expect on paper; it's my TestReorder3OK folder in my rar archive (screenshot of the expected output omitted).
EDIT 3: MINIMAL COMPLETE EXAMPLE
Script:
#!/bin/bash
# Define the number of replicas
NP=3
NP1=$[NP-1]
rm torder*
for repl in `seq 0 $NP1`
do
    echo $repl
    # paste column 2 of the .lammps file into a file rep_0, then, on the next loop iteration, column 3 into rep_1, etc.
    awk -v rep=$repl '{r2=rep+2;print $r2}' < log.lammps > rep_$repl
    i=0
    j=0
    # nested loop over the values just extracted
    for a in `cat rep_$repl`
    do
        i=$[i+1]
        j=$[j+3]
        head -$i screen.$repl.temp | tail -1 >> torder.$a
        head -$j ccccd2_H_${repl}_col.bak2 | tail -3 >> ccccd2_H_${a}_temp_col.bak2
    done
done
log.lammps file
1 0 1 2
2 1 0 2
3 1 2 0
Starting at column 2, this file contains the numbers associated with the inputs below. Here is an expanded explanation:
Column 2 has three values: 0, 1 and 1; the 0 is associated with the first three lines of the file ccccd2_H_0_col.bak2, the next three lines with the first 1 and the last three lines with the second 1.
Column 3 also has three values: 1, 0 and 2; the 1 is associated with the first three lines of the file ccccd2_H_1_col.bak2, the next three lines with the 0 and the last three lines with the 2.
Same story for column 4.
Now, what I want is that every set of three lines associated with the value 0 goes into a single file, every set of three lines associated with the value 1 goes into another file, and the sets of three lines associated with the value 2 into a last file.
Inputs:
ccccd2_H_0_col.bak2
blank line
N a b c
C d e f
N g h i
C j k l
N m n o
C p q r
ccccd2_H_1_col.bak2
blank line
N s t u
C v w x
N y z a
C b c d
N e f g
C h i j
ccccd2_H_2_col.bak2
blank line
N k l m
C n o p
N q r s
C t u v
N w x y
C z a b
Outputs: these are the desired outputs, and the ones that I actually get for the simple test files shown here
ccccd2_H_0_temp_col
blank line
N a b c
C d e f
N y z a
C b c d
N w x y
C z a b
ccccd2_H_1_temp_col
blank line
N g h i
C j k l
N m n o
C p q r
N s t u
C v w x
ccccd2_H_2_temp_col
blank line
N e f g
C h i j
N k l m
C n o p
N q r s
C t u v
This works fine on small test files (as shown here), but not on my real system. There, the log.lammps file contains 14 rows and 10,001 lines, and my input files contain 121,121 lines (so 10,001 * blocks of 121 lines). The script creates files 10 times larger, holding more data than they should.
Can you enlighten me about my issue? I think it is linked to the difference in the number of lines between my files containing a single row and the files containing the Cartesian coordinates, but I really don't understand the link, nor how to solve it...
Thank you in advance...
I think I understand what you're trying to do now, and this GNU awk script (it relies on ARGIND, ENDFILE and gawk's built-in management of many open files) will do it:
$ cat ../tst.awk
ARGIND == 1 {                        # first file on the command line: log.lammps
    for (inFileNr=2; inFileNr<=NF; inFileNr++) {
        outFileNrs[inFileNr,NR] = $inFileNr   # output file number for block NR of input file inFileNr
    }
    next
}
ENDFILE { RS = "" }                  # after log.lammps, read the data files in paragraph mode
{ print ORS $0 > ("ccccd2_H_" outFileNrs[ARGIND,FNR] "_temp_col") }
Look:
INPUT:
$ ls
ccccd2_H_0_col.bak2 ccccd2_H_1_col.bak2 ccccd2_H_2_col.bak2 log.lammps
$ cat log.lammps
1 0 1 2
2 1 0 2
3 1 2 0
$ paste ccccd2_H_0_col.bak2 ccccd2_H_1_col.bak2 ccccd2_H_2_col.bak2 | sed 's/\t/\t\t/g'
N a b c N s t u N k l m
C d e f C v w x C n o p
N g h i N y z a N q r s
C j k l C b c d C t u v
N m n o N e f g N w x y
C p q r C h i j C z a b
SCRIPT EXECUTION:
$ awk -f ../tst.awk log.lammps ccccd2_H_0_col.bak2 ccccd2_H_1_col.bak2 ccccd2_H_2_col.bak2
OUTPUT:
$ ls
ccccd2_H_0_col.bak2 ccccd2_H_1_col.bak2 ccccd2_H_2_col.bak2 log.lammps
ccccd2_H_0_temp_col ccccd2_H_1_temp_col ccccd2_H_2_temp_col
$ paste ccccd2_H_0_temp_col ccccd2_H_1_temp_col ccccd2_H_2_temp_col | sed 's/\t/\t\t/g'
N a b c N g h i N e f g
C d e f C j k l C h i j
N y z a N m n o N k l m
C b c d C p q r C n o p
N w x y N s t u N q r s
C z a b C v w x C t u v
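As a usage note: the script needs GNU awk. Assuming it is installed as gawk, the run above can also be written with a glob, which here expands in the right order because the replica numbers are single digits:
$ gawk -f ../tst.awk log.lammps ccccd2_H_*_col.bak2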

Concatenate last columns from multiple files of one type

I am trying to cat the last 2 columns of multiple text files side by side. The files sit in a directory containing various other file types. All files have more than 2 columns, but there is no guarantee that all files have the same number of columns.
For example, if I have:
file1.txt
1 a b J H
2 b c E E
3 c d L L
4 d e L L
5 e f O O
file2.txt
1 a b M B
2 b c O E
3 c d O E
I want:
J H M B
E E O E
L L O E
L L
O O
The closest I've got is:
awk '{print $(NF-1), "\t", $NF}' *.txt
Which is almost what I want.
For the concatenation, I was thinking of something along the lines of:
pr -m -t one.txt two.txt
awk 'NR==FNR{a[NR]=$(NF-1)" "$NF;next}{print $(NF-1),$NF,a[FNR]}' file2.txt file1.txt
Tested:
> cat temp2
1 a b M B
2 b c O E
3 c d O E
> cat temp1
1 a b J H
2 b c E E
3 c d L L
4 d e L L
5 e f O O
> awk 'NR==FNR{a[NR]=$(NF-1)" "$NF;next}{print $(NF-1),$NF,a[FNR]}' temp2 temp1
J H M B
E E O E
L L O E
L L
O O
>
join -a1 -a2 one.txt two.txt | cut -d' ' -f4,5,8,9
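If there are more than two files, a hedged alternative, assuming whitespace-separated columns and that it is acceptable to create intermediate *.last2 files in the directory, is to extract the last two columns of each file first and then merge them with paste:
for f in *.txt; do
    awk '{print $(NF-1) "\t" $NF}' "$f" > "$f.last2"
done
paste *.last2
paste keeps going until the longest file is exhausted, so shorter files simply contribute empty fields on the trailing lines.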

Removing certain columns from a text file [duplicate]

This question already has answers here:
Deleting columns from a file with awk or from command line on linux
(4 answers)
Closed 8 years ago.
I have a text file that looks like this:
A B C A B C A B C A B
G T C A G T C A G T C
A B C A B C A B C A B
A B C A B C A B C A B
A D E A B D E A B D E
A B C A B C A B C A B
C B D G C B D G C B D
Is there a way to extract only certain columns and leave the other columns intact?
For example removing only columns 2 and 5:
A C A C A B C A B
G C A T C A G T C
A C A C A B C A B
A C A C A B C A B
A E A D E A B D E
A C A C A B C A B
C D G B D G C B D
Thanks in advance.
UPDATE:
Found this answer using awk, but it removes a whole contiguous "block" of columns and I only want to remove specific ones.
awk for removing columns 3 to 5:
awk -F 'FS' 'BEGIN{FS="\t"}{for (i=1; i<=NF-1; i++) if(i<3 || i>5) {printf $i FS};{print $NF}}' input.txt
In your case you could do:
cut -d ' ' --complement -s -f2,5 your_file
where ' ' is the delimiter (in your case, a space). Note that --complement is a GNU cut extension.
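If GNU cut is not available, a minimal awk sketch that drops an arbitrary set of columns (here 2 and 5, whitespace-separated) might look like:
awk 'BEGIN { del[2]=1; del[5]=1 }        # columns to drop
     {
         out = ""
         for (i = 1; i <= NF; i++)
             if (!(i in del))
                 out = (out == "" ? $i : out OFS $i)
         print out
     }' input.txt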

Check if a string exists in non-consecutive lines in a given column

I have files with the following format:
ATOM 8962 CA VAL W 8 8.647 81.467 25.656 1.00115.78 C
ATOM 8963 C VAL W 8 10.053 80.963 25.506 1.00114.60 C
ATOM 8964 O VAL W 8 10.636 80.422 26.442 1.00114.53 O
ATOM 8965 CB VAL W 8 7.643 80.389 25.325 1.00115.67 C
ATOM 8966 CG1 VAL W 8 6.476 80.508 26.249 1.00115.54 C
ATOM 8967 CG2 VAL W 8 7.174 80.526 23.886 1.00115.26 C
ATOM 4440 O TYR S 89 4.530 166.005 -14.543 1.00 95.76 O
ATOM 4441 CB TYR S 89 2.847 168.812 -13.864 1.00 96.31 C
ATOM 4442 CG TYR S 89 3.887 169.413 -14.756 1.00 98.43 C
ATOM 4443 CD1 TYR S 89 3.515 170.073 -15.932 1.00100.05 C
ATOM 4444 CD2 TYR S 89 5.251 169.308 -14.451 1.00100.50 C
ATOM 4445 CE1 TYR S 89 4.464 170.642 -16.779 1.00100.70 C
ATOM 4446 CE2 TYR S 89 6.219 169.868 -15.298 1.00101.40 C
ATOM 4447 CZ TYR S 89 5.811 170.535 -16.464 1.00100.46 C
ATOM 4448 OH TYR S 89 6.736 171.094 -17.321 1.00100.20 O
ATOM 4449 N LEU S 90 3.944 166.393 -12.414 1.00 94.95 N
ATOM 4450 CA LEU S 90 5.079 165.622 -11.914 1.00 94.44 C
ATOM 5151 N LEU W 8 -66.068 209.785 -11.037 1.00117.44 N
ATOM 5152 CA LEU W 8 -64.800 210.035 -10.384 1.00116.52 C
ATOM 5153 C LEU W 8 -64.177 208.641 -10.198 1.00116.71 C
ATOM 5154 O LEU W 8 -64.513 207.944 -9.241 1.00116.99 O
ATOM 5155 CB LEU W 8 -65.086 210.682 -9.033 1.00115.76 C
ATOM 5156 CG LEU W 8 -64.274 211.829 -8.478 1.00113.89 C
ATOM 5157 CD1 LEU W 8 -64.528 211.857 -7.006 1.00111.94 C
ATOM 5158 CD2 LEU W 8 -62.828 211.612 -8.739 1.00112.96 C
In principle, column 5 (W in this case; it represents the chain ID) should be identical only in consecutive chunks. However, in files with too many chains there are not enough letters in the alphabet to assign a unique ID per chain, so the same ID may be reused.
I would like to be able to check whether or not this is the case. In other words, I would like to know if a given chain ID (A-Z, always in the 5th column) is present in non-consecutive chunks. I do not mind if it changes from W to S; I want to know if there are two chunks sharing the same chain ID, in this case whether W or S reappears at some point. In fact, this is only a problem if they also share the first and the 6th columns, but I do not want to complicate things too much.
I do not want to print the lines, just to know the name of the file in which the issue occurs and the chain ID (in this case W), so that I can fix it. In fact, I already know how to solve the problem, but I need to identify the problematic files so that I can focus on those and not touch files that are already sane.
SOLUTION (thanks to all for your help, and especially to sehe):
for pdb in *.pdb ; do
    hit=$(awk '{ if ( $1 == "ATOM" ) { print $0 } }' "$pdb" | cut -c22-23 | uniq | sort | uniq -dc)
    [ "$hit" ] && echo "$pdb = $hit"
done
For this particular sample (saved as file t):
cut -c22-23 t | uniq | sort | uniq -dc
will output
2 W
(character column 22 contains 2 separate runs of the letter 'W')
untested:
awk '
    seen[$5] && $5 != current {
        print "found non-consecutive chain on line " NR
        exit
    }
    { current = $5; seen[$5] = 1 }
' filename
Here you go, this awk script is tested and handles any chain ID, not just 'W':
{
    if (ln[$5] && ln[$5] + 1 != NR) {
        print "dup " $5 " at line " NR;
    }
    ln[$5] = NR;
}
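To also report the file name, which is what the question ultimately asks for, a hedged multi-file variant of the same idea (state is reset at the start of each file with the portable split("", arr) idiom) could be:
awk '
    FNR == 1 { split("", ln) }                 # clear per-file state
    ln[$5] && ln[$5] + 1 != FNR {
        print FILENAME ": chain " $5 " reappears at line " FNR
    }
    { ln[$5] = FNR }
' *.pdb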
