Replace blank fields with zeros in AWK - bash

I wish to replace blank fields with zeros using awk, but when I use sed 's/ /0/' file I end up replacing all white space, whereas I only want to fill in the missing data. Running awk '{print NF}' file returns different field counts (e.g. 9, 4) because of the empty fields.
input
590073920 20120523 0 M $480746499 CM C 500081532 SP
501298333 0 M *BB
501666604 0 M *OO
90007162 7 M +178852
90007568 3 M +189182
output
590073920 20120523 0 M $480746499 CM C 500081532 SP
501298333 0 0 M *BB 0 0 0 0
501666604 0 0 M *OO 0 0 0 0
90007162 0 7 M +178852 0 0 0 0
90007568 0 3 M +189182 0 0 0 0

Using the GNU awk FIELDWIDTHS feature for fixed-width processing:
$ awk '{for(i=1;i<=NF;i++)if($i~/^ *$/)$i=0}1' FIELDWIDTHS="11 9 5 2 16 3 2 11 2" file | column -t
590073920 20120523 0 M $480746499 CM C 500081532 SP
501298333 0 0 M *BB 0 0 0 0
501666604 0 0 M *OO 0 0 0 0
90007162 0 7 M +178852 0 0 0 0
90007568 0 3 M +189182 0 0 0 0
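If your awk lacks FIELDWIDTHS (a GNU awk extension), a similar effect can be sketched with substr(). This is only a sketch under the same assumptions as above: the widths are copied from the gawk answer and the single-space output separator is illustrative.
awk '{
    n = split("11 9 5 2 16 3 2 11 2", w, " ")   # same widths as the FIELDWIDTHS answer above
    pos = 1
    out = ""
    for (i = 1; i <= n; i++) {
        f = substr($0, pos, w[i])
        pos += w[i]
        gsub(/^ +| +$/, "", f)                  # trim the blank padding
        out = out (f == "" ? 0 : f) (i < n ? " " : "")
    }
    print out
}' file | column -t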

Related

Splitting a large file containing multiple molecules

I have a file that contains 10,000 molecules. Each molecule ends with the keyword $$$$. I want to split the main file into 10,000 separate files so that each file contains only one molecule. Each molecule has a different number of lines. I have tried sed on test_file.txt as:
sed '/$$$$/q' test_file.txt > out.txt
input:
$ cat test_file.txt
ashu
vishu
jyoti
$$$$
Jatin
Vishal
Shivani
$$$$
output:
$ cat out.txt
ashu
vishu
jyoti
$$$$
I can loop through the whole main file to create 10,000 separate files, but how do I then delete from the main file the molecule that was just moved to a new file? Or please suggest a better method, which I believe there is. Thanks.
Edit1:
$ cat short_library.sdf
untitled.cdx
csChFnd80/09142214492D
31 34 0 0 0 0 0 0 0 0999 V2000
8.4660 6.2927 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.4660 4.8927 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.2124 2.0951 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.4249 2.7951 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0 0
2 3 1 0 0 0 0
30 31 1 0 0 0 0
31 26 1 0 0 0 0
M END
> <Mol_ID> (1)
1
> <Formula> (1)
C22H24ClFN4O3
> <URL> (1)
http://www.selleckchem.com/products/Gefitinib.html
$$$$
Dimesna.cdx
csChFnd80/09142214492D
16 13 0 0 0 0 0 0 0 0999 V2000
2.4249 1.4000 0.0000 S 0 0 0 0 0 0 0 0 0 0 0 0
3.6415 2.1024 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.8540 1.4024 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.4904 1.7512 0.0000 Na 0 3 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 1 0 0 0 0
1 14 2 0 0 0 0
M END
> <Mol_ID> (2)
2
> <Formula> (2)
C4H8Na2O6S4
> <URL> (2)
http://www.selleckchem.com/products/Dimesna.html
$$$$
Here's a simple solution with standard awk:
LANG=C awk '
  { mol = (mol == "" ? $0 : mol "\n" $0) }   # accumulate the current molecule
  /^\$\$\$\$\r?$/ {                          # delimiter line: write the molecule out
    outFile = "molecule" ++fn ".sdf"
    print mol > outFile
    close(outFile)
    mol = ""
  }
' input.sdf
If you have csplit from GNU coreutils, the following splits after each line matching $$$$ (-s suppresses the size output, -z removes empty output files, -n5 uses five-digit suffixes, -f sets the output file prefix, and '{*}' repeats the pattern as many times as possible):
csplit -s -z -n5 -fmolecule test_file.txt '/^$$$$$/+1' '{*}'
This will do the whole job directly in bash:
molsplit.sh
#!/bin/bash
filenum=0
end=1
while read -r line; do
    if [[ $end -eq 1 ]]; then
        end=0
        filenum=$((filenum + 1))
        exec 3>"molecule${filenum}.sdf"
    fi
    echo "$line" 1>&3
    if [[ "$line" = '$$$$' ]]; then
        end=1
        exec 3>&-
    fi
done
Input is read from stdin, though that would be easy enough to change. Something like this:
./molsplit.sh < test_file.txt
ADDENDUM
From subsequent commentary, it seems that the input file being processed has Windows line endings, whereas the processing environment's native line ending format is UNIX-style. In that case, if the line-termination style is to be preserved then we need to modify how the delimiters are recognized. For example, this variation on the above will recognize any line that starts with $$$$ as a molecule delimiter:
#!/bin/bash
filenum=0
end=1
while read -r line; do
    if [[ $end -eq 1 ]]; then
        end=0
        filenum=$((filenum + 1))
        exec 3>"molecule${filenum}.sdf"
    fi
    echo "$line" 1>&3
    case $line in
        '$$$$'*) end=1; exec 3>&- ;;
    esac
done
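Alternatively, if converting the input file is acceptable rather than preserving its line endings, the carriage returns can be stripped up front with standard tools (a sketch; the file names are illustrative):
tr -d '\r' < test_file.txt > unix_file.txt   # remove Windows-style CR characters
./molsplit.sh < unix_file.txt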
In the mawk solution below, the same statement that sets the current output file name also closes the previous one. close(_)^_ here is the same as close(_)^0, which ensures the file number always increments for the next file, even if the close() call returned an error.
If the output file naming scheme allows leading zeros, change that bit to close(_)^(_<_), which ALWAYS evaluates to 1, for any possible string or number, including all forms of zero, the empty string, infinities, and NaNs.
mawk2 'BEGIN { getline __<(_ = "/dev/null")
ORS = RS = "[$][$][$][$][\r]?"(FS = RS)
__*= gsub("[^$\n]+", __, ORS)
} NF {
print > (_ ="mol" (__+=close(_)^_) ".txt") }' test_file.txt
The getline from /dev/null in the BEGIN block neither sets $0/NF nor modifies NR/FNR, but its presence ensures that the first time close(_) is called it does not error out.
gcat -n mol12345.txt
1 Shivani
2 jyoti
3 Shivani
4 $$$$
It was reasonably speedy: from a 5.60 MB synthetic test file it created 187,710 files in 11.652 secs.

Replace values of one column based on other column conditions in shell

I have a tab-separated text file below. I want to match values in column 2 and replace the values in column 5. The condition is: if column 2 contains X or Y, column 5 should become 1, as in the result below.
1:935662:C:CA 1 0 935662 0
1:941119:A:G 2 0 941119 0
1:942934:G:C 3 0 942934 0
1:942951:C:T X 0 942951 0
1:943937:C:T X 0 943937 0
1:944858:A:G Y 0 944858 0
1:945010:C:A X 0 945010 0
1:946247:G:A 1 0 946247 0
result:
1:935662:C:CA 1 0 935662 0
1:941119:A:G 2 0 941119 0
1:942934:G:C 3 0 942934 0
1:942951:C:T X 0 942951 1
1:943937:C:T X 0 943937 1
1:944858:A:G Y 0 944858 1
1:945010:C:A X 0 945010 1
1:946247:G:A 1 0 946247 0
I tried awk -F'\t' '{ $5 = ($2 == X ? 1 : $2) } 1' OFS='\t' file.txt, but I am not sure how to match both X and Y in one step.
With awk:
awk 'BEGIN{FS=OFS="\t"} $2=="X" || $2=="Y"{$5="1"}1' file
Output:
1:935662:C:CA 1 0 935662 0
1:941119:A:G 2 0 941119 0
1:942934:G:C 3 0 942934 0
1:942951:C:T X 0 942951 1
1:943937:C:T X 0 943937 1
1:944858:A:G Y 0 944858 1
1:945010:C:A X 0 945010 1
1:946247:G:A 1 0 946247 0
See: 8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR
Assuming you want $5 to be zero (as opposed to remaining unchanged) if the condition is false. In awk a comparison such as $2 ~ /^[XY]$/ evaluates to 1 or 0, so its result can be assigned directly:
$ awk 'BEGIN{FS=OFS="\t"} {$5=($2 ~ /^[XY]$/)} 1' file
1:935662:C:CA 1 0 935662 0
1:941119:A:G 2 0 941119 0
1:942934:G:C 3 0 942934 0
1:942951:C:T X 0 942951 1
1:943937:C:T X 0 943937 1
1:944858:A:G Y 0 944858 1
1:945010:C:A X 0 945010 1
1:946247:G:A 1 0 946247 0

Common lines from 2 files based on 2 columns per file

I have two files:
file1:
1 imm_1_898835 0 908972 0 A
1 vh_1_1108138 0 1118275 T C
1 vh_1_1110294 0 1120431 A G
1 rs9729550 0 1135242 C A
file2:
1 exm1916089 0 865545 0 0
1 exm44 0 865584 0 G
1 exm46 0 865625 0 G
1 exm47 0 865628 A G
1 exm51 0 908972 0 G
1 exmF 0 1120431 C A
I want to obtain a file that is the overlap between file 1 and file 2 based on columns 1 and 4, printing the common values of columns 1 and 4 together with column 2 from both file1 and file2.
For example, I want:
1 908972 imm_1_898835 exm51
1 1120431 vh_1_1110294 exmF
Could you please try the following:
awk 'FNR==NR{a[$1,$4]=$2;next} (($1,$4) in a){print $1,$4,a[$1,$4],$2}' file1 file2
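For readability, the same logic can be written out with comments (a sketch; the behaviour is identical to the one-liner above):
awk '
    FNR == NR {                # first file: remember column 2, keyed by columns 1 and 4
        a[$1,$4] = $2
        next
    }
    ($1,$4) in a {             # second file: key also present in file1
        print $1, $4, a[$1,$4], $2
    }
' file1 file2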

Removing duplicate lines with different columns

I have a file which looks like follows:
ENSG00000197111:I12 0
ENSG00000197111:I12 1
ENSG00000197111:I13 0
ENSG00000197111:I18 0
ENSG00000197111:I2 0
ENSG00000197111:I3 0
ENSG00000197111:I4 0
ENSG00000197111:I5 0
ENSG00000197111:I5 1
I have some lines that are duplicated, but I cannot remove them with sort -u because the second column has different values (1 or 0). How do I remove such duplicates, keeping the lines whose second column is 1, so that the file becomes
ENSG00000197111:I12 1
ENSG00000197111:I13 0
ENSG00000197111:I18 0
ENSG00000197111:I2 0
ENSG00000197111:I3 0
ENSG00000197111:I4 0
ENSG00000197111:I5 1
You can use awk and the OR operator if the order isn't mandatory; once a key has seen a 1 in column 2, the stored value stays 1 (a commented version follows the output below):
awk '{d[$1]=d[$1] || $2}END{for(k in d) print k, d[k]}' file
You get:
ENSG00000197111:I2 0
ENSG00000197111:I3 0
ENSG00000197111:I4 0
ENSG00000197111:I5 1
ENSG00000197111:I12 1
ENSG00000197111:I13 0
ENSG00000197111:I18 0
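For reference, here is the same one-liner written out with comments (a sketch; behaviour is unchanged):
awk '
    { d[$1] = d[$1] || $2 }                # once a key has seen a 1, the stored value stays 1
    END { for (k in d) print k, d[k] }     # note: for-in traversal order is unspecified
' file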
Edit: a sort-only solution
You can use sort with a double pass: the first pass sorts by key and, within each key, by the second column in reverse order (so the line with 1 comes first), and the second pass (sort -u -k1,1) keeps only the first line for each key. For example:
sort -k1,1 -k2,2r file | sort -u -k1,1
You get:
ENSG00000197111:I12 1
ENSG00000197111:I13 0
ENSG00000197111:I18 0
ENSG00000197111:I2 0
ENSG00000197111:I3 0
ENSG00000197111:I4 0
ENSG00000197111:I5 1

Command using awk only outputting 1 line

I have two files:
file 1:
rs3094315 1 0 742429 G A
rs12124819 1 0 766409 G A
rs2272756 1 0 871896 A G
rs3128126 1 0 952073 G A
rs3934834 1 0 995669 A G
rs3766192 1 0 1007060 G A
file 2:
rs12565286 1 0 711153 C G
rs12138618 1 0 740098 A G
rs3094315 1 0 742429 G A
rs3131968 1 0 744055 A G
rs12562034 1 0 758311 A G
rs2905035 1 0 765522 A G
rs12124819 1 0 766409 G A
rs2980319 1 0 766985 A T
rs4040617 1 0 769185 G A
rs2980300 1 0 775852 T C
rs4951864 1 0 787889 C T
rs12132517 1 0 788664 A G
rs950122 1 0 836727 C G
rs2272756 1 0 871896 A G
rs3128126 1 0 952073 G A
rs3121561 1 0 980243 T C
rs3813193 1 0 988364 C G
rs4075116 1 0 993492 C T
rs3934834 1 0 995669 T C
rs3766193 1 0 1007033 C G
rs3766192 1 0 1007060 C T
rs3766191 1 0 1007450 T C
The files have many more matches in the first column beyond those shown here; there are about 500k lines in both files.
I'm trying to use the following command to find matches in the first column (rs####) and, if found, put the matched lines from both files on one line in a new file.
awk 'NF==FNR{s=$1; a[s]=$0; next} a[$1]{print $0" "a[$1]}' file1 file2 > mergedfiles
However, this command only gives 1 match (shown below) in mergedfiles and I just can't figure out what is going wrong. It's probably something really easy :s. Thanks in advance if you are able to clear this problem up.
rs3766192 1 0 1007060 C T rs3766192 1 0 1007060 G A
Use:
NR==FNR
Your condition is true only on the sixth line (because every line has 6 fields), so only that single line of file1 gets stored!
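In other words, with that one change the original command becomes:
awk 'NR==FNR{s=$1; a[s]=$0; next} a[$1]{print $0" "a[$1]}' file1 file2 > mergedfiles
This stores every line of file1 keyed by its first column, then prints each line of file2 whose first column was seen in file1, followed by the stored file1 line.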
