bash: transform scaffold fasta
I have a fasta file with the following sequences:
>NZ_OCNF01123018.1
TACAAATACAACAAATACAAGTACACCAAGTACAAATACAAGTATCCCAAGTACAAATACAAGTA
TCCCAAGTACAAATACAAGTATTCCAAGTACAAATACAAAACCTGTTGAGCAACCTAAACCTGTTGAAC
AGCCCAAACCTGTTGAACAGCNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNAAACCTTTATCCGCACTTA
CGAGCAAATACACCAATACCGCTTTATCGGCACAGTCTGCCCAAATTGACGGATGCACCATGTTACCCAACAC
ATCAATCAACGTTTGTGGGATCACCTGAAAAAGGGCGCGGTTTGTGGTTGATG
>NZ_OCNF01123018.2
AATTGTCGTGTAAAGCCACACCAAACCCCATTATAGCCCCAAAAACACCAAAAAGGCTGCCTGAACCACATTTCAGACAG
And I want to split all sequences in the file that contain runs of multiple Ns, at the site where the run occurs, making two sequences out of each.
Expected solution:
>NZ_OCNF01123018.1
TACAAATACAACAAATACAAGTACACCAAGTACAAATACAAGTATCCCAAGTACAAATACAAGTA
TCCCAAGTACAAATACAAGTATTCCAAGTACAAATACAAAACCTGTTGAGCAACCTAAACCTGTTGAAC
AGCCCAAACCTGTTGAACAGC
>contig1
AAACCTTTATCCGCACTTA
CGAGCAAATACACCAATACCGCTTTATCGGCACAGTCTGCCCAAATTGACGGATGCACCATGTTACCCAACAC
ATCAATCAACGTTTGTGGGATCACCTGAAAAAGGGCGCGGTTTGTGGTTGATG
>NZ_OCNF01123018.2
AATTGTCGTGTAAAGCCACACCAAACCCCATTATAGCCCCAAAAACACCAAAAAGGCTGCCTGAACCACATTTCAGACAG
my (inelegant) approach would be this:
perl -pe 's/[N]+/\*/g' $file | perl -pe 's/\*/\n>contig1\n/g'
Of course that also replaces the Ns in the sequence headers and creates headers without a sequence. As a plus, it would be nice to number the new 'contigs' from 1 to x in case there are multiple sequences with Ns.
What do you suggest?
I'd suggest using split instead of trying to get a regex just right, and a script instead of a brittle and crammed "one"-liner.
use warnings;
use strict;
use feature 'say';
my $file = shift @ARGV;
die "Usage: $0 filename\n" if !$file; # also check submitted $file
my $content = do { # or: my $content = Path::Tiny::path($file)->slurp;
local $/;
open my $fh, '<', $file or die "Can't open $file: $!";
<$fh>;
};
my @f = grep { /\S/ } split /(?<!>)NN+/, $content;
say shift @f;
my $cnt;
for (@f) {
    say ">contig", (++$cnt), "\n$_";
}
This slurps the file into $content since a run of Ns can span multiple lines; the Path::Tiny module can make that cleaner. The first element of the obtained array needs no >contig so it is shifted away.
The negative lookbehind (?<!...) makes the regex in split's separator pattern match NN+ only when it is not preceded by >, thus protecting (excluding) header lines that may start with that. If headers may contain consecutive Ns which are not right after > then you need to refine this.
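To see what the lookbehind buys, here is a tiny self-contained check (the >NNheader record is made up purely for illustration):

use strict;
use warnings;
use feature 'say';

# a made-up record whose header starts with NN right after '>'
my $content = ">NNheader\nACGTNNNNACGT\n";

# without the lookbehind, the NN in the header is also treated as a separator
my @plain = grep { /\S/ } split /NN+/, $content;
say scalar @plain;      # 3 pieces: the header got broken up

# with the lookbehind, NN directly after '>' is left alone
my @guarded = grep { /\S/ } split /(?<!>)NN+/, $content;
say scalar @guarded;    # 2 pieces: header intact, sequence split once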
I expanded your perl one-liner a bit:
cat file.fasta | \
perl -pe 's/\n//g unless /^>/; s/>/\n>/g;' | \
perl -pe 's/N+(?{$n++})/\n>contig${n}\n/g unless /^>/'
The first part removes the newlines between bases; the second part replaces each continuous run of 'N'.
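For reference, a hedged usage sketch (assuming the input sits in file.fasta as above; the output name file.split.fasta is made up) just chains the two stages:

perl -pe 's/\n//g unless /^>/; s/>/\n>/g;' file.fasta \
  | perl -pe 's/N+(?{$n++})/\n>contig${n}\n/g unless /^>/' \
  > file.split.fasta

Note that $n keeps counting across the whole file, so the new contigs end up numbered from 1 to x as requested.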
Related
Unscramble words Challenge - improve my bash solution
There is a Capture the Flag challenge.

I have two files; the first one contains scrambled text like this, with about 550 entries:

dnaoyt
cinuertdso
bda
haey
tolpap
...

The second file is a dictionary with about 9,000 entries:

radar
ccd
gcc
fcc
historical
...

The goal is to find the right, unscrambled version of each word, which is contained in the dictionary file. My approach is to sort the characters of the first word from the first file and then check whether the first word from the second file has the same length. If so, sort that too and compare them.

This is my fully functional bash script, but it is very slow:

#!/bin/bash
while IFS="" read -r p || [ -n "$p" ]
do
    var=0
    ro=$(echo $p | perl -F -lane 'print sort @F')
    len_ro=${#ro}
    while IFS="" read -r o || [ -n "$o" ]
    do
        ro2=$(echo $o | perl -F -lane 'print sort @F')
        len_ro2=${#ro2}
        let "var+=1"
        if [ $len_ro == $len_ro2 ]; then
            if [ $ro == $ro2 ]; then
                echo $o >> new.txt
                echo $var >> whichline.txt
            fi
        fi
    done < dictionary.txt
done < scrambled-words.txt

I have also tried converting all characters to ASCII integers and summing each word, but while comparing I realized that different char patterns may produce the same sum.

[edit] For the record:
- no anagrams are contained in the dictionary
- to get the flag, you need to export the unscrambled words as one blob and make a SHA hash out of it (that's the flag)
- link to the CTF for the guy who wanted the files: https://challenges.reply.com/tamtamy/user/login.action
You're better off creating a lookup dictionary (keyed by the sorted word) from the dictionary file. Your loop body is executed 550 * 9,000 = 4,950,000 times (O(N*M)). The solution I propose executes two loops of at most 9,000 passes each (O(N+M)). Bonus: It finds all possible solutions at no cost.

#!/usr/bin/perl
use strict;
use warnings qw( all );
use feature qw( say );

my $dict_qfn      = "dictionary.txt";
my $scrambled_qfn = "scrambled-words.txt";

sub key { join "", sort split //, $_[0] }

my %dict;
{
    open(my $fh, "<", $dict_qfn)
        or die("Can't open \"$dict_qfn\": $!\n");
    while (<$fh>) {
        chomp;
        push @{ $dict{key($_)} }, $_;
    }
}

{
    open(my $fh, "<", $scrambled_qfn)
        or die("Can't open \"$scrambled_qfn\": $!\n");
    while (<$fh>) {
        chomp;
        my $matches = $dict{key($_)};
        say "$_ matches @$matches" if $matches;
    }
}

I wouldn't be surprised if this only takes one millionth of the time of your solution for the sizes you provided (and it scales so much better than yours if you were to increase the sizes).
I would do something like this with gawk:

gawk '
NR == FNR {
    dict[csort()] = $0
    next
}
{
    print dict[csort()]
}
function csort( chars, sorted) {
    split($0, chars, "")
    asort(chars)
    for (i in chars)
        sorted = sorted chars[i]
    return sorted
}' dictionary.txt scrambled-words.txt
Here's a perl-free solution I came up with using sort and join:

sort_letters() {
    # Splits each letter onto a line, sorts the letters, then joins them
    # e.g. "hello" becomes "ehllo"
    echo "${1}" | fold -b1 | sort | tr -d '\n'
}

# For each input file...
for input in "dict.txt" "words.txt"; do
    # Convert each line to [sorted] [original]
    # then sort and save the results with a .sorted extension
    while read -r original; do
        sorted=$(sort_letters "${original}")
        echo "${sorted} ${original}"
    done < "${input}" | sort > "${input}.sorted"
done

# Join the two files on the [sorted] word
# outputting the scrambled and unscrambled words
join -j 1 -o 1.2,2.2 "words.txt.sorted" "dict.txt.sorted"
I tried something very similar, but a bit different.

#!/bin/bash

exec 3<scrambled-words.txt
while read -r line <&3; do
    printf "%s" ${line} | perl -F -lane 'print sort @F'
done>scrambled-words_sorted.txt
exec 3>&-

exec 3<dictionary.txt
while read -r line <&3; do
    printf "%s" ${line} | perl -F -lane 'print sort @F'
done>dictionary_sorted.txt
exec 3>&-

printf "" > whichline.txt
exec 3<scrambled-words_sorted.txt
while read -r line <&3; do
    counter="$((++counter))"
    grep -n -e "^${line}$" dictionary_sorted.txt | cut -d ':' -f 1 | tr -d '\n' >>whichline.txt
    printf "\n" >>whichline.txt
done
exec 3>&-

As you can see I don't create a new.txt file; instead I only create whichline.txt, with a blank line where the word doesn't match. You can easily paste them up to create new.txt. The logic behind the script is nearly the logic behind yours, except that I call perl fewer times and I save two support files. I think (but I am not sure) that creating them and cycling over only one file will be better than ~5kk calls of perl. This way perl is called "only" ~10k times. Finally, I decided to use grep because it's (maybe) the fastest regex matcher, and by searching for the entire line the length is intrinsic in the regex. Please note that what @benjamin-w said is still valid and, in that case, grep will reply badly and I did not manage it! I hope this could help [:
remove only *some* fullstops from a csv file
If I have lines like the following:

1,987372,987372,C,T,.,.,.,.,.,.,.,.,1,D,.,.,.,.,.,.,.,1.293,12.23,0.989,0.973,D,.,.,.,.,0.253,0,4.08,0.917,1.048,1.000,1.000,12.998
1,987393,987393,C,T,.,.,.,.,.,.,.,.,1,D,.,.,.,.,.,.,0.152,1.980,16.09,0.999,0.982,D,-0.493,T,0.335,T,0.696,0,5.06,0.871,0.935,0.998,0.997,16.252

how can I replace all instances of ,., with ,?,

I want to preserve actual decimal places in the numbers, so I can't just do

sed 's/./?/g' file

However, when doing:

sed 's/,.,/,?,/g' file

this only appears to work in some cases, i.e. there are still instances of ,., hanging around. Does anyone have any pointers? Thanks
This should work:

sed ':a;s/,\.,/,?,/g;ta' file

With successive ,., strings, after a substitution succeeds, the next character to be processed will be the following . that doesn't match the pattern, so you need a second pass.

:a is a label for the upcoming loop.
,\., will match a dot between commas. Note that the dot must be escaped because . matches any character (otherwise ,a, would also match the pattern ,.,).
g is for global substitution.
ta tests the previous substitution and, if it succeeded, loops back to the :a label for the remaining substitutions.
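For a quick sanity check of the loop, here is a made-up snippet of back-to-back ,., groups (GNU sed syntax, as above):

echo '1,.,.,.,2' | sed ':a;s/,\.,/,?,/g;ta'
1,?,?,?,2

The first pass only gets every other dot (giving 1,?,.,?,2); the t-loop's second pass picks up the remaining one.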
Using sed it is possible by running a loop as shown in the answer above; however, the problem is easily solved using a perl command line with lookarounds:

perl -pe 's/(?<=,)\.(?=,)/?/g' file
1,987372,987372,C,T,?,?,?,?,?,?,?,?,1,D,?,?,?,?,?,?,?,1.293,12.23,0.989,0.973,D,?,?,?,?,0.253,0,4.08,0.917,1.048,1.000,1.000,12.998
1,987393,987393,C,T,?,?,?,?,?,?,?,?,1,D,?,?,?,?,?,?,0.152,1.980,16.09,0.999,0.982,D,-0.493,T,0.335,T,0.696,0,5.06,0.871,0.935,0.998,0.997,16.252

This command doesn't need a loop because instead of matching the surrounding commas we're just asserting their position using a lookbehind and a lookahead.
All that's necessary is a single substitution:

$ perl -pe 's/,\.(?=,)/,?/g' dots.csv
1,987372,987372,C,T,?,?,?,?,?,?,?,?,1,D,?,?,?,?,?,?,?,1.293,12.23,0.989,0.973,D,?,?,?,?,0.253,0,4.08,0.917,1.048,1.000,1.000,12.998
1,987393,987393,C,T,?,?,?,?,?,?,?,?,1,D,?,?,?,?,?,?,0.152,1.980,16.09,0.999,0.982,D,-0.493,T,0.335,T,0.696,0,5.06,0.871,0.935,0.998,0.997,16.252
You have an example using sed style regular expressions. I'll offer an alternative - parse the CSV, and then treat each thing as a 'field':

#!/usr/bin/perl
use strict;
use warnings;

# iterate input row by row
while ( <DATA> ) {
    # remove linefeeds
    chomp;
    # split this row on ,
    my @row = split /,/;
    # iterate each field
    foreach my $field ( @row ) {
        # replace this field with "?" if it's "."
        $field = "?" if $field eq ".";
    }
    # stick this row together again.
    print join( ",", @row ), "\n";
}

__DATA__
1,987372,987372,C,T,.,.,.,.,.,.,.,.,1,D,.,.,.,.,.,.,.,1.293,12.23,0.989,0.973,D,.,.,.,.,0.253,0,4.08,0.917,1.048,1.000,1.000,12.998
1,987393,987393,C,T,.,.,.,.,.,.,.,.,1,D,.,.,.,.,.,.,0.152,1.980,16.09,0.999,0.982,D,-0.493,T,0.335,T,0.696,0,5.06,0.871,0.935,0.998,0.997,16.252

This is more verbose than it needs to be, to illustrate the concept. This could be reduced down to:

perl -F, -lane 'print join ",", map { $_ eq "." ? "?" : $_ } @F'

If your CSV also has quoting, then you can break out the Text::CSV module, which handles that neatly.
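Since Text::CSV is only mentioned in passing, here is a rough sketch of how that might look for this particular replacement (reading STDIN and writing STDOUT; the script name in the usage line is made up):

#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV;

# binary => 1 allows embedded quotes/newlines; eol makes print() add the line ending
my $csv = Text::CSV->new( { binary => 1, eol => "\n" } )
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag;

while ( my $row = $csv->getline(\*STDIN) ) {
    # replace any field that is exactly "." with "?"
    $_ = ( $_ eq '.' ? '?' : $_ ) for @$row;
    $csv->print( \*STDOUT, $row );
}

Usage would be something along the lines of: perl dots2q.pl < file > file.fixed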
You just need 2 passes, since the trailing , found on a ,., match isn't available to match the leading , on the next ,.,:

$ sed 's/,\.,/,?,/g; s/,\.,/,?,/g' file
1,987372,987372,C,T,?,?,?,?,?,?,?,?,1,D,?,?,?,?,?,?,?,1.293,12.23,0.989,0.973,D,?,?,?,?,0.253,0,4.08,0.917,1.048,1.000,1.000,12.998
1,987393,987393,C,T,?,?,?,?,?,?,?,?,1,D,?,?,?,?,?,?,0.152,1.980,16.09,0.999,0.982,D,-0.493,T,0.335,T,0.696,0,5.06,0.871,0.935,0.998,0.997,16.252

The above will work in any sed on any OS.
Extracting the first two characters from a file in perl into another file
I'm having a little bit of trouble with my code below -- I'm trying to figure out how to open up all these text files (.csv files that end in DIS that all have one line in them) and get the first two characters (these are all numbers) from them and print them into another file of the same name, with a ".number" suffix. Some of these .DIS files don't have anything in them, in which case I want to print "0". Lastly, I would like to go through each original .DIS file and delete the first 3 characters -- I did this through bash.

my @DIS = <*.DIS>;

foreach my $file (@DIS){
    my $name = $file;
    my $output = "$name.number";
    open(INHANDLE, "< $file") || die("Could not open file");
    while(<INHANDLE>){
        open(OUT_FILE,">$output") || die;
        my $line = $_;
        chomp ($line);
        my $string = $line;
        if ($string eq ""){
            print "0";
        } else {
            print substr($string,0,2);
        }
    }
    system("sed -i 's/\(.\{3\}\)//' $file");
}

When I run this code, I get a list of numbers concatenated together and empty .DIS.number files. I'm rather new to Perl, so any help would be appreciated!
When I run this code, I get a list of numbers concatenated together and empty .DIS.number files.

This is because of this line:

print substr($string,0,2);

print defaults to printing to STDOUT (ie. the screen). You need to give it the filehandle to print to:

print OUT_FILE substr($string,0,2);

They're being concatenated because print just prints what you tell it to; it won't put newlines in for you (there are some global variables which can change this, don't mess with them). You have to add the newline yourself:

print OUT_FILE substr($string,0,2), "\n";

As a final note, when working with files in Perl I would suggest using lexical filehandles, Path::Tiny, and autodie. They will avoid a great number of classic problems working with files in Perl.
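A minimal sketch of that style applied to this task (assuming Path::Tiny is installed; it only covers the file handling, not a drop-in rewrite of the whole script):

use strict;
use warnings;
use autodie;        # open/close failures die with a useful message automatically
use Path::Tiny;

for my $file ( glob '*.DIS' ) {
    # first (and only) line of the file, or an empty string if the file is empty
    my $line = ( path($file)->lines )[0] // '';
    chomp $line;

    my $two = length $line >= 2 ? substr( $line, 0, 2 ) : 0;

    open my $out, '>', "$file.number";   # lexical filehandle; autodie checks the open
    print {$out} "$two\n";
    close $out;
}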
I suggest you do it like this.

Each *.DIS file is opened and the contents read into $text. Then a regex substitution is used to remove the first three characters from the string and capture the first two in $1.

If the substitution succeeded then the contents of $1 are written to the number file; otherwise the original file is empty (or shorter than two characters) and a zero is written instead. The remaining contents of $text are then written back to the *.DIS file.

use strict;
use warnings;
use v5.10.1;
use autodie;

for my $dis_file ( glob '*.DIS' ) {

    my $text = do {
        open my $fh, '<', $dis_file;
        <$fh>;
    };

    my $num_file = "$dis_file.number";

    open my $dis_fh, '>', $dis_file;
    open my $num_fh, '>', $num_file;

    if ( defined $text and $text =~ s/^(..).?// ) {
        print $num_fh "$1\n";
        print $dis_fh $text;
    }
    else {
        print $num_fh "0\n";
        print $dis_fh "-\n";
    }
}
This awk script extracts the first two chars of each file into its own file. Empty files are expected to have one empty line, based on the spec.

awk 'FNR==1{pre=substr($0,1,2); pre=length(pre)==2?pre:0; print pre > FILENAME".number"}' *.DIS

This will remove the first 3 chars:

cut -c 4-

A bash for loop will be better to do both, for which we'll need to modify the awk script a little bit:

for f in *.DIS; do
    awk 'NR==1{pre=substr($0,1,2); $0=length(pre)==2?pre:0; print}' $f > $f.number;
    cut -c 4- $f > $f.cut;
done

Explanation: loop through all files in *.DIS; for the first line of each file, try to get the first two chars (1,2) of the line ($0) and assign them to pre. If the length of pre is not two (either the line is empty or has only 1 char), set the line to 0, otherwise use pre; print the line. The output file name will be the input file name with a .number suffix appended. The $0 assignment is a trick to save a couple of keystrokes, since print without arguments prints $0; otherwise you can provide the argument. Ideally you should quote "$f" since it may contain spaces in the file name...
Using sed on text files with a csv
I've been trying to do bulk find and replace on two text files using a csv. I've seen the questions that SO suggests, and none seem to answer my question.

I've created two variables for the two text files I want to modify. The csv has two columns and hundreds of rows. The first column contains strings (none have whitespace) already in the text file that need to be replaced with the corresponding strings in the same row in the second column.

As a test, I tried the script

#!/bin/bash
test1='long_file_name.txt'
find='string1'
replace='string2'
sed -e "s/$find/$replace/g" $test1 > $test1.tmp && mv $test1.tmp $test1

This was successful, except that I need to do it once for every row in the csv, using the values given by the csv in each row. My hunch is that my while loop was used wrongly, but I can't find the error. When I execute the script below, I get the command line prompt, which makes me think that something has happened. When I check the text files, nothing's changed.

The two text files, this script, and the csv are all in the same folder (it's also been my working directory when I do this).

#!/bin/bash
textfile1='long_file_name1.txt'
textfile2='long_file_name2.txt'
while IFS=, read f1 f2
do
    sed -e "s/$f1/$f2/g" $textfile1 > $textfile1.tmp && \
    mv $textfile1.tmp $textfile1
    sed -e "s/$f1/$f2/g" $textfile2 > $textfile2.tmp && \
    mv $textfile2.tmp $textfile2
done <'findreplace.csv'

It seems to me that this code should do what I want it to do (but doesn't); perhaps I'm misunderstanding something fundamental (I'm new to bash scripting)?

The csv looks like this, but with hundreds of rows. All a_i's should be replaced with their counterpart b_i in the next column over.

a_1 b_1
a_2 b_2
a_3 b_3

Something to note: all the strings actually contain underscores, just in case this affects something. I've tried wrapping the variable name in braces a la ${var}, but it still doesn't work.

I appreciate the solutions, but I'm also curious to know why the above doesn't work. (Also, I would vote everyone up, but I lack the reputation to do so. However, know that I appreciate and am learning a lot from your answers!)
If you are going to process a lot of data and your patterns can contain special characters, I would consider using Perl, especially if you are going to have a lot of pairs in findreplace.csv. You can use the following script as a filter or for in-place modification of lots of files. As a side effect, it will load the replacements and build the Aho-Corasick automaton only once per invocation, which makes this solution pretty efficient (O(M+N) instead of O(M*N) in your solution).

#!/usr/bin/perl

use strict;
use warnings;
use autodie;

my $in_place = ( @ARGV and $ARGV[0] =~ /^-i(.*)/ )
    ? do {
        shift;
        my $backup_extension = $1;
        my $backup_name = $backup_extension =~ /\*/
            ? sub { ( my $fn = $backup_extension ) =~ s/\*/$_[0]/; $fn }
            : sub { shift . $backup_extension };
        my $oldargv = '-';
        sub {
            if ( $ARGV ne $oldargv ) {
                rename( $ARGV, $backup_name->($ARGV) );
                open( ARGVOUT, '>', $ARGV );
                select(ARGVOUT);
                $oldargv = $ARGV;
            }
        };
    }
    : sub { };

die "$0: File with replacements required." unless @ARGV;

my ( $re, %replace );
do {
    my $filename = shift;
    open my $fh, '<', $filename;
    %replace = map { chomp; split ',', $_, 2 } <$fh>;
    close $fh;
    $re = join '|', map quotemeta, keys %replace;
    $re = qr/($re)/;
};

while (<>) {
    $in_place->();
    s/$re/$replace{$1}/g;
}
continue { print }

Usage:

./replace.pl replace.csv <file.in >file.out

as well as

./replace.pl replace.csv file.in >file.out

or in-place

./replace.pl -i replace.csv file1.csv file2.csv file3.csv

or with backup

./replace.pl -i.orig replace.csv file1.csv file2.csv file3.csv

or with backup with placeholder

./replace.pl -ithere.is.\*.original replace.csv file1.csv file2.csv file3.csv
You should convert your CSV file to a sed script with the following command:

cat replace.csv | awk -F, '{print "s/" $1 "/" $2 "/g";}' > sed.script

And then you will be able to do a one-pass replacement:

sed -i -f sed.script longfilename.txt

This will be a faster implementation of what you wanna do. BTW, sorry, but I do not understand what is wrong with your script, which should work except if your CSV file has more than 2 columns.
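For illustration, if findreplace.csv held the a_i/b_i rows from the question (assuming they are comma-separated), the generated sed.script would come out as:

s/a_1/b_1/g
s/a_2/b_2/g
s/a_3/b_3/g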
How to remove lines with same domain
Have a large txt file of 1 million lines. Example:

http://e-planet.ru/hosting/
http://www.anelegantchaos.org/
http://site.ru/e3-den-vtoroj/
https://escrow.webmoney.ru/about.aspx
http://e-planet.ru/feedback.html

How to clean it of lines with the same domains? I need to remove one of http://e-planet.ru/hosting/ or http://e-planet.ru/feedback.html
I didn't understand your question at first. Here is an awk one-liner:

awk -F'/' '!a[$3]++' myfile

Test input:

http://e-planet.ru/hosting/
http://www.anelegantchaos.org/
http://site.ru/e3-den-vtoroj/
https://escrow.webmoney.ru/about.aspx
http://e-planet.ru/feedback.html
https://escrow.webmoney.ru/woopwoop
httpp://whatever.com/slk

Output:

http://e-planet.ru/hosting/
http://www.anelegantchaos.org/
http://site.ru/e3-den-vtoroj/
https://escrow.webmoney.ru/about.aspx
httpp://whatever.com/slk

Here, the second occurrences of http://e-planet.ru/ and https://escrow.webmoney.ru/ are removed.

This script splits the lines using / as a separator, and compares the 3rd column (the domain) to see if there are duplicates. If it is unique, it will be printed.

It is to be noted that it only works if ALL urls are preceded by whateverprotocol//. The double slash is important because this is what makes the 3rd column the domain.
use strict;
use warnings;

open my $in, '<', 'in.txt' or die $!;

my %seen;

while(<$in>){
    chomp;
    my ($domain) = /[http:|https]\/\/(.+?)\//g;
    $seen{$domain}++;
    print "$_\n" if $seen{$domain} == 1;
}
Sorry I'm not able to reply to fugu's post. I think the problem might be that you have more than one URL on one line, so try this out:

use strict;
use warnings;

open my $in, '<', 'in.txt' or die $!;

my %seen;

while(<$in>){
    chomp;
    for (split /\s/) {
        my ($url) = /[http:|https]\/\/(.+?)\//g;
        $seen{$url}++;
        print "$_\n" if $seen{$url} == 1;
    }
}
If all you care about are the domains of those URIs, then I suggest that you filter them out first. Then it's a simple process of sorting them and specifying that you only want unique entries:

perl -lne 'print $1 if m{//(.+?)/}' file | sort | uniq > uniq_domains.txt