How to add an empty line at the end of these commands? - bash

I have many fastq files that I want to convert to fasta.
Since they belong to the same sample, I would like to merge the resulting fasta files into a single file.
I tried running these two commands:
sed -n '1~4s/^#/>/p;2~4p' INFILE.fastq > OUTFILE.fasta
cat infile.fq | awk '{if(NR%4==1) {printf(">%s\n",substr($0,2));} else if(NR%4==2) print;}' > file.fa
And each output file is indeed a valid fasta file.
However I get a problem in the next step. When I merge files with this command:
cat $1 >> final.fasta
The final file apparently looks correct. But when I run makeblastdb it gives me the following error:
FASTA-Reader: Ignoring invalid residues at position(s): On line 512: 1040-1043, 1046-1048, 1050-1051, 1053, 1055-1058, 1060-1061, 1063, 1066-1069, 1071-1076
Looking at that line, I found that a file's header had been appended to the end of the previous file's last sequence, like this:
GGCTTAAACAGCATT>e45dcf63-78cf-4769-96b7-bf645c130323
So how can I add a blank line to the end of the file within the scripts that convert fastq to fasta, so that when I merge them the records are stacked correctly instead of a header landing at the end of the previous file's sequence?

So how can I add a blank line to the end of the file within the
scripts that convert fastq to fasta?
I would use GNU sed, replacing
cat $1 >> final.fasta
with
sed '$a\\n' $1 >> final.fasta
Explanation: the sed expression means: at the last line ($), append a newline (\n); this action is undertaken before the default one of printing. If you prefer GNU AWK, then you can get the same behavior this way:
awk '{print}END{print ""}' $1 >> final.fasta
Note: I was unable to test either solution, as you do not provide enough information to do so. I assume the line above is somewhere inside a loop and that $1 is always the name of a file existing in the current working directory.
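For instance, a minimal sketch of such a loop (untested; the per-part fasta filenames here are just placeholders):
for f in part_*.fasta; do
    awk '{print} END{print ""}' "$f" >> final.fasta   # copy the file and add a trailing blank line
done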

If the only thing you need is an extra blank line, and the input files are within 1.5 GB in size, then just do:
awk NF=NF RS='^$' FS='\n' OFS='\n'
This should work for mawk 1/2, gawk, and nawk, and maybe others as well. It works despite appearing not to do anything special because the extra \n comes from ORS.
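As a sketch of how it could slot into the merging step from the question (assuming $1 is the file to append, as above):
awk NF=NF RS='^$' FS='\n' OFS='\n' "$1" >> final.fasta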

Related

How to split a text file content by a string?

Suppose I've got a text file that consists of two parts separated by the delimiting string ---
aa
bbb
---
cccc
dd
I am writing a bash script to read the file and assign the first part to var part1 and the second part to var part2:
part1= ... # should be aa\nbbb
part2= ... # should be cccc\ndd
How would you suggest writing this in bash?
You can use awk:
foo="$(awk 'NR==1' RS='---\n' ORS='' file.txt)"
bar="$(awk 'NR==2' RS='---\n' ORS='' file.txt)"
This reads the file twice, but handling text files in the shell (i.e. storing their content in variables) should generally be limited to small files anyway. Given that your file is small, this shouldn't be a problem.
Note: Depending on your actual task, you may be able to just use awk for the whole thing. Then you don't need to store the content in shell variables or read the file twice.
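For completeness, a pure-bash sketch that avoids awk entirely (assuming the file is small and the delimiter is exactly --- on a line of its own):
content=$(<file.txt)                 # slurp the file; trailing newlines are stripped
part1=${content%%$'\n'---$'\n'*}     # everything before the first --- line
part2=${content#*$'\n'---$'\n'}      # everything after the first --- line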
A solution using sed:
foo=$(sed '/^---$/q;p' -n file.txt)
bar=$(sed '1,/^---$/b;p' -n file.txt)
The -n command line option tells sed to not print the input lines as it processes them (by default it prints them). sed runs a script for each input line it processes.
The first sed script
/^---$/q;p
contains two commands (separated by ;):
/^---$/q - quit when you reach the line matching the regex ^---$ (a line that contains exactly three dashes);
p - print the current line.
The second sed script
1,/^---$/b;p
contains two commands:
1,/^---$/b - starting with line 1 until the first line matching the regex ^---$ (a line that contains only ---), branch to the end of the script (i.e. skip the second command);
p - print the current line;
Using csplit:
csplit --elide-empty-files --quiet --prefix=foo_bar file.txt "/---/" "{*}" && sed -i '/---/d' foo_bar*
If your version of coreutils is >= 8.22, the --suppress-matched option can be used and the sed post-processing is not required:
csplit --suppress-matched --elide-empty-files --quiet --prefix=foo_bar file.txt "/---/" "{*}".
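If you then need the pieces in shell variables, they can simply be read back (a sketch; with that prefix csplit names the pieces foo_bar00 and foo_bar01 by default):
part1=$(<foo_bar00)
part2=$(<foo_bar01)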

Use grep only on specific columns in many files?

Basically, I have one file with patterns and I want every line to be searched in all text files in a certain directory. I also only want exact matches. The many files are zipped.
However, I have one more condition. I need the first two columns of a line in the pattern file to match the first two columns of a line in any given text file that is searched. If they match, the output I want is the pattern (the entire line) followed by the names of the text files in which a match was found, each with its entire matching line (not just the first two columns).
An output such as:
pattern1
file23:"text from entire line in file 23 here"
file37:"text from entire line in file 37 here"
file156:"text from entire line in file 156 here"
pattern2
file12:"text from entire line in file 12 here"
file67:"text from entire line in file 67 here"
file200:"text from entire line in file 200 here"
I know that grep can take an input file, but the problem is that it takes every pattern in the pattern file and searches for them in a given text file before moving onto the next file, which makes the above output more difficult. So I thought it would be better to loop through each line in a file, print the line, and then search for the line in the many files, seeing if the first two columns match.
I thought about this:
cat pattern_file.txt | while read line
do
echo $line >> output.txt
zgrep -w -l $line many_files/*txt >> output.txt
done
But with this code, it doesn't search by the first two columns only. Is there a way to specify the first two columns for both the pattern line and the lines that grep searches through?
What is the best way to do this? Would something other than grep, like awk, be better to use? There were other questions like this, but none that used columns for both the search pattern and the searched file.
A few lines from the pattern file:
1 5390182 . A C 40.0 PASS DP=21164;EFF=missense_variant(MODERATE|MISSENSE|Aag/Cag|p.Lys22Gln/c.64A>C|359|AT1G15670|protein_coding|CODING|AT1G15670.1|1|1)
1 5390200 . G T 40.0 PASS DP=21237;EFF=missense_variant(MODERATE|MISSENSE|Gcc/Tcc|p.Ala28Ser/c.82G>T|359|AT1G15670|protein_coding|CODING|AT1G15670.1|1|1)
1 5390228 . A C 40.0 PASS DP=21317;EFF=missense_variant(MODERATE|MISSENSE|gAa/gCa|p.Glu37Ala/c.110A>C|359|AT1G15670|protein_coding|CODING|AT1G15670.1|1|1)
A few lines from one of the searched files:
1 10699576 . G A 36 PASS DP=4 GT:GQ:DP 1|1:36:4
1 10699790 . T C 40 PASS DP=6 GT:GQ:DP 1|1:40:6
1 10699808 . G A 40 PASS DP=7 GT:GQ:DP 1|1:40:7
Both are in reality much larger.
It sounds like this might be what you want:
awk 'NR==FNR{a[$1,$2]; next} ($1,$2) in a' patternfile anyfile
If it's not then update your question to provide a clear, simple statement of your requirements and concise, testable sample input and expected output that demonstrates your problem and that we could test a potential solution against.
If anyfile is actually a zip file then you'd do something like:
zcat anyfile | awk 'NR==FNR{a[$1,$2]; next} ($1,$2) in a' patternfile -
Replace zcat with whatever command you use to produce text from your zip file if that's not what you use.
Per the question in the comments, if both input files are compressed and your shell supports it (e.g. bash) you could do:
awk 'NR==FNR{a[$1,$2]; next} ($1,$2) in a' <(zcat patternfile) <(zcat anyfile)
otherwise just uncompress patternfile to a tmp file first and use that in the awk command.
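If you also want the grouped report shown in the question (each pattern line followed by the files and lines that match on the first two columns), here is an untested sketch along the same lines; it assumes zcat can read your compressed files and that decompressing every file once per pattern is acceptable:
while IFS= read -r pattern; do
    printf '%s\n' "$pattern"                  # print the pattern line itself
    read -r c1 c2 _ <<< "$pattern"            # pull out its first two columns
    for f in many_files/*txt; do
        zcat "$f" | awk -v c1="$c1" -v c2="$c2" -v fname="$f" \
            '$1==c1 && $2==c2 {print fname ":" $0}'
    done
done < pattern_file.txt > output.txt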
Use read to parse the pattern file's columns and add an anchor to the zgrep pattern:
while read -r column1 column2 rest_of_the_line
do
echo "$column1 $column2 $rest_of_the_line"
zgrep -w -l "^$column1\s*$column2" many_files/*txt
done < pattern_file.txt >> output.txt
read is able to parse a line into multiple variables passed as parameters, the last of which gets the rest of the line. It separates fields around the characters of $IFS, the Internal Field Separator (by default tabs, spaces and newlines; it can be overridden for the read command by using while IFS='...' read ...).
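A quick illustration of that splitting, using a shortened line from the pattern file:
read -r column1 column2 rest_of_the_line <<< '1 5390182 . A C 40.0 PASS'
# column1=1, column2=5390182, rest_of_the_line='. A C 40.0 PASS'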
Using -r avoids unwanted escape processing and makes the parsing more reliable, and while ... do ... done < file performs a bit better since it avoids a useless use of cat. Since the output of all the commands inside the while loop is redirected, I also put the redirection on the while rather than on each individual command.

How to insert a specific character at a specific line of a file using sed or awk?

I want to use a command to edit a specific line of a file instead of using vi. Here is the thing: if the line starts with a #, remove the # to uncomment it; otherwise, add a # to comment it out. I'd like to use sed or awk, but I can't get it to work as expected.
This is the file.
what are you doing now?
what are you gonna do? stab me?
this is interesting.
This is a test.
go big
don't be rude.
For example, I just want to add a # at the beginning of line 4 (This is a test.) if it doesn't start with #, and if it does start with #, remove the #.
I've already tried sed & gawk (awk):
gawk -i inplace '$1!="#" {print "#",$0;next};{print substr($0,3,length-1)}' file
sed -i /test/s/^#// file # make it uncomment
sed -i /test/s/^/#/ file # make it comment
I don't know how to use if/else to make sed work. I could only make one direction work with a single command, and then I'd need another regex for the opposite direction.
Using gawk, it works for the target line, but it messes up the rest of the file.
This might work for you (GNU sed):
sed '4{s/^/#/;s/^##//}' file
On line 4, prepend a # to the line and, if there are now two #'s, remove them.
Could also be written:
sed '4s/^/#/;4s/^##//' file
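The same trick also works keyed on the text rather than the line number; for example, to toggle every line containing test (assuming only the intended line matches):
sed '/test/{s/^/#/;s/^##//}' file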
This will remove # from the start of line 4 or add it if it wasn't already there:
sed -i '4s/^#/\n/; 4s/^[^\n]/#&/; 4s/^\n//' File
The above assume GNU sed. If you have BSD/MacOS sed, some minor changes will be required.
When sed reads a new line, the one thing that we know for sure about the new line is that it does not contain \n. (If it did, it would be two lines, not one.) Using this knowledge, the script works by:
s/^#/\n/
If the fourth line starts with #, replace # with \n. (The \n serves as a notice that the line had originally been commented out.)
4s/^[^\n]/#&/
If the fourth line now starts with anything other than \n (meaning that it was not originally commented), put a # in front.
4s/^\n//
If the fourth line now starts with \n, remove it.
Alternative: Modifying lines that contain test
To comment/uncomment lines that contain test:
sed '/test/{s/^#/\n/; s/^[^\n]/#&/; s/^\n//}' File
Alternative: using awk
The exact same logic can be applied using awk. If we want to comment/uncomment line 4:
awk 'NR==4 {sub(/^#/, "\n"); sub(/^[^\n]/, "#&"); sub(/^\n/, "")} 1' File
If we want to comment/uncomment any line containing test:
awk '/test/ {sub(/^#/, "\n"); sub(/^[^\n]/, "#&"); sub(/^\n/, "")} 1' File
Alternative: using sed but without newlines
To comment/uncomment any line containing test:
sed '/test/{s/^#//; t; s/^/#/; }' File
How it works:
s/^#//; t
If the line begins with #, then remove it.
t tells sed that, if the substitution succeeded, then it should skip the rest of the commands.
s/^/#/
If we get to this command, that means that the substitution did not succeed (meaning the line was not originally commented out), so we insert #.
If you end up on a system with a sed that doesn't support in-place editing, you can fall back to its uncle ed:
ed -s file 2>/dev/null <<EOF
4 s/^/#/
s/^##//
w
q
EOF
(Standard error is redirected to /dev/null because in ed, unlike sed, it's an error if s doesn't replace anything and a question mark is thus printed to standard error.)
$ awk 'NR==4{$0=(sub(/^#/,"") ? "" : "#") $0} 1' file
what are you doing now?
what are you gonna do? stab me?
this is interesting.
#This is a test.
go big
don't be rude.
$ awk 'NR==4{$0=(sub(/^#/,"") ? "" : "#") $0} 1' file |
awk 'NR==4{$0=(sub(/^#/,"") ? "" : "#") $0} 1'
what are you doing now?
what are you gonna do? stab me?
this is interesting.
This is a test.
go big
don't be rude.

Remove a header from a file during parsing

My script gets every .csv file in a directory and writes them together into a new file. It also edits the files so that certain information is written into every row for all of a file's entries. For instance, this file, called "trap10c_7C000000395C1641_160110.csv":
"",1/10/2016
"Timezone",-6
"Serial No.","7C000000395C1641"
"Location:","LS_trap_10c"
"High temperature limit (�C)",20.04
"Low temperature limit (�C)",-0.02
"Date - Time","Temperature (�C)"
"8/10/2015 16:00",30.0
"8/10/2015 18:00",26.0
"8/10/2015 20:00",24.5
"8/10/2015 22:00",24.0
Is converted into this format
LS_trap_10c,7C000000395C1641,trap10c_7C000000395C1641_160110.csv,Location:,LS_trap_10c
LS_trap_10c,7C000000395C1641,trap10c_7C000000395C1641_160110.csv,High,temperature,limit,(°C),20.04
LS_trap_10c,7C000000395C1641,trap10c_7C000000395C1641_160110.csv,Low,temperature,limit,(°C),-0.02
LS_trap_10c,7C000000395C1641,trap10c_7C000000395C1641_160110.csv,Date,-,Time,Temperature,(°C)
LS_trap_10c,7C000000395C1641,trap10c_7C000000395C1641_160110.csv,8/10/2015,16:00,30.0
LS_trap_10c,7C000000395C1641,trap10c_7C000000395C1641_160110.csv,8/10/2015,18:00,26.0
LS_trap_10c,7C000000395C1641,trap10c_7C000000395C1641_160110.csv,8/10/2015,20:00,24.5
LS_trap_10c,7C000000395C1641,trap10c_7C000000395C1641_160110.csv,8/10/2015,22:00,24.0
I use this script to do this:
dos2unix *.csv
gawk '{print FILENAME, $0}' *.csv>>all_master.erin
sed -i 's/Serial No./SerialNo./g' all_master.erin
sed -i 's/ /,/g' all_master.erin
gawk -F, '/"SerialNo."/ {sn = $3}
/"Location:"/ {loc = $3}
/"([0-9]{1,2}\/){2}[0-9]{4} [0-9]{2}:[0-9]{2}"/ {lin = $0}
{$0 =loc FS sn FS $0}1' all_master.erin > formatted_log.csv
sed -i 's/\"//g' formatted_log.csv
sed -i '/^,/ d' formatted_log.csv
rm all_master.erin
printf "\nDone\n"
I want to remove the messy header from the formatted_log.csv file. I've tried and failed to use sed, as it seems to remove things that I don't want removed. Is sed the best way to approach this problem? The current sed calls fix some problems with the header, but I want the header gone entirely. The lines that say "serial no." and "location" are important and their information is required; the other lines can be removed entirely.
I suppose you edited your script before posting; as it stands, it will not produce the posted output (all_master.erin should be $(<all_master.erin) except in the first occurrence).
You don’t specify many vital details of the format of your input files, so we must guess them. Here are my guesses:
You ignore the first two lines and the subsequent empty third line.
The 4th and 5th lines are useful, since they provide the serial number and location you want to use in all lines of that file
The 6th, 7th and 8th lines are useless.
For each file, you want to discard the first four lines of the posted output.
With these assumptions, this is how I would modify your script:
#!/bin/bash
dos2unix *.csv
awk -vFS=, -vOFS=, \
'{gsub("\"","")}
FNR==4{s=$2}
FNR==5{l=$2}
FNR>8{gsub(" ",OFS);print l,s,FILENAME,$0}' \
*.csv > formatted_log.CSV
printf "\nDone\n"
Explanation of the awk script:
First we delete all double quotes with gsub("\"",""). Then, if the line number is 4, we set the variable s to the second field, which is the serial number. If the line number is 5, we set the variable l to the second field, which is the location. If the line number is greater than 8, we do two things. First, we execute gsub(" ",OFS) to replace all spaces with the value of the output field separator: this is needed because the intended output makes two separate fields of date and time, which were only one field in the input. Second, we print the line preceded by the values of l, s and FILENAME as requested.
Note that I’m using the (questionable) Unix trick of naming the output file with an all-caps extension .CSV to avoid it being wrongly matched by a subsequent *.csv. A better solution would be to put it in another directory, but I don’t know anything about your directory tree so I suggest you modify the output file name yourself.
You could use awk to remove anything with fewer than 3 columns in your final file:
awk 'NF>=3' file
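Note that the final file is comma-separated, so the field separator presumably has to be set for that column count to apply, along the lines of (untested):
awk -F, 'NF>=3' formatted_log.csv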

Merging specific lines in a CSV file if specific trend is found out

I have been stuck creating a script for the scenario described below.
I have a file a.csv with content like this:
123,fsfs,4124124,412412
1314,fasfwe,42145,rwr
1234,fwtrwqt,twt
wqrfsdgaseg
12424,23532,fafwe,gewgt
14214,wet,wertwtw,wet
What happens is that, due to some application issue, the CSV content of one line gets printed onto the following line.
My task is to find such occurrences and merge those lines in a new file,
so that the new file contains only well-formed CSV records. I tried a few things using sed, but couldn't succeed.
$ awk -F, '!length $4 && length $3 {printf "%s,", $0;next}1' file
123,fsfs,4124124,412412
1314,fasfwe,42145,rwr
1234,fwtrwqt,twt,wqrfsdgaseg
12424,23532,fafwe,gewgt
14214,wet,wertwtw,wet
All previous answers seem great, but I wanted to add a sed answer as well because sed is awesome! (And sed was added as a tag so we were missing a sed answer.)
This answer should work across multiple lines, provided that the cut always happens on a separator and that that separator is omitted (see the input example for these assumptions).
sed ':l;/\([^,]*,\)\{3\}[^,]*/!{;N;s/\n/,/g;bl;}' <file_in >file_out
What it does is:
defines a label (:l)
tests if there are four fields (/\([^,]*,\)\{3\}[^,]*/)
if there aren't (!), executes the block ({;N;s/\n/,/g;bl;})
The block :
reads the next line into the buffer (N)
replaces the newline with the separator (s/\n/,/g)
loops by branching to our :l label (bl)
Proof:
$ sed ':l;/\([^,]*,\)\{3\}[^,]*/!{;N;s/\n/,/g;bl;}' <<EOF
> 123,fsfs,4124124,412412
> 1314,fasfwe,42145,rwr
> 1234,fwtrwqt,twt
> wqrfsdgaseg
> 12424,23532,fafwe,gewgt
> 14214,wet,wertwtw,wet
> EOF
123,fsfs,4124124,412412
1314,fasfwe,42145,rwr
1234,fwtrwqt,twt,wqrfsdgaseg
12424,23532,fafwe,gewgt
14214,wet,wertwtw,wet
