Alternating output in bash for loop from two grep - bash

I'm trying to search through files and extract two pieces of relevant information every time they appear in the file. The code I currently have:
#!/bin/bash
echo "Utilized reads from ustacks output" > reads.txt
str1="utilized reads:"
str2="Parsing"
for file in /home/desaixmg/novogene/stacks/sample01/conda_ustacks.o*; do
reads=$(grep $str1 $file | cut -d ':' -f 3)
samples=$(grep $str2 $file | cut -d '/' -f 8)
echo $samples $reads >> reads.txt
done
It grabs every instance in each file (the files have varying numbers of instances of these phrases) and gives me all of the matches on one row per file:
PopA_15.fq 1081264
PopA_16.fq PopA_17.fq 1008416 554791
PopA_18.fq PopA_20.fq PopA_21.fq 604610 531227 595129
...
I want it to pair up each instance (i.e. the 1st instance from both greps next to each other):
PopA_15.fq 1081264
PopA_16.fq 1008416
PopA_17.fq 554791
PopA_18.fq 604610
PopA_20.fq 531227
PopA_21.fq 595129
...
How do I do this? Thank you

Considering that your Input_file is the same as the sample shown, and that each line has an even number of columns with the PopA values in the first half and the numeric values in the second half, the following awk may help.
awk '{for(i=1;i<=(NF/2);i++){print $i,$((NF/2)+i)}}' Input_file
Output will be as follows.
PopA_15.fq 1081264
PopA_16.fq 1008416
PopA_17.fq 554791
PopA_18.fq 604610
PopA_20.fq 531227
PopA_21.fq 595129
In case you want to pass the output of a command to awk, pipe it instead: your_command | awk '...'; there is no need to add Input_file to the above awk command.
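For example, with the variables from the original loop (the unquoted echo flattens both multi-line variables onto a single line, which is the layout the awk above expects), a sketch of the pipe form:
echo $samples $reads | awk '{for(i=1;i<=(NF/2);i++){print $i,$((NF/2)+i)}}' >> reads.txt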

This is what ended up working for me... any tips for more efficient code are definitely welcome
#!/bin/bash
echo "Utilized reads from ustacks output" > reads.txt
str1="utilized reads:"
str2="Parsing"
for file in /home/desaixmg/novogene/stacks/sample01/conda_ustacks.o*; do
reads=$(grep $str1 $file | cut -d ':' -f 3)
samples=$(grep $str2 $file | cut -d '/' -f 8)
paste <(echo "$samples" | column -t) <(echo "$reads" | column -t) >> reads.txt
done
This provides the desired output described above.
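As for efficiency tips: one option is to let a single awk pass per file do both extractions and the pairing, avoiding the two greps, cuts and paste. A rough sketch, assuming the sample name is the 8th '/'-separated field of the Parsing lines and the read count is whatever follows the last ':' on the utilized reads lines:
for file in /home/desaixmg/novogene/stacks/sample01/conda_ustacks.o*; do
awk '/Parsing/ { split($0, p, "/"); s[++n] = p[8] }             # sample name from 8th "/"-field
/utilized reads:/ { k = split($0, r, ":"); c[++m] = r[k] }      # count = text after the last ":"
END { for (i = 1; i <= n; i++) print s[i], c[i] }' "$file" >> reads.txt
done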

Related

Trying to create a script that counts the length of all the reads in a fastq file but getting no return

I am trying to count the length of each read in a fastq file from Illumina sequencing and output this to a tsv (or any sort of file) so I can later also look at it and count the number of reads per file. So I need to cycle down the file, extract each line that has a read on it (every 4th line), then get its length and store this as an output.
num=2
for file in *.fastq
do
echo "counting $file"
function file_length(){
wc -l $file | awk '{print$FNR}'
}
for line in $file_length
do
awk 'NR==$num' $file | chrlen > ${file}read_length.tsv
num=$((num + 4))
done
done
Currently all I get is the "counting $file" line and no other output, but also no errors.
Your script contains a lot of errors in both syntax and algorithm. Please try shellcheck to see what the problems are. The biggest issue is the $file_length part:
you may want to call the function file_length() here, but as written it is just
an undefined variable, which is evaluated as null in the for loop.
If you just want to count the length of the 4th line of *.fastq files,
please try something like:
for file in *.fastq; do
awk 'NR==4 {print length}' "$file" > "${file}_length.tsv"
done
Or if you want to put the results together in a single tsv file, try:
tsvfile="read_length.tsv"
for file in *.fastq; do
echo -n -e "$file\t" >> "$tsvfile"
awk 'NR==4 {print length}' "$file" >> "$tsvfile"
done
Hope this helps.
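If, as the question describes, you want every read rather than only line 4, a minimal sketch (assuming standard 4-line fastq records, where the sequence is always the 2nd line of each record):
for file in *.fastq; do
awk 'NR % 4 == 2 {print length($0)}' "$file" > "${file}_read_lengths.tsv"
done
Each output file then holds one length per read, so wc -l on it also gives the number of reads per file.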

Reading recent entry from a file based on a key

Input file, fruits.txt:
JAN,APPLE
FEB,MANGO
JAN,ORANGE
MAR,APPLE
FEB,APPLE
Expected output file:
MAR,APPLE
FEB,APPLE
JAN,ORANGE
For getting the above output, below code is used:
#!/bin/bash
declare -A m_arr
cat fruits.txt > /tmp/ID.part
while read line
do
Month=$(echo $line | cut -d, -f1)
Fruits=$(echo $line | cut -d, -f2)
m_arr[${Month}]=${Fruits}
done < /tmp/ID.part
for i in "${!m_arr[@]}"
do
echo "$i,${m_arr[$i]}"
done
This works fine for a small amount of data in the input file, but I have 200 000 entries and observed that the cut command is very slow. I tried awk as well and did not get a better result. My requirement is to read the file from row 1, with the key as column 1, and keep only the most recently updated entry for each key.
I think this can be done pretty easily with awk: you just need to hash $2 keyed by $1 once you delimit the file with a , separator.
awk -v FS=, -v OFS=, '{key[$1]=$2; next}END{for (i in key) print i,key[i]}' file
Also, if you want to speed things up while processing a million-line file, you can change the localization settings to speed up execution while parsing: pass LC_ALL=C locally to the command. See Stéphane Chazelas's answer on what "LC_ALL=C" does.
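For example, the same awk command with the C locale applied just for that one invocation:
LC_ALL=C awk -v FS=, -v OFS=, '{key[$1]=$2; next}END{for (i in key) print i,key[i]}' file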
In bash version 4, you can declare an associative array and populate it with the result of read, splitting your lines with a custom IFS:
$ declare -A a
$ while IFS=, read key value; do a["$key"]="$value"; done < fruits.txt
$ declare -p a
declare -A a=([MAR]="APPLE" [FEB]="APPLE" [JAN]="ORANGE" )
If you want to generate that specific output from the array, you'll also require a loop:
$ for key in "${!a[@]}"; do printf '%s,%s\n' "$key" "${a[$key]}"; done
MAR,APPLE
FEB,APPLE
JAN,ORANGE
The shortest one using GNU datamash:
datamash -st, -g1 last 2 <file
-s, -t, - sort the input and use , as the field separator
-g1 - group by the 1st column
last 2 - keep the last value of the 2nd column within each group
The output:
FEB,APPLE
JAN,ORANGE
MAR,APPLE

Output matching lines in linux

I want to match the numbers in the first file with the 2nd column of the second file and get the matching lines in a separate output file. Kindly let me know what is wrong with the code.
I have a list of numbers in a file IDS.txt
10028615
1003
10096344
10100
10107393
10113978
10163178
118747520
I have a second File called src1src22.txt
From src:'1' To src:'22'
CHEMBL3549542 118747520
CHEMBL548732 44526300
CHEMBL1189709 11740251
CHEMBL405440 44297517
CHEMBL310280 10335685
expected newoutput.txt
CHEMBL3549542 118747520
I have written this code
while read line; do cat src1src22.txt | grep -i -w "$line" >> newoutput.txt done<IDS.txt
Your command line works - except you're missing a semicolon:
while read line; do grep -i -w "$line" src1src22.txt; done < IDS.txt >> newoutput.txt
I have found a more efficient way to perform the task. Instead of a loop, use -f: it reads the patterns from the file named after it and searches for them in the following file. This reduces the chance of the invalid-pattern issues that can occur with grep, and the looping itself slows the process down.
grep -iw -f IDS.txt src1src22.txt >> newoutput.txt
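Since the IDs are plain numbers rather than regular expressions, -F (fixed-string matching) may speed this up further, and -i is unnecessary for purely numeric patterns; a possible variant:
grep -wF -f IDS.txt src1src22.txt >> newoutput.txt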
Try this -
awk 'NR==FNR{a[$2]=$1;next} $1 in a{print a[$1],$0}' f2 f1
CHEMBL3549542 118747520
Where f2 is src1src22.txt and f1 is IDS.txt.
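With the file names from the question filled in and the output redirected, the same command looks like:
awk 'NR==FNR{a[$2]=$1;next} $1 in a{print a[$1],$0}' src1src22.txt IDS.txt > newoutput.txt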

KSH: Loop performance

I need to process a file with approximately 120k lines that has the following format using ksh:
"[UserId=USER1]";"Client=001";"Locked_Status=0";"TYPE=A";"Last_Logon=00000000";"Valid_To=99991231";"Password_Change=20120131";"Last_Password_Change=29990"
"[UserId=USER2]";"Client=000";"Locked_Status=0";"TYPE=A";"Last_Logon=20141020";"Valid_To=00000000";"Password_Change=20140620";"Last_Password_Change=9501"
"[UserId=USER3]";"Client=002";"Locked_Status=0";"TYPE=A";"Last_Logon=00000000";"Valid_To=99991231";"Password_Change=20140304";"Last_Password_Change=9817"
The output should be something like:
[UserId=USER1]
Client=001
Locked_Status=0
TYPE=A
Last_Logon=00000000
Valid_To=99991231
Password_Change=20120131
Last_Password_Change=29985
[UserId=USER2]
Client=000
Locked_Status=0
TYPE=A
Last_Logon=20141020
Valid_To=00000000
Password_Change=20140620
Last_Password_Change=9496
[UserId=User3]
Client=002
Locked_Status=0
TYPE=A
Last_Logon=00000000
Valid_To=99991231
Password_Change=20140304
Last_Password_Change=9812
I initially used the following code to process the file:
for a in $(<$1)
do
a=$(echo $a|sed -e 's/;/ /g' -e 's/"//g')
for b in $a
do
print $b
done
done
It was taking around 3hrs to process 120k lines.
Then I tried to improve the code, changing it to the following:
for a in $(<$1)
do
printf "\n$(echo $a|sed -e 's/"//g' -e 's/;/\\n/g')"
done
That gave me 2hrs processing time; however, it still takes too long to process 120k lines.
At last I tried this code, which processed the 120k lines in 3secs!
perl -ne '
chomp;
s/\"//g;
s/;/\n/g;
print;
' <$1
Is there any way I can improve the code in KSH to achieve similar performance? I believe I must be missing something in my KSH code... please help me find it.
Thanks in advance
How about:
tr ';' '\n' < file | tr -d '"'
Your code is assigning every whitespace-delimited word to variable "a" in turn, and thus invoking sed once for each word in the file. Clearly a lot of accumulated overhead spawning all those processes. The idiom to iterate over the lines of a file is:
while IFS= read -r line; do ...; done < file
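Combined with parameter expansion in place of the per-iteration sed, that idiom keeps everything inside the shell. A rough sketch, assuming ksh93 (print and the ${var//pattern/replacement} expansion):
nl=$'\n'
while IFS= read -r line; do
line=${line//\"/}           # drop all double quotes
print -- "${line//;/$nl}"   # put each ';'-separated field on its own line
done < "$1"
This will still be slower than the tr or perl pipelines, but it avoids spawning a process per line.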
Your suggestion worked perfectly
Host> wc -l /tmp/MyTest
114449 /tmp/MyTest
Host> time tr ';' '\n' < /tmp/MyTest | tr -d '"' > /tmp/zuza.out
real 0m1.04s
user 0m1.06s
sys 0m0.08s
Host> time perl -ne '
chomp;
s/\"//g;
s/;/\n/g;
print "\n$_";
' </tmp/MyTest > /tmp/zuza
real 0m1.30s
user 0m0.60s
sys 0m0.08s

modify the contents of a file without a temp file

I have the following log file which contains lines like this
1345447800561|FINE|blah#13|txReq
1345447800561|FINE|blah#13|Req
1345447800561|FINE|blah#13|rxReq
1345447800561|FINE|blah#14|txReq
1345447800561|FINE|blah#15|Req
I am trying to extract the first field from each line and, depending on whether it belongs to blah#13, blah#14 or blah#15, create the corresponding files using the following script, which seems quite inefficient in terms of the number of temp files created. Any suggestions on how I can optimize it?
cat newLog | grep -i "org.arl.unet.maca.blah#13" >> maca13
cat newLog | grep -i "org.arl.unet.maca.blah#14" >> maca14
cat newLog | grep -i "org.arl.unet.maca.blah#15" >> maca15
cat maca10 | grep -i "txReq" >> maca10TxFrameNtf_temp
exec<blah10TxFrameNtf_temp
while read line
do
echo $line | cut -d '|' -f 1 >>maca10TxFrameNtf
done
cat maca10 | grep -i "Req" >> maca10RxFrameNtf_temp
while read line
do
echo $line | cut -d '|' -f 1 >>maca10TxFrameNtf
done
rm -rf *_temp
Something like this?
for m in org.arl.unet.maca.blah#13 org.arl.unet.maca.blah#14 org.arl.unet.maca.blah#15
do
grep -i "$m" newLog | grep "txReq" | cut -d'|' -f1 > log.$m
done
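Alternatively, a single awk pass can write all three files at once. A hedged sketch, assuming the tag always sits in the 3rd '|'-field, only the txReq lines are wanted, and the maca13TxFrameNtf-style output names are just illustrative:
awk -F'|' '/txReq/ && $3 ~ /blah#1[345]$/ {
n = substr($3, index($3, "#") + 1)        # extracts 13, 14 or 15
print $1 >> ("maca" n "TxFrameNtf")
}' newLog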
I've found it useful at times to use ex instead of grep/sed to modify text files in place without using temps ... it saves the trouble of worrying about uniqueness and writability of the temp file and its directory, etc. Plus it just seems cleaner.
In ksh I would use a code block with the edit commands and just pipe that into ex ...
{
# Any edit command that would work at the colon prompt of a vi editor will work
# This one was just a text substitution that would replace all contents of the line
# at line number ${NUMBER} with the word DATABASE ... which strangely enough was
# necessary at one time lol
# The wq is the "write/quit" command as you would enter it at the vi colon prompt
# which are essentially ex commands.
print "${NUMBER}s/.*/DATABASE/"
print "wq"
} | ex filename > /dev/null 2>&1
