KSH: Loop performance

I need to process a file with approximately 120k lines that has the following format using ksh:
"[UserId=USER1]";"Client=001";"Locked_Status=0";"TYPE=A";"Last_Logon=00000000";"Valid_To=99991231";"Password_Change=20120131";"Last_Password_Change=29990"
"[UserId=USER2]";"Client=000";"Locked_Status=0";"TYPE=A";"Last_Logon=20141020";"Valid_To=00000000";"Password_Change=20140620";"Last_Password_Change=9501"
"[UserId=USER3]";"Client=002";"Locked_Status=0";"TYPE=A";"Last_Logon=00000000";"Valid_To=99991231";"Password_Change=20140304";"Last_Password_Change=9817"
The output should be something like:
[UserId=USER1]
Client=001
Locked_Status=0
TYPE=A
Last_Logon=00000000
Valid_To=99991231
Password_Change=20120131
Last_Password_Change=29985
[UserId=USER2]
Client=000
Locked_Status=0
TYPE=A
Last_Logon=20141020
Valid_To=00000000
Password_Change=20140620
Last_Password_Change=9496
[UserId=USER3]
Client=002
Locked_Status=0
TYPE=A
Last_Logon=00000000
Valid_To=99991231
Password_Change=20140304
Last_Password_Change=9812
I initially used the following code to process the file:
for a in $(<$1)
do
a=$(echo $a|sed -e 's/;/ /g' -e 's/"//g')
for b in $a
do
print $b
done
done
It was taking around 3hrs to process 120k lines.
Then I tried to improve the code by changing it to the following:
for a in $(<$1)
do
printf "\n$(echo $a|sed -e 's/"//g' -e 's/;/\\n/g')"
done
That brought the processing time down to 2 hours, but it still takes too long for 120k lines.
Finally I tried this Perl code, which processed the 120k lines in 3 seconds!
perl -ne '
chomp;
s/\"//g;
s/;/\n/g;
print;
' <$1
Is there any way I can improve the KSH code to achieve similar performance? I believe I must be missing something in my KSH code... please help me find out.
Thanks in advance

How about:
tr ';' '\n' < file | tr -d '"'
Your code is assigning every whitespace-delimited word to variable "a" in turn, and thus invoking sed once for each word in the file. That is a lot of accumulated overhead from spawning all those processes. The idiom for iterating over the lines of a file is:
while IFS= read -r line; do ...; done < file
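Applied to the file from the question, a minimal sketch of that idiom in ksh93 (which supports ${var//pattern/replacement}) keeps everything inside the shell, so no external process is spawned per line:
nl=$'\n'                           # newline used as the replacement below
while IFS= read -r line; do
    line=${line//\"/}              # drop every double quote
    print -r -- "${line//;/$nl}"   # print each ';'-separated field on its own line
done < "$1"
This still loops in the shell, so it will not match tr or perl, but it avoids the per-word sed invocations that dominated the original runtime.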

Your suggestion worked perfectly:
Host> wc -l /tmp/MyTest
114449 /tmp/MyTest
Host> time tr ';' '\n' < /tmp/MyTest | tr -d '"' > /tmp/zuza.out
real 0m1.04s
user 0m1.06s
sys 0m0.08s
Host> time perl -ne '
chomp;
s/\"//g;
s/;/\n/g;
print "\n$_";
' </tmp/MyTest > /tmp/zuza
real 0m1.30s
user 0m0.60s
sys 0m0.08s

Related

Bash: replace specific text with its translation

There is a huge file; in it I want to replace all the text between '=' and '\n' with its translation. Here is an example:
input:
screen.LIGHT_COLOR=Lighting Color
screen.LIGHT_M=Light (Morning)
screen.AMBIENT_M=Ambient (Morning)
output:
screen.LIGHT_COLOR=Цвет Освещения
screen.LIGHT_M=Свет (Утро)
screen.AMBIENT_M=Эмбиент (Утро)
All I have managed to do until now is to extract and translate the targeted text.
while IFS= read -r line
do
echo $line | cut -d= -f2- | trans -b en:ru
done < file.txt
output:
Цвет Освещения
Свет (Утро)
Эмбиент (Утро)
*trans is short for translate-shell. It is slow, but does the job. -b for brief translation; en:ru means English to Russian.
If you have any suggestions or solutions I'll be glad to know, thanks!
Edit, in case someone needs it:
After discovering translate-shell's limitations I ended up going with @Taylor G.'s suggestion. It seems that translate-shell allows around 110 requests per some period of time; processing each line separately results in 1300 requests, which breaks the script.
Long story short, it is faster to pack all the data into a single request. It's possible to reduce the processing time from a couple of minutes to mere seconds. Sorry for the messy code; it's only my third day with this:
cut -s -d = -f 1 en_US.lang > option_en.txt
cut -s -d = -f 2 en_US.lang > value_en.txt
# merge lines
sed ':a; N; $!ba; s/\n/ :: /g' value_en.txt > value_en_block.txt
trans -b en:ru -i value_en_block.txt -o value_ru_block.txt
sed 's/ :: /\n/g' value_ru_block.txt > value_ru.txt
paste -d = option_en.txt value_ru.txt > ru_RU.lang
# remove temporary files
rm option_en.txt value_en.txt value_en_block.txt value_ru.txt value_ru_block.txt
Thanks Taylor G., Armali and every commentator
Using a pipe in a large loop is expensive. You can try the following instead:
cut -s -d = -f 1 file.txt > name.txt
cut -s -d = -f 2- file.txt | trans -b en:ru > translate.txt
paste -d = name.txt translate.txt
It should be much faster than your current script. I'm not sure how your trans method is written; if it doesn't already handle batch input, it needs to be updated to do so, e.g. with a while loop:
trans() {
    while read -r line; do
        : # translate "$line" and print the result here
    done
}
You already did most of the work, though it can be optimized a bit. What's missing is just to output the first part of the line up to the equal sign together with the translation:
while IFS='=' read -r left right
do echo "$left=$(trans -b en:ru <<<"$right")"
done <file.txt

Bash capturing in brace expansion

What would be the best way to use something like a capturing group in regex for brace expansion. For example:
touch {1,2,3,4,5}myfile{1,2,3,4,5}.txt
results in all permutations of the numbers and 25 different files. But in case I just want to have files like 1myfile1.txt, 2myfile2.txt,... with the first and second number the same, this obviously doesn't work. Therefore I'm wondering what would be the best way to do this?
I'm thinking about something like capturing the first number, and using it a second time. Ideally without a trivial loop.
Thanks!
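Not a capturing group, but one way to avoid spelling the loop out is to let a second brace expansion duplicate each number and have printf pair the duplicates up (a sketch, assuming bash and filenames without whitespace):
# {1..5}{,} expands to: 1 1 2 2 3 3 4 4 5 5
# printf reuses its two-%s format for every pair of arguments
printf '%smyfile%s.txt\n' {1..5}{,} | xargs touch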
Not using a regex, but with a for loop and seq you get the same result:
for i in $(seq 1 5); do touch ${i}myfile${i}.txt; done
Or tidier:
for i in $(seq 1 5);
do
touch ${i}myfile${i}.txt;
done
As an example, using echo instead of touch:
➜ for i in $(seq 1 5); do echo ${i}myfile${i}.txt; done
1myfile1.txt
2myfile2.txt
3myfile3.txt
4myfile4.txt
5myfile5.txt
Variation on MTwarog's answer with one less pipe/subprocess:
$ echo {1..5} | tr ' ' '\n' | xargs -I '{}' touch {}myfile{}.txt
$ ls -1 *myfile*
1myfile1.txt
2myfile2.txt
3myfile3.txt
4myfile4.txt
5myfile5.txt
You can use AWK to do that:
echo {1..5} | tr ' ' '\n' | awk '{print $1"myfile"$1".txt"}' | xargs touch
Explanation:
echo {1..5} - prints range of numbers
tr ' ' '\n' - splits numbers to separate lines
awk '{print $1"myfile"$1".txt"}' - enables you to format the output using the previously printed numbers
xargs touch - passes filenames to touch command (creates files)

Alternating output in bash for loop from two grep

I'm trying to search through files and extract two pieces of relevant information every time they appear in the file. The code I currently have:
#!/bin/bash
echo "Utilized reads from ustacks output" > reads.txt
str1="utilized reads:"
str2="Parsing"
for file in /home/desaixmg/novogene/stacks/sample01/conda_ustacks.o*; do
reads=$(grep $str1 $file | cut -d ':' -f 3)
samples=$(grep $str2 $file | cut -d '/' -f 8)
echo $samples $reads >> reads.txt
done
It runs through each file (the files have varying numbers of instances of these phrases) and gives me one row of output per file:
PopA_15.fq 1081264
PopA_16.fq PopA_17.fq 1008416 554791
PopA_18.fq PopA_20.fq PopA_21.fq 604610 531227 595129
...
I want it to match each instance (i.e. the 1st instance of both greps next to each other):
PopA_15.fq 1081264
PopA_16.fq 1008416
PopA_17.fq 554791
PopA_18.fq 604610
PopA_20.fq 531227
PopA_21.fq 595129
...
How do I do this? Thank you
Assuming that your Input_file is the same as the sample shown, with an even number of columns on each line where the first half are PopA values and the second half are digit values, the following awk may help:
awk '{for(i=1;i<=(NF/2);i++){print $i,$((NF/2)+i)}}' Input_file
Output will be as follows.
PopA_15.fq 1081264
PopA_16.fq 1008416
PopA_17.fq 554791
PopA_18.fq 604610
PopA_20.fq 531227
PopA_21.fq 595129
In case you want to pass the output of a command to the awk command, you could do your command | awk '...' instead; there is no need to add Input_file to the above awk command then.
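For example, a rough sketch of feeding the same awk program from a pipe (the sample line here is made up purely for illustration):
# same awk program, reading from stdin instead of Input_file
printf 'PopA_15.fq PopA_16.fq 1081264 1008416\n' |
awk '{for(i=1;i<=(NF/2);i++){print $i,$((NF/2)+i)}}'
# prints: PopA_15.fq 1081264
#         PopA_16.fq 1008416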
This is what ended up working for me... any tips for more efficient code are definitely welcome:
#!/bin/bash
echo "Utilized reads from ustacks output" > reads.txt
str1="utilized reads:"
str2="Parsing"
for file in /home/desaixmg/novogene/stacks/sample01/conda_ustacks.o*; do
reads=$(grep $str1 $file | cut -d ':' -f 3)
samples=$(grep $str2 $file | cut -d '/' -f 8)
paste <(echo "$samples" | column -t) <(echo "$reads" | column -t) >> reads.txt
done
This provides the desired output described above.

Writing a script for large text file manipulation (iterative substitution of duplicated lines), weird bugs and very slow.

I am trying to write a script which takes a directory containing text files (384 of them) and modifies duplicate lines that have a specific format in order to make them not duplicates.
In particular, I have files in which some lines begin with the '#' character and contain the substring 0:0. A subset of these lines are duplicated one or more times. For those that are duplicated, I'd like to replace 0:0 with i:0 where i starts at 1 and is incremented.
So far I've written a bash script that finds duplicated lines beginning with '#', writes them to a file, then reads them back and uses sed in a while loop to search and replace the first occurrence of the line to be replaced. This is it below:
#!/bin/bash
fdir=$1"*"
#for each fastq file
for f in $fdir
do
    (
    #find duplicated read names and write to file $f.txt
    sort $f | uniq -d | grep ^# > "$f".txt
    #loop over each duplicated readname
    while read in; do
        rname=$in
        i=1
        #while this readname still exists in the file increment and replace
        while grep -q "$rname" $f; do
            replace=${rname/0:0/$i:0}
            sed -i.bu "0,/$rname/s/$rname/$replace/" "$f"
            let "i+=1"
        done
    done < "$f".txt
    rm "$f".txt
    rm "$f".bu
    echo "done" >> progress.txt
    )&
    background=( $(jobs -p) )
    if (( ${#background[@]} == 40 )); then
        wait -n
    fi
done
The problem with it is that it's impractically slow. I ran it on a 48-core computer for over 3 days and it hardly got through 30 files. It also seemed to have removed about 10 files, and I'm not sure why.
My question is where are the bugs coming from and how can I do this more efficiently? I'm open to using other programming languages or changing my approach.
EDIT
Strangely the loop works fine on one file. Basically I ran
sort $f | uniq -d | grep ^# > "$f".txt
while read in; do
rname=$in
i=1
while grep -q "$rname" $f; do
replace=${rname/0:0/$i:0}
sed -i.bu "0,/$rname/s/$rname/$replace/" "$f"
let "i+=1"
done
done < "$f".txt
To give you an idea of what the files look like below are a few lines from one of them. The thing is that even though it works for the one file, it's slow. Like multiple hours for one file of 7.5 M. I'm wondering if there's a more practical approach.
With regard to the file deletions and other bugs I have no idea what was happening. Maybe it was running into memory collisions or something when they were run in parallel?
Sample input:
#D00269:138:HJG2TADXX:2:1101:0:0 1:N:0:CCTAGAAT+ATTCCTCT
GATAAGGACGGCTGGTCCCTGTGGTACTCAGAGTATCGCTTCCCTGAAGA
+
CCCFFFFFHHFHHIIJJJJIIIJJIJIJIJJIIBFHIHIIJJJJJJIJIG
#D00269:138:HJG2TADXX:2:1101:0:0 1:N:0:CCTAGAAT+ATTCCTCT
CAAGTCGAACGGTAACAGGAAGAAGCTTGCTTCTTTGCTGACGAGTGGCG
Sample output:
#D00269:138:HJG2TADXX:2:1101:1:0 1:N:0:CCTAGAAT+ATTCCTCT
GATAAGGACGGCTGGTCCCTGTGGTACTCAGAGTATCGCTTCCCTGAAGA
+
CCCFFFFFHHFHHIIJJJJIIIJJIJIJIJJIIBFHIHIIJJJJJJIJIG
#D00269:138:HJG2TADXX:2:1101:2:0 1:N:0:CCTAGAAT+ATTCCTCT
CAAGTCGAACGGTAACAGGAAGAAGCTTGCTTCTTTGCTGACGAGTGGCG
Here's some code that produces the required output from your sample input.
Again, it is assumed that your input file is sorted by the first value (up to the first space character).
time awk '{
#dbg if (dbg) print "#dbg:prev=" prev
if (/^#/ && prev!=$1) {fixNum=0 ;if (dbg) print "prev!=$1=" prev "!=" $1}
if (/^#/ && (prev==$1 || NR==1) ) {
prev=$1
n=split($1,tmpArr,":") ; n++
#dbg if (dbg) print "tmpArr[6]="tmpArr[6] "\tfixNum="fixNum
fixNum++;tmpArr[6]=fixNum;
# magic to rebuild $1 here
for (i=1;i<n;i++) {
tmpFix ? tmpFix=tmpFix":"tmpArr[i]"" : tmpFix=tmpArr[i]
}
$1=tmpFix ; $0=$0
print $0
}
else { tmpFix=""; print $0 }
}' file > fixedFile
output
#D00269:138:HJG2TADXX:2:1101:1:0 1:N:0:CCTAGAAT+ATTCCTCT
GATAAGGACGGCTGGTCCCTGTGGTACTCAGAGTATCGCTTCCCTGAAGA
+
CCCFFFFFHHFHHIIJJJJIIIJJIJIJIJJIIBFHIHIIJJJJJJIJIG
#D00269:138:HJG2TADXX:2:1101:2:0 1:N:0:CCTAGAAT+ATTCCTCT
CAAGTCGAACGGTAACAGGAAGAAGCTTGCTTCTTTGCTGACGAGTGGCG
I've left a few of the #dbg:... statements in place (but they are now commented out) to show how you can run a small set of data as you have provided, and watch the values of variables change.
Assuming a non-csh shell, you should be able to copy/paste the code block into a terminal window command line and replace file > fixedFile at the end with your real file name and a new name for the fixed file. Recall that awk 'program' file > file (actually, any ...file > file) will truncate the existing file and then try to write to it, so you can lose all the data of a file by trying to use the same name.
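A minimal sketch of the safe pattern, should you want the result to end up under the original name ('{ print }' and the file names are just placeholders):
# write to a temporary file first and replace the original only on success,
# so a slip like "file > file" can never truncate the input
awk '{ print }' file > file.tmp && mv file.tmp file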
There are probably some syntax improvements that will reduce the size of this code, and there might be 1 or 2 things that could be done to make it faster, but it should run very quickly as is. If not, please post the result of the time command that appears at the end of the run, i.e.
real 0m0.18s
user 0m0.03s
sys 0m0.06s
IHTH
#!/bin/bash
i=4
sort $1 | uniq -d | grep ^# > dups.txt
while read in; do
    if [ $((i%4)) -eq 0 ] && grep -q "$in" dups.txt; then
        x="$in"
        x=${x/"0:0 "/$i":0 "}
        echo "$x" >> $1"fixed.txt"
    else
        echo "$in" >> $1"fixed.txt"
    fi
    let "i+=1"
done < $1

Read lines from a file and save them in a comma-separated string in a variable

I want to read lines from a text file and save them in a variable.
cat ${1} | while read name; do
namelist=${name_list},${name}
done
the file looks like this:
David
Kevin
Steve
etc.
and I want to get this output instead:
David, Kevin, Steve etc.
and save it to the variable ${name_list}
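For reference, a compact sketch that avoids the loop entirely by letting paste serialize the file (note: there is no space after the commas, and file.txt stands in for your input file):
# -s joins all input lines into one line, -d ',' makes the separator a comma
name_list=$(paste -s -d ',' file.txt)
echo "$name_list"    # David,Kevin,Steve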
The command:
$ tr -s '\n ' ',' < sourcefile.txt # Replace newlines and spaces with [,]
This will likely return a , as the last character (and potentially the first).
To shave off the comma(s) and return a satisfying result:
$ tmp=$(tr -s '\n ' ',' < sourcefile.txt) # store the previous result
$ tmp=${tmp%,} # shave off the last comma
$ name_list=${tmp#,} # shave off any first comma
EDIT
This solution runs 44% faster and yields consistent and valid results across all Unix platforms.
# This solution
python -mtimeit -s 'import subprocess' "subprocess.call('tmp=$(tr -s "\n " "," < input.txt);echo ${tmp%,} >/dev/null',shell = True)"
100 loops, best of 3: 3.71 msec per loop
# Highest voted:
python -mtimeit -s 'import subprocess' "subprocess.call('column input.txt | sed "s/\t/,/g" >/dev/null',shell = True)"
100 loops, best of 3: 6.69 msec per loop
name_list=""
for name in `cat file.txt`
do VAR="$name_list,$i"
done
EDIT: this script leaves a "," at the beginning of name_list. There are a number of ways to fix this. For example, in bash this should work:
name_list=""
for name in `cat file.txt`; do
if [[ -z $name_list ]]; then
name_list="$i"
else
name_list="$name_list,$i"
fi
done
RE-EDIT: so, thanks to the legitimate complaints of Fredrik:
name_list=""
while read name
do
if [[ -z $name_list ]]; then
name_list="$name"
else
name_list="$name_list,$name"
fi
done < file.txt
Using column, and sed:
namelist=$(column input | sed 's/\t/,/g')
variable=`perl -lne 'next if(/^\s*$/);if($a){$a.=",$_"}else{$a=$_};END{print $a}' your_file`
