Perl text mining code can't handle massive amounts of data - performance

I'm doing a large text mining project. I have 100,000 text files. I've extracted two- and three-word phrases from sets of 1,000 documents at a time and have created 100 files. Each file has roughly 8 million lines in this format:
total_references num_docs_referencing_phrase phrase
I want to create an aggregate list of total references and number of docs referencing each phrase by processing the 100 intermediate files. To that end I wrote this program.
#!/usr/bin/perl -w
$| = 1 ; # Don't buffer output

use File::Find ;

$dir = "/home/sl/phrase-counts" ;

find(\&processFile, $dir) ;

for $key ( keys %TOTALREFS ) {
    print "$TOTALREFS{$key} $NUMDOCS{$key} ${key}\n" ;
}

sub processFile {
    my $file = $_ ;
    my $fullName = $File::Find::name ;

    if ( $fullName =~ /\.txt$/ ) {
        $date = `date` ;
        chomp $date ;
        print "($date) file: $fullName\n" ;

        open INFILE, "$fullName" or die "Cannot read ${fullName}";
        while ( <INFILE> ) {
            my $line = $_ ;
            chomp $line ;
            ( $totalRefs, $numDocs, $phrase ) = split (/\s+/, $line, 3) ;
            $TOTALREFS{$phrase} += $totalRefs ;
            $NUMDOCS{$phrase} += $numDocs ;
        }
        close ( INFILE ) ;
    }
}
The code produces strange errors after 8 or so files are processed and then it hangs, i.e. it stops listing files it should be processing.
Use of uninitialized value $date in scalar chomp at ./getCounts line 21.
Use of uninitialized value $date in concatenation (.) or string at ./getCounts line 22.
I don't believe the problem is really my date command, especially since it runs fine for a number of the early files and the failure does not occur at the same point in every run. I assume my program is consuming too much system resource and corrupting the state of the running environment. Watching top, memory use climbs to 97% of the machine, which concerns me, although I notice that the errors and the hang occur before top shows memory as nearly exhausted. And there is some swap on the machine.
My question is, how can I rewrite this program to actually complete its execution? With 8 million lines of data in each of 100 files, there could be 800 million lines of output, although I would guess the total is more likely in the range of 50-100 million lines. I have done some cleanup of the data and could consider more aggressive sanitizing of phrases to cut down the numbers, but I'd like to understand how I can design this code better.
I've seen articles that tell programmers to put their data into a database. My concern is the time it might take to update a database 100 million times.
Suggestions?

It looks like you're running on a *nix system, so make sort do all the work for you. It knows how to use memory efficiently.
sort -k 3 all_your_input_files*.txt > sorted.txt
Why do this? Because now all lines corresponding to the same phrase appear in a single block within the file, so you can compute totals easily: just write a short Perl script that adds the current line's numbers to the current totals, and writes them out whenever the phrase changes from the previous line (and at the end):
my ($oldPhrase, $totTotalRefs, $totNumDocs) = (undef, 0, 0);

while ( <INFILE> ) {
    my $line = $_ ;
    chomp $line ;

    ( $totalRefs, $numDocs, $phrase ) = split (/\s+/, $line, 3) ;

    if (defined($oldPhrase) && $phrase ne $oldPhrase) {
        print "$totTotalRefs $totNumDocs $oldPhrase\n" ;
        $totTotalRefs = $totNumDocs = 0;
    }

    $totTotalRefs += $totalRefs ;
    $totNumDocs += $numDocs ;
    $oldPhrase = $phrase;
}

close ( INFILE ) ;

print "$totTotalRefs $totNumDocs $oldPhrase\n" ;
The above code is untested, but should work with appropriate boilerplate added I think.
[EDIT: Fix bug in which $oldPhrase never gets set, as suggested by Sol.]
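For reference, the boilerplate might look something like this (an untested sketch; it assumes the sorted output was written to sorted.txt by the sort command above and that the totals go to STDOUT):

#!/usr/bin/perl
use strict;
use warnings;

# Read the output of: sort -k 3 all_your_input_files*.txt > sorted.txt
open my $in_fh, '<', 'sorted.txt' or die "Cannot read sorted.txt: $!";

my ($oldPhrase, $totTotalRefs, $totNumDocs) = (undef, 0, 0);

while ( my $line = <$in_fh> ) {
    chomp $line;
    my ($totalRefs, $numDocs, $phrase) = split /\s+/, $line, 3;

    # When the phrase changes, emit the totals accumulated for the previous one.
    if (defined($oldPhrase) && $phrase ne $oldPhrase) {
        print "$totTotalRefs $totNumDocs $oldPhrase\n";
        $totTotalRefs = $totNumDocs = 0;
    }

    $totTotalRefs += $totalRefs;
    $totNumDocs   += $numDocs;
    $oldPhrase     = $phrase;
}
close $in_fh;

# Don't forget the totals for the very last phrase.
print "$totTotalRefs $totNumDocs $oldPhrase\n" if defined $oldPhrase;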

You are storing all of the different phrases as keys for both %TOTALREFS and %NUMDOCS, so things are at least twice as bad as they need to be.
I suggest you try the following
Add use strict and use warnings (instead of -w) and declare all of your variables properly
Don't use capitals in your variable names. Capital letters are reserved for global identifiers
Don't start 100 subprocesses just to get the time of day. Just use localtime like this
printf "(%s) file: %s\n", scalar localtime, $full_name;
Use find just to generate an array of the files to be processed, so it would look like this
my @files;
find(sub {
    push @files, $File::Find::name if -f and /\.txt$/i;
}, $dir);
Then you can process each file with a simple for loop
for my $file (@files) {
    ...
}
Take two passes through the files, the first time generating a hash that relates each phrase to an integer starting at zero, and the second that uses those integers to index arrays @total_refs and @num_docs and increment their elements
You may still run out of memory, but those measures will certainly give you a better chance.
Update
Just to be clear, this is how I imagine it would work. I've done this as a single pass, but it may be better to write it as two passes as I described so that you can check your intermediate data.
Note that this isn't tested apart from making sure that it compiles.
#!/usr/bin/perl

use strict;
use warnings;
use 5.010;
use autodie;

STDOUT->autoflush;

use File::Find;

my $dir = '/home/sl/phrase-counts';

my @files;
find(sub {
    push @files, $File::Find::name if -f and /\.txt$/i;
}, $dir);

my (%phrases, @total_refs, @num_docs);
my $num_phrases = 0;

for my $file (@files) {

    printf "(%s) file: %s\n", scalar localtime, $file;

    open my $in_fh, '<', $file;

    while (<$in_fh>) {
        chomp;
        my ($total_refs, $num_docs, $phrase) = split ' ', $_, 3;
        my $phrase_num = $phrases{$phrase} //= $num_phrases++;
        $total_refs[$phrase_num] += $total_refs;
        $num_docs[$phrase_num]   += $num_docs;
    }
}
for my $phrase (keys %phrases) {
    my $phrase_num = $phrases{$phrase};
    printf "%s %s %s\n",
        $total_refs[$phrase_num],
        $num_docs[$phrase_num],
        $phrase;
}

Trying to use more resources than are available causes exceptions about being unable to allocate memory, or results in system calls returning errors. It doesn't corrupt memory.
In this case, the result of backticks is undef, which means the command could not be executed. That could very well be because you have insufficient memory left. Where did you get the idea that being unable to execute a program is the result of corrupted memory?! Furthermore, you have an error you don't understand, yet you didn't check what error was returned? Backticks sets $? (and $! when $? is negative) as per system. Assuming it's a bug in Perl is a very bad assumption to make, especially when the system tells you what error occurred.
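For example, a check along these lines (a sketch, not from the original post) would surface the real failure instead of the later "uninitialized value" warnings:

my $date = `date`;
if ( !defined $date ) {
    # Backticks return undef when the child process could not be started;
    # $? is then -1 and $! carries the OS-level reason (e.g. "Cannot allocate memory").
    warn "Could not run date: $! (\$? = $?)\n";
}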
Use less memory, either through the use of a more appropriate and/or efficient data structure, or by keeping a portion of the data out of memory (e.g. on disk or in a database).
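One way to keep the counts out of RAM, sketched here under the assumption that the DB_File module (Berkeley DB) is available, is to tie the two hashes to on-disk files so the accumulation loop stays essentially unchanged (the .db filenames are made up):

#!/usr/bin/perl
# usage (hypothetical): perl aggregate_db.pl /home/sl/phrase-counts/*.txt
use strict;
use warnings;
use Fcntl;      # O_RDWR, O_CREAT
use DB_File;

# Tie the hashes to on-disk B-tree files instead of holding them in memory.
tie my %total_refs, 'DB_File', 'total_refs.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
    or die "Cannot tie total_refs.db: $!";
tie my %num_docs, 'DB_File', 'num_docs.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
    or die "Cannot tie num_docs.db: $!";

while (<>) {
    chomp;
    my ($refs, $docs, $phrase) = split /\s+/, $_, 3;
    $total_refs{$phrase} += $refs;   # reads and writes go to disk
    $num_docs{$phrase}   += $docs;
}

print "$total_refs{$_} $num_docs{$_} $_\n" for keys %total_refs;

Whether 100 million tied-hash updates are fast enough is something to measure; the sort-based answer above avoids random access to a large structure altogether.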

Related

Checking really fast if one of many strings exists in one of many other strings, in Perl

Let's say I have a set of 100,000 strings, each one around 20-50 bytes on average.
Then let's say I have another set of 100,000,000 strings, each one also around 20-50 bytes on average.
I'd like to go through all 100,000,000 strings from the second set and check if any one of the strings from the first set exist in any one string from the second set.
Example: string from first set: "abc", string from second set: "123abc123" = match!
I've tried using Perl's index(), but it's not fast enough. Is there a better way to do this type of matching?
I found Algorithm::AhoCorasick::XS on CPAN, which implements the classic, very efficient multiple-string-search Aho-Corasick algorithm (the same one used by grep -F), and it seems to be reasonably fast (about half the speed of an equivalent grep invocation):
Example script:
#!/usr/bin/env perl
use warnings;
use strict;
use autodie;
use feature qw/say/;
use Algorithm::AhoCorasick::XS;

open my $set1, "<", "set1.txt";
my @needles = <$set1>;
chomp @needles;

my $search = Algorithm::AhoCorasick::XS->new(\@needles);

open my $set2, "<", "set2.txt";
while (<$set2>) {
    chomp;
    say if defined $search->first_match($_);
}
and using it (with randomly generated test files):
$ wc -l set1.txt set2.txt
10000 set1.txt
500000 set2.txt
510000 total
$ time perl test.pl | wc -l
458414
real 0m0.403s
user 0m0.359s
sys 0m0.031s
$ time grep -Ff set1.txt set2.txt | wc -l
458414
real 0m0.199s
user 0m0.188s
sys 0m0.031s
You should use a regex alternation, like:
my @string = qw/abc def ghi/;
my $matcher = qr/@{[join '|', map quotemeta, sort @string]}/;
This should be faster than using index. But it can be made faster yet:
Up to a certain limit, depending on both the length and number of the strings, perl will build a trie for efficient matching; see e.g. https://perlmonks.org/?node_id=670558. You will want to experiment to find how many strings you can include in a single regex, then split your strings into chunks of that size to generate an array of regexes. Then combine those separate regexes into a single one (untested):
my @search_strings = ...;
my @matchers;
my $string_limit = 3000; # a guess on my part

my @strings = sort @search_strings;
while (my @subset = splice @strings, 0, $string_limit) {
    push @matchers, qr/^.*?@{[join '|', map quotemeta, sort @subset]}/s;
}
my $matcher = '(?:' . join('|', map "(??{\$matchers[$_]})", 0..$#matchers) . ')';
$matcher = do { use re 'eval'; qr/$matcher/ };
/$matcher/ and print "line $. matched: $_" while <>;
The (??{...}) construct is needed to join the separate regexes; without it, the subregexes are all just interpolated and the joined regex compiled all together, removing the trie optimization. Each subregex starts with ^.*? so it searches the entire string; without that, the joined regex would have to invoke each subregex separately for each position in the string.
Using contrived data, I'm seeing about 3000 strings searched per second with this approach in a not very fast vm. Using the naive regex is less than 50 strings per second. Using grep, as suggested in a comment by Shawn, is faster (about 4200 strings per second for me) but gives you less control if you want to do things like identify which strings matched or at what positions.
You may want to have a look at https://github.com/leendo/hello-world .
Its parallel processing makes it really fast. Either just type in all the search terms individually or as || conjunctions, or (better) adapt it to run the second set programmatically in one go.
Here is an idea: you could partition the dictionary into lists of words that have the same 2- or 3-letter prefix. You would then iterate over the large set and, for each position in each string, extract the prefix and try to match the strings that have this prefix.
You would use a hash table to store the lists, giving O(1) lookup time.
If some words in the dictionary are shorter than the prefix length, you would have to special case short words.
Making prefixes longer will make the hashtable larger but the lists shorter, improving the test time.
I have no clue if this can be implemented efficiently in Perl.
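For what it's worth, a rough, untested Perl sketch of that idea might look like this (the 3-character prefix length and the tiny word list are just placeholders):

use strict;
use warnings;

my $prefix_len = 3;
my @dictionary = qw(abc defg hijkl);   # stand-in for the 100,000 search strings

# Partition the dictionary into buckets keyed by the first $prefix_len characters.
my %buckets;
for my $word (@dictionary) {
    next if length($word) < $prefix_len;   # short words would need special-casing
    push @{ $buckets{ substr $word, 0, $prefix_len } }, $word;
}

# Return true if any dictionary word occurs anywhere in $line.
sub line_matches {
    my ($line) = @_;
    for my $pos ( 0 .. length($line) - $prefix_len ) {
        my $candidates = $buckets{ substr $line, $pos, $prefix_len } or next;
        for my $word (@$candidates) {
            return 1 if substr($line, $pos, length $word) eq $word;
        }
    }
    return 0;
}

print "match\n" if line_matches("123abc123");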
The task is quite simple and is probably performed on an everyday basis around the globe.
Please see the following code snippet for one of many possible implementations:
use strict;
use warnings;
use feature 'say';

use Getopt::Long qw(GetOptions);
use Pod::Usage;

my %opt;
my $re;

GetOptions(
    'sample|s=s' => \$opt{sample},
    'debug|d'    => \$opt{debug},
    'help|h'     => \$opt{help},
    'man|m'      => \$opt{man}
) or pod2usage(2);

pod2usage(1) if $opt{help};
pod2usage(-verbose => 2) if $opt{man};
pod2usage("$0: No files given.") if ((@ARGV == 0) && (-t STDIN));

$re = read_re($opt{sample});

say "DEBUG: search pattern is ($re)" if $opt{debug};

find_in_file($re);

sub find_in_file {
    my $search = shift;

    while( <> ) {
        chomp;
        next unless /$search/;
        say;
    }
}

sub read_re {
    my $filename = shift;

    open my $fh, '<', $filename
        or die "Couldn't open $filename";
    my @data = <$fh>;
    close $fh;

    chomp @data;

    my $re = join('|', @data);

    return $re;
}

__END__
=head1 NAME

file_in_file.pl - search for strings of one file in another

=head1 SYNOPSIS

 file_in_file.pl [options] -s sample.txt dbfile.txt

 Options:
   -s,--sample  search pattern file
   -d,--debug   debug flag
   -h,--help    brief help message
   -m,--man     full documentation

=head1 OPTIONS

=over 4

=item B<-s,--sample>

Search pattern file

=item B<-d,--debug>

Print debug information.

=item B<-h,--help>

Prints a brief help message and exits.

=item B<-m,--man>

Prints the manual page and exits.

=back

B<This program> searches for patterns from the B<sample> file in B<dbfile>

=cut

Renaming multiple files at once

I have a large data set of files that are all named like this:
proportional.rank_9.cycle_10157.ratio_9
proportional.rank_9.cycle_10158.ratio_0
proportional.rank_9.cycle_10159.ratio_130
proportional.rank_9.cycle_10160.ratio_7
But of course in hindsight this is going to make reading in the files difficult since I'll need the file names and each may have a different ratio. Is there a way to rename all the files at once so that the ratio_* is gone?
Ideally I'd like them to be in this form:
proportional.rank_9.cycle_10157
proportional.rank_9.cycle_10158
proportional.rank_9.cycle_10159
proportional.rank_9.cycle_10160
# set X to all files of the form proportional.rank*
X=$(ls proportional.rank*)

# print out to check X is set
echo $X

# loop over all files in X
for i in $X ; do
    before_stem=${i%.ratio_*}        # what I want to keep
    after_stem=${i#*.ratio_}         # what I don't want
    new_end=$(printf ".out")         # give the new ending
    mv $i ${before_stem}${new_end}   # concatenate the old beginning and new ending
done
%: take everything before the pattern that follows the %
pound sign: take everything after the pattern that follows the #
printf: just like C printf
proportional.rank_9.cycle_10157.ratio_9
proportional.rank_9.cycle_10158.ratio_0
proportional.rank_9.cycle_10159.ratio_130
proportional.rank_9.cycle_10160.ratio_7
becomes
proportional.rank_9.cycle_10157.out
proportional.rank_9.cycle_10158.out
proportional.rank_9.cycle_10159.out
proportional.rank_9.cycle_10160.out

decode_utf8 consumes time in execution

The following is the code snippet that I have been working on to improve its speed.
use strict;
use warnings;
use Encode;

open(IN, "<:utf8", $ARGV[0]) or die "Cannot open $ARGV[0]:$!\n"; # a huge file of 35,000 lines in Devanagari script
my @in = <IN>;
close(IN);

my $key = "अच्छा"; # key to be matched, a Devanagari string

foreach my $in (@in) {
    chomp($in);
    $key = decode_utf8($key);
    $in = decode_utf8($in);
    if ($key eq $in) {
        print "$key: matched\n";
    }
    else {
        print "Not matched\n";
    }
}
I am trying to match the lines in the file against the key.
Profiling my code gives the following results.
The results are such that decode_utf8 consumes 34% of time.
Since my data is in UTF-8, I used decode_utf8.
What can I do to improve the speed here? Is there another workaround for replacing decode_utf8 in the code to match the Unicode data?
The results are such that decode_utf8 consumes 34% of time.
Well yeah, that's basically all your program does.
More importantly, your code is buggy. You're decoding previously decoded strings!
You decode the contents of the file when you read them (via :utf8), and then you decode the already-decoded content in the loop.
You decode the contents of $key every pass through the loop so that by the fourth pass, you are using decode_utf8(decode_utf8(decode_utf8(decode_utf8($key)))).
Fix:
use utf8;                             # Source code encoded using UTF-8.
use open ':std', ':encoding(UTF-8)';  # Term provides and expects UTF-8. Default for files.

use strict;
use warnings;

my $key = "अच्छा";

my $found = 0;
while (my $line = <>) {
    chomp($line);
    if ($line eq $key) {
        $found = 1;
        last;
    }
}

if ($found) {
    print "Match found\n";
}
else {
    print "No match\n";
}
This fixes other issues too:
Encodes outputs (using use open ':std').
Doesn't needlessly use global variables. (Use open my $IN instead of open IN.)
Doesn't needlessly load the entire file into memory.
Doesn't needlessly read the entire file.
Doesn't print Not matched 34,999 times when the key is found.
Avoids :utf8 in favour of :encoding(UTF-8).
Doesn't reinvent <>.
Doesn't hide a die in the middle of a line. (Put a line break before or die.)
Doesn't use "cannot". (Use "can't"!)

Find lines that have partial matches

So I have a text file that contains a large number of lines. Each line is one long string with no spacing; however, each line contains several pieces of information. The program knows how to differentiate the important information in each line. The program identifies the first 4 numbers/letters of the line as corresponding to a specific instrument. Here is a small example portion of the text file.
example text file
1002IPU3...
POIPIPU2...
1435IPU1...
1812IPU3...
BFTOIPD3...
1435IPD2...
As you can see, there are two lines that contain 1435 within this text file, which coincides with a specific instrument. However, these lines are not identical. The program I'm using cannot do its calculation if there are duplicates of the same station (i.e., there are two 1435* stations). I need a way to search through my text files and identify whether there are any duplicates of the partial strings that represent the stations within the file, so that I can delete one or both of the duplicates. If a Bash script could output the line numbers of the duplicates and what the duplicate lines say, that would be appreciated. I think there might be an easy way to do this, but I haven't been able to find any examples. Your help is appreciated.
If all you want to do is detect if there are duplicates (not necessarily count or eliminate them), this would be a good starting point:
awk '{ if (++seen[substr($0, 1, 4)] > 1) printf "Duplicates found : %s\n",$0 }' inputfile.txt
For that matter, it's a good starting point for counting or eliminating, too, it'll just take a bit more work...
If you want the count of duplicates:
awk '{a[substr($0,1,4)]++} END {for (i in a) {if(a[i]>1) print i": "a[i]}}' test.in
1435: 2
or:
{
    a[substr($0,1,4)]++ # put prefixes to array and count them
}
END { # in the end
    for (i in a) { # go thru all indexes
        if (a[i]>1) print i": "a[i] # and print out the duplicate prefixes and their counts
    }
}
Slightly roundabout, but this should work:
cut -c 1-4 file.txt | sort -u > list
for i in `cat list`;
do
    echo -n "$i "
    grep -c ^"$i" file.txt   # this tells you how many occurrences of each 'station'
done
Then you can do whatever you want with the ones that occur more than once.
Use the following Python script (Python 2.7 syntax):
#!/usr/bin/python
file_name = "device.txt"
f1 = open(file_name, 'r')
device = {}
line_count = 0
for line in f1:
    line_count += 1
    if device.has_key(line[:4]):
        device[line[:4]] = device[line[:4]] + "," + str(line_count)
    else:
        device[line[:4]] = str(line_count)
f1.close()
print device
Here the script reads each line and treats the first 4 characters of each line as the device name. It builds the dictionary device as key-value pairs, with the device name as the key and the line numbers where that device name is found as the value.
The output would be:
{'POIP': '2', '1435': '3,6', '1002': '1', '1812': '4', 'BFTO': '5'}
this might help you out!!

aggregate totals when key changes in Perl

I have an input file with the following format
ant,1
bat,1
bat,2
cat,4
cat,1
cat,2
dog,4
I need to aggregate the col2 for each key (column1) so the result is:
ant,1
bat,3
cat,7
dog,4
Other considerations:
Assume that the input file is sorted
The input file is pretty large (about 1M rows), so I don't want to use an array and take up memory
Each input line should be processed as we read it, and move to the next line
I need to write the results to an outFile
I need to do this in Perl, but a pseudo-code or algorithm would help just as fine
Thanks!
This is what I came up with... I want to see if it can be written in a better or more elegant way.
open infile, outFile

prev_line = <infile>;
print_line = $prev_line;

while(<>){
    curr_line = $_;
    @prev_cols = split(',', $prev_line);
    @curr_cols = split(',', $curr_line);
    if ( $prev_cols[0] eq $curr_cols[0] ){
        $prev_cols[1] += $curr_cols[1];
        $print_line = "$prev_cols[0],$prev_cols[1]\n";
        $print_flag = 0;
    }
    else{
        print outFile "$print_line";
        $print_flag = 1;
        $print_line = $curr_line;
    }
    $prev_line = $curr_line;
}

if($print_flag == 1){
    print outFile "$curr_line";
}
else{
    print outFile "$print_line";
}
#!/usr/bin/perl

use warnings;
use strict;
use integer;

my %a;
while (<>) {
    my ($animal, $n) = /^\s*(\S+)\s*,\s*(\S+)/;
    $a{$animal} += $n if defined $n;
}
print "$_,$a{$_}\n" for sort keys %a;
This short code affords you the chance to learn Perl's excellent hash facility, as %a. Hashes are central to Perl. One really cannot write fluent Perl without them.
Observe incidentally that the code exercises Perl's interesting autovivification feature. The first time a particular animal is encountered in the input stream, no count exists, so Perl implicitly assumes a pre-existing count of zero. Thus, the += operator does not fail, even though it seems that it should. It just adds to zero in the first instance.
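A tiny illustration of that behaviour (not part of the original answer):

my %a;
$a{ant} += 1;       # no entry for 'ant' exists yet, so Perl starts from 0
$a{ant} += 2;
print "$a{ant}\n";  # prints 3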
On the other hand, it may happen that not only the number of data but the number of animals is so large that one would not like to store the hash %a. In this case, one can still calculate totals, provided only that the data are sorted by animal in the input, as they are in your example. In this case, something like the following might suit (though regrettably it is not nearly so neat as the above).
#!/usr/bin/perl

use warnings;
use strict;
use integer;

my $last_animal = undef;
my $total_for_the_last_animal = 0;

sub start_new_animal ($$) {
    my $next_animal = shift;
    my $n = shift;
    print "$last_animal,$total_for_the_last_animal\n"
        if defined $last_animal;
    $last_animal = $next_animal;
    $total_for_the_last_animal = $n;
}

while (<>) {
    my ($animal, $n) = /^\s*(\S+)\s*,\s*(\S+)/;
    if (
        defined($n) && defined($animal) && defined($last_animal)
        && $animal eq $last_animal
    ) { $total_for_the_last_animal += $n; }
    else { start_new_animal $animal, $n; }
}
start_new_animal undef, 0;
Use Perl’s awk mode.
-a
turns on autosplit mode when used with a -n or -p. An implicit split command to the @F array is done as the first thing inside the implicit while loop produced by the -n or -p.
perl -ane 'print pop(@F), "\n";'
is equivalent to
while (<>) {
    @F = split(' ');
    print pop(@F), "\n";
}
An alternate delimiter may be specified using -F.
All that’s left for you is to accumulate the sums in a hash and print them.
$ perl -F, -lane '$s{$F[0]} += $F[1];
END { print "$_,$s{$_}" for sort keys %s }' input
Output:
ant,1
bat,3
cat,7
dog,4
It's trivial in Perl. Loop on the file input. Split the input line on comma. For each key in column one, keep a hash to which you add the value in column two. At the end of the file, print the list of hash keys and their values. It can be done in one line, but that would obfuscate the algorithm.
