Fast search for similar substrings - algorithm

I need to find substrings SIMILAR to a given pattern in a huge string. The source string can be up to 100 MB long. The pattern is rather short (10-100 chars). The problem is that I need to find not only exact occurrences but also similar substrings that differ from the pattern by several characters (the maximum allowed error count is provided as a parameter).
Are there any ideas for speeding up the search?

1) There are many algorithms for string searching. One of them is the famous Knuth–Morris–Pratt algorithm.
2) You may also want to look at regular expressions ("regex") in whatever language you're using. They will certainly help you find substrings 'similar' to the original pattern.
e.g. [Java]
String pat = "Home";
String source = "IgotanewHwme";
for (int i = 0; i < pat.length(); i++) {
    // Split the pattern around position i: drop the character at i and replace it
    // with [a-zA-Z], then match the source against this new pattern.
    String new_pat = "(" + pat.substring(0, i) + ")" + "[a-zA-Z]"
                   + "(" + pat.substring(i + 1, pat.length()) + ")";
    System.out.println(new_pat);
    System.out.println(source.matches("[a-zA-Z]*" + new_pat + "[a-zA-Z]*"));
}
and I think it's straightforward to extend this to accept a larger error count.

Sounds like you want Fuzzy/Approximate String Matching. Have a look at the Wikipedia page and see if you can't find an algorithm that suits your needs.

You can have a look at the Levenshtein distance, the Needleman–Wunsch algorithm and the Damerau–Levenshtein distance.
They give you metrics measuring the number of differences between two strings (i.e. the number of additions, deletions, substitutions, etc.). They are often used to measure the variation between DNA sequences.
You will easily find implementations in various languages.
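For reference, here is a minimal Levenshtein distance in Java, the classic dynamic-programming formulation with two rolling rows; the class and method names are just for illustration:
public class Levenshtein {
    // Classic DP: the number of single-character insertions, deletions and
    // substitutions needed to turn a into b, using two rolling rows.
    public static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("kitten", "sitting"));   // prints 3
    }
}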

Related

Is there a method to find the most specific pattern for a string?

I'm wondering whether there is a way to generate the most specific regular expression (if such a thing exists) that matches a given string. Here's an illustration of what I want the method to do:
str = "(17 + 31)"
find_pattern(str)
# => /^\(\d+ \+ \d+\)$/ (or something more specific)
My intuition was to use Regexp.new to accumulate the desired pattern by looping through str and checking for known patterns like \d, \s, and so on. I suspect there is an easy way to do this.
This is in essence a compression problem. The simplest way to match a list of known strings is to use the Regexp.union factory method, but that just tries each string in turn; it does not do anything "clever":
combined_rx = Regexp.union( "(17 + 31)", "(17 + 45)" )
=> /\(17\ \+\ 31\)|\(17\ \+\ 45\)/
This can still be useful to construct multi-stage validators, without you needing to write loops to check them all.
However, a generic pattern matcher that could figure out what you mean to match from examples is not really possible. There are too many ways in which you could consider strings to be similar or not. The closest I could think of would be genetic programming where you supply a large list of should match/should not match strings and the code guesses at the best regex by constructing random Regexp objects (a challenge in itself) and seeing how accurately they match and don't match your examples. The best matchers could be combined and mutated and tried again until you got 100% accuracy. This might be a fun project, but ultimately much more effort for most purposes than writing the regular expressions yourself from a description of the problem.
If your problem is heavily constrained - e.g. any example integer could always be replaced by \d+, any example space by \s+, etc. - then you could work through the string replacing "matchable units", in fact using the same regular expressions checked in turn. E.g. if you match \A\d+, then consume the match from the string and add \d+ to your regex. Then take the remainder of the string and look for the next matching pattern. Working this way has its limitations (you must know the full set of patterns you want to match in advance, and all examples would have to be unambiguous). However, it is more tractable than a genetic program.
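As an illustration of that consume-and-replace loop, here is a rough sketch in Java (the question is about Ruby, but the idea is language-independent; the set of unit patterns and their replacements below are my own assumptions):
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Walks the example string, consuming one known "matchable unit" at a time and
// appending the corresponding generic pattern to the generated regex.
public class PatternGuesser {
    // Order matters: more specific units should come first.
    private static final Map<Pattern, String> UNITS = new LinkedHashMap<>();
    static {
        UNITS.put(Pattern.compile("\\A\\d+"), "\\d+");
        UNITS.put(Pattern.compile("\\A\\s+"), "\\s+");
        UNITS.put(Pattern.compile("\\A[a-zA-Z]+"), "[a-zA-Z]+");
    }

    public static String findPattern(String example) {
        StringBuilder rx = new StringBuilder("^");
        String rest = example;
        outer:
        while (!rest.isEmpty()) {
            for (Map.Entry<Pattern, String> unit : UNITS.entrySet()) {
                Matcher m = unit.getKey().matcher(rest);
                if (m.find()) {
                    rx.append(unit.getValue());
                    rest = rest.substring(m.end());
                    continue outer;
                }
            }
            // Unknown character: escape it literally and move on.
            rx.append(Pattern.quote(rest.substring(0, 1)));
            rest = rest.substring(1);
        }
        return rx.append("$").toString();
    }

    public static void main(String[] args) {
        // Prints ^\Q(\E\d+\s+\Q+\E\s+\d+\Q)\E$ for the example in the question.
        System.out.println(findPattern("(17 + 31)"));
    }
}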

How to fuzzily search for a dictionary word?

I have read a lot of threads here discussing edit-distance based fuzzy-searches, which tools like Elasticsearch/Lucene provide out of the box, but my problem is a bit different. Suppose I have a dictionary of words, {'cat', 'cot', 'catalyst'}, and a character similarity relation f(x, y)
f(x, y) = 1, if characters x and y are similar
= 0, otherwise
(These "similarities" can be specified by the programmer)
such that, say,
f('t', 'l') = 1
f('a', 'o') = 1
f('f', 't') = 1
but,
f('a', 'z') = 0
etc.
Now if we have a query 'cofatyst', the algorithm should report the following matches:
('cot', 0)
('cat', 0)
('catalyst', 0)
where the number is the 0-based starting index of the match found. I have tried the Aho-Corasick algorithm, and while it works great for exact matching and when a character has relatively few "similar" characters, its performance drops exponentially as we increase the number of similar characters per character. Can anyone point me to a better way of doing this? Fuzziness is an absolute necessity, and it must take into account character similarities (i.e., not blindly depend on edit distance alone).
One thing to note is that in the wild, the dictionary is going to be really large.
I might try cosine similarity, using the position of each character as a feature and computing the product between features with a match function based on your character relations.
Not very specific advice, I know, but I hope it helps you.
edited: Expanded answer.
With cosine similarity, you compute how similar two vectors are. In your case the normalisation might not make sense. So what I would do is something very simple (I might be oversimplifying the problem): first, treat the C x C matrix as a dependency matrix holding the probability that two characters are related (e.g., P('t' | 'l') = 1). This also lets you have partial dependencies, to differentiate between perfect and partial matches. Then, for each position, compute the probability that the two letters are not the same (the complement of P(t_i, t_j)), and aggregate the results with a sum.
This counts the number of positions that differ for a specific pair of words, and it lets you define partial dependencies. Furthermore, the implementation is very simple and should scale well. This is why I am not sure whether I have misunderstood your question.
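To make the idea concrete, here is a rough Java sketch of the fixed-position part of this scoring: it lays a dictionary word over the query at a given offset and counts positions that are neither equal nor marked similar. Sliding it over all offsets and handling insertions/deletions is deliberately left out, and all names are illustrative:
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Per-position scoring against a user-supplied character-similarity table.
public class SimilarityScorer {
    private final Map<Character, Set<Character>> similar = new HashMap<>();

    public void addSimilar(char a, char b) {
        similar.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        similar.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    private boolean matches(char a, char b) {
        return a == b || similar.getOrDefault(a, Set.of()).contains(b);
    }

    // Number of mismatching positions for `word` laid over `query` at `start`,
    // or -1 if the word does not fit there.
    public int mismatches(String query, int start, String word) {
        if (start + word.length() > query.length()) return -1;
        int errors = 0;
        for (int i = 0; i < word.length(); i++) {
            if (!matches(word.charAt(i), query.charAt(start + i))) errors++;
        }
        return errors;
    }

    public static void main(String[] args) {
        SimilarityScorer s = new SimilarityScorer();
        s.addSimilar('t', 'l');
        s.addSimilar('a', 'o');
        s.addSimilar('f', 't');
        // All three dictionary words line up with "cofatyst" at index 0 with zero mismatches.
        for (String word : List.of("cat", "cot", "catalyst")) {
            System.out.println(word + ": " + s.mismatches("cofatyst", 0, word));
        }
    }
}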
I am using the Fuse JavaScript library for a project of mine. It is a JavaScript file that works on a JSON dataset, and it is quite fast. Have a look at it.
It implements the full Bitap algorithm, leveraging a modified version of the Diff, Match & Patch tool by Google (according to its site).
The code is straightforward, so it is easy to follow how the algorithm is implemented.
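For the curious, the heart of Bitap is a bit-parallel scan. Below is a minimal exact-match Shift-Or version in Java as a sketch (the fuzzy variant keeps one such bit state per allowed error); it is limited to patterns of at most 63 characters:
import java.util.Arrays;

public class Bitap {
    // Returns the index of the first exact occurrence of pattern in text, or -1.
    public static int search(String text, String pattern) {
        int m = pattern.length();
        if (m == 0) return 0;
        if (m > 63) throw new IllegalArgumentException("pattern too long for a long mask");
        long[] masks = new long[Character.MAX_VALUE + 1];
        Arrays.fill(masks, ~0L);
        for (int i = 0; i < m; i++) {
            masks[pattern.charAt(i)] &= ~(1L << i);   // clear bit i for this character
        }
        long state = ~0L;                             // Shift-Or: 0 bits mean "still matching"
        for (int j = 0; j < text.length(); j++) {
            state = (state << 1) | masks[text.charAt(j)];
            if ((state & (1L << (m - 1))) == 0) {
                return j - m + 1;                     // full pattern ends at position j
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(search("IgotanewHome", "Home"));   // prints 8
    }
}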

Finding all matches from a large set of Strings in a smaller String

I have a large set of Words and Phrases (a lexicon or dictionary) which includes wildcards. I need to find all instances of those Words and Phrases within a much smaller String (~150 characters at the moment).
Initially, I wanted to run the operation in reverse; that is, to check whether each word in my smaller string exists in the Lexicon, which could be implemented as a Hash Table. The problem is that some of the values in my Lexicon are not single words and many are wildcards (e.g. substri*).
I'm thinking of using the Rabin-Karp algorithm but I'm not sure this is the best choice.
What's an efficient algorithm or method to perform this operation?
Sample Data:
The dictionary contains hundreds of words and can potentially expand. These words may end with wildcard characters (asterisks). Here are some random examples:
good
bad
freed*
careless*
great loss
The texts we are analyzing (at this point) are short, informal (grammar-wise) English statements. The prime example (again, at this point in time) would be a Twitter tweet, limited to roughly 140 characters. For example:
Just got the Google nexus without a contract. Hands down its the best phone
I've ever had and the only thing that could've followed my N900.
It may be helpful to note that we are performing very simple Sentiment Analysis on this text; however, our Sentiment Analysis technique is not my concern. I'm simply migrating an existing solution to a "real-time" processing system and need to perform some optimizations.
I think this is an excellent use case for the Aho-Corasick string-matching algorithm, which is specifically designed to find all matches of a large set of strings in a single string. It runs in two phases - a first phase in which a matching automaton is created (which can be done in advance and requires only linear time), and a second phase in which the automaton is used to find all matches (which requires only linear time, plus time proportional to the total number of matches). The algorithm can also be adapted to support wildcard searching.
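If it helps, here is a compact, map-based Aho-Corasick sketch in Java showing both phases; it is illustrative rather than tuned (a production version would typically use arrays rather than hash maps):
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Phase 1 builds the trie plus failure links; phase 2 scans the text once
// and reports every matched pattern with the index just past its end.
public class AhoCorasick {
    private static final class Node {
        final Map<Character, Node> next = new HashMap<>();
        Node fail;
        final List<String> output = new ArrayList<>();
    }

    private final Node root = new Node();

    public AhoCorasick(List<String> patterns) {
        for (String p : patterns) {                       // build the trie
            Node node = root;
            for (char c : p.toCharArray()) {
                node = node.next.computeIfAbsent(c, k -> new Node());
            }
            node.output.add(p);
        }
        Queue<Node> queue = new ArrayDeque<>();           // BFS to set failure links
        for (Node child : root.next.values()) {
            child.fail = root;
            queue.add(child);
        }
        while (!queue.isEmpty()) {
            Node node = queue.remove();
            for (Map.Entry<Character, Node> e : node.next.entrySet()) {
                char c = e.getKey();
                Node child = e.getValue();
                Node f = node.fail;
                while (f != null && !f.next.containsKey(c)) f = f.fail;
                child.fail = (f == null) ? root : f.next.get(c);
                child.output.addAll(child.fail.output);
                queue.add(child);
            }
        }
    }

    public void scan(String text) {
        Node node = root;
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            while (node != root && !node.next.containsKey(c)) node = node.fail;
            node = node.next.getOrDefault(c, root);
            for (String match : node.output) {
                System.out.println(match + " ends at " + (i + 1));
            }
        }
    }

    public static void main(String[] args) {
        new AhoCorasick(List.of("good", "bad", "great loss"))
                .scan("bad news is a great loss of goodwill");
    }
}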
Hope this helps!
One answer I wanted to throw out there was the Boyer-Moore search algorithm. It is the algorithm that grep uses, and grep is probably one of the fastest search tools available. In addition, you could use something like GNU Parallel to run grep in parallel, thus really speeding up the search.
Also, here is a good article that you might find interesting.
You might still use your original idea of checking every word in the text against the dictionary. However, in order to run it efficiently you need to index the dictionary to make lookups really fast. A trick used in information retrieval systems is to store a so-called permuterm index (http://nlp.stanford.edu/IR-book/html/htmledition/permuterm-indexes-1.html).
Basically what you want to do is to store in a dictionary every possible rotation of each word (e.g. for house):
house$
ouse$h
use$ho
...
e$hous
This index can then be used to quickly check against wildcard queries. If for example you have ho*e you can look in the permuterm index for a term beginning with e$ho and you would quickly find a match with house.
The search itself is usually done with some logarithmic search strategy (binary search or B-trees) and is therefore usually very fast.
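Here is a toy permuterm index in Java to show the mechanics; it supports a single '*' per query and uses a sorted map for the prefix lookup (a B-tree or sorted array would serve the same role at scale). The names are illustrative:
import java.util.List;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Every rotation of word + "$" is stored in a sorted map, so a single-wildcard
// query like "ho*e" becomes a prefix lookup for "e$ho".
public class PermutermIndex {
    private final TreeMap<String, String> rotations = new TreeMap<>();

    public void add(String word) {
        String t = word + "$";
        for (int i = 0; i < t.length(); i++) {
            rotations.put(t.substring(i) + t.substring(0, i), word);
        }
    }

    // query must contain exactly one '*'.
    public Set<String> lookup(String query) {
        int star = query.indexOf('*');
        // Rotate so the wildcard ends up at the end: "ho*e" -> prefix "e$ho".
        String prefix = query.substring(star + 1) + "$" + query.substring(0, star);
        return new TreeSet<>(rotations.subMap(prefix, prefix + Character.MAX_VALUE).values());
    }

    public static void main(String[] args) {
        PermutermIndex index = new PermutermIndex();
        for (String w : List.of("house", "horse", "hole", "cat")) index.add(w);
        System.out.println(index.lookup("ho*e"));   // prints [hole, horse, house]
    }
}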
So long as the patterns are for complete words (you don't want "or" to match "storage"; spaces and punctuation are match anchors), a simple way to go is to translate your lexicon into scanner generator input (for example, you could use flex), generate a scanner, and then run it over your input.
Scanner generators are designed to identify occurrences of tokens in input, where each token type is described by a regular expression. Flex and similar programs create scanners rapidly. Flex handles up to 8k rules (in your case lexicon entries) by default, and this can be expanded. The generated scanners run in linear time and in practice are very fast.
Internally, the token regular expressions are transformed through the standard "Kleene's theorem" pipeline: first to an NFA, then to a DFA. The DFA is then transformed to its unique minimum form. This is encoded in a high-level-language table, which is emitted inside a wrapper that implements the scanner by referring to the table. This is what flex does, but other strategies are possible. For example, the DFA can be translated to goto code where the DFA state is represented implicitly by the instruction pointer as the code runs.
The reason for the spaces-as-anchors caveat is that scanners created by programs like flex are normally incapable of identifying overlapping matches: "strangers" could not match both "strangers" and "range", for example.
Here is a flex scanner that matches the example lexicon you gave:
%option noyywrap
%%
"good" { return 1; }
"bad" { return 2; }
"freed"[[:alpha:]]* { return 3; }
"careless"[[:alpha:]]* { return 4; }
"great"[[:space:]]+"loss" { return 5; }
. { /* match anything else except newline */ }
"\n" { /* match newline */ }
<<EOF>> { return -1; }
%%
int main(int argc, char *argv[])
{
    yyin = argc > 1 ? fopen(argv[1], "r") : stdin;
    for (;;) {
        int found = yylex();
        if (found < 0) return 0;
        printf("matched pattern %d with '%s'\n", found, yytext);
    }
}
And to run it:
$ flex -i foo.l
$ gcc lex.yy.c
$ ./a.out
Good men can only lose freedom to bad
matched pattern 1 with 'Good'
matched pattern 3 with 'freedom'
matched pattern 2 with 'bad'
through carelessness or apathy.
matched pattern 4 with 'carelessness'
This doesn't answer the algorithm question exactly, but check out the re2 library. There are good interfaces in Python, Ruby and assorted other programming languages. In my experience, it's blindingly fast, and it removed quite similar bottlenecks from my code with little fuss and virtually no extra code on my part.
The only complexity comes with overlapping patterns. If you want the patterns to start at word boundaries, you should be able to partition the dictionary into a set of regular expressions r_1, r_2, ..., r_k of the form \b(foobar|baz baz\S*|...) where each group in r_{i+1} has a prefix in r_i. You can then short-circuit evaluations, since if r_{i+1} matches then r_i must have matched.
Unless you're implementing your algorithm in highly-optimized C, I'd bet that this approach will be faster than any of the (algorithmically superior) answers elsewhere in this thread.
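To show the shape of such a combined pattern, here is a sketch using java.util.regex just for illustration (the expression itself should carry over to an re2 binding); turning a trailing '*' into \w* is my own assumption about the lexicon format:
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Builds one combined alternation, anchored at a word boundary, from the lexicon.
// Wildcard entries like "freed*" become "freed\w*"; everything else is quoted literally.
public class CombinedRegex {
    public static Pattern build(List<String> lexicon) {
        StringBuilder rx = new StringBuilder("\\b(?:");
        for (int i = 0; i < lexicon.size(); i++) {
            if (i > 0) rx.append('|');
            String entry = lexicon.get(i);
            if (entry.endsWith("*")) {
                rx.append(Pattern.quote(entry.substring(0, entry.length() - 1))).append("\\w*");
            } else {
                rx.append(Pattern.quote(entry));
            }
        }
        return Pattern.compile(rx.append(")").toString(), Pattern.CASE_INSENSITIVE);
    }

    public static void main(String[] args) {
        Pattern p = build(List.of("good", "bad", "freed*", "careless*", "great loss"));
        Matcher m = p.matcher("Good men can only lose freedom to bad luck or carelessness.");
        while (m.find()) {
            System.out.println(m.group() + " at " + m.start());
        }
    }
}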
Let me get this straight. You have a large set of queries and one small string and you want to find instances of all those queries in that string.
In that case I suggest you index that small document like crazy so that your search time is as short as possible. Hell. With that document size I would even consider doing small mutations (to match wildcards and so on) and would index them as well.
I had a very similar task.
Here's how I solved it, the performance is unbelievable
http://www.foibg.com/ijita/vol17/ijita17-2-p03.pdf

Is there any algorithm to judge whether a string is meaningful?

The problem is, I have to scan an executable file and extract the strings for analysis, using strings.exe from Sysinternals. However, how can I distinguish meaningful strings from trivial ones? Is there any algorithm or approach to solve this problem (statistics? probability?)?
for example:
strings extracted from strings.exe (part of all strings):
S`A
waA
RmA
>rA
5xA
GetModuleHandleA
LocalFree
LoadLibraryA
LocalAlloc
GetCommandLineW
From empirical judgement, the last five strings are meaningful, and the first five are not.
So how can this problem be solved without using a dictionary such as a blacklist or whitelist?
Simple algorithm: break candidate strings into words at capital letters, whitespace and digits, and then compare the words against some dictionary.
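A tiny Java sketch of that idea, with a stand-in word list and an arbitrary "at least half the pieces are dictionary words" threshold, both of which are my own choices:
import java.util.List;
import java.util.Set;

// Splits a candidate string on case changes and non-letters, then counts how many
// pieces appear in a word list.
public class MeaningfulStrings {
    private static final Set<String> DICTIONARY =   // stand-in for a real word list
            Set.of("get", "module", "handle", "local", "free", "load", "library",
                   "alloc", "command", "line");

    public static boolean looksMeaningful(String candidate) {
        String[] words = candidate.split("(?<=[a-z])(?=[A-Z])|[^A-Za-z]+");
        int hits = 0, total = 0;
        for (String w : words) {
            if (w.isEmpty()) continue;
            total++;
            if (DICTIONARY.contains(w.toLowerCase())) hits++;
        }
        return total > 0 && hits * 2 >= total;   // at least half the pieces are words
    }

    public static void main(String[] args) {
        for (String s : List.of("GetModuleHandleA", "LocalFree", "S`A", "5xA")) {
            System.out.println(s + " -> " + looksMeaningful(s));
        }
    }
}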
Use n-grams.
An n-gram model will tell you the probability that a word is meaningful. Read about Markov chains and n-grams (http://en.wikipedia.org/wiki/N-gram). Treat each letter as a state, and take a set of meaningful words and a set of meaningless words. For example:
Meaningless words: B^^#, #AT
Normal words: BOOK, CAT
Create two language models from them (a trigram model will be best): http://en.wikipedia.org/wiki/Language_model
Now you can check which model more probably generated a given word, and pick the language model that assigns the higher probability. This will satisfy your condition.
Remember that you need a set of meaningless words (I think around 1000 will be OK) as well as a set of meaningful ones.
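Here is a rough Java sketch of the two-model comparison using character trigrams with add-one smoothing; the tiny training sets are only placeholders, so treat the output as illustrative:
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Two character-trigram models (one trained on real words, one on junk strings);
// a candidate is classified by whichever model gives it the higher log-probability.
public class TrigramClassifier {
    private final Map<String, Integer> counts = new HashMap<>();
    private final Map<String, Integer> contexts = new HashMap<>();

    public void train(List<String> words) {
        for (String w : words) {
            String padded = "^^" + w.toLowerCase() + "$";
            for (int i = 0; i + 3 <= padded.length(); i++) {
                counts.merge(padded.substring(i, i + 3), 1, Integer::sum);
                contexts.merge(padded.substring(i, i + 2), 1, Integer::sum);
            }
        }
    }

    public double logProb(String w) {
        String padded = "^^" + w.toLowerCase() + "$";
        double lp = 0;
        for (int i = 0; i + 3 <= padded.length(); i++) {
            int c = counts.getOrDefault(padded.substring(i, i + 3), 0);
            int ctx = contexts.getOrDefault(padded.substring(i, i + 2), 0);
            lp += Math.log((c + 1.0) / (ctx + 27.0));   // add-one smoothing, ~27 symbols
        }
        return lp;
    }

    public static void main(String[] args) {
        TrigramClassifier meaningful = new TrigramClassifier();
        meaningful.train(List.of("book", "cat", "handle", "library", "command", "line"));
        TrigramClassifier junk = new TrigramClassifier();
        junk.train(List.of("xqzj", "rm`a", "s`a", "wa5", "zzkq"));

        for (String s : List.of("LocalFree", "5xA")) {
            boolean ok = meaningful.logProb(s) > junk.logProb(s);
            System.out.println(s + " -> " + (ok ? "meaningful" : "junk"));
        }
    }
}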
Is there a definite rule for meaningful words, or are they simply words from a dictionary?
If they are words from a dictionary, then you can use tries.
You can keep looking up a word as long as the next character is not capitalized; when it is capitalized, start again from the beginning of the trie and look up the next word.
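A minimal trie sketch in Java for this approach (splitting at capital letters is done with a regex here just to keep the example short, and the word list is a placeholder):
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal trie: insert dictionary words, then look up each camel-case piece of a candidate.
public class WordTrie {
    private final Map<Character, WordTrie> children = new HashMap<>();
    private boolean isWord;

    public void insert(String word) {
        WordTrie node = this;
        for (char c : word.toLowerCase().toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new WordTrie());
        }
        node.isWord = true;
    }

    public boolean contains(String word) {
        WordTrie node = this;
        for (char c : word.toLowerCase().toCharArray()) {
            node = node.children.get(c);
            if (node == null) return false;
        }
        return node.isWord;
    }

    public static void main(String[] args) {
        WordTrie trie = new WordTrie();
        for (String w : List.of("get", "module", "handle", "local", "free")) trie.insert(w);

        // Split "GetModuleHandleA" at capital letters and look each piece up.
        for (String piece : "GetModuleHandleA".split("(?=[A-Z])")) {
            if (!piece.isEmpty()) {
                System.out.println(piece + " -> " + trie.contains(piece));
            }
        }
    }
}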
Just my 2 cents.
Ivar

How to elegantly compute the anagram signature of a word in ruby?

Arising out of this question, I'm looking for an elegant (ruby) way to compute the word signature suggested in this answer.
The idea suggested is to sort the letters in the word, and also run length encode repeated letters. So, for example "mississippi" first becomes "iiiimppssss", and then could be further shortened by encoding as "4impp4s".
I'm relatively new to ruby and though I could hack something together, I'm sure this is a one liner for somebody with more experience of ruby. I'd be interested to see people's approaches and improve my ruby knowledge.
edit: to clarify, performance of computing the signature doesn't much matter for my application. I'm looking to compute the signature so I can store it with each word in a large database of words (450K words), then query for words which have the same signature (i.e. all anagrams of a given word that are actual English words). Hence the focus on space. The 'elegant' part is just to satisfy my curiosity.
The fastest way to create a sorted list of the letters is this:
"mississippi".unpack("c*").sort.pack("c*")
It is quite a bit faster than split('') and join(). For comparison it is also best to pack the array back together into a String, so you don't have to compare arrays.
I'm not much of a Ruby person either, but as I noted on the other comment this seems to work for the algorithm described.
s = "mississippi"
s.split('').sort.join.gsub(/(.)\1{2,}/) { |s| s.length.to_s + s[0,1] }
Of course, you'll want to make sure the word is lowercase, doesn't contain numbers, etc.
As requested, I'll try to explain the code. Please forgive me if I don't get all of the Ruby or reg ex terminology correct, but here goes.
I think the split/sort/join part is pretty straightforward. The interesting part for me starts at the call to gsub. This will replace a substring that matches the regular expression with the return value from the block that follows it. The regex matches any character and captures it in a group; that's the "(.)" part. Then we continue the matching process using the backreference "\1", which evaluates to whatever character was captured by the first part of the match. We want that character to be found a minimum of two more times, for a total minimum of three occurrences. This is done using the quantifier "{2,}".
If a match is found, the matching substring is then passed to the next block of code as an argument thanks to the "|s|" part. Finally, we use the string equivalent of the matching substring's length and append to it whatever character makes up that substring (they should all be the same) and return the concatenated value. The returned value replaces the original matching substring. The whole process continues until nothing is left to match since it's a global substitution on the original string.
I apologize if that's confusing. As is often the case, it's easier for me to visualize the solution than to explain it clearly.
I don't see an elegant solution. You could use the split message to get the characters into an array, but then once you've sorted the list I don't see a nice linear-time concatenate primitive to get back to a string. I'm surprised.
Incidentally, run-length encoding is almost certainly a waste of time. I'd have to see some very impressive measurements before I'd think it worth considering. If you avoid run-length encoding, you can anagrammatize any string, not just a string of letters. And if you know you have only letters and are trying to save space, you can pack them 5 bits to a letter.
---Irma Vep
EDIT: the other poster found join which I missed. Nice.
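As an aside on the 5-bits-per-letter remark above, here is a quick Java sketch of that packing for lowercase-only words (up to 12 letters fit in a single long); it is unrelated to the Ruby code and purely illustrative:
// Packs a lowercase-only string into 5 bits per letter ('a' -> 1 ... 'z' -> 26).
public class FiveBitPack {
    public static long pack(String word) {
        long packed = 0;
        for (char c : word.toCharArray()) {
            packed = (packed << 5) | (c - 'a' + 1);
        }
        return packed;
    }

    public static String unpack(long packed) {
        StringBuilder sb = new StringBuilder();
        while (packed != 0) {
            sb.append((char) ('a' + (packed & 0x1F) - 1));
            packed >>>= 5;
        }
        return sb.reverse().toString();
    }

    public static void main(String[] args) {
        long p = pack("mississippi");   // 11 letters * 5 bits = 55 bits, fits in a long
        System.out.println(p + " -> " + unpack(p));
    }
}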
