When you search Google for "100F to C", how does it know to convert from Fahrenheit to Celsius? The same goes for conversions between currencies and for simple calculations.
What data structure is used, or is it simple pattern matching on the strings?
It's not exactly simple pattern matching; evaluating the mathematical expressions you can enter is not trivial. For example, here's an algorithm that evaluates a math expression, and that's just the evaluation; there's probably a lot of code on top of that to detect whether the input is a valid expression at all.
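For instance, here is a minimal sketch (my illustration, not Google's code) of evaluating a flat expression with Dijkstra's shunting-yard algorithm, assuming only the four basic operators and no parentheses:

PRECEDENCE = { "+" => 1, "-" => 1, "*" => 2, "/" => 2 }

def evaluate(expression)
  # Tokenize into numbers and operators; a real parser would validate here.
  tokens = expression.scan(%r{\d+(?:\.\d+)?|[-+*/]})
  output, operators = [], []

  tokens.each do |token|
    if token =~ /\d/
      output << token.to_f
    else
      # Pop operators of equal or higher precedence first (left associativity).
      while operators.any? && PRECEDENCE[operators.last] >= PRECEDENCE[token]
        output << operators.pop
      end
      operators << token
    end
  end
  output.concat(operators.reverse)

  # Evaluate the resulting postfix (RPN) sequence with a value stack.
  stack = []
  output.each do |item|
    if item.is_a?(Float)
      stack << item
    else
      b, a = stack.pop, stack.pop
      stack << a.send(item, b)
    end
  end
  stack.pop
end

evaluate("100 / 5 + 3")  # => 23.0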
For currency conversion and other units, it's simple pattern matching.
It's simple pattern matching. Try:
100 kmh in mph = no calculation
100 kph in mph = 62.1371192 mph
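A toy version of that matching in Ruby (the query regex and the conversion table are my assumptions, not Google's implementation; "kmh" is missing from the table, which is why it fails just like above):

CONVERSIONS = {
  ["kph", "mph"] => 0.621371192,
  ["mph", "kph"] => 1.609344
}

def convert(query)
  m = query.match(/\A(?<value>\d+(?:\.\d+)?)\s*(?<from>[a-z]+)\s+in\s+(?<to>[a-z]+)\z/i)
  return "no calculation" unless m

  factor = CONVERSIONS[[m[:from].downcase, m[:to].downcase]]
  return "no calculation" unless factor

  "#{(m[:value].to_f * factor).round(7)} #{m[:to]}"
end

convert("100 kph in mph")  # => "62.1371192 mph"
convert("100 kmh in mph")  # => "no calculation"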
I have the following RSpec output:
30 examples, 15 failures
I would like to subtract the second number from the first. I have this code:
def capture_passing_score(output)
  # Named captures pull both numbers out of e.g. "30 examples, 15 failures".
  captures = output.match(/^(?<total>\d+)\s*examples,\s*(?<failed>\d+)\s*failures$/)
  captures[:total].to_i - captures[:failed].to_i
end
I am wondering if there is a way to do the calculation within a regular expression. Ideally, I'd avoid the second step in my code, and subtract the numbers within a regex. Performing mathematical operations may not be possible with Ruby's (or any) regex engine, but I couldn't find an answer either way. Is this possible?
Nope.
By every definition I have ever seen, regular expressions are about text processing: character-based pattern matching. Digits are just a class of textual characters in a regex; they do not represent their numerical values. While syntactic sugar may mask what is actually being done, you still need to convert the text to a numeric value to perform the subtraction.
Wikipedia
RubyDoc
If you know the format is going to remain consistent, you could do something like this:
output.scan(/\d+/).map(&:to_i).inject(:-)
It's not doing the subtraction via regex, but it does make it more concise.
I'm wondering whether there is a way to generate the most specific regular expression (if such a thing exists) that matches a given string. Here's an illustration of what I want the method to do:
str = "(17 + 31)"
find_pattern(str)
# => /^\(\d+ \+ \d+\)$/ (or something more specific)
My intuition was to use Regexp.new to accumulate the desired pattern by looping through str and checking for known patterns like \d, \s, and so on. I suspect there is an easy way of doing this.
This is in essence a compression problem: finding the smallest pattern that generalizes your examples. The simplest way to match a list of known strings is to use the Regexp.union factory method, but that just tries each string in turn; it does not do anything "clever":
combined_rx = Regexp.union( "(17 + 31)", "(17 + 45)" )
=> /\(17\ \+\ 31\)|\(17\ \+\ 45\)/
This can still be useful to construct multi-stage validators, without you needing to write loops to check them all.
However, a generic pattern matcher that could figure out what you mean to match from examples is not really possible; there are too many ways in which strings can be considered similar or different. The closest I can think of is genetic programming, where you supply a large list of should-match/should-not-match strings and the code guesses at the best regex by constructing random Regexp objects (a challenge in itself) and scoring how accurately they match and don't match your examples. The best matchers could be combined and mutated and tried again until you reach 100% accuracy. This might be a fun project, but it is ultimately far more effort for most purposes than writing the regular expressions yourself from a description of the problem.
If your problem is heavily constrained (e.g. any example integer can always be replaced by \d+, any example space by \s+, etc.), then you can work through the string replacing "matchable units", in fact using the same regular expressions checked in turn. E.g. if \A\d+ matches, consume the match from the string and append \d+ to your regex; then take the remainder of the string and look for the next matching pattern. Working this way has its limitations (you must know the full set of patterns you want to match in advance, and all examples must be unambiguous), but it is far more tractable than a genetic program.
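A rough sketch of that constrained approach, with an assumed (and deliberately tiny) table of matchable units:

# Each regex is anchored to the start of the remaining input and paired
# with the generalized form that replaces whatever it consumed.
TOKEN_PATTERNS = [
  [/\A\d+/,       '\d+'],
  [/\A\s+/,       '\s+'],
  [/\A[a-zA-Z]+/, '[a-zA-Z]+']
]

def find_pattern(str)
  source = ""
  until str.empty?
    pattern, replacement = TOKEN_PATTERNS.find { |rx, _| str =~ rx }
    if pattern
      source << replacement
      str = str.sub(pattern, "")
    else
      # Anything unrecognized (punctuation etc.) is escaped and kept literally.
      source << Regexp.escape(str[0])
      str = str[1..-1]
    end
  end
  Regexp.new("^#{source}$")
end

find_pattern("(17 + 31)")  # => /^\(\d+\s+\+\s+\d+\)$/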
I have 1,000,000 strings that I want to categorize. The way I do this is to bucket a string if it contains any of a set of words or phrases; the set contains about 10,000 entries. Ideally I would be able to support regular expressions, but I am focused on making it run fast right now. Example phrases:
ford, porsche, mazda...
I really don't want to match each word against the strings one by one, so I decided to use regular expressions. Unfortunately, I am running into a regular expression issue:
Regexp.new("(a)"*253)
=> /(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)...
Regexp.new("(a)"*254)
RegexpError: regular expression too big: /(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)...
where a would be one of my words or phrases. Right now, I am planning on running 10,000 / 253 matches. I read that the length of the regex heavily impacts performance, but my regex match is really simple and the regexp is created very quickly. I would like to get around the limitation somehow, or use a better solution if anyone has any ideas. Thanks.
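For reference, a sketch of the batching plan described in the question, using Regexp.union (the helper names are mine; the chunk size of 253 comes from the question):

# Regexp.union escapes each word and joins them with "|".
def build_matchers(words, chunk_size = 253)
  words.each_slice(chunk_size).map { |chunk| Regexp.union(chunk) }
end

MATCHERS = build_matchers(%w[ford porsche mazda])

def bucket?(string)
  MATCHERS.any? { |rx| rx.match?(string) }
end

bucket?("I drive a mazda 626")  # => true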
You might consider other mechanisms for recognizing 10k words.
Trie: Sometimes called a prefix tree, it is often used by spell checkers for doing word lookups (see the sketch after this list, and Trie on Wikipedia).
DFA (deterministic finite automaton): A DFA is often created by the lexer in a compiler for recognizing the tokens of the language. A DFA runs very quickly. Simple regexes are often compiled into DFAs. See DFA on Wikipedia.
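Here is a minimal hash-based trie sketch in Ruby, for illustration only (production code would use a tuned implementation):

class Trie
  def initialize
    @root = {}
  end

  def insert(word)
    node = @root
    word.each_char { |c| node = (node[c] ||= {}) }
    node[:end] = true
  end

  # Walk the trie from every start position until a terminal node is hit.
  def contains_any?(text)
    (0...text.length).any? do |start|
      node = @root
      start.upto(text.length - 1) do |i|
        node = node[text[i]]
        break unless node
        return true if node[:end]
      end
      false
    end
  end
end

trie = Trie.new
%w[ford porsche mazda].each { |w| trie.insert(w) }
trie.contains_any?("she drives a porsche 911")  # => true

The attractive property here is that the scan cost depends on the input length and the longest keyword, not on how many keywords are loaded, which matters at 10,000 words.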
I have a set of pairs of character strings, e.g.:
abba - aba,
haha - aha,
baa - ba,
exb - esp,
xa - za
The second (right) string in the pair is somewhat similar to the first (left) string.
That is, a character from the first string can be represented by nothing, by itself, or by a character from a small set of characters.
There's no simple rule for this character-to-character mapping, although there are some patterns.
Given several thousands of such string pairs, how do I deduce the transformation rules such that if I apply them to the left strings, I get the right strings?
The solution can be approximate, working correctly for, say, 80-95% of the strings.
Would you recommend using some kind of genetic algorithm? If so, how?
If you could align the characters, or rather groups of characters, you could work out tables saying that aa => a, bb => z, and so on. Conversely, if you had such tables, you could align the characters using dynamic time warping (http://en.wikipedia.org/wiki/Dynamic_time_warping). One approach is therefore to guess an alignment (e.g. one-for-one as a starting point, or just align the first and last characters of each sequence), work out a translation table from that, use DTW to get a new alignment, work out a revised translation table, and iterate. Perhaps you could wrap this up with enough maths to show that there is some measure of optimality or probability that such passes increase, climbing to a local maximum.
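A sketch of that first pass, using a plain edit-distance backtrace as a stand-in for the DTW step, and tallying what each left-hand character aligned to (with "" standing for "nothing"):

def align_and_tally(pairs)
  tally = Hash.new { |h, k| h[k] = Hash.new(0) }

  pairs.each do |left, right|
    # Standard edit-distance DP table.
    dist = Array.new(left.size + 1) do |i|
      Array.new(right.size + 1) { |j| i.zero? ? j : (j.zero? ? i : 0) }
    end
    1.upto(left.size) do |i|
      1.upto(right.size) do |j|
        cost = left[i - 1] == right[j - 1] ? 0 : 1
        dist[i][j] = [dist[i - 1][j] + 1, dist[i][j - 1] + 1,
                      dist[i - 1][j - 1] + cost].min
      end
    end

    # Backtrace, crediting each left character with what it aligned to.
    i, j = left.size, right.size
    while i > 0 && j > 0
      cost = left[i - 1] == right[j - 1] ? 0 : 1
      if dist[i][j] == dist[i - 1][j - 1] + cost
        tally[left[i - 1]][right[j - 1]] += 1   # kept or substituted
        i -= 1; j -= 1
      elsif dist[i][j] == dist[i - 1][j] + 1
        tally[left[i - 1]][""] += 1             # mapped to nothing
        i -= 1
      else
        j -= 1                                  # inserted on the right
      end
    end
    while i > 0
      tally[left[i - 1]][""] += 1
      i -= 1
    end
  end

  tally
end

align_and_tally([%w[abba aba], %w[haha aha], %w[baa ba]])
# => {"a"=>{"a"=>5, ""=>1}, "b"=>{"b"=>2, ""=>1}, "h"=>{"h"=>1, ""=>1}}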
There is probably some way of doing this by modelling a Hidden Markov Model that generates both sequences simultaneously and then deriving rules from that model, but I would not choose this approach unless I were already familiar with HMMs and had software to use as a starting point that I was happy to modify.
You could use text-to-speech to create sound waves, then compare those sound waves with others and match them by percentage.
This is my theory of how Google has such an advanced spell checker.
I need to find a substring SIMILAR to a given pattern in a huge string. The source string can be up to 100 MB in length; the pattern is rather short (10-100 chars). The problem is that I need to find not only exact matches but also similar substrings that differ from the pattern in several characters (the maximum allowed error count is provided as a parameter).
Any idea how to speed this up?
1) There are many algorithms related to string searching. One of them is the famous Knuth–Morris–Pratt algorithm.
2) You may also want to look at regular expressions ("regex") in whatever language you're using. They will certainly help you find substrings 'similar' to the original ones. For example, in Java:
String pat = "Home";
String source = "IgotanewHwme";

for (int i = 0; i < pat.length(); i++) {
    // Replace the character at position i with [a-zA-Z] so the pattern
    // tolerates a single substitution at that position.
    String newPat = "(" + pat.substring(0, i) + ")[a-zA-Z]("
                  + pat.substring(i + 1) + ")";
    System.out.println(newPat);
    System.out.println(source.matches("[a-zA-Z]*" + newPat + "[a-zA-Z]*"));
}
I think it's easy to extend this to accept any number of errors.
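For instance, a sketch of that generalization in Ruby (mine, not the answer's Java): build one pattern per choice of k positions to wildcard, at the cost of C(n, k) patterns:

def fuzzy_patterns(pat, k)
  (0...pat.length).to_a.combination(k).map do |positions|
    body = pat.chars.each_with_index.map do |c, i|
      positions.include?(i) ? "[a-zA-Z]" : Regexp.escape(c)
    end.join
    Regexp.new(body)
  end
end

# "Hwme" is one substitution away from "Home", so one of the C(4, 1) = 4
# patterns matches somewhere inside the source string.
fuzzy_patterns("Home", 1).any? { |rx| rx.match?("IgotanewHwme") }  # => true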
Sounds like you want Fuzzy/Approximate String Matching. Have a look at the Wikipedia page and see if you can't find an algorithm that suits your needs.
You can have a look at the Levenshtein distance, the Needleman–Wunsch algorithm, and the Damerau–Levenshtein distance.
They give you metrics evaluating the amount of difference between two strings (i.e. the number of additions, deletions, substitutions, etc.). They are often used to measure the variation between DNA sequences.
You will easily find implementations in various languages.
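For example, a textbook single-row Levenshtein implementation in Ruby:

def levenshtein(a, b)
  # dp[j] holds the distance between the processed prefix of a and b[0, j].
  dp = (0..b.length).to_a
  a.each_char.with_index(1) do |ca, i|
    prev_diag, dp[0] = dp[0], i
    b.each_char.with_index(1) do |cb, j|
      cost = ca == cb ? 0 : 1
      prev_diag, dp[j] = dp[j], [dp[j] + 1, dp[j - 1] + 1, prev_diag + cost].min
    end
  end
  dp[b.length]
end

levenshtein("kitten", "sitting")  # => 3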