find some sentences - ruby

I'd like to find a good way to extract some (let's say two) sentences from a text. Which would be better: a regexp or the split method? Your ideas?
As requested by Jeremy Stein, here are some examples:
Input:
The first thing to do is to create the Comment model. We’ll create this in the normal way, but with one small difference. If we were just creating comments for an Article we’d have an integer field called article_id in the model to store the foreign key, but in this case we’re going to need something more abstract.
First two sentences:
The first thing to do is to create the Comment model. We’ll create this in the normal way, but with one small difference.
Input:
Mr. T is one mean dude. I'd hate to get in a fight with him.
First two sentences:
Mr. T is one mean dude. I'd hate to get in a fight with him.
Input:
The D.C. Sniper was executed by lethal injection at a Virginia prison. Death was pronounced at 9:11 p.m. ET.
First two sentences:
The D.C. Sniper was executed by lethal injection at a Virginia prison. Death was pronounced at 9:11 p.m. ET.
Input:
In her concluding remarks, the opposing attorney said that "...in this and so many other instances, two wrongs won’t make a right." The jury seemed to agree.
First two sentences:
In her concluding remarks, the opposing attorney said that "...in this and so many other instances, two wrongs won’t make a right." The jury seemed to agree.
Guys, as you can see, it's not so easy to pick two sentences out of a text. :(

As you've noticed, sentence tokenizing is a bit trickier than it might first seem, so you may as well take advantage of existing solutions. The Punkt sentence tokenizing algorithm is popular in NLP, and there is a good implementation in the Python Natural Language Toolkit, which they describe the use of here. They also describe another approach here.
There's probably other implementations around, or you could also read the original paper describing the Punkt algorithm: Kiss, Tibor and Strunk, Jan (2006): Unsupervised Multilingual Sentence Boundary Detection. Computational Linguistics 32: 485-525.
You can also read another Stack Overflow question about sentence tokenizing here.

your_string = "First sentence. Second sentence. Third sentence"
sentences = your_string.split(".")
=> ["First sentence", " Second sentence", " Third sentence"]
No need to make simple code complicated.
Edit: Now that you've clarified that the real input is more complex than your initial example, you should disregard this answer as it doesn't consider edge cases. An initial look at NLP should show you what you're getting into, though.
Some of the edge cases that I've seen in the past to be a bit complicated are:
Dates: Some regions use dd.mm.yyyy
Quotes: While he was sighing — "Whatever, do it. Now. And by the way...". This was enough.
Units: He was going at 138 km. while driving on the freeway.
If you plan to parse these texts you should stay away from splits or regular expressions.

This will usually match sentences.
/\S(?:(?![.?!]+\s).)*[.?!]+(?=\s|$)/m
For your example of two sentences, take the first two matches.
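For instance, a rough sketch of that in Ruby (the sample text here is made up; real input with abbreviations like "Mr." will still trip it up):
text = "First sentence. Second one! And a third?"
sentences = text.scan(/\S(?:(?![.?!]+\s).)*[.?!]+(?=\s|$)/m)
# => ["First sentence.", "Second one!", "And a third?"]
first_two = sentences.first(2)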

irb(main):005:0> a = "The first sentence. The second sentence. And the third"
irb(main):006:0> a.split(".")[0...2]
=> ["The first sentence", " The second sentence"]
irb(main):007:0>
EDIT: here's how you handle the "This is a sentence ...... and another . And yet another ..." case:
irb(main):001:0> a = "This is the first sentence ....... And the second. Let's not forget the third"
=> "This is the first sentence ....... And the second. Let's not forget the third"
irb(main):002:0> a.split(/\.+/)
=> ["This is the first sentence ", " And the second", " Let's not forget the third"]
And you can apply the same range operator (0...2) to extract the first two.

You will find tips and software links on the sentence boundary detection Wikipedia page.

If you know which sentences you're searching for, a regex should do well with something like
((YOUR SENTENCE HERE)|(YOUR OTHER SENTENCE))
Split would probably use up quite a lot of memory, since it also keeps the things you don't need (all the text that isn't your sentence), while a regex keeps only the sentence you searched for (if it finds it, of course).
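A minimal sketch of that in Ruby, using Regexp.union to escape metacharacters such as "." automatically (the target sentences here are placeholders):
targets = ["Mr. T is one mean dude.", "I'd hate to get in a fight with him."]
pattern = Regexp.union(targets)
text = "Some intro. Mr. T is one mean dude. Some outro."
puts text.scan(pattern) # prints only the sentences that were actually found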

If you're segmenting a piece of text into sentences, then what you want to do is begin by determining which punctuation marks can separate sentences. In general, this is !, ? and . (but if all you care about is . for the texts you're processing, then just go with that).
Now, since these can appear inside quotations or as parts of abbreviations, what you want to do is find each occurrence of these punctuation marks and run some sort of machine learning classifier to determine whether that occurrence starts a new sentence or does something else. This involves training data and a properly constructed classifier. And it won't be 100% accurate, because there's probably no way to be 100% accurate.
I suggest looking in the literature for sentence segmentation techniques and having a look at the various natural language processing toolkits out there. I haven't really found one for Ruby yet, but I happen to like OpenNLP (which is in Java).
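For flavor, here is a toy rule-based stand-in for that classifier idea in Ruby. A real system would learn these decisions from training data; the abbreviation list below is a made-up stub:
# Decide whether the punctuation mark at index i ends a sentence.
ABBREV = %w[mr mrs dr st prof] # stub; a real abbreviation list is much longer

def sentence_boundary?(text, i)
  return false unless ".!?".include?(text[i])
  before = text[0...i][/\S+\z/].to_s.downcase.delete(".")
  after  = text[(i + 1)..-1] || ""
  return false if ABBREV.include?(before) # "Mr." and friends don't end sentences
  after.strip.empty? || after.match?(/\A\s+["']?[A-Z]/)
end

text = "Mr. T is one mean dude. I'd hate to fight him."
boundaries = (0...text.length).select { |i| sentence_boundary?(text, i) }
# boundaries holds the indexes of the marks judged to end sentences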

Related

Count Number of Sentences in Ruby

I have searched around everywhere and did not manage to find a solution for counting the number of sentences in a String using Ruby. Does anyone know how to do it?
Example
string = "The best things in an artist’s work are so much a matter of intuition, that there is much to be said for the point of view that would altogether discourage intellectual inquiry into artistic phenomena on the part of the artist. Intuitions are shy things and apt to disappear if looked into too closely. And there is undoubtedly a danger that too much knowledge and training may supplant the natural intuitive feeling of a student, leaving only a cold knowledge of the means of expression in its place. For the artist, if he has the right stuff in him ... "
This string should return the number 4.
You can split the text into sentences and count them. Here:
string.scan(/[^\.!?]+[\.!?]/).map(&:strip).count # scan has regex to split string and strip will remove trailing spaces.
# => 4
Explaining regex:
[^\.!?]
A caret inside a character class [^ ] is the negation operator, which means we are looking for characters that are not in the list: ., ! and ?.
+
is a greedy quantifier that matches the preceding class one or more times (capturing our sentences here and ignoring repetitions like ...).
[\.!?]
matching characters ., ! or ?.
In a nutshell, we capture all the characters that are not ., ! or ? until we hit a character that is ., ! or ?. That chunk can then be treated as a sentence (in a broad sense).
I think it makes sense to treat a word character followed by ?, ! or . as the delimiter of a sentence:
string.strip.split(/\w[?!.]/).length
#=> 4
So I'm not considering the ... a delimiter when it hangs on its own like that:
"I waited a while ... and then I went home"
But then again, maybe I should...
It also occurs to me that maybe a better delimiter is a punctuation mark followed by some space and a capital letter:
string.split(/[?!.]\s+[A-Z]/).length
#=> 4
Sentences end with full stops, question marks, and exclamation marks. They can also be separated with dashes and other punctuation, but we won’t worry about these rare cases here.
The split is simple. Instead of asking Ruby to split the text on one type of character, you simply ask it to split on any of three types of characters, like so:
txt = "The best things in an artist’s work are so much a matter of intuition, that there is much to be said for the point of view that would altogether discourage intellectual inquiry into artistic phenomena on the part of the artist. Intuitions are shy things and apt to disappear if looked into too closely. And there is undoubtedly a danger that too much knowledge and training may supplant the natural intuitive feeling of a student, leaving only a cold knowledge of the means of expression in its place. For the artist, if he has the right stuff in him ... "
sentence_count = txt.split(/\.|\?|!/).length
puts sentence_count
#=> 7 (the trailing ellipsis inflates the count: consecutive dots produce extra empty fields)
string.squeeze('.!?').count('.!?') # squeeze collapses runs like "..." into a single mark, then count tallies the terminators
#=> 4

Unscrambling words in a sentence using Natural Language Generation

I have a sentence in English. Now I want to jumble the words up and feed that set of words into a program which should unscramble them according to the normal rules of English grammar and output the original sentence. I can vaguely assume it would require Natural Language Generation algorithms.
For example:
Sentence: Mary has gone for a walk with her dog.
Set of words: {has, for, a, with, her, dog, Mary, gone, walk}
The output should be the same sentence.
I assume the set of words alone will never be enough to regenerate the original sentence. But what more information must be included to recover it?
Please guide me on where I should start.
Language models are things that can take in a text or sentence (any sequence of words) and assign it a probability based on how well the model "recognizes" that text.
To solve your problem, you could take a language model and use it to compute the probability of each possible permutation of the input words. The most probable sentence according to the model is probably the most coherent one.
For a situation like yours, an n-gram model (n = 2 or 3 should suffice) or a Hidden Markov model leveraging part-of-speech tags should do the trick.
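A minimal sketch of the n-gram idea in Ruby, scoring every permutation with an add-one-smoothed bigram model. The toy corpus is made up, and brute-forcing permutations explodes factorially, so this is only illustrative:
corpus = "mary has gone for a walk . she walks with her dog . her dog likes a walk"
tokens = corpus.split
bigrams  = Hash.new(0)
unigrams = Hash.new(0)
tokens.each_cons(2) { |a, b| bigrams[[a, b]] += 1 }
tokens.each { |t| unigrams[t] += 1 }

score = lambda do |words|
  words.each_cons(2).sum do |a, b|
    # add-one smoothing so unseen bigrams don't zero out the score
    Math.log((bigrams[[a, b]] + 1.0) / (unigrams[a] + unigrams.size))
  end
end

words = %w[dog her with walk a for gone has mary]
best = words.permutation.max_by { |perm| score.call(perm) }
puts best.join(" ") # should usually recover "mary has gone for a walk with her dog"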
You will not be able to solve your problem without additional information. Take this example:
{"happy", "you", "are"}
Can you reconstruct the sentence? Is it "You are happy" or is it "Are you happy"? Note that the words are the same but the meaning changes radically. No matter how good an algorithm you write, it will not be able to reconstruct the sentence if you cannot.
You need to do the following to get started:
Maintain a dictionary of English words classified as nouns, adjectives, verbs, etc.
Build grammar rules for the English language, which you can get from any English tutorial.
Try to rearrange the words to match the grammar rules.
Note: English is a very ambiguous language, so you might end up with something else.
e.g.
grammar rule: article noun verb
input words: dog, barks, the
dictionary lookup: dog => noun, barks => verb, the => article
Rearrange the words according to the rule.
There can be multiple rules, and a word can also be of multiple types, so try all possibilities.
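A minimal sketch of that approach in Ruby (the lexicon and the single rule are made up for illustration; a real system needs many rules and a large dictionary):
LEXICON = { "dog" => :noun, "barks" => :verb, "the" => :article }
RULE = [:article, :noun, :verb]

words = %w[dog barks the]
match = words.permutation.find { |perm| perm.map { |w| LEXICON[w] } == RULE }
puts match.join(" ") # => "the dog barks"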

Converting PI digits into text strings

It's kind of interesting that pi's decimal representation never ends and never settles into a permanently repeating pattern. If pi is a normal number, which is conjectured but not proven, then it contains every possible finite combination of digits.
This guy calculated 5 trillion (5 x 10^12) digits of pi :D
http://www.numberworld.org/misc_runs/pi-5t/details.html
From the internet: "Converted into ASCII text, somewhere in that infinite string of digits is the name of every person you will ever love, the date, time and manner of your death, and the answers to all the great questions of the universe."
I'm wondering if somebody has already converted and analyzed the resulting string for known sequences of letters (words/sentences)?
Check out this page: http://pi.nersc.gov/.
It allows you to search for both character strings and hexadecimal sequences. Note that this search engine has indexed only the first 4 billion digits of pi, and it uses a formula to compute arbitrarily positioned binary or hexadecimal digits beyond those indexed.
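If you want to experiment locally instead, here is a rough sketch of one possible digit-to-text scheme in Ruby: read decimal digits in pairs and map 01-26 to a-z, skipping pairs outside that range. The file name and the encoding are assumptions, not any standard:
digits = File.read("pi_digits.txt").delete("^0-9") # keep only the digits
letters = digits.chars.each_slice(2).filter_map do |pair|
  next unless pair.size == 2
  n = pair.join.to_i
  (96 + n).chr if n.between?(1, 26) # 01 => "a", 26 => "z"
end.join
p letters.scan(/cat|dog|love/).tally # count a few short words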
The idea that pi contains everything ever is a nice idea, but if it's correct, that means it also contains an infinite number of false statements about everything ever. For example, if pi contains a list of all the people you will ever love, then it also contains lists that merely look like lists of people you will love, but are really just mixes of names in a pattern that makes them look legit.
Following the same idea, the date, time, and manner of your death could also be "falsified". For example, let's say you are a man named Jason Delara, and you die at the age of 83 at 11:35 PM in your sleep. In Pi somewhere it can say in ASCII text "Jason Delara will die at age 83, 11:35 PM, passed in his sleep." It would also say somewhere else that "Jason Delara will die at age 35, at 6:00 AM, passed in a car accident." There could be an "infinite" amount of these false predictions.
There's also the fact that, following the idea above, all but one of the answers to any given great question of the universe in those digits are wrong, even if many of the answers make sense. I've thought about this a lot, and I wondered, "What if some part of the digits states which facts are correct and which are not?" The answer is: then there is an infinite number of false lists in the digits claiming to do the same as the real list. In short, it would be pointless to convert pi to ASCII text to try to figure everything out.
I know I'm a little late to the party, but I wrote this for anybody who comes here looking for the answers to the universe in an endless, non-repeating decimal.
It is massively convenient that pi is an irrational number whose digits we are still computing: if you can't find what you want in the sequence so far, then by definition it just happens to come later on.
As for it containing hidden information - if you create any random sequence long enough, you'll be able to create simple words from the resulting output.
Conspiracy theorists just love to see patterns where there are none. They forget the other noise and are endlessly fascinated by mere coincidences.
Would just like to provide further context for this question. Yes, the point is that pi goes on infinitely. That means there are endless possibilities for sentence structure and letter combination. Assuming pi is normal, every single combination of letters occurs somewhere in pi. So technically, everything in pi could apply to everything in the observable world around us.

Simple filtering out of common words from a text description

Words like "a", "the", "best", "kind". I am pretty sure there are good ways of achieving this.
Just to be clear, I am looking for
The simplest solution that can be implemented, preferably in ruby.
I have a high level of tolerance for errors
If a library of common phrases is what i need, perfectly happy with that too
These common words are known as "stop words" - there is a similar stackoverflow question about this here: "Stop words" list for English?
To summarize:
If you have a large amount of text to deal with, it would be worth gathering statistics about the frequency of words in that particular data set, and taking the most frequent words for your stop word list. (That you include "kind" in your examples suggests to me that you might have quite an unusual set of data, e.g. with lots of colloquial expressions like "kind of", so perhaps you would need to do this.)
Since you say you don't mind much about errors, then it may be sufficient to just use a list of stop words for English that someone else has produced, e.g. the fairly long one used by MySQL or anything else that Google turns up.
If you just put these words into a hash in your program it should be easy to filter any list of words.
Common = %w{ a and or to the is in be }
Uncommon = %{
To be, or not to be: that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And by opposing end them? To die: to sleep;
No more; and by a sleep to say we end
The heart-ache and the thousand natural shocks
That flesh is heir to, 'tis a consummation
Devoutly to be wish'd. To die, to sleep;
To sleep: perchance to dream: ay, there's the rub;
For in that sleep of death what dreams may come
}.split /\b/
ignore_me, result = {}, []
Common.each { |w| ignore_me[w.downcase] = :Common }
Uncommon.each { |w| result << w unless ignore_me[w.downcase[/\w*/]] }
puts result.join
, not : that question:
Whether 'tis nobler mind suffer
slings arrows of outrageous fortune,
take arms against sea of troubles,
by opposing end them? die: sleep;
No more; by sleep say we end
heart-ache thousand natural shocks
That flesh heir , 'tis consummation
Devoutly wish'd. die, sleep;
sleep: perchance dream: ay, there's rub;
For that sleep of death what dreams may come
This is a variation on DigitalRoss's answer.
str = <<EOF
To be, or not to be: that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And by opposing end them? To die: to sleep;
No more; and by a sleep to say we end
The heart-ache and the thousand natural shocks
That flesh is heir to, 'tis a consummation
Devoutly to be wish'd. To die, to sleep;
To sleep: perchance to dream: ay, there's the rub;
For in that sleep of death what dreams may come
EOF
common = {}
%w{ a and or to the is in be }.each{|w| common[w] = true}
puts str.gsub(/\b\w+\b/){|word| common[word.downcase] ? '': word}.squeeze(' ')
Also relevant:
What's the fastest way to check if a word from one string is in another string?
Hold on, you need to do some research before you take out stopwords (aka noise words, junk words). Index size and processing resources aren't the only issues. A lot depends on whether end-users will be typing queries, or you will be working with long automated queries.
All search log analysis shows that people tend to type one to three words per query. When that's all a search has to work with, we can't afford to lose anything. For example, a collection might have the word "copyright" on many documents -- making it very common -- but if that word isn't in the index, it's impossible to do exact phrase searches or proximity relevance ranking. In addition, there are perfectly legitimate reasons to search for the most common words: people may be looking for "The Who", or worse, "The The".
So while there are technical issues to consider, and taking out stopwords is one solution, it may not be the right solution for the overall problem that you are trying to solve.
If you have an array of words to remove named stop_words, then you get the result from this expression:
description.scan(/\w+/).reject do |word|
  stop_words.include? word
end.join ' '
If you want to preserve the non-word characters between each word,
description.scan(/(\w+)(\W+)/).reject do |(word, other)|
  stop_words.include? word
end.flatten.join

Looking for algorithm that reverses the sprintf() function output

I am working on a project that requires parsing log files. I am looking for a fast algorithm that would take groups of messages like this:
The temperature at P1 is 35F.
The temperature at P1 is 40F.
The temperature at P3 is 35F.
Logger stopped.
Logger started.
The temperature at P1 is 40F.
and puts out something in the form of a printf():
"The temperature at P%d is %dF.", Int1, Int2
{(1,35), (1, 40), (3, 35), (1,40)}
The algorithm needs to be generic enough to recognize almost any data load in message groups.
I tried searching for this kind of technology, but I don't even know the correct terms to search for.
I think you might be overlooking fscanf() and sscanf(), which are the opposites of fprintf() and sprintf().
Overview:
A (very) naïve algorithm keeps track of the frequency of words on a per-column basis, where one can assume that each line can be separated into columns by a delimiter.
Example input:
The dog jumped over the moon
The cat jumped over the moon
The moon jumped over the moon
The car jumped over the moon
Frequencies:
Column 1: {The: 4}
Column 2: {car: 1, cat: 1, dog: 1, moon: 1}
Column 3: {jumped: 4}
Column 4: {over: 4}
Column 5: {the: 4}
Column 6: {moon: 4}
We could partition these frequency lists further by grouping based on the total number of fields, but in this simple and convenient example, we are only working with a fixed number of fields (6).
The next step is to iterate through lines which generated these frequency lists, so let's take the first example.
The: meets some hand-wavy criteria and the algorithm decides it must be static.
dog: doesn't appear to be static based on the rest of the frequency list, and thus it must be dynamic as opposed to static text. We loop through a few pre-defined regular expressions and come up with /[a-z]+/i.
over: same deal as #1; it's static, so leave as is.
the: same deal as #1; it's static, so leave as is.
moon: same deal as #1; it's static, so leave as is.
Thus, just from going over the first line, we can put together the following regular expression:
/The ([a-z]+?) jumped over the moon/
Considerations:
Obviously one can choose to scan part or the whole document for the first pass, as long as one is confident the frequency lists will be a sufficient sampling of the entire data.
False positives may creep into the results, and it will be up to the filtering algorithm (hand-waving) to provide the best threshold between static and dynamic fields, or some human post-processing.
The overall idea is probably a good one, but the actual implementation will definitely weigh in on the speed and efficiency of this algorithm.
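A minimal sketch of the per-column frequency idea in Ruby (the all-lines-agree test stands in for the "hand-wavy criteria" above):
lines = [
  "The dog jumped over the moon",
  "The cat jumped over the moon",
  "The moon jumped over the moon",
  "The car jumped over the moon",
]
columns = lines.map(&:split).transpose # assumes a fixed number of columns
pattern = columns.map do |col|
  if col.uniq.size == 1 # every line agrees: treat as static text
    Regexp.escape(col.first)
  else # varies across lines: treat as a dynamic field
    "([a-z]+)"
  end
end
regex = Regexp.new("\\A" + pattern.join(" ") + "\\z", Regexp::IGNORECASE)
# regex now matches lines of the form "The ... jumped over the moon"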
Thanks for all the great suggestions.
Chris is right. I am looking for a generic solution for normalizing any kind of text. The solution to the problem boils down to dynamically finding patterns in two or more similar strings.
Almost like predicting the next element in a set, based on the previous two:
1: Everest is 30000 feet high
2: K2 is 28000 feet high
=> What is the pattern?
=> Answer:
[name] is [number] feet high
Now, the text file can have millions of lines and thousands of patterns. I would like to parse the files very, very fast, find the patterns, and collect the data sets that are associated with each pattern.
I thought about creating some high level semantic hashes to represent the patterns in the message strings.
I would use a tokenizer and give each of the tokens types a specific "weight".
Then I would group the hashes and rate their similarity. Once the grouping is done I would collect the data sets.
I was hoping, that I didn't have to reinvent the wheel and could reuse something that is already out there.
Klaus
It depends on what you are trying to do. If your goal is to quickly generate sprintf() input, this works. If you are trying to parse data, maybe regular expressions would do too.
You're not going to find a tool that can simply take arbitrary input, guess what data you want from it, and produce the output you want. That sounds like strong AI to me.
Producing something like this, even just to recognize numbers, gets really hairy. For example, is "123.456" one number or two? How about "123,456"? Is "35F" a decimal number and an 'F', or is it the hex value 0x35F? You're going to have to build something that parses in the way you need. You can do this with regular expressions, or with sscanf, or some other way, but you're going to have to write something custom.
However, with basic regular expressions, you can do this yourself. It won't be magic, but it's not that much work. Something like this will parse the lines you're interested in and consolidate them (Perl):
my @vals = ();
while (defined(my $line = <>))
{
    if ($line =~ /The temperature at P(\d*) is (\d*)F\./)
    {
        push(@vals, "($1,$2)");
    }
}
print "The temperature at P%d is %dF. {";
for (my $i = 0; $i < @vals; $i++)
{
    print $vals[$i];
    if ($i < @vals - 1)
    {
        print ",";
    }
}
print "}\n";
The output from this is:
The temperature at P%d is %dF. {(1,35),(1,40),(3,35),(1,40)}
You could do something similar for each type of line you need to parse. You could even read these regular expressions from a file, instead of custom coding each one.
I don't know of any specific tool to do that. What I did when I had a similar problem to solve was trying to guess regular expressions to match lines.
I then processed the files and displayed only the unmatched lines. If a line is unmatched, it means that the pattern is wrong and should be tweaked or another pattern should be added.
After around an hour of work, I succeeded in finding the ~20 patterns to match 10000+ lines.
In your case, you can first "guess" that one pattern is "The temperature at P[1-3] is [0-9]{2}F.". If you reprocess the file removing any matched line, it leaves "only":
Logger stopped.
Logger started.
Which you can then match with "Logger (.+).".
You can then refine the patterns and find new ones to match your whole log.
@John: I think that the question relates to an algorithm that actually recognises patterns in log files and automatically "guesses" appropriate format strings and data for it. The *scanf family can't do that on its own; it can only be of help once the patterns have been recognised in the first place.
@Derek Park: Well, even a strong AI couldn't be sure it had the right answer.
Perhaps some compression-like mechanism could be used:
Find large, frequent substrings
Find large, frequent substring patterns. (i.e. [pattern:1] [junk] [pattern:2])
Another item to consider might be to group lines by edit distance. Grouping similar lines should split the problem into one-pattern-per-group chunks (see the sketch below).
Actually, if you manage to write this, let the whole world know, I think a lot of us would like this tool!
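For what it's worth, a rough sketch of the edit-distance grouping idea in Ruby; the one-third-of-line-length threshold is an arbitrary guess:
# Classic dynamic-programming Levenshtein distance.
def edit_distance(a, b)
  m = Array.new(a.size + 1) { |i| Array.new(b.size + 1) { |j| i.zero? ? j : (j.zero? ? i : 0) } }
  (1..a.size).each do |i|
    (1..b.size).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      m[i][j] = [m[i - 1][j] + 1, m[i][j - 1] + 1, m[i - 1][j - 1] + cost].min
    end
  end
  m[a.size][b.size]
end

lines = [
  "The temperature at P1 is 35F.",
  "The temperature at P1 is 40F.",
  "The temperature at P3 is 35F.",
  "Logger stopped.",
  "Logger started.",
]
groups = []
lines.each do |line|
  group = groups.find { |g| edit_distance(g.first, line) < line.size / 3 }
  group ? group << line : groups << [line]
end
groups.each { |g| p g } # two groups: the temperature lines and the Logger lines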
@Anders
Well, even a strong AI couldn't be sure it had the right answer.
I was thinking that sufficiently strong AI could usually figure out the right answer from the context. e.g. Strong AI could recognize that "35F" in this context is a temperature and not a hex number. There are definitely cases where even strong AI would be unable to answer. Those are the same cases where a human would be unable to answer, though (assuming very strong AI).
Of course, it doesn't really matter, since we don't have strong AI. :)
http://www.logparser.com forwards to an IIS forum which seems fairly active. This is the official site for Gabriele Giuseppini's "Log Parser Toolkit". While I have never actually used this tool, I did pick up a cheap copy of the book from Amazon Marketplace; today a copy is as low as $16. Nothing beats a dead-tree interface for just flipping through pages.
Glancing at this forum, I had not previously heard about the "New GUI tool for MS Log Parser, Log Parser Lizard" at http://www.lizardl.com/.
The key issue, of course, is the complexity of your GRAMMAR. To use any kind of log parser as the term is commonly used, you need to know exactly what you're scanning for: you need to be able to write a BNF for it. Many years ago I took a course based on Aho and Ullman's "Dragon Book", and the thoroughly understood LALR technology can give you optimal speed, provided of course that you have that CFG.
On the other hand it does seem you're possibly reaching for something AI-like, which is a different order of complexity entirely.
