dynamic programming word segmentation - algorithm

Suppose I have a string like 'meetateight' and I need to segment it into meaningful words like 'meet', 'at', 'eight' using dynamic programming.
To judge how "good" a block/segment x = x1x2...xk is, I am given a black box that, on input x, returns a real number quality(x): a large positive value indicates x is close to an English word, and a large negative value indicates it is far from one.
I need help designing an algorithm for this.
I tried an algorithm that adds letters one at a time based on their quality and places a cut whenever there is a dip in quality.
But this fails on the example above because it cuts off 'me' instead of 'meet'.
I need suggestions for a better algorithm.
Thanks

What about building a trie from an English dictionary and walking it down while scanning your string, trying all possible paths to a leaf (backtracking when you have more than one choice)?

You can use dynamic programming and keep track of the best score for each prefix of your input, adding one letter at a time. Each time you extend the prefix, check whether its ending forms a word that can be appended to a shorter prefix you have already scored (choosing the option with the best score). For example:
m = 0
me = 1
mee = 0
meet = 1
meeta = 1 (meet + a)
meetat = 1 (meet + at)
meetate = 1 (meet + ate)
meetatei = 1 (meetate + i)
meetateig = 0
meetateigh = 0
meetateight = 1 (meetat + eight)
If the scores are values between 0 and 1, you can multiply them together instead. Also save which word produced each prefix's best score so you can split the whole string at the end.
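A minimal sketch of this prefix DP in Python, assuming the black-box quality(x) from the question and that block scores combine by addition (use products, or sums of logarithms, if the scores behave like probabilities):

def segment(text, quality):
    # best[i] = best total score over all segmentations of text[:i]
    # back[i] = start index of the last block in that best segmentation
    n = len(text)
    best = [float('-inf')] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):
            score = best[j] + quality(text[j:i])   # last block is text[j:i]
            if score > best[i]:
                best[i], back[i] = score, j
    blocks, i = [], n
    while i > 0:                                   # walk back-pointers to recover the blocks
        blocks.append(text[back[i]:i])
        i = back[i]
    return list(reversed(blocks))

With a reasonable quality function this returns ['meet', 'at', 'eight'] for 'meetateight'. The double loop makes O(n^2) calls to quality; you can cap the inner loop at a maximum block length if the black box is expensive.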

I wrote a program to do this at my blog; it's too long to include here. The basic idea is to chop off prefixes that form words, then recursively process the rest of the input, backtracking when it is not possible to split the entire string.


Deriving the subsolution table for LCS from the brute force solution

I'm working on the LCS problem using dynamic programming. I'm having trouble deriving the DP solution myself without looking at the solution.
I currently reason that given two strings, P and Q:
We can enumerate all subsequences of P, of which there are 2^n.
We can likewise enumerate all subsequences of Q, of which there are 2^m.
So if we check every pair for a shared subsequence, the running time is O(2^n * 2^m) = O(2^(n+m)).
I don't understand how to go from this brute-force solution to the dynamic programming solution. I understand that we need to identify overlapping subproblems, but I can't find a good explanation of how to spot them and then derive the subsolution table.
Let me know if this question makes sense.
Here's the basic idea that creates the magic for this algorithm.
Consider two strings S1 and S2:
S1 = c1 c2 c3 ... cm, of length m
S2 = b1 b2 b3 ... bn, of length n
Say you have a function LCS(arg1, arg2), where
arg1 = (S1, m), the string S1 of length m
arg2 = (S2, n), the string S2 of length n
and LCS(arg1, arg2) gives the length of the longest common subsequence of its two arguments.
Now suppose the last characters of the two strings are the same:
cm = bn
and suppose no other pair of characters matches. That means:
LCS(arg1, arg2) = 1 (the last character) + 0 (the remaining strings)
Once that equation is clear, it follows that if the remaining strings do have something in common, then:
LCS(arg1, arg2) = 1 (the last character) + LCS(arg1 - cm, arg2 - bn) (the remaining strings)
If the last two characters do not match, at least one of them cannot be part of the common subsequence, so we drop the last character of one string or the other and keep the better of the two results:
LCS(arg1, arg2) = max(LCS(arg1 - cm, arg2), LCS(arg1, arg2 - bn))
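A short bottom-up sketch of that recurrence in Python; the table dp is exactly the subsolution table the question asks about, with dp[i][j] holding the LCS length of the first i characters of S1 and the first j characters of S2:

def lcs_length(s1, s2):
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]             # last characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one last character
    return dp[m][n]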

Splitting a sentence to minimize sentence lengths

I have come across the following problem statement:
You have a sentence written entirely in a single row. You would like to split it into several rows by replacing some of the spaces
with "new row" indicators. Your goal is to minimize the width of the
longest row in the resulting text ("new row" indicators do not count
towards the width of a row). You may replace at most K spaces.
You will be given a sentence and a K. Split the sentence using the
procedure described above and return the width of the longest row.
I am a little lost about where to start. To me, it seems I need to figure out every possible row length that can result from splitting the single sentence with up to K breaks.
I can see a couple of edge cases:
There are <= K words in the sentence, therefore return the longest word.
The sentence length is 0, return 0
If neither of those criteria is true, then we have to determine all possible ways of splitting the sentence and then return the minimum over all those options. This is the part I don't know how to do (and it is obviously the heart of the problem).
You can solve it by inverting the problem. Say I fix the maximum allowed row width to L. Can you compute the minimum number of breaks needed to satisfy it?
Yes: you just break before the first word that would push the current row over L, and count the breaks (O(N)).
Now we just have to find the minimum L that requires at most K breaks. You can binary-search L over the length of the input, for a final complexity of O(N log N).
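A sketch of that inverted search in Python; min_breaks is the greedy O(N) feasibility check, and the function and variable names are just for illustration:

def min_breaks(words, L):
    # Greedily pack words into rows of width at most L and count the breaks used.
    breaks, row = 0, 0
    for w in words:
        if row == 0:
            row = len(w)
        elif row + 1 + len(w) <= L:
            row += 1 + len(w)      # word still fits on the current row
        else:
            breaks += 1            # start a new row
            row = len(w)
    return breaks

def min_longest_row(sentence, K):
    words = sentence.split()
    if not words:
        return 0
    lo = max(len(w) for w in words)   # a row can never be narrower than the longest word
    hi = len(sentence)
    while lo < hi:
        mid = (lo + hi) // 2
        if min_breaks(words, mid) <= K:
            hi = mid
        else:
            lo = mid + 1
    return lo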
First Answer
What you want to achieve is Minimum Raggedness. If you just want the algorithm, it is here as a PDF. If the research paper's link goes bad, please search for the famous paper named Breaking Paragraphs into Lines by Knuth.
However, if you want to get your hands on some implementations, then in the question Balanced word wrap (Minimum raggedness) in PHP on SO, people have given implementations not only in PHP but in C, C++ and bash as well.
Second Answer
Though this is not an exact approach, it is quick and dirty if that is what you are looking for. This method will not return the correct answer for every case; it is for those for whom time to ship the product matters more.
Idea
You already know the length of your input string. Let's call it L;
When putting in K breaks, the best-case scenario is to break the string into parts of exactly L / (K + 1) size;
So break your string at the word boundary that leaves the resulting part's length closest to L / (K + 1).
My recursive solution, which can be improved through memoization or dynamic programming.
def split(sentence, K):
    # No text left: a row of width 0.
    if not sentence:
        return 0
    # No space to break at, or no breaks left: the whole piece is one row.
    if ' ' not in sentence or K == 0:
        return len(sentence)
    spaces = [i for i, s in enumerate(sentence) if s == ' ']
    res = float('inf')
    for space in spaces:
        # First row ends at this space; split the rest with one break fewer.
        res = min(res, max(space, split(sentence[space + 1:], K - 1)))
    return res

Algorithm for traveling through a sequence of digits

Does anybody know an efficient algorithm for traveling through a sequence of digits by looking for a certain combination, e.g.:
There is a given sequence, and I want to find the index of a certain combination such as 21??73 in, e.g.:
... 124321947362862188734738 ...
So I have the pattern 21??73 and need to find out the indices of:
219473
218873
I assume that there is a way to avoid touching every single digit.
EDIT:
"Lasse V. Karlsen" has brought up an important point that I did forget.
There is no overlapping allowed, e.g.
21217373215573
212173 is ok, then the next would be 215573
Seems like you are looking for the regular expression 21..73 - . stands for "any character" (1).
Then you just need to iterate over all matches of this regex.
Most high level languages already have a regex library built in that is simple and easy to use for such tasks.
Note that many regex libraries already take care of "no overlapping" for you, including java:
String s = "21217373215573";
Matcher m = Pattern.compile("21..73").matcher(s);
while (m.find()) System.out.println(m.group());
Will yield the required output of:
212173
215573
(1) This assumes your sequence is of digits in the first place, as your question implies.
Depending on what language you are using, you could use regular expressions of the sort 21\d{2}73 which will look for 21, followed by two digits which are in turn followed by 73. Languages such as C# allow you to get the index of the match, as shown here.
Alternatively, you could construct your own finite state machine, which could be something of the sort:
string input = ...
int index = 0
while (index < input.length - 5)
    if (input[index] == '2' && input[index + 1] == '1' && input[index + 4] == '7' && input[index + 5] == '3')
        print(index)
        index += 6
    else
        index++
Since you don't know where these combinations start and you are not looking only for the first one, there is no way to avoid touching each digit (except perhaps the last n-1 digits, where n is the length of the combination, because there is not enough room left there for a match).
I don't know a better way than reading the whole sequence, because you can have
... 84452121737338494684 ...
and then you have two overlapping combinations. If you are not looking for overlapping combinations the problem is a bit easier, but overlap is a possibility in your example.
Some non-overlap algorithm pseudo-code:
start := -1; i := 0
for each position in sequence
    if combination[i] = '?' or sequence[position] = combination[i]
        if start = -1
            start := position
        endif
        i++
        if i >= length(combination)
            possibleCombinations.add(start)
            start := -1
            i := 0
        endif
    else
        start := -1
        i := 0
    endif
end
This should be O(n), the same complexity as searching for a value in an unsorted array. If you are looking for overlapping combinations, as in my example, the complexity is a bit higher: you have to track each possible start, which adds an inner loop that checks whether each started match can still be continued, keeping the start value while the combination continues and discarding it when it is broken. The complexity then becomes roughly O(n * length(combination)), because there can never be more active starts than the length of the combination.
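For the overlapping case, a brute-force sketch in Python that checks every possible start, which matches the O(n * length(combination)) estimate above; '?' is treated as matching any digit:

def find_all(sequence, pattern):
    hits = []
    for start in range(len(sequence) - len(pattern) + 1):
        if all(p == '?' or p == sequence[start + k]
               for k, p in enumerate(pattern)):
            hits.append(start)
    return hits

find_all("84452121737338494684", "21??73") returns [4, 6], the two overlapping matches from the example.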

OCR'ed and real string similarity

The problem:
There is a set of words S = {W1, W2, ..., Wn} where n < 10. This set just exists; we do not know its content.
These words are drawn on some image and then recognized. The OCR algorithm is poor, as is the DPI, and as a result there are mistakes. So we have a second set of erroneous words S' = {W1', W2', ..., Wn'}.
Now we have a word W that is a member of the original set S, and I need an algorithm which, given W and S', returns the index of the word in S' most similar to W.
Example: S is {"alpha", "bravo", "charlie"}; S' is, for example, {"alPha", "hravc", "onarlio"} (these are realistic OCR errors).
So the target function should return F("alpha") => 0, F("bravo") => 1, F("charlie") => 2
I tried Levenshtein distance, but it does not work well, because it returns small numbers for small strings and the OCRed string can be longer than the original.
For example, if S' is {'hornist', 'cornrnunist'} and the given word is 'communist', the Levenshtein distance is 4 for both words, but the right one is the second.
Any suggestions?
As a first approach, I'd suggest using a modification of the Levenshtein distance algorithm with conditional costs for replacing, deleting, and inserting characters:
Distance(i, j) = min(Distance(i-1, j-1) + replace_cost(a.charAt(i), b.charAt(j)),
                     Distance(i-1, j  ) + delete_cost(a.charAt(i)),
                     Distance(i  , j-1) + insert_cost(b.charAt(j)))
You can implement the replace_cost function so that it returns small values for visually similar characters (and large values for visually different ones), e.g.:
// visually similar characters
replace_cost('o', '0') = 0.1
replace_cost('o', 'O') = 0.1
replace_cost('O', '0') = 0.1
...
// visually different characters
replace_cost('O', 'K') = 0.9
...
A similar approach can be used for insert_cost and delete_cost (e.g. you may notice that during OCR some characters are more likely to disappear than others).
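A sketch of the modified distance in Python, assuming replace_cost, insert_cost and delete_cost are functions filled in with values like the ones above:

def weighted_distance(a, b, replace_cost, insert_cost, delete_cost):
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + delete_cost(a[i - 1])   # delete all of a's prefix
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + insert_cost(b[j - 1])   # insert all of b's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            rep = 0.0 if a[i - 1] == b[j - 1] else replace_cost(a[i - 1], b[j - 1])
            d[i][j] = min(d[i - 1][j - 1] + rep,
                          d[i - 1][j] + delete_cost(a[i - 1]),
                          d[i][j - 1] + insert_cost(b[j - 1]))
    return d[m][n]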
Also, if the approach above is not enough for you, I'd suggest looking at the noisy channel model, which is widely used for spelling correction (the subject is described very well in the Natural Language Processing course by Dan Jurafsky and Christopher Manning, "Week 2 - Spelling Correction").
This appears to be quite difficult because the misread strings are not necessarily textually similar to the input, which is why plain Levenshtein distance won't work for you. The words are visually corrupted, not simply mistyped. You could try building a dataset of common errors (o => 0, l => 1, e => o) and then do some sort of comparison based on that.
If you have access to the OCR algorithm, you could run that algorithm again on a much broader set of inputs (with known outputs) and train a neural network to recognize common errors. Then you could use that model to predict mistakes in your original dataset (maybe overkill for an array of only ten items).

Find the words in a long stream of characters. Auto-tokenize

How would you find the correct words in a long stream of characters?
Input :
"The revised report onthesyntactictheoriesofsequentialcontrolandstate"
Google's Output:
"The revised report on syntactic theories sequential controlandstate"
(which is close enough considering the time that they produced the output)
How do you think Google does it?
How would you increase the accuracy?
I would try a recursive algorithm like this:
Try inserting a space at each position. If the left part is a word, then recur on the right part.
Count the number of valid words / number of total words in all the final outputs. The one with the best ratio is likely your answer.
For example, giving it "thesentenceisgood" would run:
thesentenceisgood
    the sentenceisgood
        sent enceisgood
            enceisgood: OUT1: the sent enceisgood, 2/3
        sentence isgood
            is good
                go od: OUT2: the sentence is go od, 4/5
            is good: OUT3: the sentence is good, 4/4
        sentenceisgood: OUT4: the sentenceisgood, 1/2
    these ntenceisgood
        ntenceisgood: OUT5: these ntenceisgood, 1/2
So you would pick OUT3 as the answer.
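A rough recursive sketch of this idea in Python; is_word(s) is an assumed dictionary lookup you would supply, and the function returns (valid words, total words, tokens) for the best ratio found. Like the trace above, it is exponential without memoization:

def best_split(s, is_word):
    if not s:
        return (0, 0, [])
    # fallback: keep s as one (possibly invalid) token
    best = (1 if is_word(s) else 0, 1, [s])
    for i in range(1, len(s)):
        left = s[:i]
        if is_word(left):                  # only recurse when the left part is a word
            v, t, toks = best_split(s[i:], is_word)
            cand = (v + 1, t + 1, [left] + toks)
            if cand[0] / cand[1] > best[0] / best[1]:
                best = cand
    return best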
Try a stochastic regular grammar (equivalent to hidden Markov models) with the following rules:
for every word in a dictionary:
stream -> word_i stream with probability p_w
word_i -> letter_i1 ... letter_in with probability q_w (this is the spelling of word_i)
stream -> letter stream with prob p (for any letter)
stream -> epsilon with prob 1
The probabilities could be derived from a training set, but see the following discussion.
The most likely parse is computed using the Viterbi algorithm, which has quadratic time complexity in the number of hidden states, in this case your vocabulary, so you could run into speed issues with large vocabularies. But what if you set all the p_w = 1, q_w = 1 and p = .5? This means these are probabilities in an artificial language model where all words are equally likely and all non-words are equally unlikely. Of course you could segment better if you didn't use this simplification, but the algorithm complexity goes down by quite a bit.

If you look at the recurrence relation in the Wikipedia entry, you can try to simplify it for this special case. The Viterbi parse probability up to position k simplifies to

VP(k) = max_l(VP(k-l) * (1 if text[k-l:k] is a word else .5^l))

You can bound l by the maximum length of a word, and you can check whether l letters form a word with a hash lookup. The complexity of this is independent of the vocabulary size and is O(<text length> * <max l>). Sorry, this is not a proof, just a sketch, but it should get you going.

Another potential optimization: if you create a trie of the dictionary, you can check whether a substring is a prefix of any correct word. So when you query text[k-l:k] and get a negative answer, you already know that the same is true for text[k-l:k+d] for any d. To take advantage of this you would have to rearrange the recursion significantly, so I am not sure it can be fully exploited (it can, see comment).
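A rough Python sketch of that simplified recurrence, working with log-probabilities; words (a set of dictionary words) and max_l (a bound on word length) are assumptions, not part of the answer above:

import math

def viterbi_segment(text, words, max_l=20):
    n = len(text)
    vp = [float('-inf')] * (n + 1)   # vp[k] = best log-probability of a parse of text[:k]
    back = [0] * (n + 1)
    vp[0] = 0.0
    for k in range(1, n + 1):
        for l in range(1, min(k, max_l) + 1):
            piece = text[k - l:k]
            # a dictionary word costs nothing extra; l stray letters cost .5 each
            logp = 0.0 if piece in words else l * math.log(0.5)
            if vp[k - l] + logp > vp[k]:
                vp[k], back[k] = vp[k - l] + logp, k - l
    tokens, k = [], n
    while k > 0:
        tokens.append(text[back[k]:k])
        k = back[k]
    return list(reversed(tokens))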
Here is a code in Mathematica I started to develop for a recent code golf.
It is a minimal matching, non greedy, recursive algorithm. That means that the sentence "the pen is mighter than the sword" (without spaces) returns {"the pen is might er than the sword} :)
findAll[s_] :=
  Module[{a = s, b = "", c, sy = "="},
    While[
      StringLength[a] != 0,
      j = "";
      While[(c = findFirst[a]) == {} && StringLength[a] != 0,
        j = j <> StringTake[a, 1];
        sy = "~";
        a = StringDrop[a, 1];
      ];
      b = b <> " " <> j;
      If[c != {},
        b = b <> " " <> c[[1]];
        a = StringDrop[a, StringLength[c[[1]]]];
      ];
    ];
    Return[{StringTrim[StringReplace[b, "  " -> " "]], sy}];
  ]

findFirst[s_] :=
  If[s != "" && (c = DictionaryLookup[s]) == {},
    findFirst[StringDrop[s, -1]], Return[c]];
Sample Input
ss = {"twodreamstop",
"onebackstop",
"butterfingers",
"dependentrelationship",
"payperiodmatchcode",
"labordistributioncodedesc",
"benefitcalcrulecodedesc",
"psaddresstype",
"ageconrolnoticeperiod",
"month05",
"as_benefits",
"fname"}
Output
twodreamstop = two dreams top
onebackstop = one backstop
butterfingers = butterfingers
dependentrelationship = dependent relationship
payperiodmatchcode = pay period match code
labordistributioncodedesc ~ labor distribution coded es c
benefitcalcrulecodedesc ~ benefit c a lc rule coded es c
psaddresstype ~ p sad dress type
ageconrolnoticeperiod ~ age con rol notice period
month05 ~ month 05
as_benefits ~ as _ benefits
fname ~ f name
HTH
Check out spelling-correction algorithms. Here is a link to an article on the algorithm used at Google: http://www.norvig.com/spell-correct.html. There you will also find a scientific paper on this topic from Google.
After doing the recursive splitting and dictionary lookup, to increase the quality of word pairs in your phrase you might want to employ the mutual information of word pairs.
This essentially means going through a training set and finding the mutual information values of word pairs, which tells you that Albert Simpson is less likely than Albert Einstein :)
You can try searching Science Direct for academic papers on this theme. For basic information on mutual information see http://en.wikipedia.org/wiki/Mutual_information
Last year I was involved in the phrase-search part of a search engine project, in which I was trying to parse the Wikipedia dataset and rank each word pair. I've got the code in C++ and could share it with you if you can find some use for it. It parses Wikimedia and, for every word pair, finds the mutual information.
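For reference, a tiny sketch of the pointwise mutual information score for a word pair, assuming unigram and bigram are count tables gathered from a training corpus (the names here are illustrative only):

import math

def pmi(w1, w2, unigram, bigram, total_unigrams, total_bigrams):
    # PMI(x, y) = log( p(x, y) / (p(x) * p(y)) )
    p_xy = bigram[(w1, w2)] / total_bigrams
    p_x = unigram[w1] / total_unigrams
    p_y = unigram[w2] / total_unigrams
    return math.log(p_xy / (p_x * p_y))

A frequent pair like ("Albert", "Einstein") should then score higher than a rarer pair like ("Albert", "Simpson"); smoothing is needed for unseen pairs.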
