How to find all tuples in a table if and only if the tuple appears once? - relational-algebra

I have a table:
x | y | z
------------
1 | 1 | *
1 | 1 | *
1 | 3 | *
2 | 2 | *
2 | 3 | *
3 | 4 | *
3 | 4 | *
3 | 3 | *
What is the relational algebra representation of only returning all unique (x, y) tuples?
For example, I would like the following (x,y) tuples returned in the above table: (1,3), (2,2), (2,3), and (3,3).
Thanks

Rename R to S
S := ρS/R(R)
Join R and S on x,y
D := R ⋈S.x = R.x ∧ S.y = R.y S
This squares the number of tuples sharing a particular value for (x,y): if a value appears n times in R, it appears n² times in D. In particular, if a value for (x,y) appears only once in R, it appears only once in D.
Join R and S on x,y,z
E := R ⋈S.x = R.x ∧ S.y = R.y ∧ S.z = R.z S
This basically adds some columns to R. It does not add or remove tuples.
Subtract E from D and project to the attributes of R
F := πx,y,z(D\E)
This removes from D the tuples that were created by joining a tuple from R to the corresponding tuple in S. The remaining tuples are the ones that were created by joining a tuple in R to a different tuple in S. In particular, if a value for (x,y) appears only once in R, no tuple in F exists with that value.
Remove the tuples in F from R
R\F
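The four steps can be checked with a small set-based sketch (Python here, not relational algebra; the z values z1..z8 are invented placeholders for the '*' entries in the question, chosen distinct so the duplicate rows don't collapse):

```python
# R as a set of (x, y, z) tuples; z1..z8 are hypothetical distinct values
# standing in for the '*' entries.
R = {(1, 1, 'z1'), (1, 1, 'z2'), (1, 3, 'z3'), (2, 2, 'z4'),
     (2, 3, 'z5'), (3, 4, 'z6'), (3, 4, 'z7'), (3, 3, 'z8')}

S = set(R)  # S := rho_S(R), a renamed copy of R

# D := R join S on x and y only (pairs of tuples sharing an (x,y) value)
D = {(r, s) for r in R for s in S if r[0] == s[0] and r[1] == s[1]}

# E := R join S on x, y and z (pairs each tuple with itself)
E = {(r, s) for r in R for s in S if r == s}

# F := (D \ E) projected back onto the attributes of R
F = {r for (r, s) in D - E}

# R \ F, projected onto (x, y), keeps exactly the values appearing once
result = {(x, y) for (x, y, z) in R - F}
print(sorted(result))  # [(1, 3), (2, 2), (2, 3), (3, 3)]
```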


ls command, default (alphabetical) sorting order

here a piece of code :
$> ls
` = _ ; ? ( ] # \ % 1 4 7 a B d E g H J l M o P r S u V x Y
^ > - : ' ) { $ & + 2 5 8 A c D f G i k L n O q R t U w X z
< | , ! " [ } * # 0 3 6 9 b C e F h I K m N p Q s T v W y Z
I'm printing all ASCII characters; each element is a folder, and I'm trying to understand the default sorting order of the ls command.
I understand that there is a case-insensitive comparison to sort alphabetic characters, with digits coming first.
I'm having some trouble understanding how special characters are sorted, and I'm not able to find anything clear. I thought it could be related to the ASCII table, but when you look at how things are ordered, that really makes no sense. Where does this order come from?
Thanks

Prolog Extended Euclidian Algorithm

I have been struggling with some Prolog code for several days and I couldn't find a way out of it. I am trying to write the extended Euclidean algorithm and find the values p and s in the equation:
a*p + b*s = gcd(a,b)
Here is what I have tried:
common(X,X,X,_,_,_,_,_,_).
common(0,Y,Y,_,_,_,_,_,_).
common(X,0,X,_,_,_,_,_,_).
common(X,Y,_,1,0,L1,L2,SF,TF):-
append(L1,1,[H]),
append(L2,0,[A]),
SF is H ,
TF is A,
common(X,Y,_,0,1,[H],[A],SF,TF).
common(X,Y,_,0,1,L1,L2,SF,TF):-
append(L1,0,[_,S2]),
append(L2,1,[_,T2]),
Q is truncate(X/Y),
S is 1-Q*0,T is 0-Q*1 ,
common(X,Y,_,S,T,[S2,S],[T2,T],SF,TF).
common(X,Y,N,S,T,[S1,S2],[T1,T2],SF,TF):-
Q is truncate(X/Y),
K is X-(Y*Q),
si_finder(S1,S2,Q,SF),
ti_finder(T1,T2,Q,TF),
common(Y,K,N,S,T,[S2,S],[T2,T],SF,TF).
si_finder(PP,P,Q,C):- C is PP - Q*P.
ti_finder(P2,P1,QA,C2):- C2 is P2 - QA*P1.
After a little search I found that the s and p coefficients start from 1 and 0, and that their second values are 0 and 1 respectively. Then it continues in a pattern, which is what I have done in the si_finder and ti_finder predicates. The common predicates are where I tried to control the pattern recursively. However, the common predicates keep returning false on every call. Can anyone help me implement this algorithm in Prolog?
Thanks in advance.
First let's think about the arity of the predicate. Obviously you want to have the numbers A and B as well as the Bézout coefficients P and S as arguments. Since the algorithm is calculating the GCD anyway, it is opportune to have that as an argument as well. That leaves us with arity 5. As we're talking about the extended Euclidean algorithm, let's call the predicate eeuclid/5. Next, consider an example: let's use the algorithm to calculate P, S and GCD for A=242 and B=69:
quotient (Q) | remainder (B1) | P | S
-------------+-------------------+-------+-------
| 242 | 1 | 0
| 69 | 0 | 1
242/69 = 3 | 242 − 3*69 = 35 | 1 | -3
69/35 = 1 | 69 − 1*35 = 34 | -1 | 4
35/34 = 1 | 35 − 1*34 = 1 | 2 | -7
34/1 = 34 | 34 − 34*1 = 0 | -69 | 242
We can observe the following:
The algorithm stops if the remainder becomes 0
The line before the last row contains the GCD in the remainder column (in this example 1) and the Bézout coefficients in the P and S columns respectively (in this example 2 and -7)
The quotient is calculated from the previous two remainders. So in the next iteration A becomes B and B becomes B1.
P and S are calculated from their respective predecessors. For example: P3 = P1 - 3*P2 = 1 - 3*0 = 1 and S3 = S1 - 3*S2 = 0 - 3*1 = -3. And since it's sufficient to have the previous two P's and S's, we might as well pass them on as pairs, e.g. P1-P2 and S1-S2.
The algorithm starts with the pairs P: 1-0 and S: 0-1
The algorithm starts with the bigger number
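Before turning these observations into Prolog, they can be checked with a short imperative sketch (Python; the function name eeuclid here is just for illustration):

```python
def eeuclid(a, b):
    """Extended Euclid via the table above: returns (gcd, p, s) with a*p + b*s == gcd."""
    if a < b:                      # the algorithm starts with the bigger number
        g, p, s = eeuclid(b, a)
        return g, s, p             # swap the coefficients back
    p1, p2 = 1, 0                  # starting pair for P
    s1, s2 = 0, 1                  # starting pair for S
    while b != 0:                  # stop when the remainder becomes 0
        q = a // b                 # quotient of the previous two remainders
        a, b = b, a - q * b        # A becomes B, B becomes B1
        p1, p2 = p2, p1 - q * p2   # P and S from their two predecessors
        s1, s2 = s2, s1 - q * s2
    return a, p1, s1               # the line before the last row of the table

print(eeuclid(242, 69))  # (1, 2, -7)
```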
Putting all this together, the calling predicate has to ensure that A is the bigger number and, in addition to its five arguments, it has to pass along the starting pairs 1-0 and 0-1 to the predicate describing the actual relation, here a_b_p_s_/7:
:- use_module(library(clpfd)).
eeuclid(A,B,P,S,GCD) :-
    A #>= B,
    GCD #= A*P + B*S,          % <- new
    a_b_p_s_(A,B,P,S,1-0,0-1,GCD).
eeuclid(A,B,P,S,GCD) :-
    A #< B,
    GCD #= A*P + B*S,          % <- new
    a_b_p_s_(B,A,S,P,1-0,0-1,GCD).
The first rule of a_b_p_s_/7 describes the base case, where B=0 and the algorithm stops. Then A is the GCD and P1, S1 are the Bézout coefficients. Otherwise the quotient Q, the remainder B1 and the new values for P and S are calculated and a_b_p_s_/7 is called with those new values:
a_b_p_s_(A,0,P1,S1,P1-_P2,S1-_S2,A).
a_b_p_s_(A,B,P,S,P1-P2,S1-S2,GCD) :-
    B #> 0,
    A #> B,                    % <- new
    Q #= A/B,
    B1 #= A mod B,
    P3 #= P1-(Q*P2),
    S3 #= S1-(Q*S2),
    a_b_p_s_(B,B1,P,S,P2-P3,S2-S3,GCD).
Querying this with the above example yields the desired result:
?- eeuclid(242,69,P,S,GCD).
P = 2,
S = -7,
GCD = 1 ;
false.
And indeed: gcd(242,69) = 1 = 2*242 − 7*69
EDIT: On second thought I would suggest adding two constraints: firstly Bézout's identity before calling a_b_p_s_/7, and secondly A #> B after the first goal of a_b_p_s_/7. I edited the predicates above and marked the new goals. These additions make eeuclid/5 more versatile. For example, you could ask which numbers A and B have the Bézout coefficients 2 and -7 and 1 as the gcd. There is no unique answer to this query, and Prolog will give you residual goals for every potential solution. However, you can ask for a limited range for A and B, say between 0 and 50, and then use label/1 to get actual numbers:
?- [A,B] ins 0..50, eeuclid(A,B,2,-7,1), label([A,B]).
A = 18,
B = 5 ;
A = 25,
B = 7 ;
A = 32,
B = 9 ;
A = 39,
B = 11 ;
A = 46,
B = 13 ;
false. % <- previously loop here
Without the newly added constraints the query would not terminate after the fifth solution. However, with the new constraints Prolog is able to determine, that there are no more solutions between 0 and 50.

Finding the largest power of a number that divides a factorial in haskell

So I am writing a Haskell program to calculate the largest power of a number that divides a factorial.
largestPower :: Int -> Int -> Int
Here largestPower a b has to find the largest power of b that divides a!.
Now I understand the math behind it: the way to find the answer is to repeatedly divide a (just a) by b, ignoring the remainder, and finally add up all the quotients. So if we have something like
largestPower 10 2
we should get 8, because 10 div 2 = 5, then 5 div 2 = 2, then 2 div 2 = 1, and 5+2+1 = 8.
However, I am unable to figure out how to implement this as a function: do I use arrays, or just a simple recursive function?
I am gravitating towards it being just a normal function, though I guess it could also be done by storing the quotients in an array and adding them.
Recursion without an accumulator
You can simply write a recursive algorithm and sum up the result of each call. Here we have two cases:
a is less than b, in which case the largest power is 0. So:
largestPower a b | a < b = 0
a is greater than or equal to b, in that case we divide a by b, calculate largestPower for that division, and add the division to the result. Like:
    | otherwise = d + largestPower d b
    where d = div a b
Or putting it together:
largestPower a b | a < b = 0
                 | otherwise = d + largestPower d b
  where d = div a b
Recursion with an accumulator
You can also use recursion with an accumulator: a variable you pass through the recursion, and update accordingly. At the end, you return that accumulator (or a function called on that accumulator).
Here the accumulator would of course be the running sum of quotients, so:
largestPower = largestPower' 0
So we will define a function largestPower' (mind the accent) with an accumulator as first argument that is initialized to 0.
Now in the recursion, there are two cases:
a is less than b, we simply return the accumulator:
largestPower' r a b | a < b = r
otherwise we add the quotient to our accumulator, and pass the quotient to largestPower' in a recursive call:
    | otherwise = largestPower' (d+r) d b
    where d = div a b
Or the full version:
largestPower = largestPower' 0
largestPower' r a b | a < b = r
                    | otherwise = largestPower' (d+r) d b
  where d = div a b
Naive correct algorithm
The algorithm above is, however, not correct in general. A "naive" algorithm would be to simply divide every item and keep decrementing until you reach 1, like:
largestPower 1 _ = 0
largestPower a b = sumPower a + largestPower (a-1) b
  where sumPower n | n `mod` b == 0 = 1 + sumPower (div n b)
                   | otherwise = 0
So this means that largestPower 4 2 can be written as:
largestPower 4 2 = sumPower 4 + sumPower 3 + sumPower 2
and:
sumPower 4 = 1 + sumPower 2
= 1 + 1 + sumPower 1
= 1 + 1 + 0
= 2
sumPower 3 = 0
sumPower 2 = 1 + sumPower 1
= 1 + 0
= 1
So the total is 2 + 0 + 1 = 3.
The algorithm as stated can be implemented quite simply:
largestPower :: Int -> Int -> Int
largestPower 0 b = 0
largestPower a b = d + largestPower d b where d = a `div` b
However, the algorithm is not correct for composite b. For example, largestPower 10 6 with this algorithm yields 1, but in fact the correct answer is 4. The problem is that this algorithm ignores multiples of 2 and 3 that are not multiples of 6. How you fix the algorithm is a completely separate question, though.
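To illustrate the claim, here is a small Python sketch (not the Haskell answer above) comparing the sum-of-quotients formula with a brute-force count:

```python
import math

def sum_of_quotients(a, b):
    # Repeatedly divide a by b, ignoring remainders, and add the quotients.
    total = 0
    while a >= b:
        a //= b
        total += a
    return total

def brute_force(a, b):
    # Largest k such that b**k divides a!, found by direct division.
    f, k = math.factorial(a), 0
    while f % b == 0:
        f, k = f // b, k + 1
    return k

print(sum_of_quotients(10, 2), brute_force(10, 2))  # 8 8  (prime base: they agree)
print(sum_of_quotients(10, 6), brute_force(10, 6))  # 1 4  (composite base: they disagree)
```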

How to find the maximum number of matching data?

Given a two-dimensional array such as:
|   | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 1 | X | X | O | O | X |
| 2 | O | O | O | X | X |
| 3 | X | X | O | X | X |
| 4 | X | X | O | X | X |
I have to find the largest set of cells currently containing O with a maximum of one cell per row and one per column.
For instance, in the previous example, the optimal answer is 3, when:
row 1 goes with column 4;
row 2 goes with column 1 (or 2);
row 3 (or 4) goes with column 3.
It seems that I have to find an algorithm in O(CR) (where C is the number of columns and R the number of rows).
My first idea was a greedy approach: sort in ascending order according to the number of O's. Here is what the algorithm would look like:
For i From 0 To R
    For j From 0 To N
        If compatible(i, j)
            add(a[j], i)
Sort a according to a[j].size
result = 0
For i From 0 To N
    For j From 0 To a[i].size
        If used[a[i][j]] = false
            used[a[i][j]] = true
            result = result + 1
            break
Print result
Although I didn't find any counterexample, I don't know whether it always gives the optimal answer.
Is this algorithm correct? Is there a better solution?
Going off Billiska's suggestion, I found a nice implementation of the "Hopcroft-Karp" algorithm in Python here:
http://code.activestate.com/recipes/123641-hopcroft-karp-bipartite-matching/
This algorithm is one of several that solve the maximum bipartite matching problem. Using that code exactly "as-is", here's how I solved the example problem in your post (in Python):
from collections import defaultdict
X=0; O=1;
patterns = [ [ X , X , O , O , X ],
             [ O , O , O , X , X ],
             [ X , X , O , X , X ],
             [ X , X , O , X , X ] ]
G = defaultdict(list)
for i, row in enumerate(patterns):
    for j, cell in enumerate(row):
        if cell == O:
            G['Row '+str(i)].append('Col '+str(j))
solution = bipartiteMatch(G)  ### function defined in provided link
print len(solution[0]), solution[0]
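If you'd rather not depend on the linked recipe, a self-contained (and slower) alternative is the simple augmenting-path algorithm (Kuhn's); this sketch (Python 3) solves the same example:

```python
def max_matching(adj, n_rows, n_cols):
    """Maximum bipartite matching by augmenting paths: O(V*E), slower
    than Hopcroft-Karp but only a few lines and dependency-free."""
    match_col = [-1] * n_cols          # match_col[c] = row matched to column c

    def augment(r, seen):
        # Try to match row r, possibly re-routing rows already matched.
        for c in adj[r]:
            if not seen[c]:
                seen[c] = True
                if match_col[c] == -1 or augment(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    return sum(augment(r, [False] * n_cols) for r in range(n_rows))

# Columns containing O for each row of the example grid:
adj = [[2, 3], [0, 1, 2], [2], [2]]
print(max_matching(adj, 4, 5))  # 3
```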

Scoring System Suggestion - weighted mechanism?

I'm trying to validate a series of words provided by users, and to come up with a scoring system that determines the likelihood that the series of words are indeed valid words.
Assume the following input:
xxx yyy zzz
The first thing I do is check each word individually against a database of words that I have. So, let's say that xxx is in the database, so we are 100% sure it's a valid word. Then let's say that yyy doesn't exist in the database, but a possible variation of its spelling exists (say yyyy). We don't give yyy a score of 100%, but maybe something lower (say 90%). Then zzz just doesn't exist at all in the database, so zzz gets a score of 0%.
So we have something like this:
xxx = 100%
yyy = 90%
zzz = 0%
Assume further that the users are going to either:
Provide a list of all valid words (most likely)
Provide a list of all invalid words (likely)
Provide a list of a mix of valid and invalid words (not likely)
As a whole, what is a good scoring system to determine a confidence score that xxx yyy zzz is a series of valid words? I'm not looking for anything too complex, but taking the average of the scores doesn't seem right. If some words in the list are valid, I think it increases the likelihood that a word not found in the database is an actual word too (it's just a limitation of the database that it doesn't contain that particular word).
NOTE: The input will generally be a minimum of 2 words (and mostly 2 words), but can be 3, 4, 5 (and maybe even more in some rare cases).
EDIT I have added a new section looking at discriminating word groups into English and non-English groups. This is below the section on estimating whether any given word is English.
I think you intuit that the scoring system you've explained here doesn't quite do justice to this problem.
It's great to find words that are in the dictionary; those words can immediately be given 100% and passed over. But what about the non-matching words? How can you determine their probability?
This can be explained by a simple comparison between sentences comprising exactly the same letters:
Abergrandly recieved wuzkinds
Erbdnerye wcgluszaaindid vker
Neither sentence has any English words, but the first sentence looks English - it might be about someone (Abergrandly) who received (there was a spelling mistake) several items (wuzkinds). The second sentence is clearly just my infant hitting the keyboard.
So, in the example above, even though there is no English word present, the probability it's spoken by an English speaker is high. The second sentence has a 0% probability of being English.
I know a couple of heuristics to help detect the difference:
Simple frequency analysis of letters
In any language, some letters are more common than others. Simply counting the incidence of each letter and comparing it to the language's average tells us a lot.
There are several ways you could calculate a probability from it. One might be:
Preparation
Compute or obtain the frequencies of letters in a suitable English corpus. The NLTK is an excellent way to begin. The associated Natural Language Processing with Python book is very informative.
The Test
Count the number of occurrences of each letter in the phrase to test
Compute the Linear regression where the co-ordinate of each letter-point is:
X axis: Its predicted frequency from 1.1 above
Y axis: The actual count
Perform a Regression Analysis on the data
English should report a positive r close to 1.0. Compute the R^2 as a probability that this is English.
An r of 0 or below is either no correlation to English, or the letters have a negative correlation. Not likely English.
Advantages:
Very simple to calculate
Disadvantages:
Will not work so well for small samples, eg "zebra, xylophone"
"Rrressseee" would seem a highly probably word
Does not discriminate between the two example sentences I gave above.
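As a rough sketch of this test (Python; the frequency table below is an assumed approximation rather than corpus-derived, and Pearson's r is computed by hand instead of with a stats library):

```python
from collections import Counter

# Assumed approximate relative frequencies for common English letters;
# a real test would take these from a corpus (e.g. via the NLTK).
ENGLISH_FREQ = {'e': 0.127, 't': 0.091, 'a': 0.082, 'o': 0.075, 'i': 0.070,
                'n': 0.067, 's': 0.063, 'h': 0.061, 'r': 0.060, 'd': 0.043,
                'l': 0.040, 'u': 0.028}

def english_correlation(text):
    """Pearson's r between expected letter frequency (x) and observed count (y)."""
    counts = Counter(c for c in text.lower() if c in ENGLISH_FREQ)
    xs = list(ENGLISH_FREQ.values())
    ys = [counts[c] for c in ENGLISH_FREQ]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0
```

An ordinary English sentence gives a positive r, while a string of rare letters scores 0 because there is nothing to correlate; as the disadvantages above note, strings like "Rrressseee" can still fool it.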
Bigram frequencies and Trigram frequencies
This is an extension of letter frequencies, but looks at the frequency of letter pairs or triplets. For example, a u follows a q with 99% frequency (why not 100%? dafuq). Again, the NLTK corpus is incredibly useful.
(Digraph frequency chart from: http://www.math.cornell.edu/~mec/2003-2004/cryptography/subs/digraphs.jpg)
This approach is widely used across the industry, in everything from speech recognition to predictive text on your soft keyboard.
Trigraphs are especially useful. Consider that 'll' is a very common digraph. The string 'lllllllll' therefore consists only of common digraphs, and the digraph approach makes it look like a word. Trigraphs resolve this, because 'lll' never occurs.
The calculation of this probability of a word using trigraphs can't be done with a simple linear regression model (the vast majority of trigrams will not be present in the word and so the majority of points will be on the x axis). Instead you can use Markov chains (using a probability matrix of either bigrams or trigrams) to compute the probability of a word. An introduction to Markov chains is here.
First build a matrix of probabilities:
X axis: Every bigram ("th", "he", "in", "er", "an", etc)
Y axis: The letters of the alphabet.
The matrix members consist of the probability of the letter of the alphabet following the bigraph.
To start computing probabilities from the start of the word, the X-axis digraphs need to include space-a, space-b, up to space-z; e.g. the digraph space-t represents a word starting with t.
Computing the probability of the word consists of iterating over digraphs and obtaining the probability of the third letter given the digraph. For example, the word "they" is broken down into the following probabilities:
h following "space" t -> probability x%
e following th -> probability y%
y following he -> probability z%
Overall probability = x * y * z %
This computation solves the issues for a simple frequency analysis by highlighting the "wcgl" as having a 0% probability.
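A minimal version of this chain (Python; the two-word corpus here is a toy stand-in for a real one such as NLTK's):

```python
from collections import defaultdict

def build_chain(corpus):
    # counts[digraph][next_letter]; a leading space marks the start of a
    # word, so the digraph "space-t" can score the second letter of "they".
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus:
        padded = ' ' + word
        for i in range(len(padded) - 2):
            counts[padded[i:i+2]][padded[i+2]] += 1
    return {bg: {c: n / sum(nxt.values()) for c, n in nxt.items()}
            for bg, nxt in counts.items()}

def word_probability(word, chain):
    # Multiply P(letter | preceding digraph) along the word; any unseen
    # trigraph yields 0, which is what flags "wcgl" as impossible.
    p = 1.0
    padded = ' ' + word
    for i in range(len(padded) - 2):
        p *= chain.get(padded[i:i+2], {}).get(padded[i+2], 0.0)
    return p

chain = build_chain(['they', 'them'])
print(word_probability('they', chain))  # 0.5: only 'he'->'y' vs 'he'->'m' is uncertain
print(word_probability('wcgl', chain))  # 0.0
```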
Note that the probability of any given word will be very small and becomes statistically smaller by between 10x to 20x per extra character. However, examining the probability of known English words of 3, 4, 5, 6, etc characters from a large corpus, you can determine a cutoff below which the word is highly unlikely. Each highly unlikely trigraph will drop the likelihood of being English by 1 to 2 orders of magnitude.
You might then normalize the probability of a word, for example, for 8-letter English words (I've made up the numbers below):
Probabilities from Markov chain:
Probability of the best English word = 10^-7 (10% * 10% * .. * 10%)
Cutoff (Probability of least likely English word) = 10^-14 (1% * 1% * .. * 1%)
Probability for test word (say "coattail") = 10^-12
'Normalize' results
Take logs: Best = -7; Test = -12; Cutoff = -14
Make positive: Best = 7; Test = 2; Cutoff = 0
Normalize between 1.0 and 0.0: Best = 1.0; Test = 0.28; Cutoff = 0.0
(You can easily adjust the higher and lower bounds to, say, between 90% and 10%)
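The three 'normalize' steps amount to a linear rescale in log space. A quick check of the arithmetic above (Python; the three exponents are the made-up numbers from the example):

```python
# Log10 probabilities from the example: best word, test word, cutoff.
log_best, log_test, log_cutoff = -7, -12, -14

# Shift so the cutoff sits at 0, then scale so the best word sits at 1.0.
score = (log_test - log_cutoff) / (log_best - log_cutoff)
print(score)  # 2/7, i.e. about 0.286 (the 0.28 above)
```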
Now we've examined how to get a better probability that any given word is English, let's look at the group of words.
The group's definition is that it's a minimum of 2 words, but can be 3, 4, 5 or (in a small number of cases) more. You don't mention any overriding structure or associations between the words, so I am not assuming:
That any group is a phrase, eg "tank commander", "red letter day"
That the group is a sentence or clause, eg " I am thirsty", "Mary needs an email"
However if this assumption is wrong, then the problem becomes more tractable for larger word-groups because the words will conform to English's rules of syntax - we can use, say, the NLTK to parse the clause to gain more insight.
Looking at the probability that a group of words is English
OK, in order to get a feel of the problem, let's look at different use cases. In the following:
I am going to ignore the cases of all words or all not-words as those cases are trivial
I will consider English-like words that can't be assumed to be in a dictionary, like weird surnames (eg Kardashian), unusual product names (eg stackexchange) and so on.
I will use simple averages of the probabilities, assuming that random gibberish scores 0% while English-like words score 90%.
Two words
(50%) Red ajkhsdjas
(50%) Hkdfs Friday
(95%) Kardashians program
(95%) Using Stackexchange
From these examples, I think you would agree that 1. and 2. are likely not acceptable whereas 3. and 4. are. The simple average calculation appears a useful discriminator for two word groups.
Three words
With one suspect words:
(67%) Red dawn dskfa
(67%) Hskdkc communist manifesto
(67%) Economic jasdfh crisis
(97%) Kardashian fifteen minutes
(97%) stackexchange user experience
Clearly 4. and 5. are acceptable.
But what about 1., 2. or 3.? Are there any material differences between them? Probably not, which rules out using Bayesian statistics. But should these be classified as English or not? I think that's your call.
With two suspect words:
(33%) Red ksadjak adsfhd
(33%) jkdsfk dsajjds manifesto
(93%) Stackexchange emails Kardashians
(93%) Stackexchange Kardashian account
I would hazard that 1. and 2. are not acceptable, but 3. and 4 definitely are. (Well, except the Kardashians' having an account here - that does not bode well). Again the simple averages can be used as a simple discriminator - and you can choose if it's above or below 67%.
Four words
The number of permutations starts getting wild, so I'll give only a few examples:
One suspect word:
(75%) Programming jhjasd language today
(93%) Hopeless Kardashian tv series
Two suspect words:
(50%) Programming kasdhjk jhsaer today
(95%) Stackexchange implementing Kasdashian filter
Three suspect words:
(25%) Programming sdajf jkkdsf kuuerc
(93%) Stackexchange bitifying Kardashians tweetdeck
In my mind, it's clear which word groups are meaningful, and this aligns with the simple average, with the exception of 2.1; that's again your call.
Interestingly, the cutoff point for four-word groups might be different from that for three-word groups, so I'd recommend that your implementation have a different configuration setting for each group size. Having different cutoffs is a consequence of the fact that the jump from 2->3 and then 3->4 words does not mesh with the idea of smooth, continuous probabilities.
Implementing different cutoff values for these groups directly addresses your intuition "Right now, I just have a "gut" feeling that my xxx yyy zzz example really should be higher than 66.66%, but I'm not sure how to express it as a formula.".
Five words
You get the idea - I'm not going to enumerate any more here. However, as you get to five words, it starts to get enough structure that several new heuristics can come in:
Use of Bayesian probabilities/statistics (what is the probability of the third word being a word given that the first two were?)
Parsing the group using the NLTK and looking at whether it makes grammatical sense
Problem cases
English has a number of very short words, and this might cause a problem. For example:
Gibberish: r xu r
Is this English? I am a
You may have to write code to specifically test for 1 and 2 letter words.
TL;DR Summary
Non-dictionary words can be tested for how 'English' (or French or Spanish, etc) they are using letter and trigram frequencies. Picking up English-like words and attributing them a high score is critical to distinguishing English groups.
Up to four words, a simple average has great discriminatory power, but you probably want to set a different cutoff for 2 words, 3 words and 4 words.
Five words and above you can probably start using Bayesian statistics
Longer word groups if they should be sentences or sentence fragments can be tested using a natural language tool, such as NLTK.
This is a heuristic process and, ultimately, there will be confounding values (such as "I am a"). Writing a perfect statistical analysis routine may therefore not be especially useful compared to a simple average, if it can be confounded by a large number of exceptions.
Perhaps you could use Bayes' formula.
You already have numerical guesses for the probability of each word to be real.
Next step is to make educated guesses about the probability of the entire list being good, bad or mixed (i.e., turn "most likely", "likely" and "not likely" into numbers.)
I'll give a Bayesian hierarchical model solution. It has a few parameters that must be set by hand, but it is quite robust regarding these parameters, as the simulation below shows. And it can handle not only the scoring system for the word list, but also a probable classification of the user who entered the words. The treatment may be a little technical, but in the end we'll have a routine to calculate scores as a function of 3 numbers: the number of words in the list, the number of those with an exact match in the database, and the number of those with a partial match (as in yyyy). The routine is implemented in R, but if you have never used it, just download the interpreter, copy and paste the code into its console, and you'll see the results shown here.
BTW, English is not my first language, so bear with me... :-)
1. Model Specification:
There are 3 classes of users, named I, II, III. We assume that each word list is generated by a single user, and that the user is drawn randomly from a universe of users. We say that this universe is 70% class I, 25% class II and 5% class III. These numbers can be changed, of course. So far we have
Prob[User=I] = 70%
Prob[User=II] = 25%
Prob[User=III] = 5%
Given the user, we assume conditional independence, i.e., the user will not look to previous words to decide if he'll type in a valid or invalid word.
User I tends to give only valid words, User II only invalid words, and user III is mixed. So we set
Prob[Word=OK | User=I] = 99%
Prob[Word=OK | User=II] = 0.001%
Prob[Word=OK | User=III] = 50%
The probabilities of the word being invalid, given the class of the user, are complementary. Note that we give a very small, but non-zero, probability of a class-II user entering valid words, since even a monkey in front of a typewriter will eventually type a valid word.
The final step of the model specification regards the database. We assume that, for each word, the query may have 3 outcomes: a total match, a partial match (as in yyyy ) or no match. In probability terms, we assume that
Prob[match | valid] = 98% (not all valid words will be found)
Prob[partial | valid] = 0.2% (a rare event)
Prob[match | INvalid] = 0 (the database may be incomplete, but it has no invalid words)
Prob[partial | INvalid] = 0.1% (a rare event)
The probabilities of not finding the word don't have to be set, as they are complementary. That's it; our model is set.
2. Notation and Objective
We have a discrete random variable U, taking values in {1, 2, 3}, and two discrete random vectors W and F, each of size n (= the number of words), where W_i is 1 if the i-th word is valid and 2 if it is invalid, and F_i is 1 if the word is found in the database, 2 if it's a partial match and 3 if it's not found.
Only vector F is observable, the others are latent. Using Bayes theorem and the distributions we set up in the model specification, we can calculate
(a) Prob[User=I | F],
i. e., the posterior probability of the user being in class I, given the observed matchings; and
(b) Prob[W=all valid | F],
i. e., the posterior probability that all words are valid, given the observed matchings.
Depending on your objective, you can use one or another as a scoring solution. If you are interested in distinguishing a real user from a computer program, for instance, you can use (a). If you only care about the word list being valid, you should use (b).
I'll try to explain shortly the theory in the next section, but this is the usual setup in the context of bayesian hierarchical models. The reference is Gelman (2004), "Bayesian Data Analysis".
If you want, you can jump to section 4, with the code.
3. The Math
I'll use a slight abuse of notation, as usual in this context, writing
p(x|y) for Prob[X=x|Y=y] and p(x,y) for Prob[X=x,Y=y].
The goal (a) is to calculate p(u|f), for u=1. Using Bayes theorem:
p(u|f) = p(u,f)/p(f) = p(f|u)p(u)/p(f).
p(u) is given. p(f|u) is obtained from:
p(f|u) = \prod_{i=1}^{n} \sum_{w_i=1}^{2} (p(f_i|w_i)p(w_i|u))
p(f|u) = \prod_{i=1}^{n} p(f_i|u)
= p(f_i=1|u)^(m) p(f_i=2|u)^(p) p(f_i=3|u)^(n-m-p)
where m = number of matchings and p = number of partial matchings.
p(f) is calculated as:
\sum_{u=1}^{3} p(f|u)p(u)
All these can be calculated directly.
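As a numeric sanity check on these formulas (a Python sketch using the constants fixed in the model specification above), the simplest case, n=1 with one exact match, works out as follows:

```python
# Constants from the model specification above.
prob_user = [0.70, 0.25, 0.05]        # Prob[U=u] for u = I, II, III
prob_ok   = [0.99, 0.001, 0.5]        # Prob[W=OK | U=u]
p_match_ok, p_match_bad = 0.98, 0.0   # Prob[F=match | W=OK], Prob[F=match | W=NotOK]

# p(f_i=match | u) = p(match|OK) p(OK|u) + p(match|NotOK) p(NotOK|u)
p_match_u = [p_match_ok * ok + p_match_bad * (1 - ok) for ok in prob_ok]

# p(f) marginalizes over the user class; Bayes then gives p(U=I | f).
p_f = sum(pm * pu for pm, pu in zip(p_match_u, prob_user))
p_u1 = p_match_u[0] * prob_user[0] / p_f
print(round(p_u1, 7))  # 0.9648451, the n=1, m=1, p=0 row of the results below
```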
Goal (b) is given by
p(w|f) = p(f|w)*p(w)/p(f)
where
p(f|w) = \prod_{i=1}^{n} p(f_i|w_i)
and p(f_i|w_i) is given in the model specification.
p(f) was calculated above, so we need only
p(w) = \sum_{u=1}^{3} p(w|u)p(u)
where
p(w|u) = \prod_{i=1}^{n} p(w_i|u)
So everything is set for implementation.
4. The Code
The code is written as a R script, the constants are set at the beginning, in accordance to what was discussed above, and the output is given by the functions
(a) p.u_f(u, n, m, p)
and
(b) p.wOK_f(n, m, p)
that calculate the probabilities for options (a) and (b), given inputs:
u = desired user class (set to u=1)
n = number of words
m = number of matchings
p = number of partial matchings
The code itself:
### Constants:
# User:
# Prob[U=1], Prob[U=2], Prob[U=3]
Prob_user = c(0.70, 0.25, 0.05)
# Words:
# Prob[Wi=OK|U=1,2,3]
Prob_OK = c(0.99, 0.001, 0.5)
Prob_NotOK = 1 - Prob_OK
# Database:
# Prob[Fi=match|Wi=OK], Prob[Fi=match|Wi=NotOK]:
Prob_match = c(0.98, 0)
# Prob[Fi=partial|Wi=OK], Prob[Fi=partial|Wi=NotOK]:
Prob_partial = c(0.002, 0.001)
# Prob[Fi=NOmatch|Wi=OK], Prob[Fi=NOmatch|Wi=NotOK]:
Prob_NOmatch = 1 - Prob_match - Prob_partial
###### First Goal: Probability of being a user type I, given the numbers of matchings (m) and partial matchings (p).
# Prob[Fi=fi|U=u]
#
p.fi_u <- function(fi, u)
{
    unname(rbind(Prob_match, Prob_partial, Prob_NOmatch) %*% rbind(Prob_OK, Prob_NotOK))[fi,u]
}
# Prob[F=f|U=u]
#
p.f_u <- function(n, m, p, u)
{
    exp( log(p.fi_u(1, u))*m + log(p.fi_u(2, u))*p + log(p.fi_u(3, u))*(n-m-p) )
}
# Prob[F=f]
#
p.f <- function(n, m, p)
{
    p.f_u(n, m, p, 1)*Prob_user[1] + p.f_u(n, m, p, 2)*Prob_user[2] + p.f_u(n, m, p, 3)*Prob_user[3]
}
# Prob[U=u|F=f]
#
p.u_f <- function(u, n, m, p)
{
    p.f_u(n, m, p, u) * Prob_user[u] / p.f(n, m, p)
}
# Probability user type I for n=1,...,5:
for(n in 1:5) for(m in 0:n) for(p in 0:(n-m))
{
    cat("n =", n, "| m =", m, "| p =", p, "| Prob type I =", p.u_f(1, n, m, p), "\n")
}
##################################################################################################
# Second Goal: Probability all words OK given matchings/partial matchings.
p.f_wOK <- function(n, m, p)
{
    exp( log(Prob_match[1])*m + log(Prob_partial[1])*p + log(Prob_NOmatch[1])*(n-m-p) )
}
p.wOK <- function(n)
{
    sum(exp( log(Prob_OK)*n + log(Prob_user) ))
}
p.wOK_f <- function(n, m, p)
{
    p.f_wOK(n, m, p)*p.wOK(n)/p.f(n, m, p)
}
# Probability all words OK for n=1,...,5:
for(n in 1:5) for(m in 0:n) for(p in 0:(n-m))
{
    cat("n =", n, "| m =", m, "| p =", p, "| Prob all OK =", p.wOK_f(n, m, p), "\n")
}
5. Results
These are the results for n=1,...,5, and all possibilities for m and p. For instance, if you have 3 words with one match, one partial match, and one not found, you can be 66.5% sure it's a class-I user. In the same situation, you can attribute a score of 42.8% that all words are valid.
Note that option (a) does not give a 100% score to the case of all matches, but option (b) does. This is expected, since we assumed that the database has no invalid words, hence if they are all found, then they are all valid. OTOH, there is a small chance that a user in class II or III could enter all valid words, but this chance decreases rapidly as n increases.
(a)
n = 1 | m = 0 | p = 0 | Prob type I = 0.06612505
n = 1 | m = 0 | p = 1 | Prob type I = 0.8107086
n = 1 | m = 1 | p = 0 | Prob type I = 0.9648451
n = 2 | m = 0 | p = 0 | Prob type I = 0.002062543
n = 2 | m = 0 | p = 1 | Prob type I = 0.1186027
n = 2 | m = 0 | p = 2 | Prob type I = 0.884213
n = 2 | m = 1 | p = 0 | Prob type I = 0.597882
n = 2 | m = 1 | p = 1 | Prob type I = 0.9733557
n = 2 | m = 2 | p = 0 | Prob type I = 0.982106
n = 3 | m = 0 | p = 0 | Prob type I = 5.901733e-05
n = 3 | m = 0 | p = 1 | Prob type I = 0.003994149
n = 3 | m = 0 | p = 2 | Prob type I = 0.200601
n = 3 | m = 0 | p = 3 | Prob type I = 0.9293284
n = 3 | m = 1 | p = 0 | Prob type I = 0.07393334
n = 3 | m = 1 | p = 1 | Prob type I = 0.665019
n = 3 | m = 1 | p = 2 | Prob type I = 0.9798274
n = 3 | m = 2 | p = 0 | Prob type I = 0.7500993
n = 3 | m = 2 | p = 1 | Prob type I = 0.9864524
n = 3 | m = 3 | p = 0 | Prob type I = 0.990882
n = 4 | m = 0 | p = 0 | Prob type I = 1.66568e-06
n = 4 | m = 0 | p = 1 | Prob type I = 0.0001158324
n = 4 | m = 0 | p = 2 | Prob type I = 0.007636577
n = 4 | m = 0 | p = 3 | Prob type I = 0.3134207
n = 4 | m = 0 | p = 4 | Prob type I = 0.9560934
n = 4 | m = 1 | p = 0 | Prob type I = 0.004198015
n = 4 | m = 1 | p = 1 | Prob type I = 0.09685249
n = 4 | m = 1 | p = 2 | Prob type I = 0.7256616
n = 4 | m = 1 | p = 3 | Prob type I = 0.9847408
n = 4 | m = 2 | p = 0 | Prob type I = 0.1410053
n = 4 | m = 2 | p = 1 | Prob type I = 0.7992839
n = 4 | m = 2 | p = 2 | Prob type I = 0.9897541
n = 4 | m = 3 | p = 0 | Prob type I = 0.855978
n = 4 | m = 3 | p = 1 | Prob type I = 0.9931117
n = 4 | m = 4 | p = 0 | Prob type I = 0.9953741
n = 5 | m = 0 | p = 0 | Prob type I = 4.671933e-08
n = 5 | m = 0 | p = 1 | Prob type I = 3.289577e-06
n = 5 | m = 0 | p = 2 | Prob type I = 0.0002259559
n = 5 | m = 0 | p = 3 | Prob type I = 0.01433312
n = 5 | m = 0 | p = 4 | Prob type I = 0.4459982
n = 5 | m = 0 | p = 5 | Prob type I = 0.9719289
n = 5 | m = 1 | p = 0 | Prob type I = 0.0002158996
n = 5 | m = 1 | p = 1 | Prob type I = 0.005694145
n = 5 | m = 1 | p = 2 | Prob type I = 0.1254661
n = 5 | m = 1 | p = 3 | Prob type I = 0.7787294
n = 5 | m = 1 | p = 4 | Prob type I = 0.988466
n = 5 | m = 2 | p = 0 | Prob type I = 0.00889696
n = 5 | m = 2 | p = 1 | Prob type I = 0.1788336
n = 5 | m = 2 | p = 2 | Prob type I = 0.8408416
n = 5 | m = 2 | p = 3 | Prob type I = 0.9922575
n = 5 | m = 3 | p = 0 | Prob type I = 0.2453087
n = 5 | m = 3 | p = 1 | Prob type I = 0.8874493
n = 5 | m = 3 | p = 2 | Prob type I = 0.994799
n = 5 | m = 4 | p = 0 | Prob type I = 0.9216786
n = 5 | m = 4 | p = 1 | Prob type I = 0.9965092
n = 5 | m = 5 | p = 0 | Prob type I = 0.9976583
(b)
n = 1 | m = 0 | p = 0 | Prob all OK = 0.04391523
n = 1 | m = 0 | p = 1 | Prob all OK = 0.836025
n = 1 | m = 1 | p = 0 | Prob all OK = 1
n = 2 | m = 0 | p = 0 | Prob all OK = 0.0008622994
n = 2 | m = 0 | p = 1 | Prob all OK = 0.07699368
n = 2 | m = 0 | p = 2 | Prob all OK = 0.8912977
n = 2 | m = 1 | p = 0 | Prob all OK = 0.3900892
n = 2 | m = 1 | p = 1 | Prob all OK = 0.9861099
n = 2 | m = 2 | p = 0 | Prob all OK = 1
n = 3 | m = 0 | p = 0 | Prob all OK = 1.567032e-05
n = 3 | m = 0 | p = 1 | Prob all OK = 0.001646751
n = 3 | m = 0 | p = 2 | Prob all OK = 0.1284228
n = 3 | m = 0 | p = 3 | Prob all OK = 0.923812
n = 3 | m = 1 | p = 0 | Prob all OK = 0.03063598
n = 3 | m = 1 | p = 1 | Prob all OK = 0.4278888
n = 3 | m = 1 | p = 2 | Prob all OK = 0.9789305
n = 3 | m = 2 | p = 0 | Prob all OK = 0.485069
n = 3 | m = 2 | p = 1 | Prob all OK = 0.990527
n = 3 | m = 3 | p = 0 | Prob all OK = 1
n = 4 | m = 0 | p = 0 | Prob all OK = 2.821188e-07
n = 4 | m = 0 | p = 1 | Prob all OK = 3.046322e-05
n = 4 | m = 0 | p = 2 | Prob all OK = 0.003118531
n = 4 | m = 0 | p = 3 | Prob all OK = 0.1987396
n = 4 | m = 0 | p = 4 | Prob all OK = 0.9413746
n = 4 | m = 1 | p = 0 | Prob all OK = 0.001109629
n = 4 | m = 1 | p = 1 | Prob all OK = 0.03975118
n = 4 | m = 1 | p = 2 | Prob all OK = 0.4624648
n = 4 | m = 1 | p = 3 | Prob all OK = 0.9744778
n = 4 | m = 2 | p = 0 | Prob all OK = 0.05816511
n = 4 | m = 2 | p = 1 | Prob all OK = 0.5119571
n = 4 | m = 2 | p = 2 | Prob all OK = 0.9843855
n = 4 | m = 3 | p = 0 | Prob all OK = 0.5510398
n = 4 | m = 3 | p = 1 | Prob all OK = 0.9927134
n = 4 | m = 4 | p = 0 | Prob all OK = 1
n = 5 | m = 0 | p = 0 | Prob all OK = 5.05881e-09
n = 5 | m = 0 | p = 1 | Prob all OK = 5.530918e-07
n = 5 | m = 0 | p = 2 | Prob all OK = 5.899106e-05
n = 5 | m = 0 | p = 3 | Prob all OK = 0.005810434
n = 5 | m = 0 | p = 4 | Prob all OK = 0.2807414
n = 5 | m = 0 | p = 5 | Prob all OK = 0.9499773
n = 5 | m = 1 | p = 0 | Prob all OK = 3.648353e-05
n = 5 | m = 1 | p = 1 | Prob all OK = 0.001494098
n = 5 | m = 1 | p = 2 | Prob all OK = 0.051119
n = 5 | m = 1 | p = 3 | Prob all OK = 0.4926606
n = 5 | m = 1 | p = 4 | Prob all OK = 0.9710204
n = 5 | m = 2 | p = 0 | Prob all OK = 0.002346281
n = 5 | m = 2 | p = 1 | Prob all OK = 0.07323064
n = 5 | m = 2 | p = 2 | Prob all OK = 0.5346423
n = 5 | m = 2 | p = 3 | Prob all OK = 0.9796679
n = 5 | m = 3 | p = 0 | Prob all OK = 0.1009589
n = 5 | m = 3 | p = 1 | Prob all OK = 0.5671273
n = 5 | m = 3 | p = 2 | Prob all OK = 0.9871377
n = 5 | m = 4 | p = 0 | Prob all OK = 0.5919764
n = 5 | m = 4 | p = 1 | Prob all OK = 0.9938288
n = 5 | m = 5 | p = 0 | Prob all OK = 1
If "average" is no solution because the database lacks words, I'd say: extend the database :)
Another idea could be to weight the results, to get an adjusted average, for example:
100% = 1.00x weight
90% = 0.95x weight
80% = 0.90x weight
...
0% = 0.50x weight
So for your example you would compute:
(100*1 + 90*0.95 + 0*0.5) / (100*1 + 100*0.95 + 100*0.5) = 0.75714285714
=> 75.7%
A regular average would give 63.3%.
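That weighting scheme can be sketched in Python (rather than the R used elsewhere in this thread); the weight for a sureness q in [0,1] interpolates linearly from 0.5 at q=0 to 1.0 at q=1, matching the table above:

```python
def weight(q):
    # linear interpolation: 0% sureness -> 0.5x weight, 100% -> 1.0x
    return 0.5 + 0.5 * q

def adjusted_average(surenesses):
    # each word contributes q*weight(q); normalize by the maximum
    # possible contribution, i.e. weight(q) per word
    num = sum(q * weight(q) for q in surenesses)
    den = sum(weight(q) for q in surenesses)
    return num / den

print(adjusted_average([1.0, 0.9, 0.0]))  # ~0.757, i.e. the 75.7% above
```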
Since the order of words is not important in your description, the independent variable is the fraction of valid words. If the fraction is a perfect 1, i.e. all words are found to be perfect matches with the DB, then you are perfectly sure to have the all-valid outcome. If it's zero, i.e. all words are perfect misses in the DB, then you are perfectly sure to have the all-invalid outcome. If you have .5, then this must be the unlikely mixed-up outcome because neither of the other two is possible.
You say the mixed outcome is unlikely while the two extremes are more so. You are after the likelihood of the all-valid outcome.
Let the fraction of valid words (sum of "surenesses" of matches / # of words) be f and hence the desired likelihood of the all-valid outcome be L(f). By the discussion so far, we know L(1)=1 and L(f)=0 for 0<=f<=1/2 .
To honor your information that the mixed outcome is less likely than the all-valid (and the all-invalid) outcome, the shape of L must rise monotonically and quickly from 1/2 toward 1 and reach 1 at f=1.
Since this is heuristic, we might pick any reasonable function with this character. If we're clever it will have a parameter to control the steepness of the step and perhaps another for its location. This lets us tweak what "less likely" means for the middle case.
One such function is this for 1/2 <= f <= 1:
L(f) = 5 + f * (-24 + (36 - 16 * f) * f) + (-4 + f * (16 + f * (-20 + 8 * f))) * s
and zero for 0 <= f < 1/2. Although it's hairy-looking, it's the simplest polynomial that intersects (1/2,0) and (1,1) with slope 0 at f=1 and slope 2s at f=1/2.
You can set 0 <= s <= 3 to change the step shape. Here is a shot with s=3, which is probably what you want: [plot of L(f) for s=3 not reproduced]
If you set s > 3, it shoots above 1 before settling down, not what we want.
Of course there are infinitely many other possibilities. If this one doesn't work, comment and we'll look for another.
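A quick numerical sanity check of the polynomial (a Python sketch, transcribing the formula above verbatim):

```python
def L(f, s=3.0):
    # heuristic likelihood of the all-valid outcome, given the
    # fraction f of valid words; s controls the slope near f = 1/2
    if f < 0.5:
        return 0.0
    return (5 + f * (-24 + (36 - 16 * f) * f)
            + (-4 + f * (16 + f * (-20 + 8 * f))) * s)

# L(1/2) == 0 and L(1) == 1 hold for any s, as claimed
```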
Averaging is, of course, rubbish. If the individual word probabilities were accurate, the probability that all words are correct is simply the product, not the average. If you have an estimate for the uncertainties in your individual probabilities, you could work out their product marginalized over all the individual probabilities.
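To make the product-versus-average point concrete (a Python sketch, using the face-value per-word probabilities 100%, 90%, 0% from the question's example):

```python
probs = [1.0, 0.9, 0.0]  # face-value per-word probabilities

avg = sum(probs) / len(probs)  # naive average, ~0.633

all_ok = 1.0
for p in probs:
    all_ok *= p  # product = P(all words correct), assuming independence

# the average hides the certainly-wrong word; the product does not (all_ok == 0)
```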
