I would like to design a data structure and algorithm such that, given an array of elements, where each element has a weight in the interval [a, b], I can achieve constant-time insertion and deletion. Deletion is performed randomly, where the probability of an element being deleted is proportional to its weight.
I do not believe there is a deterministic algorithm that can achieve both operations in constant time, but I think there are randomized algorithms that can accomplish this?
I don't know if O(1) worst-case time is impossible; I don't see any particular reason it should be. But it's definitely possible to have a simple data structure which achieves O(1) expected time.
The idea is to store a dynamic array of pairs (or two parallel arrays), where each item is paired with its weight. Insertion is done by appending in O(1) amortised time, and an element can be removed by index by swapping it with the last element, so that it can be removed from the end of the array in O(1) time. To sample a random element from the weighted distribution, choose a random index and generate a random number in the half-open interval [0, 2), where 2 is an upper bound on the weights; if the number is less than that element's weight, select the element at that index, otherwise repeat this process until an element is selected. The idea is that each index is equally likely to be chosen, and the probability that it gets kept rather than rejected is proportional to its weight.
This is a Las Vegas algorithm, meaning it is expected to complete in a finite time, but with very low probability it can take arbitrarily long to complete. The number of iterations required to sample an element will be highest when every weight is exactly 1, in which case it follows a geometric distribution with parameter p = 1/2, so its expected value is 2, a constant which is independent of the number of elements in the data structure.
In general, if all weights are in an interval [a, b] for real numbers 0 < a <= b, then the expected number of iterations is at most b/a. This is always a constant, but it is potentially a large constant (i.e. it takes many iterations to select a single sample) if the lower bound a is small relative to b.
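To make this concrete, here is a minimal Python sketch of the structure (the class and method names are mine; as above, it assumes 2 is an upper bound on the weights):

import random

class WeightedSampler:
    def __init__(self, max_weight=2.0):
        self.items = []               # list of (element, weight) pairs
        self.max_weight = max_weight  # upper bound b on the weights

    def insert(self, element, weight):
        # O(1) amortised append
        self.items.append((element, weight))

    def _remove_at(self, i):
        # O(1): overwrite index i with the last pair, then pop from the end
        self.items[i] = self.items[-1]
        self.items.pop()

    def pop_random(self):
        # rejection sampling; expected number of iterations is at most b/a
        while True:
            i = random.randrange(len(self.items))
            element, weight = self.items[i]
            if random.uniform(0, self.max_weight) < weight:
                self._remove_at(i)
                return element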
This is not an answer per se, just a tiny example to illustrate the algorithm devised by @kaya3.
| value | weight |
| v1    | 1.0    |
| v2    | 1.5    |
| v3    | 1.5    |
| v4    | 2.0    |
| v5    | 1.0    |
| total | 7.0    |
The total weight is 7.0. It's easy to maintain in O(1) by storing it in a variable and increasing/decreasing it at each insertion/removal.
The probability of each element is simply its weight divided by the total weight.
| value | proba | decimal   |
| v1    | 1.0/7 | 0.1428... |
| v2    | 1.5/7 | 0.2142... |
| v3    | 1.5/7 | 0.2142... |
| v4    | 2.0/7 | 0.2857... |
| v5    | 1.0/7 | 0.1428... |
Using the algorithm of @kaya3, if we draw a random index, then the probability of drawing each value is 1/size (1/5 here).
The chance of being rejected is 50% for v1 and v5, 25% for v2 and v3, and 0% for v4. So in the first round, the probabilities of being selected are:
| value | proba | decimal |
| v1    | 2/20  | 0.10    |
| v2    | 3/20  | 0.15    |
| v3    | 3/20  | 0.15    |
| v4    | 4/20  | 0.20    |
| v5    | 2/20  | 0.10    |
| total | 14/20 | 0.70    |
Then the probability of needing a 2nd round is 30%, and the probability of each index being drawn in that round is (6/20)/5 = 3/50.
| value | proba 2 rounds | decimal |
| v1    | 2/20 + 6/200   | 0.130   |
| v2    | 3/20 + 9/200   | 0.195   |
| v3    | 3/20 + 9/200   | 0.195   |
| v4    | 4/20 + 12/200  | 0.260   |
| v5    | 2/20 + 6/200   | 0.130   |
| total | 14/20 + 42/200 | 0.910   |
The probability of needing a 3rd round is 9%, that is 9/500 for each index:
| value | proba 3 rounds            | decimal |
| v1    | 2/20 + 6/200 + 18/2000    | 0.1390  |
| v2    | 3/20 + 9/200 + 27/2000    | 0.2085  |
| v3    | 3/20 + 9/200 + 27/2000    | 0.2085  |
| v4    | 4/20 + 12/200 + 36/2000   | 0.2780  |
| v5    | 2/20 + 6/200 + 18/2000    | 0.1390  |
| total | 14/20 + 42/200 + 126/2000 | 0.9730  |
So we see that the series converges to the correct probabilities. The numerators are multiples of the weights, so it's clear that the relative weight of each element is respected.
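If you want to check the arithmetic, a quick Python sketch reproduces the three-round tables above:

weights = {"v1": 1.0, "v2": 1.5, "v3": 1.5, "v4": 2.0, "v5": 1.0}
n, b = len(weights), 2.0              # 5 elements, maximum possible weight 2

prob = {v: 0.0 for v in weights}
p_round = 1.0                         # probability of reaching the current round
for _ in range(3):                    # rounds 1 to 3, as in the tables above
    for v, w in weights.items():
        prob[v] += p_round * (1 / n) * (w / b)
    p_round *= 1 - sum(weights.values()) / (n * b)   # probability of a rejection

print(prob)  # matches the 3-round table: v1 0.139, v2 0.2085, v4 0.278, ...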
This is a sketch of an answer.
When all weights are 1, we can maintain a random permutation of the inputs.
Each time an element is inserted, put it at the end of the array, then pick a random position i in the array, and swap the last element with the element at position i.
(It may well be a no-op if the random position turns out to be the last one.)
When deleting, just delete the last element.
Assuming we can use a dynamic array with O(1) (worst case or amortized) insertion and deletion, this does both insertion and deletion in O(1).
With weights 1 and 2, a similar structure can be used.
Perhaps each element of weight 2 should be put twice instead of once.
Perhaps when an element of weight 2 is deleted, its other copy should also be deleted.
So we should in fact store indices instead of the elements, and another array, locations, which stores and tracks the two indices for each element. The swaps should keep this locations array up-to-date.
Deleting an arbitrary element can be done in O(1) similarly to inserting: swap with the last one, delete the last one.
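Here is a rough Python sketch of my reading of this (all names are mine): a weight-2 element occupies two slots in the array, and a locations map tracks each element's slots so that swaps and removals stay consistent.

import random

class WeightedBag:
    def __init__(self):
        self.slots = []      # element ids; a weight-2 id appears twice
        self.locations = {}  # id -> set of its indices in slots

    def _swap(self, i, j):
        a, b = self.slots[i], self.slots[j]
        if i == j or a == b:
            return
        self.slots[i], self.slots[j] = b, a
        self.locations[a].discard(i); self.locations[a].add(j)
        self.locations[b].discard(j); self.locations[b].add(i)

    def insert(self, eid, weight):  # weight is 1 or 2
        self.locations[eid] = set()
        for _ in range(weight):
            # append, then swap with a random position, keeping the
            # array a uniformly random permutation
            self.slots.append(eid)
            j = len(self.slots) - 1
            self.locations[eid].add(j)
            self._swap(random.randrange(j + 1), j)

    def _remove_slot(self, i):
        # swap slot i with the last slot, then delete from the end
        j = len(self.slots) - 1
        self._swap(i, j)
        self.locations[self.slots.pop()].discard(j)

    def delete(self, eid):
        # remove all of eid's slots, highest index first so the
        # remaining recorded indices stay valid
        for i in sorted(self.locations[eid], reverse=True):
            self._remove_slot(i)
        del self.locations[eid]

    def delete_random(self):
        # the last slot of a random permutation is uniform over slots,
        # so an id is picked with probability proportional to its weight
        eid = self.slots[-1]
        self.delete(eid)
        return eid

bag = WeightedBag()
for name, w in [("a", 1), ("b", 2), ("c", 2)]:
    bag.insert(name, w)
print(bag.delete_random())  # "b" and "c" are each twice as likely as "a"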
I have a system of equations of the form y = Ax + b, where y, x and b are n×1 vectors and A is an n×n (symmetric) matrix.
So here is the wrinkle. Not all of x is unknown. Certain rows of x are specified, and the corresponding rows of y are unknown. Below is an example:
| 10 |   |  5 -2  1 | | * |   | -1 |
|  * | = | -2  2  0 | | 1 | + |  1 |
|  1 |   |  1  0  1 | | * |   |  2 |
where * designates unknown quantities.
I have built a solver for problems such as the above in Fortran, but I wanted to know if there is a decent robust solver out there, as part of LAPACK or MKL, for these types of problems?
My solver is based on a permutation vector pivot = [1,3,2], which rearranges the x and y vectors according to known and unknown entries
| 10 |   |  5  1 -2 | | * |   | -1 |
|  1 | = |  1  1  0 | | * | + |  2 |
|  * |   | -2  0  2 | | 1 |   |  1 |
and then solves using a block-matrix formulation and LU decomposition
! solves a n×n system of equations where k values are known from the 'x' vector
function solve_linear_system(A,b,x_known,y_known,pivot,n,k) result(x)
use lu
use, intrinsic :: iso_c_binding, only: c_int, c_double
integer(c_int),intent(in) :: n, k, pivot(n)
real(c_double),intent(in) :: A(n,n), b(n), x_known(k), y_known(n-k)
real(c_double) :: x(n), y(n), r(n-k), A1(n-k,n-k), A3(n-k,k), b1(n-k)
integer(c_int) :: i, j, u, code, d, indx(n-k)
u = n-k
!store known `x` and `y` values
x(pivot(u+1:n)) = x_known
y(pivot(1:u)) = y_known
!define block matrices
! | y_known | = | A1  A3 | | x_unknown | + | b1 |
! |    *    |   | A3' A2 | | x_known   |   | b2 |
A1 = A(pivot(1:u), pivot(1:u))
A3 = A(pivot(1:u), pivot(u+1:n))
b1 = b(pivot(1:u))
!define new rhs vector
r = y_known -matmul(A3, x_known)-b1
! solve `A1*x=r` with LU decomposition from NR book for 'x'
call ludcmp(A1,u,indx,d,code)
call lubksb(A1,u,indx,r)
! store unknown 'x' values (stored into 'r' by 'lubksb')
x(pivot(1:u)) = r
end function
For the example above, the solution is

    | 10.0 |       |  3.5 |
y = | -4.0 |   x = |  1.0 |
    |  1.0 |       | -4.5 |
PS. The linear systems typically have n <= 20 equations.
The problem with only unknowns in x (i.e. all of y known) is a linear least-squares problem.
Your a priori knowledge can be introduced with equality constraints (fixing some variables), transforming it into a linear equality-constrained least-squares problem.
There is indeed a routine within LAPACK solving the latter, called xGGLSE.
Here is an overview.
(It also seems you need to multiply b by -1 in your case to be compatible with that definition.)
Edit: On further inspection, I missed the unknowns within y. Ouch. This is bad.
First, I would rewrite your system in an AX = b form where A and b are known. In your example, and provided that I didn't make any mistakes, it would give:
    | 5 0 1 |       | x1 |           | 13 |
A = | 2 1 0 |   X = | x2 |   and b = |  3 |
    | 1 0 1 |       | x3 |           | -1 |
Then you can use plenty of methods coming from various libraries, like LAPACK or BLAS, depending on the properties of your matrix A (positive definite, ...). As a starting point, I would suggest a simple method with a direct inversion of the matrix A, especially if your matrix is small. There are also many iterative approaches (Jacobi, gradient methods, Gauss-Seidel, ...) that you can use for bigger cases.
Edit: An idea to solve it in 2 steps
First step: You can rewrite your system as 2 subsystems that have X and Y as unknowns, where each subsystem's dimension equals the number of unknowns in the corresponding vector.
The first subsystem in X will be AX = b which can be solved by direct or iterative methods.
Second step: The second system in Y can be solved directly once you know X, because Y can be expressed in the form Y = A'X + b'.
I think this approach is more general.
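For illustration, here is a NumPy sketch of this two-step idea applied to the example from the question (the function and argument names are mine, not a library API):

import numpy as np

def solve_mixed(A, b, y_known, x_known, known_x_idx):
    # known_x_idx: indices i where x[i] is given (and y[i] is unknown)
    n = len(b)
    k = np.asarray(known_x_idx)
    u = np.setdiff1d(np.arange(n), k)  # rows where y is given, x unknown

    x, y = np.empty(n), np.empty(n)
    x[k], y[u] = x_known, y_known

    # step 1: reduced system A[u,u] x_u = y_known - A[u,k] x_known - b[u]
    rhs = y[u] - A[np.ix_(u, k)] @ x[k] - b[u]
    x[u] = np.linalg.solve(A[np.ix_(u, u)], rhs)

    # step 2: the unknown rows of y follow directly from y = A x + b
    y[k] = A[k] @ x + b[k]
    return x, y

A = np.array([[5., -2., 1.], [-2., 2., 0.], [1., 0., 1.]])
b = np.array([-1., 1., 2.])
x, y = solve_mixed(A, b, y_known=[10., 1.], x_known=[1.], known_x_idx=[1])
print(x)  # [ 3.5  1.  -4.5]
print(y)  # [10.  -4.   1. ]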
I have an analog power signal from a motor. The signal ramps up quickly, but powers off slowly over the course of several seconds. The signal looks almost like a series of plateaus on the descent. The problem is that the signal doesn't settle back to zero. It settles back to an intermediate level that is unknown and varies from motor to motor. See the sample data below.
I'm trying to find a way to determine when the motor is off and at that intermediate level.
My thought is to find and store the max point, and calculate the slopes thereafter until the slope is more negative than some large negative value like -160 (~ -60 degrees), and then declare that the motor must be powering off. The sample points below have all duplicates removed (there are about 5000 samples typically).
My problem is determining the X values. In the formula (y2 - y1) / (x2 - x1), the x values could be far enough apart in time that the slope never appears steeper than -30 degrees. Picking an absolute number like 10 would fix this, but is there a more mathematically sound method?
The data shows me calculating the slope with the method described above, relative to the max of 921, i.e. (y2 - y1) / ((10+1) - 10). In this scheme, at data point 9, I would say the motor is "off". I'm looking for a more principled means of determining an X value rather than arbitrarily picking 10, for instance.
+---+-----+----------+
| X | Y | Slope |
+---+-----+----------+
| 1 | 65 | 856.000 |
| 2 | 58 | 863.000 |
| 3 | 57 | 864.000 |
| 4 | 638 | 283.000 |
| 5 | 921 | 0.000 |
| 6 | 839 | -82.000 |
| 7 | 838 | -83.000 |
| 8 | 811 | -110.000 |
| 9 | 724 | -197.000 |
+---+-----+----------+
EDIT: A much simpler answer:
Since your motor is either ON or OFF, and ON wattages are strictly higher than OFF wattages, you should be able to discriminate between them by maintaining an average wattage, reporting ON if the current measurement is higher than the average and OFF if it is lower.
Count = 0
Average = 500
Whenever a measurement comes in,
Count = Count + 1
Average = Average + (Measurement - Average) / Count
Return Measurement > Average ? ON : OFF
This represents an average of all the values the wattage has ever been. If we want to eventually "forget" the earliest values (before the motor was ever turned on), we could either keep a buffer of recent values and use that for a moving average, or approximate a moving average with an IIR like
Average = (1-X) * Average + X * Measurement
for some X between 0 and 1 (closer to 0 to change more slowly).
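A minimal Python sketch of both variants (the starting average of 500 and alpha = 0.05 are illustrative, not tuned):

def running_average_detector():
    count, average = 0, 500.0
    def update(measurement):
        nonlocal count, average
        count += 1
        average += (measurement - average) / count
        return "ON" if measurement > average else "OFF"
    return update

def ewma_detector(alpha=0.05):
    average = 500.0
    def update(measurement):
        nonlocal average
        # IIR / exponential moving average: Average = (1-X)*Average + X*Measurement
        average = (1 - alpha) * average + alpha * measurement
        return "ON" if measurement > average else "OFF"
    return update

detect = ewma_detector()
for watts in [65, 58, 57, 638, 921, 839, 838, 811, 724]:  # data from the question
    print(watts, detect(watts))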
Original answer:
You could treat this as an online clustering problem, where you expect three clusters (before the motor turns on, when the motor is on, and when the motor is turned off), or perhaps four (before the motor turns on, peak power, when the motor is running normally, and when the motor turns off). In effect, you're trying to learn what it looks like when a motor is on (or off).
If you don't have any other information about whether the motor is on or off (which could be used to train a model), here's a simple approach:
Define an "Estimate" to contain:
float Value
int Count
Define an "Estimator" to contain:
float TotalError = 0.0
Estimate COLD_OFF = {Value = 0, Count = 1}
Estimate ON = {Value = 1000, Count = 1}
Estimate WARM_OFF = {Value = 500, Count = 1}
a function Update_Estimate(float Measurement)
Find the Estimate E such that E.Value is closest to Measurement
Update TotalError = TotalError + (E.Value - Measurement)*(E.Value - Measurement)
Update E.Value = (E.Value * E.Count + Measurement) / (E.Count + 1)
Update E.Count = E.Count + 1
return E
This takes initial guesses for what the wattages of these stages should be and updates them with the measurements. However, this has some problems. What if our initial guesses are off?
You could initialize some number of Estimators with different possible (e.g. random) guesses for COLD_OFF, ON, and WARM_OFF; after receiving a measurement, let each Estimator update itself and aggregate their values somehow. This aggregation should reward the better estimates. Since you're storing TotalError for each estimate, you could just pick the output of the Estimator that has the lowest TotalError so far, or you could let the Estimators vote (giving each Estimator's vote a weight proportional to 1/(TotalError + 1) or something like that).
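Here is a Python sketch of this (the structure follows the pseudocode above; the initial guesses and the ensemble size are arbitrary):

import random

class Estimator:
    def __init__(self, cold_off=0.0, on=1000.0, warm_off=500.0):
        self.total_error = 0.0
        # each estimate is [value, count]
        self.estimates = {"COLD_OFF": [cold_off, 1], "ON": [on, 1],
                          "WARM_OFF": [warm_off, 1]}

    def update(self, measurement):
        # find the estimate whose value is closest to the measurement
        label = min(self.estimates,
                    key=lambda s: abs(self.estimates[s][0] - measurement))
        value, count = self.estimates[label]
        self.total_error += (value - measurement) ** 2
        self.estimates[label] = [(value * count + measurement) / (count + 1),
                                 count + 1]
        return label

# several Estimators with randomised guesses; trust the lowest TotalError so far
estimators = [Estimator(random.uniform(0, 100), random.uniform(700, 1200),
                        random.uniform(300, 700)) for _ in range(10)]
for watts in [65, 58, 57, 638, 921, 839, 838, 811, 724]:
    labels = [e.update(watts) for e in estimators]
    best = min(range(len(estimators)), key=lambda i: estimators[i].total_error)
    print(watts, labels[best])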
As an example I have the following arrays:
[100,192]
[235,280]
[129,267]
As the intersection of these arrays we get:
[129,192]
[235,267]
A simple exercise for a person, but a problem when creating an algorithm to find that second multidimensional array…
Any language, any ideas…
In case somebody does not understand me:
I'll assume you wish to output any range that has 2 or more overlapping intervals.
So the output for [1,5], [2,4], [3,3] will be (only) [2,4].
The basic idea here is to use a sweep-line algorithm.
Split the ranges into start- and end-points.
Sort the points.
Now iterate through the points with a counter variable initialized to 0.
If you get a start-point:
Increase the counter.
If the counter's value is now 2, record that point as the start-point for a range in the output.
If you get an end-point:
Decrease the counter.
If the counter's value is 1, record that point as the end-point for a range in the output.
Note:
If a start-point and an end-point have the same value, you'll need to process the end-point first if the counter is 1 and the start-point first if the counter is 2 or greater, otherwise you'll end up with a 0-size range or a 0-size gap between two ranges in the output.
This should be fairly simple to do by having a set of the following structure:
Element
int startCount
int endCount
int value
Then you combine all points with the same value into one such element, setting the counts appropriately.
Running time:
O(n log n)
Example:
Input:
[100, 192]
[235, 280]
[129, 267]
(S for start, E for end)
| Points |   | 100   | 129   | 192  | 235   | 267  | 280 |
| Type   |   | Start | Start | End  | Start | End  | End |
| Count  | 0 | 1     | 2     | 1    | 2     | 1    | 0   |
| Output |   |       | [129, | 192] | [235, | 267] |     |
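For reference, a Python sketch of the above, including the tie rule from the note (end-points first when the counter is 1):

from collections import defaultdict

def intersections(ranges):
    starts, ends = defaultdict(int), defaultdict(int)
    for lo, hi in ranges:
        starts[lo] += 1
        ends[hi] += 1
    out, count, begin = [], 0, None
    for v in sorted(set(starts) | set(ends)):
        # tie rule: ends first when count is 1, starts first otherwise
        order = "ES" if count == 1 else "SE"
        for kind in order:
            if kind == "S":
                for _ in range(starts[v]):
                    count += 1
                    if count == 2:
                        begin = v            # a region with >= 2 overlaps starts
            else:
                for _ in range(ends[v]):
                    if count == 2:
                        out.append([begin, v])  # the region ends here
                    count -= 1
    return out

print(intersections([[100, 192], [235, 280], [129, 267]]))
# [[129, 192], [235, 267]]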
This is a Python implementation of the intersection algorithm. Its computational complexity is O(n^2).
a = [[100,192],[235,280],[129,267]]
def get_intersections(diapasons):
    intersections = []
    for d in diapasons:
        for check in diapasons:
            if d == check:
                continue
            # does d start inside check?
            if check[0] <= d[0] <= check[1]:
                # the overlap runs from d's start to whichever range ends first
                right = min(d[1], check[1])
                intersections.append([d[0], right])
    return intersections

print(get_intersections(a))
I'm trying to validate a series of words that are provided by users. I'm trying to come up with a scoring system that will determine the likelihood that the series of words are indeed valid words.
Assume the following input:
xxx yyy zzz
The first thing I do is check each word individually against a database of words that I have. So, let's say that xxx was in the database, so we are 100% sure it's a valid word. Then let's say that yyy doesn't exist in the database, but a possible variation of its spelling exist (say yyyy). We don't give yyy a score of 100%, but maybe something lower (let's say 90%). Then zzz just doesn't exist at all in the database. So, zzz gets a score of 0%.
So we have something like this:
xxx = 100%
yyy = 90%
zzz = 0%
Assume further that the users are going to either:
Provide a list of all valid words (most likely)
Provide a list of all invalid words (likely)
Provide a list of a mix of valid and invalid words (not likely)
As a whole, what is a good scoring system to determine a confidence score that xxx yyy zzz is a series of valid words? I'm not looking for anything too complex, but getting the average of the scores doesn't seem right. If some words in the list of words are valid, I think it increases the likelihood that the word not found in the database is an actual word also (it's just a limitation of the database that it doesn't contain that particular word).
NOTE: The input will generally be a minimum of 2 words (and mostly 2 words), but can be 3, 4, 5 (and maybe even more in some rare cases).
EDIT I have added a new section looking at discriminating word groups into English and non-English groups. This is below the section on estimating whether any given word is English.
I think you intuit that the scoring system you've explained here doesn't quite do justice to this problem.
It's great to find words that are in the dictionary - those words can immediately be given 100% and passed over, but what about non-matching words? How can you determine their probability?
This can be explained by a simple comparison between sentences comprising exactly the same letters:
Abergrandly recieved wuzkinds
Erbdnerye wcgluszaaindid vker
Neither sentence has any English words, but the first sentence looks English - it might be about someone (Abergrandly) who received (there was a spelling mistake) several items (wuzkinds). The second sentence is clearly just my infant hitting the keyboard.
So, in the example above, even though there is no English word present, the probability it's spoken by an English speaker is high. The second sentence has a 0% probability of being English.
I know a couple of heuristics to help detect the difference:
Simple frequency analysis of letters
In any language, some letters are more common than others. Simply counting the incidence of each letter and comparing it to the language's average tells us a lot.
There are several ways you could calculate a probability from it. One might be:
Preparation
Compute or obtain the frequencies of letters in a suitable English corpus. The NLTK is an excellent way to begin. The associated Natural Language Processing with Python book is very informative.
The Test
Count the number of occurrences of each letter in the phrase to test
Compute the linear regression, where the coordinates of each letter-point are:
X axis: its expected frequency, from the preparation step above
Y axis: the actual count
Perform a regression analysis on the data (a rough sketch follows the list of advantages and disadvantages below)
English should report a positive r close to 1.0. Compute the R^2 as a probability that this is English.
An r of 0 or below is either no correlation to English, or the letters have a negative correlation. Not likely English.
Advantages:
Very simple to calculate
Disadvantages:
Will not work so well for small samples, eg "zebra, xylophone"
"Rrressseee" would seem a highly probably word
Does not discriminate between the two example sentences I gave above.
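For what it's worth, a rough Python sketch of this test (the frequency table is abbreviated and approximate; a real one would cover all 26 letters, e.g. derived from an NLTK corpus):

from collections import Counter

# approximate frequencies of the most common English letters; letters not
# listed fall back to a small default below
ENGLISH_FREQ = {"e": .127, "t": .091, "a": .082, "o": .075, "i": .070,
                "n": .067, "s": .063, "h": .061, "r": .060, "d": .043,
                "l": .040, "u": .028}

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def english_r(text):
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    expected = [ENGLISH_FREQ.get(c, 0.01) * len(letters) for c in counts]
    actual = [counts[c] for c in counts]
    return pearson_r(expected, actual)

print(english_r("the quality of mercy is not strained"))  # expect positive r
print(english_r("zzzxxqq"))                               # 0: no correlation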
Bigram frequencies and Trigram frequencies
This is an extension of letter frequencies, but looks at the frequency of letter pairs or triplets. For example, a u follows a q with 99% frequency (why not 100%? dafuq). Again, the NLTK corpus is incredibly useful.
(The digraph frequency chart above is from http://www.math.cornell.edu/~mec/2003-2004/cryptography/subs/digraphs.jpg)
This approach is widely used across the industry, in everything from speech recognition to predictive text on your soft keyboard.
Trigraphs are especially useful. Consider that 'll' is a very common digraph. The string 'lllllllll' therefore consists of only common digraphs and the digraph approach makes it look like a word. Trigraphs resolve this because 'lll' never occurs.
The calculation of this probability of a word using trigraphs can't be done with a simple linear regression model (the vast majority of trigrams will not be present in the word and so the majority of points will be on the x axis). Instead you can use Markov chains (using a probability matrix of either bigrams or trigrams) to compute the probability of a word. An introduction to Markov chains is here.
First build a matrix of probabilities:
X axis: Every bigram ("th", "he", "in", "er", "an", etc)
Y axis: The letters of the alphabet.
The matrix members consist of the probability of the letter of the alphabet following the bigraph.
To start computing probabilities from the start of the word, the X axis digraphs need to include space-a, space-b, up to space-z - e.g. the digraph "space t" represents a word starting with t.
Computing the probability of the word consists of iterating over digraphs and obtaining the probability of the third letter given the digraph. For example, the word "they" is broken down into the following probabilities:
h following "space" t -> probability x%
e following th -> probability y%
y following he -> probability z%
Overall probability = x * y * z %
This computation solves the issues for a simple frequency analysis by highlighting the "wcgl" as having a 0% probability.
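A toy Python sketch of this (the probability "matrix" is stored sparsely as a dict of dicts; the four-word corpus is only for illustration - in practice you would train on a large corpus such as one from the NLTK):

from collections import defaultdict

def train_digraph_model(corpus_words):
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus_words:
        padded = "  " + word.lower()   # two leading spaces handle word starts
        for i in range(2, len(padded)):
            counts[padded[i - 2:i]][padded[i]] += 1
    # convert counts to P(letter | preceding digraph)
    return {dg: {c: n / sum(nxt.values()) for c, n in nxt.items()}
            for dg, nxt in counts.items()}

def word_probability(model, word):
    padded = "  " + word.lower()
    p = 1.0
    for i in range(2, len(padded)):
        # unseen trigraphs give probability 0, as with "wcgl" above
        p *= model.get(padded[i - 2:i], {}).get(padded[i], 0.0)
    return p

model = train_digraph_model(["they", "then", "the", "hey"])
print(word_probability(model, "they"))  # 0.5 on this toy corpus
print(word_probability(model, "wcgl"))  # 0.0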
Note that the probability of any given word will be very small and becomes statistically smaller by between 10x to 20x per extra character. However, examining the probability of known English words of 3, 4, 5, 6, etc characters from a large corpus, you can determine a cutoff below which the word is highly unlikely. Each highly unlikely trigraph will drop the likelihood of being English by 1 to 2 orders of magnitude.
You might then normalize the probability of a word, for example, for 8-letter English words (I've made up the numbers below):
Probabilities from Markov chain:
Probability of the best English word = 10^-7 (10% * 10% * .. * 10%)
Cutoff (Probability of least likely English word) = 10^-14 (1% * 1% * .. * 1%)
Probability for test word (say "coattail") = 10^-12
'Normalize' results
Take logs: Best = -7; Test = -12; Cutoff = -14
Make positive: Best = 7; Test = 2; Cutoff = 0
Normalize between 1.0 and 0.0: Best = 1.0; Test = 0.28; Cutoff = 0.0
(You can easily adjust the higher and lower bounds to, say, between 90% and 10%)
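In code, that normalisation is just a clamped linear rescale in log space (the best and cutoff probabilities below are the made-up numbers from the example):

import math

def normalized_score(p_test, p_best=1e-7, p_cutoff=1e-14):
    score = (math.log10(p_test) - math.log10(p_cutoff)) / \
            (math.log10(p_best) - math.log10(p_cutoff))
    return min(1.0, max(0.0, score))

print(normalized_score(1e-12))  # 0.2857..., the 0.28 from the example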
Now we've examined how to get a better probability that any given word is English, let's look at the group of words.
The group's definition is that it's a minimum of 2 words, but can be 3, 4, 5 or (in a small number of cases) more. You don't mention that there is any overriding structure or associations between the words, so I am not assuming:
That any group is a phrase, eg "tank commander", "red letter day"
That the group is a sentence or clause, eg " I am thirsty", "Mary needs an email"
However, if these assumptions are wrong, then the problem becomes more tractable for larger word-groups, because the words will conform to English's rules of syntax - we can use, say, the NLTK to parse the clause to gain more insight.
Looking at the probability that a group of words is English
OK, in order to get a feel for the problem, let's look at different use cases. In the following:
I am going to ignore the cases of all words or all not-words as those cases are trivial
I will consider English-like words that can't be assumed to be in a dictionary, like weird surnames (eg Kardashian), unusual product names (eg stackexchange) and so on.
I will use simple averages of the probabilities assuming that random gibberish is 0% while English-like words are at 90%.
Two words
(50%) Red ajkhsdjas
(50%) Hkdfs Friday
(95%) Kardashians program
(95%) Using Stackexchange
From these examples, I think you would agree that 1. and 2. are likely not acceptable whereas 3. and 4. are. The simple average calculation appears a useful discriminator for two word groups.
Three words
With one suspect word:
(67%) Red dawn dskfa
(67%) Hskdkc communist manifesto
(67%) Economic jasdfh crisis
(97%) Kardashian fifteen minutes
(97%) stackexchange user experience
Clearly 4. and 5. are acceptable.
But what about 1., 2. or 3.? Are there any material differences between 1., 2. and 3.? Probably not, which rules out using Bayesian statistics. But should these be classified as English or not? I think that's your call.
With two suspect words:
(33%) Red ksadjak adsfhd
(33%) jkdsfk dsajjds manifesto
(93%) Stackexchange emails Kardashians
(93%) Stackexchange Kardashian account
I would hazard that 1. and 2. are not acceptable, but 3. and 4 definitely are. (Well, except the Kardashians' having an account here - that does not bode well). Again the simple averages can be used as a simple discriminator - and you can choose if it's above or below 67%.
Four words
The number of permutations starts getting wild, so I'll give only a few examples:
One suspect word:
(75%) Programming jhjasd language today
(93%) Hopeless Kardashian tv series
Two suspect words:
(50%) Programming kasdhjk jhsaer today
(95%) Stackexchange implementing Kasdashian filter
Three suspect words:
(25%) Programming sdajf jkkdsf kuuerc
(93%) Stackexchange bitifying Kardashians tweetdeck
In my mind, it's clear that which word groups are meaningful aligns with the simple average, with the exception of 2.1 - that's again your call.
Interestingly, the cutoff point for four-word groups might be different from that for three-word groups, so I'd recommend that your implementation have a different configuration setting for each group size. Having different cutoffs reflects that the jump from 2->3 and then 3->4 words does not mesh with the idea of smooth, continuous probabilities.
Implementing different cutoff values for these groups directly addresses your intuition "Right now, I just have a "gut" feeling that my xxx yyy zzz example really should be higher than 66.66%, but I'm not sure how to express it as a formula.".
Five words
You get the idea - I'm not going to enumerate any more here. However, as you get to five words, it starts to get enough structure that several new heuristics can come in:
Use of Bayesian probabilities/statistics (what is the probability of the third word being a word given that the first two were?)
Parsing the group using the NLTK and looking at whether it makes grammatical sense
Problem cases
English has a number of very short words, and this might cause a problem. For example:
Gibberish: r xu r
Is this English? I am a
You may have to write code to specifically test for 1 and 2 letter words.
TL;DR Summary
Non-dictionary words can be tested for how 'English' (or French, or Spanish, etc.) they are using letter and trigram frequencies. Picking up English-like words and attributing them a high score is critical to distinguishing English groups.
Up to four words, a simple average has great discriminatory power, but you probably want to set a different cutoff for 2 words, 3 words and 4 words.
Five words and above you can probably start using Bayesian statistics
Longer word groups if they should be sentences or sentence fragments can be tested using a natural language tool, such as NLTK.
This is a heuristic process and, ultimately, there will be confounding values (such as "I am a"). Writing a perfect statistical analysis routine may therefore not be especially useful compared to a simple average, if it can be confounded by a large number of exceptions.
Perhaps you could use Bayes' formula.
You already have numerical guesses for the probability of each word to be real.
Next step is to make educated guesses about the probability of the entire list being good, bad or mixed (i.e., turn "most likely", "likely" and "not likely" into numbers.)
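For example, a crude Python sketch with made-up numbers (the priors and the per-word validity rates are assumptions you would tune):

priors = {"all_valid": 0.60, "mixed": 0.10, "all_invalid": 0.30}
p_valid = {"all_valid": 0.99, "mixed": 0.50, "all_invalid": 0.01}

def posterior(word_scores):
    # treat each score as P(word is valid) and the words as independent
    post = {}
    for h in priors:
        likelihood = 1.0
        for s in word_scores:
            likelihood *= s * p_valid[h] + (1 - s) * (1 - p_valid[h])
        post[h] = priors[h] * likelihood
    z = sum(post.values())
    return {h: round(p / z, 3) for h, p in post.items()}

print(posterior([1.0, 0.9, 0.0]))  # the xxx yyy zzz example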
I'll give a Bayesian hierarchical model solution. It has a few parameters that must be set by hand, but it is quite robust regarding these parameters, as the simulation below shows. And it can handle not only the scoring system for the word list, but also a probable classification of the user who entered the words. The treatment may be a little technical, but in the end we'll have a routine to calculate the scores as a function of 3 numbers: the number of words in the list, the number of those with an exact match in the database, and the number of those with a partial match (as in yyyy). The routine is implemented in R, but if you have never used it, just download the interpreter, copy and paste the code into its console, and you'll see the results shown here.
BTW, English is not my first language, so bear with me... :-)
1. Model Specification:
There are 3 classes of users, named I, II, III. We assume that each word list is generated by a single user, and that the user is drawn randomly from a universe of users. We say that this universe is 70% class I, 25% class II and 5% class III. These numbers can be changed, of course. We have so far
Prob[User=I] = 70%
Prob[User=II] = 25%
Prob[User=III] = 5%
Given the user, we assume conditional independence, i.e., the user will not look to previous words to decide if he'll type in a valid or invalid word.
User I tends to give only valid words, User II only invalid words, and user III is mixed. So we set
Prob[Word=OK | User=I] = 99%
Prob[Word=OK | User=II] = 0.001%
Prob[Word=OK | User=III] = 50%
The probabilities of the word being invalid, given the class of the user, are complementary. Note that we give a very small but non-zero probability of a class-II user entering valid words, since even a monkey in front of a typewriter will eventually type a valid word.
The final step of the model specification regards the database. We assume that, for each word, the query may have 3 outcomes: a total match, a partial match (as in yyyy ) or no match. In probability terms, we assume that
Prob[match | valid] = 98% (not all valid words will be found)
Prob[partial | valid] = 0.2% (a rare event)
Prob[match | INvalid] = 0 (the database may be incomplete, but it has no invalid words)
Prob[partial | INvalid] = 0.1% (a rare event)
The probabilities of not finding the word don't have to be set, as they are complementary. That's it - our model is set.
2. Notation and Objective
We have a discrete random variable U, taking values in {1, 2, 3} and two discrete random vectors W and F, each of size n (= the number of words), where W_i is 1 if the word is valid and 2 if the word is invalid, and F_i is 1 if the word is found in the database, 2 if it's a partial match and 3 if it's not found.
Only vector F is observable, the others are latent. Using Bayes theorem and the distributions we set up in the model specification, we can calculate
(a) Prob[User=I | F],
i. e., the posterior probability of the user being in class I, given the observed matchings; and
(b) Prob[W=all valid | F],
i. e., the posterior probability that all words are valid, given the observed matchings.
Depending on your objective, you can use one or another as a scoring solution. If you are interested in distinguishing a real user from a computer program, for instance, you can use (a). If you only care about the word list being valid, you should use (b).
I'll try to briefly explain the theory in the next section, but this is the usual setup in the context of Bayesian hierarchical models. The reference is Gelman (2004), "Bayesian Data Analysis".
If you want, you can jump to section 4, with the code.
3. The Math
I'll use a slight abuse of notation, as usual in this context, writing
p(x|y) for Prob[X=x|Y=y] and p(x,y) for Prob[X=x,Y=y].
The goal (a) is to calculate p(u|f), for u=1. Using Bayes theorem:
p(u|f) = p(u,f)/p(f) = p(f|u)p(u)/p(f).
p(u) is given. p(f|u) is obtained from:
p(f|u) = \prod_{i=1}^{n} \sum_{w_i=1}^{2} p(f_i|w_i) p(w_i|u)
       = \prod_{i=1}^{n} p(f_i|u)
       = p(f_i=1|u)^m p(f_i=2|u)^p p(f_i=3|u)^(n-m-p)
where m = number of matches and p = number of partial matches.
p(f) is calculated as:
\sum_{u=1}^{3} p(f|u)p(u)
All these can be calculated directly.
Goal (b) is given by
p(w|f) = p(f|w)*p(w)/p(f)
where
p(f|w) = \prod_{i=1}^{n} p(f_i|w_i)
and p(f_i|w_i) is given in the model specification.
p(f) was calculated above, so we need only
p(w) = \sum_{u=1}^{3} p(w|u)p(u)
where
p(w|u) = \prod_{i=1}^{n} p(w_i|u)
So everything is set for implementation.
4. The Code
The code is written as an R script; the constants are set at the beginning, in accordance with what was discussed above, and the output is given by the functions
(a) p.u_f(u, n, m, p)
and
(b) p.wOK_f(n, m, p)
that calculate the probabilities for options (a) and (b), given inputs:
u = desired user class (set to u=1)
n = number of words
m = number of matchings
p = number of partial matchings
The code itself:
### Constants:
# User:
# Prob[U=1], Prob[U=2], Prob[U=3]
Prob_user = c(0.70, 0.25, 0.05)
# Words:
# Prob[Wi=OK|U=1,2,3]
Prob_OK = c(0.99, 0.001, 0.5)
Prob_NotOK = 1 - Prob_OK
# Database:
# Prob[Fi=match|Wi=OK], Prob[Fi=match|Wi=NotOK]:
Prob_match = c(0.98, 0)
# Prob[Fi=partial|Wi=OK], Prob[Fi=partial|Wi=NotOK]:
Prob_partial = c(0.002, 0.001)
# Prob[Fi=NOmatch|Wi=OK], Prob[Fi=NOmatch|Wi=NotOK]:
Prob_NOmatch = 1 - Prob_match - Prob_partial
###### First Goal: Probability of being a user type I, given the numbers of matchings (m) and partial matchings (p).
# Prob[Fi=fi|U=u]
#
p.fi_u <- function(fi, u)
{
unname(rbind(Prob_match, Prob_partial, Prob_NOmatch) %*% rbind(Prob_OK, Prob_NotOK))[fi,u]
}
# Prob[F=f|U=u]
#
p.f_u <- function(n, m, p, u)
{
exp( log(p.fi_u(1, u))*m + log(p.fi_u(2, u))*p + log(p.fi_u(3, u))*(n-m-p) )
}
# Prob[F=f]
#
p.f <- function(n, m, p)
{
p.f_u(n, m, p, 1)*Prob_user[1] + p.f_u(n, m, p, 2)*Prob_user[2] + p.f_u(n, m, p, 3)*Prob_user[3]
}
# Prob[U=u|F=f]
#
p.u_f <- function(u, n, m, p)
{
p.f_u(n, m, p, u) * Prob_user[u] / p.f(n, m, p)
}
# Probability user type I for n=1,...,5:
for(n in 1:5) for(m in 0:n) for(p in 0:(n-m))
{
cat("n =", n, "| m =", m, "| p =", p, "| Prob type I =", p.u_f(1, n, m, p), "\n")
}
##################################################################################################
# Second Goal: Probability all words OK given matchings/partial matchings.
p.f_wOK <- function(n, m, p)
{
exp( log(Prob_match[1])*m + log(Prob_partial[1])*p + log(Prob_NOmatch[1])*(n-m-p) )
}
p.wOK <- function(n)
{
sum(exp( log(Prob_OK)*n + log(Prob_user) ))
}
p.wOK_f <- function(n, m, p)
{
p.f_wOK(n, m, p)*p.wOK(n)/p.f(n, m, p)
}
# Probability all words ok for n=1,...,5:
for(n in 1:5) for(m in 0:n) for(p in 0:(n-m))
{
cat("n =", n, "| m =", m, "| p =", p, "| Prob all OK =", p.wOK_f(n, m, p), "\n")
}
5. Results
These are the results for n = 1,...,5 and all possibilities for m and p. For instance, if you have 3 words, one match, one partial match, and one not found, you can be 66.5% sure it's a class-I user. In the same situation, you can attribute a score of 42.8% that all words are valid.
Note that option (a) does not give 100% score to the case of all matches, but option (b) does. This is expected, since we assumed that the database has no invalid words, hence if they are all found, then they are all valid. OTOH, there is a small chance that a user in class II or III can enter all valid words, but this chance decreases rapidly as n increases.
(a)
n = 1 | m = 0 | p = 0 | Prob type I = 0.06612505
n = 1 | m = 0 | p = 1 | Prob type I = 0.8107086
n = 1 | m = 1 | p = 0 | Prob type I = 0.9648451
n = 2 | m = 0 | p = 0 | Prob type I = 0.002062543
n = 2 | m = 0 | p = 1 | Prob type I = 0.1186027
n = 2 | m = 0 | p = 2 | Prob type I = 0.884213
n = 2 | m = 1 | p = 0 | Prob type I = 0.597882
n = 2 | m = 1 | p = 1 | Prob type I = 0.9733557
n = 2 | m = 2 | p = 0 | Prob type I = 0.982106
n = 3 | m = 0 | p = 0 | Prob type I = 5.901733e-05
n = 3 | m = 0 | p = 1 | Prob type I = 0.003994149
n = 3 | m = 0 | p = 2 | Prob type I = 0.200601
n = 3 | m = 0 | p = 3 | Prob type I = 0.9293284
n = 3 | m = 1 | p = 0 | Prob type I = 0.07393334
n = 3 | m = 1 | p = 1 | Prob type I = 0.665019
n = 3 | m = 1 | p = 2 | Prob type I = 0.9798274
n = 3 | m = 2 | p = 0 | Prob type I = 0.7500993
n = 3 | m = 2 | p = 1 | Prob type I = 0.9864524
n = 3 | m = 3 | p = 0 | Prob type I = 0.990882
n = 4 | m = 0 | p = 0 | Prob type I = 1.66568e-06
n = 4 | m = 0 | p = 1 | Prob type I = 0.0001158324
n = 4 | m = 0 | p = 2 | Prob type I = 0.007636577
n = 4 | m = 0 | p = 3 | Prob type I = 0.3134207
n = 4 | m = 0 | p = 4 | Prob type I = 0.9560934
n = 4 | m = 1 | p = 0 | Prob type I = 0.004198015
n = 4 | m = 1 | p = 1 | Prob type I = 0.09685249
n = 4 | m = 1 | p = 2 | Prob type I = 0.7256616
n = 4 | m = 1 | p = 3 | Prob type I = 0.9847408
n = 4 | m = 2 | p = 0 | Prob type I = 0.1410053
n = 4 | m = 2 | p = 1 | Prob type I = 0.7992839
n = 4 | m = 2 | p = 2 | Prob type I = 0.9897541
n = 4 | m = 3 | p = 0 | Prob type I = 0.855978
n = 4 | m = 3 | p = 1 | Prob type I = 0.9931117
n = 4 | m = 4 | p = 0 | Prob type I = 0.9953741
n = 5 | m = 0 | p = 0 | Prob type I = 4.671933e-08
n = 5 | m = 0 | p = 1 | Prob type I = 3.289577e-06
n = 5 | m = 0 | p = 2 | Prob type I = 0.0002259559
n = 5 | m = 0 | p = 3 | Prob type I = 0.01433312
n = 5 | m = 0 | p = 4 | Prob type I = 0.4459982
n = 5 | m = 0 | p = 5 | Prob type I = 0.9719289
n = 5 | m = 1 | p = 0 | Prob type I = 0.0002158996
n = 5 | m = 1 | p = 1 | Prob type I = 0.005694145
n = 5 | m = 1 | p = 2 | Prob type I = 0.1254661
n = 5 | m = 1 | p = 3 | Prob type I = 0.7787294
n = 5 | m = 1 | p = 4 | Prob type I = 0.988466
n = 5 | m = 2 | p = 0 | Prob type I = 0.00889696
n = 5 | m = 2 | p = 1 | Prob type I = 0.1788336
n = 5 | m = 2 | p = 2 | Prob type I = 0.8408416
n = 5 | m = 2 | p = 3 | Prob type I = 0.9922575
n = 5 | m = 3 | p = 0 | Prob type I = 0.2453087
n = 5 | m = 3 | p = 1 | Prob type I = 0.8874493
n = 5 | m = 3 | p = 2 | Prob type I = 0.994799
n = 5 | m = 4 | p = 0 | Prob type I = 0.9216786
n = 5 | m = 4 | p = 1 | Prob type I = 0.9965092
n = 5 | m = 5 | p = 0 | Prob type I = 0.9976583
(b)
n = 1 | m = 0 | p = 0 | Prob all OK = 0.04391523
n = 1 | m = 0 | p = 1 | Prob all OK = 0.836025
n = 1 | m = 1 | p = 0 | Prob all OK = 1
n = 2 | m = 0 | p = 0 | Prob all OK = 0.0008622994
n = 2 | m = 0 | p = 1 | Prob all OK = 0.07699368
n = 2 | m = 0 | p = 2 | Prob all OK = 0.8912977
n = 2 | m = 1 | p = 0 | Prob all OK = 0.3900892
n = 2 | m = 1 | p = 1 | Prob all OK = 0.9861099
n = 2 | m = 2 | p = 0 | Prob all OK = 1
n = 3 | m = 0 | p = 0 | Prob all OK = 1.567032e-05
n = 3 | m = 0 | p = 1 | Prob all OK = 0.001646751
n = 3 | m = 0 | p = 2 | Prob all OK = 0.1284228
n = 3 | m = 0 | p = 3 | Prob all OK = 0.923812
n = 3 | m = 1 | p = 0 | Prob all OK = 0.03063598
n = 3 | m = 1 | p = 1 | Prob all OK = 0.4278888
n = 3 | m = 1 | p = 2 | Prob all OK = 0.9789305
n = 3 | m = 2 | p = 0 | Prob all OK = 0.485069
n = 3 | m = 2 | p = 1 | Prob all OK = 0.990527
n = 3 | m = 3 | p = 0 | Prob all OK = 1
n = 4 | m = 0 | p = 0 | Prob all OK = 2.821188e-07
n = 4 | m = 0 | p = 1 | Prob all OK = 3.046322e-05
n = 4 | m = 0 | p = 2 | Prob all OK = 0.003118531
n = 4 | m = 0 | p = 3 | Prob all OK = 0.1987396
n = 4 | m = 0 | p = 4 | Prob all OK = 0.9413746
n = 4 | m = 1 | p = 0 | Prob all OK = 0.001109629
n = 4 | m = 1 | p = 1 | Prob all OK = 0.03975118
n = 4 | m = 1 | p = 2 | Prob all OK = 0.4624648
n = 4 | m = 1 | p = 3 | Prob all OK = 0.9744778
n = 4 | m = 2 | p = 0 | Prob all OK = 0.05816511
n = 4 | m = 2 | p = 1 | Prob all OK = 0.5119571
n = 4 | m = 2 | p = 2 | Prob all OK = 0.9843855
n = 4 | m = 3 | p = 0 | Prob all OK = 0.5510398
n = 4 | m = 3 | p = 1 | Prob all OK = 0.9927134
n = 4 | m = 4 | p = 0 | Prob all OK = 1
n = 5 | m = 0 | p = 0 | Prob all OK = 5.05881e-09
n = 5 | m = 0 | p = 1 | Prob all OK = 5.530918e-07
n = 5 | m = 0 | p = 2 | Prob all OK = 5.899106e-05
n = 5 | m = 0 | p = 3 | Prob all OK = 0.005810434
n = 5 | m = 0 | p = 4 | Prob all OK = 0.2807414
n = 5 | m = 0 | p = 5 | Prob all OK = 0.9499773
n = 5 | m = 1 | p = 0 | Prob all OK = 3.648353e-05
n = 5 | m = 1 | p = 1 | Prob all OK = 0.001494098
n = 5 | m = 1 | p = 2 | Prob all OK = 0.051119
n = 5 | m = 1 | p = 3 | Prob all OK = 0.4926606
n = 5 | m = 1 | p = 4 | Prob all OK = 0.9710204
n = 5 | m = 2 | p = 0 | Prob all OK = 0.002346281
n = 5 | m = 2 | p = 1 | Prob all OK = 0.07323064
n = 5 | m = 2 | p = 2 | Prob all OK = 0.5346423
n = 5 | m = 2 | p = 3 | Prob all OK = 0.9796679
n = 5 | m = 3 | p = 0 | Prob all OK = 0.1009589
n = 5 | m = 3 | p = 1 | Prob all OK = 0.5671273
n = 5 | m = 3 | p = 2 | Prob all OK = 0.9871377
n = 5 | m = 4 | p = 0 | Prob all OK = 0.5919764
n = 5 | m = 4 | p = 1 | Prob all OK = 0.9938288
n = 5 | m = 5 | p = 0 | Prob all OK = 1
If "average" is no solution because the database lacks of words, I'd say: extend the database :)
Another idea could be to 'weight' the results, to get an adjusted average, for example:
100% = 1.00x weight
90% = 0.95x weight
80% = 0.90x weight
...
0% = 0.50x weight
so for your example you would get:
(100*1 + 90*0.95 + 0*0.5) / (100*1 + 100*0.95 + 100*0.5) = 0.75714285714
=> 75.7%
a regular average would be 63.3%
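In code (the weight mapping is the linear one above, 0% -> 0.50x through 100% -> 1.00x):

def adjusted_average(scores):
    # a score s in 0..100 gets weight 0.5 + s/200
    weights = [0.5 + s / 200.0 for s in scores]
    return sum(s * w for s, w in zip(scores, weights)) / (100.0 * sum(weights))

print(adjusted_average([100, 90, 0]))  # 0.757..., vs a plain average of 0.633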
Since the order of words is not important in your description, the independent variable is the fraction of valid words. If the fraction is a perfect 1, i.e. all words are found to be perfect matches with the DB, then you are perfectly sure to have the all-valid outcome. If it's zero, i.e. all words are perfect misses in the DB, then you are perfectly sure to have the all-invalid outcome. If it's 0.5, then this must be the unlikely mixed outcome, because neither of the other two is possible.
You say the mixed outcome is unlikely while the two extremes are more so. You are after the likelihood of the all-valid outcome.
Let the fraction of valid words (sum of "surenesses" of matches / number of words) be f, and hence the desired likelihood of the all-valid outcome be L(f). By the discussion so far, we know L(1) = 1 and L(f) = 0 for 0 <= f <= 1/2.
To honor your information that the mixed outcome is less likely than the all-valid (and the all-invalid) outcome, the shape of L must rise monotonically and quickly from 1/2 toward 1 and reach 1 at f=1.
Since this is heuristic, we might pick any reasonable function with this character. If we're clever it will have a parameter to control the steepness of the step and perhaps another for its location. This lets us tweak what "less likely" means for the middle case.
One such function is this for 1/2 <= f <= 1:
L(f) = 5 + f * (-24 + (36 - 16 * f) * f) + (-4 + f * (16 + f * (-20 + 8 * f))) * s
and zero for 0 <= f < 1/2. Although it's hairy-looking, it's the simplest polynomial that passes through (1/2, 0) and (1, 1) with slope 0 at f = 1 and slope 2s at f = 1/2.
You can set 0 <= s <= 3 to change the step shape; s = 3 gives the steepest step that stays monotone, which is probably what you want.
If you set s > 3, it shoots above 1 before settling back down, which is not what we want.
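Here is the polynomial as a small Python function, in case you want to experiment with the step-shape parameter s:

def L(f, s=3.0):
    if f < 0.5:
        return 0.0
    return (5 + f * (-24 + (36 - 16 * f) * f)
            + (-4 + f * (16 + f * (-20 + 8 * f))) * s)

for f in [0.5, 0.6, 0.75, 0.9, 1.0]:
    print(f, round(L(f, s=3.0), 3))  # rises monotonically from 0 to 1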
Of course there are infinitely many other possibilities. If this one doesn't work, comment and we'll look for another.
Averaging is, of course, rubbish. If the individual word probabilities were accurate, the probability that all words are correct is simply the product, not the average. If you have an estimate for the uncertainties in your individual probabilities, you could work out their product marginalized over all the individual probabilities.
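For the xxx yyy zzz example:

from math import prod

scores = [1.0, 0.9, 0.0]
print(prod(scores))               # 0.0: probability that all words are valid
print(sum(scores) / len(scores))  # 0.633...: the average, which says little here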