Statistical test for parameter testing in algorithms

I have an algorithm that uses four operators, each with two or three possible values. I want to test the influence of each value on the performance of the algorithm. In total, 36 variants can be derived by varying the values of the operators.
I ran the 36 variants on 3000 problem instances and calculated the average deviation from the best-known solutions.
I grouped the results by the value of each parameter, similar to a Taguchi design of experiments (the operators are the parameters and the values are the levels).
My question is: what is the most suitable statistical test to determine whether one value of an operator has a statistically significant impact compared to the operator's other values?

Related

Average of BLEU scores on two subsets of data is not the same as overall score

To evaluate a sequence generation model, I'm using BLEU-1 through BLEU-4. I split the test set into two subsets and calculated the scores on each subset separately, as well as on the whole test set. Surprisingly, the result on the whole test set is not the weighted average of the results on the two subsets. For example, consider the BLEU-4 scores I get on a set and two subsets of it:
set1, 866 elements: 0.0001529267908
set2, 1010 elements: 0.1625387989
<set1,set2>, 1876 elements: 0.3063472152
How should I aggregate the results on the two subsets to get the overall result?
Note: I know that all the elements in set1 are shorter than 4 tokens, which is why BLEU-4 is almost zero there.
The BLEU score is by definition non-linear. As defined in the original paper by Papineni et al., BLEU = BP * exp(sum_n w_n * log p_n): it is a product of two terms, the brevity penalty (BP) and a weighted geometric mean of the n-gram precisions p_n. Neither the brevity penalty nor the geometric mean is a linear operation with respect to averaging.
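A quick way to see this non-linearity in practice is to score two toy subsets and their union; the sketch below assumes NLTK's corpus_bleu (the question does not say which implementation is used) and made-up token lists.

# Minimal sketch with toy data: corpus-level BLEU on the union of two subsets is
# generally not the weighted average of the per-subset scores, because the brevity
# penalty and the n-gram counts are pooled before the geometric mean is taken.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

smooth = SmoothingFunction().method1  # avoids zero scores on tiny examples

# References are lists of reference token lists; hypotheses are token lists.
refs_a = [[["the", "cat", "sat", "on", "the", "mat"]], [["there", "is", "a", "dog"]]]
hyps_a = [["the", "cat", "sat", "on", "mat"], ["there", "is", "a", "dog"]]
refs_b = [[["a", "short", "one"]], [["another", "short", "reference", "here"]]]
hyps_b = [["a", "short", "one"], ["another", "reference", "here"]]

bleu_a = corpus_bleu(refs_a, hyps_a, smoothing_function=smooth)
bleu_b = corpus_bleu(refs_b, hyps_b, smoothing_function=smooth)
bleu_all = corpus_bleu(refs_a + refs_b, hyps_a + hyps_b, smoothing_function=smooth)

# The length-weighted average of the subset scores does not equal the overall score.
weighted = (len(hyps_a) * bleu_a + len(hyps_b) * bleu_b) / (len(hyps_a) + len(hyps_b))
print(bleu_a, bleu_b, bleu_all, weighted)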
Regarding what you should report: since the two test sets look fundamentally different, the best option is to report two separate numbers.
I don't know what your task is, but given that the desired outputs are very short, BLEU might not be the best choice for evaluation. You might consider something edit-based (e.g., TER), or even plain accuracy might do a good job.

How to form precision-recall curve using one test dataset for my algorithm?

I'm working on a knowledge graph, more precisely in the natural language processing field. To evaluate the components of my algorithm, it is necessary to be able to classify the good and the poor candidates. For this purpose, we manually classified pairs in a dataset.
My system returns the relevant pairs according to the implementation logic. Now I'm able to calculate:
Precision = X
Recall = Y
To establish a complete curve I need the rest of the (X, Y) points. What should I do?
build another dataset for testing?
split my dataset?
or use some other solution?
Neither of your two proposed methods. In short, precision-recall and ROC curves are designed for classifiers with probabilistic output. That is, instead of simply producing a 0 or 1 (in the case of binary classification), you need a classifier that can provide a probability in the [0, 1] range. In sklearn the function for this is precision_recall_curve; note how its 2nd parameter is called probas_pred.
To turn these probabilities into concrete class predictions, you can then set a threshold, say at 0.5. Setting such a threshold is problematic, however, since you can trade off precision against recall by varying the threshold, and an arbitrary choice can give a false impression of a classifier's performance. To circumvent this, threshold-independent measures like the area under the ROC or precision-recall curve are used. These sweep thresholds over a range, say 0.1, 0.2, 0.3, ..., 0.9, turn the probabilities into binary classes, and compute precision and recall at each threshold.
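As a minimal sketch of that workflow in scikit-learn (the labels and scores below are made-up placeholders, not data from the question):

# precision_recall_curve takes true labels and scores/probabilities (probas_pred),
# not hard 0/1 predictions; average_precision_score summarizes the whole curve.
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])                         # manual labels for the pairs
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.65, 0.3, 0.7, 0.05])  # classifier probabilities

precision, recall, thresholds = precision_recall_curve(y_true, scores)
ap = average_precision_score(y_true, scores)  # threshold-independent summary of the curve
print(ap)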

Which algorithm or statistical method will be best?

I have a table of 21 students (A1…A21) and their 25 characteristics (Table 1), and I have another matrix (Table 2) which shows whether a student likes another student or not (0 means like and 100 means dislike).
How can I find the smallest number of characteristics that gives me distances in space similar to the likeability matrix?
For example:
If we pick 5 dimensions with characteristics C1, C3, C4, C5, C10, then the points A1…A21, when plotted along these characteristics, will have distances proportional to the likeability matrix.
For example, if A2 and A3 have a small distance between them in that 5-D characteristic space, then they will have a correspondingly small distance/value in the likeability matrix.
Table 1
Table 2
You can make this look like a well-known statistical problem, but you have made assumptions (that similar students like each other), I will make further assumptions, and most of the solutions to the resulting statistical problem are not very respectable, so you should take the results with a pinch of salt.
With 21 students, you have 21*20/2 = 210 pairs of students. Treat each pair as a separate observation. You have a likeability value for that pair. For each pair compute, for each characteristic, the absolute value of the difference between the two students' values. This gives you a vector of 25 elements for each observation. You will now try to predict the 210 likeabilities given the 210 vectors of 25 absolute differences.
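A sketch of that construction in Python (NumPy is an assumption, and the random characteristics and likeability arrays are stand-ins for Table 1 and Table 2):

# Build the 210 pairwise observations: for each pair of students, the predictor is the
# vector of absolute differences of their 25 characteristics, and the response is the
# likeability value for that pair.  The data here is random placeholder data.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
characteristics = rng.integers(0, 10, size=(21, 25))  # stand-in for Table 1
likeability = rng.integers(0, 101, size=(21, 21))     # stand-in for Table 2

pairs = list(combinations(range(21), 2))               # 21*20/2 = 210 pairs
X = np.array([np.abs(characteristics[i] - characteristics[j]) for i, j in pairs])  # shape (210, 25)
y = np.array([likeability[i, j] for i, j in pairs])    # shape (210,)
print(X.shape, y.shape)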
Procedures for this go under the names of all-subsets regression and stepwise regression. See https://www.r-bloggers.com/variable-selection-using-automatic-methods/. One way to compute these is to use the free open source statistical package R from https://www.r-project.org/.
For each possible selection of variables you can use linear regression to predict likeability from the vector of absolute differences. From that linear regression you get a measure of how good the prediction is, and so whether that particular selection of variables was any good or not. All-subsets regression uses a variation on branch and bound to work out, for each N, the set of N variables that predicts best. Stepwise regression starts from a possibly incomplete selection of variables and performs a sort of hill-climb, adding or removing one variable at each stage, trying all of the variables and choosing the change that gives the best prediction. Typically you start with no variables and add one variable at a time, or start with all variables and remove one variable at a time. Stepwise selection isn't guaranteed to find the absolute best selection of variables that all-subsets regression will find, but all-subsets regression can be very expensive.
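As a rough illustration of the all-subsets idea (this is not the R route the answer recommends; scikit-learn and random data are assumptions), an exhaustive search over small subsets scored by cross-validated R^2 looks like this:

# Toy all-subsets search: for each subset of up to 2 predictors, fit a linear regression
# and keep the subset with the best cross-validated R^2.  X and y are random stand-ins
# for the (210, 25) difference matrix and the 210 likeabilities built earlier.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(210, 25))
y = rng.normal(size=210)

best_score, best_subset = -np.inf, None
for k in range(1, 3):                        # subset sizes 1 and 2; larger sizes grow quickly
    for subset in combinations(range(25), k):
        cols = list(subset)
        score = cross_val_score(LinearRegression(), X[:, cols], y, cv=5, scoring="r2").mean()
        if score > best_score:
            best_score, best_subset = score, subset
print(best_subset, best_score)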
From this you will get a best selection of variables (probably one best selection for each number of variables) and you may get some indication of statistical significance. You have broken so many rules about multiple testing and independence (inflating 21 observations to 210) that you shouldn't take any statistical significance seriously. If you want some idea of whether you are looking at real information or prettied-up random noise, automate the procedure and see what it looks like on fake data with no underlying effect at all, and perhaps on fake data constructed to contain an underlying effect that you know about because you built it in. See also https://en.wikipedia.org/wiki/Bootstrapping_(statistics) and https://en.wikipedia.org/wiki/Resampling_(statistics)#Permutation_tests
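One cheap version of that fake-data check (my illustration, not part of the answer) is a permutation-style comparison: shuffle the likeability values so any real relationship is destroyed, rerun whatever selection procedure you automated, and see how often the shuffled data scores as well as the real data. The best_subset_score function below is a hypothetical placeholder for that automated procedure.

# Permutation-style sanity check: shuffling y breaks any real X-y relationship, so the
# "best" scores obtained on shuffled data show what pure noise can achieve.
import numpy as np

def null_distribution(best_subset_score, X, y, n_permutations=100, seed=0):
    # best_subset_score(X, y) is assumed to run your selection search and return its best score.
    rng = np.random.default_rng(seed)
    return np.array([best_subset_score(X, rng.permutation(y)) for _ in range(n_permutations)])

# Usage sketch:
# real_score = best_subset_score(X, y)
# null = null_distribution(best_subset_score, X, y)
# p_value = (null >= real_score).mean()   # fraction of shuffles doing at least as well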

"Covering" the space of all possible histogram shapes

There is a very expensive computation I must make frequently.
The computation takes a small array of numbers (with about 20 entries) that sums to 1 (i.e. the histogram) and outputs something that I can store pretty easily.
I have 2 things going for me:
I can accept approximate answers
The "answers" change slowly. For example: [.1 .1 .8 0] and [.1
.1 .75 .05] will yield similar results.
Consequently, I want to build a look-up table of answers off-line. Then, when the system is running, I can look up an approximate answer based on the "shape" of the input histogram.
To be precise, I plan to look up the precomputed answer that corresponds to the histogram with the minimum Earth Mover's Distance to the actual input histogram.
I can only afford to store about 80 to 100 precomputed (histogram, computation result) pairs in my look-up table.
So, how do I "spread out" my precomputed histograms so that, no matter what the input histogram is, I'll always have a precomputed result that is "close"?
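As a sketch of the lookup step only (SciPy and the random exemplar table are assumptions, not something stated in the question), the Earth Mover's Distance between two 1-D histograms over the same bins can be computed with scipy.stats.wasserstein_distance:

# Nearest-exemplar lookup by 1-D Earth Mover's Distance.  The precomputed table of
# exemplar histograms and their stored results is random placeholder data here.
import numpy as np
from scipy.stats import wasserstein_distance

n_bins = 20
bin_positions = np.arange(n_bins)

rng = np.random.default_rng(0)
exemplars = rng.dirichlet(np.ones(n_bins), size=100)   # ~100 stored histograms, each sums to 1
stored_results = [f"result_{i}" for i in range(100)]   # stand-ins for the expensive outputs

def lookup(query_hist):
    # EMD between 1-D histograms: Wasserstein distance with bin indices as positions
    # and the histogram entries as weights.
    dists = [wasserstein_distance(bin_positions, bin_positions,
                                  u_weights=query_hist, v_weights=e) for e in exemplars]
    return stored_results[int(np.argmin(dists))]

print(lookup(rng.dirichlet(np.ones(n_bins))))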
Finding N points in M-dimensional space that form a best spread-out set is more or less equivalent to hypersphere packing (1, 2), and in general answers are not known for M > 10. While a fair amount of research has been done on faster methods for hypersphere packings or approximations, it is still regarded as a hard problem.
It probably would be better to apply a technique like principal component analysis or factor analysis to as large a set of histograms as you can conveniently generate. The results of either analysis will be a set of M numbers such that linear combinations of histogram data elements weighted by those numbers will predict some objective function. That function could be the “something that you can store pretty easily” numbers, or could be case numbers. Also consider developing and training a neural net or using other predictive modeling techniques to predict the objective function.
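A minimal sketch of that idea (scikit-learn is an assumption, the random histograms stand in for data you would generate, and the objective is a single stored number here):

# Reduce 20-bin histograms to a few principal components, then fit a simple regression
# from those components to the expensive computation's output.  Both the histograms and
# the target values are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
histograms = rng.dirichlet(np.ones(20), size=5000)   # as many as you can afford to generate
targets = histograms @ rng.normal(size=20)           # stand-in for the expensive output

pca = PCA(n_components=5).fit(histograms)
model = LinearRegression().fit(pca.transform(histograms), targets)

new_hist = rng.dirichlet(np.ones(20))
approx = model.predict(pca.transform(new_hist.reshape(1, -1)))[0]
print(approx)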
Building on jwpat7's answer, I would apply k-means clustering to a huge set of randomly generated (and hopefully representative) histograms. This would ensure that your space is spanned with whatever number of exemplars (precomputed results) you can support, with roughly equal weighting for each cluster.
The trick, of course, will be generating representative data to cluster in the first place. If you can recompute from time to time, you can recluster based on the actual data in the system so that your clusters might get better over time.
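A sketch of that clustering step (scikit-learn is an assumption; the random histograms stand in for the representative data you would generate):

# Cluster a large set of (hopefully representative) histograms and keep one exemplar per
# cluster; at run time, map an input histogram to its nearest exemplar.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
histograms = rng.dirichlet(np.ones(20), size=20000)   # placeholder for representative data

kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(histograms)
exemplars = kmeans.cluster_centers_                   # the ~100 histograms to precompute results for

def nearest_exemplar_index(query_hist):
    return int(kmeans.predict(query_hist.reshape(1, -1))[0])

print(nearest_exemplar_index(rng.dirichlet(np.ones(20))))

Note that k-means and its predict method use Euclidean distance; if the EMD-based lookup from the question matters, the centroids can still serve as the stored exemplars while the lookup itself uses EMD.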
I second jwpat7's answer, but my very naive approach was to consider the count of items in each histogram bin as a y value, to consider the x values as just 0..1 in 20 steps, and then to obtain parameters a,b,c that describe x vs y as a cubic function.
To get a "covering" of the histograms I just iterated through "possible" values for each parameter.
e.g. to get 27 histograms to cover the "shape space" of my cubic histogram model I iterated the parameters through -1 .. 1, choosing 3 values linearly spaced.
Now, you could change the histogram model to be quartic if you think your data will often be represented that way, or whatever model you think is most descriptive, as well as generate however many histograms to cover. I used 27 because three partitions per parameter for three parameters is 3*3*3=27.
For a more comprehensive covering, like 100, you would have to choose your ranges for each parameter more carefully. 100**(1/3) isn't an integer, so the simple num_covers**(1/num_params) solution wouldn't work, but for 3 parameters 4*5*5 would.
Since the actual values of the parameters could vary greatly and still give the same shape, it would probably be best to store ratios of them for comparison instead, e.g., for my 3 parameters, b/a and b/c.
Here is an 81-histogram "covering" using a quartic model, again with parameters chosen from linspace(-1, 1, 3) (figure not shown).
edit: Since you said your histograms were described by arrays of ~20 elements, I figured fitting the parameters would be very fast.
edit 2: On second thought, I think using a constant term in the model is pointless; all that matters is the shape.
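A rough sketch of that parameter sweep (NumPy is an assumption, and shifting/normalizing each curve so it becomes a valid histogram summing to 1 is my addition, not something the answer specifies):

# Generate a small "covering" of histogram shapes from a cubic model y = a*x^3 + b*x^2 + c*x
# by sweeping each parameter over 3 linearly spaced values (3*3*3 = 27 candidate shapes).
import numpy as np
from itertools import product

x = np.linspace(0, 1, 20)
param_values = np.linspace(-1, 1, 3)

covering = []
for a, b, c in product(param_values, repeat=3):
    y = a * x**3 + b * x**2 + c * x
    y = y - y.min()               # shift so every bin is non-negative
    if y.sum() == 0:              # the a = b = c = 0 case has no shape at all
        continue
    covering.append(y / y.sum())  # normalize so the histogram sums to 1

print(len(covering), covering[0].round(3))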

What does a Bayesian Classifier score represent?

I'm using the Ruby classifier gem, whose classifications method returns the scores for a given string classified against the trained model.
Is the score a percentage? If so, is the maximum difference 100 points?
It's the logarithm of a probability. With a large training set, the actual probabilities are very small numbers, so the logarithms are easier to compare. Theoretically, scores will range from infinitesimally close to zero down to negative infinity. 10**score * 100.0 will give you the actual probability as a percentage, which indeed has a maximum difference of 100.
Actually, to calculate the probability from a typical naive Bayes classifier whose score is a log-odds with base b, it is b^score / (1 + b^score). This is the inverse logit (http://en.wikipedia.org/wiki/Logit). However, given the independence assumptions of the NBC, these scores tend to be too high or too low, and probabilities calculated this way will accumulate at the boundaries. It is better to compute the scores on a holdout set and do a logistic regression of accuracy (1 or 0) on score to get a better feel for the relationship between score and probability.
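A sketch of that calibration step (Python with scikit-learn is an assumption, since the question is about a Ruby gem; the holdout scores and correctness labels are random placeholders):

# Calibrate raw classifier scores on a holdout set: fit a logistic regression of
# "was the prediction correct" (0/1) on the score, then use it to map new scores to
# calibrated probabilities.  The holdout data below is a random stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
holdout_scores = rng.normal(size=(500, 1))    # raw scores from the classifier on a holdout set
holdout_correct = (rng.random(500) < 1 / (1 + np.exp(-holdout_scores[:, 0]))).astype(int)

calibrator = LogisticRegression().fit(holdout_scores, holdout_correct)

new_scores = np.array([[-2.0], [0.0], [3.0]])
calibrated = calibrator.predict_proba(new_scores)[:, 1]   # probability of being correct
print(calibrated)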
From a Jason Rennie paper:
2.7 Naive Bayes Outputs Are Often Overconfident
Text databases frequently have 10,000 to 100,000 distinct vocabulary words; documents often contain 100 or more terms. Hence, there is great opportunity for duplication.
To get a sense of how much duplication there is, we trained a MAP Naive Bayes model with 80% of the 20 Newsgroups documents. We produced p(c|d;D) (posterior) values on the remaining 20% of the data and show statistics on max_c p(c|d;D) in table 2.3. The values are highly overconfident. 60% of the test documents are assigned a posterior of 1 when rounded to 9 decimal digits. Unlike logistic regression, Naive Bayes is not optimized to produce reasonable probability values. Logistic regression performs joint optimization of the linear coefficients, converging to the appropriate probability values with sufficient training data. Naive Bayes optimizes the coefficients one-by-one. It produces realistic outputs only when the independence assumption holds true. When the features include significant duplicate information (as is usually the case with text), the posteriors provided by Naive Bayes are highly overconfident.
