Differences in FP and FN rates between two algorithms - performance

I am conducting binary classification using logistic regression, with and without applying PCA first. Applying PCA before logistic regression gives higher accuracy and fewer FNs than logistic regression alone. I would like to find out why this is happening, specifically why PCA produces fewer FNs. I have read that cost-sensitivity analysis could help explain this, but I am not sure if that is correct. Any suggestions?

There is no need for any fancy analysis to explain this behavior.
PCA is used just to "clean" the data by keeping only the directions that carry most of its variance. Let me explain this concept with an example, and then I will come back to your question.
In general, in any ML problem, the available samples are never numerous enough to cover all the possible variety of the sample space. You can never have a dataset with all the possible human faces, with all the possible expressions, and so on.
So, instead of using all the available features, you engineer them (the pixels, in this example) in a way that gives you more meaningful, higher-level features. You can reduce the resolution of the pictures, as an easy example; you will lose the information in the picture backgrounds, but your model will focus better on the most important part of the picture, i.e. the faces.
When you deal with tabular data, a technique similar to lowering the resolution is cutting away part of the original feature space, and that is what PCA does: it keeps the most important components of the features, the "principal components", and drops the less important ones.
So, the model trained with PCA gives better results because, by cutting away part of the feature space, it focuses on the most informative directions in your samples, and thereby gains robustness against overfitting.
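As a rough illustration, here is a minimal sketch comparing the two setups with scikit-learn. The synthetic dataset, the train/test split, and the number of components are placeholders, not your actual configuration:

```python
# Minimal sketch: logistic regression with and without PCA, assuming scikit-learn.
# The synthetic data and n_components value are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=50, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

plain = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
with_pca = make_pipeline(StandardScaler(), PCA(n_components=10), LogisticRegression(max_iter=1000))

for name, model in [("logistic regression", plain), ("PCA + logistic regression", with_pca)]:
    model.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    print(f"{name}: accuracy={(tp + tn) / len(y_test):.3f}, FP={fp}, FN={fn}")
```

Comparing the two confusion matrices directly, as above, shows whether the FN reduction comes with a FP trade-off or not.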
cheers

Related

XGBoost/LightGBM overfitting despite no indication in cross-validation test scores?

We aim to identify predictors that may influence the risk of a relatively rare outcome.
We are using a semi-large clinical dataset, with data on nearly 200,000 patients.
The outcome of interest is binary (i.e. yes/no), and quite rare (~ 5% of the patients).
We have a large set of nearly 1,200 mostly dichotomized possible predictors.
Our objective is not to create a prediction model, but rather to use the boosted-trees algorithm as a tool for variable selection and for examining higher-order interactions (i.e. to identify which variables, or combinations of variables, may have some influence on the outcome), so we can target these predictors more specifically in subsequent studies. Given the paucity of etiological information on the outcome, it is somewhat possible that none of the candidate predictors we are considering has any influence on the risk of developing the condition, so if we were aiming to develop a prediction model it would likely have been a rather bad one. For this work, we use the R implementation of XGBoost/LightGBM.
We have been having difficulties tuning the models. Specifically, when running cross-validation to choose the optimal number of iterations (nrounds), the CV test score continues to improve even at very high values (for example, see the figure below for nrounds=600,000 from xgboost). This is observed even when increasing the learning rate (eta), or when adding regularization parameters (e.g. max_delta_step, lambda, alpha, gamma, even at high values for these).
As expected, the CV test score is always lower than the train score, but it continues to improve without ever showing a clear sign of overfitting. This is true regardless of the evaluation metric used (the example below is for logloss, but the same is observed for auc/aucpr/error rate, etc.). Relatedly, the same phenomenon is observed when using a grid search to find the optimal value of tree depth (max_depth): CV test scores continue to improve regardless of the number of iterations, even at depth values exceeding 100, without showing any sign of overfitting.
Note that owing to the rare outcome, we use a stratified CV approach. Moreover, the same is observed when a train/test split is used instead of CV.
Are there situations in which overfitting happens despite continuous improvement in the CV test (or test-split) scores? If so, why is that, and how would one choose the optimal values for the hyperparameters?
Relatedly, again, the idea is not to create a prediction model (since it would be a rather bad one, given that we don't know much about the outcome), but to look for a signal in the data that may help identify a set of predictors for further exploration. If boosted trees are not the optimal method for this, are there others that come to mind? Again, part of the reason we chose boosted trees was to enable the identification of higher-order (i.e. more than two-way) interactions, which cannot be easily assessed using more conventional methods (including lasso/elastic net, etc.).
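For concreteness, a rough sketch of the stratified CV with early stopping described above (shown here with the Python xgboost package rather than the R interface used in the question; the data and parameter values are placeholders):

```python
# Sketch: stratified CV with early stopping instead of a fixed, very large nrounds.
import numpy as np
import xgboost as xgb

# Placeholder data standing in for the ~200,000-patient matrix with a ~5% outcome rate.
X = np.random.rand(1000, 50)
y = (np.random.rand(1000) < 0.05).astype(int)

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "binary:logistic",
    "eval_metric": "logloss",
    "eta": 0.05,
    "max_depth": 4,
    # one common way to account for the rare positive class
    "scale_pos_weight": (y == 0).sum() / max((y == 1).sum(), 1),
}

# Stop when the CV test metric has not improved for 50 rounds, rather than
# fixing nrounds at a huge value in advance.
cv = xgb.cv(
    params,
    dtrain,
    num_boost_round=5000,
    nfold=5,
    stratified=True,
    early_stopping_rounds=50,
    seed=42,
)
print("rounds kept by early stopping:", len(cv))
print(cv.tail())
```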
Welcome to Stack Overflow!
In the absence of some code and representative data, it is hard to offer more than general suggestions.
Your descriptive statistics step may give some pointers to a starting model.
What does existing theory (if it exists!) suggest about the cause of the medical condition?
Is there a male/female difference or old/young age difference that could help get your foot in the door?
Your medical data has similarities to the fraud detection problem where one is trying to predict rare events usually much rarer than your cases.
It may pay you to check out the use of xgboost/lightgbm in the fraud detection literature.

Gensim Word2vec model parameter tuning

I am working on a Word2Vec model. Is there any way to get the ideal value for one of its parameters, i.e. iter, similar to the way we use the elbow-curve plot in K-Means to get the value of K? Or is there any other way to tune the parameters of this model?
There's no one ideal set of parameters for a word2vec session – it depends on your intended usage of the word-vectors.
For example, some research has suggested that using a larger window tends to position the final vectors in a way that's more sensitive to topical/domain similarity, while a smaller window value shifts the word-neighborhoods to be more syntactic/functional drop-in replacements for each other. So depending on your particular project goals, you'd want a different value here.
(Similarly, because the original word2vec paper evaluated models, & tuned model meta-parameters, based on the usefulness of the word-vectors to solve a set of English-language analogy problems, many have often tuned their models to do well on the same analogy task. But I've seen cases where the model that scores best on those analogies does worse when contributing to downstream classification tasks.)
So what you really want is a project-specific way to score a set of word-vectors, well-matched to your goals. Then, you run many alternate word2vec training sessions, and pick the parameters that do best on your score.
The case of iter/epochs is special, in that by the logic of the underlying stochastic-gradient-descent optimization method, you'd ideally want to use as many training-epochs as necessary for the per-epoch running 'loss' to stop improving. At that point, the model is plausibly as good as it can be – 'converged' – given its inherent number of free-parameters and structure. (Any further internal adjustments that improve it for some examples worsen it for others, and vice-versa.)
So potentially, you'd watch this 'loss', and choose a number of training-iterations that's just enough to show the 'loss' stagnating (jittering up-and-down in a tight window) for a few passes. However, the loss-reporting in gensim isn't yet quite optimal – see project bug #2617 – and many word2vec implementations, including gensim and going back to the original word2vec.c code released by Google researchers, just let you set a fixed count of training iterations, rather than implement any loss-sensitive stopping rules.
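For example, a rough sketch of watching the per-epoch loss in gensim, subject to the loss-reporting caveats above. This assumes gensim 4.x naming (vector_size/epochs rather than the older size/iter), and the toy corpus is a placeholder:

```python
# Sketch: log the loss gensim reports at the end of each epoch (gensim 4.x assumed).
from gensim.models import Word2Vec
from gensim.models.callbacks import CallbackAny2Vec

class EpochLossLogger(CallbackAny2Vec):
    """Print the change in gensim's reported training loss after every epoch."""
    def __init__(self):
        self.epoch = 0
        self.last_cumulative = 0.0

    def on_epoch_end(self, model):
        cumulative = model.get_latest_training_loss()  # cumulative over the run
        print(f"epoch {self.epoch}: loss delta {cumulative - self.last_cumulative:.1f}")
        self.last_cumulative = cumulative
        self.epoch += 1

sentences = [["this", "is", "a", "toy", "corpus"], ["replace", "with", "real", "text"]]
model = Word2Vec(
    sentences,
    vector_size=100,
    window=5,
    min_count=1,
    epochs=20,
    compute_loss=True,          # required for get_latest_training_loss()
    callbacks=[EpochLossLogger()],
)
```

Once the per-epoch delta stops shrinking and just jitters, adding more epochs is unlikely to help.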

Remove noisy and redundant features

I have extracted features from a video sequence based on facial markers, as the means and standard deviations of those markers over the video sequence. They need to be classified into four different classes based on those markers.
In all, I have a feature set of around 260 features. How should I determine which features in my set are noisy and redundant? I read about this in some research papers, and some of them used the plus-l take-away-r algorithm, which I found quite appropriate, but such algorithms always rate one feature against another and say whether it is good or bad compared to it.
How do I rate my features as good or bad? What criteria are generally used for that?
I researched this for a couple of days but found nothing clear-cut and useful. I would be grateful for any help, thanks.
Think of your 260 features as a basis for a 260-dimensional space. However, your basis vectors are not orthogonal to each other, so they contain a lot of redundant information. You'd like to transform them into a set of vectors that are orthogonal to each other, thus reducing the number of dimensions without losing (much) information.
This is what Principal component analysis does.
Linear discriminant analysis may also be of interest to you.
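A minimal sketch of both suggestions using scikit-learn (the 260-feature matrix and the four class labels are random placeholders, and the number of components kept is a choice you would tune):

```python
# Sketch: reduce the 260 features with PCA (unsupervised) or LDA (supervised).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 260))      # placeholder for the real feature matrix
y = rng.integers(0, 4, size=500)     # placeholder for the four class labels

# PCA: keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95).fit(X)
print("PCA kept", pca.transform(X).shape[1], "components")

# LDA: with 4 classes, at most 3 discriminant components are available.
lda = LinearDiscriminantAnalysis(n_components=3).fit(X, y)
print("LDA projection shape:", lda.transform(X).shape)
```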
You can use PCA, or you can train some classifiers and then loop over all your features, adding a large value to each feature in turn and testing whether this perturbation changes the precision of the classifier. If it does not, you can remove that feature. After removing all the redundant features, retrain your classifiers!
It's a good idea to train not just one classifier but several of them, and then make your prediction based on votes; you can use the MODE function in MATLAB to do this!
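The perturbation test described above is close in spirit to permutation importance, so here is a minimal scikit-learn sketch of that idea (shuffling each feature rather than adding a constant, on placeholder data):

```python
# Sketch: score each feature by how much shuffling (perturbing) it hurts accuracy,
# using permutation importance as a stand-in for the "add a big value" test above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)

# Features whose perturbation barely changes the score are candidates for removal.
candidates = np.where(result.importances_mean < 0.001)[0]
print("candidate features to drop:", candidates)
```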
Use the classification rate to determine how good a subset of features is. You have 260 features and therefore 2^260 possible subsets, which is far too many; searching this space exhaustively is very difficult. Thus it's better to remove some features with a filter method first (for example factor analysis, t-test, Fisher score, etc.) and then use your search method to find the best subset of features.
The plus-l take-away-r algorithm (or another search algorithm) generates various subsets and rates them (at this stage, use the classification rate), and finally identifies which subset is best.
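A sketch of this filter-then-search idea with scikit-learn. Sequential forward selection is used here as a readily available relative of plus-l take-away-r (which alternates forward and backward steps); the data and the feature counts are placeholders:

```python
# Sketch: filter features first (univariate ANOVA F-test), then run a wrapper search
# (sequential forward selection) scored by cross-validated classification rate.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=260, n_informative=15,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Filter step: keep the 60 features with the highest F-scores.
filter_step = SelectKBest(f_classif, k=60)

# Search step: greedily add features one at a time, scored by 5-fold CV accuracy.
search_step = SequentialFeatureSelector(
    KNeighborsClassifier(), n_features_to_select=20, direction="forward", cv=5
)

pipeline = make_pipeline(filter_step, search_step, KNeighborsClassifier())
pipeline.fit(X, y)

selected = pipeline.named_steps["sequentialfeatureselector"].get_support(indices=True)
print("indices (within the filtered set) of the selected features:", selected)
```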

How can I visualize changes in the quality of a large code base?

One of the things I've been thinking about a lot, off and on, is how we can use metrics of some kind to measure change: are we going backwards or not? This is in the context of a large, legacy code base which we are improving. Most of the code is C++ with a C heritage. Some new functions and the GUI are written in C#.
To start with, we could at least be checking if the simple complexity level was changing over time in the code. The difficulty is in having a representation – we can maybe do a 3D surface where a 2D map represents the code and we have a heat-map of color representing complexity with the 3D surface bulging in and out to show change.
Once you can generate some metrics as numbers, there are a ton of mathematical systems around to take care of stuff like this.
Over time, I'd like to have more sophisticated numbers in there but the same visualisation techniques used to represent change.
I like the idea in Crap4j of focusing on the ratio between complexity and number of unit tests covering that code.
I'd also like to include Uncle Bob's SOLID metrics and some of the Chidamber and Kemerer OO metrics. The hard part is finding tools to generate these for C++. The only option seems to be Krakatau Essential Metrics (I have no objection to paying for tools). My desire to use the CK metrics comes partly from the books Object-Oriented Metrics: Measures of Complexity by Henderson-Sellers and the earlier Object-Oriented Software Metrics.
If we start using a number of these metrics we could end up with ten or so numbers that are varying across time. I'm fairly ignorant of statistics but it seems it could be interesting to track a bunch of such metrics and then pay attention to which ones tend to vary.
Note that a related question is about measuring code quality across a large code base. I'm more interested in measuring the change.
I'd consider using a Kiviat diagram to represent multiple software-metric dimensions evolving over time. These diagrams represent multiple data points in a concave hull around a centerpoint. Visual inspection will show where a particular metric is going up or down, and one ought to be able to compute an overall ratio of area biased by metric value using some heuristic area computation.
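A minimal matplotlib sketch of such a Kiviat/radar view, with hypothetical normalized metric values for two snapshots:

```python
# Sketch: overlay two snapshots of several (normalized) code metrics on a radar chart.
import numpy as np
import matplotlib.pyplot as plt

metrics = ["cyclomatic", "coupling", "cohesion", "test ratio", "LOC/method"]
snapshot_a = [0.6, 0.4, 0.7, 0.5, 0.3]    # hypothetical values, release N
snapshot_b = [0.5, 0.45, 0.75, 0.6, 0.28]  # hypothetical values, release N+1

angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
angles += angles[:1]  # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in [("release N", snapshot_a), ("release N+1", snapshot_b)]:
    vals = values + values[:1]
    ax.plot(angles, vals, label=label)
    ax.fill(angles, vals, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(metrics)
ax.legend()
plt.show()
```

Comparing the enclosed areas of the two polygons gives the rough overall ratio mentioned above.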
You can also have a glance at the NDepend documentation about code metrics. Disclaimer: I am one of the developers of the tool NDepend.
With the Code Rule and Query over LINQ (CQLinq) facility, it is possible to query for code-metric evolution/trending across two different snapshots of the code base in time. For example, there is a default rule proposed: Avoid making complex methods even more complex.
Several metric trending rules are proposed, such as:
Avoid decreasing code coverage by tests of types
Types that used to be 100% covered but not anymore
Also, since you mentioned Crap4J, the C.R.A.P. metric can be written with CQLinq, and the query can easily be tweaked to see the trend in the C.R.A.P. metric.
Concerning the visualization of code metrics, NDepend lets you visualize code-metric values through an interactive treemap.
There is a fresh approach to this topic.
E.g.
https://github.com/databricks/koalas/pull/840#issuecomment-536949320
See https://softagram.com/docs/visualizing-code-changes/ for more info, or do an image search using the two keywords: softagram koalas
Disclaimer: I work for Softagram.

What are good algorithms for detecting abnormality?

Background
Here is the problem:
A black box outputs a new number each day.
Those numbers have been recorded for a period of time.
Detect when a new number from the black box falls outside the pattern of numbers established over the time period.
The numbers are integers, and the time period is a year.
Question
What algorithm will identify a pattern in the numbers?
The pattern might be simple, like always ascending or always descending, or the numbers might fall within a narrow range, and so forth.
Ideas
I have some ideas, but am uncertain as to the best approach, or what solutions already exist:
Machine learning algorithms?
Neural network?
Classify normal and abnormal numbers?
Statistical analysis?
Cluster your data.
If you don't know how many modes your data will have, use something like a Gaussian Mixture Model (GMM) along with a scoring function (e.g., the Bayesian Information Criterion (BIC)) so you can automatically detect the likely number of clusters in your data. I recommend this instead of k-means if you have no idea what value k is likely to be. Once you've constructed a GMM for your data for the past year, given a new data point x, you can calculate the probability that it was generated by any one of the clusters (each modeled by a Gaussian in the GMM). If your new data point has low probability of being generated by any one of your clusters, it is very likely a true outlier.
If this sounds a little too involved, you will be happy to know that the entire GMM + BIC procedure for automatic cluster identification has been implemented for you in the excellent MCLUST package for R. I have used it several times to great success for such problems.
Not only will it allow you to identify outliers, you will have the ability to put a p-value on a point being an outlier if you need this capability (or want it) at some point.
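If you are not in R, the same GMM + BIC procedure can be sketched with scikit-learn instead of MCLUST; the simulated history and the candidate component counts below are placeholders:

```python
# Sketch: pick the number of mixture components by BIC, then flag a new value as an
# outlier if its likelihood under the fitted GMM is very low.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
history = np.concatenate([rng.normal(10, 2, 200), rng.normal(40, 3, 165)]).reshape(-1, 1)

# Fit GMMs with 1..5 components and keep the one with the lowest BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(history) for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(history))
print("chosen number of components:", best.n_components)

# Score a new observation: low log-likelihood relative to the history => likely outlier.
new_value = np.array([[75.0]])
threshold = np.percentile(best.score_samples(history), 1)   # e.g. bottom 1% of history
print("outlier" if best.score_samples(new_value)[0] < threshold else "normal")
```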
You could try a line-fitting prediction using linear regression and see how it goes; it would be fairly easy to implement in your language of choice.
After you have fitted a line to your data, you could calculate the standard deviation of the points around the line (the residuals).
If the new point lies within the trend line ± that standard deviation, it should not be regarded as an abnormality.
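A quick sketch of that trend-line check with NumPy (simulated daily numbers; the width of the ±k·σ band is a choice):

```python
# Sketch: fit a line to the history, then flag a new number that falls outside the
# trend line plus/minus a few standard deviations of the residuals.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(365)
values = 100 + 0.5 * days + rng.normal(0, 5, size=365)   # simulated history

slope, intercept = np.polyfit(days, values, deg=1)
residual_std = np.std(values - (slope * days + intercept))

def is_abnormal(day, value, k=3.0):
    """True if the value lies more than k residual standard deviations off the trend."""
    expected = slope * day + intercept
    return abs(value - expected) > k * residual_std

print(is_abnormal(365, 100 + 0.5 * 365))        # on trend -> False
print(is_abnormal(365, 100 + 0.5 * 365 + 40))   # far off trend -> True
```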
PCA is another technique that comes to mind when dealing with this type of data.
You could also look into unsupervised learning. This is a family of machine learning techniques that can be used to detect differences in larger data sets.
Sounds like a fun problem! Good luck
There is little magic in all the techniques you mention. I believe you should first try to narrow down the typical abnormalities you may encounter; it helps keep things simple.
Then, you may want to compute derived quantities relevant to those features. For instance: "I want to detect numbers changing direction abruptly" => compute u_{n+1} - u_n, and expect it to have constant sign, or to fall within some range. You may want to keep this flexible and allow your code design to be extensible (the Strategy pattern may be worth looking at if you do OOP).
Then, when you have some derived quantities of interest, you do statistical analysis on them. For instance, for a derived quantity A, you assume it should follow some distribution P(a, b) (uniform([a, b]), or Beta(a, b), possibly something more complex), you put prior distributions on a and b, and you adjust them based on successive observations. Then, the posterior likelihood of the information provided by the last point added should give you some insight into whether it is normal or not. The relative entropy between the posterior and the prior at each step is a good thing to monitor too. Consult a book on Bayesian methods for more info.
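To make that last step concrete, here is a small sketch of the Bayesian updating idea for one derived quantity, assuming a Normal model with a known noise scale and a conjugate Normal prior on its mean; all the numbers are illustrative:

```python
# Sketch: conjugate Normal-Normal update for the mean of a derived quantity
# (e.g. the day-to-day difference u_{n+1} - u_n), then check how surprising a new
# observation is under the posterior predictive distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
diffs = rng.normal(2.0, 1.0, size=364)       # simulated derived-quantity history
sigma = diffs.std(ddof=1)                    # treat the noise scale as known (a simplification)

# Prior on the mean: Normal(mu0, tau0^2); by conjugacy the posterior stays Normal.
mu0, tau0 = 0.0, 10.0
n, xbar = len(diffs), diffs.mean()
tau_n2 = 1.0 / (1.0 / tau0**2 + n / sigma**2)
mu_n = tau_n2 * (mu0 / tau0**2 + n * xbar / sigma**2)

# Posterior predictive for the next difference: Normal(mu_n, sigma^2 + tau_n2).
predictive = stats.norm(loc=mu_n, scale=np.sqrt(sigma**2 + tau_n2))

new_diff = 9.0                                # a suspiciously large jump
tail_prob = 2 * min(predictive.cdf(new_diff), predictive.sf(new_diff))  # two-sided
print(f"two-sided tail probability: {tail_prob:.4f}")   # small => flag as abnormal
```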
I see little point in complex traditional machine learning methods (perceptron layers or SVMs, to cite only those) if you want to detect outliers. These methods work great when classifying data that is known to be reasonably clean.
