Transforming crime rates

I am working on a project using campus crime rates as the independent variable. The data is highly positively skewed, and I need to transform it to achieve a normal distribution in order to run OLS. However, I know that if I do a log transformation I will lose all instances where the crime rate is 0 (representing an absence of crime). What are other possible solutions?

While you could avoid the loss of cases by calculating something like log(1 + rate), the nonnegativity bound is likely to cause trouble anyway. You might consider using a generalized linear model (Analyze > Generalized Linear Models) with a gamma response distribution and a log link. This can deal with the right-skew issue as well.
Note, though, that it is the error that carries the normality assumption in OLS regression, not the dependent variable.
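If you are working in Python rather than the SPSS menus, a minimal sketch of that gamma/log-link suggestion with statsmodels would look like the following; the data here is made up purely to show the call, so substitute your own variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Made-up right-skewed data just to illustrate the call.
rng = np.random.default_rng(0)
df = pd.DataFrame({"predictor": rng.normal(size=200)})
df["rate"] = rng.gamma(shape=2.0, scale=np.exp(0.5 * df["predictor"]))

# Gamma family with a log link handles positive, right-skewed responses;
# exact zeros would still need special care (e.g. a different family).
# Older statsmodels versions spell the link class `links.log`.
fit = smf.glm(
    "rate ~ predictor",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
print(fit.summary())
```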


Importance of covariance

If I have calculated the coefficient of correlation, I already have an idea of the covariance. But I have seen many data scientists calculate the covariance after it. If I have the coefficient of correlation, I can say whether the data is positively or negatively correlated and how strongly, while the covariance tells me the same thing without the strength. So what is the importance of covariance if I already have the coefficient of correlation?
Please suggest, apologies if my question is of low importance.
The correlation and covariance are strictly related; indeed, rho = cov(X,Y) / (sigma_X * sigma_Y).
However, the units of a covariance are hard to interpret. For example, if we wanted to know the covariance between wages paid to employees and the number of employees at a firm, it can be shown that by converting the wages from dollars to cents, we would increase the covariance by a factor of 100. This is odd given that the underlying relationship shouldn't be different if we are talking about dollars or cents. Another way to express this is:
Cov(a*X,Y)=a*Cov(X,Y)
The correlation is always bounded between -1 and 1 and is easier to interpret.
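To make the scaling point concrete, here is a quick numpy check on made-up data: rescaling X rescales the covariance by the same factor, while the correlation is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2 * x + rng.normal(size=1000)

cov_xy = np.cov(x, y)[0, 1]
cov_ax_y = np.cov(100 * x, y)[0, 1]          # e.g. dollars -> cents
corr_xy = np.corrcoef(x, y)[0, 1]
corr_ax_y = np.corrcoef(100 * x, y)[0, 1]

print(cov_ax_y / cov_xy)    # ~100: Cov(a*X, Y) = a * Cov(X, Y)
print(corr_xy, corr_ax_y)   # identical: correlation is scale-free
```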
I tend to see correlation versus covariance as an opposition between a quick dry mathematical relation overview and a more raw relation analysis. Imagine yourself joining a project in a field you know approximately nothing about:
if a team member gives you a correlation coefficient for two key variables/indicators associated with the project, you will be able to extract all the information out of this coefficient immediately, without knowing the samples' respective scales
if he gives you a covariance, you will probably want to have a look at the data to appreciate exactly what it implies
Covariance is easily understood when the samples being compared live on a similar scale or have a similar nature, since the value you'll be considering does not try to compare two completely different things with an intuitively absurd compromise in nature/scale (remember that to compute the covariance you use the product of two possibly very different quantities, (x-mean(x))(y-mean(y))). Correlation being standardized, issues associated with varying scales and natures in the data are simply absent from your indicator, leading to an "easier interpretation" feeling.
One should therefore realize that while correlation can make it easier to understand a mathematical relationship, it obfuscates the actual nature of the data you're playing with. Looking at both can't hurt when trying to appreciate what's going on with your samples, and that's probably why you'll want to consider both. If you aren't convinced, you can also read this related stats.stackexchange question.
In case you wonder why would you want to keep close to the nature and scales of your data while trying to highlight relations between samples, a good example would be the efforts deployed in AI to extract useful features in images to feed models: you want to emphasize discriminatory descriptions of data, without filtering out other potentially interesting information with a standardization. See for example this paper that uses covariance matrices to build a dictionary on images.

Differences in FP and FN rates between two algorithms

I am conducting binary classification using logistic regression with and without applying PCA. Applying PCA before logistic regression gives higher accuracy and fewer FNs compared to logistic regression alone. I would like to find out why this is happening, specifically why PCA produces fewer FNs. I have read that cost-sensitivity analysis could help explain this, but I am not sure if this is correct. Any suggestions?
There is no need for a fancy analysis to explain this behavior.
PCA is used just to "clean" the data by limiting its variance. Let me explain this concept with an example, and then I will turn back to your question.
In general, in any ML problem, the available samples are never sufficient in number to cover all the possible variety of the sample space. You can never have a dataset with all the possible human faces, with all the possible expressions, etc.
So, instead of using all the available features, you engineer the features (the pixels, in this example) in a way that gives you more meaningful, higher-level features. You can reduce the resolution of the pictures, as an easy example; you will lose the information in the picture backgrounds, but your model will focus better on the most important part of the picture, i.e. the faces.
When you deal with tabular data, a technique similar to lowering the resolution is cutting off part of the original features, and that's what PCA does: it keeps the most important components of the features, the "Principal Components", dropping the less important ones.
So, the model trained with PCA gives better results because, by cutting off part of the features, your model focuses better on the most important part of your samples, and so it gains robustness against overfitting.
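As a rough illustration (not your exact setup), the comparison can be set up in scikit-learn like this, with a synthetic dataset and an arbitrary number of retained components standing in for your data and choices:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic binary classification data as a placeholder for your own.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
with_pca = make_pipeline(StandardScaler(), PCA(n_components=5),
                         LogisticRegression(max_iter=1000))

for name, model in [("logistic", plain), ("pca + logistic", with_pca)]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{name}: FN={fn} FP={fp}")
```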
cheers

Will non-linear regression algorithms perform better if trained with normally distributed target values?

After finding out about the many transformations that can be applied to the target values (the y column) of a data set, such as the Box-Cox transformation, I learned that linear regression models need to be trained with normally distributed target values in order to be efficient (https://stats.stackexchange.com/questions/298/in-linear-regression-when-is-it-appropriate-to-use-the-log-of-an-independent-va).
I'd like to know if the same applies to non-linear regression algorithms. So far I've seen people on Kaggle use a log transformation to mitigate heteroskedasticity when using xgboost, but they never mention whether it is also done to get normally distributed target values.
I've tried to do some research and found in Andrew Ng's lecture notes (http://cs229.stanford.edu/notes/cs229-notes1.pdf), on page 11, that the least squares cost function, used by many algorithms both linear and non-linear, is derived by assuming a normal distribution of the error. I believe if the error should be normally distributed then the target values should be as well.
If this is true, then all the regression algorithms using a least squares cost function should work better with normally distributed target values.
Since xgboost uses a least squares cost function for node splitting (http://cilvr.cs.nyu.edu/diglib/lsml/lecture03-trees-boosting.pdf - slide 13), maybe this algorithm would work better if I transform the target values using a Box-Cox transformation for training the model and then apply the inverse Box-Cox transformation to the output in order to get the predicted values.
Will this theoretically speaking give better results?
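For reference, here is a minimal sketch of the workflow I mean, with scikit-learn's gradient boosting standing in for xgboost and made-up data in place of my own:

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic data with a skewed, strictly positive target (Box-Cox needs y > 0).
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
y = np.exp(y / y.std())
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

y_bc, lmbda = boxcox(y_tr)                        # transform the targets
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_bc)
pred = inv_boxcox(model.predict(X_te), lmbda)     # back to the original scale
```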
Your conjecture, "I believe if the error should be normally distributed then the target values should be as well", is totally wrong. So your question does not really have an answer, since it is not a valid question.
There is no assumption at all that the target variable should be normal.
Transforming the target variable does not mean the errors will be normally distributed. In fact, that may ruin normality.
I have no idea what this is supposed to mean: "linear regression models need to be trained with normally distributed target values in order to be efficient." Efficient in what way?
Linear regression models are global models. They simply fit a surface to the overall data. The operations are matrix operations, so the time to "train" the model depends only on the size of the data. The distribution of the target has nothing to do with model-building performance, and it has nothing to do with model-scoring performance either.
Because targets are generally not normally distributed, I would certainly hope that such a distribution is not required for a machine learning algorithm to work effectively.
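As a small illustration of the matrix-operations point above, the closed-form least squares fit below runs identically whether the (made-up) target is skewed or not:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y_skewed = rng.lognormal(mean=1.0, sigma=1.0, size=500)   # heavily right-skewed

beta, *_ = np.linalg.lstsq(X, y_skewed, rcond=None)       # closed-form solve
print(beta)
```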

Binary classification of sensor data

My problem is the following: I need to classify a data stream coming from a sensor. I have managed to get a baseline using the median of a window, and I subtract the values from that baseline (I want to avoid negative peaks, so I only use the absolute value of the difference).
Now I need to distinguish an event (= something triggered the sensor) from the noise near the baseline.
The problem is that I don't know which method to use.
There are several approaches I have thought of:
sum up the values in a window, if the sum is above a threshold the class should be EVENT ('Integrate and dump')
sum up the differences of the values in a window and take the mean (which gives something like the first derivative); if the value is positive and above a threshold, set class EVENT, otherwise set class NO-EVENT
combination of both
(unfortunately these approaches have the drawback that I need to guess the threshold values and set the window size)
using an SVM that learns from manually classified data (but I don't know how to set this algorithm up properly: which features should I look at, e.g. the median/mean of a window? the integral? the first derivative? ...)
What would you suggest? Are there better/simpler methods to get this task done?
I know there exist a lot of sophisticated algorithms, but I'm confused about what could be the best way - please have a little patience with a newbie who has no machine learning/DSP background :)
Thank you a lot and best regards.
The key to evaluating your heuristic is to develop a model of the behaviour of the system.
For example, what is the model of the physical process you are monitoring? Do you expect your samples, for example, to be correlated in time?
What is the model for the sensor output? Can it be modelled as, for example, a discretized linear function of the voltage? Is there a noise component? Is the magnitude of the noise known or unknown but constant?
Once you've listed your knowledge of the system that you're monitoring, you can then use that to evaluate and decide upon a good classification system. You may then also get an estimate of its accuracy, which is useful for consumers of the output of your classifier.
Edit:
Given the more detailed description, I'd suggest trying some simple models of behaviour that can be tackled using classical techniques before moving to a generic supervised learning heuristic.
For example, suppose:
The baseline, event threshold and noise magnitude are all known a priori.
The underlying process can be modelled as a Markov chain: it has two states (off and on) and the transition times between them are exponentially distributed.
You could then use a hidden Markov Model approach to determine the most likely underlying state at any given time. Even when the noise parameters and thresholds are unknown, you can use the HMM forward-backward training method to train the parameters (e.g. mean, variance of a Gaussian) associated with the output for each state.
If you know even more about the events, you can get by with simpler approaches: for example, if you knew that the event signal always reached a level above the baseline + noise, and that events were always separated in time by an interval larger than the width of the event itself, you could just do a simple threshold test.
Edit:
The classic intro to HMMs is Rabiner's tutorial (a copy can be found here). Relevant also are these errata.
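If you want to experiment with the HMM route, here is a minimal sketch using the hmmlearn package (my choice of library, not something from the thread), run on simulated stand-in data for your baseline-subtracted stream:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Simulated stand-in signal: baseline noise with one "on" burst in the middle.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.2, size=300)
event = 1.5 + rng.normal(0.0, 0.2, size=100)
signal = np.concatenate([noise, event, noise]).reshape(-1, 1)

# Two hidden states (baseline vs. event); EM / forward-backward training
# estimates each state's Gaussian mean and variance from the data itself.
hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
hmm.fit(signal)
states = hmm.predict(signal)   # most likely (Viterbi) state sequence
```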
From your description, a correctly parameterized moving average might be sufficient.
Try to understand the sensor and its output. Make a model and build a simulator that provides mock data covering the expected data, with noise and all that.
Get lots of real sensor data recorded
visualize the data and verify your assumptions and model
annotate your sensor data, i.e. generate ground truth (your simulator should do that for the mock data)
from what you have learned so far, propose one or more algorithms
make a test system that can verify your algorithms against ground truth and do regression against previous runs
implement your proposed algorithms and run them against ground truth
try to understand the false positives and false negatives from the recorded data (and try to adapt your simulator to reproduce them)
adapt your algorithm(s)
some other tips
you may implement hysteresis on thresholds to avoid bouncing (see the sketch after this list)
you may implement delays to avoid bouncing
beware of delays if implementing debouncers or low pass filters
you may implement multiple algorithms and voting
for testing relative improvements you may run regression tests on large amounts of unannotated data; then you check only the detections that flipped to find the performance increase/decrease
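As an example of the hysteresis tip above, here is a minimal sketch; the thresholds and the example trace are made-up placeholders for your own baseline-subtracted data.

```python
import numpy as np

def detect_events(signal, on_threshold, off_threshold):
    """Label each sample True (EVENT) or False (NO-EVENT).

    Switching on at a higher level than switching off keeps the output
    from bouncing when the signal hovers near a single cutoff.
    """
    labels = np.zeros(len(signal), dtype=bool)
    active = False
    for i, value in enumerate(signal):
        if not active and value > on_threshold:
            active = True
        elif active and value < off_threshold:
            active = False
        labels[i] = active
    return labels

# Tiny made-up trace to show the call.
trace = np.array([0.1, 0.2, 1.4, 1.1, 0.9, 1.2, 0.3, 0.1])
print(detect_events(trace, on_threshold=1.0, off_threshold=0.5))
```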

Odd correlated posterior traceplots in multilevel model

I'm trying out PyMC3 with a simple multilevel model. When using both fake and real data, the traces of the random effect distributions move with each other (see plot below) and appear to be offsets of the same trace. Is this an expected artifact of NUTS or an indication of a problem with my model?
Here is a traceplot on real data:
Here is an IPython notebook of the model and the functions used to create the fake data. Here is the corresponding gist.
I would expect this to happen in accordance with the group mean distribution on alpha. If you think about it, if the group mean shifts around it will influence all alphas to the same degree. You could confirm this by doing a scatter plot of the group mean trace against some of the alphas. Hierarchical models are in general difficult for most samplers because of these complex interdependencies between group mean and variance and the individual RVs. See http://arxiv.org/abs/1312.0906 for more information on this.
In your specific case, the trace doesn't look too worrisome to me, especially after iteration 1000. So you could probably just discard those as burn-in and keep in mind that you have some sampling noise but probably got the right posterior overall. In addition, you might want to perform a posterior predictive check to see if the model can reproduce the patterns in your data you are interested in.
Alternatively, you could try to estimate a better hessian using pm.find_hessian(), e.g. https://github.com/pymc-devs/pymc/blob/3eb2237a8005286fee32776c304409ed9943cfb3/pymc/examples/hierarchical.py#L51
I also found this paper which looks interesting (haven't read it yet but might be cool to implement in PyMC3): arxiv-web3.library.cornell.edu/pdf/1406.3843v1.pdf
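A quick sketch of the scatter-plot diagnostic suggested above; `trace` is assumed to be your existing PyMC3 MultiTrace, and "group_mean" / "alpha" stand for whatever your model names the group-level mean and the individual intercepts.

```python
import matplotlib.pyplot as plt

# Posterior draws from your existing trace (variable names are assumptions).
group_mean = trace["group_mean"]        # shape (n_samples,)
alphas = trace["alpha"]                 # shape (n_samples, n_groups)

# Strong diagonal structure here confirms the group mean is dragging
# the individual alphas around together.
plt.scatter(group_mean, alphas[:, 0], s=5, alpha=0.3)
plt.xlabel("group mean draw")
plt.ylabel("alpha[0] draw")
plt.show()
```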

Resources