Using ROC value as a threshold in confusion matrix - roc

Is it possible to use the AUC value as a threshold to build a confusion matrix?

Related

multiple ROC curve in R with a matrix of prediction values and labels

I want to plot multiple ROC curves from a matrix of predictions and labels. I have > 100 samples, each with its own predictions and labels, and the samples have different lengths. How could I design a single matrix for all the samples and get multiple ROC curves in a single plot? I would appreciate any suggestions. Thanks
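The question asks about R, but as a rough sketch (in Python, for consistency with the other examples on this page), one way to handle samples of different lengths is to skip the single rectangular matrix altogether and keep a list of (scores, labels) pairs, computing one ROC curve per sample. The `roc_points` helper below is illustrative, not from any particular library:

```python
import numpy as np

def roc_points(scores, labels):
    """Compute (FPR, TPR) points by sweeping a threshold over the scores.

    scores: predicted scores, higher means more likely positive.
    labels: binary ground-truth labels (0/1).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = labels.sum()
    neg = len(labels) - pos
    # Sweep thresholds from high to low so the curve runs (0,0) -> (1,1).
    thresholds = np.r_[np.inf, np.sort(np.unique(scores))[::-1]]
    fpr, tpr = [], []
    for t in thresholds:
        pred = scores >= t
        tpr.append((pred & (labels == 1)).sum() / pos)
        fpr.append((pred & (labels == 0)).sum() / neg)
    return np.array(fpr), np.array(tpr)

# Samples of different lengths: a list of (scores, labels) pairs
# instead of one rectangular matrix.
samples = [
    (np.array([0.9, 0.8, 0.3, 0.1]), np.array([1, 1, 0, 0])),
    (np.array([0.7, 0.4, 0.6, 0.2, 0.5]), np.array([1, 0, 1, 0, 0])),
]

curves = [roc_points(s, y) for s, y in samples]
# To overlay the curves in one figure with matplotlib:
#   for fpr, tpr in curves: plt.plot(fpr, tpr)
```

The same idea carries over to R (e.g. a list of per-sample vectors rather than a matrix), since nothing here requires the samples to share a length.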

What is the similarity score in the gensim similar_by_word function?

I was reading here about the gensim similar_by_word function:
https://radimrehurek.com/gensim/models/keyedvectors.html
The similar_by_word function returns a sequence of (word, similarity) pairs. What is the definition of similarity here and how is it calculated?
The similarity measure used here is the cosine similarity, which takes values between -1 and 1. The cosine similarity measures the cosine of the angle between two vectors: if the angle is very small, the vectors are considered similar since they point in nearly the same direction. This way of measuring similarity is common when working with high-dimensional vector spaces such as word embeddings.
The formula for the cosine similarity of two vectors $A$ and $B$ is as follows:
$$\text{similarity}(A,B)=\frac{A \cdot B}{\|A\|\,\|B\|}=\frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\,\sqrt{\sum_{i=1}^{n} B_i^2}}$$
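As a small sketch, the same cosine similarity can be computed directly with numpy (this mirrors the definition above; it is not gensim's internal code):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b, a value in [-1, 1]."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([1, 0], [1, 0]))   # parallel vectors -> 1.0
print(cosine_similarity([1, 0], [0, 1]))   # orthogonal vectors -> 0.0
print(cosine_similarity([1, 0], [-1, 0]))  # opposite vectors -> -1.0
```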

Plotting ROC curve for binary image

I think ROC has some limitations for evaluating the performance of a binary classifier. I plot the ROC for an image classifier with this procedure: first I apply my detection method to the image; its result is a gray-scale image. I then apply different thresholds to obtain different binary images. For a range of thresholds (for example th = 0.01:0.01:1), I obtain a binary image corresponding to each threshold. Then for each binary image the true positive rate (TPR) and false positive rate (FPR) are calculated, and each (FPR, TPR) pair determines a point on the ROC curve. The whole curve consists of the points calculated from the binary images.

And my problem: if at the first step I have a binary image rather than a gray-scale image, how can I apply different thresholds to it to plot an ROC curve? Is there a performance measure other than ROC that is suitable for this case?
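The thresholding procedure described above can be sketched with numpy; the detection map and ground truth here are synthetic, and `roc_from_grayscale` is an illustrative name, not a library function:

```python
import numpy as np

def roc_from_grayscale(detection, ground_truth, thresholds):
    """Threshold a gray-scale detection map at each value and compute
    (FPR, TPR) against a binary ground-truth mask."""
    gt = ground_truth.astype(bool)
    pos = gt.sum()
    neg = gt.size - pos
    fpr, tpr = [], []
    for t in thresholds:
        binary = detection >= t          # one binary image per threshold
        tpr.append((binary & gt).sum() / pos)
        fpr.append((binary & ~gt).sum() / neg)
    return np.array(fpr), np.array(tpr)

# Synthetic example: detector scores are high inside the true region.
gt = np.zeros((4, 4), dtype=bool)
gt[:2, :2] = True
detection = np.where(gt, 0.9, 0.1)
thresholds = np.arange(0.01, 1.01, 0.01)  # th = 0.01:0.01:1
fpr, tpr = roc_from_grayscale(detection, gt, thresholds)
```

If the detector already outputs a hard binary image, there is no score to threshold, so only a single (FPR, TPR) operating point exists; in that case single-point measures such as precision/recall or F1 are commonly reported instead of a full ROC curve.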

Understanding Gradient Descent Algorithm

I'm learning Machine Learning. I was reading a topic called Linear Regression with one variable and I got confused while understanding Gradient Descent Algorithm.
Suppose we are given a problem with a training set in which each pair $(x^{(i)},y^{(i)})$ represents (feature/input variable, target/output variable). Our goal is to create a hypothesis function for this training set which can make predictions.
Hypothesis Function:
$$h_{\theta}(x)=\theta_0 + \theta_1 x$$
Our target is to choose $(\theta_0,\theta_1)$ so that $h_{\theta}(x)$ best fits the values in the training set.
Cost Function:
$$J(\theta_0,\theta_1)=\frac{1}{2m}\sum\limits_{i=1}^m (h_{\theta}(x^{(i)})-y^{(i)})^2$$
$$J(\theta_0,\theta_1)=\frac{1}{2}\times \text{Mean Squared Error}$$
We have to minimize $J(\theta_0,\theta_1)$ to get the values of $(\theta_0,\theta_1)$, which we can then put into our hypothesis function. We can do that by applying the Gradient Descent Algorithm on the surface $(\theta_0,\theta_1,J(\theta_0,\theta_1))$.
My question is how we can choose $(\theta_0,\theta_1)$ and plot the curve $(\theta_0,\theta_1,J(\theta_0,\theta_1))$. In the online lecture I was watching, the instructor explained everything but didn't mention where the plot comes from.
At each iteration you will have some $h_{\theta}$, and you will calculate the value of $\frac{1}{2m}\sum_{i=1}^m (h_{\theta}(x^{(i)})-y^{(i)})^2$.
At each iteration $h_{\theta}$ is known, and the values $(x^{(i)},y^{(i)})$ for each training sample are known, so the above is easy to calculate.
For each iteration you have a new value of $\theta$, and you can calculate the new MSE.
The plot itself will have the iteration number on the x axis and the MSE on the y axis.
As a side note: while you can use gradient descent here, there is no need to. This cost function is convex and has a single minimum with a well-known closed form, $\theta = (X^T X)^{-1}X^T y$, where $y$ is the vector of training targets ($n \times 1$ for a training set of size $n$) and $X$ is the $n \times 2$ matrix whose rows are $X_i=(1,x_i)$.
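A minimal sketch of both routes (iterative gradient descent, and the closed-form normal equations mentioned above) on synthetic data; the learning rate, iteration count, and data are arbitrary choices for illustration:

```python
import numpy as np

def gradient_descent(x, y, lr=0.5, iters=2000):
    """Minimise J(theta0, theta1) = 1/(2m) * sum((theta0 + theta1*x - y)^2).
    Returns the parameters and the cost recorded at each iteration
    (the values that go on the y axis of the convergence plot)."""
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    costs = []
    for _ in range(iters):
        h = theta0 + theta1 * x
        costs.append(((h - y) ** 2).sum() / (2 * m))
        # Partial derivatives of J with respect to theta0 and theta1.
        grad0 = (h - y).sum() / m
        grad1 = ((h - y) * x).sum() / m
        theta0 -= lr * grad0
        theta1 -= lr * grad1
    return theta0, theta1, costs

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = 2.0 + 3.0 * x          # noiseless line, so the true theta is (2, 3)

t0, t1, costs = gradient_descent(x, y)

# Closed-form solution via the normal equations, for comparison.
X = np.c_[np.ones_like(x), x]          # rows X_i = (1, x_i)
theta = np.linalg.solve(X.T @ X, X.T @ y)
```

Recording `costs` at each step gives exactly the iteration-vs-MSE plot described above, and both routes should agree on $(\theta_0,\theta_1)$.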

How do you plot an ROC curve for a 1 nearest neighbor classifier

I want to evaluate my KNN classifier with K = 1 against Support Vector Machine classifiers etc., but I'm not sure whether the way I am computing the ROC plot is correct. The classifier is constructed for a two-class problem (positive and negative class).
If I understand correctly, to compute the ROC for a KNN with K = 20, we get the first point on the plot from the true positive and false positive rates over the test samples where 1 or more of the 20 nearest neighbors are of the positive class. For the second point we evaluate the true positive and false positive rates over the test samples where 2 or more of the 20 nearest neighbors are of the positive class. This is repeated until the threshold reaches 20 out of 20 nearest neighbors.
For the case where K = 1, does the ROC curve simply have a single point on the plot? Is there a better way to compute the ROC for the 1NN case? How can we fairly compare the performance of the 1NN classifier to an SVM classifier? Can we compare the performance of the classifiers only at the single false positive rate of the 1NN classifier?
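The procedure in the question amounts to scoring each test point by the fraction of its K nearest neighbours that are positive, then sweeping a threshold over that score. A sketch of that idea (the data and the `knn_scores` helper are synthetic/illustrative):

```python
import numpy as np

def knn_scores(train_x, train_y, test_x, k):
    """Score each test point by the fraction of its k nearest
    training neighbours that belong to the positive class."""
    scores = []
    for x in test_x:
        dist = np.abs(train_x - x)        # distances in this 1-D example
        nearest = np.argsort(dist)[:k]
        scores.append(train_y[nearest].mean())
    return np.array(scores)

train_x = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 1.1])
train_y = np.array([0, 0, 0, 1, 1, 1])
test_x = np.array([0.05, 0.15, 0.95, 1.05])
test_y = np.array([0, 0, 1, 1])

# With k = 3 the possible score values are {0, 1/3, 2/3, 1}.
scores_k3 = knn_scores(train_x, train_y, test_x, k=3)
# With k = 1 the score is only ever 0 or 1, so thresholding it
# yields a single interior point on the ROC plot.
scores_k1 = knn_scores(train_x, train_y, test_x, k=1)
```

Since the 1NN score takes only two values, its ROC curve really does collapse to one operating point. One option for a fair comparison is to report that single (FPR, TPR) point alongside the SVM's full curve; another is to use a larger K, or distance-weighted votes, so that the KNN score becomes graded enough to trace a curve.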
