I am building an email classification model. Currently, I am using NLTK's stopwords and lemmatization during data pre-processing. These are the parameters for the TF-IDF vectorizer I am using:
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer(sublinear_tf=True,
                        min_df=5,
                        norm='l2',
                        ngram_range=(1, 2),
                        stop_words='english')
I am using LogisticRegression for classification.
from sklearn.linear_model import LogisticRegression  # Logistic Regression - (best performance so far)
from sklearn.model_selection import train_test_split
from sklearn import metrics

X_train, X_test, y_train, y_test = train_test_split(df['Rejoined_Lemmatize'], df['Product'], random_state=0, test_size=0.2)
X_train_counts = tfidf.fit_transform(X_train)
clf = LogisticRegression(random_state=0).fit(X_train_counts, y_train)
y_pred = clf.predict(tfidf.transform(X_test))  # predicting with the model
# labels/target_names must be aligned lists of classes, not the whole Product column
print(metrics.classification_report(y_test, y_pred, labels=df['Product'].unique(), target_names=df['Product'].unique()))
I am getting the following results from the above code:
                             precision    recall  f1-score   support

    Bank account or service       0.45      0.52      0.48        46
Checking or savings account       0.60      0.52      0.56        56
            Money transfers       0.60      0.52      0.56        56
               Student loan       0.60      0.52      0.56        56
              Consumer Loan       0.86      0.86      0.86        64
                Payday loan       0.91      0.96      0.94        55
            Debt collection       0.88      0.71      0.79        62
                   Mortgage       0.88      0.71      0.79        62
           Credit reporting       0.86      0.86      0.86        64
               Prepaid card       0.81      0.80      0.81        65
                Credit card       0.60      0.52      0.56        56

                   accuracy                           0.79       642
                  macro avg       0.79      0.79      0.78       642
               weighted avg       0.80      0.79      0.79       642
How can I improve this accuracy?
Note - I am working on the Consumer Complaints Dataset. I am only using 3300 rows from that dataset, and I have balanced the classes, i.e. 300 emails from each category:
11 categories * 300 emails = 3300 rows.
The response to the question could be improved if sample data were provided. However, with the given information, the following points may help in exploring the problem.
The recall values for the first 4 classes are identical up to the second decimal. One could hypothesize that the underlying test data may have tokens that are common to them. One could further hypothesize that this is a multi-label, multi-class problem and not just a multi-class problem. To explain further, it could be possible that emails referring to a student loan also contain text about the sender's bank account/service, money transfers, credit card etc. Thus one email can be mapped to more than one class. For such problems, a cleaner but harder way of classifying can be found [here](https://www.listendata.com/2018/05/sentiment-analysis-using-python.html).
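If the problem really is multi-label, scikit-learn can express it as a one-vs-rest setup over binarized label sets. A minimal sketch on made-up toy data (the texts and label sets below are hypothetical, not from the asker's dataset; min_df is dropped so the tiny corpus isn't filtered out):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical toy data: one *set* of labels per email, so a single email
# can map to more than one product category.
X_texts = ["problem with my student loan and my bank account",
           "late fee charged on my credit card",
           "mortgage payment sent by money transfer went missing"]
y_multi = [{"Student loan", "Bank account or service"},
           {"Credit card"},
           {"Mortgage", "Money transfers"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y_multi)    # binary indicator matrix, one column per label

vec = TfidfVectorizer()
X = vec.fit_transform(X_texts)

clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X)))   # label *sets*, not single labels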
The recall and precision for the Payday loan class are the highest. One could hypothesize that the word "payday" is relatively unique, and hence the model is able to classify this class best. Inspecting the most important features per class will help confirm this hypothesis. The inference is that this model performs well on the Payday loan class.
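One way to inspect the most important features per class is to read off the largest logistic-regression coefficients. A sketch, assuming the fitted clf and tfidf from the question (get_feature_names_out needs scikit-learn >= 1.0; older versions use get_feature_names):

import numpy as np

feature_names = np.array(tfidf.get_feature_names_out())
for i, label in enumerate(clf.classes_):
    top10 = np.argsort(clf.coef_[i])[-10:][::-1]  # indices of the 10 largest coefficients
    print(label, '->', ', '.join(feature_names[top10]))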
The recall and precision for Debt collection and Mortgage are identical up to the second decimal. It is intuitive that emails about debt collection will contain mortgage-related tokens whenever the debt is to do with mortgages. The inference is to discuss with the business owner whether these two classes should be merged into one.
I am trying to create a new column in my dataframe based on the maximum values across 3 columns. However, depending on the values within each row, I want it to select either the most negative value or the most positive value. If the average of an individual row across the 3 columns is greater than 0, I want it to report the most positive value; if it is less than 0, I want it to report the most negative value.
Here is an example of the dataframe
A B C
-0.30 -0.45 -0.25
0.25 0.43 0.21
-0.10 0.10 0.25
-0.30 -0.10 0.05
And here is the desired output
A B C D
-0.30 -0.45 -0.25 -0.45
0.25 0.43 0.21 0.43
-0.10 0.10 0.25 0.25
-0.30 -0.10 0.05 -0.30
I had first tried playing around with something like
data %>%
mutate(D = pmax(abs(A), abs(B), abs(C)))
But that just returns a column of the greatest absolute values, where everything is positive.
Thanks in advance for your help, and apologies if the formatting of the question is off; I don't use this site a lot. Happy to clarify anything as well.
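For reference, the row-wise logic being asked for can be sketched in pandas: compute the row mean, then pick the row max or row min accordingly (the same idea in dplyr would combine rowMeans with ifelse, pmax, and pmin). This is an illustration of the logic, not the asker's code:

import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [-0.30, 0.25, -0.10, -0.30],
                   "B": [-0.45, 0.43, 0.10, -0.10],
                   "C": [-0.25, 0.21, 0.25, 0.05]})

row_mean = df[["A", "B", "C"]].mean(axis=1)
# Most positive value when the row mean is positive, most negative otherwise.
df["D"] = np.where(row_mean > 0,
                   df[["A", "B", "C"]].max(axis=1),
                   df[["A", "B", "C"]].min(axis=1))
print(df)   # D = [-0.45, 0.43, 0.25, -0.30], matching the desired output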
How can I get the appropriate metric (accuracy, F1, etc.) for each label?
I use the Trainer from Transformers.
https://huggingface.co/docs/transformers/main_classes/trainer
I would like to have an output similar to the sklearn.metrics.classification_report
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html
Thanks for your help!
You can print the sklearn classification report during the training phase by adjusting the compute_metrics() function and passing it to the trainer. For a little demo, you can change the function in the official huggingface example to the following:
import numpy as np
from sklearn.metrics import classification_report

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    if task != "stsb":  # `task` and `metric` are defined in the linked example
        predictions = np.argmax(predictions, axis=1)
    else:  # STS-B is a regression task
        predictions = predictions[:, 0]
    print(classification_report(labels, predictions))
    return metric.compute(predictions=predictions, references=labels)
After each epoch you get the following output:
              precision    recall  f1-score   support

           0       0.76      0.36      0.49       322
           1       0.77      0.95      0.85       721

    accuracy                           0.77      1043
   macro avg       0.77      0.66      0.67      1043
weighted avg       0.77      0.77      0.74      1043
For more fine-grained control during your training phase, you can also define a callback to customise the behaviour of the training loop during different states.
from transformers import Trainer, TrainerCallback

class PrintClassificationCallback(TrainerCallback):
    def on_evaluate(self, args, state, control, logs=None, **kwargs):
        print("Called after evaluation phase")

trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[PrintClassificationCallback],
)
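Alternatively, after (or between) training runs you can pull a full report out of Trainer.predict, which returns the logits together with the gold labels. A sketch, assuming a standard classification setup:

import numpy as np
from sklearn.metrics import classification_report

pred = trainer.predict(eval_dataset)           # PredictionOutput: .predictions, .label_ids
y_pred = np.argmax(pred.predictions, axis=1)   # logits -> predicted class ids
print(classification_report(pred.label_ids, y_pred))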
After your training phase you can also use your trained model in a classification pipeline to pass one or more samples to your model and get the corresponding predicted labels. For example:
from transformers import pipeline
from sklearn.metrics import classification_report

text_classification_pipeline = pipeline("text-classification", model="MyFinetunedModel")

X = ["This is a cat sentence", "This is a dog sentence", "This is a fish sentence"]
y_act = ["LABEL_1", "LABEL_2", "LABEL_3"]
labels = ["LABEL_1", "LABEL_2", "LABEL_3"]

y_pred = [result["label"] for result in text_classification_pipeline(X)]
# ground truth goes first, predictions second
print(classification_report(y_act, y_pred, labels=labels))
Output:
              precision    recall  f1-score   support

     LABEL_1       0.33      1.00      0.50         1
     LABEL_2       0.00      0.00      0.00         1
     LABEL_3       0.00      0.00      0.00         1

    accuracy                           0.33         3
   macro avg       0.11      0.33      0.17         3
weighted avg       0.11      0.33      0.17         3
Hope it helps.
I came upon the following question recently:
"You have a box which has G green and B blue coins. Pick a random coin; a green coin gives a profit of +1 and a blue one a loss of -1. If you play optimally, what is the expected profit?"
I was thinking of using a brute-force algorithm where I consider all possible combinations of green and blue coins, but I'm sure there must be a better solution (the range of B and G was from 0 to 5000). Also, what does playing optimally mean? Does it mean that if I pick all blue coins, then I would continue playing until all green coins are also picked? If so, this means I shouldn't consider all possibilities of green and blue coins?
The "obvious" answer is to play whenever there's more green coins than blue coins. In fact, this is wrong. For example, if there's 999 green coins and 1000 blue coins, here's a strategy that takes an expected profit:
Take 2 coins.
If GG, stop with a profit of 2.
If BG or GB, stop with a profit of 0.
If BB, take all the remaining coins for a profit of -1.
Since the first and last possibilities both occur with probability near 25%, your overall expectation is approximately 0.25*2 - 0.25*1 = 0.25.
This is just a simple strategy in one extreme example that shows that the problem is not as simple as it first seems.
In general, the expectation with g green coins and b blue coins is given by a recurrence relation:
E(g, 0) = g
E(0, b) = 0
E(g, b) = max(0, (g*(E(g-1, b) + 1) + b*(E(g, b-1) - 1)) / (g + b))
The max in the final row occurs because if it's -EV to play, you're better off stopping.
These recurrence relations can be solved using dynamic programming in O(gb) time.
from fractions import Fraction as F

def gb(G, B):
    # E[g][b] = expected profit with g green and b blue coins left, playing optimally.
    E = [[F(0, 1)] * (B + 1) for _ in range(G + 1)]
    for g in range(G + 1):
        E[g][0] = F(g, 1)  # no blue coins left: take all the greens
    for b in range(1, B + 1):
        for g in range(1, G + 1):
            E[g][b] = max(0, (g * (E[g-1][b] + 1) + b * (E[g][b-1] - 1)) * F(1, b + g))
    for row in E:
        print(' '.join('%5.2f' % v for v in row))
    print()
    return E[G][B]

print(gb(8, 10))
Output:
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1.00 0.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2.00 1.33 0.67 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3.00 2.25 1.50 0.85 0.34 0.00 0.00 0.00 0.00 0.00 0.00
4.00 3.20 2.40 1.66 1.00 0.44 0.07 0.00 0.00 0.00 0.00
5.00 4.17 3.33 2.54 1.79 1.12 0.55 0.15 0.00 0.00 0.00
6.00 5.14 4.29 3.45 2.66 1.91 1.23 0.66 0.23 0.00 0.00
7.00 6.12 5.25 4.39 3.56 2.76 2.01 1.34 0.75 0.30 0.00
8.00 7.11 6.22 5.35 4.49 3.66 2.86 2.11 1.43 0.84 0.36
7793/21879
From this you can see that the expectation is positive when playing with 8 green and 10 blue coins (EV = 7793/21879 ~= 0.36), and you even have a positive expectation with 2 green and 3 blue coins (EV = 0.2).
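A quick Monte Carlo sketch can confirm these values by simulating the optimal stopping policy (draw only while E(g, b) > 0); the helper names here are illustrative:

import random
from functools import lru_cache

@lru_cache(maxsize=None)
def E(g, b):
    # Same recurrence as above, in floating point.
    if g == 0:
        return 0.0
    if b == 0:
        return float(g)
    return max(0.0, (g * (E(g - 1, b) + 1) + b * (E(g, b - 1) - 1)) / (g + b))

def simulate(G, B, trials=100000):
    total = 0
    for _ in range(trials):
        g, b, profit = G, B, 0
        while E(g, b) > 0:                  # optimal policy: draw only in +EV states
            if random.random() < g / (g + b):
                profit, g = profit + 1, g - 1
            else:
                profit, b = profit - 1, b - 1
        total += profit
    return total / trials

print(simulate(8, 10))   # ~0.36, matching 7793/21879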
Simple and intuitive answer:
You should start off with an estimate of the total number of blue and green coins. After each pick you update this estimate. If at any point you estimate there are more blue coins than green coins, you should stop.
Example:
You start and pick a coin. It's green, so you estimate 100% of the coins are green. You pick a blue, so you estimate 50% of the coins are green. You pick another blue coin, so you estimate 33% of the coins are green. At this point it isn't worth playing anymore, according to your estimate, so you stop.
This answer is wrong; see Paul Hankin's answer for counterexamples and a proper analysis. I leave this answer here as a learning example for all of us.
Assuming that your choice is only when to stop picking coins, you continue as long as G > B. That part is simple. If you start with G < B, then you never start drawing, and your gain is 0. For G = B, no strategy will get you a mathematical advantage; the gain there is also 0.
For the expected reward, take this in two steps:
(1) Expected value on any draw sequence. Do this recursively, figuring the chance of getting green or blue on the first draw, and then the expected values for the new state (G-1, B) or (G, B-1). You will quickly see that the expected value of any given draw number (such as all possibilities for the 3rd draw) is the same as the original.
Therefore, your expected value on any draw is e = (G-B) / (G+B). Your overall expected value is e * d, where d is the number of draws you choose.
(2) What is the expected number of draws? How many times do you expect to draw before G = B? I'll leave this as an exercise for the student, but note the previous idea of doing this recursively. You might find it easier to describe the state of the game as (extra, total), where extra = G-B and total = G+B.
Illustrative exercise: given G=4, B=2, what is the chance that you'll draw GG on the first two draws (and then stop the game)? What is the gain from that? How does that compare with the (4-2)/(4+2) advantage on each draw?
In a trick-taking game, it is often easy to keep track of which cards each player can possibly have left. For instance, if following suit is mandatory and a player does not follow suit, it is obvious that that player has no more cards of that particular suit.
This means, during the game you can build up knowledge about which cards each player can possibly have.
Is there a way to efficiently calculate (a reasonably accurate) chance that a specific player actually has a certain card?
A naive way would be to generate all permutations of the remaining cards and check which of these permutations are possible given the constraints mentioned earlier. But this is not really efficient.
Another approach would be to just check how many players could have a particular card. For instance, if 3 players might have a particular card, you could use 1/3 as the chance that a particular player has it. But this is often inaccurate.
For instance:
Each player has 2 cards left
Player A can have the AS, KS.
Player B can have the AS, KS, AH, and KH.
Algorithm 1 (enumeration) would correctly find that the chance Player B has the AS is 0: since each player holds 2 cards and Player A can only hold the AS and KS, Player A must have both.
Algorithm 2 (counting) would incorrectly find that the chance Player B has the AS is 0.5.
Is there a better algorithm that would be both reasonably accurate and reasonably fast?
Take a page from the book of quantum mechanics. Consider that every card is in a mix of states with probabilities - e.g. x|AS> + y|KS> + z|AH> + w|KH>. For 36 cards, you get a 36 x 36 matrix, where initially all values equal 1/36. The constraints are that the sum of all values in a row equals 1 (every card is somewhere) and the sum of all values in a column equals 1 (every card is something). For your mini-example, the initial matrix would be
0.25 0.25 0.25 0.25 (AS)
0.25 0.25 0.25 0.25 (KS)
0.25 0.25 0.25 0.25 (AH)
0.25 0.25 0.25 0.25 (KH)
(0) (1) (2) (3)
Let A's cards be slots (0) and (1), and B's cards slots (2) and (3). The chance of B having the AS is 0.5.
Now suppose you observe that P(0 = AH) = 0. You set the corresponding element to 0 and proportionally rescale the row and column values, then all other values, so that the sums remain 1:
0.33 0.22 0.22 0.22 (AS)
0.33 0.22 0.22 0.22 (KS)
0.00 0.33 0.33 0.33 (AH)
0.33 0.22 0.22 0.22 (KH)
(0) (1) (2) (3)
Adding observations P(0 = KH) = 0, P(1 = AH) = 0, P(1 = KH) = 0 gets you this matrix:
0.50 0.50 0.00 0.00 (AS)
0.50 0.50 0.00 0.00 (KS)
0.00 0.00 0.50 0.50 (AH)
0.00 0.00 0.50 0.50 (KH)
(0) (1) (2) (3)
As you can see, P(2 = AS or 3 = AS) = 0, as it should be.
Note that most games allow the player to shuffle the cards in his or her hand (i.e. when B plays a card, you don't know if it's (2) or (3)). Suppose A and B exchange cards (1) and (2) - this leaves the matrix the same - and then B shuffles his cards; the matrix becomes
0.50 0.25 0.00 0.25 (AS)
0.50 0.25 0.00 0.25 (KS)
0.00 0.25 0.50 0.25 (AH)
0.00 0.25 0.50 0.25 (KH)
(0) (1) (2) (3)
Also note that the model isn't perfect - it doesn't let you record observations like "B has either (AS, KH) or (AH, KS)". But under certain definitions of "reasonably accurate", it probably is.
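The proportional rescaling described here is essentially iterative proportional fitting (Sinkhorn scaling) of a doubly stochastic matrix. A minimal numpy sketch using the 4-card mini-example (renormalize is an illustrative name, and it assumes no row or column becomes all-zero):

import numpy as np

def renormalize(P, iters=200):
    # Alternately rescale rows and columns to sum to 1 (iterative proportional
    # fitting). Zero entries stay zero, so observed impossibilities are kept.
    P = P.copy()
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    return P

# Rows are cards (AS, KS, AH, KH), columns are slots (0)-(3).
P = np.full((4, 4), 0.25)
P[2, 0] = P[3, 0] = P[2, 1] = P[3, 1] = 0.0   # slots 0 and 1 cannot be AH or KH
print(renormalize(P).round(2))                 # converges to the 0.50 block matrix above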
I've begun to believe that data frames hold no advantages over matrices, except for notational convenience. However, I noticed this oddity when running unique on matrices and data frames: it seems to run faster on a data frame.
a = matrix(sample(2,10^6,replace = TRUE), ncol = 10)
b = as.data.frame(a)
system.time({
u1 = unique(a)
})
user system elapsed
1.840 0.000 1.846
system.time({
u2 = unique(b)
})
user system elapsed
0.380 0.000 0.379
The timing results diverge even more substantially as the number of rows is increased. So, there are two parts to this question.
1. Why is this slower for a matrix? It seems faster to convert to a data frame, run unique, and then convert back.
2. Is there any reason not to just wrap unique in myUnique, which does the conversions in part 1?
Note 1. Given that a matrix is atomic, it seems that unique should be faster for a matrix, rather than slower. Being able to iterate over fixed-size, contiguous blocks of memory should generally be faster than running over separate blocks of linked lists (I assume that's how data frames are implemented...).
Note 2. As demonstrated by the performance of data.table, running unique on a data frame or a matrix is a comparatively bad idea - see the answer by Matthew Dowle and the comments for relative timings. I've migrated a lot of objects to data tables, and this performance is another reason to do so. So although users would be well served to adopt data tables, for pedagogical / community reasons I'll leave the question open for now regarding why this takes longer on matrix objects. The answers below address where the time goes, and how else we can get better performance (i.e. data tables). The answer to why is close at hand - the code can be found via unique.data.frame and unique.matrix. :) An English explanation of what it's doing and why is all that is lacking.
In this implementation, unique.matrix is the same as unique.array
> identical(unique.array, unique.matrix)
[1] TRUE
unique.array has to handle multi-dimensional arrays, which requires additional processing to ‘collapse’ the extra dimensions (those extra calls to paste()) that is not needed in the 2-dimensional case. The key section of code is:
collapse <- (ndim > 1L) && (prod(dx[-MARGIN]) > 1L)
temp <- if (collapse)
apply(x, MARGIN, function(x) paste(x, collapse = "\r"))
unique.data.frame is optimised for the 2D case; unique.matrix is not. It could be, as you suggest - it just isn't in the current implementation.
Note that in all cases (unique.{array,matrix,data.frame}) where there is more than one dimension, it is the string representation that is compared for uniqueness. For floating-point numbers this means 15 significant digits, so
NROW(unique(a <- matrix(rep(c(1, 1+4e-15), 2), nrow = 2)))
is 1 while
NROW(unique(a <- matrix(rep(c(1, 1+5e-15), 2), nrow = 2)))
and
NROW(unique(a <- matrix(rep(c(1, 1+4e-15), 1), nrow = 2)))
are both 2. Are you sure unique is what you want?
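The 15-digit effect is easy to reproduce outside R. A quick Python illustration of what comparing a 15-significant-digit string representation does to nearby doubles:

# Compare by a 15-significant-digit string representation and the two
# doubles collapse to one value, even though they are not equal.
x, y = 1.0, 1.0 + 4e-15
print('%.15g' % x == '%.15g' % y)   # True: identical at 15 significant digits
print(x == y)                       # False: the underlying doubles differ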
Not sure, but I guess that because a matrix is one contiguous vector, R copies it into column vectors first (like a data.frame) because paste needs a list of vectors. Note that both methods are slow because both use paste.
Perhaps because unique.data.table is already many times faster. Please upgrade to v1.6.7 by downloading it from the R-Forge repository because that has the fix to unique you raised in this question. data.table doesn't use paste to do unique.
a = matrix(sample(2,10^6,replace = TRUE), ncol = 10)
b = as.data.frame(a)
system.time(u1<-unique(a))
user system elapsed
2.98 0.00 2.99
system.time(u2<-unique(b))
user system elapsed
0.99 0.00 0.99
c = as.data.table(b)
system.time(u3<-unique(c))
user system elapsed
0.03 0.02 0.05 # 60 times faster than u1, 20 times faster than u2
identical(as.data.table(u2),u3)
[1] TRUE
In attempting to answer my own question, especially part 1, we can see where the time is spent by looking at the results of Rprof. I ran this again, with 5M elements.
Here are the results for the first unique operation (for the matrix):
> summaryRprof("u1.txt")
$by.self
self.time self.pct total.time total.pct
"paste" 5.70 52.58 5.96 54.98
"apply" 2.70 24.91 10.68 98.52
"FUN" 0.86 7.93 6.82 62.92
"lapply" 0.82 7.56 1.00 9.23
"list" 0.30 2.77 0.30 2.77
"!" 0.14 1.29 0.14 1.29
"c" 0.10 0.92 0.10 0.92
"unlist" 0.08 0.74 1.08 9.96
"aperm.default" 0.06 0.55 0.06 0.55
"is.null" 0.06 0.55 0.06 0.55
"duplicated.default" 0.02 0.18 0.02 0.18
$by.total
total.time total.pct self.time self.pct
"unique" 10.84 100.00 0.00 0.00
"unique.matrix" 10.84 100.00 0.00 0.00
"apply" 10.68 98.52 2.70 24.91
"FUN" 6.82 62.92 0.86 7.93
"paste" 5.96 54.98 5.70 52.58
"unlist" 1.08 9.96 0.08 0.74
"lapply" 1.00 9.23 0.82 7.56
"list" 0.30 2.77 0.30 2.77
"!" 0.14 1.29 0.14 1.29
"do.call" 0.14 1.29 0.00 0.00
"c" 0.10 0.92 0.10 0.92
"aperm.default" 0.06 0.55 0.06 0.55
"is.null" 0.06 0.55 0.06 0.55
"aperm" 0.06 0.55 0.00 0.00
"duplicated.default" 0.02 0.18 0.02 0.18
$sample.interval
[1] 0.02
$sampling.time
[1] 10.84
And for the data frame:
> summaryRprof("u2.txt")
$by.self
self.time self.pct total.time total.pct
"paste" 1.72 94.51 1.72 94.51
"[.data.frame" 0.06 3.30 1.82 100.00
"duplicated.default" 0.04 2.20 0.04 2.20
$by.total
total.time total.pct self.time self.pct
"[.data.frame" 1.82 100.00 0.06 3.30
"[" 1.82 100.00 0.00 0.00
"unique" 1.82 100.00 0.00 0.00
"unique.data.frame" 1.82 100.00 0.00 0.00
"duplicated" 1.76 96.70 0.00 0.00
"duplicated.data.frame" 1.76 96.70 0.00 0.00
"paste" 1.72 94.51 1.72 94.51
"do.call" 1.72 94.51 0.00 0.00
"duplicated.default" 0.04 2.20 0.04 2.20
$sample.interval
[1] 0.02
$sampling.time
[1] 1.82
What we notice is that the matrix version spends a lot of time on apply, paste, and lapply. In contrast, the data frame version simply runs duplicated.data.frame, and most of the time is spent in paste, presumably aggregating results.
Although this explains where the time is going, it doesn't explain why these have different implementations, nor the effects of simply changing from one object type to another.