R xgboost xgb.cv pred values: best iteration or final iteration? - cross-validation

I am using the xgb.cv function to grid search the best hyperparameters in the R implementation of xgboost. When prediction is set to TRUE, it supplies the predictions for the out-of-fold observations. Presuming you are using early stopping, do the predictions correspond to the best iteration, or are they the predictions of the final iteration?

The CV predictions correspond to the best iteration. You can see this by using a 'strict' early_stopping_rounds value and then comparing the predictions with those made by models trained with the 'best' and the 'final' number of iterations, e.g.:
# Load minimum reproducible example
library(xgboost)
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
test <- agaricus.test
dtest <- xgb.DMatrix(test$data, label=test$label)
# Perform cross validation with a 'strict' early_stopping
cv <- xgb.cv(data = train$data, label = train$label, nfold = 5, max_depth = 2,
             eta = 1, nthread = 4, nrounds = 10, objective = "binary:logistic",
             prediction = TRUE, early_stopping_rounds = 1)
# Check which round was the best iteration (the one that initiated the early stopping)
print(cv$best_iteration)
[1] 3
# Get the predictions
head(cv$pred)
[1] 0.84574515 0.15447612 0.15390711 0.84502697 0.09661318 0.15447612
# Train a model using 3 rounds (corresponds to best iteration)
trained_model <- xgb.train(data = dtrain, max_depth = 2,
                           eta = 1, nthread = 4, nrounds = 3,
                           watchlist = list(train = dtrain, eval = dtrain),
                           objective = "binary:logistic")
# Get predictions
head(predict(trained_model, dtrain))
[1] 0.84625006 0.15353635 0.15353635 0.84625006 0.09530514 0.15353635
# Train a model using 10 rounds (corresponds to final iteration)
trained_model <- xgb.train(data = dtrain, max_depth = 2,
                           eta = 1, nthread = 4, nrounds = 10,
                           watchlist = list(train = dtrain, eval = dtrain),
                           objective = "binary:logistic")
head(predict(trained_model, dtrain))
[1] 0.9884467125 0.0123147098 0.0050151693 0.9884467125 0.0008781737 0.0123147098
So the predictions from the CV are approximately the same as the predictions made when the number of iterations is 'best', not 'final'. (They are not identical because each CV prediction comes from a fold model trained on only part of the data.)

Related

Line search fails when training linear SVM with caret

I am trying to train a linear SVM, tuning its parameters with 10-fold CV, for binary text classification.
The solutions provided in other threads do not work: I have already removed all NA, NaN and Inf values and balanced my dataset by downsampling, but the model still returns NAs and fails in line search. I therefore need the community's help, as I am somewhat stuck.
The data has 2099 observations of 926 variables and consists mostly of 0s, 1s, 2s and 3s.
This is my code:
set.seed(335)
trainIndex <- createDataPartition(dat_SentimentAnalysis$Usefulness, p = .75,
                                  list = FALSE,
                                  times = 1)
train <- dat_SentimentAnalysis[ trainIndex,]
test <- dat_SentimentAnalysis[-trainIndex,]
#check for distribution of class
table(train$Usefulness)
#downsample training set
train <- downSample(train, as.factor(train$Usefulness))
#check again for distribution
table(train$Usefulness)
train <- na.omit(train) #no na values detected
#separate feature and predictors
x_train <- train[2:926]
y_train <- as.factor(train$Usefulness)
x_test <- test[2:926]
y_test <- as.factor(test$Usefulness)
sum(is.na(x_train))
sum(is.na(y_train))
#tune hyperparameters for SVM
fitControl <- trainControl(method = "repeatedcv",
                           number = 10,
                           repeats = 3,
                           search = "grid",
                           classProbs = TRUE,
                           savePredictions = TRUE)
model <- caret::train(x = x_train,
                      y = y_train,
                      method = "svmLinear",
                      trControl = fitControl,
                      tuneGrid = data.frame(C = c(0.25, 0.5, 1, 5, 8, 12, 100)))
Does anybody have an idea what could be wrong? When I do not perform tuning, I get a very poorly performing SVM with around 52% accuracy, but at least I get one. So maybe something in the tuning setup is wrong?
Thank you very much for your help!

Rcpp instead of mapply in model validation for many subsets

Let's say that we have specified N train datasets (80:20 split) and we want to retrieve, for each train dataset, a two-element list with the p-values and coefficients from a glm model. The code reproducing this example is as follows:
library(parallel)
library(caret)
# prepare dataset
data(iris)
iris <- iris[!iris$Species == "setosa", ]
# create validation folds
set.seed(12345)
folds <- createDataPartition(y = iris$Species, times = 100, p = 0.8, list = FALSE)
# glm model expression
model.expr.tr <- expression(glm(formula = Species ~ Sepal.Length,
                                data = dtr,
                                family = binomial(link = "logit")))
# glm elements that will be validated
val_part <- list(coefs = expression(summary(eval(model.expr.tr))$coefficients[, 1]),
                 pvals = expression(summary(eval(model.expr.tr))$coefficients[, 4]))
# lapply with mapply for validation results
val_results <- lapply(val_part, function(x){
  mapply(function(i){
    trindex <- rownames(iris) %in% folds[, i]
    dtr <- iris[trindex, ]
    eval(x)
  },
  i = 1:100)
})
As you are aware, the longest part is running the model summary across all of the train datasets, especially if we choose more than 100 of them. In your opinion, is there any way to speed this process up? Of course I am aware of the parLapply / mcmapply options, but what about some kind of Rcpp speed-up in this case? Any suggestions?
Thanks.

How to sample without replacement and reweigh each time (conditional sampling)?

Consider a dataset of N rows with weights. This is the basic algorithm:
1. Normalize the weights so that they sum to 1.
2. Back up the weights into another column to record the sample probabilities.
3. Randomly choose 1 row (without replacement), given the sample probabilities, and add it to the sample dataset.
4. Remove the drawn weight from the original dataset, and recompute the sample probabilities by normalizing the weights of the remaining rows.
5. Repeat steps 3 and 4 until the sum of weights in the sample reaches or exceeds the threshold (assume 0.6).
Here is a toy example:
import pandas as pd
import numpy as np
def sampler(n):
    df = pd.DataFrame(np.random.rand(n), columns=['weight'])
    df['weight'] = df['weight']/df['weight'].sum()
    df['samp_prob'] = df['weight']
    samps = pd.DataFrame(columns=['weight'])
    while True:
        choice = np.random.choice(df.index, 1, replace=False, p=df['samp_prob'])[0]
        samps.loc[choice, 'weight'] = df.loc[choice, 'weight']
        df.drop(choice, axis=0, inplace=True)
        df['samp_prob'] = df['weight']/df['weight'].sum()
        if samps['weight'].sum() >= 0.6:
            break
    return samps
The problem with the toy example is the rapid, worse-than-linear growth in run times with increasing size of n:
Starting off approach
A few observations:
The dropping of rows per iteration, which creates a new dataframe each time, isn't helping with the performance.
It doesn't look easy to vectorize, BUT it should be easy to work with the underlying array data for performance. The idea would be to use masks and avoid re-creating dataframes or arrays. Starting off, we would use a two-column array, corresponding to the columns named 'weight' and 'samp_prob'.
So, with those in mind, the starting approach would be something like this -
def sampler2(n):
    a = np.random.rand(n,2)
    a[:,0] /= a[:,0].sum()
    a[:,1] = a[:,0]
    N = len(a)
    idx = np.arange(N)
    mask = np.ones(N, dtype=bool)
    while True:
        choice = np.random.choice(idx[mask], 1, replace=False, p=a[mask,1])[0]
        mask[choice] = 0
        a_masked = a[mask,0]
        a[mask,1] = a_masked/a_masked.sum()
        if a[~mask,0].sum() >= 0.6:
            break
    out = a[~mask,0]
    return out
Improvement #1
A later observation revealed that the first column of the array isn't changing across iterations. So we can optimize the masked summation for the first column by pre-computing the total sum; then, at each iteration, a[~mask,0].sum() is simply the total sum minus a_masked.sum(). This leads us to the first improvement, listed below -
def sampler3(n):
    a = np.random.rand(n,2)
    a[:,0] /= a[:,0].sum()
    a[:,1] = a[:,0]
    N = len(a)
    idx = np.arange(N)
    mask = np.ones(N, dtype=bool)
    a0_sum = a[:,0].sum()
    while True:
        choice = np.random.choice(idx[mask], 1, replace=False, p=a[mask,1])[0]
        mask[choice] = 0
        a_masked = a[mask,0]
        a_masked_sum = a_masked.sum()
        a[mask,1] = a_masked/a_masked_sum
        if a0_sum - a_masked_sum >= 0.6:
            break
    out = a[~mask,0]
    return out
Improvement #2
Now, slicing and masking into the columns of a 2D array could be improved by using two separate arrays instead, given that the first column wasn't changing between iterations. That gives us a modified version, like so -
def sampler4(n):
    a = np.random.rand(n)
    a /= a.sum()
    b = a.copy()
    N = len(a)
    idx = np.arange(N)
    mask = np.ones(N, dtype=bool)
    a_sum = a.sum()
    while True:
        choice = np.random.choice(idx[mask], 1, replace=False, p=b[mask])[0]
        mask[choice] = 0
        a_masked = a[mask]
        a_masked_sum = a_masked.sum()
        b[mask] = a_masked/a_masked_sum
        if a_sum - a_masked_sum >= 0.6:
            break
    out = a[~mask]
    return out
Runtime test -
In [250]: n = 1000
In [251]: %timeit sampler(n) # original app
...: %timeit sampler2(n)
...: %timeit sampler3(n)
...: %timeit sampler4(n)
1 loop, best of 3: 655 ms per loop
10 loops, best of 3: 50 ms per loop
10 loops, best of 3: 44.9 ms per loop
10 loops, best of 3: 38.4 ms per loop
In [252]: n = 2000
In [253]: %timeit sampler(n) # original app
...: %timeit sampler2(n)
...: %timeit sampler3(n)
...: %timeit sampler4(n)
1 loop, best of 3: 1.32 s per loop
10 loops, best of 3: 134 ms per loop
10 loops, best of 3: 119 ms per loop
10 loops, best of 3: 100 ms per loop
Thus, we are getting 17x+ and 13x+ speedups with the final version over the original method for n=1000 and n=2000 sizes!
I think you can rewrite this while loop to do it in a single pass:
while True:
    choice = np.random.choice(df.index, 1, replace=False, p=df['samp_prob'])[0]
    samps.loc[choice, 'weight'] = df.loc[choice, 'weight']
    df.drop(choice, axis=0, inplace=True)
    df['samp_prob'] = df['weight']/df['weight'].sum()
    if samps['weight'].sum() >= 0.6:
        break
to something more like:
n = len(df.index)
ind = np.random.choice(n, n, replace=False, p=df["samp_prob"])
res = df.iloc[ind]
i = (res["weight"].cumsum().values >= 0.6).argmax()  # first position whose running sum reaches 0.6
samps = res.iloc[:i+1]
The key parts are that choice can take multiple elements (indeed the entire array) whilst still respecting the probabilities. The cumsum allows you to cut off after passing the 0.6 threshold.
In this example you can see that the array is randomly permuted, but that 4 is most likely to be chosen near the top.
In [11]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[11]: array([0, 4, 3, 2, 1])
In [12]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[12]: array([3, 4, 1, 2, 0])
In [13]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[13]: array([0, 4, 3, 1, 2])
In [14]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[14]: array([4, 3, 0, 2, 1])
In [15]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[15]: array([4, 2, 3, 0, 1])
In [16]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[16]: array([3, 4, 2, 0, 1])
Note: replace=False ensures the probabilities are "reweighed" in the sense that an element that has been picked can't be picked again.
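As a quick empirical check (my illustration, not part of the original answer), the first two positions of a replace=False draw follow the same distribution as the question's draw-renormalize-draw loop:

import numpy as np

p = np.array([0.05, 0.05, 0.1, 0.2, 0.6])
trials = 100000

def manual_first_two(p):
    # one draw, zero it out, renormalize, draw again -- the question's loop, unrolled
    first = np.random.choice(5, p=p)
    q = p.copy()
    q[first] = 0.0
    q /= q.sum()
    second = np.random.choice(5, p=q)
    return first, second

one_shot = np.array([np.random.choice(5, 2, replace=False, p=p) for _ in range(trials)])
manual = np.array([manual_first_two(p) for _ in range(trials)])
for pos in (0, 1):
    # per-position frequencies agree to within sampling noise
    print(np.bincount(one_shot[:, pos], minlength=5) / trials)
    print(np.bincount(manual[:, pos], minlength=5) / trials)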

What does it mean when train and validation loss diverge from epoch 1?

I was recently working on a deep learning model in Keras and it gave me very perplexing results. The model is capable of mastering the training data over time, but it consistently gets worse results on the validation data.
I know that if the validation accuracy goes up for a while and then starts to decrease that you are over-fitting to the training data, but in this case, the validation accuracy only ever decreases. I am really confused why this happens. Does anyone have any intuition as to what could cause this to happen? Or any suggestions on things to test to potentially fix it?
Edit to add more info and code
OK. So I am making a model that is trying to do some basic stock prediction. By looking at the open, high, low, close, and volume of the last 40 days, the model tries to predict whether the price will go up two average true ranges without going down one average true range. As input, I took CSVs from Yahoo Finance that include this information for the last 30 years for all of the stocks in the Dow Jones Industrial Average. The model trains on 70% of the stocks and validates on the other 30%. This leads to about 150,000 training samples. I am currently using a 1D convolutional neural network, but I have also tried other, smaller models (logistic regression and a small feed-forward NN), and I always get either the same diverging train and validation loss, or nothing learned at all because the model is too simple.
Here is the code:
import numpy as np
from sklearn import preprocessing
from sklearn.metrics import auc, roc_curve, roc_auc_score
from keras.layers import Input, Dense, Flatten, Conv1D, Activation, MaxPooling1D, Dropout, Concatenate
from keras.models import Model
from keras.callbacks import ModelCheckpoint, EarlyStopping, Callback
from keras import backend as K
import matplotlib.pyplot as plt
from random import seed, shuffle
from os import listdir
class roc_auc(Callback):
    def on_train_begin(self, logs={}):
        self.aucs = []
    def on_train_end(self, logs={}):
        return
    def on_epoch_begin(self, epoch, logs={}):
        return
    def on_epoch_end(self, epoch, logs={}):
        y_pred = self.model.predict(self.validation_data[0])
        self.aucs.append(roc_auc_score(self.validation_data[1], y_pred))
        if max(self.aucs) == self.aucs[-1]:
            model.save_weights("weights.roc_auc.hdf5")
        print(" - auc: %0.4f" % self.aucs[-1])
        return
    def on_batch_begin(self, batch, logs={}):
        return
    def on_batch_end(self, batch, logs={}):
        return
rrr = 2
epochs = 200
batch_size = 64
days_input = 40
seed(42)
X_train = []
X_test = []
y_train = []
y_test = []
files = listdir("Stocks")
total_stocks = len(files)
shuffle(files)
for x, file in enumerate(files):
    test = False
    if (x+1.0)/total_stocks > 0.7:
        test = True
    if test:
        print("Test -> Stocks/%s" % file)
    else:
        print("Train -> Stocks/%s" % file)
    stock = np.loadtxt(open("Stocks/"+file, "r"), delimiter=",", skiprows=1, usecols=(1,2,3,5,6))
    atr = []
    last = None
    for day in stock:
        if last is None:
            tr = abs(day[1] - day[2])
            atr.append(tr)
        else:
            tr = max(day[1] - day[2], abs(last[3] - day[1]), abs(last[3] - day[2]))
            atr.append((13*atr[-1]+tr)/14)
        last = day.copy()
    stock = np.insert(stock, 5, atr, axis=1)
    for i in range(days_input, stock.shape[0]-1):
        input = stock[i-days_input:i, 0:5].copy()
        for j, day in enumerate(input):
            input[j][1] = (day[1]-day[0])/day[0]
            input[j][2] = (day[2]-day[0])/day[0]
            input[j][3] = (day[3]-day[0])/day[0]
        input[:,0] = input[:,0] / np.linalg.norm(input[:,0])
        input[:,1] = input[:,1] / np.linalg.norm(input[:,1])
        input[:,2] = input[:,2] / np.linalg.norm(input[:,2])
        input[:,3] = input[:,3] / np.linalg.norm(input[:,3])
        input[:,4] = input[:,4] / np.linalg.norm(input[:,4])
        preprocessing.scale(input, copy=False)
        output = -1
        buy = stock[i][1]
        stoploss = buy - stock[i][5]
        target = buy + rrr*stock[i][5]
        for j in range(i+1, stock.shape[0]):
            if stock[j][0] < stoploss or stock[j][2] < stoploss:
                output = 0
                break
            elif stock[j][1] > target:
                output = 1
                break
        if output != -1:
            if test:
                X_test.append(input)
                y_test.append(output)
            else:
                X_train.append(input)
                y_train.append(output)
shape = list(X_train[0].shape)
shape[:0] = [len(X_train)]
X_train = np.concatenate(X_train).reshape(shape)
y_train = np.array(y_train)
shape = list(X_test[0].shape)
shape[:0] = [len(X_test)]
X_test = np.concatenate(X_test).reshape(shape)
y_test = np.array(y_test)
print("Train class split is %0.2f" % (100*np.average(y_train)))
print("Test class split is %0.2f" % (100*np.average(y_test)))
inputs = Input(shape=(days_input,5))
x = Conv1D(32, 5, padding='same')(inputs)
x = Activation('relu')(x)
x = MaxPooling1D()(x)
x = Conv1D(64, 5, padding='same')(x)
x = Activation('relu')(x)
x = MaxPooling1D()(x)
x = Conv1D(128, 5, padding='same')(x)
x = Activation('relu')(x)
x = MaxPooling1D()(x)
x = Flatten()(x)
x = Dense(128, activation="relu")(x)
x = Dense(64, activation="relu")(x)
output = Dense(1, activation="sigmoid")(x)
model = Model(inputs=inputs,outputs=output)
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
filepath="weights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=0, save_best_only=True, mode='max')
auc_hist = roc_auc()
callbacks_list = [checkpoint, auc_hist]
history = model.fit(X_train, y_train, validation_data=(X_test,y_test) , epochs=epochs, callbacks=callbacks_list, batch_size=batch_size, class_weight ='balanced').history
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
model.save_weights("weights.latest.hdf5")
model.load_weights("weights.roc_auc.hdf5")
plt.plot(history['acc'])
plt.plot(history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(history['loss'])
plt.plot(history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(auc_hist.aucs)
plt.title('model ROC AUC')
plt.ylabel('AUC')
plt.xlabel('epoch')
plt.show()
y_pred = model.predict(X_train)
fpr, tpr, _ = roc_curve(y_train, y_pred)
roc_auc = auc(fpr, tpr)
plt.subplot(1, 2, 1)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy',linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Train ROC')
plt.legend(loc="lower right")
y_pred = model.predict(X_test)
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
roc_auc = auc(fpr, tpr)
plt.subplot(1, 2, 2)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy',linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Test ROC')
plt.legend(loc="lower right")
plt.show()
with open('roc.csv','w+') as file:
    for i in range(len(thresholds)):
        file.write("%f,%f,%f\n" % (fpr[i], tpr[i], thresholds[i]))
Results by 100 batches instead of by epoch
I listened to the suggestions and made a few updates. The classes are now balanced 50/50 instead of 25/75. Also, the validation data is now randomly selected instead of being a specific set of stocks. By graphing the loss and accuracy at a finer resolution (100 batches vs. 1 epoch), the over-fitting can clearly be seen. The model actually starts to learn at the very beginning before it begins to diverge. I am surprised at how fast it starts to over-fit, but now that I can see the issue, hopefully I can debug it.
Possible explanations
Coding error
Overfitting due to differences in the training / validation data
Skewed classes (and differences in the training / validation data)
Things I would try
Swapping the training and the validation set. Does the problem still occur?
Plot the curves in more detail for the first ~10 epochs (e.g. directly after initialization; every few training iterations, not only per epoch; see the sketch below). Do you still start at > 75%? Then your classes might be skewed and you might also want to check whether your training-validation split is stratified.
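For the finer-resolution curves, here is a minimal sketch (my illustration, not from the original answer) using a Keras LambdaCallback to record the loss after every batch:

from keras.callbacks import LambdaCallback

# record the training loss after every batch instead of once per epoch,
# so the first few hundred iterations can be plotted in detail
batch_losses = []
batch_logger = LambdaCallback(
    on_batch_end=lambda batch, logs: batch_losses.append(logs['loss']))

# then pass it in with the existing callbacks:
# model.fit(..., callbacks=[checkpoint, auc_hist, batch_logger])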
Code
This is useless: np.concatenate(X_train)
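For illustration (my sketch, not part of the original answer): since every window has the same (days_input, 5) shape, np.stack builds the sample array in one call, with no shape bookkeeping:

import numpy as np

# hypothetical stand-in for the list of (days_input, 5) windows built in the loop
X_train = [np.random.rand(40, 5) for _ in range(3)]

# replaces the shape list plus np.concatenate(X_train).reshape(shape)
X_train = np.stack(X_train)
print(X_train.shape)  # (3, 40, 5)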
Make your code as readable as possible when you post it here. This includes removing lines which are commented out.
This looks suspiciously like a coding error to me:
if test:
    X_test.append(input)
    y_test.append(output)
else:
    #if((output == 0 and np.average(y_train) > 0.5) or output == 1):
    X_train.append(input)
    y_train.append(output)
Use sklearn.model_selection.train_test_split instead. Do all transformations to the data first, then make the split with this method.
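A minimal sketch of that suggestion (placeholder arrays stand in for the question's windowed price data):

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 40, 5)         # placeholder features
y = np.random.randint(0, 2, size=1000)  # placeholder labels

# stratify=y keeps the class balance identical in both sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)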
Looks like the batch size is much too small for the number of training samples you have. Try batching 20% and see if that makes a difference.

How to speed up monte carlo simulation in python

I have written a poker simulator that calculates the probability of winning in Texas hold'em by running a simulation of games (i.e. a Monte Carlo simulation). Currently it runs around 10,000 simulations in 10 seconds, which in general is good enough. However, there are apps on the iPhone that run around 100x faster. I'm wondering whether there's anything I can do to speed up the program significantly. I have already replaced strings with lists and found many ways to speed up the program by around 2-3 times. But what can I do to possibly speed it up 50x-100x? I checked with a profiler but couldn't find any significant bottleneck. I also compiled it with Cython (without making any changes), but that had no effect on speed either. Any suggestions are appreciated. The full listing is below:
__author__ = 'Nicolas Dickreuter'
import time
import numpy as np
from collections import Counter
class MonteCarlo(object):
    def EvalBestHand(self, hands):
        scores = [(i, self.score(hand)) for i, hand in enumerate(hands)]
        winner = sorted(scores, key=lambda x: x[1], reverse=True)[0][0]
        return hands[winner], scores[winner][1][-1]
    def score(self, hand):
        crdRanksOriginal = '23456789TJQKA'
        originalSuits = 'CDHS'
        rcounts = {crdRanksOriginal.find(r): ''.join(hand).count(r) for r, _ in hand}.items()
        score, crdRanks = zip(*sorted((cnt, rank) for rank, cnt in rcounts)[::-1])
        potentialThreeOfAKind = score[0] == 3
        potentialTwoPair = score == (2, 2, 1, 1, 1)
        potentialPair = score == (2, 1, 1, 1, 1, 1)
        if score[0:2] == (3, 2) or score[0:2] == (3, 3):  # full house (three of a kind and pair, or two three of a kind)
            crdRanks = (crdRanks[0], crdRanks[1])
            score = (3, 2)
        elif score[0:4] == (2, 2, 2, 1):
            score = (2, 2, 1)  # three pair are not worth more than two pair
            sortedCrdRanks = sorted(crdRanks, reverse=True)  # avoid for example 11,8,6,7
            crdRanks = (sortedCrdRanks[0], sortedCrdRanks[1], sortedCrdRanks[2], sortedCrdRanks[3])
        elif len(score) >= 5:  # high card, flush, straight and straight flush
            # straight
            if 12 in crdRanks:  # adjust if 5 high straight
                crdRanks += (-1,)
            sortedCrdRanks = sorted(crdRanks, reverse=True)  # sort again, as with pairs the first rank matches the pair
            for i in range(len(sortedCrdRanks) - 4):
                straight = sortedCrdRanks[i] - sortedCrdRanks[i + 4] == 4
                if straight:
                    crdRanks = (sortedCrdRanks[i], sortedCrdRanks[i+1], sortedCrdRanks[i+2], sortedCrdRanks[i+3], sortedCrdRanks[i+4])
                    break
            # flush
            suits = [s for _, s in hand]
            flush = max(suits.count(s) for s in suits) >= 5
            if flush:
                for flushSuit in originalSuits:  # get the suit of the flush
                    if suits.count(flushSuit) >= 5:
                        break
                flushHand = [k for k in hand if flushSuit in k]
                rcountsFlush = {crdRanksOriginal.find(r): ''.join(flushHand).count(r) for r, _ in flushHand}.items()
                score, crdRanks = zip(*sorted((cnt, rank) for rank, cnt in rcountsFlush)[::-1])
                crdRanks = tuple(sorted(crdRanks, reverse=True))  # ignore original sorting where pairs had influence
                # check for straight in flush
                if 12 in crdRanks:  # adjust if 5 high straight
                    crdRanks += (-1,)
                for i in range(len(crdRanks) - 4):
                    straight = crdRanks[i] - crdRanks[i + 4] == 4
                    if straight:  # stop at the highest straight
                        break
            # no pair, straight, flush, or straight flush
            score = ([(1,), (3, 1, 2)], [(3, 1, 3), (5,)])[flush][straight]
        if score == (1,) and potentialThreeOfAKind: score = (3, 1)
        elif score == (1,) and potentialTwoPair: score = (2, 2, 1)
        elif score == (1,) and potentialPair: score = (2, 1, 1)
        if score[0] == 5:
            handType = "StraightFlush"
            #crdRanks = crdRanks[:5]  # five card rule makes no difference, [:5] would be incorrect
        elif score[0] == 4:
            handType = "FourOfAKind"
            crdRanks = crdRanks[:2]
        elif score[0:2] == (3, 2):
            handType = "FullHouse"
            # crdRanks = crdRanks[:2]  # already implemented above
        elif score[0:3] == (3, 1, 3):
            handType = "Flush"
            crdRanks = crdRanks[:5]  # !! to be verified !!
        elif score[0:3] == (3, 1, 2):
            handType = "Straight"
            crdRanks = crdRanks[:5]  # !! to be verified !!
        elif score[0:2] == (3, 1):
            handType = "ThreeOfAKind"
            crdRanks = crdRanks[:3]
        elif score[0:2] == (2, 2):
            handType = "TwoPair"
            crdRanks = crdRanks[:3]
        elif score[0] == 2:
            handType = "Pair"
            crdRanks = crdRanks[:4]
        elif score[0] == 1:
            handType = "HighCard"
            crdRanks = crdRanks[:5]
        else:
            raise Exception('Card Type error!')
        return score, crdRanks, handType
    def createCardDeck(self):
        values = "23456789TJQKA"
        suites = "CDHS"
        Deck = [x+y for x in values for y in suites]
        return Deck
    def distributeToPlayers(self, Deck, PlayerAmount, PlayerCardList, TableCardsList):
        Players = []
        CardsOnTable = []
        knownPlayers = 0
        for PlayerCards in PlayerCardList:
            FirstPlayer = []
            FirstPlayer.append(Deck.pop(Deck.index(PlayerCards[0])))
            FirstPlayer.append(Deck.pop(Deck.index(PlayerCards[1])))
            Players.append(FirstPlayer)
            knownPlayers += 1
        for c in TableCardsList:
            CardsOnTable.append(Deck.pop(Deck.index(c)))  # remove cards that are on the table from the deck
        for n in range(0, PlayerAmount - knownPlayers):
            plr = []
            plr.append(Deck.pop(np.random.random_integers(0, len(Deck)-1)))
            plr.append(Deck.pop(np.random.random_integers(0, len(Deck)-1)))
            Players.append(plr)
        return Players, Deck
    def distributeToTable(self, Deck, TableCardsList):
        remainingRandoms = 5 - len(TableCardsList)
        for n in range(0, remainingRandoms):
            TableCardsList.append(Deck.pop(np.random.random_integers(0, len(Deck)-1)))
        return TableCardsList
    def RunMonteCarlo(self, originalPlayerCardList, originalTableCardsList, PlayerAmount, gui, maxRuns=6000, maxSecs=5):
        winnerlist = []
        EquityList = []
        winnerCardTypeList = []
        wins = 0
        runs = 0
        timeout_start = time.time()
        for m in range(maxRuns):
            runs += 1
            try:
                if gui.active == True:
                    gui.progress["value"] = int(round(m*100/maxRuns))
            except:
                pass
            Deck = self.createCardDeck()
            PlayerCardList = originalPlayerCardList[:]
            TableCardsList = originalTableCardsList[:]
            Players, Deck = self.distributeToPlayers(Deck, PlayerAmount, PlayerCardList, TableCardsList)
            Deck5Cards = self.distributeToTable(Deck, TableCardsList)
            PlayerFinalCardsWithTableCards = []
            for o in range(0, PlayerAmount):
                PlayerFinalCardsWithTableCards.append(Players[o]+Deck5Cards)
            bestHand, winnerCardType = self.EvalBestHand(PlayerFinalCardsWithTableCards)
            winner = PlayerFinalCardsWithTableCards.index(bestHand)
            #print (winnerCardType)
            CollusionPlayers = 0
            if winner < CollusionPlayers + 1:
                wins += 1
                winnerCardTypeList.append(winnerCardType)
            # winnerlist.append(winner)
            # self.equity=wins/m
            # if self.equity>0.99: self.equity=0.99
            # EquityList.append(self.equity)
            if time.time() > timeout_start + maxSecs:
                break
        self.equity = wins / runs
        self.winnerCardTypeList = Counter(winnerCardTypeList)
        for key, value in self.winnerCardTypeList.items():
            self.winnerCardTypeList[key] = value / runs
        self.winTypesDict = self.winnerCardTypeList.items()
        # show how the monte carlo converges
        # xaxis=range(500,monteCarloRuns)
        # plt.plot(xaxis,EquityList[499:monteCarloRuns])
        # plt.show()
        return self.equity, self.winTypesDict
if __name__ == '__main__':
    Simulation = MonteCarlo()
    mycards = [['AS', 'KS']]
    cardsOnTable = []
    players = 3
    start_time = time.time()
    Simulation.RunMonteCarlo(mycards, cardsOnTable, players, 1, maxRuns=200000, maxSecs=120)
    print("--- %s seconds ---" % (time.time() - start_time))
    equity = Simulation.equity  # considering draws as wins
    print(equity)
This algorithm is too long and complex for any sensible suggestion apart from generic ones. So here you go: vectorise everything you can and work with arrays of data instead of loops, lists, insertions and removals. Monte Carlo simulations allow for this quite nicely because all samples are independent.
For example, if you want 1 million retries, generate a numpy array of 1 million random values in one line. If you want to test an outcome, put 1 million outcomes into a numpy array and test all of them at once against a given condition, generating a numpy array of booleans.
If you manage to vectorize this, you will get tremendous performance increases.
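A toy sketch of the pattern (my example, not taken from the poker code): estimate P(U1 + U2 > 1.5) for two uniform draws by generating every random value up front and testing the condition on whole arrays:

import numpy as np

n = 1000000
draws = np.random.rand(n, 2)        # all random values generated in one call
outcomes = draws.sum(axis=1) > 1.5  # condition tested on the whole array at once
print(outcomes.mean())              # ~0.125, with no Python-level loop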
Inefficient array operations are probably causing most of the slowness. In particular, I see a lot of lines like this:
Deck.pop(some_index)
Every time you do that, the list has to shift all the elements after that index. You can eliminate this (as well as repeated random functions) by using:
from random import shuffle
# Setup
originalDeck = createCardDeck()
# Individual run
Deck = originalDeck[:]
shuffle(Deck) # shuffles in place
# Draw a card from the end so nothing needs to shift
card = Deck.pop()
# Concise and efficient!
There are some other things you could do, like changing long if-else chains into constant-time lookups with a dict or custom class. Right now the code checks for a straight flush first, down to a high card last, for every hand, which seems rather inefficient. I doubt it would make a big difference, though.
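A minimal sketch of the dict-lookup idea (the keys here are illustrative stand-ins, not the scorer's exact score tuples):

# map leading score values straight to hand names; dict lookup is constant time
HAND_TYPE = {
    (5,): "StraightFlush",
    (4,): "FourOfAKind",
    (3, 2): "FullHouse",
    (2, 2): "TwoPair",
    (2, 1): "Pair",
    (1,): "HighCard",
}

def hand_type(score):
    # try the two-element prefix first, then fall back to the one-element prefix
    return HAND_TYPE.get(tuple(score[:2]), HAND_TYPE.get(tuple(score[:1]), "Unknown"))

print(hand_type((3, 2)))  # FullHouse
print(hand_type((1,)))    # HighCard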
