I know this ResourceExhaustedError comes up frequently, and I understand what can cause it.
For example, running this model with 163 images of 150x150 gives me the error (however, it's not clear to me why, even though I set batch_size, Keras still seems to try to allocate all the images in GPU memory at once):
model = Sequential()
model.add(Conv2D(64, kernel_size=(6, 6), activation='relu', input_shape=input_shape, padding='same', name='b1_conv'))
model.add(MaxPooling2D(pool_size=(2, 2), name='b1_poll'))
model.add(Conv2D(128, kernel_size=(6, 6), activation='relu', padding='same', name='b2_conv'))
model.add(MaxPooling2D(pool_size=(2, 2), name='b2_pool'))
model.add(Conv2D(256, kernel_size=(6, 6), activation='relu', padding='same', name='b3_conv'))
model.add(MaxPooling2D(pool_size=(2, 2), name='b3_pool'))
model.add(Flatten())
model.add(Dense(500, activation='relu', name='fc1'))
model.add(Dropout(0.5))
model.add(Dense(500, activation='relu', name='fc2'))
model.add(Dropout(0.5))
model.add(Dense(n_targets, activation='softmax', name='prediction'))
model.compile(optimizer=optim, loss='categorical_crossentropy', metrics=['accuracy'])
Given that, I reduced the image size to 30x30 (which resulted in an accuracy drop, as expected). However, running grid search on this model still raises ResourceExhaustedError.
model = KerasClassifier(build_fn=create_model, verbose=0)
# grid initial weight, batch size and optimizer
sgd = optimizers.SGD(lr=0.0005)
rms = optimizers.RMSprop(lr=0.0005)
adag = optimizers.Adagrad(lr=0.0005)
adad = optimizers.Adadelta(lr=0.0005)
adam = optimizers.Adam(lr=0.0005)
adamm = optimizers.Adamax(lr=0.0005)
nadam = optimizers.Nadam(lr=0.0005)
# use a name that does not shadow the imported keras.optimizers module
optims = [sgd, rms, adag, adad, adam, adamm, nadam]
init = ['glorot_uniform', 'normal', 'uniform', 'he_normal']
batches = [32, 64, 128]
param_grid = dict(optim=optims, batch_size=batches, init=init)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid_result = grid.fit(X_train, y_train)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
I wonder if it's possible to "clean" things up before each combination tried by the grid search (I'm not sure I'm making myself clear; this is all new to me).
EDIT
Using the fit_generator also gives me the same error:
def generator(features, labels, batch_size):
    # Create empty arrays to contain a batch of features and labels
    batch_features = np.zeros((batch_size, size, size, 1))
    batch_labels = np.zeros((batch_size, n_targets))
    while True:
        for i in range(batch_size):
            # choose a random index into features
            index = np.random.choice(len(features), 1)
            batch_features[i] = features[index]
            batch_labels[i] = labels[index]
        yield batch_features, batch_labels
sgd = optimizers.SGD(lr=0.0005)
rms = optimizers.RMSprop(lr=0.0005)
adag = optimizers.Adagrad(lr=0.0005)
adad = optimizers.Adadelta(lr=0.0005)
adam = optimizers.Adam(lr=0.0005)
adamm = optimizers.Adamax(lr=0.0005)
nadam = optimizers.Nadam(lr=0.0005)
optim = [rms, adag, adad, adam, adamm, nadam]
init = ['normal', 'uniform', 'he_normal']
combinations = [(a, b) for a in optim for b in init]
for combination in combinations:
    init = combination[1]
    optim = combination[0]
    model = create_model(init=init, optim=optim)
    model.fit_generator(generator(X_train, y_train, batch_size=32),
                        steps_per_epoch=X_train.shape[0] // 32,
                        epochs=100, verbose=0, validation_data=(X_test, y_test))
    scores = model.model.evaluate(X_test, y_test, verbose=0)
    print("%s: %.2f%% Model %s %s" % (model.model.metrics_names[1], scores[1]*100, optim, init))
You should work with generators + yield; they discard data from memory once it has been used. Check out my answer to a similar question.
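For illustration, here is a minimal sketch (not from the original post) of a generator that loads images from disk one batch at a time, so only the current batch lives in memory; image_paths and load_image are hypothetical names:
import numpy as np

def lazy_generator(image_paths, labels, batch_size, size):
    # image_paths: list of file paths; load_image(path, size) is a hypothetical
    # loader that reads one image from disk on demand
    while True:
        idx = np.random.choice(len(image_paths), batch_size, replace=False)
        batch_x = np.stack([load_image(image_paths[i], size) for i in idx])
        batch_y = labels[idx]
        yield batch_x, batch_y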
You have to clear the TensorFlow session after each training/evaluation run:
K.clear_session()
with K being the Keras backend (from keras import backend as K).
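For example, a minimal sketch of how this could be dropped into the manual loop over optimizer/init combinations from the edit above (assuming create_model, generator, and the data arrays from the question):
from keras import backend as K

for optim, init in combinations:
    model = create_model(init=init, optim=optim)
    model.fit_generator(generator(X_train, y_train, batch_size=32),
                        steps_per_epoch=X_train.shape[0] // 32,
                        epochs=100, verbose=0)
    scores = model.evaluate(X_test, y_test, verbose=0)
    print("%s %s -> %.2f%%" % (optim, init, scores[1] * 100))
    # release the graph and GPU memory held by this model before the next combination
    K.clear_session()
    del model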
Related
I'm trying to do some RL, which requires repeatedly calling model.predict or model.forward on single samples instead of calling either function on large batches. I noticed that this was surprisingly slow. For example, the following code ran (on CPU) in around 6 seconds:
import numpy as np
import time
# %%
## Tensorflow Example:
import tensorflow as tf
from tensorflow.keras import layers, models
#tf.compat.v1.disable_eager_execution()
model = models.Sequential()
model.add(layers.Conv2D(8, (4, 4), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(8, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(5, activation='relu'))
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
X = np.zeros([1,32,32,3])
t = time.time()
for i in range(100):
    model.predict(X, verbose=0)
print(time.time() - t)
The same network in PyTorch takes ~10 seconds:
# %%
## Pytorch Example:
import torch
import torch.nn as nn
import torch.nn.functional as F
Y = np.zeros([1,3,32,32])
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 4)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(8, 8, 3)
        self.fc1 = nn.Linear(8 * 12 * 12, 5)

    def forward(self, x):
        x = torch.from_numpy(x).float()
        x = self.pool(F.relu(self.conv1(x)))
        x = F.relu(self.conv2(x))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        return x
net = Net()
net.forward(Y)
# %%
Y = np.zeros([1,3,32,32])
t = time.time()
with torch.no_grad():
    for i in range(100):
        net.forward(Y)
print(time.time() - t)
This is surprisingly slow to me. Running the networks on batches, they execute quickly (<0.1 s for batches of 1000).
After some poking around, I came across the tf.compat.v1.disable_eager_execution() line commented out at the top of the TensorFlow example. Disabling eager execution drops the loop time to around 0.1 s per 100 calls, or 0.85 s per 1000 calls. However, this is still much slower than just calling predict on one batch, where 1000 samples are predicted in ~.05 ms.
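For reference, a minimal sketch of that re-run with the line enabled; note that disable_eager_execution() has to be called before any graphs or ops are created, i.e. before building the model:
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

tf.compat.v1.disable_eager_execution()  # must run before any graphs/ops exist

model = models.Sequential([
    layers.Conv2D(8, (4, 4), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(8, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(5, activation='relu'),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

X = np.zeros([1, 32, 32, 3])
t = time.time()
for _ in range(100):
    model.predict(X, verbose=0)
print(time.time() - t)  # roughly 0.1 s per 100 calls, per the timings above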
So, two questions: first, is there a way to speed up prediction for either PyTorch or TensorFlow when repeatedly calling predict in an RL setting?
Second, is there an equivalent of turning off eager execution for PyTorch on CPU?
I'm trying to build a binary image classification model using Keras. Unfortunately, I get the same output every time. The probability for each test sample is different, but they all favor one label.
The datasets are balanced: label L (n=250) vs. label E (n=250), with 300 samples for training, 100 for validation, and 100 for testing. There is no sample overlap among those groups.
After failing on the test dataset, I also ran prediction on the training dataset, i.e. the model predicted the very samples it had just been trained on. I know that does not make much sense, but it also gave the same output: Counter({0: 300}).
from keras.layers.core import Dense, Flatten, Dropout
from keras.layers.convolutional import Conv2D, MaxPooling2D, SeparableConv2D
import keras
from keras import layers
from skimage.transform import resize
import math
import os,random
import cv2
import numpy as np
import pandas as pd
from keras.models import Sequential
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
from collections import Counter
import matplotlib.pyplot as plt
class DataGenerator(keras.utils.Sequence):
    def __init__(self, datas, batch_size=32, shuffle=True):
        self.batch_size = batch_size
        self.datas = datas
        self.indexes = np.arange(len(self.datas))
        self.shuffle = shuffle

    def __len__(self):
        return math.ceil(len(self.datas) / float(self.batch_size))

    def __getitem__(self, index):
        batch_indexs = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        batch_datas = [self.datas[k] for k in batch_indexs]
        X, y = self.data_generation(batch_datas)
        return X, y

    def on_epoch_end(self):
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def data_generation(self, batch_datas):
        images = []
        labels = []
        for i, data in enumerate(batch_datas):
            image = resize((cv2.imread(data)/255), (128, 128))
            image = list(image)
            images.append(image)
            right = data.rfind("\\", 0)
            left = data.rfind("\\", 0, right) + 1
            class_name = data[left:right]
            if class_name == "e":
                labels.append(0)
            else:
                labels.append(1)
        return np.array(images), np.array(labels)
def create_model():
    model = Sequential()
    model.add(Conv2D(8, kernel_size=(3, 3),
                     input_shape=(128, 128, 3),
                     activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(16, kernel_size=(3, 3),
                     padding="same",
                     activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, kernel_size=(3, 3),
                     activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(units=64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=1, activation='sigmoid'))
    model.compile(optimizer='sgd',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
e_train = []
e_test = []
l_test = []
l_train = []
for file in os.listdir(r'\e\train'):
    e_train.append(os.path.join(r'\e\train', file))
for file in os.listdir(r'\e\test'):
    e_test.append(os.path.join(r'\e\test', file))
for file in os.listdir(r'\l\train'):
    l_train.append(os.path.join(r'\l\train', file))
for file in os.listdir(r'\l\test'):
    l_test.append(os.path.join(r'\l\test', file))
data_tr = np.array(e_train + l_train)  # array so the KFold index arrays below can be used directly
data_te = e_test + l_test
g_te = DataGenerator(data_te)
kf = KFold(n_splits=4, shuffle=True, random_state=seed)
fold = 1
for train, test in kf.split(data_tr):
    model = create_model()
    g_tr = DataGenerator(data_tr[train])
    g_v = DataGenerator(data_tr[test])
    H = model.fit_generator(generator=g_tr, epochs=10,
                            validation_data=g_v, shuffle=False,
                            max_queue_size=10, workers=1)
    pred = model.predict(g_te, max_queue_size=10, workers=1, verbose=1)
    print(pred)
    # the probabilities differ per sample, but the right column is always bigger
    # [[0.49817565 0.5018243 ]
    #  [0.4872172  0.5127828 ]
    #  [0.48092505 0.519075  ]
    predicted_class_indices = [np.argmax(probas) for probas in pred]
    print(Counter(predicted_class_indices))
    # the output was always the same
    # Counter({0: 100})
    fold = fold + 1
Any thoughts would be appreciated.
Solution:
Instead of solving a binary classification problem, convert it into a multi-class problem with two classes. The last layer then uses a softmax activation, which provides a probability distribution over the classes. Refer to this tutorial to understand the changes which need to be made.
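As a rough sketch of what that change could look like (a shortened model, not the full architecture from the question; the integer labels 0/1 from the generator can stay as they are):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def create_model_two_class():
    model = Sequential()
    model.add(Conv2D(8, (3, 3), activation='relu', input_shape=(128, 128, 3)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.5))
    # two output units with softmax instead of a single sigmoid unit
    model.add(Dense(2, activation='softmax'))
    # sparse_categorical_crossentropy works with the integer labels (0/1)
    model.compile(optimizer='sgd',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model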
Explanation
You should not use the sigmoid activation in the output layer of the model while using relu activation in the intermediate layers. The vanilla ReLU (Rectified Linear Unit) activation is defined as ReLU(x) = max(0, x).
Hence, the range of the activation function is [0, infinity). On the other hand, the sigmoid activation is sigmoid(x) = 1 / (1 + e^(-x)).
The range of the sigmoid function is (0, 1). So, if a large signal (> 0) is passed through the sigmoid function, the output will be very close to 1, i.e. a fully saturated firing. The output of the relu function can provide a large signal from the intermediate layers, hence producing fully saturated firing (all 1s) at the output layer where the sigmoid activation is applied.
If the logits are [ 5.6 , 1.2 , 3.2 , 4.8 ], the output of the sigmoid function is,
[0.9963157 , 0.76852477, 0.96083426, 0.99183744]
and that of softmax is,
[0.6441953 , 0.00790901, 0.05844009, 0.2894557 ]
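As a quick check, a small NumPy snippet (my own, not from the original answer) reproduces those numbers:
import numpy as np

logits = np.array([5.6, 1.2, 3.2, 4.8])

sigmoid = 1.0 / (1.0 + np.exp(-logits))          # element-wise; each value saturates near 1
softmax = np.exp(logits) / np.exp(logits).sum()  # a proper probability distribution over classes

print(sigmoid)  # [0.9963157  0.76852477 0.96083426 0.99183744]
print(softmax)  # [0.6441953  0.00790901 0.05844009 0.2894557 ]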
I'm really new to GPU coding. I found this K-means CuPy code; my purpose is to work with a large database of shape (n, 3), for example, to see the timing difference between GPU and CPU. I want a huge number of clusters, but I am getting a memory management error. Can someone point me to what I should research to fix it? I have already searched, but I don't have a clear starting point yet.
import contextlib
import time
import cupy
import matplotlib.pyplot as plt
import numpy
@contextlib.contextmanager
def timer(message):
    cupy.cuda.Stream.null.synchronize()
    start = time.time()
    yield
    cupy.cuda.Stream.null.synchronize()
    end = time.time()
    print('%s: %f sec' % (message, end - start))
var_kernel = cupy.ElementwiseKernel(
    'T x0, T x1, T c0, T c1', 'T out',
    'out = (x0 - c0) * (x0 - c0) + (x1 - c1) * (x1 - c1)',
    'var_kernel'
)
sum_kernel = cupy.ReductionKernel(
    'T x, S mask', 'T out',
    'mask ? x : 0',
    'a + b', 'out = a', '0',
    'sum_kernel'
)
count_kernel = cupy.ReductionKernel(
    'T mask', 'float32 out',
    'mask ? 1.0 : 0.0',
    'a + b', 'out = a', '0.0',
    'count_kernel'
)
def fit_xp(X, n_clusters, max_iter):
    assert X.ndim == 2
    # Get NumPy or CuPy module from the supplied array.
    xp = cupy.get_array_module(X)
    n_samples = len(X)
    # Make an array to store the labels indicating which cluster each sample is
    # contained.
    pred = xp.zeros(n_samples)
    # Choose the initial centroid for each cluster.
    initial_indexes = xp.random.choice(n_samples, n_clusters, replace=False)
    centers = X[initial_indexes]
    for _ in range(max_iter):
        # Compute the new label for each sample.
        distances = xp.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_pred = xp.argmin(distances, axis=1)
        # If the label is not changed for each sample, we suppose the
        # algorithm has converged and exit from the loop.
        if xp.all(new_pred == pred):
            break
        pred = new_pred
        # Compute the new centroid for each cluster.
        i = xp.arange(n_clusters)
        mask = pred == i[:, None]
        sums = xp.where(mask[:, :, None], X, 0).sum(axis=1)
        counts = xp.count_nonzero(mask, axis=1).reshape((n_clusters, 1))
        centers = sums / counts
    return centers, pred
def fit_custom(X, n_clusters, max_iter):
    assert X.ndim == 2
    n_samples = len(X)
    pred = cupy.zeros(n_samples, dtype='float32')
    initial_indexes = cupy.random.choice(n_samples, n_clusters, replace=False)
    centers = X[initial_indexes]
    for _ in range(max_iter):
        distances = var_kernel(X[:, None, 0], X[:, None, 1],
                               centers[None, :, 1], centers[None, :, 0])
        new_pred = cupy.argmin(distances, axis=1)
        if cupy.all(new_pred == pred):
            break
        pred = new_pred
        i = cupy.arange(n_clusters)
        mask = pred == i[:, None]
        sums = sum_kernel(X, mask[:, :, None], axis=1)
        counts = count_kernel(mask, axis=1).reshape((n_clusters, 1))
        centers = sums / counts
    return centers, pred
def draw(X, n_clusters, centers, pred, output):
    # Plot the samples and centroids of the fitted clusters into an image file.
    for i in range(n_clusters):
        labels = X[pred == i]
        plt.scatter(labels[:, 0], labels[:, 1], c=numpy.random.rand(3))
    plt.scatter(
        centers[:, 0], centers[:, 1], s=120, marker='s', facecolors='y',
        edgecolors='k')
    plt.savefig(output)

def run_cpu(gpuid, n_clusters, num, max_iter, use_custom_kernel):  ##, output
    samples = numpy.random.randn(num, 3)
    X_train = numpy.r_[samples + 1, samples - 1]
    with timer(' CPU '):
        centers, pred = fit_xp(X_train, n_clusters, max_iter)

def run_gpu(gpuid, n_clusters, num, max_iter, use_custom_kernel):  ##, output
    samples = numpy.random.randn(num, 3)
    X_train = numpy.r_[samples + 1, samples - 1]
    with cupy.cuda.Device(gpuid):
        X_train = cupy.asarray(X_train)
        with timer(' GPU '):
            if use_custom_kernel:
                centers, pred = fit_custom(X_train, n_clusters, max_iter)
            else:
                centers, pred = fit_xp(X_train, n_clusters, max_iter)
By the way, I am working in Colab Pro with 25 GB of RAM. The code works with n_clusters=200 and num=1000000, but with bigger numbers the error appears. I am running the code like this:
run_gpu(0, 200, 1000000, 10, True)
The error I get is a CuPy out-of-memory allocation error.
Any suggestions are welcome; thanks for your time.
Assuming that CuPy is clever enough not to create explicit copies of the broadcasted inputs of var_kernel, the output distances has to have a size of 2 * num * n_clusters elements, which matches the 6,400,000,000 bytes it is trying to allocate. You could get a much smaller memory footprint by never actually writing the distances to memory, which means fusing the var_kernel with the argmin. See this part of the docs.
If I understand the example there correctly, this should work:
@cupy.fuse(kernel_name='argmin_distance')
def argmin_distance(x1, y1, x2, y2):
    return cupy.argmin((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2), axis=1)
The next question would be where the other 13.7 GB come from. A big part of them might just be instances of distances from earlier iterations. I'm not a CuPy expert, but at least in Python/NumPy, the way you use distances inside the loop would not reuse the same memory; it would allocate new memory on every call of var_kernel. The same problem applies to pred, which is allocated before the loop. If CuPy does things the NumPy way, the solution would be to write into the existing arrays with [:], like
pred[:] = new_pred
or
distances[:, :, :] = var_kernel(X[:, None, 0], X[:, None, 1],
                                centers[None, :, 1], centers[None, :, 0])
For this to work, you need to allocate distances before the loop as well. Also, this isn't needed anymore once you use kernel fusion, so just take it as an example. It may be best to allocate everything beforehand and then use this syntax everywhere in the loop.
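A rough sketch of what such a pre-allocated fit_custom loop could look like (my illustration of the suggestion above, not tested code; the broadcasted distances array here is 2-D with shape (n_samples, n_clusters), and the center coordinate order is fixed as noted below):
def fit_custom_prealloc(X, n_clusters, max_iter):
    n_samples = len(X)
    pred = cupy.zeros(n_samples, dtype='float32')
    new_pred = cupy.zeros(n_samples, dtype='float32')
    # allocate the big buffer once, outside the loop
    distances = cupy.zeros((n_samples, n_clusters), dtype=X.dtype)
    initial_indexes = cupy.random.choice(n_samples, n_clusters, replace=False)
    centers = X[initial_indexes]
    for _ in range(max_iter):
        # write into the pre-allocated buffers instead of binding fresh arrays each iteration
        distances[:, :] = var_kernel(X[:, None, 0], X[:, None, 1],
                                     centers[None, :, 0], centers[None, :, 1])
        new_pred[:] = cupy.argmin(distances, axis=1)
        if cupy.all(new_pred == pred):
            break
        pred[:] = new_pred
        i = cupy.arange(n_clusters)
        mask = pred == i[:, None]
        sums = sum_kernel(X, mask[:, :, None], axis=1)
        counts = count_kernel(mask, axis=1).reshape((n_clusters, 1))
        centers = sums / counts
    return centers, pred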
I don't know enough about CuPy to answer why fit_xp doesn't have the same problem (or does it?). But my guess would be that garbage collection with CuPy objects works differently there. If garbage collection were "quick enough" in fit_custom it should work even without kernel fusion or reusing already allocated arrays.
Other problems or at least oddities with your code:
Why are you comparing the zeroth coordinate of centers with the first coordinate of X? Wouldn't it make more sense to call
distances = var_kernel(X[:, None, 0], X[:, None, 1],
                       centers[None, :, 0], centers[None, :, 1])
Why are you creating 3D data when only using the projection on the 2D plane? So why not
samples = numpy.random.randn(num, 2)
Why are you using floats for (the initial version of) pred? The argmin should give an integer type result.
I was recently working on a deep learning model in Keras and it gave me very perplexing results. The model is capable of mastering the training data over time, but it consistently gets worse results on the validation data.
I know that if the validation accuracy goes up for a while and then starts to decrease that you are over-fitting to the training data, but in this case, the validation accuracy only ever decreases. I am really confused why this happens. Does anyone have any intuition as to what could cause this to happen? Or any suggestions on things to test to potentially fix it?
Edit to add more info and code
Ok. So I am making a model that is trying to do some basic stock predictions. By looking at the open, high, low, close, and volume of the last 40 days, the model tries to predict whether or not the price will go up two average true ranges without going down one average true range. As input, I took CSVs from Yahoo Finance that include this information for the last 30 years for all of the stocks in the Dow Jones Industrial Average. The model trains on 70% of the stocks and validates on the other 20%. This leads to about 150,000 training samples. I am currently using a 1d Convolutional Neural Network, but I have also tried other smaller models (logistic regression and small Feed Forward NN) and I always get the same either diverging train and validation loss or nothing learned at all because the model is too simple.
Here is the code:
import numpy as np
from sklearn import preprocessing
from sklearn.metrics import auc, roc_curve, roc_auc_score
from keras.layers import Input, Dense, Flatten, Conv1D, Activation, MaxPooling1D, Dropout, Concatenate
from keras.models import Model
from keras.callbacks import ModelCheckpoint, EarlyStopping, Callback
from keras import backend as K
import matplotlib.pyplot as plt
from random import seed, shuffle
from os import listdir
class roc_auc(Callback):
    def on_train_begin(self, logs={}):
        self.aucs = []

    def on_train_end(self, logs={}):
        return

    def on_epoch_begin(self, epoch, logs={}):
        return

    def on_epoch_end(self, epoch, logs={}):
        y_pred = self.model.predict(self.validation_data[0])
        self.aucs.append(roc_auc_score(self.validation_data[1], y_pred))
        if max(self.aucs) == self.aucs[-1]:
            model.save_weights("weights.roc_auc.hdf5")
        print(" - auc: %0.4f" % self.aucs[-1])
        return

    def on_batch_begin(self, batch, logs={}):
        return

    def on_batch_end(self, batch, logs={}):
        return
rrr = 2
epochs = 200
batch_size = 64
days_input = 40
seed(42)
X_train = []
X_test = []
y_train = []
y_test = []
files = listdir("Stocks")
total_stocks = len(files)
shuffle(files)
for x, file in enumerate(files):
    test = False
    if (x+1.0)/total_stocks > 0.7:
        test = True
    if test:
        print("Test -> Stocks/%s" % file)
    else:
        print("Train -> Stocks/%s" % file)
    stock = np.loadtxt(open("Stocks/"+file, "r"), delimiter=",", skiprows=1, usecols=(1,2,3,5,6))
    atr = []
    last = None
    for day in stock:
        if last is None:
            tr = abs(day[1] - day[2])
            atr.append(tr)
        else:
            tr = max(day[1] - day[2], abs(last[3] - day[1]), abs(last[3] - day[2]))
            atr.append((13*atr[-1]+tr)/14)
        last = day.copy()
    stock = np.insert(stock, 5, atr, axis=1)
    for i in range(days_input, stock.shape[0]-1):
        input = stock[i-days_input:i, 0:5].copy()
        for j, day in enumerate(input):
            input[j][1] = (day[1]-day[0])/day[0]
            input[j][2] = (day[2]-day[0])/day[0]
            input[j][3] = (day[3]-day[0])/day[0]
        input[:,0] = input[:,0] / np.linalg.norm(input[:,0])
        input[:,1] = input[:,1] / np.linalg.norm(input[:,1])
        input[:,2] = input[:,2] / np.linalg.norm(input[:,2])
        input[:,3] = input[:,3] / np.linalg.norm(input[:,3])
        input[:,4] = input[:,4] / np.linalg.norm(input[:,4])
        preprocessing.scale(input, copy=False)
        output = -1
        buy = stock[i][1]
        stoploss = buy - stock[i][5]
        target = buy + rrr*stock[i][5]
        for j in range(i+1, stock.shape[0]):
            if stock[j][0] < stoploss or stock[j][2] < stoploss:
                output = 0
                break
            elif stock[j][1] > target:
                output = 1
                break
        if output != -1:
            if test:
                X_test.append(input)
                y_test.append(output)
            else:
                X_train.append(input)
                y_train.append(output)
shape = list(X_train[0].shape)
shape[:0] = [len(X_train)]
X_train = np.concatenate(X_train).reshape(shape)
y_train = np.array(y_train)
shape = list(X_test[0].shape)
shape[:0] = [len(X_test)]
X_test = np.concatenate(X_test).reshape(shape)
y_test = np.array(y_test)
print("Train class split is %0.2f" % (100*np.average(y_train)))
print("Test class split is %0.2f" % (100*np.average(y_test)))
inputs = Input(shape=(days_input,5))
x = Conv1D(32, 5, padding='same')(inputs)
x = Activation('relu')(x)
x = MaxPooling1D()(x)
x = Conv1D(64, 5, padding='same')(x)
x = Activation('relu')(x)
x = MaxPooling1D()(x)
x = Conv1D(128, 5, padding='same')(x)
x = Activation('relu')(x)
x = MaxPooling1D()(x)
x = Flatten()(x)
x = Dense(128, activation="relu")(x)
x = Dense(64, activation="relu")(x)
output = Dense(1, activation="sigmoid")(x)
model = Model(inputs=inputs,outputs=output)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
filepath="weights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=0, save_best_only=True, mode='max')
auc_hist = roc_auc()
callbacks_list = [checkpoint, auc_hist]
history = model.fit(X_train, y_train, validation_data=(X_test,y_test) , epochs=epochs, callbacks=callbacks_list, batch_size=batch_size, class_weight ='balanced').history
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
model.save_weights("weights.latest.hdf5")
model.load_weights("weights.roc_auc.hdf5")
plt.plot(history['acc'])
plt.plot(history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(history['loss'])
plt.plot(history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(auc_hist.aucs)
plt.title('model ROC AUC')
plt.ylabel('AUC')
plt.xlabel('epoch')
plt.show()
y_pred = model.predict(X_train)
fpr, tpr, _ = roc_curve(y_train, y_pred)
roc_auc = auc(fpr, tpr)
plt.subplot(1, 2, 1)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy',linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Train ROC')
plt.legend(loc="lower right")
y_pred = model.predict(X_test)
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
roc_auc = auc(fpr, tpr)
plt.subplot(1, 2, 2)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy',linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Test ROC')
plt.legend(loc="lower right")
plt.show()
with open('roc.csv','w+') as file:
    for i in range(len(thresholds)):
        file.write("%f,%f,%f\n" % (fpr[i], tpr[i], thresholds[i]))
Results by 100 batches instead of by epoch
I listened to the suggestions and made a few updates. The classes are now balanced 50/50 instead of 25/75. Also, the validation data is now randomly selected instead of being a specific set of stocks. By graphing the loss and accuracy at a finer resolution (every 100 batches vs. every epoch), the over-fitting can clearly be seen. The model actually does start to learn at the very beginning before it starts to diverge. I am surprised at how fast it starts to over-fit, but now that I can see the issue, hopefully I can debug it.
Possible explanations
Coding error
Overfitting due to differences in the training / validation data
Skewed classes (and differences in the training / validation data)
Things I would try
Swapping the training and the validation set. Does the problem still occur?
Plot the curves in more detail for the first ~10 epochs (e.g. directly after initialization; every few training iterations, not only per epoch). Do you still start at > 75%? Then your classes might be skewed, and you might also want to check whether your training/validation split is stratified.
Code
This is useless: np.concatenate(X_train)
Make your code as readable as possible when you post it here. This includes removing lines which are commented out.
This looks suspicious for a coding error to me:
if test:
    X_test.append(input)
    y_test.append(output)
else:
    #if((output == 0 and np.average(y_train) > 0.5) or output == 1):
    X_train.append(input)
    y_train.append(output)
Use sklearn.model_selection.train_test_split instead. Do all transformations to the data first, then make the split with this method (see the sketch after this list).
Looks like the batch size is much too small for the number of training samples you have. Try batching 20% and see if that makes a difference.
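A minimal sketch of that suggestion (hypothetical variable names; it assumes the full feature array X and label array y have already been built from all stocks):
import numpy as np
from sklearn.model_selection import train_test_split

# X: (n_samples, days_input, 5) feature array, y: (n_samples,) binary labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y,
    test_size=0.3,      # hold out 30% for validation
    stratify=y,         # keep the class split identical in both sets
    random_state=42,    # reproducible shuffle
)
print("Train class split is %0.2f" % (100 * np.average(y_train)))
print("Test class split is %0.2f" % (100 * np.average(y_test)))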
I am currently solving a problem involving GPS data from buses. The issue I am facing is to reduce computation in my process.
There are about 2 billion GPS-coordinate points (Lat-Long degrees) in one table and about 12,000 bus-stops with their Lat-Long in another table. It is expected that only 5-10% of the 2-billion points are at bus-stops.
Problem: I need to tag and extract only those points (out of the 2-billion) that are at bus-stops (the 12,000 points). Since this is GPS data, I cannot do exact matching of the coordinates, but rather do a tolerance based geofencing.
Issue: The process of tagging bus-stops is taking an extremely long time with the current naive approach. Currently, we pick each of the 12,000 bus-stop points and query the 2 billion points with a tolerance of 100 m (by converting degree differences into distance).
Question: Is there an algorithmically efficient process to achieve this tagging of points?
Yes, you can use something like SpatialSpark. It only works with Spark 1.6.1, but you can use BroadcastSpatialJoin to create an R-tree, which is extremely efficient.
Here's an example of me using SpatialSpark with PySpark to check if different polygons are within each other or are intersecting:
from ast import literal_eval as make_tuple
from shapely.geometry import Point, Polygon  # provides the Polygon/Point objects and their .wkt used below

print "Java Spark context version:", sc._jsc.version()
spatialspark = sc._jvm.spatialspark
rectangleA = Polygon([(0, 0), (0, 10), (10, 10), (10, 0)])
rectangleB = Polygon([(-4, -4), (-4, 4), (4, 4), (4, -4)])
rectangleC = Polygon([(7, 7), (7, 8), (8, 8), (8, 7)])
pointD = Point((-1, -1))
def geomABWithId():
    return sc.parallelize([
        (0L, rectangleA.wkt),
        (1L, rectangleB.wkt)
    ])

def geomCWithId():
    return sc.parallelize([
        (0L, rectangleC.wkt)
    ])

def geomABCWithId():
    return sc.parallelize([
        (0L, rectangleA.wkt),
        (1L, rectangleB.wkt),
        (2L, rectangleC.wkt)])

def geomDWithId():
    return sc.parallelize([
        (0L, pointD.wkt)
    ])
dfAB = sqlContext.createDataFrame(geomABWithId(), ['id', 'wkt'])
dfABC = sqlContext.createDataFrame(geomABCWithId(), ['id', 'wkt'])
dfC = sqlContext.createDataFrame(geomCWithId(), ['id', 'wkt'])
dfD = sqlContext.createDataFrame(geomDWithId(), ['id', 'wkt'])
# Supported Operators: Within, WithinD, Contains, Intersects, Overlaps, NearestD
SpatialOperator = spatialspark.operator.SpatialOperator
BroadcastSpatialJoin = spatialspark.join.BroadcastSpatialJoin
joinRDD = BroadcastSpatialJoin.apply(sc._jsc, dfABC._jdf, dfAB._jdf, SpatialOperator.Within(), 0.0)
joinRDD.count()
results = joinRDD.collect()
map(lambda result: make_tuple(result.toString()), results)
# [(0, 0), (1, 1), (2, 0)] read as:
# ID 0 is within 0
# ID 1 is within 1
# ID 2 is within 0
Note the line
joinRDD = BroadcastSpatialJoin.apply(sc._jsc, dfABC._jdf, dfAB._jdf, SpatialOperator.Within(), 0.0)
The last argument is a buffer value; in your case it would be the tolerance you want to use. It will probably be a very small number if you are using lat/lon, since degrees are an angular measure: for a given tolerance in metres, you will need to compute the corresponding degree value for your area of interest.
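For a rough back-of-the-envelope conversion (my own sketch, not part of the original answer): one degree of latitude is roughly 111.32 km everywhere, while one degree of longitude shrinks with the cosine of the latitude, so a 100 m tolerance could be approximated like this (stopsDf and pointsDf are hypothetical DataFrames):
import math

def metres_to_degrees(metres, latitude_deg):
    """Approximate a metre tolerance as a degree buffer near a given latitude."""
    deg_lat = metres / 111320.0  # ~111.32 km per degree of latitude
    deg_lon = metres / (111320.0 * math.cos(math.radians(latitude_deg)))
    # take the larger value so the buffer is not too tight in either direction
    return max(deg_lat, deg_lon)

buffer_deg = metres_to_degrees(100, latitude_deg=45.0)  # 45.0 is a placeholder latitude
# joinRDD = BroadcastSpatialJoin.apply(sc._jsc, stopsDf._jdf, pointsDf._jdf,
#                                      SpatialOperator.WithinD(), buffer_deg)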