I have a challenge that I am trying to solve in order to move forward; it is the final piece of the puzzle for my model operations.
What am I trying to do?
I want to verify which videos end up in the Xval_test variable produced by the split operation below, following the example in "In Python sklearn, how do I retrieve the names of samples/variables in test/training data?":
X_train, Xval_test, Y_train, Yval_test = train_test_split(
X, Y, train_size=0.8, test_size=0.2, random_state=1, shuffle=True)
1.
What did I try?
I tried reading the names from the actual tags via the file_path names, but that does not work: every time the code runs, the names are taken from the file paths on disk rather than from the Xval_test variable produced by the split. Attaching them also causes an issue during the model.fit() procedure, as it changes the 1D flattened tensor into (a number of rows, 1 column).
file_paths = []
for file_name in os.listdir(root):
    file_path = os.path.join(root, file_name)
    if os.path.isfile(file_path):
        file_paths.append(file_path)
print('**********************************************************')
print('ALL Directory File Paths Completed', file_paths)
I am not sure whether the files are being shuffled properly by my attempt, per the guidance in that thread. To my understanding, every time the code runs those files are shuffled into a new Xval_test set according to the specified 80:20 split. (One way to keep track of which files land in Xval_test is to pass the file names through the split as well; see the sketch below.)
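A minimal sketch of that idea, assuming a video_names list built in the same order as the rows of X (for example, appended in the same loop of loaddata() that appends to X); the arrays here are hypothetical stand-ins:

import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins: in the real script X and Y come from loaddata()/np.load,
# and video_names must be collected in the same order as the rows of X.
X = np.random.rand(10, 32, 32, 10, 1)
Y = np.eye(2)[np.random.randint(0, 2, 10)]
video_names = np.array(['vid{}.avi'.format(i) for i in range(10)])

# Passing the names as an extra array keeps them aligned with the split:
X_train, Xval_test, Y_train, Yval_test, names_train, names_test = train_test_split(
    X, Y, video_names, train_size=0.8, test_size=0.2, random_state=1, shuffle=True)

print(names_test)  # the files that ended up in Xval_test, in the same order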
2.
I tried calling model.predict(), but it presents no labels, which I was hoping it would (maybe I am calling the indices the wrong way, I don't know).
my_pred = model.predict(Xval_test).argmax(axis=1)
I tried calling np.argmax(). (I know the total number of files in Xval_test is 16, based on the split.)
Y_valpred = np.argmax(model.predict(Xval_test), axis=1) # model
This returns just the class index and not its contents. The classes in the datastore are folders (Walking and Fencing), whereas what I want are the actual video file labels, such as walking0.avi ... and fencing0.avi ...
I am not certain of the operation that retrieves the tags of the folder contents, i.e. the actual files; that is what I am trying to get from the Xval_test variable (or maybe I am using the wrong variable or function, again I lack the knowledge to tell, so please assist so that I can move to the next stage). A sketch of mapping the predicted indices back to class names follows.
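For the class-name half of this, the integer that argmax returns can be mapped back to a name with the same label list used during training. A sketch, assuming labellist holds the class names in the order they were encoded (as written to classes.txt); the values below are hypothetical:

import numpy as np

labellist = ['Fencing', 'Walking']                     # hypothetical, from classes.txt
predict_labels = np.array([[0.99, 0.01], [0.2, 0.8]])  # example softmax outputs

pred_idx = np.argmax(predict_labels, axis=1)       # class indices, e.g. [0, 1]
pred_names = [labellist[i] for i in pred_idx]      # class names, e.g. ['Fencing', 'Walking']
print(pred_names)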
3.
I tried printing all of the variables from the previous operations to see where that name tag would be stored, and it is held in the name variable below as per my operations. But how do I carry these file tags forward to the Xval_test variable, or better, into a column of the model.predict() output together with the other metrics? So far my attempts cause issues with the model.fit() function.
for files3 in files2:
    name = os.path.join(namelist, files3)
    name1 = name.strip("./dataset/")
    name2 = name1.strip("Fencing/")
    name3 = name2.strip("Stabing/")
    name3 = name3.replace('.av', '')
    name4 = name3.split()
    # print("This is name1 ", name1)
    # name5 = pd.DataFrame({"vid_names": name4}).to_csv("results.csv")
    # name1 = name1.replace('[]', '')
    with open('vid_names.csv', 'a', newline='') as f:
        writer = csv.writer(f)
        writer = writer.writerow(name4)
    # print("My Video Names => ", name3)
3A.
Thank you in advance; I am grateful for any guidance provided. Please assist!
QUESTIONS: ############################################
Ques: 1.
Is it possible to see which video file tags end up in the Xval_test variable?
Ques: 1A.
If yes, may I request your guidance on how this can be done?
I have been researching for weeks and cannot seem to get this sorted; your efforts would be greatly appreciated.
Ques: 2. MY Expected OUTCOME:
I am trying to access the predictions. In the end I want an output that shows, for each prediction, the actual video file that was used along with its class tag (see below).
Currently, the model.predict() operation outputs only numerical data corresponding to the class label; I am trying to access the actual file label as well.
For example, this is what I want the predictions to look like:
X_test_labs Pred_labs Actual_File Pred_Score
0 Fencing Fencing fencing0.avi 0.99650866
1 Walking Fencing walking6.avi 0.9948837
2 Walking Walking walking21.avi 0.9967557
3 Fencing Fencing fencing32.avi 0.9930409
4 Walking Fencing walking43.avi 0.9961387
5 Walking Walking walking48.avi 0.6467387
6 Walking Walking walking50.avi 0.5465369
7 Walking Walking walking9.avi 0.3478027
8 Fencing Fencing fencing22.avi 0.1247543
9 Fencing Fencing fencing46.avi 0.7477777
10 Walking Walking walking37.avi 0.8499399
11 Fencing Fencing fencing19.avi 0.8887722
12 Walking Walking walking12.avi 0.7775351
13 Fencing Fencing fencing33.avi 0.4323323
14 Fencing Fencing fencing51.avi 0.7812434
15 Fencing Fencing fencing8.avi 0.8723476
I am not sure how to achieve this; it is a little trickier for me than anticipated. A sketch of one possible way to assemble such a table follows.
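One way to build exactly that kind of table is to combine the test file names (carried through the split as above) with the argmax of the ground-truth and predicted one-hot vectors and the top softmax score. A sketch, with hypothetical stand-ins for names_test, Yval_test, predict_labels, and labellist:

import numpy as np
import pandas as pd

labellist = ['Fencing', 'Walking']                # hypothetical, from classes.txt
names_test = ['fencing0.avi', 'walking6.avi']     # from the split, see earlier sketch
Yval_test = np.array([[1, 0], [0, 1]])            # one-hot ground truth
predict_labels = np.array([[0.9965, 0.0035],
                           [0.0051, 0.9949]])     # model.predict(Xval_test)

true_idx = np.argmax(Yval_test, axis=1)
pred_idx = np.argmax(predict_labels, axis=1)

results = pd.DataFrame({
    'X_test_labs': [labellist[i] for i in true_idx],
    'Pred_labs':   [labellist[i] for i in pred_idx],
    'Actual_File': names_test,
    'Pred_Score':  predict_labels.max(axis=1),
})
print(results)
results.to_csv('predictions.csv', index=False)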
This is my code:
'''*******Load Dependencies********'''
from keras.regularizers import l2
from keras.layers import Dense
from keras_tqdm import TQDMNotebookCallback
from tqdm.keras import TqdmCallback
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import math
import tensorflow as tf
from tqdm import tqdm
import videoto3d
import seaborn as sns
import scikitplot as skplt
from sklearn import preprocessing
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, classification_report, confusion_matrix)
from sklearn.model_selection import train_test_split
from keras.utils.vis_utils import plot_model
from keras.utils import np_utils
from tensorflow.keras.optimizers import Adam
from keras.models import Sequential
from keras.losses import categorical_crossentropy
from keras.layers import (Activation, Conv3D, Dense, Dropout, Flatten,MaxPooling3D)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
import argparse
import time
import sys
import openpyxl
import re
import csv
from keras import models
import cv2
import pickle
import glob
from numpy import load
np.seterr(divide='ignore', invalid='ignore')
print('**********************************************************')
print('Graphical Representation Of Accuracy & Validation Results Completed')
def plot_history(history, result_dir):
    plt.plot(history.history['val_accuracy'], marker='.')
    plt.plot(history.history['accuracy'], marker='.')
    plt.title('model accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.grid()
    plt.legend(['Val_acc', 'Test_acc'], loc='lower right')
    plt.savefig(os.path.join(result_dir, 'model_accuracy.png'))
    plt.close()

    plt.plot(history.history['val_loss'], marker='.')
    plt.plot(history.history['loss'], marker='.')
    plt.title('model loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.grid()
    plt.legend(['Val_loss', 'Test_loss'], loc='upper right')
    plt.savefig(os.path.join(result_dir, 'model_loss.png'))
    plt.close()
# Saving history accuracy & validation accuracy results to directory
print('**********************************************************')
print('Generating History Accuracy Results Completed')
def save_history(history, result_dir):
    loss = history.history['loss']
    acc = history.history['accuracy']
    val_loss = history.history['val_loss']
    val_acc = history.history['val_accuracy']
    nb_epoch = len(acc)

    # Creating the results file in the output directory to store results
    print('**********************************************************')
    print('Saving History Accuracy Results To Directory Completed')
    with open(os.path.join(result_dir, 'result.txt'), 'w') as fp:
        fp.write('epoch\tloss\tacc\tval_loss\tval_acc\n')
        # print(fp)
        for i in range(nb_epoch):
            fp.write('{}\t{}\t{}\t{}\t{}\n'.format(
                i, loss[i], acc[i], val_loss[i], val_acc[i]))
print('**********************************************************')
print('Loading All Specified Video Data Samples From Directory Completed')
def loaddata(video_dir, vid3d, nclass, result_dir, color=False, skip=True):
    files = os.listdir(video_dir)
    with open('files.csv', 'w') as f:
        writer = csv.writer(f)
        writer.writerow(files)

    root = '/Users/symbadian/3DCNN_latest_Version/3DCNNtesting/dataset/'
    dirlist = [item for item in os.listdir(root)
               if os.path.isdir(os.path.join(root, item))]
    print('Get the filenames and paths')
    print('DIRLIST Directory Completed', dirlist)

    file_paths = []
    for file_name in os.listdir(root):
        file_path = os.path.join(root, file_name)
        if os.path.isfile(file_path):
            file_paths.append(file_path)
    print('**********************************************************')
    print('ALL Directory File Paths Completed', file_paths)

    roots, dirsy, fitte = next(os.walk(root), ([], [], []))
    print('**********************************************************')
    print('ALL Directory ROOTED', roots, fitte, dirsy)

    X = []  # holds the 3D video data for every clip
    print('X labels==>', X)
    labellist = []
    pbar = tqdm(total=len(files))  # progress bar for file processing
    print('**********************************************************')
    print('Generating/Join Class Labels For Video Dataset For Input Completed')

    # Accessing files and labels from the dataset directory
    for filename in files:
        pbar.update(1)
        if filename == '.DS_Store':
            continue
        namelist = os.path.join(video_dir, filename)
        files2 = os.listdir(namelist)
        ###############################################################################
        ######### NEEDS FIXING: append data to the CSV rather than rewriting it ######
        for files3 in files2:
            name = os.path.join(namelist, files3)
            # Extract the frame details for this file
            label = vid3d.get_UCF_classname(filename)
            if label not in labellist:
                if len(labellist) >= nclass:
                    continue
                labellist.append(label)
            # X is where the video data is appended (the class names end up in labellist)
            X.append(vid3d.video3d(name, color=color, skip=skip))
    pbar.close()

    # Writing the label list to the output directory for referencing
    print('******************************************************')
    print('Saving All Class Labels For Referencing To Directory Completed')
    with open(os.path.join(result_dir, 'classes.txt'), 'w') as fp:
        for i in range(len(labellist)):
            # print('These are labellist i classes', i)  # Not this
            fp.write('{}\n'.format(labellist[i]))
    # print('These are my labels: ==>', mylabel)

    for num, label in enumerate(labellist):
        for i in range(len(labels)):
            if label == labels[i]:
                labels[i] = num
                # print('This is labels i', labels[i])  # Not this

    if color:  # conforming the image channels for the input sequence
        return np.array(X).transpose((0, 2, 3, 4, 1)), labels
    else:
        return np.array(X).transpose((0, 2, 3, 1)), labels
print('**********************************************************')
print('Generating Args Informative Messages/ Tuning Parameters Options Completed')
def main():
    parser = argparse.ArgumentParser(description='A 3D Convolution Model For Action Recognition')
    parser.add_argument('--batch', type=int, default=130)
    parser.add_argument('--epoch', type=int, default=100)
    parser.add_argument('--videos', type=str, default='dataset', help='Directory Where Videos Are Stored')  # UCF101
    parser.add_argument('--nclass', type=int, default=2)
    parser.add_argument('--output', type=str, required=True)
    parser.add_argument('--color', type=bool, default=False)
    parser.add_argument('--skip', type=bool, default=True)
    parser.add_argument('--depth', type=int, default=10)
    args = parser.parse_args()
    # print('This is the Option Arguments ==>', args)

    print('**********************************************************')
    print('Specifying Input Size and Channels Completed')
    img_rows, img_cols, frames = 32, 32, args.depth
    channel = 3 if args.color else 1

    print('**********************************************************')
    print('Saving Dataset As NPZ To Directory Completed')
    fname_npz = 'dataset_{}_{}_{}.npz'.format(args.nclass, args.depth, args.skip)
    vid3d = videoto3d.Videoto3D(img_rows, img_cols, frames)
    nb_classes = args.nclass

    # loading the data
    if os.path.exists(fname_npz):
        loadeddata = np.load(fname_npz)
        X, Y = loadeddata["X"], loadeddata["Y"]
    else:
        x, y = loaddata(args.videos, vid3d, args.nclass, args.output, args.color, args.skip)
        X = x.reshape((x.shape[0], img_rows, img_cols, frames, channel))
        Y = np_utils.to_categorical(y, nb_classes)
        X = X.astype('float32')
        # save npz data to file
        np.savez(fname_npz, X=X, Y=Y)
        print('Saved Dataset To dataset.npz. Completed')
    print('X_shape:{}\nY_shape:{}'.format(X.shape, Y.shape))
    print('**********************************************************')
    print('Initialise Model Layers & Layer Parameters Completed')
    # Sequential groups a linear stack of layers into a tf.keras.Model.
    # Sequential provides training and inference features on this model
    model = Sequential()
    model.add(Conv3D(32, kernel_size=(3, 3, 3), input_shape=(X.shape[1:]), padding='same'))
    model.add(Activation('relu'))
    model.add(Conv3D(32, kernel_size=(3, 3, 3), padding='same'))
    model.add(MaxPooling3D(pool_size=(3, 3, 3), padding='same'))
    model.add(Conv3D(64, kernel_size=(3, 3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(Conv3D(64, kernel_size=(3, 3, 3), padding='same'))
    model.add(MaxPooling3D(pool_size=(3, 3, 3), padding='same'))
    model.add(Conv3D(128, kernel_size=(3, 3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(Conv3D(128, kernel_size=(3, 3, 3), padding='same'))
    model.add(MaxPooling3D(pool_size=(3, 3, 3), padding='same'))
    model.add(Dropout(0.5))
    model.add(Conv3D(256, kernel_size=(3, 3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(Conv3D(256, kernel_size=(3, 3, 3), padding='same'))
    model.add(MaxPooling3D(pool_size=(3, 3, 3), padding='same'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    # Dense layer mapping the flattened features to 512 values
    model.add(Dense(512, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(nb_classes, activation='softmax'))
    model.compile(loss=categorical_crossentropy, optimizer=Adam(), metrics=['accuracy'])
    model.summary()
    print('this is the model shape')
    model.output_shape
    plot_model(model, show_shapes=True, to_file=os.path.join(args.output, 'model.png'))
    print('**********************************************************')
    print("Train Test Method HoldOut Performance")
    X_train, Xval_test, Y_train, Yval_test = train_test_split(
        X, Y, train_size=0.8, test_size=0.2, random_state=1, stratify=Y, shuffle=True)

    print('**********************************************************')
    print('Deploying Data Fitting/ Performance Accuracy Guidance Completed')
    # Reduce the learning rate when the model stops improving
    rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=1, mode='auto', min_delta=0.0001, cooldown=1, min_lr=0.0001)
    # Fit the training data
    history = model.fit(X_train, Y_train, validation_split=0.20, batch_size=args.batch, epochs=args.epoch, verbose=1, callbacks=[rlronp], shuffle=True)

    # Predict the test (Xval_test) data and labels
    predict_labels = model.predict(Xval_test, batch_size=args.batch, verbose=1, use_multiprocessing=True)
    classes = np.argmax(predict_labels, axis=1)
    label = np.argmax(Yval_test, axis=1)

    print('This the BATCH size', args.batch)
    print('This the DEPTH size', args.depth)
    print('This the EPOCH size', args.epoch)
    print('This the TRAIN SPLIT size', len(X_train))
    print('This the TEST SPLIT size', len(Xval_test))

    # https://stackoverflow.com/questions/52261597/keras-model-fit-verbose-formatting
    # Save the model architecture as JSON so the model is simple to save and reload
    model_json = model.to_json()
    if not os.path.isdir(args.output):
        os.makedirs(args.output)
    with open(os.path.join(args.output, 'ucf101_3dcnnmodel.json'), 'w') as json_file:
        json_file.write(model_json)
    # HDF5 stores multidimensional arrays of scientific data
    model.save_weights(os.path.join(args.output, 'ucf101_3dcnnmodel.hd5'))
    '''Evaluation'''
    print('**********************************************************')
    print('Displaying Test Loss & Test Accuracy Completed')
    loss, acc = model.evaluate(Xval_test, Yval_test, verbose=2, batch_size=args.batch, use_multiprocessing=True)
    print('this is args output', args.output)
    plot_history(history, args.output)
    save_history(history, args.output)
    print('**********************************************************')

    # Generating picture of the confusion matrix
    print('**********************************************************')
    print('Generating CM InputData/Classification Report Completed')
    # Ground truth (correct) target values
    y_valtest_arg = np.argmax(Yval_test, axis=1)
    # Estimated targets as returned by the classifier
    Y_valpred = np.argmax(model.predict(Xval_test), axis=1)
    print('y_valtest_arg Shape is ==>', y_valtest_arg.shape)
    print('Y_valpred Shape is ==>', Y_valpred.shape)
    print('**********************************************************')
    print('Classification_Report On Model Performance Completed==')
    print(classification_report(y_valtest_arg.round(), Y_valpred.round(), target_names=filehandle, zero_division=1))

    '''Initiate Confusion Matrix'''
    # print('Model Confusion Matrix Per Test Data Completed===>')
    cm = confusion_matrix(y_valtest_arg, Y_valpred, normalize=None)
    print('Display Confusion Matrix ===>', cm)
    print('**********************************************************')
    print('Model Overall Accuracy')
    print('Model Test loss:', loss)
    print('**********************************************************')
    print('Model Test accuracy:', acc)
    print('**********************************************************')
if __name__ == '__main__':
    main()
I think the solution lies around the prediction, the train/test split, and the evaluation arguments; however, I lack the knowledge to extract the required details from train_test_split(), if that is where the issue is.
I am really thankful for your guidance in advance; thanks a whole lot for clarifying this for me and closing the gaps in my understanding. I really appreciate it!
After some research, I was able to predict future values using the LSTM code below. I have also attached the Dmd1ahr.csv file that I am using, via the GitHub link:
https://github.com/ukeshchawal/hello-world/blob/master/Dmd1ahr.csv
As you can see below, 90 data points are used as the training set and the 91st to 100th points are the predicted future values.
However, some questions I still have are:
In order to predict these values I originally had to take more than one hundred data points (here I have taken 500), which is not exactly my primary goal. Is there a way that, given 500 data points, the model will predict the remaining 10 or 20 out-of-sample points? If yes, could you write sample code that takes just the 500 data points from the attached Dmd1ahr.csv file and predicts some future values (say points 501 to 520) based on them? (See the sketch after the code below.)
The predictions are way off compared to the ones in your blogs (which definitely indicates a need for parameter tuning; I tried changing the epochs, LSTM layers, activation, and optimizer). What other parameter tuning can I do to make the model more robust?
Thank you all in advance.
import numpy as np
import matplotlib.pyplot as plt
import pandas
# By tweaking the architecture it could be made more robust
np.random.seed(7)
numOfSamples = 500
lengthTrain = 90
lengthValidation = 100
look_back = 1 # Can be set higher, in my experiments it made performance worse though
transientTime = 90 # Time to "burn in" time series
series = pandas.read_csv('Dmd1ahr.csv')
def generateTrainData(series, i, look_back):
    return series[i:look_back+i+1]
trainX = np.stack([generateTrainData(series, i, look_back) for i in range(lengthTrain)])
testX = np.stack([generateTrainData(series, lengthTrain + i, look_back) for i in range(lengthValidation)])
trainX = trainX.reshape((lengthTrain,look_back+1,1))
testX = testX.reshape((lengthValidation, look_back + 1, 1))
trainY = trainX[:,1:,:]
trainX = trainX[:,:-1,:]
testY = testX[:,1:,:]
testX = testX[:,:-1,:]
############### Build Model ###############
import keras
from keras.models import Model
from keras import layers
from keras import regularizers
inputs = layers.Input(batch_shape=(1,look_back,1), name="main_input")
inputsAux = layers.Input(batch_shape=(1,look_back,1), name="aux_input")
# this layer makes the actual prediction, i.e. decides if and how much it goes up or down
x = layers.recurrent.LSTM(300,return_sequences=True, stateful=True)(inputs)
x = layers.recurrent.LSTM(200,return_sequences=True, stateful=True)(inputs)
x = layers.recurrent.LSTM(100,return_sequences=True, stateful=True)(inputs)
x = layers.recurrent.LSTM(50,return_sequences=True, stateful=True)(inputs)
x = layers.wrappers.TimeDistributed(layers.Dense(1, activation="linear",
kernel_regularizer=regularizers.l2(0.005),
activity_regularizer=regularizers.l1(0.005)))(x)
# auxiliary input: the current input is fed directly to the output;
# this way the prediction from the previous step is used as a "base", and the
# network just has to learn whether it goes a little up or down
auxX = layers.wrappers.TimeDistributed(layers.Dense(1,
kernel_initializer=keras.initializers.Constant(value=1),
bias_initializer='zeros',
input_shape=(1,1), activation="linear", trainable=False
))(inputsAux)
outputs = layers.add([x, auxX], name="main_output")
model = Model(inputs=[inputs, inputsAux], outputs=outputs)
model.compile(optimizer='adam',
loss='mean_squared_error',
metrics=['mean_squared_error'])
#model.summary()
#model.fit({"main_input": trainX, "aux_input": trainX[look_back-1,look_back,:]},{"main_output": trainY}, epochs=4, batch_size=1, shuffle=False)
model.fit({"main_input": trainX, "aux_input": trainX[:,look_back-1,:].reshape(lengthTrain,1,1)},{"main_output": trainY}, epochs=100, batch_size=1, shuffle=False)
############### make predictions ###############
burnedInPredictions = np.zeros(transientTime)
testPredictions = np.zeros(len(testX))
# burn the series in; here, use the first transientTime samples from the test data
for i in range(transientTime):
    prediction = model.predict([np.array(testX[i, :, 0].reshape(1, look_back, 1)), np.array(testX[i, look_back - 1, 0].reshape(1, 1, 1))])
    testPredictions[i] = prediction[0,0,0]
burnedInPredictions[:] = testPredictions[:transientTime]
# prediction: from here on, don't use any previous data at all; the network runs on its own output
for i in range(transientTime, len(testX)):
    prediction = model.predict([prediction, prediction])
    testPredictions[i] = prediction[0,0,0]
# for plotting reasons
testPredictions[:np.size(burnedInPredictions)-1] = np.nan
############### plot results ###############
#import matplotlib.pyplot as plt
plt.plot(testX[:, 0, 0])
plt.show()
plt.plot(burnedInPredictions, label = "training")
plt.plot(testPredictions, label = "prediction")
plt.legend()
plt.show()
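For question 1 above (forecasting out-of-sample points from the observed values alone), the feedback loop at the end of the script can be pushed past the end of the data: burn the stateful model in on the observed series, then keep feeding each prediction back in as the next input. A rough, untested sketch under the assumptions of the script above (look_back = 1 and the two-input stateful model already fitted); nFuture and futurePredictions are names introduced here for illustration:

nFuture = 20                                   # how many out-of-sample points to forecast
values = np.asarray(series, dtype=float).reshape(-1, 1)

model.reset_states()
prediction = None
for i in range(len(values)):                   # burn in on all observed data
    step = values[i].reshape(1, look_back, 1)  # works because look_back == 1 here
    prediction = model.predict([step, step])

futurePredictions = np.zeros(nFuture)
for i in range(nFuture):                       # roll forward on the model's own output
    prediction = model.predict([prediction, prediction])
    futurePredictions[i] = prediction[0, 0, 0]

plt.plot(np.arange(len(values)), values[:, 0], label="observed")
plt.plot(np.arange(len(values), len(values) + nFuture), futurePredictions, label="forecast")
plt.legend()
plt.show()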
I'm new to scikit-learn and I'm banging my head against the wall. I've used both real-world and test data, and the scikit-learn algorithms are not performing above chance level in predicting anything. I've tried KNN, decision trees, SVC and naive Bayes.
Basically, I made a test data set consisting of a column of 0s and 1s, with all the 0s having a feature value between 0 and 0.5 and all the 1s having a feature value between 0.5 and 1. This should be extremely easy and give near 100% accuracy. However, none of the algorithms perform above chance level; accuracies range from 45 to 55%. I've already tried tweaking a whole bunch of parameters for every algorithm, but nothing helps. I think something is fundamentally wrong with my implementation.
Please help me out. Here's my code:
from sklearn.cross_validation import train_test_split
from sklearn import preprocessing
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score
import sklearn
import pandas
import numpy as np
df=pandas.read_excel('Test.xlsx')
# Make data into np arrays
y = np.array(df[1])
y=y.astype(float)
y=y.reshape(399)
x = np.array(df[2])
x=x.astype(float)
x=x.reshape(399, 1)
# Creating training and test data
labels_train, labels_test = train_test_split(y)
features_train, features_test = train_test_split(x)
#####################################################################
# PERCEPTRON
#####################################################################
from sklearn import linear_model
perceptron=linear_model.Perceptron()
perceptron.fit(features_train, labels_train)
perc_pred=perceptron.predict(features_test)
print sklearn.metrics.accuracy_score(labels_test, perc_pred, normalize=True, sample_weight=None)
print 'perceptron'
#####################################################################
# KNN classifier
#####################################################################
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(features_train, labels_train)
knn_pred = knn.predict(features_test)
# Accuracy
print sklearn.metrics.accuracy_score(labels_test, knn_pred, normalize=True, sample_weight=None)
print 'knn'
#####################################################################
## SVC
#####################################################################
from sklearn.svm import SVC
from sklearn import svm
svm2 = SVC(kernel="linear")
svm2 = svm.SVC()
svm2.fit(features_train, labels_train)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3,
gamma=1.0, kernel='linear', max_iter=-1, probability=False,
random_state=None,
shrinking=True, tol=0.001, verbose=False)
svc_pred = svm2.predict(features_test)
print sklearn.metrics.accuracy_score(labels_test, svc_pred, normalize=True,
sample_weight=None)
#####################################################################
# Decision tree
#####################################################################
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features_train, labels_train)
tree_pred=clf.predict(features_test)
# Accuracy
print sklearn.metrics.accuracy_score(labels_test, tree_pred, normalize=True,
sample_weight=None)
print 'tree'
#####################################################################
# Naive bayes
#####################################################################
import sklearn
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(features_train, labels_train)
print "training time:", round(time()-t0, 3), "s"
GaussianNB()
bayes_pred = clf.predict(features_test)
print sklearn.metrics.accuracy_score(labels_test, bayes_pred,
normalize=True, sample_weight=None)
You seem to be using train_test_split the wrong way.
labels_train, labels_test = train_test_split(y) #WRONG
features_train, features_test = train_test_split(x) #WRONG
The split of your labels and the split of your data aren't necessarily the same: each call shuffles independently, so the test labels no longer correspond to the test features. One easy way to split your data manually:
randomvec=np.random.rand(len(data))
randomvec=randomvec>0.5
train_data=data[randomvec]
train_label=labels[randomvec]
test_data=data[np.logical_not(randomvec)]
test_label=labels[np.logical_not(randomvec)]
or to use the scikit method properly:
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=42)
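With a single call like that, the rows of x and y stay paired, and any of the classifiers above should then score near 100% on this toy problem. A quick sketch of the corrected flow, using KNN and synthetic stand-in data (since Test.xlsx isn't available here):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the spreadsheet: class 0 -> feature in [0, 0.5), class 1 -> feature in [0.5, 1)
y = np.random.randint(0, 2, 400).astype(float)
x = np.where(y == 0,
             np.random.uniform(0.0, 0.5, 400),
             np.random.uniform(0.5, 1.0, 400)).reshape(-1, 1)

# One call keeps features and labels paired
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=42)

knn = KNeighborsClassifier()
knn.fit(x_train, y_train)
print(accuracy_score(y_test, knn.predict(x_test)))   # should now be close to 1.0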