How can I reduce the tokenizer loading time? - performance

The tokenizer in Hugging Face Transformers is too slow to load; it normally takes around 8s, and I have no idea why it takes so long.
When I tried to load the vocab from my local disk, it took 50ms. Judging by this, fetching the tokenizer files from the Hugging Face hub is what makes loading slow. The problem is that AutoTokenizer seems to have no function that loads from a local path... and I have to use AutoTokenizer because of the word_ids() function. So does anyone know why AutoTokenizer is so slow to load?
from transformers import AutoTokenizer
from time import time

start = time()
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
end = time() - start
print(f"Loading Time : {round(end, 2)}s")
# 7.23s
When I tried to use BertTokenizer or something similar with a local path, it loaded really fast.
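One workaround worth trying (not from the original post): AutoTokenizer.from_pretrained() also accepts a local directory, so you can download the tokenizer once, save it with save_pretrained(), and reload it from disk on later runs. A minimal sketch, where './bert_local' is just an example path:

from transformers import AutoTokenizer

# First run: download from the hub and keep a copy on disk
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
tokenizer.save_pretrained('./bert_local')  # './bert_local' is an example path

# Later runs: load straight from the local directory
tokenizer = AutoTokenizer.from_pretrained('./bert_local')

# The fast tokenizer still exposes word_ids() on its encodings
encoding = tokenizer("hello world")
print(encoding.word_ids())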

Related

Improving the speed of preprocessing

The following code is used to preprocess text with a custom lemmatizer function:
%%time
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from gensim.utils import simple_preprocess, lemmatize
from gensim.parsing.preprocessing import STOPWORDS

STOPWORDS = list(STOPWORDS)

def preprocessor(s):
    result = []
    for token in lemmatize(s, stopwords=STOPWORDS, min_length=2):
        result.append(token.decode('utf-8').split('/')[0])
    return result

data = pd.read_csv('https://pastebin.com/raw/dqKFZ12m')

%%time
X_train, X_test, y_train, y_test = train_test_split([preprocessor(x) for x in data.text],
                                                    data.label, test_size=0.2, random_state=0)
# 10.8 seconds
Question:
Can the speed of the lemmatization process be improved?
On a large corpus of about 80,000 documents, it currently takes about two hours. The lemmatize() function seems to be the main bottleneck, since other gensim functions such as simple_preprocess are quite fast.
Thanks for your help!
You may want to refactor your code to make it easier to time each portion separately. lemmatize() might be part of your bottleneck, but other significant contributors might also be: (1) composing large documents, one-token-at-a-time, via list .append(); (2) the utf-8 decoding.
Separately, the gensim lemmatize() relies on the parse() function from the Pattern library; you could try an alternative lemmatization utility, like those in NLTK or spaCy.
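For illustration (not part of the original answer), a rough sketch of what swapping in spaCy's lemmatizer could look like, assuming the en_core_web_sm model is installed; preprocessor_spacy is a made-up name:

import spacy

# Load only the components needed for lemmatization, to keep it fast
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])

def preprocessor_spacy(s, min_length=2):
    doc = nlp(s)
    return [tok.lemma_.lower() for tok in doc
            if not tok.is_stop and not tok.is_punct and len(tok) >= min_length]

# For a large corpus, nlp.pipe(texts) processes documents in batches and is much faster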
Finally, as lemmatization may be an inherently costly operation, and it might be the case that the same source data gets processed many times in your pipeline, you might want to engineer your process so that the results are re-written to disk, then re-used on subsequent runs – rather than always done "in-line".
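As a concrete illustration of that last suggestion (a minimal sketch, not from the original answer; the cache filename is just an example, and preprocessor is the function from the question):

import os
import pickle

CACHE_PATH = 'preprocessed_docs.pkl'  # example cache location

def load_or_preprocess(texts):
    # Reuse earlier results if they are already on disk
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, 'rb') as f:
            return pickle.load(f)
    docs = [preprocessor(t) for t in texts]
    with open(CACHE_PATH, 'wb') as f:
        pickle.dump(docs, f)
    return docs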

Gensim FastText - KeyError: "word not in vocabulary"

I was having trouble with the most_similar call on a FastText model. From my understanding, FastText should be able to return results for words that aren't in the vocabulary, but I'm getting a "not in vocabulary" error, even though the same call worked perfectly fine before saving and loading.
Here's the code from Jupyter.
import gensim
model = gensim.models.FastText(my_sentences, size=100, window=5, min_count=3, workers=4, sg=1)
model.wv.most_similar(positive=['iPhone 6'])
Returns
[('iPhone7', 0.942690372467041),
('iPhone7.', 0.9395840764045715),
('iPhone5s', 0.9379133582115173),
('iPhone6s', 0.9338586330413818),
('iPhone5S', 0.9335439801216125),
('iPhone5.', 0.9318809509277344),
('iPhone®', 0.9314558506011963),
('iPhone6', 0.9268479347229004),
('iPhone4s', 0.9223971366882324),
('iPhone5', 0.9212019443511963)]
So far so good, now I save the model.
model.wv.save_word2vec_format("example_fasttext.txt", binary=False)
Then load it up again:
from gensim.models import KeyedVectors
new_model = KeyedVectors.load_word2vec_format('example_fasttext.txt', binary=False, limit=50000)
Then I make the exact same most_similar call on the model I just loaded:
new_model.most_similar(positive=['iPhone 6'])
But results now are:
KeyError: "word 'iPhone 6' not in vocabulary"
Any idea what I did wrong?
Your problem is probably in the limit parameter of the load_word2vec_format method. What you are doing here is loading the model only for the 50,000 most frequent words. If 'iPhone 6' does not appear often enough, you are not loading it.
Try with
new_model = KeyedVectors.load_word2vec_format('example_fasttext.txt', binary=False)
I'm having the same problem as you, and I think I'm starting to understand what's going on.
Basically, when you save your model as a .txt or a .vec file, you are only saving the word vectors, not the n-grams (which are kept in the binary version of the model) that allow you to generalize to / approximate out-of-vocabulary words.
I suggest you save your model with:
your_fasttext_model.save(file_path)
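For comparison, a minimal sketch of the full round trip with gensim's native format, which keeps the n-gram buckets (the file name is just an example, and model is the FastText model from the question):

from gensim.models import FastText

model.save('example_fasttext.model')              # native format keeps the n-gram buckets
loaded_model = FastText.load('example_fasttext.model')

# Out-of-vocabulary queries should work again after this kind of load
print(loaded_model.wv.most_similar(positive=['iPhone 6'])[:3])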

Using Tensorflow to train a image classifier using my own data using inception and TFrecords

I followed the tutorial on how to train on your own data from the TensorFlow models repo on GitHub: https://github.com/tensorflow/models/tree/master/inception#how-to-construct-a-new-dataset-for-retraining.
I split my data (training and validation), created the labels as suggested, and managed to create the TFRecords using bazel-bin. Everything works, and now I have my own data as TFRecords.
Now I want to train an image classifier using the Inception v3 model from scratch, and it seems I should use the script inception_train.py, but I am not sure. Is that right? https://github.com/tensorflow/models/blob/master/inception/inception/inception_train.py
If so, I have two questions:
1) How can I train it using my TFRecords? If you could show me an example, that would be great.
2) Can I run it on a CPU, or is it only possible on GPUs?
Thank you very much.
Try the following sample code to read images and labels from your tfrecords,
import os
import glob
import tensorflow as tf
from matplotlib import pyplot as plt

def read_and_decode_file(filename_queue):
    # Create an instance of tf record reader
    reader = tf.TFRecordReader()
    # Read the generated filename queue
    _, serialized_reader = reader.read(filename_queue)
    # Extract the features you require from the tfrecord using their corresponding key
    # In my example, all images were written with 'image' key
    features = tf.parse_single_example(
        serialized_reader, features={
            'image': tf.FixedLenFeature([], tf.string),
            # FixedLenFeature only supports tf.string, tf.int64 and tf.float32
            'labels': tf.FixedLenFeature([], tf.int64)
        })
    # Decode the raw bytes before applying any image ops (this assumes the
    # images were stored as encoded JPEG strings -- adjust to how you wrote them)
    img = tf.image.decode_jpeg(features['image'], channels=3)
    img_out = tf.image.resize_image_with_crop_or_pad(img, target_height=128, target_width=128)
    # Similarly extract the labels, be careful with the type
    label = features['labels']
    return img_out, label

if __name__ == "__main__":
    tf.reset_default_graph()
    # Path to your tfrecords
    path_to_tf_records = os.getcwd() + '/*.tfrecords'
    # Collect all tfrecords present in the records folder using glob
    list_of_tfrecords = sorted(glob.glob(path_to_tf_records))
    # Generate a tensorflow readable filename queue by supplying it with
    # a list of tfrecords, optionally it is recommended to shuffle your data
    # before feeding into the network
    filename_queue = tf.train.string_input_producer(list_of_tfrecords, shuffle=False)
    # Supply the tensorflow generated filename queue to the custom function above
    image, label = read_and_decode_file(filename_queue)
    # Create a new tf session to read the data
    sess = tf.Session()
    tf.train.start_queue_runners(sess=sess)
    # Arbitrary number of iterations
    for i in range(50):
        img = sess.run(image)
        # Show image (plt.show() is needed when running outside a notebook)
        plt.imshow(img)
        plt.show()
Now, you also have a function called tf.train.shuffle_batch that helps you spawn multiple CPU threads which perform this function and return images and labels based on a user-specified batch size. You would need to create simultaneous data and training pipelines so that they work in parallel.
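A rough sketch of what that batching step could look like (not from the original answer; batch_size, capacity and min_after_dequeue are placeholder values):

image, label = read_and_decode_file(filename_queue)
image.set_shape([128, 128, 3])  # shuffle_batch needs a static shape
image_batch, label_batch = tf.train.shuffle_batch(
    [image, label],
    batch_size=32,
    capacity=2000,
    min_after_dequeue=1000,
    num_threads=4)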
To answer your second question: yes, you can train your model using the CPU alone, but it would be slow and might take several hours or even days to achieve decent results. Remove the with tf.device('/gpu:{0}'): decorator before the creation of your Inception model and TensorFlow will create the model on your CPU.
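For illustration, the device pinning this refers to looks roughly like the following (build_inception_model is a placeholder name for however you construct the network):

# GPU version from the training script, roughly:
# with tf.device('/gpu:0'):
#     logits = build_inception_model(image_batch)

# CPU-only: drop the device context, or pin explicitly to the CPU
with tf.device('/cpu:0'):
    logits = build_inception_model(image_batch)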
Hope this explanation helps.

Cross validation of dataset separated on files

The dataset that I have is split across different files, grouped into samples that belong together, i.e., they were created under similar conditions at a similar time.
The balance of the train-test split is important, so all the samples of a group have to go either to train or to test; they cannot be split up. Because of this, KFold is not simple to use in my scikit-learn code.
Right now, I am using something similar to leave-one-out (LOO), doing something like:
train ~> cat ./dataset/!(1.txt)
test ~> cat ./dataset/1.txt
which is not comfortable and not very useful if I want to put several files in the test fold and do a "real" CV.
How would it be possible to do a proper CV to check for real overfitting?
Looking at this answer, I realized that pandas can concatenate dataframes. I checked that the process is 15-20% slower than the cat command line, but it makes it possible to build folds the way I was expecting.
Anyway, I am quite sure that there must be a better way than this one:
import glob
import numpy as np
import pandas as pd
from sklearn.cross_validation import KFold

allFiles = glob.glob("./dataset/*.txt")
kf = KFold(len(allFiles), n_folds=3, shuffle=True)
for train_files, cv_files in kf:
    dataTrain = pd.concat((pd.read_csv(allFiles[idTrain], header=None) for idTrain in train_files))
    dataTest = pd.concat((pd.read_csv(allFiles[idTest], header=None) for idTest in cv_files))
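As a side note (not from the original answer): newer scikit-learn releases have group-aware splitters in sklearn.model_selection that express the "keep each file's rows together" constraint directly. A minimal sketch, assuming scikit-learn >= 0.18 and one group id per row:

import glob
import pandas as pd
from sklearn.model_selection import GroupKFold

allFiles = glob.glob("./dataset/*.txt")
frames = [pd.read_csv(f, header=None) for f in allFiles]
data = pd.concat(frames, ignore_index=True)
# Group id = index of the file each row came from
groups = [i for i, frame in enumerate(frames) for _ in range(len(frame))]

gkf = GroupKFold(n_splits=3)
for train_idx, test_idx in gkf.split(data, groups=groups):
    dataTrain, dataTest = data.iloc[train_idx], data.iloc[test_idx]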

Managing Setup code with TimeIt

As part of a pet project of mine, I need to test the performance of various different implementations of my code in Python. I anticipate this being something I do a lot of, and I want to make the code I write for this purpose as easy to update and modify as possible.
It's still in its infancy at the moment, but I've taken to using strings to manage common setup or testing code, e.g.:
naiveSetup = 'from PerformanceTests.Vectors import NaiveVector\n' \
+ 'left = NaiveVector([1,0,0])\n' \
+ 'right = NaiveVector([0,1,0])'
This allows me to only write the code once, at the expense of making it harder to read and clunky to update.
Is there a better way?
Use triple quotes """
setup_code = """
from PerformanceTests.Vectors import NaiveVector
left = NaiveVector([1,0,0])
right = NaiveVector([0,1,0])
"""
Another interesting method is provided in the docs of timeit:
def test():
    "Stupid test function"
    L = []
    for i in range(100):
        L.append(i)

if __name__ == '__main__':
    from timeit import Timer
    t = Timer("test()", "from __main__ import test")
    print(t.timeit())
Though this isn't suitable for all needs.
Timing the code is fine, but it will still leave you guessing about what is actually going on.
To find that out, manually pause the program a few random times under the debugger and examine the call stack.
For example, if one implementation is 30x slower than another, the extra work accounts for 29/30 ≈ 96.7% of the run time, so each stack sample has about a 96.7% chance of landing in it, and the stack shows you exactly what that extra work is.
No guesswork required.
