How to speed up spaCy lemmatization?

I'm using spaCy (version 2.0.11) for lemmatization in the first step of my NLP pipeline, but unfortunately it's taking a very long time. It is clearly the slowest part of my processing pipeline, and I want to know if there are improvements I could be making. I am using the pipeline as:
nlp.pipe(docs_generator, batch_size=200, n_threads=6, disable=['ner'])
on an 8-core machine, and I have verified that the machine is using all the cores.
On a corpus of about 3 million short texts totaling almost 2 GB, it takes close to 24 hours to lemmatize and write to disk. Is that reasonable?
I have tried disabling a couple of parts of the processing pipeline (parser, tagger) and found that doing so broke the lemmatization.
Are there any parts of the default processing pipeline that are not required for lemmatization besides named entity recognition?
Are there other ways of speeding up the spaCy lemmatization process?
Aside:
It also appears that the documentation doesn't list all the operations in the processing pipeline. At the top of the spaCy Language class we have:
factories = {
    'tokenizer': lambda nlp: nlp.Defaults.create_tokenizer(nlp),
    'tensorizer': lambda nlp, **cfg: Tensorizer(nlp.vocab, **cfg),
    'tagger': lambda nlp, **cfg: Tagger(nlp.vocab, **cfg),
    'parser': lambda nlp, **cfg: DependencyParser(nlp.vocab, **cfg),
    'ner': lambda nlp, **cfg: EntityRecognizer(nlp.vocab, **cfg),
    'similarity': lambda nlp, **cfg: SimilarityHook(nlp.vocab, **cfg),
    'textcat': lambda nlp, **cfg: TextCategorizer(nlp.vocab, **cfg),
    'sbd': lambda nlp, **cfg: SentenceSegmenter(nlp.vocab, **cfg),
    'sentencizer': lambda nlp, **cfg: SentenceSegmenter(nlp.vocab, **cfg),
    'merge_noun_chunks': lambda nlp, **cfg: merge_noun_chunks,
    'merge_entities': lambda nlp, **cfg: merge_entities
}
which includes some items not covered in the documentation here:
https://spacy.io/usage/processing-pipelines
Since they are not covered, I don't really know which of them may be disabled, nor what their dependencies are.

I found out you can disable the parser portion of the spaCy pipeline as well, as long as you add the sentence segmenter. It's not crazy fast, but it is definitely an improvement: in tests the time looks to be about 1/3 of what it was before (when I was only disabling 'ner'). Here is what I have now:
nlp = spacy.load('en', disable=['ner', 'parser'])
nlp.add_pipe(nlp.create_pipe('sentencizer'))
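For completeness, here is a minimal sketch of how the trimmed pipeline might be driven over the corpus; the batch size and the joined-string output are assumptions, not part of the original setup:
# Hedged sketch: stream texts through the trimmed pipeline and collect one
# lemmatized string per input text (docs_generator and batch_size are placeholders).
lemmatized_texts = []
for doc in nlp.pipe(docs_generator, batch_size=200):
    lemmatized_texts.append(' '.join(token.lemma_ for token in doc))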

One quick and impactful optimization would be memoization, using suitable in-memory structures or in-memory databases (Python dicts, or Redis/Memcached).
The lemmatized form of a word, together with context such as its part-of-speech tag, is constant, so there is no need to spend computing power on it again and again.
There will be loads of repetition in your 3-million-text corpus, and memoization will cut the time down hugely (a minimal caching sketch follows the example below).
Example:
>>> import spacy
>>> nlp = spacy.load('en')
>>> txt1 = u"he saw the dragon, he saw the forest, and used a saw to cut the tree, and then threw the saw in the river."
>>> [(x.text,x.pos_,x.lemma_) for x in nlp(txt1)]
[(u'he', u'PRON', u'-PRON-'), (u'saw', u'VERB', u'see'), (u'the', u'DET', u'the'), (u'dragon', u'NOUN', u'dragon'), (u',', u'PUNCT', u','), (u'he', u'PRON', u'-PRON-'), (u'saw', u'VERB', u'see'), (u'the', u'DET', u'the'), (u'forest', u'NOUN', u'forest'), (u',', u'PUNCT', u','), (u'and', u'CCONJ', u'and'), (u'used', u'VERB', u'use'), (u'a', u'DET', u'a'), (u'saw', u'NOUN', u'saw'), (u'to', u'PART', u'to'), (u'cut', u'VERB', u'cut'), (u'the', u'DET', u'the'), (u'tree', u'NOUN', u'tree'), (u',', u'PUNCT', u','), (u'and', u'CCONJ', u'and'), (u'then', u'ADV', u'then'), (u'threw', u'VERB', u'throw'), (u'the', u'DET', u'the'), (u'saw', u'NOUN', u'saw'), (u'in', u'ADP', u'in'), (u'the', u'DET', u'the'), (u'river', u'NOUN', u'river'), (u'.', u'PUNCT', u'.')]
As you can see, the POS tag and lemmatized form are constant for each word-plus-POS combination.
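One way to apply the memoization idea is to cache at the level of whole texts, so repeated inputs skip the spaCy pipeline entirely. Here is a minimal sketch with a plain Python dict, assuming the same short texts recur; a Redis/Memcached client could replace the dict for a cache shared across processes:
# Hedged memoization sketch: a plain dict keyed on the raw text.
lemma_cache = {}

def lemmatize_cached(text):
    if text not in lemma_cache:
        lemma_cache[text] = [tok.lemma_ for tok in nlp(text)]
    return lemma_cache[text]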

Related

how to handle spelling mistake(typos) in entity extraction in Rasa NLU?

I have a few intents in my training set (nlu_data.md file) with a sufficient number of training examples under each intent.
The following is an example:
##intent: SEARCH_HOTEL
- find good [hotel](place) for me in Mumbai
I have added multiple sentences like this.
At testing time, all the sentences from the training file work fine. But if an input query contains a spelling mistake, e.g. hotol/hetel/hotele instead of hotel, then Rasa NLU is unable to extract it as an entity.
I want to resolve this issue.
I am only allowed to change the training data, and I am also restricted from writing any custom component for this.
To handle spelling mistakes like this in entities, you should add these examples to your training data. So something like this:
##intent: SEARCH_HOTEL
- find good [hotel](place) for me in Mumbai
- looking for a [hotol](place) in Chennai
- [hetel](place) in Berlin please
Once you've added enough examples, the model should be able to generalise from the sentence structure.
If you're not using it already, it also makes sense to use the character-level CountVectorsFeaturizer. That should already be part of the default pipeline described in the Rasa docs.
One thing I would highly suggest is using look-up tables with fuzzywuzzy matching. If you have a limited number of entities (like country names), look-up tables are quite fast, and fuzzy matching catches typos of the entities that exist in your look-up table (by searching for typo variations of those entities). There's a whole blog post about it on the Rasa blog.
There's a working implementation of fuzzywuzzy as a custom component:
import os
import json

from fuzzywuzzy import process as fuzzy_process  # aliased so it doesn't clash with the process() method below
from nltk.corpus import stopwords
from rasa.nlu.components import Component  # `from rasa_nlu.components import Component` on older Rasa versions

# STOP_WORDS is just the set of English stop words from NLTK
STOP_WORDS = set(stopwords.words('english'))


class FuzzyExtractor(Component):
    name = "FuzzyExtractor"
    provides = ["entities"]
    requires = ["tokens"]
    defaults = {}
    language_list = ["en"]
    threshold = 90

    def __init__(self, component_config=None, *args):
        super(FuzzyExtractor, self).__init__(component_config)

    def train(self, training_data, cfg, **kwargs):
        pass

    def process(self, message, **kwargs):
        entities = list(message.get('entities'))

        # Get file path of lookup table in json format
        cur_path = os.path.dirname(__file__)
        if os.name == 'nt':
            partial_lookup_file_path = '..\\data\\lookup_master.json'
        else:
            partial_lookup_file_path = '../data/lookup_master.json'
        lookup_file_path = os.path.join(cur_path, partial_lookup_file_path)

        with open(lookup_file_path, 'r') as file:
            lookup_data = json.load(file)['data']

        tokens = message.get('tokens')

        for token in tokens:
            # Skip stop words so they are not fuzzily matched against the lookup table
            if token.text not in STOP_WORDS:
                fuzzy_results = fuzzy_process.extract(
                    token.text,
                    lookup_data,
                    processor=lambda a: a['value'] if isinstance(a, dict) else a,
                    limit=10)
                for result, confidence in fuzzy_results:
                    if confidence >= self.threshold:
                        entities.append({
                            "start": token.offset,
                            "end": token.end,
                            "value": token.text,
                            "fuzzy_value": result["value"],
                            "confidence": confidence,
                            "entity": result["entity"]
                        })

        message.set("entities", entities, add_to_output=True)
Note that I didn't implement it myself; it was implemented and validated on the Rasa forum.
Then you just add it to your NLU pipeline in the config.yml file.
It's a strange request that they ask you not to change the code or write custom components.
The approach you would have to take would be to use entity synonyms. A slight edit on a previous answer:
##intent: SEARCH_HOTEL
- find good [hotel](place) for me in Mumbai
- looking for a [hotol](place:hotel) in Chennai
- [hetel](place:hotel) in Berlin please
This way, even if the user enters a typo, the correct entity will be extracted. If you want this to be foolproof, I do not recommend hand-editing the intents; use some kind of automated tool for generating the training data, e.g. one that generates misspelled words (typos).
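As a hedged illustration (not a Rasa tool), an automated generator could be as simple as producing letter-drop and adjacent-swap variants to paste into the training data:
import random

def typo_variants(word, n=3):
    """Return up to n simple typo variants (letter drops and adjacent swaps)."""
    variants = set()
    for i in range(1, len(word) - 1):
        variants.add(word[:i] + word[i + 1:])                          # drop one letter
        variants.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])  # swap neighbours
    variants.discard(word)
    return random.sample(sorted(variants), min(n, len(variants)))

# typo_variants("hotel") might yield something like ['hotl', 'htoel', 'hoetl']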
First of all, add samples for the most common typos of your entities, as advised in the answers above.
Beyond this, you need a spellchecker.
I am not sure whether there is a single library that can be dropped into the pipeline; if not, you will need to create a custom component. Dealing with training data alone is not feasible, since you can't create samples for every possible typo.
Using fuzzywuzzy is one option, but it is generally slow and it doesn't solve all the issues.
Universal Encoder is another solution.
There should be more options for spell correction, but you will need to write code either way.
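For example, a very rough spell-correction step can be built from the standard library alone; the sketch below snaps a token to the closest known entity value (the vocabulary list and cutoff are assumptions):
import difflib

KNOWN_PLACES = ["hotel", "hostel", "resort", "restaurant"]  # hypothetical vocabulary

def correct_token(token, vocabulary=KNOWN_PLACES, cutoff=0.75):
    # Return the closest vocabulary entry, or the token unchanged if nothing is close enough.
    matches = difflib.get_close_matches(token.lower(), vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else token

# correct_token("hotol") should return "hotel"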

How to speedup Stanford NLP in Python?

import numpy as np
from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize
#english.all.3class.distsim.crf.ser.gz
st = StanfordNERTagger('/media/sf_codebase/modules/stanford-ner-2018-10-16/classifiers/english.all.3class.distsim.crf.ser.gz',
                       '/media/sf_codebase/modules/stanford-ner-2018-10-16/stanford-ner.jar',
                       encoding='utf-8')
After initializing Stanford NLP with the code above, the following code takes about 10 seconds to tag the text, as shown below. How can I speed this up?
%%time
text="My name is John Doe"
tokenized_text = word_tokenize(text)
classified_text = st.tag(tokenized_text)
print (classified_text)
Output
[('My', 'O'), ('name', 'O'), ('is', 'O'), ('John', 'PERSON'), ('Doe', 'PERSON')]
CPU times: user 4 ms, sys: 20 ms, total: 24 ms
Wall time: 10.9 s
Another solution within NLTK is to not use the old nltk.tag.StanfordNERTagger but instead the newer nltk.parse.CoreNLPParser. See, e.g., https://github.com/nltk/nltk/wiki/Stanford-CoreNLP-API-in-NLTK .
More generally the secret to good performance is indeed to use a server on the Java side, which you can repeatedly call without having to start new subprocesses for each sentence processed. You can either use the NERServer if you just need NER or the StanfordCoreNLPServer for all CoreNLP functionality. There are a number of Python interfaces to it, see: https://stanfordnlp.github.io/CoreNLP/other-languages.html#python
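As a hedged sketch of the CoreNLPParser route, assuming a StanfordCoreNLPServer is already running locally on port 9000:
from nltk.parse import CoreNLPParser

# Assumes something like
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
# is already running in the background.
ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
print(list(ner_tagger.tag('My name is John Doe'.split())))
# Expected to print something along the lines of:
# [('My', 'O'), ('name', 'O'), ('is', 'O'), ('John', 'PERSON'), ('Doe', 'PERSON')]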
Found the answer.
Start the Stanford NER server in the background, from the folder where Stanford NER is unzipped:
java -Djava.ext.dirs=./lib -cp stanford-ner.jar edu.stanford.nlp.ie.NERServer -port 9199 -loadClassifier ./classifiers/english.all.3class.distsim.crf.ser.gz
Then create the tagger in Python using the sner library:
from sner import Ner
tagger = Ner(host='localhost',port=9199)
Then run the tagger.
%%time
classified_text=tagger.get_entities(text)
print (classified_text)
Output:
[('My', 'O'), ('name', 'O'), ('is', 'O'), ('John', 'PERSON'), ('Doe', 'PERSON')]
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 18.2 ms
Roughly 600 times faster in terms of wall-clock time (10.9 s down to 18.2 ms)! Wow!
After attempting several options, I like Stanza. It is developed by Stanford, it is very simple to implement, I didn't have to figure out how to start the server properly on my own, and it dramatically improved the speed of my program. It supports 18 different entity classes.
I found Stanza by following the link provided in Christopher Manning's answer.
To download:
pip install stanza
then in Python:
import stanza
stanza.download('en') # download English model
nlp = stanza.Pipeline('en') # initialize English neural pipeline
doc = nlp("My name is John Doe.") # run annotation over a sentence or multiple sentences
If you only want a specific tool (NER), you can specify it with processors:
nlp = stanza.Pipeline('en',processors='tokenize,ner')
For an output similar to that produced by the OP:
classified_text = [(token.text,token.ner) for i, sentence in enumerate(doc.sentences) for token in sentence.tokens]
print(classified_text)
[('My', 'O'), ('name', 'O'), ('is', 'O'), ('John', 'B-PERSON'), ('Doe', 'E-PERSON')]
But to produce a list of only those words that are recognizable entities:
classified_text = [(ent.text,ent.type) for ent in doc.ents]
[('John Doe', 'PERSON')]
It has a couple of features that I really like:
Instead of each word being classified as a separate person entity, it combines "John Doe" into one PERSON entity.
If you do want each separate word, you can extract them, and it identifies which part of the entity each word is ('B' for the first word, 'I' for intermediate words, and 'E' for the last word).

Improving the speed of preprocessing

The following code is used to preprocess text with a custom lemmatizer function:
%%time
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from gensim.utils import simple_preprocess, lemmatize
from gensim.parsing.preprocessing import STOPWORDS
STOPWORDS = list(STOPWORDS)
def preprocessor(s):
    result = []
    for token in lemmatize(s, stopwords=STOPWORDS, min_length=2):
        result.append(token.decode('utf-8').split('/')[0])
    return result

data = pd.read_csv('https://pastebin.com/raw/dqKFZ12m')
%%time
X_train, X_test, y_train, y_test = train_test_split([preprocessor(x) for x in data.text],
                                                     data.label, test_size=0.2, random_state=0)
# 10.8 seconds
Question:
Can the speed of the lemmatization process be improved?
On a large corpus of about 80,000 documents, it currently takes about two hours. The lemmatize() function seems to be the main bottleneck, as a gensim function such as simple_preprocess is quite fast.
Thanks for your help!
You may want to refactor your code to make it easier to time each portion separately. lemmatize() might be part of your bottleneck, but other significant contributors might also be: (1) composing large documents one token at a time via list.append(); (2) the UTF-8 decoding.
Separately, the gensim lemmatize() relies on the parse() function from the Pattern library; you could try an alternative lemmatization utility, like those in NLTK or spaCy.
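For instance, here is a hedged sketch of a drop-in preprocessor built on NLTK's WordNetLemmatizer (note that it lemmatizes everything as a noun unless you pass POS tags, so its output differs from the Pattern-based lemmatize()):
from nltk.stem import WordNetLemmatizer          # requires nltk.download('wordnet')
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS

lemmatizer = WordNetLemmatizer()

def fast_preprocessor(s):
    # simple_preprocess lowercases and tokenizes; min_len=2 mirrors the original.
    return [lemmatizer.lemmatize(t)
            for t in simple_preprocess(s, min_len=2)
            if t not in STOPWORDS]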
Finally, as lemmatization may be an inherently costly operation, and it might be the case that the same source data gets processed many times in your pipeline, you might want to engineer your process so that the results are re-written to disk, then re-used on subsequent runs – rather than always done "in-line".
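A minimal sketch of that re-use pattern, with the cache file name and pickle format as assumptions:
import os
import pickle

CACHE_PATH = 'preprocessed_texts.pkl'  # hypothetical cache file

def load_or_preprocess(texts):
    # Run the expensive preprocessing once; reuse the pickled result on later runs.
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, 'rb') as f:
            return pickle.load(f)
    processed = [preprocessor(x) for x in texts]
    with open(CACHE_PATH, 'wb') as f:
        pickle.dump(processed, f)
    return processed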

What problem does a reinitializable iterator solve?

From the tf.data documentation:
A reinitializable iterator can be initialized from multiple different
Dataset objects. For example, you might have a training input pipeline
that uses random perturbations to the input images to improve
generalization, and a validation input pipeline that evaluates
predictions on unmodified data. These pipelines will typically use
different Dataset objects that have the same structure (i.e. the same
types and compatible shapes for each component).
The following example was given:
# Define training and validation datasets with the same structure.
training_dataset = tf.data.Dataset.range(100).map(
    lambda x: x + tf.random_uniform([], -10, 10, tf.int64))
validation_dataset = tf.data.Dataset.range(50)

# A reinitializable iterator is defined by its structure. We could use the
# `output_types` and `output_shapes` properties of either `training_dataset`
# or `validation_dataset` here, because they are compatible.
iterator = tf.data.Iterator.from_structure(training_dataset.output_types,
                                           training_dataset.output_shapes)
next_element = iterator.get_next()

training_init_op = iterator.make_initializer(training_dataset)
validation_init_op = iterator.make_initializer(validation_dataset)

# Run 20 epochs in which the training dataset is traversed, followed by the
# validation dataset.
for _ in range(20):
    # Initialize an iterator over the training dataset.
    sess.run(training_init_op)
    for _ in range(100):
        sess.run(next_element)

    # Initialize an iterator over the validation dataset.
    sess.run(validation_init_op)
    for _ in range(50):
        sess.run(next_element)
It is unclear what the benefit of this complexity is.
Why not simply create 2 different iterators?
The original motivation for reinitializable iterators was as follows:
The user's input data is in two or more tf.data.Dataset objects with the same structure but different pipeline definitions.
For example, you might have a training data pipeline with augmentations in a Dataset.map(), and an evaluation data pipeline that produced raw examples, but they would both produce batches with the same structure (in terms of the number of tensors, their element types, shapes, etc.).
The user would define a single training graph that took input from a tf.data.Iterator, created using Iterator.from_structure().
The user could then switch between the different input data sources by reinitializing the iterator from one of the datasets.
In hindsight, reinitializable iterators have turned out to be quite hard to use for their intended purpose. In TensorFlow 2.0 (or 1.x with eager execution enabled), it is much easier to create iterators over different datasets using idiomatic Python for loops and high-level training APIs:
tf.enable_eager_execution()

model = ...       # A `tf.keras.Model`, or some other class exposing `fit()` and `evaluate()` methods.
train_data = ...  # A `tf.data.Dataset`.
eval_data = ...   # A `tf.data.Dataset`.

for i in range(NUM_EPOCHS):
    model.fit(train_data, ...)

    # Evaluate every 5 epochs.
    if i % 5 == 0:
        model.evaluate(eval_data, ...)

Spark Scala DataFrame vs Dataset - Performance analysis [duplicate]

Starting out with Spark 2.0.1, I have some questions. I have read a lot of documentation but so far could not find sufficient answers:
What is the difference between
df.select("foo")
df.select($"foo")
Do I understand correctly that
myDataSet.map(foo.someVal) is type-safe and will not convert into an RDD but stay in the Dataset representation, with no additional overhead (performance-wise, for 2.0.0)?
All the other commands, e.g. select, are just syntactic sugar. They are not type-safe and a map could be used instead. How could I do df.select("foo") type-safe without a map statement?
Why should I use a UDF / UDAF instead of a map (assuming map stays in the Dataset representation)?
The difference between df.select("foo") and df.select($"foo") is the signature. The former takes at least one String, the latter zero or more Columns. There is no practical difference beyond that.
myDataSet.map(foo.someVal) type-checks, but since any Dataset operation uses an RDD of objects, compared to DataFrame operations there is a significant overhead. Let's take a look at a simple example:
case class FooBar(foo: Int, bar: String)
val ds = Seq(FooBar(1, "x")).toDS
ds.map(_.foo).explain
== Physical Plan ==
*SerializeFromObject [input[0, int, true] AS value#123]
+- *MapElements <function1>, obj#122: int
+- *DeserializeToObject newInstance(class $line67.$read$$iw$$iw$FooBar), obj#121: $line67.$read$$iw$$iw$FooBar
+- LocalTableScan [foo#117, bar#118]
As you can see this execution plan requires access to all fields and has to DeserializeToObject.
No. In general other methods are not syntactic sugar and generate a significantly different execution plan. For example:
ds.select($"foo").explain
== Physical Plan ==
LocalTableScan [foo#117]
Compared to the plan shown before, it can access the column directly. This is not so much a limitation of the API as a result of the difference in operational semantics.
How could I df.select("foo") type-safe without a map statement?
There is no such option. While typed columns allow you to transform a statically typed Dataset into another statically typed Dataset:
ds.select($"bar".as[Int])
they are not type safe. There are some other attempts to include type-safe, optimized operations, like typed aggregations, but this API is experimental.
why should I use a UDF / UDAF instead of a map
It is completely up to you. Each distributed data structure in Spark provides its own advantages and disadvantages (see for example Spark UDAF with ArrayType as bufferSchema performance issues).
Personally, I find statically typed Dataset to be the least useful:
They don't provide the same range of optimizations as Dataset[Row] (although they share the storage format and some execution plan optimizations, they don't fully benefit from code generation or off-heap storage), nor access to all the analytical capabilities of the DataFrame.
Typed transformations are black boxes, and effectively create an analysis barrier for the optimizer. For example, selections (filters) cannot be pushed down over a typed transformation:
ds.groupBy("foo").agg(sum($"bar") as "bar").as[FooBar].filter(x => true).where($"foo" === 1).explain
== Physical Plan ==
*Filter (foo#133 = 1)
+- *Filter <function1>.apply
+- *HashAggregate(keys=[foo#133], functions=[sum(cast(bar#134 as double))])
+- Exchange hashpartitioning(foo#133, 200)
+- *HashAggregate(keys=[foo#133], functions=[partial_sum(cast(bar#134 as double))])
+- LocalTableScan [foo#133, bar#134]
Compared to:
ds.groupBy("foo").agg(sum($"bar") as "bar").as[FooBar].where($"foo" === 1).explain
== Physical Plan ==
*HashAggregate(keys=[foo#133], functions=[sum(cast(bar#134 as double))])
+- Exchange hashpartitioning(foo#133, 200)
+- *HashAggregate(keys=[foo#133], functions=[partial_sum(cast(bar#134 as double))])
+- *Filter (foo#133 = 1)
+- LocalTableScan [foo#133, bar#134]
This impacts features like predicate pushdown or projection pushdown.
They are not as flexible as RDDs, with only a small subset of types supported natively.
"Type safety" with Encoders is disputable when a Dataset is converted using the as method. Because the data shape is not encoded in the signature, the compiler can only verify the existence of an Encoder.
Related questions:
Perform a typed join in Scala with Spark Datasets
Spark 2.0 DataSets groupByKey and divide operation and type safety
Spark Dataset is way more powerful than Spark DataFrame. A small example: you can only create a DataFrame of Row, Tuple or any primitive datatype, but a Dataset gives you the power to create a Dataset of any non-primitive type too, i.e. you can literally create a Dataset of your own object type.
Ex:
case class Employee(id:Int,name:String)
Dataset[Employee] // is valid
Dataframe[Employee] // is invalid
DATAFRAME VS DATASET
DATAFRAME: A DataFrame is an abstraction that allows a schema view of the data.
scala> case class Person(name: String, age: Int, address: String)
defined class Person
scala> val df = List(Person("Sumanth", 23, "BNG")).toDF()
DATASET: A Dataset is an extension of the DataFrame API, the latest abstraction, which tries to provide the best of both RDD and DataFrame.
