The Gensim Word2Vec model I've trained lacks vectors for some words. That is, although the word "yuval" appears in my input, the model has no vector for it. What is the cause?
You either didn't supply 'yuval' as a token with a properly-formatted corpus, or the number of occurrences was below the model's min_count. (It's generally helpful for a Word2Vec model to discard low-frequency words – more data isn't automatically better if there are only a few examples of a word.)
Double-check that 'yuval' appears in the corpus, and how many times, and whether that's sufficient for the word to survive min_count trimming.
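As a quick sanity check (a minimal sketch; the toy corpus and the token 'yuval' here are stand-ins for your data), you can count occurrences yourself and compare against min_count, whose default in gensim's Word2Vec is 5:

from collections import Counter

# Hypothetical toy corpus: a list of already-tokenized sentences.
corpus = [["hello", "yuval"], ["hello", "world"], ["hello", "world"]]

counts = Counter(token for sentence in corpus for token in sentence)
print(counts["yuval"])       # 1
print(counts["yuval"] >= 5)  # False: below the default min_count=5, so it gets trimmed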
To expand on @gojomo's answer: during training, Word2Vec models discard tokens that occur fewer than min_count times, as such tokens are considered uninformative, meaning no useful context can be extrapolated from so few examples.
This means those tokens won't have vectors.
To check this, you can load the model and test whether the vocabulary contains the token you are interested in:
>>> import gensim
>>> model = gensim.models.KeyedVectors.load(...)
>>> 'car' in model
True
>>> 'yuval' in model
False
Since 'yuval' is not in the vocabulary, the in operator returns False for it, and trying to look up its vector raises a KeyError:
>>> model['car']
<numpy array>
>>> model['yuval']
Traceback (most recent call last):
  ...
KeyError: "word 'yuval' not in vocabulary"
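If you'd rather get a sentinel value than an exception, you can guard the lookup (a small sketch; get_vector is my helper name, not a gensim API):

def get_vector(model, token):
    # Return the word's vector, or None for out-of-vocabulary tokens.
    return model[token] if token in model else None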
If you really expect the word to be among the vocabulary words, you can always print them out too:
>>> for token in model.vocab.keys():
...     print(token)
...
(Since model was loaded as KeyedVectors above, the vocabulary lives directly on model.vocab; for a full Word2Vec model it would be model.wv.vocab.)
I am trying to convert a matrix to the type that can be received by gensim's AuthorTopicModel, which means I should convert a matrix to sparse vectors. I have already tried several functions in gensim.matutils, like full2sparse and any2sparse, but something is wrong:
my code:
import numpy
from gensim.matutils import any2sparse

matrix = numpy.array([[1, 0, 1], [0, 1, 1]])
mycorpus = any2sparse(matrix)
print(matrix)
print(mycorpus)
the output:
[[1 0 1]
[0 1 1]]
[(0, 1.0), (0, 1.0), (1, 0.0), (1, 0.0)] #mycorpus
according to the tutorial, mycorpus should look like:
[[(0, 1), (2, 1)],
 [(1, 1), (2, 1)]]
I have no idea what's wrong. I would really appreciate it if anyone could give me some advice.
The Gensim AuthorTopicModel docs describe its desired corpus-format as iterable of list of (int, float).
Those int values would be word-ids, and would ideally be accompanied by an id2word dict which identifies which int means which word.
What's the source of your matrix, & do you know if it's the rows or the columns that represent words, and have a mapping of indexes to words? That will drive the conversion.
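If it turns out that your rows are documents and your columns are word-ids (an assumption about your matrix; flip the flag below if it's transposed), gensim's matutils.Dense2Corpus performs exactly this conversion:

import numpy as np
from gensim.matutils import Dense2Corpus

matrix = np.array([[1, 0, 1],
                   [0, 1, 1]])

# documents_columns=False tells Dense2Corpus that each *row* is a document.
corpus = Dense2Corpus(matrix, documents_columns=False)
print(list(corpus))
# [[(0, 1.0), (2, 1.0)], [(1, 1.0), (2, 1.0)]] -- the format the tutorial shows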
Also, as the docs mention, "The model is closely related to LdaModel. The AuthorTopicModel class inherits LdaModel, and its usage is thus similar."
Have you reviewed guides to Gensim LDA usage, such as the multiple Usage Examples, to see how they prepare their corpus and whether that suggests the necessary steps & formats?
Or, is your corpus still available as texts, so you can directly use the examples there as a model to turn the text into the BoW format (rather than your already-processed matrix)?
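For example, if the texts are still available, the standard Dictionary/doc2bow route produces the required format directly (a sketch with made-up texts):

from gensim.corpora import Dictionary

texts = [["human", "interface", "computer"],
         ["survey", "user", "computer"]]

dictionary = Dictionary(texts)  # doubles as the id2word mapping
corpus = [dictionary.doc2bow(text) for text in texts]
print(corpus)  # a list of lists of (word_id, count) pairs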
If you're still having problems, you should expand your question text with more details, especially how the true corpus matrix that you have was created, and which errors you've encountered (& how you triggered them) that convince you things aren't working.
I have noticed that my gensim Doc2Vec (DBOW) model is sensitive to document tags. My understanding was that these tags are cosmetic, so they should not influence the learned embeddings. Am I misunderstanding something? Here is a minimal example:
from gensim.test.utils import common_texts
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
import numpy as np
import os
os.environ['PYTHONHASHSEED'] = '0'
reps = []
for a in [0, 500]:
    documents = [TaggedDocument(doc, [i + a])
                 for i, doc in enumerate(common_texts)]
    model = Doc2Vec(documents, vector_size=100, window=2, min_count=0,
                    workers=1, epochs=10, dm=0, seed=0)
    reps.append(np.array([model.docvecs[k] for k in range(len(common_texts))]))
reps[0].sum() == reps[1].sum()
This last line returns False. I am working with gensim 3.8.3 and Python 3.5.2. More generally, is there any role that the values of the tags play (assuming they are unique)? I ask because I have found that using different tags for documents in a classification task leads to widely varying performance.
Thanks in advance.
First & foremost, your test isn't even comparing vectors corresponding to the same texts!
In run #1, the vector for the 1st text is in model.docvecs[0]. In run #2, the vector for the 1st text is in model.docvecs[500].
And, in run #2, the vector at model.docvecs[0] is just a randomly-initialized, but never-trained, vector - because none of the training texts had a document tag of (int) 0. (If using pure ints as the doc-tags, Doc2Vec uses them as literal indexes - potentially leaving any unused slots less than your highest tag allocated-and-initialized, but never-trained.)
Since common_texts only has 11 entries, in run #2 the trained vectors sit at indexes 500 and up, so all of the vectors you collect into reps[1] (indexes 0-10) are such never-trained garbage, uncorrelated with any of your texts.
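A corrected collection step would offset each lookup by the run's tag base, reusing the question's loop variable a:

# Look up the *trained* tag for each text, not the raw index k:
reps.append(np.array([model.docvecs[k + a] for k in range(len(common_texts))]))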
However, even after correcting that:
As explained in the Gensim FAQ answer #11, determinism in this algorithm shouldn't generally be expected, given many sources of potential randomness, and the fuzzy/approximate nature of the whole approach. If you're relying on it, or testing for it, you're probably making some unwarranted assumptions.
In general, tests of these algorithms should be evaluating "roughly equivalent usefulness in comparative uses" rather than "identical (or even similar) specific vectors". For example, a test whether apple and orange are roughly at the same positions in each others' nearest-neighbor rankings makes more sense than checking their (somewhat arbitrary) exact vector positions or even cosine-similarity.
Additionally:
- tiny toy datasets like common_texts won't show the algorithm's usual behavior/benefits
- PYTHONHASHSEED is only consulted by the Python interpreter at startup, so setting it from inside an already-running interpreter can't have any effect. But also, the kind of indeterminism it introduces only comes up across separate interpreter launches: a tight loop within a single interpreter run, like this one, wouldn't be affected by it in any case.
Have you checked the magnitude of the differences?
Just running:
delta = reps[0].sum() - reps[1].sum()
gives an aggregate difference of -1.2598932e-05 when I run it.
Comparing dimension-wise:
diff = reps[0] - reps[1]
eps = 10**-4
(np.abs(diff) <= eps).all()
returns True on the vast majority of runs, which means you are getting quite reproducible results, given the complexity of the calculations.
I would blame the numerical stability of the calculations or uncontrolled randomness. Even though you do try to control the random seed, NumPy keeps its own random seed and the random standard library yet another, so you are not controlling all of the sources of randomness. This can also influence the results, but I did not check the actual implementation in gensim and its dependencies.
Change
import os
os.environ['PYTHONHASHSEED'] = '0'
to
import os
import sys

# PYTHONHASHSEED must be set before the interpreter starts, so if it
# isn't, set it and re-launch the same script in a fresh interpreter.
hashseed = os.getenv('PYTHONHASHSEED')
if not hashseed:
    os.environ['PYTHONHASHSEED'] = '0'
    os.execv(sys.executable, [sys.executable] + sys.argv)
I'm interested in Hugging Face's DistilBERT work. Going through their code (https://github.com/huggingface/transformers/blob/master/examples/distillation/train.py), I found that if RoBERTa is used as the student model, they freeze the position embeddings. I'm wondering what this is for:
def freeze_pos_embeddings(student, args):
    if args.student_type == "roberta":
        student.roberta.embeddings.position_embeddings.weight.requires_grad = False
    elif args.student_type == "gpt2":
        student.transformer.wpe.weight.requires_grad = False
I understand the reason for freezing token_type_embeddings, because RoBERTa never used segment embeddings, but why position embeddings?
Thanks a lot for the help!
In most (maybe even all) commonly used Transformers, position embeddings are not trained but defined by an analytically described function (the unnumbered equation on page 6 of the "Attention Is All You Need" paper):
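$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{model}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{model}}}\right)$$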
To save computation time, the Transformers package pre-computes them up to a length of 512 and stores them as a variable that serves as a cache, which should not change during training.
The reason for not training the position embeddings is that the embeddings for later positions would be undertrained, whereas with cleverly, analytically defined position embeddings the network can learn the regularity behind the equations and generalize to longer sequences more easily.
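For illustration, here is a minimal NumPy sketch of how such a table can be pre-computed up to a fixed length (the function name and shapes are mine, not the library's):

import numpy as np

def sinusoidal_position_embeddings(max_len, d_model):
    positions = np.arange(max_len)[:, np.newaxis]     # shape (max_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]    # shape (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)
    table = np.zeros((max_len, d_model))
    table[:, 0::2] = np.sin(angles)                   # even dimensions
    table[:, 1::2] = np.cos(angles)                   # odd dimensions
    return table

print(sinusoidal_position_embeddings(512, 768).shape)  # (512, 768)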
Question:
Given a piece of text like "This is a test", how can we build a machine learning model that outputs the number of words? In this example, the word count is 4. After training, it should be possible to predict the word count of new text.
I know it is easy to write an ordinary program for this (like the code below),
punctuation = {'~', '`', '!', '@', '#', '$', '%', '^', '&', '*'}  # etc.

def count_words(text):
    tokens = text.split()  # tokenize
    return sum(1 for token in tokens if token not in punctuation)
However, this question requires using a machine learning algorithm. I wonder how a machine can learn the concept of counting (currently, we know machine learning is good at classification). Any ideas and suggestions? Thanks in advance.
Failures:
We can use something like word2vec (an encoder) to build word vectors. If we consider a seq2seq approach, we can train on pairs like "This is a test <s> 4 <e>" and "This is very very long sentence and the word count is greater than ten <s> 4 1 <e>" (with "4 1" representing the number 14). However, this does not work, since the attention model is designed to align similar vectors, as in text translation ("This is a test" --> 这(this) 是(is) 一个(a) 测试(test)). It is hard to find a relationship between [this, is, a, test] and 4, which is an aggregated number (i.e. the model does not converge).
We know machine learning is good at classification. If we treat "4" as a class, the number of classes is infinite. If we try a trick and use count/text.length as the prediction target, I have not found a model that fits even the training data set (the model does not converge); for example, if we use many short sentences to train the model, it will fail to predict the length of long sentences. This may be related to an information paradox: in principle we could encode the data of a book as a number 0.x and have a machine mark a position on a rod, splitting it into two parts of lengths a and b where a/b = 0.x, but no such machine can be built.
What about a regression problem?
I think it would work quite well, and in the end it would output nearly whole numbers all the time.
Also, you can train a simple RNN to do the job, assuming you are using a one-hot encoding and take the output from the last state.
If V_h is all zeros except at the space index (where it is 1), and V_x likewise, then the network will effectively sum the spaces; and if c is 1 at the end, the output will be the number of words, for every input length!
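As a toy illustration of that hand-set RNN (V_x and c follow the naming above; this is a sketch with fixed weights, not a trained model):

import numpy as np

text = "this is a test"
alphabet = sorted(set(text))
space_idx = alphabet.index(' ')

V_x = np.zeros(len(alphabet))
V_x[space_idx] = 1.0                 # react only to the space character
h = 0.0                              # hidden state: running space count
for ch in text:
    x = np.zeros(len(alphabet))
    x[alphabet.index(ch)] = 1.0      # one-hot input
    h = 1.0 * h + V_x @ x            # h_t = V_h * h_{t-1} + V_x . x_t, with V_h = 1
c = 1.0
print(int(c * h) + 1)                # 3 spaces + 1 = 4 words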
I think we can treat it as a classification problem, with a character as the input and whether it is a word breaker as the output.
In other words, at some time point t, we output whether the input character at that time point is a word breaker (YES) or not (NO). If yes, then increase the word count. If no, then read the next character.
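For concreteness, here is one way to wire the counting on top of those per-character YES/NO decisions (is_breaker is a trivial stand-in for the learned classifier):

def count_words_by_breaker(text, is_breaker=lambda ch: ch == ' '):
    count, in_word = 0, False
    for ch in text:
        if is_breaker(ch):
            in_word = False   # classifier said YES: the current word ended
        elif not in_word:
            count += 1        # first character of a new word
            in_word = True
    return count

print(count_words_by_breaker("This is a test"))  # 4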
In the modern English language I don't think there are very long words, so a simple RNN model should do, perhaps without concerns about vanishing gradients.
Let me know what you think!
Use NLTK for counting words:
from nltk.tokenize import word_tokenize

text = "God is Great!"
word_count = len(word_tokenize(text))
print(word_count)  # 4 -- note that word_tokenize also returns punctuation ('!') as a token
I've been wondering whether this has been created already, but imagine a function that can validate a string and determine whether or not it is a word, e.g.
print(validateWord("Hello")) --> true
print(validateWord("Haloe")) --> true (may not be a real word but follows the standards of placements of vowels and such)
print(validateWord("sewxdw")) --> false
I'm not asking for code; I would just like to know whether this already exists, and a link to a wiki post about the algorithm would be nice if it does.
What you want is a hidden Markov model, trained on the words in a corpus of English (or whatever language you are interested in). You can then score putative words by whether the model likes them or not. It will only rule out genuinely impossible combinations like "jx", but it should give a low score to unlikely candidates.
You might have better luck breaking words up into phoneme-like symbols (th, ae, qu, ph, etc.) first, rather than building a model on raw letters.
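A minimal sketch of the idea, using a visible first-order Markov chain over raw characters rather than a full HMM (the tiny training list here is made up; a real model would be trained on a large corpus):

from collections import defaultdict

def train_char_model(words):
    # Count character-bigram transitions, with ^ and $ marking word boundaries.
    counts = defaultdict(lambda: defaultdict(int))
    for word in words:
        padded = '^' + word.lower() + '$'
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    # Normalize counts into transition probabilities.
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def score(word, probs):
    # Product of transition probabilities; 0 if any transition was never seen.
    padded = '^' + word.lower() + '$'
    p = 1.0
    for a, b in zip(padded, padded[1:]):
        p *= probs.get(a, {}).get(b, 0.0)
    return p

model = train_char_model(['hello', 'halo', 'hollow', 'yellow'])
print(score('hallo', model) > score('sewxdw', model))  # True: 'hallo' looks word-like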