How to rotate a word2vec onto another word2vec? - gensim

I am training multiple word2vec models with Gensim. Each model has the same parameters and dimensionality, but is trained on slightly different data. I then want to compare how the change in data affected the vector representation of some words.
But every time I train a model, the vector representation of the same word is wildly different. The word's similarities to other words remain similar, but the whole vector space seems to be rotated.
Is there any way I can rotate both word2vec representations so that the same words occupy the same positions in the vector space, or at least positions that are as close as possible?
Thanks in advance.

That the locations of words vary between runs is to be expected. There's no one 'right' place for words, just mutual arrangements that are good at the training task (predicting words from other nearby words) – and the algorithm involves random initialization, random choices during training, and (usually) multithreaded operation which can change the effective ordering of training examples, and thus final results, even if you were to try to eliminate the randomness by reliance on a deterministically-seeded pseudorandom number generator.
There's a class called TranslationMatrix in gensim that implements the learn-a-projection-between-two-spaces method, as used for machine-translation between natural languages in one of the early word2vec papers. It requires you to have some words that you specify should have equivalent vectors – an anchor/reference set – then lets other words find their positions in relation to those. There's a demo of its use in gensim's documentation notebooks:
https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/translation_matrix.ipynb
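For two same-language models, the 'translation' pairs can simply map each anchor word to itself. Here's a rough sketch along those lines, assuming the gensim 3.x TranslationMatrix API demonstrated in that notebook; the toy corpora and anchor list are placeholders for your real models:
from gensim.models import Word2Vec, translation_matrix

# two toy models standing in for your separately trained word2vec models
corpus_a = [['hot', 'cold', 'good', 'bad', 'tamale'], ['good', 'bad', 'hot', 'tamale']]
corpus_b = [['hot', 'cold', 'good', 'bad', 'skiing'], ['good', 'bad', 'cold', 'skiing']]
model_a = Word2Vec(corpus_a, size=10, min_count=1, iter=50)
model_b = Word2Vec(corpus_b, size=10, min_count=1, iter=50)

# anchor words assumed to mean the same thing in both corpora, mapped to themselves
word_pairs = [(w, w) for w in ['hot', 'cold', 'good', 'bad']]

trans = translation_matrix.TranslationMatrix(model_a.wv, model_b.wv, word_pairs)
trans.train(word_pairs)

# project words from model_a's space into model_b's space and inspect neighbours there
print(trans.translate(['hot'], topn=2))
Once the projection is learned, the same word's vectors from the two models can be compared directly in the shared space.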
But, there are some other techniques you could also consider:
transform & concatenate the training corpuses instead, so as to both retain some words that are the same across all corpuses (such as very frequent words) and make other words of interest different per segment. For example, you might leave words like "hot" and "cold" unchanged, but replace words like "tamale" or "skiing" with subcorpus-specific versions, like "tamale(A)", "tamale(B)", "skiing(A)", "skiing(B)". Shuffle all data together for training in a single session, then check the distances/directions between "tamale(A)" and "tamale(B)" - since they were each only trained by their respective subsets of the data. (It's still important to have many 'anchor' words, shared between different sets, to force a correlation on those words, and thus a shared influence/meaning for the varying-words.)
create a model for all the data, with a single vector per word. Save that model aside. Then, re-load it, and try re-training it with just subsets of the whole data. Check how much words move, when trained on just the segments. (It might again help comparability to hold certain prominent anchor words constant. There's an experimental property in the model.trainables, with a name ending _lockf, that lets you scale the updates to each word. If you set its values to 0.0, instead of the default 1.0, for certain word slots, those words can't be further updated. So after re-loading the model, you could 'freeze' your reference words, by setting their _lockf values to 0.0, so that only other words get updated by the secondary training, and they're still bound to have coordinates that make sense with regard to the unmoving anchor words. Read the source code to better understand how _lockf works.)
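Here's a rough sketch of that freeze-the-anchors idea, assuming gensim 3.x internals (where model.trainables.vectors_lockf is a full-length per-word array); the toy corpus, anchor words and subset are placeholders:
from gensim.models import Word2Vec

# toy "all data" model standing in for your full-corpus model
all_sentences = [['hot', 'tamale', 'good'], ['cold', 'skiing', 'good'],
                 ['hot', 'good'], ['cold', 'good']]
model = Word2Vec(all_sentences, size=10, window=2, min_count=1, iter=5)

# a 0.0 in vectors_lockf blocks further updates to that word's vector
for w in ['hot', 'cold', 'good']:
    model.trainables.vectors_lockf[model.wv.vocab[w].index] = 0.0

# continue training on one subset only; the frozen anchors keep their coordinates,
# so the movement of the remaining words is measured against a fixed reference frame
subset_a = [['hot', 'tamale'], ['hot', 'tamale', 'good']]
model.train(subset_a, total_examples=len(subset_a), epochs=5)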

Related

word2vec window size at sentence boundaries

I am using word2vec (and doc2vec) to get embeddings for sentences, but I want to completely ignore word order.
I am currently using gensim, but can use other packages if necessary.
As an example, my text looks like this:
[
['apple', 'banana','carrot','dates', 'elderberry', ..., 'zucchini'],
['aluminium', 'brass','copper', ..., 'zinc'],
...
]
I intentionally want 'apple' to be considered as close to 'zucchini' as it is to 'banana' so I have set the window size to a very large number, say 1000.
I am aware of 2 problems that may arise with this.
Problem 1:
The window might roll in at the start of a sentence creating the following training pairs:
('apple', ('banana')), ('apple', ('banana', 'carrot')), ('apple', ('banana', 'carrot', 'dates')) before it eventually gets to the correct ('apple', ('banana', 'carrot', ..., 'zucchini')).
This would seem to have the effect of making 'apple' closer to 'banana' than to 'zucchini',
since there are so many more pairs containing 'apple' and 'banana' than there are pairs containing 'apple' and 'zucchini'.
Problem 2:
I have heard that pairs are sampled in inverse proportion to the distance from the target word to the context word. This also causes an issue, making nearby words seem more connected than I want them to be.
Is there a way around problems 1 and 2?
Should I be using cbow as opposed to sgns? Are there any other hyperparameters that I should be aware of?
What is the best way to go about removing/ignoring the order in this case?
Thank you
I'm not sure what you mean by "Problem 1" - there's no "roll" or "wraparound" in the usual interpretation of a word2vec-style algorithm's window parameter. So I wouldn't worry about this.
Regarding "Problem 2", this factor can be essentially made negligible by the choice of a giant window value – say for example, a value one million times larger than your largest sentence. Then, any difference in how the algorithm treats the nearest-word and the 2nd-nearest-word is vanishingly tiny.
(More specifically, the way the gensim implementation – which copies the original Google word2vec.c in this respect – achieves a sort of distance-based weighting is actually via random dynamic shrinking of the actual window used. That is, for each visit during training to each target word, the effective window truly used is some random number from 1 to the user-specified window. By effectively using smaller windows much of the time, the nearer words have more influence – just without the cost of performing other scaling on the whole window's words every time. But in your case, with a giant window value, it will be incredibly rare for the effective-window to ever be smaller than your actual sentences. Thus every word will be included, equally, almost every time.)
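To make that concrete, here's a toy illustration (not gensim's actual code) of the effective-window draw and why a giant window neutralizes it:
import random

window = 1000000          # user-specified window
sentence_len = 40         # a realistic sentence length, far below the window

# per visit to a target word, the effective window is drawn roughly like this:
effective_window = random.randint(1, window)

# the chance that the draw is smaller than the sentence is about sentence_len / window,
# i.e. roughly 0.004% here, so in practice every co-occurring word is always included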
All these considerations would be the same using SG or CBOW mode.
I believe a million-times-larger window will be adequate for your needs, but if for some reason it isn't, another way to essentially cancel out any nearness effects would be to ensure that the word order within each of your corpus's items is re-shuffled every time it is accessed as training data. That ensures any nearness advantages will be mixed evenly across all words, especially if each sentence is trained on many times. (In a large-enough corpus, perhaps even a one-time shuffle of each sentence would be enough: then, over all examples of co-occurring words, the word co-occurrences would be sampled in the right proportions even with small windows.)
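If you do go the re-shuffling route, here is a minimal sketch of a corpus wrapper (the class name is just illustrative) that shuffles each sentence's word order on every pass:
import random

class ShuffledSentences:
    # wraps a corpus so each sentence's word order is freshly shuffled on every iteration
    def __init__(self, sentences):
        self.sentences = sentences

    def __iter__(self):
        for sentence in self.sentences:
            tokens = list(sentence)
            random.shuffle(tokens)
            yield tokens
An iterable like this can be passed straight to Word2Vec, so every epoch sees a fresh ordering.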
Other tips:
If your training data starts in some arranged order that clumps words/topics together, it can be beneficial to shuffle them into a random order instead. (It's better if the full variety of the data is interleaved, rather than presented in runs of many similar examples.)
When your data isn't true natural-language data (with its usual distributions & ordering significance), it may be worth it to search further from the usual defaults to find optimal metaparameters. This goes for negative, sample, & especially ns_exponent. (One paper has suggested the optimal ns_exponent for training vectors for recommendation-systems is far different from the usual 0.75 default for natural-language modeling.)
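As a starting point for such a search, a hedged sketch of a gensim 3.x Word2Vec call combining the giant-window advice with the metaparameters worth sweeping (all values here are illustrative, not recommendations):
from gensim.models import Word2Vec

corpus = [['apple', 'banana', 'carrot', 'zucchini'],
          ['aluminium', 'brass', 'copper', 'zinc']]

model = Word2Vec(
    corpus,
    sg=1,                # skip-gram
    size=100,
    window=1000000,      # vastly larger than any sentence, per the advice above
    min_count=1,
    negative=5,          # worth sweeping
    sample=1e-4,         # worth sweeping
    ns_exponent=0.75,    # the default; for non-natural-language data, sweep well away from it
    iter=20,
)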

FastTextKeyedVectors difference between vectors, vectors_vocab and vectors_ngrams instance variables

I downloaded wiki-news-300d-1M-subword.bin.zip and loaded it as follows:
import gensim
print(gensim.__version__)
model = gensim.models.fasttext.load_facebook_model('./wiki-news-300d-1M-subword.bin')
print(type(model))
model_keyedvectors = model.wv
print(type(model_keyedvectors))
model_keyedvectors.save('./wiki-news-300d-1M-subword.keyedvectors')
As expected, I see the following output:
3.8.1
<class 'gensim.models.fasttext.FastText'>
<class 'gensim.models.keyedvectors.FastTextKeyedVectors'>
I also see the following three numpy arrays serialized to the disk:
$ du -h wiki-news-300d-1M-subword.keyedvectors*
127M wiki-news-300d-1M-subword.keyedvectors
2.3G wiki-news-300d-1M-subword.keyedvectors.vectors_ngrams.npy
2.3G wiki-news-300d-1M-subword.keyedvectors.vectors.npy
2.3G wiki-news-300d-1M-subword.keyedvectors.vectors_vocab.npy
I understand vectors_vocab.npy and vectors_ngrams.npy; however, what is vectors.npy used for internally in gensim.models.keyedvectors.FastTextKeyedVectors? If I look at the source code for looking up a word vector, I do not see the attribute vectors being used anywhere; I only see the attributes vectors_vocab and vectors_ngrams being used. However, if I remove the vectors.npy file, I am not able to load the model using the gensim.models.keyedvectors.FastTextKeyedVectors.load method.
Can someone please explain where this variable is used? Can I remove it if all I am interested in is looking up word vectors (to reduce the memory footprint)?
Thanks.
vectors_ngrams are the buckets storing the vectors that are learned from word-fragments (character n-grams). It's a fixed-size array no matter how many n-grams are encountered, as multiple n-grams can 'collide' into the same slot.
vectors_vocab are the full-word-token vectors as trained by the FastText algorithm, for full-words of interest. However, note that the actual word-vector, as returned by FastText for an in-vocabulary word, is defined as being this vector plus all the subword vectors.
vectors stores the actual, returnable full-word vectors for in-vocabulary words. That is: it's the precalculated combination of the vectors_vocab value plus all the word's n-gram vectors.
So, vectors is never directly trained, and can always be recalculated from the other arrays. It probably should not be stored as part of the saved model (as it's redundant info that could be reconstructed on demand).
(It could possibly even be made an optional optimization, for the specific case of FastText – with users who are willing to save memory, but have slower per-word lookup, discarding it. However, this would complicate the very common and important most_similar()-like operations, which are far more efficient if they have a full, ready array of all potential-answer word-vectors.)
If you don't see vectors being directly accessed, perhaps you're not considering methods inherited from superclasses.
While any model that was saved with vectors present will need that file when later .load()ed, you could conceivably save on disk-storage by discarding the model.wv.vectors property before saving, then forcing its reconstruction after loading. You would still be paying the RAM cost, when the model is loaded.
After vectors is calculated, and if you're completely done training, you could conceivably discard the vectors_vocab property to save RAM. (For any known word, the vectors can be consulted directly for instant look-up, and vectors_vocab is only needed in the case of further training or needing to re-generate vectors.)
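As a sketch of that save-smaller / rebuild-later idea, assuming the gensim 3.x FastTextKeyedVectors layout described above, and that your version exposes an adjust_vectors() helper that recomputes vectors from vectors_vocab and vectors_ngrams (check your version's source; the method name is an assumption here):
from gensim.models.fasttext import load_facebook_model
from gensim.models.keyedvectors import FastTextKeyedVectors

kv = load_facebook_model('./wiki-news-300d-1M-subword.bin').wv

# drop the redundant precomputed full-word vectors before saving, skipping one 2.3G .npy
kv.vectors = None
kv.save('./wiki-news-300d-1M-subword.keyedvectors')

# later: reload and rebuild `vectors` from vectors_vocab + vectors_ngrams before use
kv2 = FastTextKeyedVectors.load('./wiki-news-300d-1M-subword.keyedvectors')
kv2.adjust_vectors()     # assumed gensim 3.x helper; otherwise recombine the arrays yourself
print(kv2['hello'][:5])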

How are word vectors co-trained with paragraph vectors in doc2vec DBOW?

I don't understand how word vectors are involved at all in the training process with gensim's doc2vec in DBOW mode (dm=0). I know that it's disabled by default with dbow_words=0. But what happens when we set dbow_words to 1?
In my understanding of DBOW, the context words are predicted directly from the paragraph vectors. So the only parameters of the model are the N p-dimensional paragraph vectors plus the parameters of the classifier.
But multiple sources hint that it is possible in DBOW mode to co-train word and doc vectors. For instance:
section 5 of An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation
this SO answer: How to use Gensim doc2vec with pre-trained word vectors?
So, how is this done? Any clarification would be much appreciated!
Note: for DM, the paragraph vectors are averaged/concatenated with the word vectors to predict the target words. In that case, it's clear that word vectors are trained simultaneously with document vectors, and there are N*p + M*q + classifier parameters (where M is the vocab size and q the word-vector dimensionality).
If you set dbow_words=1, then skip-gram word-vector training is added to the training loop, interleaved with the normal PV-DBOW training.
So, for a given target word in a text, 1st the candidate doc-vector is used (alone) to try to predict that word, with backpropagation adjustments then occurring to the model & doc-vector. Then, a bunch of the surrounding words are each used, one at a time in skip-gram fashion, to try to predict that same target word – with the followup adjustments made.
Then, the next target word in the text gets the same PV-DBOW plus skip-gram treatment, and so on, and so on.
As some logical consequences of this:
training takes longer than plain PV-DBOW - by about a factor equal to the window parameter
word-vectors overall wind up getting more total training attention than doc-vectors, again by a factor equal to the window parameter
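For reference, here is a minimal gensim 3.x sketch of enabling this interleaved mode (the toy corpus and parameter values are purely illustrative):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    TaggedDocument(words=['the', 'cat', 'sat', 'on', 'the', 'mat'], tags=['doc0']),
    TaggedDocument(words=['the', 'dog', 'lay', 'on', 'the', 'rug'], tags=['doc1']),
]

# dm=0 selects PV-DBOW; dbow_words=1 interleaves skip-gram word training with it
model = Doc2Vec(docs, dm=0, dbow_words=1, vector_size=50, window=5, min_count=1, epochs=40)

print(model.docvecs['doc0'][:5])    # doc-vector (model.dv in gensim 4.x)
print(model.wv['cat'][:5])          # word-vector, only meaningfully trained because dbow_words=1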

Relation between two texts with different tags

I'm currently having a problem with the conception of an algorithm.
I want to create a WYSIWYG editor that works alongside the current [bbcode] editor I have.
To do that, I use a div with contenteditable set to true for the WYSIWYG editor and a textarea containing the associated bbcode. So far, no problem. But my concern is that if a user wants to add a tag (for example, the [b] tag), I need to know where they want to insert it.
For that, I need to know exactly where in the bbcode I should insert the tags. I thought of comparing the two texts (one with html tags like <span>, the other with bbcode tags like [b]), and that's where I'm struggling.
I did some research but couldn't find anything that would help me, or I did not understand it correctly (maybe I searched for the wrong things). What I could find is the Jaccard index, but I don't really know how to make it work correctly.
I also thought of another alternative: I could take the code in the WYSIWYG editor before the cursor location and split it every time I encounter an HTML tag. That way, in the bbcode editor, I can search for the first occurrence, then for the second occurrence starting at the last index found, and so on until I reach the place the cursor is pointing at.
I'm not sure if it would work, and I find that solution a bit dirty. Am I totally wrong or should I do it this way?
Thanks for the help.
A popular way of determining the level of similarity between two texts is to compute the Jaccard similarity you mentioned. Citing Wikipedia:
The Jaccard index, also known as Intersection over Union and the Jaccard similarity coefficient, is a statistic used for comparing the similarity and diversity of sample sets. The Jaccard coefficient measures the similarity between finite sample sets, and is defined as the size of the intersection divided by the size of the union of the sample sets: J(A, B) = |A ∩ B| / |A ∪ B|.
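In Python, a direct transcription of this definition over token sets might look like this (a toy sketch; tokenization here is just whitespace splitting):
def jaccard(text_a, text_b):
    # Jaccard index over the sets of whitespace-separated tokens of two texts
    a, b = set(text_a.split()), set(text_b.split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

print(jaccard("the cat sat on the mat", "the cat lay on the rug"))   # 3/7 ≈ 0.43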
If you have a large number of texts, though, computing the full Jaccard index for every possible pair of texts is computationally very expensive. There is another way to approximate this index, called minhashing: use a number of independent hash functions (e.g. 100), each acting like a random permutation of the token universe, and for each one record the minimum hash value over a text's tokens, giving a compact signature per text. Minhashing has the nice property that the probability (over random permutations) that two sets' minhash values agree is exactly their Jaccard index J(A, B).
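A rough sketch of the minhashing idea (the salting scheme and the use of Python's built-in hash are illustrative, not from any particular library):
import random

def minhash_signature(tokens, num_hashes=100, seed=42):
    # each salted use of the built-in hash stands in for one independent hash function;
    # note the built-in hash varies across Python processes, so use a stable hash
    # (e.g. hashlib) if signatures need to be persisted
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, t)) for t in tokens) for salt in salts]

def approx_jaccard(sig_a, sig_b):
    # the fraction of agreeing signature positions estimates the Jaccard index
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

sig1 = minhash_signature("the cat sat on the mat".split())
sig2 = minhash_signature("the cat lay on the rug".split())
print(approx_jaccard(sig1, sig2))          # ≈ 3/7 given enough hash functions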
Another way to cluster similar texts (or any other data) together is to use Locality Sensitive Hashing (LSH), which by itself is an approximation of what KNN does, usually less accurate but definitely faster to compute. The basic idea is to project the data into a low-dimensional binary space: each data point is mapped to an N-bit vector, its hash key. Each hash function h must satisfy the sensitive-hashing property prob[h(x) = h(y)] = sim(x, y), where sim(x, y) in [0, 1] is the similarity function of interest. For dot-product similarity, this can be visualized with random hyperplanes: each hyperplane contributes one bit depending on which side of it a point falls, so a given point's hash might be, say, 101, and everything close to that point tends to get the same hash.
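A toy illustration of that random-hyperplane construction for dot-product similarity (the dimensions and plane count are arbitrary):
import numpy as np

rng = np.random.default_rng(0)
planes = rng.standard_normal((3, 300))    # 3 random hyperplanes over 300-d vectors -> 3-bit hashes

def lsh_bits(x, planes):
    # the sign of each dot product contributes one bit; nearby points tend to share all bits
    return ''.join('1' if np.dot(p, x) >= 0 else '0' for p in planes)

v = rng.standard_normal(300)
print(lsh_bits(v, planes))                # e.g. '101'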
EDIT to answer the comment
No, you asked about text similarity, so that is what I answered. What you are basically asking is how to predict the position of a character in text 2. That depends on whether you analyze the writer's style or just the pure syntax. In either case, IMHO, you need some sort of statistics that will tell you where this character is likely to occur given all the other data/text. You can go with n-grams, RNNs, LSTMs, Markov Chains or any other form of sequential data analysis.

How to align two GloVe models in text2vec?

Let's say I have trained two separate GloVe vector space models (using text2vec in R) based on two different corpora. There could be different reasons for doing so: the two base corpora may come from two different time periods, or two very different genres, for example. I would be interested in comparing the usage/meaning of words between these two corpora. If I simply concatenated the two corpora and their vocabularies, that would not work (the location in the vector space for word pairs with different usages would just be somewhere in the "middle").
My initial idea was to train just one model, but when preparing the texts, append a suffix (_x, _y) to each word (where x and y stand for the usage of word A in corpus x/y), as well as keep a separate copy of each corpus without the suffixes, so that the vocabulary of the final concatenated training corpus would consist of: A, A_x, A_y, B, B_x, B_y ... etc, e.g.:
this is an example of corpus X
this be corpus Y yo
this_x is_x an_x example_x of_x corpus_x X_x
this_y be_y corpus_y Y_y yo_y
I figured the "mean" usages of A and B would serve as a sort of "coordinates" of the space, and I could measure the distance between A_x and A_y in the same space. But then I realized that since A_x and A_y never occur in the same contexts (due to the suffixation of all words, including the ones around them), this would probably distort the space and not work. I also know there is something called the orthogonal Procrustes problem, which relates to aligning matrices, but I wouldn't know how to implement it for my case.
What would be a reasonable way to fit two GloVe models (preferably in R and so that they work with text2vec) into a common vector space, if my final goal is to measure the cosine similarity of word pairs, which are orthographically identical, but occur in two different corpora?
I see 2 possible solutions:
try to initialize the second GloVe model with the solution from the first and hope that the coordinate system won't change too much during the fit of the second model
fit two models and get word vector matrices A, B. Then find the rotation matrix that minimizes the sum of the angles between the rows of A and B (this is the orthogonal Procrustes problem mentioned in the question; see the sketch below)
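For the second option, the rotation is the classic orthogonal Procrustes solution, obtained from a single SVD over the rows of the shared vocabulary. A minimal NumPy sketch (the same few lines port directly to R's svd() and matrix products for use alongside text2vec); the random matrices at the end are only a self-check:
import numpy as np

def procrustes_align(A, B):
    # orthogonal Procrustes: the rotation R minimizing ||A @ R - B||_F is U @ Vt,
    # where U, S, Vt is the SVD of A.T @ B
    U, _, Vt = np.linalg.svd(A.T @ B)
    return A @ (U @ Vt)       # A's row-vectors re-expressed in B's coordinate system

# self-check with random data: B is just A under an unknown rotation
rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 300))                     # stand-in for shared-word vectors from model 1
Q = np.linalg.qr(rng.standard_normal((300, 300)))[0]     # a random orthogonal matrix
B = A @ Q                                                # stand-in for the same words from model 2
A_aligned = procrustes_align(A, B)
print(np.allclose(A_aligned, B))                         # True: the rotation is recovered
Here A and B hold the vectors of the same shared words, in the same row order; after alignment, cosine similarities between a word's row in A_aligned and in B measure its drift between corpora.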
Also check http://nlp.stanford.edu/projects/histwords/; maybe it will help with the methodology.
This also seems like a good question for https://math.stackexchange.com/.
