Extracting Topic distribution from gensim LDA model - gensim

I created an LDA model for some text files using the gensim package in Python. I want to get the topic distributions for the learned model. Is there any method in gensim's LdaModel class, or some other solution, to get the topic distributions from the model?
For example, I use the coherence model to find the model with the best coherence value over the number of topics in the range 1 to 5. After getting the best model, I use the get_document_topics method (thanks kenhbs) to get the topic distribution of the document that was used for creating the model.
import gensim
from gensim import corpora

id2word = corpora.Dictionary(doc_terms)
bow = id2word.doc2bow(doc_terms)

max_coherence = -1
best_lda_model = None
for num_topics in range(1, 6):
    lda_model = gensim.models.ldamodel.LdaModel(corpus=bow, num_topics=num_topics)
    coherence_model = gensim.models.CoherenceModel(model=lda_model, texts=doc_terms, dictionary=id2word)
    coherence_value = coherence_model.get_coherence()
    if coherence_value > max_coherence:
        max_coherence = coherence_value
        best_lda_model = lda_model
The best model has 4 topics:
print(best_lda_model.num_topics)
4
But when I use get_document_topics, I get fewer than 4 values for the document's distribution.
topic_distrs = best_lda_model.get_document_topics(bow)
print(len(topic_distrs))
3
My question is: for the best LDA model with 4 topics (chosen using the coherence model), why does get_document_topics return fewer topics for the same document, and why do some topics have a very small probability (less than 1e-8)?

From the documentation, you can use two methods for this.
If you are aiming to get the main terms in a specific topic, use get_topic_terms:
from gensim.models.ldamodel import LdaModel

K = 10
lda = LdaModel(some_corpus, num_topics=K)
lda.get_topic_terms(5, topn=10)

# Or for all topics
for i in range(K):
    lda.get_topic_terms(i, topn=10)
You can also print the entire underlying np.ndarray (called either beta or phi in standard LDA papers, dimensions are (K, V) or (V, K)).
phi = lda.get_topics()
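As a quick sanity check (a small sketch, not part of the original answer): get_topics() returns the normalized topic-term matrix, so
print(phi.shape)     # (num_topics, vocabulary_size), i.e. (K, V)
print(phi[0].sum())  # each row sums to approximately 1.0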
edit:
From the link I included in the original answer: if you are looking for a document's topic distribution, use
res = lda.get_document_topics(bow)
As can be read from the documentation, the resulting object contains the following three lists:
list of (int, float) – Topic distribution for the whole document. Each element in the list is a pair of a topic’s id, and the probability that was assigned to it.
list of (int, list of (int, float)), optional – Most probable topics per word. Each element in the list is a pair of a word’s id, and a list of topics sorted by their relevance to this word. Only returned if per_word_topics was set to True.
list of (int, list of float), optional – Phi relevance values, multiplied by the feature length, for each word-topic combination. Each element in the list is a pair of a word’s id and a list of the phi values between this word and each topic. Only returned if per_word_topics was set to True.
Now,
tops, probs = zip(*res[0])
probs will contain K (for you, 4) probabilities. Some may be zero, but they should sum up to 1.
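If you need a dense, fixed-length vector even when some topics are filtered out, here is a minimal sketch (reusing bow and best_lda_model from the question; topics that get_document_topics omits simply stay at zero):
import numpy as np

dense = np.zeros(best_lda_model.num_topics)
for topic_id, prob in best_lda_model.get_document_topics(bow):
    dense[topic_id] = prob
print(dense)  # length-K vector; entries for filtered-out topics remain 0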

You can play with the minimum_probability parameter and set it to a very small value like 0.000001 (or to 0.0, as below).
topic_vector = [x[1] for x in ldamodel.get_document_topics(new_doc_bow, minimum_probability=0.0, per_word_topics=False)]

Just type,
pd.DataFrame(lda_model.get_document_topics(doc_term_matrix))
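For a cleaner documents-by-topics table, a small sketch (assuming doc_term_matrix is a list of bag-of-words documents; minimum_probability=0.0 keeps a column for every topic):
import pandas as pd

rows = [dict(lda_model.get_document_topics(bow, minimum_probability=0.0)) for bow in doc_term_matrix]
doc_topic_df = pd.DataFrame(rows).fillna(0.0)  # one row per document, one column per topic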

Related

How do I add noise/variability to a dataset in Python, given the CV?

Given a dataset of blood results, say cholesterol level, and knowing that the instrument that produced those results is subject to a known degree of variability, how would I add that variability back into the dataset? i.e. I want to assume the result in the original dataset is the true/mean value, and then produce new results that are subject to the known variability of the instrument.
In Excel you use =NORM.INV(RAND(), mean, std_dev), where RAND() provides a random value between 0 and 1, "mean" will be the original value and I have the CV so I can calculate the SD. NORM.INV then provides the inverse of the cumulative normal distribution function.
I've done the following to create a new column with my new values, but would like to know if it is valid (i.e., will each row have a different random number between 0 and 1 as the probability, and is this formula equivalent to NORM.INV?).
df8000['HDL_1'] = norm.ppf(random(), loc = df8000['HDL_0'], scale = TAE_df.loc[0,'HDL'])
Thanks in advance!
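For illustration, a minimal sketch (hypothetical, reusing the column names from the question) that draws one uniform value per row, which is the vectorised equivalent of calling NORM.INV(RAND(), mean, sd) in every cell:
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng()
u = rng.random(len(df8000))  # one random probability per row
df8000['HDL_1'] = norm.ppf(u, loc=df8000['HDL_0'], scale=TAE_df.loc[0, 'HDL'])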

Is there a way to infer topic distributions on unseen document from gensim LDA pre-trained model using matrix multiplication?

Is there a way to get the topic distribution of an unseen document using a pretrained LDA model without using the LDA_Model[unseenDoc] syntax? I am trying to integrate my LDA model into a web application, and if there were a way to use matrix multiplication to get a similar result, I could use the model in JavaScript.
For example, I tried the following:
import numpy as np
import gensim
from gensim.corpora import Dictionary
from gensim import models
import nltk
from nltk.stem import WordNetLemmatizer, SnowballStemmer
nltk.download('wordnet')
def Preprocesser(text_list):
    smallestWordSize = 3
    processedList = []
    for token in gensim.utils.simple_preprocess(text_list):
        if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > smallestWordSize:
            processedList.append(StemmAndLemmatize(token))
    return processedList
lda_model = models.LdaModel.load('LDAModel\GoldModel') #Load pretrained LDA model
dictionary = Dictionary.load("ModelTrain\ManDict") #Load dictionary model was trained on
#Sample Unseen Doc to Analyze
doc = "I am going to write a string about how I can't get my task executor \
to travel properly. I am trying to use the \
AGV navigator, but it doesn't seem to be working network. I have been trying\
to use the AGV Process flow but that isn't working either speed\
trailer offset I am now going to change this so I can see how fast it runs"
termTopicMatrix = lda_model.get_topics() #Get Term-topic Matrix from pretrained LDA model
cleanDoc = Preprocesser(doc) #Tokenize, lemmatize, clean and stem words
bowDoc = dictionary.doc2bow(cleanDoc) #Create bow using dictionary
dictSize = len(termTopicMatrix[0]) #Get length of terms in dictionary
fullDict = np.zeros(dictSize) #Initialize array which is length of dictionary size
First = [first[0] for first in bowDoc] #Get index of terms in bag of words
Second = [second[1] for second in bowDoc] #Get frequency of term in bag of words
fullDict[First] = Second #Add word frequency to full dictionary
print('Matrix Multiplication: \n', np.dot(termTopicMatrix,fullDict))
print('Conventional Syntax: \n', lda_model[bowDoc])
Output:
Matrix Multiplication:
[0.0283254 0.01574513 0.03669142 0.01671816 0.03742738 0.01989461
0.01558603 0.0370233 0.04648389 0.02887623 0.00776652 0.02147539
0.10045133 0.01084273 0.01229849 0.00743788 0.03747379 0.00345913
0.03086953 0.00628912 0.29406082 0.10656977 0.00618827 0.00406316
0.08775404 0.00785408 0.02722744 0.09957815 0.01669402 0.00744392
0.31177135 0.03063149 0.07211428 0.01192056 0.03228589]
Conventional Syntax:
[(0, 0.070313625), (2, 0.056414187), (18, 0.2016589), (20, 0.46500313), (24, 0.1589748)]
In the pretrained model there are 35 topics and 1155 words.
In the "Conventional Syntax" output, the first element of each tuple is the index of the topic and the second element is the probability of the topic. In the "Matrix Multiplication" version, the probability is the index and the value is the probability. Clearly the two don't match up.
For example, the lda_model[unseenDoc] shows that topic 0 has a 0.07 probability, but the matrix multiplication method says that topic has a 0.028 probability. Am I missing a step here?
You can review the full source code used by LdaModel's get_document_topics() method in your installation, or online at:
https://github.com/RaRe-Technologies/gensim/blob/e75f6c8e8d1dee0786b1b2cd5ef60da2e290f489/gensim/models/ldamodel.py#L1283
(It also makes use of the inference() method in the same file.)
It's doing a lot more scaling/normalization/clipping than your code, which is likely the cause of the discrepancy. But you should be able to examine, line by line, where your process and gensim's differ, and adjust your steps to match.
It also shouldn't be hard to use the gensim code's steps as guidance for creating parallel JavaScript code that, given the right parts of the model's state, can reproduce its results.
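For reference, a rough Python sketch of that extra inference work (loosely following the inference() method linked above; attribute names such as expElogbeta and alpha come from that source, and this is an approximation rather than gensim's exact code):
import numpy as np
from scipy.special import psi  # digamma

def infer_doc_topics(lda_model, bow, iterations=50):
    # Restrict the (K x V) exp(E[log beta]) matrix to the words present in this document
    ids = [idx for idx, _ in bow]
    counts = np.array([cnt for _, cnt in bow], dtype=float)
    expElogbeta_d = lda_model.expElogbeta[:, ids]
    # Random initialisation of the variational Dirichlet parameters gamma
    gamma = np.random.gamma(100.0, 1.0 / 100.0, lda_model.num_topics)
    for _ in range(iterations):
        expElogtheta = np.exp(psi(gamma) - psi(gamma.sum()))
        phinorm = expElogtheta @ expElogbeta_d + 1e-100
        gamma = lda_model.alpha + expElogtheta * ((counts / phinorm) @ expElogbeta_d.T)
    return gamma / gamma.sum()  # normalised topic distribution, comparable to lda_model[bowDoc]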

How to add words and vectors manually to Word2vec gensim?

Let's say word2vec.model is my trained word2vec model. When an out-of-vocabulary word (oov_word) occurs, I compute a vector vec using the compute_vec(oov_word) method. Now, I want to add/append oov_word and its corresponding vector vec to my already trained model word2vec.model.
I have already checked the below links. But they do not answer my question.
Combining/adding vectors from different word2vec models
https://datascience.stackexchange.com/questions/49431/how-to-train-an-existing-word2vec-gensim-model-on-new-words
https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.BaseKeyedVectors.add
from gensim.models.keyedvectors import WordEmbeddingsKeyedVectors
vector_length = 100
kv = WordEmbeddingsKeyedVectors(vector_length)
# wordList - list of words
# vectorList - list of the vector corresponding to the words
kv.add(wordList, vectorList)
kv.most_similar(word1) # gives the list of words similar to word1
For gensim 4.0+:
w2v_model.wv.add_vectors(keys, vectors)
Check the docs.
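Putting it together for the original question, a minimal sketch (gensim 4.x assumed; the random vector stands in for whatever compute_vec(oov_word) produces):
import numpy as np
from gensim.models import Word2Vec

w2v_model = Word2Vec.load("word2vec.model")
oov_word = "some_unseen_word"
vec = np.random.rand(w2v_model.vector_size)  # placeholder for compute_vec(oov_word)
w2v_model.wv.add_vectors([oov_word], [vec])
print(w2v_model.wv.most_similar(oov_word, topn=5))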

Finding the most commonly occurring pairs

Say that I have a list (or array) that links Suppliers with the materials they supply. For example, an array of the form
[[Supplier_1, Material_a], [Supplier_2, Material_a], [Supplier_3, Material_a], [Supplier_1, Material_b], [Supplier_2, Material_c], [Supplier_3, Material_b], ...]
I am interested in finding the list of suppliers that supply at least k materials that a particular supplier, say Supplier_1, supplies.
One way that I can think of is to pair all suppliers with Supplier_1 for each material Supplier_1 supplies
[[Supplier_1, Supplier_2, Material_a], [Supplier_1, Supplier_3, Material_a], [Supplier_1, Supplier_3, Material_b]...]
and then count the number of times each pair is present
[[Supplier_1, Supplier_2, 1], [Supplier_1, Supplier_3, 2]...]
The problem is that this approach can be very time consuming since the list provided can be quite long. I was wondering if there is a better way to do this.
You would put the materials of Supplier_1 in a hash set, so that you can verify for any material whether it is supplied by Supplier_1 in constant time.
Once you have that, you can iterate over the data again and, in a dictionary (hash map), keep a count per supplier, which you increment each time the material is in the above-mentioned set.
In Python it would look like this:
def getsuppliers(pairs, selected_supplier, k):
    materialset = set()
    countmap = {}  # a dictionary with <key=supplier, value=count> pairs
    for supplier, material in pairs:
        if supplier == selected_supplier:
            materialset.add(material)
        countmap[supplier] = 0
    # An optional quick exit: if the selected supplier does not have k materials,
    # there is no use in continuing...
    if len(materialset) < k:
        return []  # no supplier meets the requirement
    for supplier, material in pairs:
        if material in materialset:
            countmap[supplier] = countmap[supplier] + 1
    result = []
    for supplier, count in countmap.items():
        if count >= k:
            result.append(supplier)
    return result
NB: this would include the selected supplier also, provided it has at least k materials.
All operations within each individual loop body have constant time complexity, so the overall time complexity is O(n), where n is the size of the input list (pairs).
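For instance, a quick (hypothetical) check with the example-style data from the question:
pairs = [
    ["Supplier_1", "Material_a"], ["Supplier_2", "Material_a"],
    ["Supplier_3", "Material_a"], ["Supplier_1", "Material_b"],
    ["Supplier_2", "Material_c"], ["Supplier_3", "Material_b"],
]
print(getsuppliers(pairs, "Supplier_1", 2))  # ['Supplier_1', 'Supplier_3']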

Gensim LDA topic assignment

I am hoping to assign each document to one topic using LDA. Now I realise that what you get from LDA is a distribution over topics. However, as you can see from the last line below, I assign each document to its most probable topic.
My question is this: I have to run lda[corpus] essentially a second time in order to get these topics. Is there some other built-in gensim function that will give me these topic assignment vectors directly? Especially since the LDA algorithm has already passed through the documents, might it have saved these topic assignments?
# Get the Dictionary and BoW of the corpus after some stemming/ cleansing
texts = [[stem(word) for word in document.split() if word not in STOPWORDS] for document in cleanDF.text.values]
dictionary = corpora.Dictionary(texts)
dictionary.filter_extremes(no_below=5, no_above=0.9)
corpus = [dictionary.doc2bow(text) for text in texts]
# The actual LDA component
lda = models.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=30, chunksize=10000, passes=10, workers=4)
# Assign each document to most prevalent topic
lda_topic_assignment = [max(p,key=lambda item: item[1]) for p in lda[corpus]]
There is no other built-in gensim function that will give the topic assignment vectors directly.
Your point is valid that the LDA algorithm has already passed through the documents, but the implementation works by updating the model in chunks (based on the value of the chunksize parameter), so it does not keep the entire corpus in memory.
Hence you have to use lda[corpus] or the lda.get_document_topics() method.
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
test = LDA[corpus[0]]
print(test)
sorted(test, reverse=True, key=lambda x: x[1])
Topics = ['Topic_' + str(sorted(LDA[i], reverse=True, key=lambda x: x[1])[0][0]).zfill(3) for i in corpus]
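Equivalently, a small sketch of the same most-probable-topic assignment via get_document_topics (minimum_probability=0.0 makes sure every topic is present before taking the max):
lda_topic_assignment = [
    max(lda.get_document_topics(bow, minimum_probability=0.0), key=lambda pair: pair[1])[0]
    for bow in corpus
]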
