Extracting food items from sentences - algorithm

Given a sentence:
I had peanut butter and jelly sandwich and a cup of coffee for
breakfast
I want to be able to extract the following food items from it:
peanut butter and jelly sandwich
coffee
So far, using POS tagging, I have been able to extract the individual food items, i.e.
peanut, butter, jelly, sandwich, coffee
But like I said, what I need is peanut butter and jelly sandwich instead of the individual items.
Is there some way of doing this without having a corpus or database of food items in the backend?

You can attempt this with a corpus of food items as a training set, but the approach below also works without one.
Instead of doing simple POS tagging, do dependency parsing combined with POS tagging.
That way you can find relations between the tokens of a phrase, and by walking the dependency tree with restricted conditions such as noun-noun (compound) dependencies you can extract the relevant chunk.
You can use spaCy for dependency parsing. Here is the output from displaCy:
https://demos.explosion.ai/displacy/?text=peanut%20butter%20and%20jelly%20sandwich%20is%20delicious&model=en&cpu=1&cph=1
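A minimal sketch of that idea with spaCy (assuming the en_core_web_sm model is installed via python -m spacy download en_core_web_sm): for each noun it prints the span of its dependency subtree, which keeps multi-word phrases such as 'peanut butter and jelly sandwich' together.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("peanut butter and jelly sandwich is delicious")

# For every noun, take the full span of its dependency subtree
# (left_edge .. right_edge); nested spans will also be printed.
for token in doc:
    if token.pos_ == "NOUN":
        span = doc[token.left_edge.i : token.right_edge.i + 1]
        print(span.text)

# spaCy also exposes ready-made noun chunks:
print([chunk.text for chunk in doc.noun_chunks])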
You can use freely available data such as https://en.wikipedia.org/wiki/Lists_of_foods (or something better) as a training set to create a base set of food items, e.g. from the hyperlinks in the crawled tree.
Based on the dependency parsing of your new data, you can keep enriching the base data. For example: if 'butter' exists in your corpus, and 'peanut butter' is a frequently encountered pair of tokens, then 'peanut' and 'peanut butter' also get added to the corpus (see the sketch below).
The corpus can be maintained in a file that is loaded into memory while processing, or in a database like Redis or Aerospike.
Make sure you work with normalized text, i.e. lowercased, special characters cleaned, and words lemmatized/stemmed, both in the corpus and in the data you process. That will increase your coverage and accuracy.
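A minimal sketch of that enrichment idea, assuming an in-memory set as the corpus; the seed items and frequency threshold are placeholders.
import nltk
from collections import Counter

corpus = {"butter", "coffee"}      # placeholder seed corpus
text = "i had peanut butter and jelly sandwich and a cup of coffee for breakfast"
tagged = nltk.pos_tag(nltk.word_tokenize(text))   # text already lowercased/normalized

# Count adjacent noun-noun pairs; when the second noun is already a known
# food item, add the first noun and the combined phrase to the corpus.
pair_counts = Counter(nltk.bigrams(tagged))
MIN_COUNT = 1                      # placeholder frequency threshold
for ((w1, t1), (w2, t2)), count in pair_counts.items():
    if (count >= MIN_COUNT and t1.startswith("NN")
            and t2.startswith("NN") and w2 in corpus):
        corpus.add(w1)             # e.g. 'peanut'
        corpus.add(f"{w1} {w2}")   # e.g. 'peanut butter'

print(sorted(corpus))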

First, extract all noun phrases using NLTK's chunking (code copied from here):
import nltk

# NP patterns; <NN.*> matches any noun tag (NN, NNS, NNP, NNPS).
patterns = """
    NP: {<JJ>*<NN.*>+}
        {<JJ>*<NN.*><CC>*<NN.*>+}
        {<NP><CC><NP>}
        {<RB><JJ>*<NN.*>+}
    """
NPChunker = nltk.RegexpParser(patterns)

def prepare_text(input):
    sentences = nltk.sent_tokenize(input)
    sentences = [nltk.word_tokenize(sent) for sent in sentences]
    sentences = [nltk.pos_tag(sent) for sent in sentences]
    sentences = [NPChunker.parse(sent) for sent in sentences]
    return sentences

def parsed_text_to_NP(sentences):
    nps = []
    for sent in sentences:
        tree = NPChunker.parse(sent)
        print(tree)
        for subtree in tree.subtrees():
            if subtree.label() == 'NP':
                t = subtree
                t = ' '.join(word for word, tag in t.leaves())
                nps.append(t)
    return nps

def sent_parse(input):
    sentences = prepare_text(input)
    nps = parsed_text_to_NP(sentences)
    return nps

if __name__ == '__main__':
    print(sent_parse('I ate peanut butter and beef burger and a cup of coffee for breakfast.'))
This will POS-tag your sentences and use a regex parser to extract noun phrases.
1. Define and refine your noun phrase regex
You'll need to change the patterns regex to define and refine your noun phrases. For example, {<NP><CC><NP>} tells the parser that an NP followed by a coordinator (CC) like 'and' and another NP is itself an NP.
2. Change from the NLTK POS tagger to the Stanford POS tagger
I also noticed that NLTK's POS tagger is not performing very well (e.g. it considers 'had peanut' a verb phrase). You can switch to the Stanford POS tagger if you want.
3. Remove smaller noun phrases
After you have extracted all the noun phrases for a sentence, you can remove the ones that are part of a bigger noun phrase. For example, 'beef burger' and 'peanut butter' should be removed because they are part of the bigger noun phrase 'peanut butter and beef burger' (see the sketch after the example output below).
4. Remove noun phrases in which none of the words are in a food lexicon
You will get noun phrases like 'school bus'. If neither 'school' nor 'bus' is in a food lexicon that you can compile from Wikipedia or WordNet, remove the noun phrase. In this case 'cup' and 'breakfast' are removed because they are (hopefully) not in your food lexicon.
The current code returns
['peanut butter and beef burger', 'peanut butter', 'beef burger', 'cup', 'coffee', 'breakfast']
for input
print(sent_parse('I ate peanut butter and beef burger and a cup of coffee for breakfast.'))
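A minimal sketch of steps 3 and 4, starting from the noun phrases above and using a tiny placeholder food lexicon:
noun_phrases = ['peanut butter and beef burger', 'peanut butter', 'beef burger',
                'cup', 'coffee', 'breakfast']
FOOD_LEXICON = {'peanut', 'butter', 'beef', 'burger', 'coffee'}   # placeholder lexicon

# Step 3: drop noun phrases that are contained in a longer noun phrase.
maximal = [np for np in noun_phrases
           if not any(np != other and np in other for other in noun_phrases)]

# Step 4: keep a phrase only if at least one of its words is in the food lexicon.
foods = [np for np in maximal
         if any(word in FOOD_LEXICON for word in np.split())]

print(foods)   # ['peanut butter and beef burger', 'coffee']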

Too much for a comment, but not really an answer:
I think you would at least get closer if, when you get two foods without a proper separator, you combined them into one food. That would give peanut butter, jelly sandwich, coffee.
If you have correct English you could detect this case by count/non-count. Correct the original to "I had a peanut butter and jelly sandwich and a cup of coffee for breakfast". Butter is non-count, so you can't have "a butter", but you can have "a sandwich". Thus the "a" must apply to "sandwich", and despite the "and", "peanut butter" and "jelly sandwich" must be the same item: "peanut butter and jelly sandwich". Your mistaken sentence would parse the other way, though!
I would be very surprised if you could come up with general rules that cover every case, though. I would come at this sort of thing expecting that a few cases will leak through and need a database to catch them.

You could search for n-grams in your text where you vary the value of n. For example, if n=5 then you would extract "peanut butter and jelly sandwich" and "cup of coffee for breakfast", depending on where you start your search in the text for groups of five words. You won't need a corpus of text or a database to make the algorithm work.
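For instance, a quick NLTK sketch that lists all n-grams for a few window sizes (the range of n here is an assumption):
from nltk import ngrams, word_tokenize   # word_tokenize needs the 'punkt' data

text = "I had peanut butter and jelly sandwich and a cup of coffee for breakfast"
tokens = word_tokenize(text)

for n in range(2, 6):                    # bigrams up to 5-grams
    for gram in ngrams(tokens, n):
        print(" ".join(gram))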

A rule-based approach with a lexicon of all food items would work here.
You can use GATE for this and write JAPE rules with it.
In the above example your JAPE rule would have a condition to find all (NP CC NP) where the NPs are in a "FOOD LEXICON"; a rough Python approximation is sketched below.
I can share detailed JAPE code in the event that you plan to go this route.
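Not JAPE itself, but a rough Python approximation of that condition, with a tiny placeholder food lexicon:
FOOD_LEXICON = {"peanut butter", "jelly sandwich", "coffee"}   # placeholder lexicon

def keep_food_phrases(noun_phrases):
    """Keep coordinated noun phrases whose conjuncts all appear in the food lexicon."""
    kept = []
    for np in noun_phrases:
        conjuncts = [part.strip() for part in np.split(" and ")]
        if all(part in FOOD_LEXICON for part in conjuncts):
            kept.append(np)
    return kept

print(keep_food_phrases(["peanut butter and jelly sandwich", "school bus", "coffee"]))
# ['peanut butter and jelly sandwich', 'coffee']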

Related

Limiting BART HuggingFace Model to complete sentences of maximum length

I'm implementing BART on HuggingFace, see reference: https://huggingface.co/transformers/model_doc/bart.html
Here is the code from their documentation that works in creating a generated summary:
from transformers import BartModel, BartTokenizer, BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')

def baseBart(ARTICLE_TO_SUMMARIZE):
    inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
    # Generate Summary
    summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=25, early_stopping=True)
    return [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]
I need to impose conciseness with my summaries so I am setting max_length=25. In doing so though, I'm getting incomplete sentences such as these two examples:
EX1: The opacity at the left lung base appears stable from prior exam.
There is elevation of the left hemidi
EX 2: There is normal mineralization and alignment. No fracture or
osseous lesion is identified. The ankle mort
How do I make sure that the predicted summary consists only of coherent sentences with complete thoughts and remains concise? If possible, I'd prefer not to run a regex on the summarized output and cut off any text after the last period, but to actually have the BART model produce sentences within the maximum length.
I tried setting truncation=True in the model but that didn't work.

I want to use elastic search for NER

I want to use Elasticsearch for NER.
Imagine that the Elasticsearch engine holds data consisting of keys and values: the key is a word, and the value is a list of entities. For example: key: apple, value: [fruit, company].
When I send a query consisting of a sentence, the sentence can have several candidate keywords. So my question is whether Elasticsearch has functionality that returns results for each candidate keyword in a single query.
Example:
query: "what is apple pie"
candidate keywords: "what", "what is", "what is apple", "what is apple pie", "is", "is apple", "is apple pie", "apple", "apple pie", "pie"
existing keys in the DB: "apple", "apple pie", "pie"
returned result: "apple": [fruit, company], "apple pie": [food], "pie": [food]
Thanks.
In my case, I use CoreNLP to perform the extraction. Given the input to the NLP REST server, the resulting output of tokenization, NER, and additional parsing such as lemmatization, sentiment, and coreference is stored in Elasticsearch for later discoverability, in terms of how to keep training CoreNLP. This might not be the answer on how to use Elasticsearch to nail NLP tasks, since CoreNLP (or spaCy, which is great too) is the trainable machine learning tool that should be used for that. So I assume you meant to say "I want to use Elasticsearch for searching and analyzing extracted NER"; if yes, there you go.
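For the lookup itself, here is a rough sketch using the official elasticsearch Python client; the index name "entities" and the fields "key"/"value" are assumptions, and the exact call style varies between client versions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def candidate_keywords(sentence):
    """All contiguous word n-grams of the sentence."""
    words = sentence.split()
    return [" ".join(words[i:j])
            for i in range(len(words))
            for j in range(i + 1, len(words) + 1)]

def lookup_entities(sentence):
    candidates = candidate_keywords(sentence)
    # A single terms query returns every candidate that exists as a key.
    resp = es.search(index="entities",
                     query={"terms": {"key": candidates}},
                     size=len(candidates))
    return {hit["_source"]["key"]: hit["_source"]["value"]
            for hit in resp["hits"]["hits"]}

print(lookup_entities("what is apple pie"))
# e.g. {'apple': ['fruit', 'company'], 'apple pie': ['food'], 'pie': ['food']}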

Writing My Prolog Code

I am writing my first Prolog code and I am having some difficulties with it. I was wondering if anyone could help me out.
I am writing a program that needs to follow these rules:
For verb phrases, noun phrases come before transitive verbs.
Subjects (nominative noun phrases) are followed by ga.
Direct objects (accusative noun phrases) are followed by o.
It must be able to form these sentences with the words given in the code:
Adamu ga waraimasu (Adam laughs)
Iivu ga nakimasu (Eve cries)
Adamu ga Iivu o mimasu (Adam watches Eve)
Iivu ga Adamu o tetsudaimasu (Eve helps Adam)
Here is my code. It is mostly complete, except that I don't know if the rules in it are correct:
japanese([adamu], [nounphrase], [adam], [entity]).
japanese([iivu], [nounphrase], [eve], [entity]).
japanese([waraimasu], [verb, intransitive], [laughs], [property]).
japanese([nakimasu], [verb, intransitive], [cries], [property]).
japanese([mimasu], [verb, transitive], [watches], [relation]).
japanese([tetsudaimasu], [verb, transitive], [helps], [relation]).

japanese(A, [verbphrase], B, [property]) :-
    japanese(A, [verb, intransitive], B, [property]).

japanese(A, [nounphrase, accusative], B, [entity]) :-
    japanese(C, [nounphrase], B, [entity]),
    append([ga], C, A).

japanese(A, [verbphrase], B, [property]) :-
    japanese(C, [verb, transitive], D, [relation]),
    japanese(E, [nounphrase, accusative], F, [entity]),
    append(C, E, A),
    append(D, F, B).

japanese(A, [sentence], B, [proposition]) :-
    japanese(C, [nounphrase], D, [entity]),
    japanese(E, [verbphrase], F, [property]),
    append(E, C, A),
    append(F, D, B).

Perform sentence segmentation on paragraphs without punctuation?

I have a bunch of badly formatted text with lots of missing punctuation. I want to know if there is any method to segment text into sentences when periods, semicolons, capitalization, etc. are missing.
For example, consider the paragraph: "the lion is called the king of the forest it has a majestic appearance it eats flesh it can run very fast the roar of the lion is very famous".
This text should be segmented as separate sentences:
the lion is called the king of the forest
it has a majestic appearance
it eats flesh
it can run very fast
the roar of the lion is very famous
Can this be done or is it impossible? Any suggestion is much appreciated!
You can try using the following Python implementation from here.
import torch
model, example_texts, languages, punct, apply_te = torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_te')
#your text goes here. I imagine it is contained in some list
input_text = input('Enter input text\n')
apply_te(input_text, lan='en')
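Once punctuation and capitalization have been restored (by the model above or by any other means), a standard sentence splitter can finish the job; a minimal NLTK sketch:
import nltk

# nltk.download('punkt')  # may be needed once for the tokenizer data
restored = "The lion is called the king of the forest. It has a majestic appearance."
print(nltk.sent_tokenize(restored))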

Rewriting sentences while retaining semantic meaning

Is it possible to use WordNet to rewrite a sentence so that the semantic meaning of the sentence stays the same (or mostly the same)?
Let's say I have this sentence:
Obama met with Putin last week.
Is it possible to use WordNet to rephrase the sentence into alternatives like:
Obama and Putin met the previous week.
Obama and Putin met each other a week ago.
If changing the sentence structure is not possible, can WordNet be used to replace only the relevant synonyms?
For example:
Obama met Putin the previous week.
If the question is about the possibility of using WordNet to do sentence paraphrases: it is possible, but only with additional grammatical/syntactic components. You would need a system that:
First gets the individual semantics of the tokens and parses the sentence for its syntax.
Then understands the overall semantics of the composite sentence (especially if it is metaphorical).
Then rehashes the sentence with some grammatical generator.
Up till now, the only parser/generator I know of that can do something like that is ACE, and it takes a LOT of hacking to make it work as a paraphrase generator: http://sweaglesw.org/linguistics/ace/
So to answer your questions:
Is it possible to use WordNet to rephrase the sentence into alternatives? Sadly, WordNet isn't a silver bullet. You will need more than semantics for a paraphrase task.
If changing the sentence structure is not possible, can WordNet be used to replace only the relevant synonyms? Yes, this is possible, BUT figuring out which synonyms are replaceable is hard... and you would also need some morphology/syntax component.
First you will run into a problem of multiple senses per word:
from nltk.corpus import wordnet as wn

sent = "Obama met Putin the previous week"
for i in sent.split():
    possible_senses = wn.synsets(i)
    print(i, len(possible_senses), possible_senses)
[out]:
Obama 0 []
met 13 [Synset('meet.v.01'), Synset('meet.v.02'), Synset('converge.v.01'), Synset('meet.v.04'), Synset('meet.v.05'), Synset('meet.v.06'), Synset('meet.v.07'), Synset('meet.v.08'), Synset('meet.v.09'), Synset('meet.v.10'), Synset('meet.v.11'), Synset('suffer.v.10'), Synset('touch.v.05')]
Putin 1 [Synset('putin.n.01')]
the 0 []
previous 3 [Synset('previous.s.01'), Synset('former.s.03'), Synset('previous.s.03')]
week 3 [Synset('week.n.01'), Synset('workweek.n.01'), Synset('week.n.03')]
Then even if you know the sense (let's say the first sense), you get multiple words per sense, and not every word can be replaced in the sentence. Moreover, they are in lemma form, not surface form (e.g. verbs are in their base form, i.e. simple present tense, and nouns are in the singular):
from nltk.corpus import wordnet as wn

sent = "Obama met Putin the previous week"
for i in sent.split():
    possible_senses = wn.synsets(i)
    if possible_senses:
        print(i, possible_senses[0].lemma_names())
    else:
        print(i)
[out]:
Obama
met ['meet', 'run_into', 'encounter', 'run_across', 'come_across', 'see']
Putin ['Putin', 'Vladimir_Putin', 'Vladimir_Vladimirovich_Putin']
the
previous ['previous', 'old']
week ['week', 'hebdomad']
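To make the limitation concrete, here is a deliberately naive substitution sketch (first sense, first alternative lemma). It ignores word sense disambiguation and morphology, which is exactly the hard part:
from nltk.corpus import wordnet as wn

sent = "Obama met Putin the previous week"
out = []
for word in sent.split():
    senses = wn.synsets(word)
    names = senses[0].lemma_names() if senses else []
    alternatives = [n for n in names if n.lower() != word.lower()]
    out.append(alternatives[0].replace("_", " ") if alternatives else word)
print(" ".join(out))
# likely output: "Obama meet Vladimir Putin the old hebdomad" (wrong tense, odd word choices)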
One approach is grammatical analysis with NLTK (read more here); after the analysis you can convert your sentence into active or passive voice.
