Adding new tokens to BERT/RoBERTa while retaining tokenization of adjacent tokens - huggingface-transformers

I'm trying to add some new tokens to BERT and RoBERTa tokenizers so that I can fine-tune the models on a new word. The idea is to fine-tune the models on a limited set of sentences with the new word, and then see what it predicts about the word in other, different contexts, to examine the state of the model's knowledge of certain properties of language.
In order to do this, I'd like to add the new tokens and essentially treat them like new ordinary words (that the model just hasn't happened to encounter yet). They should behave exactly like normal words once added, with the exception that their embedding matrices will be randomly initialized and then be learned during fine-tuning.
However, I'm running into some issues doing this. In particular, the tokens surrounding the newly added tokens do not behave as expected when the tokenizer is initialized with do_basic_tokenize=False in the case of BERT (in the case of RoBERTa, changing this setting doesn't seem to affect the output in the examples here). The problem can be observed in the following example: in the case of BERT, the period following the newly added token is not tokenized as a subword (i.e., it is tokenized as . instead of as the expected ##.), and in the case of RoBERTa, the word following the newly added token is treated as though it does not have a preceding space (i.e., it is tokenized as a instead of as Ġa).
from transformers import BertTokenizer, RobertaTokenizer
new_word = 'mynewword'
bert = BertTokenizer.from_pretrained('bert-base-uncased', do_basic_tokenize = False)
bert.tokenize('mynewword') # does not exist yet
# ['my', '##ne', '##w', '##word']
bert.tokenize('testing.')
# ['testing', '##.']
bert.add_tokens(new_word)
bert.tokenize('mynewword') # now it does
# ['mynewword']
bert.tokenize('mynewword.')
# ['mynewword', '.']
roberta = RobertaTokenizer.from_pretrained('roberta-base', do_basic_tokenize = False)
roberta.tokenize('mynewword') # does not exist yet
# ['my', 'new', 'word']
roberta.tokenize('A testing a')
# ['A', 'Ġtesting', 'Ġa']
roberta.add_tokens(new_word)
roberta.tokenize('mynewword') # now it does
# ['mynewword']
roberta.tokenize('A mynewword a')
# ['A', 'mynewword', 'a']
Is there a way for me to add the new tokens while getting the behavior of the surrounding tokens to match what it would be if there were no added token there? It feels important because the model could end up learning (for instance) that the new token can occur before ., while most others can only occur before ##., which seems like it would affect how it generalizes. In addition, I could turn on basic tokenization to solve the BERT problem here, but that wouldn't really reflect the full state of the model's knowledge, since it collapses the distinction between different tokens. And that doesn't help with the RoBERTa problem, which is still there regardless.
In addition, I'd ideally be able to add the RoBERTa token as Ġmynewword, but I'm assuming that as long as it never occurs as the first word in a sentence, that shouldn't matter.
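For reference, the workflow I'm assuming for making the model aware of the added tokens is to resize its embedding matrix after calling add_tokens, so the new row is randomly initialized and then learned during fine-tuning (a minimal sketch):
from transformers import BertTokenizer, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(['mynewword'])
model.resize_token_embeddings(len(tokenizer)) # new embedding row is randomly initialized and learned during fine-tuning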

After continuing to try and figure this out, I seem to have found something that might work. It's not necessarily generalizable, but one can load a tokenizer from a vocabulary file (+ a merges file for RoBERTa). If you manually edit those files to add the new tokens in the right way, everything seems to work as expected. Here's an example for BERT:
from transformers import BertTokenizer
bert = BertTokenizer.from_pretrained('bert-base-uncased', do_basic_tokenize=False)
bert.tokenize('testing.') # ['testing', '##.']
bert.tokenize('mynewword') # ['my', '##ne', '##w', '##word']
bert_vocab = bert.get_vocab() # get the pretrained tokenizer's vocabulary
bert_vocab.update({'mynewword' : len(bert_vocab)}) # add the new word to the end
with open('vocab.tmp', 'w', encoding = 'utf-8') as tmp_vocab_file:
    tmp_vocab_file.write('\n'.join(bert_vocab))
new_bert = BertTokenizer(name_or_path = 'bert-base-uncased', vocab_file = 'vocab.tmp', do_basic_tokenize=False)
new_bert.model_max_length = 512 # match this setting on the pretrained tokenizer
new_bert.tokenize('mynewword') # ['mynewword']
new_bert.tokenize('mynewword.') # ['mynewword', '##.']
import os
os.remove('vocab.tmp') # cleanup
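As a sanity check (a minimal sketch, reusing the bert and new_bert objects from the snippet above), tokenization of text that does not contain the added word should be unchanged:
test_sentences = ['testing.', 'another test sentence.']
for sentence in test_sentences:
    # should hold for any text that does not contain the added token
    assert bert.tokenize(sentence) == new_bert.tokenize(sentence)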
RoBERTa is much harder, since we also have to add the pairs to merges.txt. I have a way of doing this that works for the new tokens, but unfortunately it can affect the tokenization of words that are subparts of the new tokens, so it's not perfect. If you're using this to add made-up words (as in my use case), you can just choose strings that are unlikely to cause problems (unlike the 'mynewword' example here), but in other cases it is likely to cause problems. (While it's not a perfect solution, hopefully it will prompt others to find a better one.)
import re
import json
import requests
from transformers import RobertaTokenizer
roberta = RobertaTokenizer.from_pretrained('roberta-base')
roberta.tokenize('testing a') # ['testing', 'Ġa']
roberta.tokenize('mynewword') # ['my', 'new', 'word']
# update the vocabulary with the new token and the 'Ġ' version
roberta_vocab = roberta.get_vocab()
roberta_vocab.update({'mynewword' : len(roberta_vocab)})
roberta_vocab.update({chr(288) + 'mynewword' : len(roberta_vocab)}) # chr(288) = 'Ġ'
with open('vocab.tmp', 'w', encoding = 'utf-8') as tmp_vocab_file:
    json.dump(roberta_vocab, tmp_vocab_file, ensure_ascii=False)
# get and modify the merges file so that the new token will always be tokenized as a single word
url = 'https://huggingface.co/roberta-base/resolve/main/merges.txt'
roberta_merges = requests.get(url).content.decode().split('\n')
# this is a helper function to loop through a list of new tokens and get the byte-pair encodings
# such that the new token will be treated as a single unit always
def get_roberta_merges_for_new_tokens(new_tokens):
    merges = [gen_roberta_pairs(new_token) for new_token in new_tokens]
    merges = [pair for token in merges for pair in token]
    return merges

def gen_roberta_pairs(new_token, highest = True):
    # highest is used to determine whether we are dealing with the Ġ version or not.
    # we add those pairs at the end, which is only if highest = True
    # this is the hard part...
    chrs = [c for c in new_token] # list of characters in the new token, which we will recursively iterate through to find the BPEs
    # the simplest case: add one pair
    if len(chrs) == 2:
        if not highest:
            return tuple([chrs[0], chrs[1]])
        else:
            return [' '.join([chrs[0], chrs[1]])]
    # add the tokenization of the first letter plus the other two letters as an already merged pair
    if len(chrs) == 3:
        if not highest:
            return tuple([chrs[0], ''.join(chrs[1:])])
        else:
            return gen_roberta_pairs(chrs[1:]) + [' '.join([chrs[0], ''.join(chrs[1:])])]
    if len(chrs) % 2 == 0:
        pairs = gen_roberta_pairs(''.join(chrs[:-2]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-2:]), highest = False)
        pairs += tuple([''.join(chrs[:-2]), ''.join(chrs[-2:])])
        if not highest:
            return pairs
    else:
        # for new tokens with odd numbers of characters, we need to add the final two tokens before the
        # third-to-last token
        pairs = gen_roberta_pairs(''.join(chrs[:-3]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-2:]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-3:]), highest = False)
        pairs += tuple([''.join(chrs[:-3]), ''.join(chrs[-3:])])
        if not highest:
            return pairs
    pairs = tuple(zip(pairs[::2], pairs[1::2]))
    pairs = [' '.join(pair) for pair in pairs]
    # pairs with the preceding special token
    g_pairs = []
    for pair in pairs:
        if re.search(r'^' + ''.join(pair.split(' ')), new_token):
            g_pairs.append(chr(288) + pair)
    pairs = g_pairs + pairs
    pairs = [chr(288) + ' ' + new_token[0]] + pairs
    pairs = list(dict.fromkeys(pairs)) # remove any duplicates
    return pairs
# first line of this file is a comment; add the new pairs after it
roberta_merges = roberta_merges[:1] + get_roberta_merges_for_new_tokens(['mynewword']) + roberta_merges[1:]
roberta_merges = list(dict.fromkeys(roberta_merges))
with open('merges.tmp', 'w', encoding = 'utf-8') as tmp_merges_file:
    tmp_merges_file.write('\n'.join(roberta_merges))
new_roberta = RobertaTokenizer(name_or_path='roberta-base', vocab_file='vocab.tmp', merges_file='merges.tmp')
# for some reason, we have to re-add the <mask> token to roberta if we are using it, since
# loading the tokenizer from a file will cause it to be tokenized as separate parts
# the weight matrix is identical, and once re-added, a fill-mask pipeline still identifies
# the mask token correctly (not shown here)
new_roberta.add_tokens(new_roberta.mask_token, special_tokens=True)
new_roberta.model_max_length = 512
new_roberta.tokenize('mynewword') # ['mynewword']
new_roberta.tokenize('mynewword a') # ['mynewword', 'Ġa']
new_roberta.tokenize(' mynewword') # ['Ġmynewword']
# however, this does not guarantee that tokenization of other words will not be affected
roberta.tokenize('mynew') # ['my', 'new']
new_roberta.tokenize('mynew') # ['myne', 'w']
import os
os.remove('vocab.tmp')
os.remove('merges.tmp') # cleanup
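A quick way to gauge the scope of the side effects (a minimal sketch, reusing the roberta and new_roberta objects from the snippet above) is to compare the two tokenizers on a sample of ordinary text; any differences indicate words affected by the edited merges:
for text in ['testing a', 'my new word', 'mynew']:
    old, new = roberta.tokenize(text), new_roberta.tokenize(text)
    if old != new:
        print(f'changed: {text!r}: {old} -> {new}')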

If you want to add new tokens to fine-tune a RoBERTa-based model, consider training your own tokenizer on your corpus. Take a look at the HuggingFace How To Train tutorial for a complete roadmap of how to do that.
I did that myself to fine-tune the XLM-Roberta-base on my health-related corpus.
Here's the snippet:
from tokenizers import ByteLevelBPETokenizer
from glob import glob
import os
CORPUS_TRAIN = 'corpus_train.shc'
TOKENIZER_DIR = 'your_tokenizer_dir'
paths = list(
    glob(CORPUS_TRAIN)
)
# Initialize a tokenizer
tokenizer = ByteLevelBPETokenizer(lowercase=False)
# Customize training
tokenizer.train(files=paths, vocab_size=32000, min_frequency=3, special_tokens=[
    "<s>",
    "<pad>",
    "</s>",
    "<unk>",
    "<mask>",
])
# Save files to disk
os.makedirs(TOKENIZER_DIR, exist_ok=True)
tokenizer.save_model(TOKENIZER_DIR)
The 32k vocabulary size was chosen arbitrarily. Training took about 10 minutes on my corpus, and then I was able to train my model.
Inside the TOKENIZER_DIR you will see the vocab.json and merges.txt.
If you are using a custom script for training, you can load the tokenizer like this: tokenizer = RobertaTokenizerFast.from_pretrained(TOKENIZER_DIR, max_len=512).
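For example (a minimal sketch, reusing TOKENIZER_DIR from the snippet above; the sample sentence is just a placeholder):
from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained(TOKENIZER_DIR, max_len=512)
print(tokenizer.tokenize('a sample sentence from your domain')) # placeholder text; domain terms should now split into fewer subwords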

Related

Shifting N objects with Manim at the same time in different directions

I have a list of dots of variable, random length, and I want to be able to apply a transform (a shift in this case) to each of these objects independently but at the same time.
list = [Dot(), Dot() ...] # Variable length
I am using the Manim library (https://github.com/3b1b/manim) by 3blue1brown.
As a note, other related posts don't solve my problem as they only work with a fixed number of objects (dots).
The following code from this reddit post, used as an example, solves the problem:
import numpy as np

class DotsMoving(Scene):
    def construct(self):
        dots = [Dot() for i in range(5)]
        directions = [np.random.randn(3) for dot in dots]
        self.add(*dots) # It isn't absolutely necessary
        animations = [ApplyMethod(dot.shift, direction) for dot, direction in zip(dots, directions)]
        self.play(*animations) # * -> unpacks the list animations
Special thanks to u/Xorlium.
Don't use list as a variable name; it shadows Python's built-in list type. Use a VGroup to contain the objects:
list_dots = VGroup(*[Dot() for _ in range(5)]) # 5 dots vgroup
# this is the same as:
# list_dots = VGroup(Dot(),Dot(),Dot(),Dot(),Dot())
# See 'list comprehension python' in google
list_dots.arrange(RIGHT)
list_dots.set_color(RED)
list_dots.shift(UP)
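A minimal sketch combining the two ideas (assuming the same manim imports as the snippets above), so the grouped dots still receive independent shifts in a single play call:
class DotsMovingGroup(Scene):
    def construct(self):
        list_dots = VGroup(*[Dot() for _ in range(5)]) # grouped dots
        list_dots.arrange(RIGHT)
        directions = [np.random.randn(3) for _ in list_dots] # one random direction per dot
        self.add(list_dots)
        # one ApplyMethod per dot, all played simultaneously
        self.play(*[ApplyMethod(dot.shift, direction) for dot, direction in zip(list_dots, directions)])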

word2vec recommendation system KeyError: "word '21883' not in vocabulary"

The code works absolutely fine for the data set containing 500000+ instances, but whenever I reduce the data set to 5000/10000/15000 it throws a KeyError: word "***" not in vocabulary. It doesn't throw the error for every data point, but it does for most of them. The data set is in Excel format. [1]: https://i.stack.imgur.com/YCBiQ.png
I don't know how to fix this problem since I have very little knowledge about it; I am still learning. Please help me fix this problem!
purchases_train = []
for i in tqdm(customers_train):
    temp = train_df[train_df["CustomerID"] == i]["StockCode"].tolist()
    purchases_train.append(temp)

purchases_val = []
for i in tqdm(validation_df['CustomerID'].unique()):
    temp = validation_df[validation_df["CustomerID"] == i]["StockCode"].tolist()
    purchases_val.append(temp)

model = Word2Vec(window = 10, sg = 1, hs = 0,
                 negative = 10, # for negative sampling
                 alpha=0.03, min_alpha=0.0007,
                 seed = 14)
model.build_vocab(purchases_train, progress_per=200)
model.train(purchases_train, total_examples = model.corpus_count,
            epochs=10, report_delay=1)
model.save("word2vec_2.model")
model.init_sims(replace=True)
# extract all vectors
X = model[model.wv.vocab]
X.shape
products = train_df[["StockCode", "Description"]]
products.drop_duplicates(inplace=True, subset='StockCode', keep="last")
products_dict = products.groupby('StockCode')['Description'].apply(list).to_dict()
def similar_products(v, n = 6):
    ms = model.similar_by_vector(v, topn= n+1)[1:]
    new_ms = []
    for j in ms:
        pair = (products_dict[j[0]][0], j[1])
        new_ms.append(pair)
    return new_ms
similar_products(model['21883'])
If you get a KeyError saying a word is not in the vocabulary, that's a reliable indicator that the word you're looking up was not in the training data fed to Word2Vec, or did not appear enough times (default min_count=5).
So, your error indicates the word-token '21883' did not appear at least 5 times in the texts (purchases_train) supplied to Word2Vec. You should do either or both of:
Ensure all words you're going to look up appear enough times, either with more training data or a lower min_count. (However, words with only one or a few occurrences tend not to get good vectors & instead just drag the quality of surrounding words' vectors down - so keeping this value above 1, or even raising it above the default of 5 to discard more rare words, is a better path whenever you have sufficient data.)
If your later code will be looking up words that might not be present, either check for their presence first (word in model.wv.vocab) or set up a try: ... except: ... to catch & handle the case where they're not present.
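For instance (a minimal sketch; the exact attribute depends on your gensim version, model.wv.vocab in gensim 3.x vs. model.wv.key_to_index in 4.x):
key = '21883'
if key in model.wv.vocab: # gensim 3.x; on gensim 4.x use: key in model.wv.key_to_index
    print(similar_products(model.wv[key]))
else:
    print(key, 'not in vocabulary (appeared fewer than min_count times in training)')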

Gensim - iterating over multiple documents

I'm trying to follow the Q6 recipe shown here, but my corpus keeps getting returned as [] even though I have checked and it does seem to be reading the document correctly.
So my code is:
import os
from gensim import corpora, utils

def iter_documents(top_directory):
    """Iterate over all documents, yielding a document (=list of utf8 tokens) at a time."""
    for root, dirs, files in os.walk(top_directory):
        for file in filter(lambda file: file.endswith('.txt'), files):
            document = open(os.path.join(root, file)).read() # read the entire document, as one big string
            yield utils.tokenize(document, lower=True) # or whatever tokenization suits you

class MyCorpus(object):
    # Used to create the object
    def __init__(self, top_dir):
        self.top_dir = top_dir
        self.dictionary = corpora.Dictionary(iter_documents(top_dir))
        self.dictionary.filter_extremes(no_below=1, keep_n=30000) # check API docs for pruning params

    # Used if you ever need to iterate through the values
    def __iter__(self):
        for tokens in iter_documents(self.top_dir):
            yield self.dictionary.doc2bow(tokens)
and the text file I'm using to test is this.
OK, I figured it out. Change the filter_extremes call to this: self.dictionary.filter_extremes(no_below=0, no_above=1, keep_n=30000)
Because I only have one document to start with, it's being filtered out.
See this
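A quick way to check the fix (a minimal sketch; the directory path is a placeholder):
corpus = MyCorpus('path/to/your/text/files') # placeholder directory containing .txt files
print(corpus.dictionary) # should now report a non-empty vocabulary
print([bow for bow in corpus]) # bag-of-words vectors, no longer []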

In a CSV file, how can a Python coder remove all but an X number of duplicates across rows?

Here is an example CSV file for this problem:
Jack,6
Sam,10
Milo,9
Jacqueline,7
Sam,5
Sam,8
Sam,10
Let's take the context to be the names and scores of a quiz these people took. We can see that Sam has taken this quiz 4 times, but I want to keep only X of the same person's results (they also need to be the most recent entries). Let's assume we wanted no more than 3 of the same person's results.
I realised it probably wouldn't be possible to keep no more than 3 of each person's results without some extra information. Here is the updated CSV file:
Jack,6,1793
Sam,10,2079
Milo,9,2132
Jacqueline,7,2590
Sam,5,2881
Sam,8,3001
Sam,10,3013
The third column is essentially the number of seconds from the "Epoch", which is a reference point for time. With this, I thought I could simply sort the file from lowest to highest on the epoch column and use set() to remove all but a certain number of duplicates in the name column, while also removing the removed person's scores.
In theory, this should leave me with the 3 most recent results per person but in practice, I have no idea how I could adapt the set() function to do this unless there is some alternative way. So my question is, what possible methods are there to achieve this?
You could use a defaultdict of lists, and each time you add an entry check the length of the list: if it has more than three items, pop the first one off (or do the check after cycling through the file). This assumes the file is in time sequence.
from collections import defaultdict
# looping over a csv file gives one row at a time
# so we will emulate that
raw_data = [
    ('Jack', '6'),
    ('Sam', '10'),
    ('Milo', '9'),
    ('Jacqueline', '7'),
    ('Sam', '5'),
    ('Sam', '8'),
    ('Sam', '10'),
]

# this will hold our information, and works by providing an empty
# list for any missing key
student_data = defaultdict(list)

for row in raw_data: # note 1
    # separate the row into its component items, and convert
    # score from str to int
    name, score = row
    score = int(score)

    # get the current list for the student, or a brand-new list
    student = student_data[name]
    student.append(score)

    # after adding the score to the end, remove the first scores
    # until we have no more than three items in the list
    if len(student) > 3:
        student.pop(0)

# print the items for debugging
for item in student_data.items():
    print(item)
which results in:
('Milo', [9])
('Jack', [6])
('Sam', [5, 8, 10])
('Jacqueline', [7])
Note 1: to use an actual csv file you want code like this:
import csv

raw_file = open('some_file.csv')
csv_file = csv.reader(raw_file)
for row in csv_file:
    ...
To handle the timestamps, and as an alternative, you could use itertools.groupby:
from itertools import groupby, islice
from operator import itemgetter
raw_data = [
    ('Jack','6','1793'),
    ('Sam','10','2079'),
    ('Milo','9','2132'),
    ('Jacqueline','7','2590'),
    ('Sam','5','2881'),
    ('Sam','8','3001'),
    ('Sam','10','3013'),
]

# Sort by name in natural order, then by timestamp from highest to lowest
sorted_data = sorted(raw_data, key=lambda x: (x[0], -int(x[2])))
# Group by user
grouped = groupby(sorted_data, key=itemgetter(0))
# And keep only three most recent values for each user
most_recent = [(k, [v for _, v, _ in islice(grp, 3)]) for k, grp in grouped]
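For the sample data above, this should produce something like:
[('Jack', ['6']), ('Jacqueline', ['7']), ('Milo', ['9']), ('Sam', ['10', '8', '5'])]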

Graphlab: How to avoid manually duplicating functions that has only a different string variable?

I imported my dataset with SFrame:
products = graphlab.SFrame('amazon_baby.gl')
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
I would like to do sentiment analysis on a set of words shown below:
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
Then I would like to create a new column in the products SFrame for each of the selected words, where the entry is the number of times that word occurs, so I created a function for the word "awesome":
def awesome_count(word_count):
    if 'awesome' in word_count:
        return word_count['awesome']
    else:
        return 0

products['awesome'] = products['word_count'].apply(awesome_count)
So far so good, but I would need to manually create similar functions for each of the selected words (e.g., great_count, etc.). How can I avoid this manual effort and write cleaner code?
I think the SFrame.unpack command should do the trick. In fact, the limit parameter will accept your list of selected words and keep only these results, so that part is greatly simplified.
I don't know precisely what's in your reviews data, so I made a toy example:
# Create the data and convert to bag-of-words.
import graphlab
products = graphlab.SFrame({'review': ['this book is awesome',
                                       'I hate this book']})
products['word_count'] = \
    graphlab.text_analytics.count_words(products['review'])

# Unpack the bag-of-words into separate columns.
selected_words = ['awesome', 'hate']
products2 = products.unpack('word_count', limit=selected_words)

# Fill in zeros for the missing values.
for word in selected_words:
    col_name = 'word_count.{}'.format(word)
    products2[col_name] = products2[col_name].fillna(value=0)
I also can't help but point out that GraphLab Create does have its own sentiment analysis toolkit, which could be worth checking out.
I actually found an easier way to do this:
def wordCount_select(wc, selectedWord):
    if selectedWord in wc:
        return wc[selectedWord]
    else:
        return 0

for word in selected_words:
    products[word] = products['word_count'].apply(lambda wc: wordCount_select(wc, word))
