Shifting N objects with Manim at the same time in different directions - transformation

I have a list of dots of variable, random length and I want to be able to apply a transform (a shift in this case) to each of these objects independently, but at the same time.
list = [Dot(), Dot() ...] # Variable length
I am using the Manim library by 3blue1brown (https://github.com/3b1b/manim).
As a note, other related posts don't solve my problem, as they only work with a fixed number of objects (dots).

The following code from this reddit post, used as an example, solves the problem:
import numpy as np
class DotsMoving(Scene):
    def construct(self):
        dots = [Dot() for i in range(5)]
        directions = [np.random.randn(3) for dot in dots]
        self.add(*dots)  # not strictly necessary
        animations = [ApplyMethod(dot.shift, direction) for dot, direction in zip(dots, directions)]
        self.play(*animations)  # * unpacks the list of animations
Special thanks to u/Xorlium.

Don't use list as a variable name (it shadows the Python built-in); use a VGroup to contain the objects:
list_dots = VGroup(*[Dot() for _ in range(5)]) # 5 dots vgroup
# this is the same as:
# list_dots = VGroup(Dot(),Dot(),Dot(),Dot(),Dot())
# See 'list comprehension python' in google
list_dots.arrange(RIGHT)
list_dots.set_color(RED)
list_dots.shift(UP)
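For completeness, here is a small sketch (assuming the same 3b1b manim API used above) that combines the two answers: a VGroup as the container and one independent ApplyMethod animation per dot, all played simultaneously.
import numpy as np

class DotsMovingVGroup(Scene):
    def construct(self):
        dots = VGroup(*[Dot() for _ in range(5)])  # container instead of a bare list
        dots.arrange(RIGHT)
        self.add(dots)
        # one independent random direction per dot, all animated at the same time
        animations = [ApplyMethod(dot.shift, np.random.randn(3)) for dot in dots]
        self.play(*animations)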

Related

Adding new tokens to BERT/RoBERTa while retaining tokenization of adjacent tokens

I'm trying to add some new tokens to BERT and RoBERTa tokenizers so that I can fine-tune the models on a new word. The idea is to fine-tune the models on a limited set of sentences with the new word, and then see what it predicts about the word in other, different contexts, to examine the state of the model's knowledge of certain properties of language.
In order to do this, I'd like to add the new tokens and essentially treat them like new ordinary words (that the model just hasn't happened to encounter yet). They should behave exactly like normal words once added, with the exception that their embedding matrices will be randomly initialized and then be learned during fine-tuning.
However, I'm running into some issues doing this. In particular, the tokens surrounding the newly added tokens do not behave as expected when initializing the tokenizer with do_basic_tokenize=False in the case of BERT (in the case of RoBERTa, changing this setting doesn't seem to affect the output in the examples here). The problem can be observed in the following example: in the case of BERT, the period following the newly added token is not tokenized as a subword (i.e., it is tokenized as . instead of as the expected ##.), and in the case of RoBERTa, the word following the newly added subword is treated as though it does not have a preceding space (i.e., it is tokenized as a instead of as Ġa).
from transformers import BertTokenizer, RobertaTokenizer
new_word = 'mynewword'
bert = BertTokenizer.from_pretrained('bert-base-uncased', do_basic_tokenize = False)
bert.tokenize('mynewword') # does not exist yet
# ['my', '##ne', '##w', '##word']
bert.tokenize('testing.')
# ['testing', '##.']
bert.add_tokens(new_word)
bert.tokenize('mynewword') # now it does
# ['mynewword']
bert.tokenize('mynewword.')
# ['mynewword', '.']
roberta = RobertaTokenizer.from_pretrained('roberta-base', do_basic_tokenize = False)
roberta.tokenize('mynewword') # does not exist yet
# ['my', 'new', 'word']
roberta.tokenize('A testing a')
# ['A', 'Ġtesting', 'Ġa']
roberta.add_tokens(new_word)
roberta.tokenize('mynewword') # now it does
# ['mynewword']
roberta.tokenize('A mynewword a')
# ['A', 'mynewword', 'a']
Is there a way for me to add the new tokens while getting the behavior of the surrounding tokens to match what it would be if there were not an added token there? I feel like it's important because the model could end up learning that (for instance), the new token can occur before ., while most others can only occur before ##. That seems like it would affect how it generalizes. In addition, I could turn on basic tokenization to solve the BERT problem here, but that wouldn't really reflect the full state of the model's knowledge, since it collapses the distinction between different tokens. And that doesn't help with the RoBERTa problem, which is still there regardless.
In addition, I'd ideally be able to add the RoBERTa token as Ġmynewword, but I'm assuming that as long as it never occurs as the first word in a sentence, that shouldn't matter.
After continuing to try and figure this out, I seem to have found something that might work. It's not necessarily generalizable, but one can load a tokenizer from a vocabulary file (+ a merges file for RoBERTa). If you manually edit those files to add the new tokens in the right way, everything seems to work as expected. Here's an example for BERT:
from transformers import BertTokenizer
bert = BertTokenizer.from_pretrained('bert-base-uncased', do_basic_tokenize=False)
bert.tokenize('testing.') # ['testing', '##.']
bert.tokenize('mynewword') # ['my', '##ne', '##w', '##word']
bert_vocab = bert.get_vocab() # get the pretrained tokenizer's vocabulary
bert_vocab.update({'mynewword' : len(bert_vocab)}) # add the new word to the end
with open('vocab.tmp', 'w', encoding = 'utf-8') as tmp_vocab_file:
    tmp_vocab_file.write('\n'.join(bert_vocab))
new_bert = BertTokenizer(name_or_path = 'bert-base-uncased', vocab_file = 'vocab.tmp', do_basic_tokenize=False)
new_bert.model_max_length = 512 # to match this setting on the pretrained tokenizer
new_bert.tokenize('mynewword') # ['mynewword']
new_bert.tokenize('mynewword.') # ['mynewword', '##.']
import os
os.remove('vocab.tmp') # cleanup
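One follow-up step worth noting (my addition, using the standard transformers API rather than anything from the original post): if you fine-tune a model with this enlarged vocabulary, the model's embedding matrix also has to be resized to match the new tokenizer length.
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.resize_token_embeddings(len(new_bert))  # the new rows are randomly initialized and learned during fine-tuning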
RoBERTa is much harder, since we also have to add the pairs to merges.txt. I have a way of doing this that works for the new tokens, but unfortunately it can affect the tokenization of words that are subparts of the new tokens, so it's not perfect. If you are using this to add made-up words (as in my use case), you can just choose strings that are unlikely to cause problems (unlike the example of 'mynewword' used here), but in other cases it is likely to cause issues. (While it's not a perfect solution, hopefully it will help others find a better one.)
import re
import json
import requests
from transformers import RobertaTokenizer
roberta = RobertaTokenizer.from_pretrained('roberta-base')
roberta.tokenize('testing a') # ['testing', 'Ġa']
roberta.tokenize('mynewword') # ['my', 'new', 'word']
# update the vocabulary with the new token and the 'Ġ' version
roberta_vocab = roberta.get_vocab()
roberta_vocab.update({'mynewword' : len(roberta_vocab)})
roberta_vocab.update({chr(288) + 'mynewword' : len(roberta_vocab)}) # chr(288) = 'Ġ'
with open('vocab.tmp', 'w', encoding = 'utf-8') as tmp_vocab_file:
    json.dump(roberta_vocab, tmp_vocab_file, ensure_ascii=False)
# get and modify the merges file so that the new token will always be tokenized as a single word
url = 'https://huggingface.co/roberta-base/resolve/main/merges.txt'
roberta_merges = requests.get(url).content.decode().split('\n')
# this is a helper function to loop through a list of new tokens and get the byte-pair encodings
# such that the new token will be treated as a single unit always
def get_roberta_merges_for_new_tokens(new_tokens):
    merges = [gen_roberta_pairs(new_token) for new_token in new_tokens]
    merges = [pair for token in merges for pair in token]
    return merges

def gen_roberta_pairs(new_token, highest = True):
    # highest is used to determine whether we are dealing with the Ġ version or not.
    # we add those pairs at the end, which is only if highest = True
    # this is the hard part...
    chrs = [c for c in new_token] # list of characters in the new token, which we will recursively iterate through to find the BPEs
    # the simplest case: add one pair
    if len(chrs) == 2:
        if not highest:
            return tuple([chrs[0], chrs[1]])
        else:
            return [' '.join([chrs[0], chrs[1]])]
    # add the tokenization of the first letter plus the other two letters as an already merged pair
    if len(chrs) == 3:
        if not highest:
            return tuple([chrs[0], ''.join(chrs[1:])])
        else:
            return gen_roberta_pairs(chrs[1:]) + [' '.join([chrs[0], ''.join(chrs[1:])])]
    if len(chrs) % 2 == 0:
        pairs = gen_roberta_pairs(''.join(chrs[:-2]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-2:]), highest = False)
        pairs += tuple([''.join(chrs[:-2]), ''.join(chrs[-2:])])
        if not highest:
            return pairs
    else:
        # for new tokens with odd numbers of characters, we need to add the final two tokens before the
        # third-to-last token
        pairs = gen_roberta_pairs(''.join(chrs[:-3]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-2:]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-3:]), highest = False)
        pairs += tuple([''.join(chrs[:-3]), ''.join(chrs[-3:])])
        if not highest:
            return pairs
    pairs = tuple(zip(pairs[::2], pairs[1::2]))
    pairs = [' '.join(pair) for pair in pairs]
    # pairs with the preceding special token
    g_pairs = []
    for pair in pairs:
        if re.search(r'^' + ''.join(pair.split(' ')), new_token):
            g_pairs.append(chr(288) + pair)
    pairs = g_pairs + pairs
    pairs = [chr(288) + ' ' + new_token[0]] + pairs
    pairs = list(dict.fromkeys(pairs)) # remove any duplicates
    return pairs
# first line of this file is a comment; add the new pairs after it
roberta_merges = roberta_merges[:1] + get_roberta_merges_for_new_tokens(['mynewword']) + roberta_merges[1:]
roberta_merges = list(dict.fromkeys(roberta_merges))
with open('merges.tmp', 'w', encoding = 'utf-8') as tmp_merges_file:
    tmp_merges_file.write('\n'.join(roberta_merges))
new_roberta = RobertaTokenizer(name_or_path='roberta-base', vocab_file='vocab.tmp', merges_file='merges.tmp')
# for some reason, we have to re-add the <mask> token to roberta if we are using it, since
# loading the tokenizer from a file will cause it to be tokenized as separate parts
# the weight matrix is identical, and once re-added, a fill-mask pipeline still identifies
# the mask token correctly (not shown here)
new_roberta.add_tokens(new_roberta.mask_token, special_tokens=True)
new_roberta.model_max_length = 512
new_roberta.tokenize('mynewword') # ['mynewword']
new_roberta.tokenize('mynewword a') # ['mynewword', 'Ġa']
new_roberta.tokenize(' mynewword') # ['Ġmynewword']
# however, this does not guarantee that tokenization of other words will not be affected
roberta.tokenize('mynew') # ['my', 'new']
new_roberta.tokenize('mynew') # ['myne', 'w']
import os
os.remove('vocab.tmp')
os.remove('merges.tmp') # cleanup
If you want to add new tokens to fine-tune a RoBERTa-based model, consider training your tokenizer on your own corpus. Take a look at the HuggingFace "How to train" tutorial for a complete roadmap of how to do that.
I did that myself to fine-tune XLM-RoBERTa-base on my health-related corpus.
Here's the snippet:
from tokenizers import ByteLevelBPETokenizer
from glob import glob
import os
CORPUS_TRAIN = 'corpus_train.shc'
TOKENIZER_DIR = 'your_tokenizer_dir'
paths = list(
    glob(CORPUS_TRAIN)
)
# Initialize a tokenizer
tokenizer = ByteLevelBPETokenizer(lowercase=False)
# Customize training
tokenizer.train(files=paths, vocab_size=32000, min_frequency=3, special_tokens=[
    "<s>",
    "<pad>",
    "</s>",
    "<unk>",
    "<mask>",
])
# Save files to disk
os.makedirs(TOKENIZER_DIR, exist_ok=True)
tokenizer.save_model(TOKENIZER_DIR)
The 32k vocabulary size was chosen arbitrarily. Training took about 10 minutes on my corpus, and then I was able to train my model.
Inside the TOKENIZER_DIR you will see the vocab.json and merges.txt.
If you are using a custom script for training, you can load the tokenizer like this: tokenizer = RobertaTokenizerFast.from_pretrained(TOKENIZER_DIR, max_len=512).
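As a quick sanity check (my own sketch, not part of the original answer), you can load the trained files back and confirm that common domain terms are no longer split into many pieces; the example word is just an assumption about a health corpus:
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained(TOKENIZER_DIR, max_len=512)
print(tokenizer.tokenize('hypertension'))  # ideally one or two tokens if the word is frequent in your corpus
print(len(tokenizer))                      # roughly the 32000 vocab_size set during training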

How can I pass multiple parameters to a parallel operation in Octave?

I wrote a function that acts on each combination of columns in an input matrix. It uses multiple for loops and is very slow, so I am trying to parallelize it to use the maximum number of threads on my computer.
I am having difficulty finding the correct syntax to set this up. I'm using the Parallel package in Octave and have tried several ways to set up the call. Here are two of them, in simplified form, as well as a non-parallel version that I believe works:
function A = parallelExample(M)
  pkg load parallel;
  # Get total count of columns
  ct = columns(M);
  # Generate column pairs
  I = nchoosek([1:ct], 2);
  ops = rows(I);
  slice = ones(1, ops);
  Ic = mat2cell(I, slice, 2);

  ## # Non-parallel
  ## A = zeros(1, ops);
  ## for i = 1:ops
  ##   A(i) = cmbtest(Ic{i}, M);
  ## endfor

  # Parallelized call v1
  A = parcellfun(nproc, @cmbtest, Ic, {M});

  ## # Parallelized call v2
  ## afun = @(x) cmbtest(x, M);
  ## A = parcellfun(nproc, afun, Ic);
endfunction

# function to apply
function P = cmbtest(indices, matrix)
  colset = matrix(:, indices);
  product = colset(:, 1) .* colset(:, 2);
  P = sum(product);
endfunction
For both of these examples I generate every combination of two columns and convert those pairs into a cell array that the parcellfun function should split up. In the first, I attempt to convert the input matrix M into a 1x1 cell array so it goes to each parallel instance in the same form. I get the error 'C must be a cell array' but this must be internal to the parcellfun function. In the second, I attempt to define an anonymous function that includes the matrix. The error I get here specifies that 'cmbtest' is undefined.
(Naturally, the actual function I'm trying to apply is far more complex than cmbtest here)
Other things I have tried:
Putting M into a global variable so it doesn't need to be passed. It seemed impossible to declare a global variable in a function file, though I may just have been having syntax issues.
Making cmbtest a nested function so it can access M (parcellfun doesn't support that).
I'm out of ideas at this point and could use help figuring out how to get this to work.
Converting my comments above to an answer.
When performing parallel operations, it is useful to think of each resulting parallel worker as a separate, independent Octave instance, which needs to have access to all of the functions and variables it requires in order to do its work.
Therefore, do not rely on subfunctions when calling parcellfun from a main function, since this might lead to errors if the worker is unable to access the subfunction directly under the hood.
In this case, separating the subfunction into its own file fixed the problem.

How to add words and vectors manually to Word2vec gensim?

Let's say word2vec.model is my trained word2vec model. When an out-of-vocabulary word (oov_word) occurs, I compute a vector vec using the compute_vec(oov_word) method. Now, I want to add/append oov_word and its corresponding vector vec to my already trained model word2vec.model.
I have already checked the links below, but they do not answer my question.
Combining/adding vectors from different word2vec models
https://datascience.stackexchange.com/questions/49431/how-to-train-an-existing-word2vec-gensim-model-on-new-words
https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.BaseKeyedVectors.add
from gensim.models.keyedvectors import WordEmbeddingsKeyedVectors
vector_length = 100
kv = WordEmbeddingsKeyedVectors(vector_length)
# wordList - list of words
# vectorList - list of the vector corresponding to the words
kv.add(wordList, vectorList)
kv.most_similar(word1) # gives the list of words similar to word1
For gensim version 4.0:
w2v_model.wv.add_vectors(keys, vectors)
Check the doc
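For context, here is a minimal end-to-end sketch of that gensim 4.x flow; the model file name and the random vector standing in for compute_vec(oov_word) are assumptions taken from the question, not part of a fixed API:
import numpy as np
from gensim.models import Word2Vec

model = Word2Vec.load('word2vec.model')    # the already trained model from the question
vec = np.random.rand(model.vector_size)    # stand-in for compute_vec(oov_word)
model.wv.add_vectors(['oov_word'], [vec])  # append the new key and its vector
print(model.wv.most_similar('oov_word', topn=3))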

Graphlab: How to avoid manually duplicating functions that differ only by a string variable?

I imported my dataset with SFrame:
products = graphlab.SFrame('amazon_baby.gl')
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
I would like to do sentiment analysis on a set of words shown below:
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
Then I would like to create a new column for each of the selected words in the products matrix, where each entry is the number of times that word occurs, so I created a function for the word "awesome":
def awesome_count(word_count):
    if 'awesome' in word_count:
        return word_count['awesome']
    else:
        return 0

products['awesome'] = products['word_count'].apply(awesome_count)
So far so good, but I need to manually create similar functions for each of the selected words (e.g., great_count, etc.). How can I avoid this manual effort and write cleaner code?
I think the SFrame.unpack command should do the trick. In fact, the limit parameter will accept your list of selected words and keep only these results, so that part is greatly simplified.
I don't know precisely what's in your reviews data, so I made a toy example:
# Create the data and convert to bag-of-words.
import graphlab
products = graphlab.SFrame({'review': ['this book is awesome',
                                       'I hate this book']})
products['word_count'] = \
    graphlab.text_analytics.count_words(products['review'])

# Unpack the bag-of-words into separate columns.
selected_words = ['awesome', 'hate']
products2 = products.unpack('word_count', limit=selected_words)

# Fill in zeros for the missing values.
for word in selected_words:
    col_name = 'word_count.{}'.format(word)
    products2[col_name] = products2[col_name].fillna(value=0)
I also can't help but point out that GraphLab Create does have its own sentiment analysis toolkit, which could be worth checking out.
I actually found an easier way to do this:
def wordCount_select(wc, selectedWord):
    if selectedWord in wc:
        return wc[selectedWord]
    else:
        return 0

for word in selected_words:
    products[word] = products['word_count'].apply(lambda wc: wordCount_select(wc, word))
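Since each element of products['word_count'] is a plain dict, the helper can arguably be dropped altogether by using dict.get with a default value; a small sketch under the same assumptions as the answer above:
for word in selected_words:
    # w=word binds the current word at definition time, before apply runs
    products[word] = products['word_count'].apply(lambda wc, w=word: wc.get(w, 0))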

Create an array of strings of all possible combination of the elements of a series of arrays

I would like to do reverse DNA translation using BioRuby, which offers a nice codon table for bacteria.
Here is a code snippet describing the series of arrays I have (there are a lot more of them!).
# Arrays Sample
a = table.revtrans("A") # ["gct", "gcc"]
b = table.revtrans("M") # ["atg"]
c = table.revtrans("L") # ["tta", "ttg", "ctt", "ctc", "cta", "ctg"]
d = ...
I would like to create an array or hash with all possible combinations of the above strings.
["gctatgtta", "gccatgtta", "gtcatgttg", "gctatgctt", etc]
Any idea how I can achieve this using Ruby? I tried using the combination method, but failed to produce any sensible result. Also, I'd like to be able to predetermine the number of computations if possible, so please offer some mathematical explanation if you can!
Some Explanation
These 3-letter strings are DNA codons. Each triplet can be translated into an amino acid from a pre-determined table. What I'm doing is essentially creating a (huge) series of potential DNA sequences from which a protein could be produced theoretically.
Thanks!
What you want to use is product.
Returns an array of all combinations of elements from all arrays.
The length of the returned array is the product of the length of self
and the argument arrays.
%w(gct gcc).product(%w(atg), %w(tta ttg)).map(&:join)
# => ["gctatgtta", "gctatgttg", "gccatgtta", "gccatgttg"]
[*a, *b, *c].combination(3).map &:join
#=> ["gctgccatg", "gctgcctta", #...
