Why is positional encoding needed while input ids already represent the order of words in Bert? - huggingface-transformers

For example, in Huggingface's example:
encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
The input_ids vector already encodes the order of each token in the original sentence. Why is an extra positional-encoding vector needed to represent the order again?

The reason is the design of the neural architecture. BERT consists of self-attention and feedforward sub-layers, and neither of them is sequential.
The feedforward layers process each token independently of the others.
Self-attention treats the input states as an unordered set. Attention can be interpreted as soft probabilistic retrieval of values from a set according to some keys, and the position embeddings are there so the keys and values carry information about the tokens' order. Note also that the numbers in input_ids are vocabulary indices: they say which token occupies each slot, not where the slot is, and once the tokens are turned into embeddings, the order information is lost unless it is added back explicitly.
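To see the two pieces side by side, you can inspect BERT's embedding layer directly: the model looks up one embedding by token id and another by position, and sums them. A minimal sketch, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint:

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

ids = tokenizer("Do not meddle in the affairs of wizards.", return_tensors="pt")["input_ids"]
positions = torch.arange(ids.size(1)).unsqueeze(0)  # 0, 1, 2, ..., seq_len-1

word_emb = model.embeddings.word_embeddings(ids)           # what each token is
pos_emb = model.embeddings.position_embeddings(positions)  # where each token sits

# Inside the model these (plus the token type embeddings) are summed,
# so order information enters the network only through pos_emb.
print(word_emb.shape, pos_emb.shape)  # both torch.Size([1, seq_len, 768])

If you shuffled input_ids, word_emb would be shuffled accordingly, but without pos_emb the attention layers would produce the same set of per-token outputs, just shuffled: the model could not tell the original sentence from the permuted one.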

Related

Need clarity on "padding" parameter in Bert Tokenizer

I have been fine-tuning a BERT model for sentence classification. During training I tokenized with padding="max_length", truncation=True, max_length=150, but during inference the model still produces predictions even when the padding="max_length" parameter is not passed.
Surprisingly, the predictions are the same whether or not padding="max_length" is passed, but inference is much faster without it.
So, I need some clarity on the "padding" parameter in the BERT tokenizer. Can someone help me understand how BERT is able to predict even without padding, given that the lengths of the sentences differ, and whether there are any negative consequences if padding="max_length" is not passed during inference? Any help would be highly appreciated.
Thanks
When passing a list of sentences to a tokenizer, each sentence might have a different length. Hence the output of the tokenizer for each sentence will have a different length. Padding is a strategy for ensuring tensors are rectangular by adding a special padding token to shorter sentences.
Consider the following example where padding="max_length", max_length=10.
batch_sentences = ["Hello World", "Hugging Face Library"]
encoded_input = tokenizer(batch_sentences, padding="max_length", max_length=10)
print(encoded_input)
{'input_ids': [[101, 8667, 1291, 102, 0, 0, 0, 0, 0, 0],
               [101, 20164, 10932, 10289, 3371, 102, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]]}
Notice that the output of the tokenizer for each sentence is padded to max_length (10) with the special padding token id 0. Similarly, if we set padding=True, the output for each sentence is padded only to the length of the longest sequence in the batch, as shown below.
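Continuing the same example, with padding=True the batch is padded only to 6 tokens, the length of the longer sentence:
encoded_input = tokenizer(batch_sentences, padding=True)
print(encoded_input['input_ids'])
# [[101, 8667, 1291, 102, 0, 0], [101, 20164, 10932, 10289, 3371, 102]]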
Coming back to your question, padding has no effect if you pass a list of just one sentence to the tokenizer. If you have set batch_size = 1 during training or inference, your model will be processing your data one sentence at a time. This could be one reason why padding is not making a difference in your case.
Another possible yet very unlikely reason padding does not make a difference in your case is that all your sentences have the same length. Lastly, if you have not converted the output of the tokenizer to a PyTorch or TensorFlow tensor, having varying sentence lengths would not be a problem. This again is unlikely in your case given that you used your model for training and testing.
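There is also a more general reason why the outputs match: the attention_mask marks the pad tokens and the model ignores them, so padding changes how much work is done but not (up to floating point noise) the resulting logits. A rough sketch you can try, where the checkpoint name is a placeholder for your fine-tuned model:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"  # placeholder: use your fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

text = "This sentence is much shorter than max_length."
with torch.no_grad():
    plain = model(**tokenizer(text, return_tensors="pt")).logits
    padded = model(**tokenizer(text, padding="max_length", truncation=True,
                               max_length=150, return_tensors="pt")).logits

# The mask hides the pad tokens from attention, so both runs should agree
# up to numerical noise -- the padded one just does more work.
print(torch.allclose(plain, padded, atol=1e-5))

This also explains your speed observation: without padding, the model attends over far fewer positions per sentence.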

How can I find the next value? [closed]

Given an array of 0s and 1s, e.g. array[] = {0, 1, 0, 0, 0, 1, ...}, how can I predict what the next value will be with the best possible accuracy?
What kind of methods are best suited for this kind of task?
The prediction method depends on the interpretation of the data.
However, it looks like in this particular case we can make some general assumptions that might justify the use of certain machine learning techniques:
Values are generated one after another in chronological order.
Values depend on some (possibly non-observable) external state. If the state repeats itself, so do the values.
This is a pretty common scenario in many machine learning contexts. One example is the prediction of stock prices based on history.
Now, to build the predictive model you'll need to define the training data set. Assume our model looks at the last k values; if k = 1, we end up with something similar to a Markov chain model.
Our training data set will consist of k-dimensional data points together with their respective dependent values. For example, suppose k=3 and we have the following input data
0,0,1,1,0,1,0,1,1,1,1,0,1,0,0,1...
We'll have the following training data:
(0,0,1) -> 1
(0,1,1) -> 0
(1,1,0) -> 1
(1,0,1) -> 0
(0,1,0) -> 1
(1,0,1) -> 1
(0,1,1) -> 1
(1,1,1) -> 1
(1,1,1) -> 0
(1,1,0) -> 1
(1,0,1) -> 0
(0,1,0) -> 0
(1,0,0) -> 1
Now, let's say you want to predict the next value in the sequence. The last 3 values are 0,0,1, so the model must predict the value of the function at (0,0,1), based on the training data.
A popular and relatively simple approach would be to use a multivariate linear regression on a k-dimensional data space. Alternatively, consider using a neural network if linear regression underfits the training data set.
You might need to try out different values of k and test against your validation set.
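As a sketch, building such a training set in Python could look like this (the helper name is just for illustration):

def make_training_data(seq, k=3):
    """Slide a length-k window over seq; each window predicts the next value."""
    X = [seq[i:i + k] for i in range(len(seq) - k)]
    y = [seq[i + k] for i in range(len(seq) - k)]
    return X, y

seq = [0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1]
X, y = make_training_data(seq, k=3)
print(X[0], '->', y[0])  # [0, 0, 1] -> 1, the first row above
print(len(X))            # 13 examples, matching the list above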
You could use a maximum likelihood estimator for the Bernoulli distribution. In essence you would:
look at all observed values and estimate parameter p
then use p to determine the next value
In Python this could look like this:
#!/usr/bin/env python
from __future__ import division

signal = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0]

def maximum_likelihood(s, last=None):
    """
    The maximum likelihood estimator selects the parameter value which gives
    the observed data the largest possible probability.
    http://mathworld.wolfram.com/MaximumLikelihood.html

    If `last` is given, only use the last `last` values.
    """
    if not last:
        return sum(s) / len(s)
    return sum(s[-last:]) / last

if __name__ == '__main__':
    hits = []
    print('p\tpredicted\tcorrect\tsignal')
    print('-\t---------\t-------\t------')
    for i in range(1, len(signal) - 1):
        p = maximum_likelihood(signal[:i])  # p = maximum_likelihood(signal[:i], last=2)
        prediction = int(p >= 0.5)
        hits.append(prediction == signal[i])
        print('%0.3f\t%s\t\t%s\t%s' % (
            p, prediction, prediction == signal[i], signal[:i]))
    print('accuracy: %0.3f' % (sum(hits) / len(hits)))
The output would look like this:
# p predicted correct signal
# - --------- ------- ------
# 1.000 1 False [1]
# 0.500 1 True [1, 0]
# 0.667 1 True [1, 0, 1]
# 0.750 1 False [1, 0, 1, 1]
# 0.600 1 False [1, 0, 1, 1, 0]
# 0.500 1 True [1, 0, 1, 1, 0, 0]
# 0.571 1 False [1, 0, 1, 1, 0, 0, 1]
# 0.500 1 True [1, 0, 1, 1, 0, 0, 1, 0]
# 0.556 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1]
# 0.600 1 False [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
# 0.545 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
# 0.583 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
# 0.615 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
# 0.643 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1]
# 0.667 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1]
# 0.688 1 False [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1]
# 0.647 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0]
# 0.667 1 False [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1]
# 0.632 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0]
# 0.650 1 True [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1]
# accuracy: 0.650
You could vary the window size for performance reasons or to favor recent events.
In the above example, if we estimated the next value by looking only at the last 3 observed values, we could increase the accuracy to 0.7.
Update: Inspired by Narek's answer I added a logistic regression classifier example to the gist.
You can predict by estimating the probabilities of 0s and 1s, forming their cumulative probability ranges, and then drawing a random number between 0 and 1 to make the prediction.
If these are series of numbers that are generated each time after some reset event, and the next numbers are somehow related to the previous ones, you could create a tree (a binary tree with two branches at each node, in your case) and feed historical series into it from the root, adjusting a weight (say, a count) on each branch you follow.
You could divide these counts by the number of series entered before using them, or keep a count on each node as well, incremented before choosing a branch. That way the root node holds the number of series entered.
Then, as you feed it a new sequence, you can see which branch is "hotter" to follow (this would make a nice visualization as a heatmap or tree, by the way), especially if the sequence is long enough. That is, assuming the order of items in the sequence plays a role in what comes next; a sketch of this idea follows below.
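A minimal sketch of this counting idea in Python, where a flat dictionary keyed by the last k values stands in for the tree (all names are illustrative):

from collections import defaultdict

def train(series, k=3):
    """Count, for each length-k history, how often a 0 or a 1 followed it."""
    counts = defaultdict(lambda: [0, 0])
    for seq in series:
        for i in range(len(seq) - k):
            counts[tuple(seq[i:i + k])][seq[i + k]] += 1
    return counts

def predict(counts, recent):
    zeros, ones = counts.get(tuple(recent), (0, 0))
    return int(ones >= zeros)  # ties and unseen histories default to 1

counts = train([[0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1]])
print(predict(counts, [0, 0, 1]))  # -> 1: (0, 0, 1) was followed by 1 once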

Understand disaster model in PyMC

I am starting to learn PyMC and struggle to understand the very first example in the tutorial.
import numpy as np
from pymc import DiscreteUniform, Exponential

disasters_array = \
    np.array([ 4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
               3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
               2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
               1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
               0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
               3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
               0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])

switchpoint = DiscreteUniform('switchpoint', lower=0, upper=110, doc='Switchpoint[year]')
early_mean = Exponential('early_mean', beta=1.)
late_mean = Exponential('late_mean', beta=1.)
I don't understand why early_mean and late_mean are modeled as stochastic variables following an exponential distribution with rate 1. My intuition is that they should be deterministic, calculated from disasters_array and the switchpoint variable, e.g.

@deterministic(plot=False)
def early_mean(s=switchpoint):
    return sum(disasters_array[:(s-1)]) / (s-1)

@deterministic(plot=False)
def late_mean(s=switchpoint):
    return sum(disasters_array[s:]) / s
disasters_array holds data generated by a Poisson process, under the assumptions of this model. late_mean and early_mean are the parameters of that process, depending on where in the time series the observations fall. The true values of these parameters are unknown, so they are specified as stochastic variables. Deterministic objects are reserved for nodes that are completely determined by the values of their parents.
Think of the early_mean and late_mean stochastics as model parameters, and the Exponential as the prior distribution for these parameters. In this version of the model, the deterministic r and the likelihood D lead to posteriors on early_mean and late_mean through MCMC sampling, as sketched below.
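For reference, here is a sketch of how the model is completed in the PyMC 2 tutorial, with the deterministic r and the likelihood D mentioned above (the sampler settings are placeholders):

import numpy as np
from pymc import Poisson, deterministic, MCMC

@deterministic(plot=False)
def r(s=switchpoint, e=early_mean, l=late_mean):
    """Poisson rate: early_mean before the switchpoint, late_mean after."""
    out = np.empty(len(disasters_array))
    out[:s] = e
    out[s:] = l
    return out

D = Poisson('D', mu=r, value=disasters_array, observed=True)

# MCMC sampling then yields posteriors over switchpoint, early_mean, late_mean.
M = MCMC([switchpoint, early_mean, late_mean, r, D])
M.sample(iter=10000, burn=1000)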

Hashi Puzzle Representation to solve all solutions with Prolog Restrictions

I'm trying to write a Prolog program that receives a representation of an unsolved Hashi board and answers with all the possible solutions, using constraints. I'm having a hard time figuring out the best (or a very good) way of representing the board with and without the bridges. The program is supposed to draw the boards for easy reading of the solutions.
board(
[[3, 0, 6, 0, 0, 0, 6, 0, 3],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 3, 0, 0, 2, 0, 0, 0],
[0, 3, 0, 0, 0, 0, 4, 0, 1]]
).
For example, this representation is only good without the bridges, since it holds no information about them. Drawing this board is basically a matter of turning the 0's into spaces, so the board would be drawn like this:
3   6       6   3
  1
2         1
1   3     2
  3         4   1
which is a decent representation of a real hashi board.
The point now is to be able to draw the same thing, but also draw the bridges if there are any. I must be able to do so before I even think about writing the constraints themselves, since starting from a bad representation would make the job a lot more difficult.
I started thinking of solutions like this:
If every element of the board were a list:
[NumberOfConnections, [ListOfConnections]]
but this gives me no information for the drawing, and what would the list of connections really contain?
Maybe this:
[Index, NumberOfConnections, [ListOfIndex]]
This way every "island" would have a unique ID and the list of connections would hold IDs,
but drawing still sounds kind of hard; in the end the bridges can only be horizontal or vertical.
Anyway, can anyone think of a better representation that makes it easiest to achieve the final goal of the program?
Nice puzzle, I agree. Here is a half-way solution in ECLiPSe, a Prolog dialect with constraints (http://eclipseclp.org).
The idea is to have, for every field of the board, four variables N, E, S, W (for North, East, etc) that can take values 0..2 and represent the number of connections on that edge of the field. For the node-fields, these connections must sum up to the given number. For the empty fields, the connections must go through (N=S, E=W) and not cross (N=S=0 or E=W=0).
Your example solves correctly:
?- hashi(stackoverflow).
3 = 6 = = = 6 = 3
|   X       X   |
| 1 X       X   |
| | X       X   |
2 | X     1 X   |
| | X     | X   |
| | X     | X   |
1 | 3 - - 2 X   |
  3 = = = = 4   1
but the wikipedia one doesn't, because there is no connectedness constraint yet!
:- lib(ic).  % uses the integer constraint library

hashi(Name) :-
    board(Name, Board),
    dim(Board, [Imax,Jmax]),
    dim(NESW, [Imax,Jmax,4]),   % 4 variables N,E,S,W for each field
    ( foreachindex([I,J],Board), param(Board,NESW,Imax,Jmax) do
        Sum is Board[I,J],
        N is NESW[I,J,1],
        E is NESW[I,J,2],
        S is NESW[I,J,3],
        W is NESW[I,J,4],
        ( I > 1    -> N #= NESW[I-1,J,3] ; N = 0 ),
        ( I < Imax -> S #= NESW[I+1,J,1] ; S = 0 ),
        ( J > 1    -> W #= NESW[I,J-1,2] ; W = 0 ),
        ( J < Jmax -> E #= NESW[I,J+1,4] ; E = 0 ),
        ( Sum > 0 ->
            [N,E,S,W] #:: 0..2,
            N+E+S+W #= Sum
        ;
            N = S, E = W,
            (N #= 0) or (E #= 0)
        )
    ),
    % find a solution
    labeling(NESW),
    print_board(Board, NESW).
print_board(Board, NESW) :-
    ( foreachindex([I,J],Board), param(Board,NESW) do
        ( J > 1 -> true ; nl ),
        Sum is Board[I,J],
        ( Sum > 0 ->
            write(Sum)
        ;
            NS is NESW[I,J,1],
            EW is NESW[I,J,2],
            symbol(NS, EW, Char),
            write(Char)
        ),
        write(' ')
    ),
    nl.
symbol(0, 0, ' ').
symbol(0, 1, '-').
symbol(0, 2, '=').
symbol(1, 0, '|').
symbol(2, 0, 'X').
% Examples
board(stackoverflow,
[]([](3, 0, 6, 0, 0, 0, 6, 0, 3),
[](0, 0, 0, 0, 0, 0, 0, 0, 0),
[](0, 1, 0, 0, 0, 0, 0, 0, 0),
[](0, 0, 0, 0, 0, 0, 0, 0, 0),
[](2, 0, 0, 0, 0, 1, 0, 0, 0),
[](0, 0, 0, 0, 0, 0, 0, 0, 0),
[](0, 0, 0, 0, 0, 0, 0, 0, 0),
[](1, 0, 3, 0, 0, 2, 0, 0, 0),
[](0, 3, 0, 0, 0, 0, 4, 0, 1))
).
board(wikipedia,
[]([](2, 0, 4, 0, 3, 0, 1, 0, 2, 0, 0, 1, 0),
[](0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 1),
[](0, 0, 0, 0, 2, 0, 3, 0, 2, 0, 0, 0, 0),
[](2, 0, 3, 0, 0, 2, 0, 0, 0, 3, 0, 1, 0),
[](0, 0, 0, 0, 2, 0, 5, 0, 3, 0, 4, 0, 0),
[](1, 0, 5, 0, 0, 2, 0, 1, 0, 0, 0, 2, 0),
[](0, 0, 0, 0, 0, 0, 2, 0, 2, 0, 4, 0, 2),
[](0, 0, 4, 0, 4, 0, 0, 3, 0, 0, 0, 3, 0),
[](0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0),
[](2, 0, 2, 0, 3, 0, 0, 0, 3, 0, 2, 0, 3),
[](0, 0, 0, 0, 0, 2, 0, 4, 0, 4, 0, 3, 0),
[](0, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0),
[](3, 0, 0, 0, 0, 3, 0, 1, 0, 2, 0, 0, 2))
).
For drawing the bridges, you could use ASCII 179 (│) for single vertical bridges, 186 (║) for double vertical bridges, 196 (─) for single horizontal bridges, and 205 (═) for double horizontal bridges. This depends on which extended ASCII set is in use, though; these codes come from the common code page 437 box-drawing characters.
For the internal representation, I'd use -1 and -2 for single and double bridges in one direction, and -3 and -4 in the other. You could use just about any value that isn't 0-8, but this choice has the added benefit that checking a solution is simply a matter of adding the bridges to the island count (converting (-3, -4) to (-1, -2) first): if the sum is 0, that island is solved.
What a cool puzzle! I did a few myself, and I don't see an obvious way to make solving them deterministic, which is a nice property for a puzzle to have. Games like Tetris derive much of their ongoing play value from the fact that you don't get bored--even a good strategy can continually be refined. This has a practical ramification: if I were coding this, I would spend no further time trying to find a deterministic algorithm. I would instead focus on the generate/test paradigm Prolog excels at.
If you know you're going to do generate-and-test, you know already where all your effort at optimization is going to go: making your generator more intelligent (so it generates better candidates) and making your test fast. So I'm looking at your board representation and I'm asking myself: is it going to be easy and fast to generate alternatives from this? And we both know the answer is no, for several reasons:
Finding alternative islands to connect to from any particular island is going to be highly inefficient: searching a list forward and backward and then indexing all the other lists by the current offset. This is a huge amount of list finagling, which won't be cheap.
Detecting and preventing a bridge crossing is going to be interesting.
More to the point, the proper way to encode bridges is not obvious with this design. Islands can be separated by great distances--are you going to put a 0/1/2 in every connecting cell? If so, you have a data duplication problem; if not, you're going to have some fun calculating which location should hold the bridge count.
It's just an intuition, but having a heterogeneous data structure like this where the "kind" of element is determined entirely by whether the indices are odd or even, strikes me as unwelcome.
I think what you've got for the board layout is a great input format, but I don't think it's going to serve you well as an intermediate representation. The game is clearly a graph problem. This suggests one of the two classic graph data structures might be more helpful: the adjacency list, or the edge matrix. Either of these will expedite choosing alternatives for bridge layout, but it's not obvious to me (maybe to someone who does more graph theory) how one would prevent bridge crossings. Ideally, your data structure would simply prevent bridge crossings from occurring. Next best would be preventing the generator from generating candidate solutions with bridge crossings; worst would be to simply fail them at the test stage.

Convert a String Containing an Array to an Array in Ruby

I have a string that contains an array, which I would like to convert into an actual array. How would you do this?
I want to convert this:
myvar=
"[[Date.UTC(2010, 0, 23),0],[Date.UTC(2010, 0, 24),0],[Date.UTC(2010, 0, 25),3],[Date.UTC(2010, 0, 26),0],[Date.UTC(2010, 0, 27),0],[Date.UTC(2010, 0, 28),0],[Date.UTC(2010, 0, 29),0],[Date.UTC(2010, 0, 30),0],[Date.UTC(2010, 0, 31),0],[Date.UTC(2010, 1, 01),0],[Date.UTC(2010, 1, 02),0],[Date.UTC(2010, 1, 03),1],[Date.UTC(2010, 1, 04),2],[Date.UTC(2010, 1, 05),0],[Date.UTC(2010, 1, 06),0],[Date.UTC(2010, 1, 07),0],[Date.UTC(2010, 1, 08),0],[Date.UTC(2010, 1, 09),0],[Date.UTC(2010, 1, 10),0],[Date.UTC(2010, 1, 11),0],[Date.UTC(2010, 1, 12),0],[Date.UTC(2010, 1, 13),0],[Date.UTC(2010, 1, 14),0],[Date.UTC(2010, 1, 15),0],[Date.UTC(2010, 1, 16),0],[Date.UTC(2010, 1, 17),0],[Date.UTC(2010, 1, 18),0],[Date.UTC(2010, 1, 19),0],[Date.UTC(2010, 1, 20),0],[Date.UTC(2010, 1, 21),0]]"
myvar.class
>>String
Into This:
myvar =
[[Date.UTC(2010, 0, 23),0],[Date.UTC(2010, 0, 24),0],[Date.UTC(2010, 0, 25),3],[Date.UTC(2010, 0, 26),0],[Date.UTC(2010, 0, 27),0],[Date.UTC(2010, 0, 28),0],[Date.UTC(2010, 0, 29),0],[Date.UTC(2010, 0, 30),0],[Date.UTC(2010, 0, 31),0],[Date.UTC(2010, 1, 01),0],[Date.UTC(2010, 1, 02),0],[Date.UTC(2010, 1, 03),1],[Date.UTC(2010, 1, 04),2],[Date.UTC(2010, 1, 05),0],[Date.UTC(2010, 1, 06),0],[Date.UTC(2010, 1, 07),0],[Date.UTC(2010, 1, 08),0],[Date.UTC(2010, 1, 09),0],[Date.UTC(2010, 1, 10),0],[Date.UTC(2010, 1, 11),0],[Date.UTC(2010, 1, 12),0],[Date.UTC(2010, 1, 13),0],[Date.UTC(2010, 1, 14),0],[Date.UTC(2010, 1, 15),0],[Date.UTC(2010, 1, 16),0],[Date.UTC(2010, 1, 17),0],[Date.UTC(2010, 1, 18),0],[Date.UTC(2010, 1, 19),0],[Date.UTC(2010, 1, 20),0],[Date.UTC(2010, 1, 21),0]]
myvar.class
>>Array
While the obvious answer involves eval, this is dangerous. I would instead recommend parsing it. Since this is quite a well-defined data format (it seems), you can use this:
myvar.scan(/\d+/).map(&:to_i).each_slice(4).map{|*x,y| [Date.UTC(*x), y]}
This will:
pull out all the digits
convert them to integers
separate them into groups of four
apply the first three of each group to Date.UTC as the first through third arguments
pair each date with its corresponding y
create an array containing all of these pairs.
I don't have a Date.UTC method, but I assume you have some custom method called that.
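To see it run end to end, here is a sketch with a hypothetical stand-in for Date.UTC (substitute whatever your real Date.UTC does):

# Hypothetical stand-in for the custom Date.UTC in the string; assumes
# JavaScript-style 0-based months, hence month + 1.
module Date
  def self.UTC(year, month, day)
    Time.utc(year, month + 1, day)
  end
end

myvar = "[[Date.UTC(2010, 0, 23),0],[Date.UTC(2010, 0, 24),2]]"
result = myvar.scan(/\d+/).map(&:to_i).each_slice(4).map { |*x, y| [Date.UTC(*x), y] }
p result
# [[2010-01-23 00:00:00 UTC, 0], [2010-01-24 00:00:00 UTC, 2]]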
Try the eval command:
x = eval("[\"foo\",\"bar\",\"land\"]")
=> ["foo", "bar", "land"]
x
=> ["foo", "bar", "land"]
But eval is dangerous, so be careful when you use it.
