Vowpal Wabbit Continuing Training

Continuing with some experimentation here, I was interested in seeing how to continue training a VW model.
I first ran this and saved the model.
vw -d housing.vm --loss_function squared -f housing2.mod --invert_hash readable.housing2.mod
Examining the readable model:
Version 7.7.0
Min label:0.000000
Max label:50.000000
bits:18
0 pairs:
0 triples:
rank:0
lda:0
0 ngram:
0 skip:
options:
:0
^AGE:104042:0.020412
^B:158346:0.007608
^CHAS:102153:1.014402
^CRIM:141890:0.016158
^DIS:182658:0.278865
^INDUS:125597:0.062041
^LSTAT:170288:0.028373
^NOX:165794:2.872270
^PTRATIO:223085:0.108966
^RAD:232476:0.074916
^RM:2580:0.330865
^TAX:108300:0.002732
^ZN:54950:0.020350
Constant:116060:2.728616
If I then continue to train the model using two more examples (in housing_2.vm), which, note, have zero values for ZN and CHAS:
27.50 | CRIM:0.14866 ZN:0.00 INDUS:8.560 CHAS:0 NOX:0.5200 RM:6.7270 AGE:79.90 DIS:2.7778 RAD:5 TAX:384.0 PTRATIO:20.90 B:394.76 LSTAT:9.42
26.50 | CRIM:0.11432 ZN:0.00 INDUS:8.560 CHAS:0 NOX:0.5200 RM:6.7810 AGE:71.30 DIS:2.8561 RAD:5 TAX:384.0 PTRATIO:20.90 B:395.58 LSTAT:7.67
If the saved model is loaded and training continues, the coefficients for these zero-valued features appear to be lost. Am I doing something wrong, or is this a bug?
vw -d housing_2.vm --loss_function squared -i housing2.mod --invert_hash readable.housing3.mod
output from readable.housing3.mod:
Version 7.7.0
Min label:0.000000
Max label:50.000000
bits:18
0 pairs:
0 triples:
rank:0
lda:0
0 ngram:
0 skip:
options:
:0
^AGE:104042:0.023086
^B:158346:0.008148
^CRIM:141890:1.400201
^DIS:182658:0.348675
^INDUS:125597:0.087712
^LSTAT:170288:0.050539
^NOX:165794:3.294814
^PTRATIO:223085:0.119479
^RAD:232476:0.118868
^RM:2580:0.360698
^TAX:108300:0.003304
Constant:116060:2.948345

If you want to continue learning from saved state in a smooth fashion you must use the --save_resume option.
There are 3 fundamentally different types of "state" that can be saved into a vw "model" file:
The weight vector (regressor) obviously. That's the model itself.
invariant parameters like the version of vw (to ensure binary compatibility which is not always preserved between versions), number of bits in the vector (-b), and type of model
state which dynamically changes during learning. This subset includes parameters like learning and decay rates which gradually change during learning with each example, the example numbers themselves, etc.
Only --save_resume saves the last group.
--save_resume is not the default because it has an overhead and in most use-cases it isn't needed. e.g. if you save a model once in order to do many predictions and no learning (-t), there's no need to save the 3rd subset of state.
So, I believe in your particular case, you want to use --save_resume.
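For example, a minimal sketch using the housing files from the question (pass --save_resume on every run whose output model you intend to keep training):
vw -d housing.vm --loss_function squared --save_resume -f housing2.mod
vw -d housing_2.vm --loss_function squared --save_resume -i housing2.mod -f housing3.mod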
The possibility of a bug always exists, especially since vw supports so many options (about 100 at last count) which are often interdependent. Some option combinations make sense, others don't. Doing a sanity check for roughly 2^100 possible option combinations is a bit unrealistic. If you find a bug, please open an issue on github. In this case, please make sure to use a complete example (full data & command line) so your problem can be reproduced.
Update 2014-09-20 (after an issue was opened on github, thanks!):
The reason for 0-valued features "disappearing" (not really from the model, but only from the --invert_hash output) is that 1) --invert_hash was never designed for multiple passes, because keeping the original feature names in a hash-table incurs a large performance overhead, and 2) the missing features are those with a zero value, which are discarded. The model itself should still contain any feature with a non-zero weight from a prior pass. Fixing this inconsistency is too complex and costly for implementation reasons, and would go against the overriding motivation of making vw fast, especially for the most useful/common use-cases. Anyway, thanks for the report, I too learned something new from it.

Related

My Doc2Vec code, after many loops/epochs of training, isn't giving good results. What might be wrong?

I'm training a Doc2Vec model using the below code, where tagged_data is a list of TaggedDocument instances I set up before:
from gensim.models.doc2vec import Doc2Vec

max_epochs = 40
model = Doc2Vec(alpha=0.025,
                min_alpha=0.001)
model.build_vocab(tagged_data)
for epoch in range(max_epochs):
    print('iteration {0}'.format(epoch))
    model.train(tagged_data,
                total_examples=model.corpus_count,
                epochs=model.iter)
    # decrease the learning rate
    model.alpha -= 0.001
    # fix the learning rate, no decay
    model.min_alpha = model.alpha
model.save("d2v.model")
print("Model Saved")
When I later check the model results, they're not good. What might have gone wrong?
Do not call .train() multiple times in your own loop that tries to do alpha arithmetic.
It's unnecessary, and it's error-prone.
Specifically, in the above code, decrementing the original 0.025 alpha by 0.001 forty times results in a final alpha of 0.025 - 40*0.001 = -0.015, which would also have been negative for many of the training epochs. But a negative alpha learning-rate is nonsensical: it essentially asks the model to nudge its predictions a little bit in the wrong direction, rather than a little bit in the right direction, on every bulk training update. (Further, since model.iter is by default 5, the above code actually performs 40 * 5 training passes – 200 – which probably isn't the conscious intent. But that will just confuse readers of the code & slow training, not totally sabotage results, like the alpha mishandling.)
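A quick check of that arithmetic in plain Python, using the values from the code above:
alpha = 0.025
for epoch in range(40):
    alpha -= 0.001
print(round(alpha, 3))  # -0.015, and alpha has gone negative well before the last epoch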
There are other variants of this error that are common as well. If the alpha were instead decremented by 0.0001, the 40 decrements would only reduce the final alpha to 0.021 – whereas the proper practice for this style of SGD (Stochastic Gradient Descent) with linear learning-rate decay is for the value to end very close to 0.000. If users start tinkering with max_epochs – it is, after all, a parameter pulled out on top! – but don't also adjust the decrement every time, they are likely to far-undershoot or far-overshoot 0.000.
So don't use this pattern.
Unfortunately, many bad online examples have copied this anti-pattern from each other, and make serious errors in their own epochs and alpha handling. Please don't copy their error, and please let their authors know they're misleading people wherever this problem appears.
The above code can be improved with the much-simpler replacement:
max_epochs = 40
model = Doc2Vec() # of course, if non-default parameters needed, use them here
# most users won't need to change alpha/min_alpha at all
# but many will want to use more than default `epochs=5`
model.build_vocab(tagged_data)
model.train(tagged_data, total_examples=model.corpus_count, epochs=max_epochs)
model.save("d2v.model")
Here, the .train() method will do exactly the requested number of epochs, smoothly reducing the internal effective alpha from its default starting value to near-zero. (It's rare to need to change the starting alpha, but even if you wanted to, just setting a new non-default value at initial model-creation is enough.)
Also: note that later calls to infer_vector() will reuse the epochs specified at the time of model-creation. If nothing is specified, the default epochs=5 will be used – which is often smaller than is best for training or inference. So if you find a larger number of epochs (such as 10, 20 or more) is better for training, remember to also use at least the same number of epochs for inference. (.infer_vector() takes an optional epochs parameter which can override any value set at model-construction.)
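For example, a minimal sketch (the token list is just a placeholder; max_epochs is the training value chosen above):
# match the inference epochs to the (larger) number used for training
vector = model.infer_vector(["some", "example", "tokens"], epochs=max_epochs)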

vowpal wabbit: --multiclass_oaa does not produce probabilities

I've tried
vw --multilabel_oaa 68 -d vw_data.csv --loss_function=logistic --probabilities -p probabilities.txt
and ended up with only target labels in probabilities.txt. Also, the -r option, designed to produce raw output, unfortunately returned nothing.
Apart from that, I'm not sure whether there is a way to achieve similar behaviour (multilabel prediction with logistic loss) with other available VW multiclass options such as --csoaa and --wap.
I don't remember exactly, but I think --probabilities does not support multilabel. I don't even know what the interpretation would be (modelling the probability of label co-occurrence and providing the probabilities for all 2^68 subsets?).
You can use standard multi-class --oaa 68. With --probabilities it should predict the probability for each class, so you can use e.g. some kind of threshold for selecting multiple labels (classes) for each example (e.g. such that the sum of their probabilities is at least 42%).
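A minimal sketch of that idea (the command is the one from the question with --oaa swapped in; the Python helper is hypothetical and assumes you have already parsed one example's per-class probabilities into a dict):
vw --oaa 68 -d vw_data.csv --loss_function=logistic --probabilities -p probabilities.txt

def select_labels(class_probs, min_mass=0.42):
    # greedily take the most probable classes until their combined probability reaches the threshold
    chosen, total = [], 0.0
    for label, prob in sorted(class_probs.items(), key=lambda kv: kv[1], reverse=True):
        chosen.append(label)
        total += prob
        if total >= min_mass:
            break
    return chosen

# e.g. select_labels({1: 0.30, 2: 0.20, 3: 0.10}) -> [1, 2]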

Speed of L2 Regularization on Pytorch

I'm trying to manually implement L2 regularisation and a couple of its variations in a neural network. What I'm doing is the following:
l2_reg = 0.0
for name, param in model.named_parameters():  # state_dict() yields only keys and detached tensors; named_parameters() keeps grad tracking
    if 'weight' in name:
        l2_reg += torch.sum(param**2)
loss = cross_entropy(outputs, labels) + 0.0001*l2_reg
Is this equivalent to adding 'weight_decay = 0.0001' inside my optimizer? i.e.:
torch.optim.SGD(model.parameters(), lr=learning_rate , momentum=0.9, weight_decay = 0.0001)
My problem is that I thought they were equivalent, but the manual procedure is about 100x slower than adding 'weight_decay = 0.0001'. Why is that? How can I fix it?
Note that I need to also implement my own variation of L2 regularization, so just adding 'weight_decay = 0.0001' won't help.
You can check the PyTorch implementation of SGD to get some tips and base your code on it.
There are a few things going on which should speed up your custom regularization.
Below is a cleaned version (a little pseudo-code, refer to original) of the parts we are interested in:
for p in group['params']:
    if p.grad is None:
        continue
    d_p = p.grad.data
    if weight_decay != 0:
        d_p.add_(weight_decay, p.data)   # in-place: d_p += weight_decay * p
    p.data.add_(-group['lr'], d_p)       # in-place SGD update: p -= lr * d_p
return loss
BTW, it seems your implementation is mathematically sound (correct me if I missed anything) and equivalent to PyTorch, but it will indeed be slow.
Modify only gradient
Please notice you perform regularization explicitly during the forward pass. This takes a lot of time, more or less because you:
take the parameters and iterate over them
raise each to the power of 2
sum all of them
add the result to a variable holding the sum over all previous parameters (all of this while creating the graph dynamically and creating new nodes).
What PyTorch does is focus only on the backward pass, as that's all that is needed. This is pretty handy because:
parameters have to be loaded and iterated over only once anyway, during the corrections performed by the optimizer (in your case they are traversed twice)
no power of 2, because the gradient of w**2 is simply 2*w (the 2 is then dropped, and L2 is often expressed as 1/2 * w**2 to make it simpler and a little faster)
no accumulation and no creation of additional graph nodes
Essentially, this line:
d_p.add_(weight_decay, p.data)
modifies the gradient by adding p.data (the weight) multiplied by weight_decay, all done in-place (notice d_p.add_), which is all you have to do to perform L2 regularization.
Finally this line:
p.data.add_(-group['lr'], d_p)
updates the weights with the gradient (modified by weight decay) using the standard SGD formula (once again, in-place to be as fast as possible, at least at the Python level).
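As a side note, newer PyTorch releases write these two in-place updates with the alpha keyword instead of the positional scalar; a small sketch of the equivalent lines (same variables as in the snippet above):
d_p.add_(p.data, alpha=weight_decay)   # gradient-side weight decay (L2)
p.data.add_(d_p, alpha=-group['lr'])   # standard SGD parameter update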
Your own implementation
I would advise you to follow similar logic for your own regularization if you want to make it faster.
You can copy the PyTorch implementation of SGD and only change this one relevant line. This would also give you the functionality of a PyTorch optimizer, in case you need it in your experiments.
For L1 regularization (|w| instead of w**2) you would have to calculate its derivative: 1 for positive values, -1 for negative ones, and undefined at 0 (we can't have that, so it should be zero).
With that in mind we can write the weight_decay like this:
if weight_decay != 0:
    d_p.add_(weight_decay, torch.sign(p.data))
torch.sign returns 1 for positive values and -1 for negative and 0 for... yeah, 0.
Hope this helps, exact implementation is left for you (hit me up in the comments in case you have any questions or troubles).
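If you'd rather not copy the whole optimizer, here is a minimal standalone sketch of the same gradient-side idea for L1 (the function name and hyperparameter values are made up for illustration, not part of PyTorch):

import torch

def sgd_step_with_l1(params, lr=0.01, l1_decay=1e-4):
    # one manual SGD step that folds the L1 penalty's subgradient into each
    # parameter's gradient before the update, mirroring how SGD folds in weight_decay
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            d_p = p.grad
            if l1_decay != 0:
                # subgradient of l1_decay * |w| is l1_decay * sign(w) (0 at w == 0)
                d_p = d_p.add(torch.sign(p), alpha=l1_decay)
            p.add_(d_p, alpha=-lr)

# usage after loss.backward():
# sgd_step_with_l1(model.parameters(), lr=0.1, l1_decay=1e-4)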

metafor() non-negative sampling variance

I am trying to learn meta-regression using the metafor package. In running one of the mixed regression models, I received an error indicating:
"There are outcomes with non-positive sampling variances."
I am at a loss as to how to proceed with this error. I understand that certain model statistics (e.g., I^2 and QE) cannot be computed due to the presence of non-positive sampling variances. However, I am not sure whether these results can be interpreted in the same way as they would be otherwise. I also tried using other estimators and/or the unweighted option; the error still persists.
Any suggestions would be much appreciated.
First of all, to clarify: You are getting a warning, not an error.
Aside from that, I can't think of many situations where it is reasonable to assume that the sampling variance is really equal to 0 in a particular study. I would first question whether this really makes sense. This is why the rma() function is generating this warning message -- to make the user aware of this situation and question whether this really is intended/reasonable.
But suppose we really want to go through with this; then you have to use an estimator for tau^2 that can handle this (e.g., method="REML" -- which is actually the default). If the estimate of tau^2 ends up equal to 0 as well, then the model cannot be fitted at all (due to division by zero -- and then you get an error). If you do end up with a positive estimate of tau^2, then the results should be okay (but things like the Q-test, I^2, or H^2 cannot be computed then).

Ordinary Least Squares Regression in Vowpal Wabbit

Has anyone managed to run an ordinary least squares regression in Vowpal Wabbit? I'm trying to confirm that it will return the same answer as the exact solution, i.e. when choosing a to minimize ||y - Xa||_2^2 + ||Ra||_2^2 (where R is the regularization matrix), I want to get the analytic answer
a = (X^T X + R^T R)^(-1) X^T y. Doing this type of regression takes about 5 lines of numpy in Python.
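For reference, a minimal numpy sketch of that closed-form solution (the data and the shape of R are made up; setting R to zero gives plain OLS):
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # made-up design matrix
y = rng.normal(size=100)        # made-up targets
R = 0.0 * np.eye(5)             # zero regularization -> ordinary least squares
a = np.linalg.solve(X.T @ X + R.T @ R, X.T @ y)
print(a)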
The documentation of VW suggests that it can do this (presumably with the "squared" loss function) but so far I've been unable to get it to come even close to matching the Python results. Because squared is the default loss function, I'm simply calling:
$ vw-varinfo input.txt
where input.txt has lines like
1.4 | 0:3.4 1:-1.2 2:4.0 .... etc
Do I need some other parameters in the VW call? I'm unable to grok the (rather minimal) documentation.
I think you should use this syntax (vowpal wabbit version 7.3.1):
vw -d input.txt -f linear_model -c --passes 50 --holdout_off --loss_function squared --invert_hash model_readable.txt
This syntax will instruct VW to read your input.txt file, write a model file and a cache to disk (the cache is necessary for multi-pass convergence), and fit a regression using the squared loss function. Finally, it will write the model coefficients in a readable fashion into a file called model_readable.txt.
The --holdout_off option is a recent addition that suppresses the automatic out-of-sample loss computation (if you are using an earlier version, you have to remove it).
Basically, a regression based on stochastic gradient descent will provide you with a vector of coefficients similar to the exact solution only when no regularization is applied and the number of passes is high (I would suggest 50 or even more; randomly shuffling the input file rows would also help the algorithm converge better).
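To compare the two fits, here is a small sketch that pulls the coefficients out of the readable model written above (the feature:hash:weight line format is taken from the --invert_hash output shown earlier on this page):
weights = {}
with open("model_readable.txt") as f:
    for line in f:
        parts = line.strip().rsplit(":", 2)
        if len(parts) == 3:   # weight lines look like ^RM:2580:0.330865
            name, _hash, value = parts
            try:
                weights[name.lstrip("^")] = float(value)
            except ValueError:
                pass          # skip any header line that happens to contain two colons
print(weights)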
