Deep semantic segmentation and conditional random fields (CRFs)

I found that the idea of combining a deep semantic segmentation model with a conditional random field is getting very popular. Is there any library or resource for CRF code in TensorFlow? My deep semantic segmentation model is written in TensorFlow, and I would like to integrate it with a CRF for further boundary refinement.
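For what it's worth, the linear-chain CRF ops that ship with TensorFlow Addons (e.g. tfa.text.crf_log_likelihood) target sequence labeling rather than dense 2-D segmentation. The usual recipe in DeepLab-style pipelines is instead to run a fully connected CRF as a post-processing step on the network's softmax output, commonly via the pydensecrf package rather than inside the TensorFlow graph. Below is a minimal sketch, assuming `probs` is the (num_classes, H, W) softmax output exported from the TensorFlow model and `image` is the matching uint8 RGB image; the pairwise parameters are illustrative defaults, not tuned values.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, num_iters=5):
    """Refine a soft segmentation with a fully connected CRF (pydensecrf)."""
    n_classes, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)

    # Unary term: negative log-probabilities from the network's softmax.
    d.setUnaryEnergy(unary_from_softmax(probs))

    # Pairwise terms: a colour-independent smoothness kernel and a
    # colour-dependent bilateral kernel (parameters here are illustrative).
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=80, srgb=13,
                           rgbim=np.ascontiguousarray(image), compat=10)

    q = np.array(d.inference(num_iters))        # mean-field inference
    return np.argmax(q, axis=0).reshape(h, w)   # refined label map
```

This keeps the CRF outside the training loop; if you want the CRF trained end-to-end with the network (CRF-as-RNN style), that requires a custom layer rather than a post-processing call like the one above.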

Related

How to check the understanding of a trained model?

I'm currently training two models (BERT & MPNet) for a semantic textual similarity (STS) task using the SentenceTransformers library.
Now I want to check whether the base models and/or the trained models understand specific words/names that occur within the training dataset. I tried masking or calculating the similarity to specific categories related to the words/names, but the results were hardly distinguishable.
Is there any way to check, or even prove, whether a model understands a specific word or sequence?
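One practical (if indirect) probe is to embed the target term in contrasting contexts and compare cosine similarities, which at least shows whether the model separates it from unrelated terms. A minimal sketch with the SentenceTransformers API; the model name, sentences, and category descriptions below are made up for illustration:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder model name; swap in your base or fine-tuned model path.
model = SentenceTransformer("all-mpnet-base-v2")

# Hypothetical probe: is the target term placed closer to a related
# category description than to an unrelated one?
target = "the new contract covers the Aurora project"       # made-up sentence
related = "an internal software project at the company"     # made-up description
unrelated = "a type of atmospheric light phenomenon"

emb = model.encode([target, related, unrelated], convert_to_tensor=True)
print("related  :", util.cos_sim(emb[0], emb[1]).item())
print("unrelated:", util.cos_sim(emb[0], emb[2]).item())
```

Running the same probes against both the base and the fine-tuned model gives a rough before/after picture, but this is evidence of association rather than proof of "understanding".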

Do I need to create separate embedding matrices for source and target vocab for an abstractive summarization model?

I'm working on a Seq2Seq model to perform abstractive summarization using the GloVe pre-trained word embeddings. Is it required that I make two embedding matrices? One that covers the source vocabulary and one that covers the summary vocabulary.
No, the common practice is to share the embedding matrices, even in machine translation where the words are from different languages.
Sometimes, the embedding matrix is also used as an output projection matrix when generating model output (see, e.g., the Attention Is All You Need paper); however, this is only possible if you use a vocabulary of tens of thousands of (sub)words, as opposed to the very large vocabulary of GloVe.
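As a concrete illustration of sharing, a single embedding layer can be reused for both the source and the summary tokens, provided both sides go through one joint vocabulary. A minimal Keras sketch; the vocabulary size, dimensions, and layer choices are arbitrary placeholders:

```python
import tensorflow as tf

vocab_size, emb_dim = 30000, 300   # one joint vocabulary for source and summary

# A single embedding layer, optionally initialised from GloVe vectors.
shared_embedding = tf.keras.layers.Embedding(vocab_size, emb_dim, mask_zero=True)

encoder_inputs = tf.keras.Input(shape=(None,), dtype="int32")
decoder_inputs = tf.keras.Input(shape=(None,), dtype="int32")

encoder_emb = shared_embedding(encoder_inputs)   # same weights ...
decoder_emb = shared_embedding(decoder_inputs)   # ... used for both sides

encoder_out, state_h, state_c = tf.keras.layers.LSTM(
    256, return_sequences=True, return_state=True)(encoder_emb)
decoder_out = tf.keras.layers.LSTM(256, return_sequences=True)(
    decoder_emb, initial_state=[state_h, state_c])
logits = tf.keras.layers.Dense(vocab_size)(decoder_out)

model = tf.keras.Model([encoder_inputs, decoder_inputs], logits)
```

Tying the output projection to the same embedding matrix, as mentioned above, additionally requires a small custom layer that multiplies by the transposed embedding weights; it is omitted here for brevity.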

LightGBM: Intent of lightgbm.Dataset()

What is the purpose of lightgbm.Dataset(), as per the docs, when I can use the sklearn API to feed the data and train a model?
Any real-world examples explaining the usage of lightgbm.Dataset() would be helpful.
LightGBM uses a few techniques to speed up training which require preprocessing one time before training starts.
The most important of these is bucketing continuous features into histograms. When LightGBM searches for splits to possibly add to a tree, it only searches the boundaries of these histogram bins. This greatly reduces the number of splits to evaluate.
I think the picture in "What Makes LightGBM Fast?" describes it well.
The Dataset object in the library is where this preprocessing happens. Histograms are created one time, and then don't need to be calculated again for the rest of training.
You can get some more information about what happens in the Dataset object by looking at the parameters that control that Dataset, available at https://lightgbm.readthedocs.io/en/latest/Parameters.html#dataset-parameters. Some examples of other tasks:
- optimization for sparse features
- filtering out features that are not splittable
"when I can use the sklearn API to feed the data and train a model"
The lightgbm.sklearn interface is intended to make it easy to use LightGBM alongside other libraries like xgboost and scikit-learn. It takes in data in formats like scipy sparse matrices, pandas data frames, and numpy arrays to be compatible with those other libraries. Internally, LightGBM constructs a Dataset from those inputs.
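To make the relationship concrete, the two snippets below train comparable models: the first builds the Dataset explicitly with the native API, the second lets the sklearn wrapper construct it internally. A minimal sketch on synthetic data; the parameter values are arbitrary:

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 10)), rng.integers(0, 2, size=1000)

# Native API: the Dataset is built explicitly; binning (max_bin) and other
# preprocessing happen once, here, before boosting starts.
train_set = lgb.Dataset(X, label=y, params={"max_bin": 255})
booster = lgb.train({"objective": "binary", "verbose": -1},
                    train_set, num_boost_round=50)

# sklearn API: same algorithm, but the Dataset is created internally by fit().
clf = lgb.LGBMClassifier(n_estimators=50)
clf.fit(X, y)
```

Working with the Dataset directly mainly pays off when you want to control or reuse that one-time preprocessing, for example by saving a binned Dataset or sharing bin boundaries between training and validation data via the reference argument.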

Should I split my data into training/testing/validation sets with k-fold cross-validation?

When evaluating a recommender system, one could split his data into three pieces: training, validation and testing sets. In such case, the training set would be used to learn the recommendation model from data and the validation set would be used to choose the best model or parameters to use. Then, using the chosen model, the user could evaluate the performance of his algorithm using the testing set.
I have found a documentation page for scikit-learn cross-validation (http://scikit-learn.org/stable/modules/cross_validation.html) where it says that it is not necessary to split the data into three pieces when using k-fold cross-validation, but only into two: training and testing.
A solution to this problem is a procedure called cross-validation (CV for short). A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV. In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles).
I am wondering if this would be a good approach. And if so, could someone show me a reference to an article/book backing this theory up?
Cross-validation does not avoid the validation set; it simply uses many of them. In other words, instead of one split into three parts, you have one split into two, and what you now call "training" is actually what previously was training plus validation. CV is simply about repeated splits (done in a slightly smarter manner than purely at random) into train and test, followed by averaging the results.
The theory backing it up is widely available in pretty much any good ML book; the crucial question is "should I use it?", and the answer is surprisingly simple: only if you do not have enough data to do a single split. CV is used when you do not have enough data for each split to be representative of the distribution you are interested in; doing repeated splits then simply reduces the variance. Furthermore, for really small datasets one does nested CV: an outer loop for the [train+val][test] split and an inner one for the [train][val] split, so that the variance of both model selection and the final evaluation is reduced.
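Here is a minimal scikit-learn sketch of the scheme described above: a held-out test set, k-fold CV on the remaining data for model selection, and optionally nested CV when data are really scarce. The estimator, parameter grid, and dataset are placeholders:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import (GridSearchCV, cross_val_score,
                                     train_test_split)
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Hold out a final test set; CV replaces the single validation split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Model selection via 5-fold CV on the training portion only.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)
print("test accuracy:", search.score(X_test, y_test))

# Nested CV (for very small datasets): the outer loop estimates the
# performance of the whole "tune then fit" procedure.
outer_scores = cross_val_score(
    GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5), X, y, cv=5)
print("nested CV accuracy:", outer_scores.mean())
```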

4-fold cross validation | Caffe

So I am trying to perform 4-fold cross-validation on my training set. I have divided my training data into four quarters. I use three quarters for training and one quarter for validation. I repeat this three more times until each quarter has had a chance to be the validation set once.
Now, after training, I have four caffemodels. I test the models on my validation sets and get a different accuracy in each case. How should I proceed from here? Should I just choose the model with the highest accuracy?
Maybe this is a late reply, but in any case...
The short answer is that, if the performances of the four models are similar and good enough, then you re-train the model on all the available data, because you don't want to waste any of it.
N-fold cross-validation is a practical technique for getting some insight into the learning and generalization properties of the model you are trying to train, when you don't have a lot of data to start with. You can find details everywhere on the web, but I suggest the open-source book An Introduction to Statistical Learning, Chapter 5.
The general rule says that after you have trained your n models, you average the prediction error (MSE, accuracy, or whatever) to get a general idea of the performance of that particular model (in your case, perhaps the network architecture and learning strategy) on that dataset.
The main idea is to assess the models learned on the training splits by checking whether they have acceptable performance on the validation set. If they do not, your models probably overfitted the training data. If the errors on both the training and validation splits are high, then the models should be reconsidered, since they lack predictive capacity.
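As a small, framework-agnostic illustration of the rule above (the accuracy numbers are made up), you would summarize the four validation accuracies and, if they look stable, discard the fold models and retrain once on the full training set:

```python
import numpy as np

# Hypothetical validation accuracies of the four caffemodels, one per fold.
fold_accuracies = np.array([0.81, 0.79, 0.83, 0.80])

mean_acc, std_acc = fold_accuracies.mean(), fold_accuracies.std()
print(f"4-fold CV accuracy: {mean_acc:.3f} +/- {std_acc:.3f}")

# If the spread is small and the mean is acceptable, retrain a single model
# on all the training data before the final test evaluation.
```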
In any case, I would also consider the advice of Yoshua Bengio, who says that for the kind of problems deep learning is meant for, you usually have enough data to simply go with a training/test split. In that case, this answer on Stack Overflow could be useful to you.
