Huggingface transformer export tokenizer and model - huggingface-transformers

I'm currently working on a text summarizer powered by the Hugging Face transformers library. The summarization has to be done on-premises, so I have the following code (close to the documentation):
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
model = BartForConditionalGeneration.from_pretrained('sshleifer/distilbart-cnn-6-6')
tokenizer = BartTokenizer.from_pretrained('sshleifer/distilbart-cnn-6-6')
inputs = tokenizer([myTextToSummarize], max_length=1024, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'], num_beams=4, early_stopping=True)
[tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]
My problem is that I cannot load the model in memory and have my server expose an API that directly uses the model and tokenizer. I would like both of them to be initialized in a first process and made available to a second one (the one that will expose an HTTP API). I saw that you can export the model to the filesystem, but again, I don't have access to it (locked k8s environment), and I'd need to store it in a specific database.
Is it possible to export both the model and the tokenizer as a string/buffer/something storable in a database?
Thanks a lot
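A minimal sketch of one way to make both objects database-storable, assuming a PyTorch backend and that the serving process runs the same transformers/torch versions (pickled objects are version-sensitive): serialize them to bytes in memory and store the result in a blob column.
import pickle

# Serialize in the first process (no filesystem needed); both objects are picklable.
model_blob = pickle.dumps(model)          # bytes, storable in a BLOB/bytea column
tokenizer_blob = pickle.dumps(tokenizer)

# Restore in the serving process after reading the blobs back from the database.
model = pickle.loads(model_blob)
tokenizer = pickle.loads(tokenizer_blob)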

Related

How to combine results from multiple models in Google Vertex AI?

I have multiple models in Google Vertex AI and I want to create an endpoint to serve my predictions.
I need to run aggregation algorithms, like a voting algorithm, on the outputs of my models.
I have not found any ways of using the models together so that I can run the voting algorithms on the results.
Do I have to create a new model, curl my existing models and then run my algorithms on the results?
There is no built-in provision to implement aggregation algorithms in Vertex AI. To curl results from the models and then aggregate them, we would need to deploy all of them to individual endpoints. Instead, I would suggest the method below: deploy the models and the meta-model (aggregation model) to a single endpoint using a custom container for prediction. The custom container requirements can be found here.
You can load the model artifacts from GCS into the custom container. If the same set of models is always used (i.e. the input models to the meta-model do not change), you can package them inside the container to reduce load time. A custom HTTP handler can then return the aggregated output, for example with the following FastAPI route (a deployment sketch follows the code):
import os

import numpy as np
from fastapi import FastAPI, Request

app = FastAPI()

def get_models_from_gcs():
    ## Pull the required model artifacts from GCS and load them here.
    models = [model_1, model_2, model_3]
    return models

def aggregate_predictions(predictions):
    ## Your aggregation algorithm (e.g. voting) here.
    return aggregated_result

@app.post(os.environ['AIP_PREDICT_ROUTE'])
async def predict(request: Request):
    body = await request.json()
    instances = body["instances"]
    inputs = np.asarray(instances)
    # _preprocessor is assumed to be defined elsewhere in the container.
    preprocessed_inputs = _preprocessor.preprocess(inputs)
    models = get_models_from_gcs()
    predictions = []
    for model in models:
        predictions.append(model.predict(preprocessed_inputs))
    aggregated_result = aggregate_predictions(predictions)
    return {"aggregated_predictions": aggregated_result}
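Once the container image is built and pushed, a sketch of uploading and deploying it to a single endpoint with the Vertex AI SDK could look like this (project, region, image URI and routes below are placeholders):
from google.cloud import aiplatform

aiplatform.init(project='my-project', location='us-central1')  # placeholder project/region

# Register the custom container as a Vertex AI Model resource.
model = aiplatform.Model.upload(
    display_name='ensemble-meta-model',
    serving_container_image_uri='us-docker.pkg.dev/my-project/repo/ensemble:latest',  # your image
    serving_container_predict_route='/predict',   # must match the route the container serves
    serving_container_health_route='/health',
)

# Deploy it to a single endpoint and query it.
# Note: for endpoint.predict() to parse the response, the container should wrap its
# output under a top-level "predictions" key.
endpoint = model.deploy(machine_type='n1-standard-4')
response = endpoint.predict(instances=[[1.0, 2.0, 3.0]])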

Download pre-trained sentence-transformers model locally

I am using the SentenceTransformers library (here: https://pypi.org/project/sentence-transformers/#pretrained-models) to create sentence embeddings with the pre-trained model bert-base-nli-mean-tokens. I have an application that will be deployed to a device without internet access. How to save the model has already been answered here: Download pre-trained BERT model locally. Yet I'm stuck at loading the saved model from the locally saved path.
When I try to save the model using the above-mentioned technique, these are the output files:
('/bert-base-nli-mean-tokens/tokenizer_config.json',
'/bert-base-nli-mean-tokens/special_tokens_map.json',
'/bert-base-nli-mean-tokens/vocab.txt',
'/bert-base-nli-mean-tokens/added_tokens.json')
When I try to load it in the memory, using
tokenizer = AutoTokenizer.from_pretrained(to_save_path)
I'm getting
Can't load config for '/bert-base-nli-mean-tokens'. Make sure that:
- '/bert-base-nli-mean-tokens' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/bert-base-nli-mean-tokens' is the correct path to a directory containing a config.json
You can download, save and load the model like this:
from sentence_transformers import SentenceTransformer
modelPath = "local/path/to/model"
model = SentenceTransformer('bert-base-nli-stsb-mean-tokens')
model.save(modelPath)
model = SentenceTransformer(modelPath)
This worked for me. You can check the SBERT documentation for details of the SentenceTransformer class [here][1].
[1]: https://www.sbert.net/docs/package_reference/SentenceTransformer.html#:~:text=class,Optional%5Bstr%5D%20%3D%20None)
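For completeness, a short sketch of using the saved copy afterwards (the path below is a placeholder for wherever you saved the model):
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("local/path/to/model")   # directory written by model.save(...)
embeddings = model.encode(["This is a sentence.", "And another one."])
print(embeddings.shape)                              # e.g. (2, 768) for a BERT-base model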
There are many ways to solve this issue:
Assuming you have trained your BERT base model locally (Colab/notebook), in order to use it with the Hugging Face AutoClass, the model (along with the tokenizer, vocab.txt, configs, special tokens and TF/PyTorch weights) has to be uploaded to Hugging Face. The steps to do this are mentioned here. Once it is uploaded, a repository will be created under your username, and the model can then be accessed as follows:
from transformers import AutoTokenizer
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("<username>/<model-name>")
The second way is to use the trained model locally, and this can be done by using pipelines. The following is an example of how to use a model trained (and saved) locally for your use case (the example comes from my locally trained QA model):
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

nlp_QA = pipeline('question-answering',
                  model='./abhilash1910/distilbert-squadv1',
                  tokenizer='./abhilash1910/distilbert-squadv1')
QA_inp = {
    'question': 'What is the fund price of Huggingface in NYSE?',
    'context': 'Huggingface Co. has a total fund price of $19.6 million dollars'
}
result = nlp_QA(QA_inp)
result
The third way is to directly use Sentence Transformers from the Hugging Face models repo, as sketched below.
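A sketch of that third option, assuming the checkpoint is published under the sentence-transformers namespace on the Hub:
from sentence_transformers import SentenceTransformer

# Downloads and caches the checkpoint on first use; save it for offline deployment.
model = SentenceTransformer("sentence-transformers/bert-base-nli-mean-tokens")
model.save("bert-base-nli-mean-tokens-local")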
There are other ways to resolve this as well, but these should help. This list of pretrained models might also be useful.

How do I train a encoder-decoder model for a translation task using hugging face transformers?

I would like to train an encoder-decoder model, configured as below, for a translation task. Could someone guide me on how to set up a training pipeline for such a model? Any links or code snippets to aid understanding would be appreciated.
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
# Initializing a BERT bert-base-uncased style configuration
config_encoder = BertConfig()
config_decoder = BertConfig()
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
# Initializing a Bert2Bert model from the bert-base-uncased style configurations
model = EncoderDecoderModel(config=config)
Encoder-decoder models are used in the same way as any other model in Transformers. The model accepts batches of tokenized text as vocabulary indices (i.e., you need a tokenizer that is suitable for your sequence-to-sequence task). When you feed the model the input (input_ids) and the desired output (decoder_input_ids and labels), you get the loss value that you can optimize during training. Note that if the sentences in the batch have different lengths, you need to do masking too. This is a minimal example from the EncoderDecoderModel documentation:
from transformers import EncoderDecoderModel, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    'bert-base-uncased', 'bert-base-uncased')
input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
outputs = model(
    input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids,
    return_dict=True)
loss = outputs.loss
If you do not want to write the training loop yourself, you can use dataset processing (DataCollatorForSeq2Seq) and training (Seq2SeqTrainer) utilities from Transformers. You can follow the Seq2Seq example on GitHub.
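A rough sketch of what that could look like for the Bert2Bert setup above, assuming train_dataset already yields tokenized examples with input_ids, attention_mask and labels (the dataset name, output directory and hyperparameters here are illustrative):
from transformers import (BertTokenizer, EncoderDecoderModel, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    'bert-base-uncased', 'bert-base-uncased')
# The decoder needs explicit start/pad token ids.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Pads inputs and labels dynamically per batch (handles the masking mentioned above).
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
training_args = Seq2SeqTrainingArguments(
    output_dir='bert2bert-translation',
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,   # assumed: tokenized source/target pairs
    data_collator=data_collator,
    tokenizer=tokenizer,
)
trainer.train()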

Reusing h2o model mojo or pojo file from python

As H2O models are only reusable with the same major version of H2O they were saved with, an alternative is to save the model in MOJO/POJO format. Is there a way these saved models can be reused/loaded from Python code? Or is there any way to keep the model for further development when upgrading the H2O version?
If you want to use your model for scoring via Python, you could use either h2o.mojo_predict_pandas or h2o.mojo_predict_csv (a sketch follows). Otherwise, if you want to load a binary model that you previously saved, you will need to have compatible versions.
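For example, a minimal sketch of scoring a MOJO from Python without loading it as a binary model (the file names below are placeholders; h2o-genmodel.jar ships alongside the MOJO download):
import pandas as pd
import h2o

df = pd.read_csv('test_data.csv')                      # data to score
preds = h2o.mojo_predict_pandas(
    dataframe=df,
    mojo_zip_path='my_model.zip',                      # the exported MOJO
    genmodel_jar_path='h2o-genmodel.jar')              # runtime needed to score the MOJO
print(preds.head())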
Outside of H2O-3 you can look into pyjnius as Tom recommended: https://github.com/kivy/pyjnius
Another alternative is to use pysparkling, if you only need it for scoring:
from pysparkling.ml import H2OMOJOModel
# Load test data to predict
df = spark.read.parquet(test_data_path)
# Load mojo model
mojo = H2OMOJOModel.createFromMojo(mojo_path)
# Make predictions
predictions = mojo.transform(df)
# Show predictions with ground truth (y_true and y_pred)
predictions.select('your_target_column', 'prediction').show()

Is it possible to train XGBoost4J-Spark with tree_method='exact' ?

I intend to use a trained xgboost model with tree_method='exact' in a SparkML pipeline, so I need to use XGBoost4J-Spark; however, the documentation says "Distributed and external memory version only support approximate algorithm." (https://xgboost.readthedocs.io/en/latest//parameter.html). Is there any way to work around this?
Alternatively, I can train the model with C-based xgboost and somehow convert the trained model to an XGBoostEstimator, which is a SparkML estimator and seamless to integrate into a SparkML pipeline. Has anyone come across such a converter?
I don't mind running on a single node instead of a cluster, as I can afford to wait.
Any insights are appreciated.
So there is this way:
import ml.dmlc.xgboost4j.scala.XGBoost
import ml.dmlc.xgboost4j.scala.spark.XGBoostRegressionModel

val xgb1 = XGBoost.loadModel("xgb1")
val xgbSpark = new XGBoostRegressionModel(xgb1)
where xgb1 is the model trained with C-based xgboost. There is a problem, however: their predictions don't match. I have reported the issue on the GitHub repo: https://github.com/dmlc/xgboost/issues/3190
