BART loading from HuggingFace requires logging in - huggingface-transformers

I'm trying to use a pretrained model from HuggingFace. However, I get the following error:
OSError: bart-large is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
The code I'm using is a little dated, and I have not found a definitive solution. I'm not sure whether this is a bug or whether I really do need to log in to use this model.

The correct model identifier is facebook/bart-large, not bart-large:
from transformers import BartTokenizer, BartModel
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')
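As a quick sanity check that the download works (standard transformers usage; the sample sentence is arbitrary):
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch size, sequence length, hidden size)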


Download pre-trained sentence-transformers model locally

I am using the SentenceTransformers library (here: https://pypi.org/project/sentence-transformers/#pretrained-models) to create sentence embeddings with the pre-trained model bert-base-nli-mean-tokens. I have an application that will be deployed to a device that does not have internet access. How to save the model has already been answered here: Download pre-trained BERT model locally. But I'm stuck at loading the saved model from the locally saved path.
When I try to save the model using the above-mentioned technique, these are the output files:
('/bert-base-nli-mean-tokens/tokenizer_config.json',
'/bert-base-nli-mean-tokens/special_tokens_map.json',
'/bert-base-nli-mean-tokens/vocab.txt',
'/bert-base-nli-mean-tokens/added_tokens.json')
When I try to load it into memory using
tokenizer = AutoTokenizer.from_pretrained(to_save_path)
I'm getting
Can't load config for '/bert-base-nli-mean-tokens'. Make sure that:
- '/bert-base-nli-mean-tokens' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/bert-base-nli-mean-tokens' is the correct path to a directory containing a config.json
You can download and load the model like this:
from sentence_transformers import SentenceTransformer

modelPath = "local/path/to/model"
model = SentenceTransformer('bert-base-nli-stsb-mean-tokens')
model.save(modelPath)
model = SentenceTransformer(modelPath)
This worked for me. You can check the SBERT documentation for details of the SentenceTransformer class here: https://www.sbert.net/docs/package_reference/SentenceTransformer.html
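Once loaded from the local path, you can do a quick offline sanity check (the sample sentence is arbitrary):
embeddings = model.encode(["This is a test sentence."])
print(embeddings.shape)  # (1, embedding dimension)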
There are many ways to solve this issue:
Assuming you have trained your BERT base model locally (Colab/notebook), in order to use it with the Huggingface AutoClass, the model (along with the tokenizer, vocab.txt, configs, special tokens, and TF/PyTorch weights) has to be uploaded to Huggingface. The steps to do this are described here. Once it is uploaded, a repository will be created under your username, and the model can then be accessed as follows:
from transformers import AutoTokenizer
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("<username>/<model-name>")
The second way is to use the trained model locally, and this can be done by using pipelines. The following is an example of how to use a model trained (and saved) locally for your use case (taken from my locally trained QA model):
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

nlp_QA = pipeline(
    'question-answering',
    model='./abhilash1910/distilbert-squadv1',
    tokenizer='./abhilash1910/distilbert-squadv1'
)
QA_inp = {
    'question': 'What is the fund price of Huggingface in NYSE?',
    'context': 'Huggingface Co. has a total fund price of $19.6 million dollars'
}
result = nlp_QA(QA_inp)
result
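The pipeline returns a dict with answer, score, start, and end keys, so result['answer'] holds the extracted span.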
The third way is to use Sentence Transformers directly from the Huggingface models repo.
There are other ways to resolve this as well, but these should help. This list of pretrained models might also be useful.
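For completeness, the error in the question usually means only the tokenizer was saved, so the directory has no config.json. A minimal sketch using the plain transformers AutoClasses (the Hub id sentence-transformers/bert-base-nli-mean-tokens is an assumption based on the model name in the question):
from transformers import AutoModel, AutoTokenizer

model_name = "sentence-transformers/bert-base-nli-mean-tokens"  # assumed Hub id
save_path = "./bert-base-nli-mean-tokens"

# Saving the model (not just the tokenizer) writes config.json
# into the directory, which from_pretrained needs for local loading.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
tokenizer.save_pretrained(save_path)
model.save_pretrained(save_path)

# Later, on the offline device:
tokenizer = AutoTokenizer.from_pretrained(save_path)
model = AutoModel.from_pretrained(save_path)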

GraphQL Nexus - adding custom scalar type

GraphQL Nexus is fairly new and the documentation appears to be lacking; the examples are lacking as well. Following the examples and the docs, I am trying to add my own non-GraphQL scalar type. I created my own scalar following the example in the documentation, but when I try to use it in an object type I get the red underline. What am I doing wrong?
To resolve this issue what I did was:
1. Once you create your own scalar type, such as:
// json.ts (FILE NAME MATTERS)
export const JSONScalar = scalarType({
  name: "JSON",
  asNexusMethod: "json",
  description: "JSON scalar type",
  ...
})
2. Once I used the new type in a separate object, I had to add this above my field for it to compile (you may not have to):
// @ts-ignore
t.json("data");
3. In my makeSchema call I added the scalar code first:
const schema = makeSchema({
  types: [JSONScalar, MyObject, BlahObject],
  ...
})
The name of the file is very important; I think that is how the schema is generated and looks up the new type. I also think you have to be sure that code is compiled first in makeSchema, though I did not try switching the order, as I spent a lot of time figuring out how to make my own scalar type work.
This may be self-explanatory for more seasoned Nexus developers, but I am a novice, so these steps escaped me.
Happy coding!
You can also do this:
t.field("data", {type: "JSON"});
That works perfectly fine with TypeScript.
And I wish they had better examples and documentation; their examples only cover the most trivial use cases. I'm getting very heavily into GraphQL, Prisma, Nexus, and related tools for a huge data set, so I'm literally hitting every limitation and gap in documentation that exists.
But alas we can pool our knowledge here.

How to add custom Models Rocket Chat

I am trying to add new models to Rocket.Chat, specifically to store location information. I have extended the models._Base model as ModelLocations and assigned it to RocketChat.models.Locations as follows:
RocketChat.models.Locations = new ModelLocations('location', true);
But when I try to use RocketChat.models.Locations it's always undefined. I looked at the migrations folder and thought I needed to create a migration file, but I couldn't find documentation for that. Can anybody point me to how to add a new model?
I forgot to add the new model's file path in Rocket.Chat/packages/rocketchat-lib/package.js. Once added, it works.

Unknown Class – yii\base\UnknownClassException Yii2

I am a beginner in Yii2. I am studying Yii2 from this Yii2 documentation link. When I run this file, it gives me the following error.
I have created a CountryController within C:\xampp\htdocs\yii\frontend\controllers\CountryController.php.
EDIT:
After modifying the namespace in the controller, I am now getting an issue in the view.
The error message is basically saying that it cannot find your controller. So what you need to do is ensure that your file is stored in the place your namespace implies. My suspicion is that they don't align. So...
You are using the namespace app\controllers but you store your file in yii\frontend\controllers, and I suspect yii\frontend\controllers does not correspond to app\controllers. All you should have to do is use the namespace frontend\controllers, but it depends on which aliases you are using.

Django REST framework, get object from URL

I'm wondering if there is a clean way to retrieve an object from its URL with Django REST framework. Surely there should be, as this seems to be what happens when using HyperlinkedRelatedField.
For instance, I have the URL /api/comment/26 as a string. From my view, how can I get the comment instance with pk=26?
Of course I could redo the parsing work on the string, but there must be a better way?
Thanks a lot.
EDIT:
This is how I solved it in the end:
resolve('/api/comment/26/').func.cls.model will return my model Comment.
resolve('/api/category/1/').kwargs['pk'] will return the pk.
Which gives you:
from django.core.urlresolvers import resolve  # in Django 2.0+: from django.urls import resolve

resolved_func, unused_args, resolved_kwargs = resolve('/api/category/1/')
resolved_func.cls.model.objects.get(pk=resolved_kwargs['pk'])
I suspect your best bet would be to manually keep a map of models to URL patterns, not too dissimilar from a URLConf.
If that doesn't rock your boat, you could pass the path into resolve.
This will give you a ResolverMatch, on which you'll find the func that was returned by the as_view call when you set up your URLs.
The __name__ attribute of your view function will be that of your original view class. Do something like globals()[class_name] to get the class itself.
From there, access the model attribute.
I hope that helps. As I say, you might just want to map models to URLs yourself.
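A minimal sketch of that resolve-based approach (names are illustrative; it assumes the view class was imported into this module and defines a model attribute, as older DRF generic views did):
from django.core.urlresolvers import resolve  # django.urls in Django 2.0+

def model_for_path(path):
    match = resolve(path)                 # ResolverMatch for the path
    view_func = match.func                # function returned by as_view()
    class_name = view_func.__name__       # as_view() preserves the class's name
    view_class = globals()[class_name]    # assumes the class is importable here
    return view_class.model               # assumes the view defines a model attribute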
With class-based views, something like the following seems to be needed:
resolve(url).func.cls.serializer_class.Meta.model.objects.get(**resolve(url).kwargs)
The solution above did not work for me, but the following did:
from django.core.urlresolvers import resolve

resolved_func, unused_args, resolved_kwargs = resolve('/api/category/1/')
resolved_func.cls().get_queryset().get(id=resolved_kwargs['pk'])
Additionally, this solution uses the built-in queryset of your view, which might have annotations or important filters applied.
Using HyperlinkedModelSerializer I actually needed to do this with a full URL. For this to work, you need to extract the path first, resulting in:
import urllib.parse
from django.core.urlresolvers import resolve

def obj_from_url(url):
    # Strip the scheme and host so resolve() receives just the path
    path = urllib.parse.urlparse(url).path
    resolved_func, unused_args, resolved_kwargs = resolve(path)
    return resolved_func.cls().get_queryset().get(id=resolved_kwargs['pk'])
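For example (the URL is illustrative):
comment = obj_from_url('http://example.com/api/comment/26/')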
