Parsing LUIS prediction result entities JSON - azure-language-understanding

Is there a library or set of helper functions to parse the JSON objects returned from LUIS predictions in C#? I found a blog that does some custom parsing of date and money entities, but in the context of the Bot Framework. It seems to be outdated
http://martink.me/articles/bot-framework-v4-with-luis

Are you looking for more than just the LuisRecognizer? The official class documentation for .NET is here. I use the Node.js version for entity and top intent extraction, though I haven't used it for complex entity extraction like datetime and money.
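For reference, a minimal Node.js/TypeScript sketch of that kind of extraction, assuming botbuilder-ai's LuisRecognizer and placeholder app credentials (the app ID, key, endpoint, and function name below are hypothetical); the exact shape of result.entities depends on your prebuilt entity types and schema version:

import { TurnContext } from 'botbuilder';
import { LuisRecognizer } from 'botbuilder-ai';

// Placeholder credentials - substitute your own LUIS app values.
const recognizer = new LuisRecognizer({
  applicationId: '<app-id>',
  endpointKey: '<endpoint-key>',
  endpoint: 'https://<region>.api.cognitive.microsoft.com',
});

async function logPrediction(context: TurnContext): Promise<void> {
  // RecognizerResult carries an `intents` map and a loosely typed `entities` bag.
  const result = await recognizer.recognize(context);
  console.log(`Top intent: ${LuisRecognizer.topIntent(result)}`);

  // `$instance` holds per-entity metadata (start index, raw text, score);
  // the other keys hold resolved values whose shape depends on the entity type,
  // so datetime and money entities still need your own interpretation (e.g. TIMEX).
  const { $instance, ...resolved } = result.entities ?? {};
  for (const [name, values] of Object.entries(resolved)) {
    console.log(`${name}: ${JSON.stringify(values)}`);
  }
}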

Related

Entities or Models in NestJs code first GraphQl

I am new to NestJS and GraphQL, and I am learning by going over some tutorials. There appears to be an inconsistency in the usage of the terms model and entity. The NestJS schematics resource generator for GraphQL code first produces entities, yet the examples shown on their website use models.
produces entities:
nx generate @nestjs/schematics:resource generated --language=ts --type=graphql-code-first
The docs use models, with no mention of entities in the code-first approach:
https://docs.nestjs.com/graphql/resolvers
Which terminology is most appropriate?
Thank You,
Michael
Both are generally correct. It comes down to naming preferences.
I view entities as database entities, or database table mappings. They map your database data to a class representation that your code will understand. Models can also be used for this, which I believe is the term that Sequelize and Mongoose prefer.
Models, as described in the docs you linked, are generally your DTOs, your schema objects that you expect the API to accept and respond with.
You'll notice that the generator also generates two @InputType() files, which will be more closely tied to your incoming DTOs, while the entity.ts will be closer to your response DTO.
So, both are correct, and it comes down to naming preference.
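As a rough code-first sketch of that split (the class names here are hypothetical, not taken from the generator output), the "entity"/"model" is an @ObjectType() describing what the API responds with, while the @InputType() classes describe what it accepts:

import { Field, ID, InputType, ObjectType } from '@nestjs/graphql';

// Call this the Article "entity" or the Article "model" - either way it is the
// @ObjectType() that shapes what the API responds with.
@ObjectType()
export class Article {
  @Field(() => ID)
  id: string;

  @Field()
  title: string;
}

// The generated @InputType() classes shape what the API accepts (the incoming DTO).
@InputType()
export class CreateArticleInput {
  @Field()
  title: string;
}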

Eloquent: API Resources vs Fractal

Quick question, what is the difference between Eloquent: API Resources and Fractal?
To me, they look like the same thing?
Both are used to transform API JSON responses to standardise the response structure.
However, API Resources are built into Laravel and are very easy to use. Fractal was the preferred way to go before API Resources were built into Laravel. Fractal has some methods that make it a little more extensive compared to API Resources.
But if you consider the core functionality, both are the same with different syntactic sugar.
Most of the things that were in Fractal you can now do natively in Laravel. Plus, API Resources eliminate the need for any extra installation and setup, and the nomenclature is very easy to start with.
Both were created for the same job, but their solutions differ in many ways.
Relationships:
In Fractal you can easily add related models to the response, and you can control when the related models should be included in the response (default includes vs. available includes).
For example, your client can use ?include=rate to get the rate model from an article only when needed. Also, Fractal will eager load a relationship for you if you forgot to load it.
In API Resources you have no such control over relationships; you must decide up front whether to include a relationship, and if you forget to eager load its data, loading the related models will cost you too many queries (the N+1 problem).
Serializer:
In basic usage of API Resources you have no control over how data maps to the final response. For example, if you want your responses to follow the JSON:API specification, you have to manage all of that work yourself, whereas Fractal gives it to you out of the box.
As a conclusion, I recommend using Fractal in this case (or the Dingo API package, but consider Dingo's complexity!).

With the Simple API in Stanford CoreNLP, is there a way to get multi-token entity mentions?

This question is very similar to my question; however, due to the way SO works, I think it is better to ask a new question rather than just continue a thread.
CoreNLP has the Simple API which allows for quicker access to various components of the NLP pipeline. The way to get named entities appears to be:
Form a document annotation from the text
Get the sentences from the document object
Use nerTags() on each sentence object to get the token-by-token NER labeling.
Via other mechanisms, as discussed in the question linked above, one can retrieve full multi-token entity mentions such as George Washington, which is an entity mention composed of two tokens. Is there a way to get these multi-token entity mentions using the Simple API?
Yes, though it gives you less information than the full API, returning only the String spans of the mention. See Sentence#mentions(String) and Sentence#mentions().
If you want to get more information about a mention, you'll have to either use the regular API, or re-implement the logic in these functions. You can also try mucking around in the raw Proto, which will certainly have all the information you could possibly want, but in a less-than-pleasant proto interface. The proto definition is here.

Documenting fields in Django Rest Framework

We're providing a public API that we use internally but also provide to our SaaS users as a feature. I have a Model, a ModelSerializer and a ModelViewSet. Everything is functional but it's blurting out the Model help_text for the description in the API documentation.
While this works for some fields, we would like to be a lot more explicit for API users, providing examples, not just explanations or guidance.
I realise I can redefine each field in a Serializer (with the same name) and just add a new help_text argument, but this is pretty boring work.
Can I provide (eg) a dictionary of field names and their text?
If not, how can I intercede in the documentation process to make something like that work?
Also, related, is there a way to provide a full example for each Viewset endpoint? Eg showing what is submitted and returned, like a lot of popular APIs do (Stripe as an example). Or am I asking too much from DRF's documentation generation? Should I handle this all externally?
To override help_text values coming from the models, you'll need to use your own schema generator subclass and override get_path_fields. There you'd be able to prioritize a mapping on the viewset (as you envision) over the model fields help_text values.
On adjusting the example generation: you could define a JSON language which just deals with raw JSON and illustrate the request side of things pretty easily. However, illustrating responses is difficult without really getting deep into the plumbing, as the default generated schema does not contain response structure data.

Microsoft CRM 4.0 and magic strings

Is there a method/tool/technique for developing with Microsoft CRM 4.0 that keeps the developer from having to use strings for entity names and attributes?
We've built our own model classes and store entity names, attribute names, and picklist values there. It's just a bunch of enums and constant strings, but at least it uses centralized constants, so we know when something breaks.
We use our own mapper which translates objects into dynamic entities. This is all configured by attributes on the classes or types. You can find a project which uses a similar approach here: http://xrm.codeplex.com
On the other hand, you have the possibility to create early bound types. See Code Generation Using the CrmSvcUtil Tool.
