If you have used the Language Understanding Service (LUIS), which is part of Microsoft's Cognitive Services suite, you have probably reached the point where you needed to improve how well your models predict intents.
LUIS allows you to train models based on sample utterances, which you supply either interactively or in batches. In addition, I would like to have chat logs showing the utterances that the model classified incorrectly, so I can use them as the basis for new training datasets.
I imagine such a feature will be released in the future, but in the meantime, does anyone have a workaround for this scenario?
I think a good way to implement this (by hand) would be to route all the messages that were classified incorrectly to some kind of storage or log, so that you can later use them to retrain your LUIS model. In fact, you could use the LUIS API to make those calls dynamically with the data in your log.
So, the flow would be something like this:
1) The user sends a message to the bot.
2) The bot logic tries to match the message's intent using the LUIS model.
3) No intent is found, or the confidence score associated with the result is very low.
4) Grab that message and store it somewhere, anywhere from a simple .txt file in Azure Blob Storage to a database (Table Storage, DocumentDB, or SQL Server).
5) Write a simple program that, for each line in your log, lets you choose an intent and then calls the LUIS API to retrain the model (steps 3 and 4 are sketched below).
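If it helps, here is a minimal Node.js sketch of steps 3 and 4. The 0.5 threshold, the file name, the logIfUnrecognized helper, and the v2-style result shape (topScoringIntent) are all my own assumptions; the file append could just as easily be swapped for an Azure Blob, Table Storage, or SQL write.

```javascript
// Sketch only: log utterances that LUIS could not classify confidently (steps 3-4 above).
// The threshold, file name, and result shape (v2 "topScoringIntent") are assumptions.
const fs = require('fs');

const CONFIDENCE_THRESHOLD = 0.5;                 // tune to taste
const LOG_FILE = 'unrecognized-utterances.jsonl'; // swap for Blob/Table/SQL as needed

function logIfUnrecognized(utterance, luisResult) {
  const top = luisResult.topScoringIntent;        // e.g. { intent: 'BookFlight', score: 0.92 }
  if (!top || top.intent === 'None' || top.score < CONFIDENCE_THRESHOLD) {
    const entry = {
      utterance,
      intent: top ? top.intent : null,
      score: top ? top.score : 0,
      at: new Date().toISOString()
    };
    fs.appendFileSync(LOG_FILE, JSON.stringify(entry) + '\n');
  }
}

module.exports = { logIfUnrecognized };
```

Step 5 can then be a small console or web tool that reads this file, asks you which intent each utterance belongs to, and posts the labeled examples back through the LUIS authoring API.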
So, I found the closest thing to what I had in mind when I asked the question.
On the "My Apps" page on luis.ai there is an option to download the chat logs, which show the entire set of interactions between users and the bot. That is a good starting point for picking out utterances whose intents were classified incorrectly.
The only caveat is that the chat log is currently exported as .csv, which isn't very readable; hopefully LUIS will support JSON-formatted logs soon. In the meantime, a small script can sift through the export for low-scoring rows (sketched below).
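For what it's worth, here is a rough Node.js sketch of that sifting. The column names (Query, Intent, Score) and the naive comma split are assumptions; check the header row of your own export and use a proper CSV parser if your utterances contain commas.

```javascript
// Rough sketch: find low-confidence rows in a LUIS chat-log .csv export.
// Column names and the naive split are assumptions -- verify against your export's header.
const fs = require('fs');

const THRESHOLD = 0.5;
const lines = fs.readFileSync('ChatLog.csv', 'utf8').trim().split(/\r?\n/);
const header = lines[0].split(',');
const col = name => header.indexOf(name);

for (const line of lines.slice(1)) {
  const cells = line.split(',');                 // use a real CSV parser for quoted fields
  const score = parseFloat(cells[col('Score')]);
  if (!Number.isNaN(score) && score < THRESHOLD) {
    console.log(`${cells[col('Query')]} -> ${cells[col('Intent')]} (${score})`);
  }
}
```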
Related
I'm building chatbots with the Microsoft Bot Framework (and the Composer). To help troubleshoot problems with my bot, or identify issues, it would be helpful if I could see detailed information on LUIS's classification of user intents. I have used other bot frameworks that have a way to see, for example, intent classification confidence. This information would be extremely useful to identify times when the bot is more likely to have screwed up in its responses.
You can use the LUIS API to get predictions from your LUIS app rather than relying on Composer's default mechanism for getting predictions from LUIS.
1) Make a LUIS API call with your query.
2) LUIS returns the predictions.
3) Store the LUIS response in Application Insights (or whatever logging system you are using).
4) Periodically review the logs, which will give you insight into the LUIS prediction for each query (a sketch of steps 1-3 follows).
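As a rough sketch (not the exact Composer wiring), the call-and-log part could look like this in Node.js. It uses the LUIS v3 prediction endpoint and the applicationinsights package; the environment variable names are placeholders, and global fetch assumes Node 18+.

```javascript
// Sketch: query the LUIS v3 prediction endpoint directly and log the full result to
// Application Insights so predictions can be reviewed later. Config values are placeholders.
const appInsights = require('applicationinsights');
appInsights.setup(process.env.APPINSIGHTS_CONNECTION_STRING).start();

const LUIS_HOST = process.env.LUIS_HOST;      // e.g. westus.api.cognitive.microsoft.com
const LUIS_APP_ID = process.env.LUIS_APP_ID;
const LUIS_KEY = process.env.LUIS_KEY;

async function predictAndLog(query) {
  const url = `https://${LUIS_HOST}/luis/prediction/v3.0/apps/${LUIS_APP_ID}` +
    '/slots/production/predict?verbose=true&show-all-intents=true' +
    `&subscription-key=${LUIS_KEY}&query=${encodeURIComponent(query)}`;

  const result = await (await fetch(url)).json();     // Node 18+ global fetch

  appInsights.defaultClient.trackEvent({
    name: 'LuisPrediction',
    properties: {
      query,
      topIntent: result.prediction && result.prediction.topIntent,
      prediction: JSON.stringify(result.prediction)    // keep full scores for later analysis
    }
  });

  return result;
}

module.exports = { predictAndLog };
```

In the Application Insights portal you can then query these custom events to spot the utterances that came back with low confidence.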
I've created a bot after publishing an Azure QnA service knowledge base. I've added metadata to each question-answer pair to denote the source of the information. Now I want to change the bot code to limit the knowledge-base search using a metadata filter. Basically, the bot would initially prompt users to input a source (e.g. HR, Finance, Legal) and use that input to search only through the question-answer pairs tagged with it.
The bot source code I'm using:
https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/javascript_nodejs/48.customQABot-all-features
How can I change this code to return answers linked to specified metadata/user input dynamically?
Please note that the README.md does explain how to filter answers by passing metadata, but it uses a static value ({ key: 'Language', value: 'Javascript' }).
I want to pass user input from CustomQABot.js to rootDialog.js so that it can be used in rootDialog.js to filter answers.
Any input would be greatly appreciated. Thanks.
PS: I don't know much about Node.js, so it's hard for me to follow the program flow.
Here's a quick summary of how Microsoft Bot Framework bots work (the official documentation covers this in much more depth).
Essentially, the bot invokes dialogs based on the user's input. Each dialog contains some form of conversation on a specific topic. Once a dialog is complete it exits and the user can ask for a new dialog, or just leave.
For example, a travel booking bot may have multiple dialogs, one for each activity. They might have one for booking airlines, one for cruise lines, and one for long distance buses, plus another for modifying or cancelling tickets.
In the example bot you have, there is just one dialog, called rootDialog. You could start by modifying this dialog to ask the user which filter they wish to use, and then use that answer for the rest of the dialog. The user would be asked this question every time the dialog starts.
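As a rough sketch of that approach (the botbuilder-dialogs classes are standard, but the 'source' metadata key and the qnaService.answer helper are placeholders for however the sample actually invokes the knowledge base), the modified rootDialog could look something like this:

```javascript
// Hedged sketch: prompt for the source inside rootDialog.js and build the metadata filter
// from the user's answer instead of the README's static value.
const { ComponentDialog, WaterfallDialog, TextPrompt } = require('botbuilder-dialogs');

const SOURCE_PROMPT = 'sourcePrompt';
const MAIN_WATERFALL = 'mainWaterfall';

class RootDialog extends ComponentDialog {
    constructor(qnaService) {
        super('rootDialog');
        this.qnaService = qnaService; // however the sample already constructs its QnA client/dialog

        this.addDialog(new TextPrompt(SOURCE_PROMPT));
        this.addDialog(new WaterfallDialog(MAIN_WATERFALL, [
            this.askForSource.bind(this),
            this.answerWithFilter.bind(this)
        ]));
        this.initialDialogId = MAIN_WATERFALL;
    }

    async askForSource(step) {
        return step.prompt(SOURCE_PROMPT, 'Which area is your question about? (HR, Finance, Legal)');
    }

    async answerWithFilter(step) {
        // 'source' must match the metadata key attached to the QnA pairs in the knowledge base.
        const filters = [{ key: 'source', value: step.result.trim() }];

        // Pass `filters` wherever the sample's README currently supplies the static
        // { key: 'Language', value: 'Javascript' } filter.
        await this.qnaService.answer(step.context, { filters }); // hypothetical helper
        return step.endDialog();
    }
}

module.exports.RootDialog = RootDialog;
```

The key point is simply that step.result from the prompt, rather than a hard-coded value, becomes the metadata filter you hand to the knowledge-base call.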
I'd highly recommend you read through some of the documentation and play with some of the other samples to understand how bots work first.
I am making an application that shows real-time status for a Valorant game: players alive, the type of weapon each player has, time remaining, etc.
Is it possible to use Riot Valorant API to do this for live matches or for previously played matches?
As far as I know, you can't. But I think you should try Riot Games' official production API, not the development API.
Let me know if you find anything relevant.
(This is adding onto Sanskar's answer, which I cannot comment on as I lack the required 'reputation')
I'm aware that this is an old question, but for anyone who happens to stumble upon it: there is no way to obtain real-time in-game events. There is, however, a way to retrieve certain data from a match, only not in an official way, and it does go against Riot Games' TOS on third-party software. That said, I wouldn't worry about this too much as long as you do not ruin the competitive integrity of the game by giving yourself an in-game advantage over other players. I personally have been using this for over a year now and have not received any form of punishment for doing so.
Anyhow, back to the actual question of this thread: check out this document of API endpoints that were scraped by monitoring the HTTP traffic of the Riot Client: https://github.com/techchrism/valorant-api-docs/tree/trunk/docs/ You'll need to obtain certain authorization tokens for the Valorant account through whatever methods are available to you (I pray that they are lawful :) ), which depends heavily on the type of endpoint. There are already wrappers for these endpoints made by other users on GitHub, and you can always ask for help in the small community of developers using them, which is linked in the README of the GitHub page above.
REMEMBER NOT TO USE THIS TO DO ANYTHING THAT WOULD CREATE AN UNFAIR ADVANTAGE, OR ANYTHING ELSE THAT A RIOT EMPLOYEE WOULD NOT APPROVE OF :)
I have an Azure subscription and I downloaded a chat bot with my AppId and password. On luis.ai I trained a new model and exported it into the downloaded Azure project (the flight-booking sample). I replaced their cognitive model with my model from luis.ai, but after that the Azure project still works with the old data. I don't understand why, because their model has been removed from my PC. What should I do to make it work with my own model? Thanks.
Your code is based on the core-bot sample. First of all, make sure that your LUIS configuration is set up correctly in your .env file (or in App Settings if running from Azure). The recognizer is created in index.js and passed to MainDialog.js. From the core-bot sample code, I'm actually not seeing where it imports the local model file; I think that is maybe just there to give you a model to import into your own LUIS app. If you have the proper LUIS keys and app ID, it should respond with whatever you have in there. My guess is that you replaced the FlightBooking.json LUIS model file but didn't actually point the bot to your LUIS app with the new intents.
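For reference, the core-bot sample reads its LUIS settings from the .env file with keys roughly like these (names can vary slightly between sample versions); replacing FlightBooking.json alone does nothing until these point at your own LUIS app:

```
MicrosoftAppId=<your bot registration app id>
MicrosoftAppPassword=<your bot registration password>

LuisAppId=<the App ID of YOUR LUIS app (luis.ai -> Manage -> Settings)>
LuisAPIKey=<your LUIS prediction/authoring key>
LuisAPIHostName=<your LUIS endpoint host, e.g. westus.api.cognitive.microsoft.com>
```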
I would suggest, though, that this isn't the best sample to use if you are just trying to tweak it. There are a lot of things here that are set up specifically for booking flights and don't really make sense if that's not what your bot is doing. Personally, I like the Dispatch Bot sample better as a starting point (even if you are not using the Dispatch CLI tool), though it has the intent actions inside the bot file instead of in separate dialogs. Maybe that will give you a better starting point, though?
I have a bot that uses .NET, MS Bot Framework and LUIS.ai for its smarts.
All's fine, except that I need to provide a way for non-technical users to train the bot and teach it new things, i.e. new intents in LUIS.ai.
In other words, suppose that right now the bot can answer messages like "hey bot where can i get coffee" and "where can I buy some clothes" with simple phrases containing directions. Non-technical users need to be able to train it to answer "where can I get some food" too.
Here's what I have considered:
Continuing to use LUIS.ai. Doesn't work because LUIS.ai doesn't have an API. The best it has is the GUI to refine existing intents, and the upload app/phrase list feature. The process can be semi-automated if the JSON file with the app can be generated by some application that I write; however, there still needs to be backend code that handles the new intents, and that has to be implemented by a C# coder.
Could it work if I switch from C# to Node.js? Then theoretically I would be able to auto-generate code files / intent handlers.
Azure Bot Service. Seems it doesn't have a non-technical interface and is just a browser-based IDE.
Ditching Bot Framework entirely and using third-party tools such as motion.ai. Doesn't work because there's no "intellect" like the one provided by LUIS.ai.
Using FormFlow, which is part of Bot Framework. If my GUI bot-builder application can generate JSON files, Bot Framework can use them to build a bot automatically. Doesn't work because there's no intellect as in LUIS.ai.
Keeping Bot Framework but ditching LUIS and building a separate web service based on a Node.js language-processing library for determining intents. May or may not work, may be less smart than LUIS, and could be overkill.
Overriding the method in LuisDialog that selects the intent from the LuisResponse, in order to use my own way of deciding the intent (but how?).
At this point I'm out of ideas and any pointers will be greatly appreciated.
First of all, LUIS.ai does provide an API that you can use to automate the training. Moreover, there is a LUIS trainer written entirely in Python against that API which does exactly that.
The easiest option is probably the one you describe in #1: you can automate the training (as explained above), but you will still have to deploy a new version of the bot whenever new intents are introduced. Letting users train an existing model with new utterances is one thing; letting them create the model is a completely different thing :)
It will be hard to avoid writing the backend code (I wouldn't try to automate that at all). A sketch of the training automation is below.
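For the automation in #1, a hedged Node.js sketch against the LUIS authoring API (v2.0 routes as I recall them; verify the region, version, and route names against your own resource) might look like this. Keys, app ID, and version are placeholders, and global fetch assumes Node 18+.

```javascript
// Hedged sketch: add labeled utterances to an existing LUIS app and kick off training.
// v2.0 authoring routes shown; adjust region/version/routes to your own resource.
const AUTHORING_HOST = 'https://westus.api.cognitive.microsoft.com';
const APP_ID = process.env.LUIS_APP_ID;
const VERSION_ID = '0.1';
const HEADERS = {
  'Ocp-Apim-Subscription-Key': process.env.LUIS_AUTHORING_KEY,
  'Content-Type': 'application/json'
};

// Add a batch of labeled utterances to existing intents.
async function addExamples(examples) {
  // examples: [{ text: 'where can I get some food', intentName: 'FindFood', entityLabels: [] }]
  const url = `${AUTHORING_HOST}/luis/api/v2.0/apps/${APP_ID}/versions/${VERSION_ID}/examples`;
  const res = await fetch(url, { method: 'POST', headers: HEADERS, body: JSON.stringify(examples) });
  return res.json();
}

// Kick off a training run (poll GET on the same route for status).
async function train() {
  const url = `${AUTHORING_HOST}/luis/api/v2.0/apps/${APP_ID}/versions/${VERSION_ID}/train`;
  const res = await fetch(url, { method: 'POST', headers: HEADERS });
  return res.json();
}

module.exports = { addExamples, train };
```

You would still need to publish the trained version afterwards (there is a separate publish route), and, as noted, write any new backend handling yourself.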
Here is a potential idea (not sure if it will work, though). You would need two LUIS models.
One is your current model, which users will be able to train with new utterances.
The second model is intended exclusively to be "expanded" with new intents by users.
If you separate things that way, you might be able to look into a "plugin" architecture for the second LUIS model, so that your app somehow dynamically loads an assembly where the second model lives.
Once you have that in place, you can focus on writing the backend code for your second LUIS model without having to worry about the bot/first model. You should be able to replace the assembly for the second LUIS model, and the bot should be able to detect that there is a new version of that assembly and swap out the current one in the app domain.
As I said, this is just an idea I'm brainstorming with you. It sounds a bit complex, and it doesn't address all your concerns, since you will still need to write code (which, in any case, you would eventually have to do).
I am working through a challenge project (training) to automate the creation of chat bots specifically targeted at a LUIS.ai model, using plain old JavaScript and web-service calls to LUIS.
I looked at the Bot Framework and it's just too cumbersome to automate (I want X number of customers to create a chat bot without coding). I also want to add my own type of "Cards" (HTML widgets) that do more and can be easily configured by someone with zero coding skills.
Calls to the LUIS.ai/Cognitive Services API are made in my code-behind, and the JSON response is returned to my own rules engine. At the following URL, click the LUIS API link to open the LUIS API console, where you can test and train your model. All the endpoints you will need are there:
https://dev.projectoxford.ai/docs/services/
Based on the various endpoints on that page, you can use WebClient in ASP.NET to pull back the response. In my testing I have buttons on a page that push utterances up to the model, pull back entities, create hierarchical entities, and so on. Have a look at http://onlinebotbuilder.com to see how a "product" intent dynamically inserted a shopping cart.
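For anyone wiring up similar buttons outside ASP.NET, here is a hedged Node.js sketch of the same kinds of calls against the newer v2.0 authoring routes (the Project Oxford link above predates them, so double-check the exact routes in the API console). Keys, app ID, and version are placeholders, and global fetch assumes Node 18+.

```javascript
// Hedged sketch: create an intent and a hierarchical entity via the LUIS authoring API.
// Route names are from the v2.0 authoring API as I recall them -- verify in the API console.
const HOST = 'https://westus.api.cognitive.microsoft.com';
const BASE = `${HOST}/luis/api/v2.0/apps/${process.env.LUIS_APP_ID}/versions/0.1`;
const HEADERS = {
  'Ocp-Apim-Subscription-Key': process.env.LUIS_AUTHORING_KEY,
  'Content-Type': 'application/json'
};

// Create a new intent, e.g. 'Product'.
async function createIntent(name) {
  const res = await fetch(`${BASE}/intents`, {
    method: 'POST', headers: HEADERS, body: JSON.stringify({ name })
  });
  return res.json(); // returns the new intent's id
}

// Create a hierarchical entity with child entities, e.g. 'Product' with ['Book', 'Shirt'].
async function createHierarchicalEntity(name, children) {
  const res = await fetch(`${BASE}/hierarchicalentities`, {
    method: 'POST', headers: HEADERS, body: JSON.stringify({ name, children })
  });
  return res.json();
}

module.exports = { createIntent, createHierarchicalEntity };
```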
When your tool is built and utterances start to arrive, LUIS.ai will store them and, via the Suggest tab (at luis.ai), ask you for guidance. Unfortunately, I don't think you could hand that control over to your customers unless they are experts in your domain (i.e. they understand which utterance belongs to which intent). You don't need to take your app down; just train it periodically to improve the model based on your customers' input, and soon enough you will have the model working well for your intents.
Hope that helps.