How to create dynamic dialogs from a REST API in Bot Framework

I want to build a bot that gathers its questions and answers from a REST API.
Bot: How are you?
User: I'm fine, how are you?
Bot: I'm fine, also.
So the bot's questions (even the first one) are gathered via a REST API from an external service. The user's answer is also sent to this service, and the bot's reply "I'm fine, also" is the result of a REST request.
I first implemented it without using the dialog feature at all. It works well, but without a dialog there is no proper way to end the conversation.
Looking around for examples, I could only find ones using WaterfallDialog. WaterfallDialogs are built from a fixed sequence of steps, and I don't know the number of steps in advance.
Is it possible to build such a dialog, or is Bot Framework simply not designed for this?

In Bot Framework V4, the dialog/conversation flow you pick for conversations is optional, and you don't need to use it (https://learn.microsoft.com/en-us/azure/bot-service/bot-service-design-conversation-flow?view=azure-bot-service-4.0). All you NEED to do is implement bot state (https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-howto-v4-state?view=azure-bot-service-4.0), so you can store either conversation data or user data, depending on what state you need.
I implemented conversational flow using a single activity handler and an FSM (https://en.wikipedia.org/wiki/Finite-state_machine). I use recognizers for common dialogs (help), but for the most part my transition handler does regex comparisons to extract keywords and then moves to the next state. So, if you can graph out your FSM and list out all your dialog options, you can build a dialog that appears conversational and natural.
I can't share my actual code at this time, but hopefully you don't need it; a rough sketch of the shape is below.
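As an illustration only (a minimal sketch, not the code described above): a single activity handler keeps the current FSM state in conversation state and asks an external service for the next prompt on each turn. fetchNextQuestion and the shape of its response are placeholders for whatever REST service you call.

```javascript
// Minimal sketch: one activity handler, with the FSM state kept in conversation state.
// fetchNextQuestion and reply.{ text, nextStep, done } are assumed placeholders.
const { ActivityHandler, ConversationState, MemoryStorage } = require('botbuilder');

const conversationState = new ConversationState(new MemoryStorage());
const dialogStateAccessor = conversationState.createProperty('dialogState');

class RestDrivenBot extends ActivityHandler {
    constructor() {
        super();
        this.onMessage(async (context, next) => {
            const state = await dialogStateAccessor.get(context, { step: 'start' });

            // Send the user's answer to the external service and get the next prompt.
            const reply = await fetchNextQuestion(state.step, context.activity.text);

            state.step = reply.nextStep;                 // FSM transition decided by the service
            await context.sendActivity(reply.text);

            if (reply.done) {
                await dialogStateAccessor.delete(context);   // conversation is finished
            }
            await next();
        });
    }

    async run(context) {
        await super.run(context);
        await conversationState.saveChanges(context);    // persist state after every turn
    }
}
```

Because the number of turns is driven entirely by the service's nextStep/done fields, there is no fixed step count as there would be with a WaterfallDialog.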

Related

Azure QnA Bot Knowledge base filter using metadata

I've created a bot after publishing the Azure QnA service knowledge base. I've added metadata to each question-answer pair to denote the source of the information. Now I want to change the bot code to limit the knowledge base search using a metadata filter. Basically, the bot would initially prompt users to input a source (e.g. HR, Finance, Legal) and use that input to search only through question-answer pairs tagged with it.
The bot source code being used:
https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/javascript_nodejs/48.customQABot-all-features
How can I change this code to return answers linked to specified metadata/user input dynamically?
Please note that README.md does explain how to filter answers by passing metadata, but it uses a static value ({ key: 'Language', value: 'Javascript' }).
I want to pass the user's input from CustomQABot.js to rootDialog.js so that it can be used in rootDialog.js to filter answers.
Any input would be greatly appreciated. Thanks.
PS: I don't know much about Node.js, so it's hard for me to follow the program flow.
Here's some documentation that explains how MSFT Bot Framework bots work.
Essentially, the bot invokes dialogs based on the user's input. Each dialog contains some form of conversation on a specific topic. Once a dialog is complete it exits and the user can ask for a new dialog, or just leave.
For example, a travel booking bot may have multiple dialogs, one for each activity. They might have one for booking airlines, one for cruise lines, and one for long distance buses, plus another for modifying or cancelling tickets.
In the example bot you have, there is just one dialog called rootDialog. You could start by trying to modify this dialog to ask the user what filter they wish to use and use that for the rest of the dialog. The user would be asked this question every time the dialog starts.
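As a rough sketch of that approach (the exact option name and wiring depend on the sample and the botbuilder-ai version, so treat this as the shape rather than drop-in code), the dialog could prompt for the source and pass it along as a metadata filter when querying:

```javascript
// Sketch only: prompt for a source first, then use it as a metadata filter.
// 'strictFilters' and the metadata key 'source' are assumptions; check how the
// 48.customQABot-all-features sample builds its QnA options for the exact shape.
const { ComponentDialog, WaterfallDialog, TextPrompt } = require('botbuilder-dialogs');

class RootDialog extends ComponentDialog {
    constructor(qnaService) {
        super('rootDialog');
        this.qnaService = qnaService;
        this.addDialog(new TextPrompt('sourcePrompt'));
        this.addDialog(new WaterfallDialog('mainFlow', [
            async step => step.prompt('sourcePrompt', 'Which source? (HR, Finance, Legal)'),
            async step => {
                const source = step.result;                            // e.g. "HR"
                const results = await this.qnaService.getAnswers(step.context, {
                    strictFilters: [{ name: 'source', value: source }]    // metadata filter
                });
                const answer = results.length ? results[0].answer : 'No answer found.';
                await step.context.sendActivity(answer);
                return step.endDialog();
            }
        ]));
        this.initialDialogId = 'mainFlow';
    }
}
```

Once the source has been captured you could also store it in user state so the question is only asked once per user instead of on every pass through the dialog.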
I'd highly recommend you read through some of the documentation and play with some of the other samples to understand how bots work first.

Chatbot with map/location picker

I am planning to build a chatbot using either Azure Bot Framework Composer, AWS Lex, or Google Dialogflow, but none of them seem to offer an easy way to include a map/location picker. It should be a straightforward interaction with the user: after a button click, the user can pick a precise location from a map.
Has anyone done something like that?
Many thanks.
I don't believe that any pure NLU framework is going to offer you that level of functionality.
The true purpose behind the likes of Amazon Lex and Google Dialogflow is to perform intent detection and classification. It is up to the developer to build out the desired functionality associated with each of the configured intents.
What it may practically come down to is presenting the chat user with a map/location popup within the chat widget. The user could then select their current/desired location which gets sent back to the NLU framework for fulfillment.
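For example, if the bot is surfaced through something like Bot Framework Web Chat, one way to wire this up (a sketch under that assumption; the token and onLocationPicked hook are placeholders for your own hosting page and map widget) is to render your own map next to the chat and post the picked coordinates back through the Web Chat store:

```javascript
// Sketch: host Web Chat plus your own map widget (Leaflet, Google Maps, ...).
// When the user confirms a point, send it back to the bot as a message for fulfillment.
const token = '<Direct Line token>';            // placeholder
const store = window.WebChat.createStore();

window.WebChat.renderWebChat(
    { directLine: window.WebChat.createDirectLine({ token }), store },
    document.getElementById('webchat')
);

// Called by your map widget when the user confirms a point.
function onLocationPicked(lat, lng) {
    store.dispatch({
        type: 'WEB_CHAT/SEND_MESSAGE',
        payload: { text: `location:${lat},${lng}` }   // the bot parses this server-side
    });
}
```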

Slack API : Ability to view all recently added users to Slack Channel

I am working on a POC to prove out the ability to get a list of all the new users who have been added to a specific Slack channel. From my initial review of the Slack API, I am not seeing anything that showcases this ability, so I was curious whether anyone has worked on something similar or could point me to resources that would offer a viable solution.
I believe there is no ready-made API method available that will give you that specific information. However, Slack is very flexible and you can use the existing building blocks to easily add additional features as needed.
E.g. to get the requested information you can develop a small Slack app that listens to the member_joined_channel and member_left_channel events to keep track of when members join or leave a channel.
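A small sketch of that approach using Slack's Bolt for JavaScript (the channel ID, environment variables, and in-memory log are placeholders):

```javascript
// Sketch: subscribe the app to member_joined_channel / member_left_channel in its
// Event Subscriptions, then record the events. Swap the array for real storage.
const { App } = require('@slack/bolt');

const app = new App({
    token: process.env.SLACK_BOT_TOKEN,
    signingSecret: process.env.SLACK_SIGNING_SECRET
});

const membershipLog = [];  // placeholder for a database

app.event('member_joined_channel', async ({ event }) => {
    if (event.channel === 'C0123456789') {            // the channel you care about
        membershipLog.push({ user: event.user, action: 'joined', at: new Date().toISOString() });
    }
});

app.event('member_left_channel', async ({ event }) => {
    if (event.channel === 'C0123456789') {
        membershipLog.push({ user: event.user, action: 'left', at: new Date().toISOString() });
    }
});

(async () => {
    await app.start(process.env.PORT || 3000);
})();
```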
If you need a historical record of membership in a channel, you could use the Slack API's groups.history method, page through results, and build a membership log by looking for events of type member_joined_channel and member_left_channel through time.

Microsoft teams add custom text handlers

Is there a way to add custom text handlers in Microsoft Teams? For example, when I type # in chat, I can pick from users. Similarly, I want to bind a key to another option so that I can look up something from our web service and add it to the chat window. The following documentation describes how to write connectors for Microsoft Teams, but it doesn't say anything about what I am looking for.
https://msdn.microsoft.com/en-us/microsoft-teams/connectors
Mayank, the closest direct thing to what you want is what we call compose extensions. That feature is in preview and is available here. That will let you call a web service and interactively show results from a search in a popup list, not unlike our Giphy integration.
Under the covers, it's a special kind of bot as mark-lafleur-msft described. With a bot, the difference is that the results from the web service don't appear in a popup window.
Last but not least, the simplest way of all, if all you want to do is execute commands from within a channel and return information, is to create a "Custom Bot" (similar to an "Outgoing webhook" in Slack). Details here.
Hope that helps.
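For reference, here is a minimal sketch of what a compose (messaging) extension handler looks like with the current botbuilder JavaScript SDK; searchMyService is a placeholder for your own web service, and the Teams app manifest still needs a composeExtensions entry pointing at the bot:

```javascript
// Sketch: the handler receives the user's search text, calls your web service,
// and returns results that Teams shows in the popup list.
const { TeamsActivityHandler, CardFactory } = require('botbuilder');

class ComposeExtensionBot extends TeamsActivityHandler {
    async handleTeamsMessagingExtensionQuery(context, query) {
        const searchText = query.parameters[0].value;      // what the user typed
        const items = await searchMyService(searchText);   // placeholder web service call

        return {
            composeExtension: {
                type: 'result',
                attachmentLayout: 'list',
                attachments: items.map(item => CardFactory.heroCard(item.title, item.description))
            }
        };
    }
}
```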
What you're describing here is what Bots in Teams are intended for. Once built, you simply reference the bot by #botname and your query: #botname look up something from web service.

Creating an API for LUIS.AI or using .JSON files in order to train the bot for non-technical users

I have a bot that uses .NET, MS Bot Framework and LUIS.ai for its smarts.
All's fine, except that I need to provide a way for non-technical users to train the bot and teach it new things, i.e. new intents in LUIS.ai.
In other words, suppose that right now the bot can answer messages like "hey bot where can i get coffee" and "where can I buy some clothes" with simple phrases containing directions. Non-technical users need to be able to train it to answer "where can I get some food" too.
Here's what I have considered:
1. Continuing to use LUIS.ai. Doesn't work because LUIS.ai doesn't have an API. The best it has is the GUI to refine existing intents, and the upload app/phrase list feature. The process can be semi-automated if the JSON file with the app can be generated by some application that I write; however, there still needs to be backend code that handles the new intents, and that has to be implemented by a C# coder.
2. Could it work if I switch from C# to Node.js? Then theoretically I would be able to auto-generate code files / intent handlers.
3. Azure Bot Service. It seems it doesn't have a non-technical interface and is just a browser-based IDE.
4. Ditching Bot Framework entirely and using third-party tools such as motion.ai. Doesn't work because there's no "intellect" like the one provided by LUIS.ai.
5. Using FormFlow, which is part of Bot Framework. If my GUI bot-builder application can generate JSON files, these files can be used by Bot Framework to build a bot automatically. Doesn't work because there's no intellect as in LUIS.ai.
6. Keep using Bot Framework, but ditch LUIS and build a separate web service based on a Node.js language-processing library for determining intents. May or may not work, may be less smart than LUIS, and could be overkill.
7. Override the method in LuisDialog that selects the intent from the LuisResponse, in order to use my own way of deciding the intent (but how?).
At this point I'm out of ideas and any pointers will be greatly appreciated.
First of all, LUIS.ai does provide an API that you can use to automate the training. Moreover, here is a LUIS Trainer, written entirely in Python against that API, that does just that.
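To give a feel for what that automation looks like, here is a small sketch against the v2.0 LUIS authoring (programmatic) API; the region, app ID, version ID and authoring key are placeholders, and you should verify the exact endpoints against the current documentation for your environment. It assumes the global fetch available in recent Node.js versions.

```javascript
// Sketch: add a labelled example utterance to an existing intent, then queue training.
// Endpoint paths follow the v2.0 authoring API; verify them against the current docs.
const BASE = 'https://westus.api.cognitive.microsoft.com/luis/api/v2.0';
const HEADERS = {
    'Ocp-Apim-Subscription-Key': process.env.LUIS_AUTHORING_KEY,
    'Content-Type': 'application/json'
};

async function addExample(appId, versionId, text, intentName) {
    // Label a new utterance with an existing intent.
    const res = await fetch(`${BASE}/apps/${appId}/versions/${versionId}/example`, {
        method: 'POST',
        headers: HEADERS,
        body: JSON.stringify({ text, intentName, entityLabels: [] })
    });
    return res.json();
}

async function train(appId, versionId) {
    // Queue a training request for this app version.
    const res = await fetch(`${BASE}/apps/${appId}/versions/${versionId}/train`, {
        method: 'POST',
        headers: HEADERS
    });
    return res.json();
}
```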
The easiest option is probably the one you describe in #1: you can automate the training (as explained above), but you will still have to deploy a new version of the bot if new intents are added. Letting users train an existing model with new utterances is one thing; letting them create the model is a completely different thing :)
It might be hard to avoid writing the backend code yourself (I wouldn't automate that at all).
Here is a potential idea (not sure if it will work though). You would need 2 Luis models.
One with your current model, that users will be able to train with new utterances.
The second model is one intended exclusively to be "expanded" with new intents by users.
If you separate them that way, you might be able to look into a "plugin" architecture for the second LUIS model. So your app, somehow, dynamically loads an assembly where the second model lives.
Once you have that in place, you can focus on writing the backend code for your second LUIS model without having to worry about the bot/first model. You should be able to replace the assembly with the second LUIS model, and the bot should be able to detect that there is a new version of that assembly and replace the current one in the app domain.
As I said, it's just an idea; I'm brainstorming with you. It sounds a bit complex, and it doesn't address all your concerns, as you will still need to write code (which, in any case, you would eventually have to do).
I am working through a challenge project (training) to automate the creation of Chat Bots specifically targeted against a Luis.ai model using plain old javascript and web services to Luis.
I looked at the Bot Framework and it's just too cumbersome to automate (I want X number of customers to create a Chat Bot without coding). I also want to add my own type of 'Cards' (html widgets) that do more and can be easily configured by someone with zero coding skills.
Calls to the LUIS.ai/Cognitive Services API are made in my code-behind and the JSON response is returned to my own rules engine. On the following URL, click the LUIS API link on the page to open the LUIS API Console, where you can test and train your model. All the endpoints you will need are here...
https://dev.projectoxford.ai/docs/services/
Based on the various endpoints on that page, you can use WebClient in ASP.NET to pull back the response. So in my testing I have buttons on a page to push utterances up to the model, pull back entities, create hierarchical entities, and so on. Have a look at http://onlinebotbuilder.com to see how a "product" intent dynamically inserted a shopping cart.
When your tool is built and utterances start to arrive, LUIS.ai will store them, and via the Suggest tab (at LUIS.ai) it will ask you for guidance... Unfortunately, I don't think you could hand that control over to your customers unless they are experts in your domain (i.e. they understand which utterance belongs to which intent). You don't need to take your app down; just train it periodically to improve the model based on your customers' input... soon enough you will have your model working well based on your intents.
Hope that helps.
