How to add buttons in a Rasa chatbot? - rasa-nlu

I want a chatbot with buttons. For example: "How are you feeling? Sad or Happy." I want two buttons here (one for Happy and one for Sad), take the input from the user, and follow up with other questions. What would the stories.md, nlu.md, domain.yml, and frontend Python code look like?

To make buttons the way you describe, you only need to change the domain.yml file. In the templates section, add a buttons list under the text of the response, like this (the payload is sent back as if the user had typed it, so it is normally an intent trigger such as /mood_happy rather than a response name):
buttons:
- title: "Happy"
  payload: "/mood_happy"
- title: "Sad"
  payload: "/mood_sad"
Along with this, I would encourage you to check this GitHub repo to learn about other kinds of chat widgets designed for Rasa bots:
https://github.com/JiteshGaikwad/Chatbot-Widget
And in this repo you will find some demo chatbots that are fully written in Python:
https://github.com/cedextech/rasa-chatbot-templates
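Since you also asked for the frontend Python code, here is a minimal sketch of a client that talks to Rasa's built-in REST channel. It assumes the bot is running via "rasa run" with the REST channel enabled in credentials.yml and that the button payloads match the snippet above; the sender id is arbitrary.
# Minimal sketch of a Python frontend using Rasa's REST channel.
import requests

RASA_URL = "http://localhost:5005/webhooks/rest/webhook"  # default REST channel endpoint

def send_message(sender_id, text):
    # The REST channel expects a JSON body with "sender" and "message"
    # and returns a list of bot messages.
    response = requests.post(RASA_URL, json={"sender": sender_id, "message": text})
    response.raise_for_status()
    return response.json()

for message in send_message("user1", "hi"):
    print("Bot:", message.get("text", ""))
    # Button messages carry a "buttons" list of {"title": ..., "payload": ...}
    # dicts; render them in your UI and send the chosen payload back as the
    # next user message, e.g. send_message("user1", "/mood_happy").
    for button in message.get("buttons", []):
        print("  [button]", button["title"], "->", button["payload"])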
Hope this helps.

Related

Is there a way to clear chat history in Web Chat which uses DirectLineJS and not React?

As per the GitHub issue linked below for Web Chat and the Bot Framework:
https://github.com/microsoft/BotFramework-WebChat/issues/1846
We can clear chat history by assigning a new instance to the store variable, which triggers a DIRECT_LINE/DISCONNECT action. This works in the React version of Web Chat. But I have tried it in the plain JavaScript version of Web Chat and the issue seems to persist, i.e., when the store variable is replaced, the previous chats are not removed.
Any confirmation from the community would be helpful, as there are many open issues in the Bot Framework GitHub and it's confusing. I'm sharing a few that I found:
https://github.com/microsoft/BotFramework-WebChat/issues/1293
https://github.com/Microsoft/BotFramework-DirectLineJS/issues/124
If not, can this be achieved in some other way? Any suggestions would be helpful.
I was able to resolve it. Just replacing the variable will not work; you need to re-render Web Chat with the new store in the webchat.js code:
await window.WebChat.renderWebChat(
  {
    store: widgetStore
  },
  document.getElementById('webchat')
);
The BotFramework-WebChat repo provides a sample that demonstrates precisely how to do this. The sample is 04.api/h.clear-after-idle. The readme also includes some caveats to be aware of.

Bot Framework - Display User's intent

I'm trying to figure out how I can take the intent from LUIS and insert it into the response from the bot. I cannot seem to find the variable/path to this value. I've tried:
#intentName
turn.recognized.intents.intentName
luisResult.intents[0]
The docs are not helping, since they just show Java classes, which doesn't directly translate to the Composer app as far as I know.
You can pull the first intent via the following adaptive expression:
${foreach(turn.recognized.intents, x, x.key)[0]}

Can AWS Lex be used to build a conversation flow bot that replies with different answers based on the user's input?

Can AWS Lex be used to build a conversation flow bot?
For example:
Thank you very much!
Reason for all this: we have our own "dialogue builder" and "bot-service".
Our own "dialogue builder" is perhaps similar to the Amazon Connect dialogue builder, and our own "bot-service" is similar to the Microsoft Bot Framework. Previously we were using Microsoft LUIS to get the "intention" of a sentence, while using our own dialogue builder and bot-service to build the conversation/dialogue flow, e.g. if a user says "yes" go to one flow, and if a user says "no" go to a different flow (can this be done with slots?) == a binary tree :)
Now we are switching from LUIS to AWS Lex and trying to figure out whether it is possible to just use the AWS Lex UI and drop our own dialogue builder/bot-service. But from what I understand, to use AWS Lex without some kind of dialogue builder we would need to write a lot of if/case statements if the flow is large, right? What is your suggestion? One option would be to use Amazon Connect for its dialogue builder so we don't have to write a lot of if statements, but if we are going to use a dialogue builder anyway, we could just keep our own (old) one. What do you think?
Questions:
1) Is there a way to do something like this in AWS Lex or not? I tried using slots/prompts/Lambda, but I am not able to go to the 2nd or 3rd level of depth in the diagram. Can it be done somehow?
2) Do I have to use Lambda with switch/if conditions every time the flow has to change (e.g. if the answer is yes reply with this, and if no reply with that)?
3) If #2 is true, is it possible for a non-developer to use it? Even if I write ~1k-2k if conditions, a non-developer who tries to edit a dialogue through the UI won't be able to do it, right? (So does this mean we aren't really using the AWS Lex UI; we are just writing "if conditions" in code and using AWS Lex only to get the intention, right?)
4) Would it be possible to give an example and show how building such a flow is possible? So far, with slots, the replies/responses don't change based on the user's input. It doesn't matter whether the user says "no" or "yes", the bot replies with the same path/answer. Is there a way to change the reply based on the user's input?
5) If #3 is not possible and a non-developer can't use the AWS Lex UI to make something like this, should we use a custom dialogue builder that does?
Thank you very very much!
It sounds like you're switching from the Microsoft Bot Framework and looking for a simpler solution for structured flows without entity recognition.
You may want to research Microsoft's QnA Maker multi-turn ability. It's supported in the QnA Maker online editor, but not in the Bot Framework SDK (yet). They do have an example bot that uses it through the Web API.
https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/multiturn-conversation
I realize this doesn't answer your Lex question, but it might address your concern.
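For questions 2 and 4, the "lots of if/switch statements" approach would typically live in a Lambda fulfillment hook attached to the intent. Here is a rough sketch of what that could look like, assuming the Lex V1 event/response format; the intent name "FeedbackIntent" and the "answer" slot are made-up names for this example, not part of your bot.
# Sketch of branching a Lex conversation in a Lambda fulfillment hook.
# Assumes Lex V1 event/response shapes; "FeedbackIntent" and "answer" are hypothetical.
def close(message):
    # Build a Lex "Close" response that ends the turn with the given message.
    return {
        "sessionAttributes": {},
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        },
    }

def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"] or {}

    # This is the if/switch logic from the question: every branch of the
    # conversation tree has to be spelled out in code.
    if intent == "FeedbackIntent":
        if (slots.get("answer") or "").lower() == "yes":
            return close("Great! Let's continue to the next step.")
        return close("Okay, let's go down a different path.")

    return close("Sorry, I didn't understand that.")
This scales poorly for large trees, which is essentially the trade-off you describe: either keep a dialogue builder (your own or Amazon Connect's) or encode the tree by hand in code.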

A way to open up Slack Search UI (in a browser) from a URL

I was trying to see if there was a way to open up the new Slack Search UI, in a browser, from a URL.
For example, in Microsoft Teams, you can open up the following link https://teams.microsoft.com/_#/apps/a2da8768-95d5-419e-9441-3b539865b118/search?q=yourQueryHere in a new tab, and it would open up your Teams and open up the search UI with yourQueryHere results already listed.
Is there something similar for Slack? So far I do not believe it is possible because Slack does not have routes for search pages.
Let's say we are here: https://company.slack.com/messages/channelId/.
We then type something in the search bar and press search. The URL stays constant.
Note that Slack seems (?) to have deeplinking for search (slack://) according to their docs, but there are no examples (I tried slack://search/hello, slack://search).
I had the same question for Slack support earlier - it looks like they removed Slack deep search linking via the slack://search URL after they released their new search functionality:
I am sorry to report that we currently do not support search deep linking at the moment. We previously did but it needed to be removed while our improved search was being worked on and released.
The docs you linked to used to have a better description further up the page about how to use this deep link and the required syntax. It looks like the example workflows section you link to is a remnant of that.
I am really sorry that the docs mislead you. I will contact the API docs team and ask them to remove those references. Hopefully in the future we can reinstate this ability once again.
It seems that the last parameter of the search URL - https://app.slack.com/client/.../search/search-eyJ... - is a Base64 encoding of a JSON string:
{
  "d": "search query goes here",
  "q": "... (not sure what this is used for, can be disregarded)",
  "r": "search query goes here"
}
So I'm using this method and it works fine:
#!/usr/bin/env python3
# search.py
import base64
import json
import os
import sys
from urllib.parse import quote

query = sys.argv[1]
# Build the JSON payload Slack expects and Base64-encode it.
search_data = json.dumps({
    "d": quote(query),
    "r": quote(query)
})
qhash = base64.b64encode(search_data.encode()).decode()
# "..." stands for the workspace-specific part of the client URL.
url = f'https://app.slack.com/client/.../search/search-{qhash}'
os.system(f'open "{url}"')  # macOS "open"; use your platform's equivalent
And then:
./search.py "my search query"
I've sent a message to the Slack team, and they responded extremely fast. Unfortunately, there is no way at the moment:
I'm afraid to say that there's no way at the moment to do what you describe, but it's a pretty cool idea! I'm not sure how we could cleanly manage such a change in light of our recent adjustments to the search UI (it's not in the side pane anymore!), but I'll pass your desire for such a thing along to the rest of the team here.
Thanks for writing in about this! Please let me know if you happen to think of anything similar you'd like to see changed or added to the app, and I'll be happy to make sure the right folks hear about it.

How to send a request to Google Home/Assistant like IFTTT + Webhooks

Please point me in the right direction; I'm stuck on a documentation issue. I'm going to code a small service for Google Home so that anyone can add a new phrase and have it either make a POST request or answer with specific text (like IFTTT with Webhooks), e.g. "Hey Google, switch my kitchen light" -> the service sends a POST request to my own HTTP server. I know that IFTTT works, but I would like to code the target service myself for small cases with a fast response.
I tried to understand all the Google Assistant layers, but still no luck; I didn't find a clear path.
What I have learned so far:
1. Connect to the Google account using OAuth 2.
2. .... save the phrase and the action for it in my DB - that part is OK and simple.
3. ...??? How do I send it, and to where? In what (JSON?) format?
4. Receive the answer from Google Home, figure out which case it is, and perform my action (for example: turn the kitchen light on/off).
It should not be as hard as I imagine... anyway, please help :).
Appreciate your time and answers, and have a nice day!
To learn about extending the Google Assistant, you should look into the documentation for Actions on Google: https://developers.google.com/actions/extending-the-assistant
Although it also seems like you want to use it for Smart Home: https://developers.google.com/actions/smarthome/
You can run the Smart Home sample if you want to see how it works: https://github.com/actions-on-google/actionssdk-smart-home-nodejs
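As a concrete illustration of steps 3 and 4 from the question: one common setup is to give the Action a fulfillment webhook that the Assistant calls with a JSON request, and have that webhook forward a POST to your own server. The sketch below assumes the Dialogflow v2 fulfillment request/response format; the intent name "SwitchLight" and the target URL are placeholders, not anything defined in the docs above.
# Rough sketch of a fulfillment webhook, assuming the Action is built with
# Dialogflow and its fulfillment URL points at this service. "SwitchLight"
# and MY_SERVER are hypothetical placeholders.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
MY_SERVER = "http://example.invalid/lights"  # placeholder for your own HTTP server

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    body = request.get_json(force=True)
    intent = body["queryResult"]["intent"]["displayName"]
    params = body["queryResult"].get("parameters", {})

    if intent == "SwitchLight":
        # Forward the request to your own server, IFTTT-webhook style.
        requests.post(MY_SERVER, json=params, timeout=5)
        return jsonify({"fulfillmentText": "Okay, switching the kitchen light."})

    return jsonify({"fulfillmentText": "Sorry, I can't do that yet."})

if __name__ == "__main__":
    app.run(port=8080)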
