I've read that to handle messages globally, I have to use Scorables and set a score based on the user's input. I am wondering if I can use LUIS to parse the user input and set a score based on LUIS intent score.
Is there any way that I can use LUIS inside my Scorable class?
Or do I have to call LUIS manually, get the response, and process it myself?
Yes, you can call LUIS yourself, pass the message to it and see what it returns.
You will receive a list of intents with a score back and you typically take the one with the highest score.
LUIS is just an API with a single endpoint, so you can call it from pretty much anywhere; it's actually very easy. Have a look here for more details: https://github.com/Microsoft/Cognitive-LUIS-Windows
The response from LUIS will give you the intent and the parameters (entities) it identified, assuming there were any. It's probably a good idea to set a threshold: if the score you get back is not high enough, that means you need to train LUIS more, but that's another story. My own threshold is set at 0.88 (LUIS scores range from 0 to 1); anything below that, I don't really trust.
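For illustration, here is a minimal sketch of that approach in Java, assuming the LUIS v2 REST endpoint; the region, app ID, subscription key and threshold are placeholders, and a real implementation would use a JSON library rather than a regular expression (the equivalent HTTP call can just as well be made from C# inside a Scorable):

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LuisScoreCheck {

        // Placeholders: substitute your own LUIS region, app ID and subscription key.
        private static final String REGION = "westus";
        private static final String APP_ID = "your-app-id";
        private static final String SUBSCRIPTION_KEY = "your-subscription-key";
        private static final double THRESHOLD = 0.88; // reject low-confidence intents

        public static void main(String[] args) throws Exception {
            String utterance = "set an alarm at three o'clock";
            String url = String.format(
                    "https://%s.api.cognitive.microsoft.com/luis/v2.0/apps/%s?subscription-key=%s&q=%s",
                    REGION, APP_ID, SUBSCRIPTION_KEY,
                    URLEncoder.encode(utterance, StandardCharsets.UTF_8));

            // One GET request against the LUIS endpoint returns the scored intents as JSON.
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            // A real implementation would use a JSON library; a regex keeps this sketch dependency-free.
            Matcher m = Pattern
                    .compile("\"topScoringIntent\":\\s*\\{\"intent\":\\s*\"([^\"]+)\",\\s*\"score\":\\s*([0-9.Ee+-]+)")
                    .matcher(response.body());

            if (m.find()) {
                String intent = m.group(1);
                double score = Double.parseDouble(m.group(2));
                if (score >= THRESHOLD) {
                    System.out.println("Handle intent: " + intent + " (score " + score + ")");
                } else {
                    System.out.println("Score " + score + " is below the threshold; consider more training or a fallback.");
                }
            }
        }
    }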
If you do it like this, you basically eliminate the need to do any processing yourself, and you use LUIS for what it's meant to be used for, which is understanding the user's query. You can do something with the result after that.
I want the user to give a longish response, for example talking about their education and family background. From this, I would like to identify multiple intents and then go back to the user with multiple questions (one by one) as a follow-up to the intents detected. Can I do this in Dialogflow CX, and how?
Note that your desired use case is currently not feasible in Dialogflow CX. By default, Dialogflow matches a user query to only one intent route. Intents are matched based on the confidence value in the detectIntent Response’s queryResult.match field.
Moreover, the maximum detect intent text input length is 256 characters. If you are using an integration, the integration platform may have a smaller limit. However, you can use the sys.long-utterance built-in event to handle user queries that exceed the 256-character limit. Note that the long user query will still get matched to only one intent route.
If you want to ask the user multiple questions, you can design your agent to have a conversation flow that asks the user one question at a time. You can utilize State Handlers to control the conversation flow. You may refer to the Voice agent design documentation for best practices for designing your agent.
You could create several intents with no response, each of which has several follow-up intents containing your questions. You are going to need to change the number of contexts to match the number of intents that you are going to use.
I am a bit lost in terms of how to use Microsoft QnA Maker and LUIS together. If I understand correctly, QnA Maker only works on FAQ-styled data, whereas LUIS is for understanding intents and providing the answer.
So the question I have is how to get both of them to work together. First, what technologies are there, and how do they determine where the calls get routed, i.e. to QnA Maker or LUIS?
Any insights will be most helpful.
I used this example a few times and it seems to work.
QnA Maker is used when the user asks a question: "How can I set an alarm on my phone?"
LUIS is used to execute a command/action and identify entities: "Set an alarm at three o'clock."
Dispatch is used to route the message to the right service, either QnA Maker or LUIS (you can have more than one of each, e.g. five QnA Maker knowledge bases and no LUIS).
Hope this helps
To expand on other answers:
QnAMaker is for direct question => answer pairs. It trains based on exact questions, such as the one Alexandre gave as an example, and has exact answers.
LUIS parses the question from the user, instead of using it directly, and uses the resulting score to return an 'intent'. The bot dev then uses this score/intent to route the conversation flow to other dialogs. A good example is to think about how many ways you can say 'goodbye' (Goodbye, bye, byebye, cya, peace!, TTYL). All of these can be programmed, or trained, in LUIS to return 'Goodbye' as the main intent. Then you can code 'if Goodbye is returned, go to Goodbye dialogs' into your own chatbot.
Dispatch is like an umbrella over both. At its core, it's a LUIS model (it looks at messages and scores them). Based on that score, it returns an intent, just like LUIS. And again, like LUIS, it would be up to the bot developer to route the returned intent (if QnAIntent is returned, go to QnA dialogs). Using Dispatch to route your initial intents means you don't need to hit every single one of your models (both QnA and LUIS) just to test an utterance (a message from a user). Just once through Dispatch.
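To make the routing idea concrete, here is a deliberately simplified sketch in Java. The intent names q_FaqKnowledgeBase and l_CommandsModel are hypothetical (a dispatch model typically exposes one intent per child QnA Maker knowledge base or LUIS app), and in a real bot the forwarding would go through the Bot Framework SDK rather than returning strings:

    // The intent names below are hypothetical; a dispatch model typically exposes one intent
    // per child service, e.g. one per QnA Maker knowledge base and one per LUIS app.
    public class DispatchRouter {

        public static String route(String topDispatchIntent, String utterance) {
            switch (topDispatchIntent) {
                case "q_FaqKnowledgeBase":
                    return "Forward \"" + utterance + "\" to QnA Maker and reply with its answer";
                case "l_CommandsModel":
                    return "Forward \"" + utterance + "\" to the LUIS app and start the matching dialog";
                default:
                    return "No child model matched; send a fallback message";
            }
        }

        public static void main(String[] args) {
            System.out.println(route("q_FaqKnowledgeBase", "How can I set an alarm on my phone?"));
            System.out.println(route("l_CommandsModel", "Set an alarm at three o'clock"));
        }
    }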
I am implementing an example with Spring Boot and Axon. I have two events (deposit and withdraw account balance). I want to know: is there any way to get the state of the Account Aggregate at a given date?
I want to get not just the final state, but to replay events in a range of dates.
I think I can help with this.
In the context of Axon Framework, you can start a replay of events by telling a given TrackingEventProcessor to 'reset' its tokens. By the way, the current description of this in the Reference Guide can be found here.
These TrackingTokens are the objects which know how far a given TrackingEventProcessor is in terms of handling events from the Event Stream. Thus resetting/adjusting these TrackingTokens is what will issue a Replay of events.
Knowing all this, the second step is to look at the methods the TrackingEventProcessor provides to 'reset tokens', which are threefold:
TrackingEventProcessor#resetTokens()
TrackingEventProcessor#resetTokens(Function<StreamableMessageSource, TrackingToken>)
TrackingEventProcessor#resetTokens(TrackingToken)
Option one will reset your tokens to the beginning of the event stream, which will thus replay everything.
Options two and three, however, give you the opportunity to provide a TrackingToken yourself (option two via a function that builds one from the StreamableMessageSource, option three directly).
Thus, you could provide a TrackingToken starting from several points on the Event Stream. So, how do you go about creating such a TrackingToken at a specific point in time? To that end, you should take a look at the StreamableMessageSource interface, which has the following operations:
StreamableMessageSource#createTailToken()
StreamableMessageSource#createHeadToken()
StreamableMessageSource#createTokenAt(Instant)
StreamableMessageSource#createTokenSince(Duration)
Option 1 is what's used to create a token at the tail (the very beginning) of the stream, whilst option 2 will create a token at the head (the most recent event) of the stream.
Options 3 and 4, however, allow you to create a token at a specific point in time, thus allowing you to replay all the events from the given instant up to now.
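Putting that together, a minimal sketch (assuming Axon 4's configuration API; "account-projection" is a hypothetical processing group name standing in for the processor that builds your Account view) could look like this:

    import java.time.Instant;

    import org.axonframework.config.Configuration;
    import org.axonframework.eventhandling.TrackingEventProcessor;

    public class AccountProjectionReplayer {

        private final Configuration configuration;

        public AccountProjectionReplayer(Configuration configuration) {
            this.configuration = configuration;
        }

        // Rebuilds the Account view from every event stored since the given instant.
        // "account-projection" is a hypothetical processing group name; use the name of the
        // tracking processor that owns your Account projection.
        public void replaySince(Instant startingPoint) {
            configuration.eventProcessingConfiguration()
                         .eventProcessor("account-projection", TrackingEventProcessor.class)
                         .ifPresent(processor -> {
                             processor.shutDown();      // the processor must be stopped before its tokens can be reset
                             processor.resetTokens(source -> source.createTokenAt(startingPoint)); // option 3: token at a point in time
                             processor.start();         // starting it again triggers the replay from that token
                         });
        }
    }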
There is one caveat in this scenario, however. You're asking to replay an Aggregate. From Axon's perspective, by default the Aggregate is the Command Model in a CQRS set-up, thus dealing with commands going into your system. In the majority of applications, you want commands (i.e. the requests to change something) to act on the current state of the application. As such, the Repository provided to retrieve an Aggregate does not allow specifying a point in time.
The above-described solution with regard to replaying is thus solely tied to Query Model creation, as the TrackingEventProcessor is part of the event handling side of your application, most often used to create views. This idea also ties in with your question, in that you want to know the "state of the Account Aggregate" at a given point in time. That's not a command, but a query, as you have 'a request for data' instead of 'a request to change state'.
Hope this helps you out #Safe!
I'm working on an Alexa skill and have decided to declare a slot for one of the intents as AMAZON.SearchQuery type, which allows free-form speech. If the speaker leaves out that slot, my lambda code elicits the slot, so at that point I'm waiting for a response that I can grab and use to search through data.
If the user says "stop" at that point (or "cancel"), "stop" becomes my search query. What's the best practice for dealing with that kind of dialog? Is there an "Alexa way" to handle it or do I have to do it in my lambda?
You would need to handle that in your skill, as a potential input to the intent that you are using the AMAZON.SearchQuery slot with.
Deciding exactly how to deal with it is up to you but you should think about the experience that would be least confusing to the user.
You can choose to stop the skill if you get a "stop" or "cancel" utterance as the value for that slot; or, if you think it's possible the user is actually searching for "stop" or "cancel", perhaps introduce one more confirmation: "Would you like to search for stop?"
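As an illustration of that slot-guard idea, here is a small, SDK-independent sketch in Java; the set of stop words is an assumption you would tune for your skill, and in a real handler you would end the session (or ask the confirmation above) when it returns true:

    import java.util.Locale;
    import java.util.Set;

    public class SearchQuerySlotGuard {

        // Hypothetical list of utterances treated as "abandon the search" rather than as a query.
        private static final Set<String> STOP_WORDS = Set.of("stop", "cancel", "never mind", "quit");

        // Returns true when the elicited SearchQuery value should end (or confirm) the interaction
        // instead of being used as a search term.
        public static boolean isStopRequest(String slotValue) {
            return slotValue != null && STOP_WORDS.contains(slotValue.trim().toLowerCase(Locale.ROOT));
        }

        public static void main(String[] args) {
            System.out.println(isStopRequest("Stop"));            // true  -> end the session, or confirm first
            System.out.println(isStopRequest("stop the music"));  // false -> treat it as a real search query
        }
    }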
I need to know the relative position of an object in a list. Let's say I need to know the position of a certain wine among all wines added to the database, based on the votes received by users. The app should be able to receive the ranking position as an object property when retrieving a "wine" class object.
This should be easy to do on the backend side, but I've looked at Cloud Code and it seems it is only able to execute code before or after saving or deleting, not before reading and returning a response.
Is there any way to do this task? Any workaround?
Thanks.
I think you would have to write a Cloud function to perform this calculation for a particular wine.
https://www.parse.com/docs/cloud_code_guide#functions
This would be a function you would call manually. You would have to provide the "wine" object or objectId as a parameter and then have your Cloud function return the value you need. Keep in mind there are limitations on Cloud functions. Read the documentation about time limits. You also don't want to make too many API calls every time you run this. It sounds like your computation could be fairly heavy if your dataset is large and you aren't caching at least some of the information.
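To illustrate the ranking calculation such a Cloud function would perform (a wine's position is 1 plus the number of wines with strictly more votes), here is a sketch of the equivalent counting query against the Parse REST API, written in Java; the Wine class, the votes field and the keys are assumed placeholders, and inside Cloud Code you would express the same query in JavaScript with Parse.Query and count():

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class WineRank {

        // Placeholders: substitute your own Parse application ID and REST API key.
        private static final String APP_ID = "your-app-id";
        private static final String REST_KEY = "your-rest-api-key";

        // A wine's rank is 1 + the number of wines with strictly more votes.
        // "Wine" and "votes" are assumed class/field names.
        public static int rankForVotes(int votes) throws Exception {
            String where = URLEncoder.encode("{\"votes\":{\"$gt\":" + votes + "}}", StandardCharsets.UTF_8);
            String url = "https://api.parse.com/1/classes/Wine?count=1&limit=0&where=" + where;

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("X-Parse-Application-Id", APP_ID)
                    .header("X-Parse-REST-API-Key", REST_KEY)
                    .GET()
                    .build();

            String body = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString())
                    .body();

            // The response looks like {"results":[],"count":N}; a JSON library would be nicer than this crude extraction.
            int count = Integer.parseInt(body.replaceAll(".*\"count\"\\s*:\\s*(\\d+).*", "$1"));
            return count + 1;
        }

        public static void main(String[] args) throws Exception {
            System.out.println("Rank: " + rankForVotes(42));
        }
    }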