I activated the Active Learning option in my QnA Maker service to improve the answers it gives using feedback from users: they ask questions and, if the score is too low, active learning lets them choose among the best-rated answers in the knowledge base, or choose none of them as the correct one.
The problem is that the feedback the users give should go to my QnA service for approval, but when I look for suggestions in the portal, there's nothing waiting to be approved.
QnA Maker active learning works with feedback sent from the emulator or other channels to your bot.
A comment from a related GitHub issue states:
When there is a low confidence score difference between the top answers, we collect weighted implicit and explicit feedback to cluster suggestions for any QnA ID. When enough feedback is collected for any given suggestion, it will show in the KB.
More specifically, we cluster similar user queries to generate suggestions. When minimum required feedback is collected, only then will the suggestions show in the KB.
The QnA team wants to avoid publicly divulging the exact logic of what exactly the "minimum required feedback" is and how often suggestions are generated (besides, the team is working on improving and optimizing the logic behind active learning as well). However, to see suggestions appear in the qnamaker.ai portal:
* not only ensure that you've given the bot enough feedback,
* but also give the back end "some time" to allow the suggestions to appear in the portal.
Again, feedback is collected when your user types in a query that returns answers from QnA that have confidence scores that are close together.
It is also good to note that feedback is not collected in the Test panel in the qnamaker.ai portal as of now. You will need to chat with your bot via emulator or a channel to provide feedback to your bot that it can use for active learning.
Note: If the suggestions are not showing up, then it is probably because the questions users are asking are not generating top N answers that have similar confidence scores.
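For context on how that feedback travels, explicit feedback reaches the service through the Train API on your runtime endpoint. Here is a minimal sketch in Python; the host, knowledge base ID, endpoint key, and example values are placeholders for your own resource details:

```python
import requests

# Placeholder values -- substitute your own QnA Maker resource details.
RUNTIME_HOST = "https://your-qnamaker-resource.azurewebsites.net"
KB_ID = "your-knowledge-base-id"
ENDPOINT_KEY = "your-endpoint-key"

def send_feedback(user_id, user_question, qna_id):
    """Send one explicit-feedback record to the QnA Maker Train API.

    Active learning aggregates records like this one; once enough
    similar feedback accumulates, suggestions appear in the portal.
    """
    url = f"{RUNTIME_HOST}/qnamaker/knowledgebases/{KB_ID}/train"
    payload = {
        "feedbackRecords": [
            {
                "userId": user_id,
                "userQuestion": user_question,
                "qnaId": qna_id,  # the QnA pair the user confirmed as correct
            }
        ]
    }
    response = requests.post(
        url,
        json=payload,
        headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
    )
    response.raise_for_status()  # the service replies 204 No Content on success

# Example: the user picked the answer with QnA ID 42 for their question.
send_feedback("user-123", "how do I reset my password", 42)
```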
The latest sample on Active learning in QnA Maker is available here.
Hope this helps.
I have a feeling I'm missing something very obvious here, or perhaps I'm just using the wrong tools in the wrong order, or some combination of the above.
My company uses Blue Prism, and I'd like to build a virtual sales assistant that I can integrate into MS Teams. The idea being, the sales team can ask this bot to carry out a number of different tasks. The user thinks the bot is doing it, but in reality, the bot will be calling Blue Prism and triggering a separate process that has been built. We would integrate LUIS to attempt to split out all the different entities in the question and gradually narrow down what is what by replying to the original user question if it can't split them straight away.
I've built a brief knowledge base and integrated it into Teams; however, what I'm struggling with is learning how to actually have a central source read the messages asked by users within Teams. I'd like to try to go directly to Blue Prism, but I'm aware that something like Flow or Power Automate may be an option, even if just to use it to trigger Blue Prism rather than having that happen directly from Teams.
Any ideas? An example of a request may be - 'Log 100k pipeline app for the new product for mr smith.'
Thanks
I think you might be getting a bit of crossed wires between LUIS and QnA Maker. QnA Maker, as the name implies, is specifically for when you have questions and want to maintain a knowledge base of likely answers. In contrast, LUIS would be used here to more accurately identify the instruction being passed and to extract the values from it. It's certainly possible to combine the two into the same bot, i.e. have a bot that handles both commands and questions, but that doesn't sound like what you're trying to do.
As a result, you should be focusing more on LUIS, and defining the relevant entities and intents. As an example, "log pipeline" would be an "intent", whereas "100k", "new product", and "mr smith" would be entities that the bot would know to work with.
What the bot does behind the scenes with that intent+entities combination is of course totally up to what you choose as an implementation, Blue Prism or otherwise.
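To make that concrete, here is a minimal sketch of querying the LUIS v3 prediction endpoint for the example request above. The endpoint, app ID, key, and the "LogPipeline" intent name are all placeholders; you would define the actual intents and entities in your LUIS app:

```python
import requests

# Placeholder values -- substitute your own LUIS resource details.
ENDPOINT = "https://your-luis-resource.cognitiveservices.azure.com"
APP_ID = "your-luis-app-id"
PREDICTION_KEY = "your-prediction-key"

def predict(utterance):
    """Send an utterance to the LUIS v3 prediction endpoint."""
    url = f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
    response = requests.get(
        url,
        params={"subscription-key": PREDICTION_KEY, "query": utterance},
    )
    response.raise_for_status()
    return response.json()["prediction"]

prediction = predict("Log 100k pipeline app for the new product for mr smith")
print(prediction["topIntent"])  # e.g. "LogPipeline", an intent you define in LUIS
print(prediction["entities"])   # e.g. the amount, product, and customer entities
# Your bot would then hand the intent + entities to Blue Prism (or whatever
# back end you choose) to trigger the corresponding process.
```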
We have a Slack workspace within a small (~400-member) scientific community discussing multiple ideas (to help solve the current COVID-19 crisis), so there are lots of conversations and topics. :-) We're currently using one channel and wouldn't want to create 50.
What's the best way to spin off discussions into sub-topics? Slack doesn't support sub-channels (yet), does it?
Would a task management plug-in for Slack help here? I want to separate and localize topics and discussions, not track individual responsibility, so "task management" might not be the right paradigm.
Other suggestions?
Slack doesn't have sub-channels, but you can mimic them with some naming conventions.
For example:
#covid-19-symptoms -> all discussions related to symptoms
#covid-19-symptoms-respiration -> symptoms related to respiration
#covid-19-symptoms-respiration-topic-3 -> a specific discussion under respiration symptoms
If you are using this Slack workspace exclusively for discussing COVID-19, you can drop the covid-19 prefix, which would be redundant.
You will also need one moderator who can review all the messages and repost, remove, or move them across channels to keep each channel relevant.
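If the convention catches on and you find yourself creating many channels, you could script their creation against Slack's conversations.create Web API method. A rough sketch in Python, assuming a bot token with the channels:manage scope:

```python
import requests

# Placeholder token -- needs a bot token with the channels:manage scope.
SLACK_TOKEN = "xoxb-your-bot-token"

def create_channel(name):
    """Create a public channel via Slack's conversations.create method."""
    response = requests.post(
        "https://slack.com/api/conversations.create",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json={"name": name},
    )
    data = response.json()
    # Slack returns HTTP 200 even on failure; check the "ok" flag instead.
    if not data.get("ok"):
        raise RuntimeError(f"Slack API error: {data.get('error')}")
    return data["channel"]["id"]

for name in ("covid-19-symptoms", "covid-19-symptoms-respiration"):
    create_channel(name)
```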
I have read the documents and tried to run the Active Learning sample, and I understand how the program works.
The documents refer to Implicit feedback and Explicit feedback. I have two questions.
I can find the explicit feedback code, but I don't understand when the knowledge base will show the feedback.
Where is the implicit feedback code? Is there none? I think both implicit and explicit feedback involve similar scores. What's the difference?
So, as the docs on active learning state, implicit feedback occurs when a user question has multiple answers with scores that are very close, whereas explicit feedback is received when the client application asks the user which question is the correct one, and the user's selected question is used as explicit feedback.
Where is the feedback collected?
The feedback is collected from the conversation between the user and the bot.
As of now, feedback is not collected in the Test panel in the qnamaker.ai portal.
Where do we see the suggested questions generated via Active Learning?
When "enough" feedback is collected on a cluster of question and answer pair, you will see the active learning feedback inside the portal at qnamaker.ai > Edit
Further Active Learning Explanation
I'll include below one of my posts from a thread regarding active learning. However, I would encourage you to read the full thread on active learning that was opened as a Microsoft Docs issue, to see the included screenshots.
#Souvik04, follow the link to the Active Learning sample bot in the BotFramework-Samples repo for an example of how you can query the QnA service from your bot with active learning enabled.
After conversing with the QnA team (Rohit is included in the conversation), here's a little more light regarding when you would actually see the suggestions inside the portal at qnamaker.ai.
When there is a low confidence score difference between the top answers, we collect weighted implicit and explicit feedback to cluster suggestions for any QnA ID. When enough feedback is collected for any given suggestion, it will show in the KB.
More specifically, we cluster similar user queries to generate suggestions. When the minimum required feedback is collected, only then will the suggestions show in the KB.
The QnA team wants to avoid publicly divulging the exact logic of what exactly the "minimum required feedback" is and how often suggestions are generated (besides, the team is working on improving and optimizing the logic behind active learning as well). However, to see suggestions appear in the qnamaker.ai portal:
* not only ensure that you've given the bot enough feedback,
* but also give the back end "some time" to allow the suggestions to appear in the portal.
Again, feedback is collected when your user types in a query that returns answers from QnA that have confidence scores that are close together.
It is also good to note that feedback is not collected in the Test panel in the qnamaker.ai portal as of now. You will need to chat with your bot via the emulator or a channel to provide feedback to your bot that it can use for active learning.
My company (which provides tutoring services) recently transitioned to Square for its appointments and POS, and I am trying to automate certain tasks. I wanted to know if there was a way to create "Open Tickets" for transactions through the Connect API.
I went through the documentation and couldn't find anything that refers to "tickets". I checked the seller community but wasn't satisfied with the answer from Square since they seemed to not understand what "Tickets" meant. I have provided more details at the end of this post in case someone wasn't sure about "Tickets" here as well.
I believe currently Tickets are only available through the Square POS app (Android/iOS) and not on the Web Dashboard. I would like to be pointed in the right direction in terms of what I might need to look at in order to get access to automatic ticket creation.
For more details, please read on.
In order to clarify what I mean by "tickets", here is Square's page regarding "Open Tickets". They are basically a way to create and save transaction info ahead of time so customers can be charged quicker. The way we use "Open Tickets" is we create tickets for Tutoring sessions every day in the morning and when a customer shows up, all they have to do is look up their ticket and pay. We do this since we expect a lot of traffic every day and we want to streamline the process as much as possible.
Therefore, our admin staff ends up creating 80-100 tickets manually every day! I wanted to know if there was a way to automate this. I already have a running Google Sheet with all the appointment data that would be needed to create a ticket; I just need a way to hook that into ticket creation.
I apologize if this is a long post. I tried to be concise but thorough. Please let me know if there is any detail that I missed. I appreciate any help!
Unfortunately, Open Tickets isn’t currently available for Square’s API. Square's API is only able to track completed transactions at this time.
We are constantly improving the product based on feedback like this, so I’ll be sure to share your thoughts with the API team.
I'm trying to work in a more organised way and have started adopting user stories.
I think I have a misunderstanding of how I should use user stories for technical stuff.
Let's say I'm coding an app that gives me the ranking of my site for a certain Keyword in Google.
The user story goes like that:
As an Internet Marketer
I want to find out where my website ranks for a keyword
So I'll know whether my SEO efforts work
Now this is pretty straightforward and user-centric... However, what happens if I need to introduce proxies into the loop?
On one hand, proxies are a technical implementation detail; on the other hand, proxies are part of the Internet Marketer's domain.
How should I craft such a story?
As an Internet Marketer
I want to use Proxies when searching in Google
So we'll be able to check a lot of keywords without Google blocking us
The above scenario doesn't sound right to me... Maybe I can rewrite it to be something like:
As an Internet Marketer
I want to be able to check a lot of Keywords at a time
So it'll save me time
This sounds more right; however, what acceptance criteria can I give it? Try scraping Google 100 times in a minute? Isn't that a waste of time?
Here's another scenario: how should I craft a user story when the feature I want to implement is that a proxy can only be used once every 30 seconds? I don't have any idea how to approach this problem from a user-centric perspective...
Another thing I thought of doing is to introduce another role. Instead of being centered around the Internet Marketer, I can say we have a role called Google Scraper, and that the Internet Marketer is in a relationship with the Google Scraper.
Now I can write a user story like:
As Google Scraper
I want to change proxies every Search
So Google won't ban me
What would you say about approaching technical implementation details like this? It could also help with breaking the system down into modules...
You don't write technical stories. User stories should meet the INVEST criteria.
Proxies do sound like an implementation detail and should be avoided. You should not be mentioning proxy servers in your story. Even if they are part of the domain, there are potentially other ways to achieve the same effect.
Instead of writing "I want to use a Proxy, so that I don't get blocked", you should write "I want to disguise my identity, so that I don't get blocked". If I were your customer, I wouldn't know why you wanted a proxy. Is it a forward, open, or reverse proxy? There are loads of uses for a proxy server. You should pick the feature that you want to exploit.
However, you shouldn't get too hung up on perfect stories. The agile manifesto says, "Individuals and interactions over processes and tools".
When writing a user story, you should also consider the 3 C's: Card, Conversation, Confirmation. Do both the customer and you understand the meaning of the story?
Does the card meet the INVEST criteria? If you answered yes to both of those questions, then the story is fine.
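To address your acceptance-criteria question directly: for the reworded story above, a criterion might look something like this (the wording and numbers are purely illustrative):
Given a list of 100 keywords
When the app requests the Google ranking for every keyword in the list
Then a ranking is returned for each keyword
And no request is blocked by Google as automated traffic
This keeps the criterion about the observable outcome (nothing gets blocked) rather than the mechanism (proxies), which is exactly the separation the INVEST guidance is after.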
User stories should not include technical details. During sprint planning, technical details should be added as delivery team tasks nested below the user story. These tasks should be created through discussion by the delivery team. You should not attempt to document every implementation detail under the sun, as you will reach a point of diminishing returns. Aim for 60-75 percent coverage of implementation details (tasks) for each user story, as the details may change once coding begins. Any additional details developers discover during coding can be shared and documented briefly during the daily stand-up. The user story can stay simple and non-technical while the delivery/development team fleshes out the story details as nested tasks.
These tasks should be visible to developers through their Integrated Development Environment (IDE). As developers complete tasks, they can associate their checked-in code with the task in your work item tracking tool (Jira, Team Foundation Server, On-Time).