Input Claim vs Output Claim vs Display Claim in Azure B2C Custom Policies

Can someone explain to me the differences for:
Input Claims
Output Claims
Display Claims
I have looked at the Technical Profile and Custom Policy Code Walkthrough pages, but I need more information.
As I understand it, an input claim is a claim that the technical profile requires and picks up from the claims bag; it is not actual input from the user.
An output claim is either input from the client or output produced by the technical profile. It is stored in the claims bag so that another technical profile can use it if required.
For the display claim, their page says: "The DisplayClaims element contains a list of claims to be presented on the screen to collect data from the user", but I have not actually seen it used, only the note that it is in preview: "The DisplayClaims feature is currently in preview."
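For context, all three elements can appear side by side in a self-asserted technical profile, something like this (the profile Id and claim names here are just illustrative):

```xml
<TechnicalProfile Id="SelfAsserted-ProfileUpdate">
  <!-- InputClaims: prefilled from the claims bag, not typed by the user -->
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="givenName" />
  </InputClaims>
  <!-- DisplayClaims (preview): shown on screen to collect data from the user -->
  <DisplayClaims>
    <DisplayClaim ClaimTypeReferenceId="givenName" Required="true" />
  </DisplayClaims>
  <!-- OutputClaims: written back to the claims bag for later profiles -->
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="givenName" />
  </OutputClaims>
</TechnicalProfile>
```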
Also, can someone explain the Occurrences notation used in the tables on that page, such as 1:n, 0:1, 1:0?
Thank you.

Related

Optional parameters for form filling in Dialogflow CX

I cannot seem to get Dialogflow CX to fill optional parameters; it only fills parameters when they are required.
I want to have the bot ask the user for input, like "Tell me about your experience", and then repeat it back to them, but also give the user the ability to skip the question.
I have tried different solutions. One was to have a required sys.any parameter and then check whether it contains the word "skip". However, that does not seem very robust, as a valid comment could trigger a skip if it happened to use that word.
My second attempt is shown in image 1: a sys.any parameter that is not required, plus a custom "skip" entity that is also not required. No matter what I said, it resulted in the default fallback. I also tried an intent for skipping (called "no") that would overrule sys.any if it were required; however, whatever I say is simply recorded in the "AnyInput" parameter.
What would be a good way to give the user the ability to make a comment or skip it? Any suggestions would be helpful. Thanks!
Screenshot 1

Wrong answer from QnAMaker with keyword

I have been working with Microsoft Bot Framework v4 and QnA Maker (GA). A problem I have come across is when the user types a keyword like 'leave absence'. There are 10+ kinds of leave-of-absence questions, and QnA Maker sends back the one with the highest score regardless of which kind of leave it is (not the right answer).
I have a tree to answer question that looks something like this:
Leave of absence
Parental leave
Maternity leave
Care leave
etc.
Each kind can have one or more related questions, and a leave can also have a sub-leave.
When the user asks 'leave absence', the bot should answer 'Which kind of leave absence?', after which the user can ask a question about it.
When the user asks 'How many days can I have for parental leave', the bot should answer straight from the QnA: 'You can have 10 free days'.
My question is: how can I implement this in v4 so the user receives the right answer? Is LUIS an option for this? Any suggestions?
Thank you.
It's difficult if you have question after question to ask the user. For this, you may need a separate Dialog class with a List<string> for the set of questions, built at runtime of course. At the end it could return to the original Dialog class. I have implemented something similar for job openings on different posts, each post having its own set of questions. Control remains in this QuestionnaireDialog (the separate Dialog class), which asks the next question once the user answers the current one. I don't think QnA Maker will help with this. I have not looked at QnA Maker or v4 much; I did the above in v3, with the intent-response mapping in a database table.
My suggestion is to flatten your structure if possible from multiple levels to just 2-level to avoid the tree.
For example, change

Leaves --> Care Leave --> Medical Care Leave
                      --> Family Care Leave

to

Leaves --> Medical Care Leave
       --> Family Care Leave
That way you could manage it with LUIS entities: simply asking about leaves brings a response listing all the available leave types, while asking about a specific leave type brings a different response specific to that type. Again, I have done something similar without QnA Maker in v3. If you can't flatten the structure, you will probably need a mixture of the two approaches, because you want both to respond to the user's specific leave-type query (LUIS entities) and to take the user through a questionnaire.
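For what it's worth, the questionnaire flow described above can be sketched without any framework. This is just the control loop (Python for brevity, with hypothetical question text), not actual Bot Framework code:

```python
class QuestionnaireDialog:
    """Asks a runtime-built list of questions one at a time.

    A minimal sketch of the control flow described above; in Bot
    Framework the same idea lives in a separate Dialog class that
    keeps control until every question is answered.
    """

    def __init__(self, questions):
        self.questions = list(questions)  # e.g. loaded per job post at runtime
        self.answers = {}
        self.index = 0

    def next_prompt(self):
        """Return the next question to ask, or None when done."""
        if self.index < len(self.questions):
            return self.questions[self.index]
        return None

    def handle_answer(self, text):
        """Store the user's answer and advance to the next question."""
        self.answers[self.questions[self.index]] = text
        self.index += 1


# Usage: control stays here until all questions are answered,
# then the caller can return to the original dialog.
dialog = QuestionnaireDialog(["Years of experience?", "Preferred location?"])
while (question := dialog.next_prompt()) is not None:
    dialog.handle_answer(f"(user reply to: {question})")
```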

Handling incorrect responses to chatbot questions

I am using Microsoft Bot Framework to develop a chatbot, and my question is: how can I handle incorrect responses from a user?
Suppose the bot asks for the user's name and he or she replies "don't know".
I have seen that the Bot Framework boilerplate code handles minimum-length validation, but how can I handle this kind of logical check?
Thanks in advance.
I am assuming you are using the v4 C# SDK; let me know if this is not correct and I can update my answer for Node or v3.
This sample does exactly what you are trying to do: it has a validator that checks the length of the user's input and reprompts if the input is too short. You can see this in this method.
In general name validation is fairly difficult because names can be very diverse and contain special characters like "-", "'", and others. Using a prompt with a custom validator should give you the opportunity to at least add some validation like length and numerical character checking.
An expected answer normally has a known format. If the bot is asking for a name, the name should not contain numbers or special characters. You can do a quick check whether the words the user returned are standard English words (there are plenty of libraries with such word lists). You can even pass the returned sentence to LUIS and see if you get a known intent, and then you can disqualify the answer.
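As a sketch of those format checks (length bounds, no digits, only letters plus a few separators), one possible validator is below. It is an illustration of the idea, not the sample's actual code, and purely syntactic checks will not catch answers like "don't know", which is why the word-list or LUIS checks above are still worth adding:

```python
import re


def plausible_name(text: str, min_len: int = 2, max_len: int = 60) -> bool:
    """Heuristic format check for a name: length bounds, no digits,
    and only letters apart from a few separators ("-", "'", ".", space).

    Purely syntactic -- "don't know" still passes, so semantic checks
    (word lists, LUIS) are needed on top of this.
    """
    text = text.strip()
    if not (min_len <= len(text) <= max_len):
        return False
    if any(ch.isdigit() for ch in text):
        return False
    # One or more letter runs joined by single separator characters.
    return re.fullmatch(r"[^\W\d_]+(?:[ \-'.][^\W\d_]+)*", text) is not None
```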

Google API OAuth-2.0, Installed Application Grant Flow: Why is the authorization code truncated in the browser title?

I am encountering behavior inconsistent with Google's documentation on step 2 of this grant flow.
As described at https://developers.google.com/accounts/docs/OAuth2InstalledApp , when a "redirect_uri" value of "urn:ietf:wg:oauth:2.0:oob" is specified, "your application can then detect that the page has loaded, and can read the title of the HTML page to obtain the authorization code."
Every attempt I have made to use this approach has had the same result. After the redirect, the browser's title contains only a partial authorization code, though the edit box on the page is correctly populated with the entire code. (I could provide an image for illustration, but not without sufficient "reputation".) Whether I retrieve the title programmatically or just inspect it via the tab's tooltip, the code is consistently truncated at the 44th character of the title, immediately before the period at that position in the full code.
With only a partial code, there is no way to proceed past step 2; the documentation leaves little room for doubt that this is buggy behavior. For reference, the full authorization code works if I retrieve it by manual copy and paste (but that is not an option for me in practice).
Has anyone else encountered this behavior?
Most importantly, can Google or a representative thereof, please answer the question of "Why?" (And, assuming it's not something on my end, "When will it be fixed?")
I noticed the shorter code too. The shorter authorization code will still work; you can proceed to the next step.

Web Development: how locked down should an admin backend be?

I'm going to use PHP in my example, but my question applies to any programming language.
Whenever I deal with forms that can be filled out by users who are not logged in (in other words, untrusted users), I do several things to make sure it is safe to store in the database:
Verify that all of the expected fields are present in $_POST (none were removed using a tool such as Firebug)
Verify that there are no unexpected fields in $_POST. This way, a field in the database doesn't accidentally get written over.
Verify that all of the expected fields are of the expected type (almost always "string"). This way, problems don't come up if a malicious user is tinkering with the code and adds "[]" to the end of a field name, thus making PHP consider the field to be an array and then performing checks on it as though it were a string.
Verify that all of the required fields were filled out.
Verify that all of the fields (both required and optional) were filled out correctly (for example, email addresses and phone numbers are in the expected format).
Related to the previous item, but worthy of being its own item: verify that fields that are dropdown menus were submitted with values that are actually in the dropdown menu. Again, a user could tinker with the code and change the dropdown menu to be anything they want.
Sanitize all fields just in case the user intentionally or unintentionally included malicious code.
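To make the list concrete, the whitelist-style checks (items 1–4 and the dropdown check; per-field format validation and sanitization are omitted) might look something like this. This is a framework-neutral sketch in Python rather than PHP, and the field names and `validate` signature are made up for illustration:

```python
EXPECTED_FIELDS = {"name", "email", "color"}            # hypothetical form fields
REQUIRED_FIELDS = {"name", "email"}
DROPDOWN_OPTIONS = {"color": {"red", "green", "blue"}}  # the menu's real values


def validate(post):
    """Apply the whitelist checks listed above to an untrusted submission.

    Returns a list of error messages; an empty list means the data
    passed these structural checks (format validation and sanitization
    would still follow).
    """
    errors = []
    if set(post) - EXPECTED_FIELDS:
        errors.append("unexpected fields were submitted")
    if EXPECTED_FIELDS - set(post):
        errors.append("expected fields are missing")
    for key, value in post.items():
        if not isinstance(value, str):  # e.g. the "field[]" array trick in PHP
            errors.append(f"{key} is not a string")
    for key in REQUIRED_FIELDS:
        if isinstance(post.get(key), str) and not post[key].strip():
            errors.append(f"{key} is required")
    for key, options in DROPDOWN_OPTIONS.items():
        if post.get(key) not in options:
            errors.append(f"{key} is not one of the dropdown options")
    return errors
```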
I don't believe that any of the above things are overkill because, as I mentioned, the user filling out the form is not trusted.
When it comes to admin backends, however, I'm not sure all of those things are necessary. These are the things that I still consider to be necessary:
Verify that all of the required fields were filled out.
Verify that all of the fields (both required and optional) were filled out correctly (for example, email addresses and phone numbers are in the expected format).
Sanitize all fields just in case the user intentionally or unintentionally included malicious code.
I'm considering dropping the remaining items in order to save time and have less code (and, therefore, more readable code). Is that a reasonable thing to do or is it worthwhile to treat all forms equally regardless of whether or not they are being filled out by a trusted user?
These are the only two reasons I can think of for why it might be wise to treat all forms equally:
The trusted user's credentials might be found out by an untrusted user.
The trusted user's machine could be infected with malware that tampers with forms. I have never heard of such malware and doubt it is something to really be worried about, but it is something to consider anyway.
Thanks!
Without knowing all the details, it's hard to say.
However, in general this feels like a situation where code re-use should be possible. In other words, it feels like this boiler-plate form validation shouldn't need to be re-written for each unique form. Instead, I would aim to create some reusable external class that could be used for any form.
You mentioned PHP and there are already lots of form validation classes available:
http://www.google.com/search?gcx=w&sourceid=chrome&ie=UTF-8&q=form+validation+php+class
Best of luck!
