Is WUI a subset of GUI?

Let's say I'm developing a web user interface for a program named xxx. I know in advance that my potential users won't know what WUI or UI means and that they'll search for GUI instead, so I want to name my program xxxGUI even though it will be a web user interface. To make sure I'm not making a mistake or being misleading: is this a bad way to name my WUI, or is a web user interface already a subset of graphical user interfaces?

Yes, a web user interface is a GUI as well. Your suspicion is right: GUI is a very common term, while WUI is not and might be misleading. (Searching for GUI on Google returns 385M hits, while WUI returns just 8M, the first of which don't refer to user interfaces at all.) So just go ahead and call your WUI a GUI.

Related

Can aws-lex be used to build a conversation flow bot to reply with different answers based on user's input?

Can aws-lex be used to build a conversation flow bot?
For example:
Thank you very much!
Reason for all this: we have our own "dialogue builder" and "bot-service".
Our own "Dialogue Builder" is roughly similar to the Amazon Connect dialogue builder, and our own "Bot-service" is similar to the Microsoft Bot Framework. Previously we used microsoft-luis to get the "intention" of a sentence, while using our own dialogue builder and bot-service to build the conversation/dialogue flow, e.g. if the user says "yes" go to one flow and if the user says "no" go to a different flow (can this be done with slots?), essentially a binary tree :)
Now we are switching from luis to aws-lex and are trying to work out whether we can use just the aws-lex UI and drop our own dialogue builder/bot-service entirely. My understanding is that to use aws-lex without some kind of dialogue builder we would need to write a lot of if/case statements once the data gets large, right? What is your suggestion? One option would be to use "Amazon Connect" for its dialogue builder so we don't have to write a lot of if statements, but if we are going to use a dialogue builder anyway, we could just keep our own (old) one. What do you think?
Questions:
1) Is there a way to do something like this in aws-lex or not? I tried using slots/prompts/lambda but I am not able to go to the 2nd or 3rd level of depth in the diagram. Can it be done somehow?
2) Do I have to use lambda and write switch/if conditions each time it has to change the flow (e.g. if the answer is yes then reply with this, and if no then reply with that)? A rough sketch of what I mean is below, after these questions.
3) If #2 is true, is it still usable by a non-developer? Even if I write ~1k-2k if conditions, a person (non-developer) who tries to edit a dialogue through the UI won't be able to do it, right? (So doesn't this mean that we aren't really using the UI of aws-lex, we are just writing "if conditions" in code and using aws-lex only to get the "intention", right?)
4) Would it be possible to give an example and show how to build such a flow? So far, using slots, the replies/responses don't change based on the user's input: whether the user says "no" or "yes", it replies with the same path/answer. Is there a way to change the reply based on the user's input?
5) If #3 is not possible and a non-developer can't use the aws-lex UI to build something like this, should we use a custom dialogue builder that does?
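Roughly, what I have in mind for question 2 is a Lambda code hook along these lines. This is just a sketch to show the branching: the intent/slot names (like "WantsRefund") are made up, and I'm assuming the Lex Lambda event/response format with currentIntent and dialogAction.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Sketch of a Lex code hook that changes the reply/flow based on a yes/no answer.
public class FlowBranchHandler implements RequestHandler<Map<String, Object>, Map<String, Object>> {

    @Override
    @SuppressWarnings("unchecked")
    public Map<String, Object> handleRequest(Map<String, Object> event, Context context) {
        Map<String, Object> currentIntent = (Map<String, Object>) event.get("currentIntent");
        Map<String, Object> slots = (Map<String, Object>) currentIntent.get("slots");
        String answer = (String) slots.get("WantsRefund"); // hypothetical yes/no slot

        // The if/switch branching: the reply depends on the user's input.
        if ("yes".equalsIgnoreCase(answer)) {
            return close("Fulfilled", "Okay, let's start the refund flow.");
        } else if ("no".equalsIgnoreCase(answer)) {
            return close("Fulfilled", "No problem. Is there anything else I can help with?");
        }
        return close("Failed", "Sorry, I didn't get that. Please answer yes or no.");
    }

    // Builds a Lex "Close" dialog action with a plain-text message.
    private Map<String, Object> close(String fulfillmentState, String text) {
        Map<String, Object> message = new HashMap<>();
        message.put("contentType", "PlainText");
        message.put("content", text);

        Map<String, Object> dialogAction = new HashMap<>();
        dialogAction.put("type", "Close");
        dialogAction.put("fulfillmentState", fulfillmentState);
        dialogAction.put("message", message);

        Map<String, Object> response = new HashMap<>();
        response.put("dialogAction", dialogAction);
        return response;
    }
}

Multiplying this by every branch in the diagram is exactly the pile of if conditions I'm worried about.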
Thank you very very much!
It sounds like you're switching from the Microsoft Bot Framework to find a simpler solution for structured flows without entity recognition.
You may want to research Microsoft's QnA Maker multi-turn ability. It's supported in the QnA Maker online editor, but not in the Bot Framework SDK (yet). They do have an example bot that uses it through the Web API.
https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/multiturn-conversation
I realize this doesn't answer your Lex question, but it might address your concern.

Common asserts in any automation project

Can anyone briefly explain the common asserts to consider in any automation project, whether it is an in-house or a public web application? For example, I am presently using Selenium (Java) to automate an eCommerce web application. As this is my first website to automate, I am running out of ideas for what I can verify, except for the few I already know, mentioned below:
1. Verify each page title
2. Verify a button, text, link, image, custom text, etc.
Apart from these, is there anything else I can verify? Please feel free to correct my question, and if you have worked on various automation projects, which areas did you add asserts to, to verify or validate something on a webpage?
Basically, you do automation to decrease the execution time of regression cycles by automating the test cases related to the functionality of the application. So, first develop test cases, using test design techniques like ECP (equivalence class partitioning), BVA (boundary value analysis), etc.
Each test case must have an assertion on the expected result or functionality (otherwise it wouldn't be a test case).
This assertion can be anything, like:
whether login is successful after giving valid credentials,
whether an error message is shown after entering wrong credentials, etc.
Selenium helps us automate web interactions (navigation, clicks, entering text, etc.) and doesn't perform any assertions for you.
Assertions are provided by frameworks like JUnit and TestNG (in Java) through their assertion classes. There is also built-in language support, like the assert keyword in Python and Java (http://docs.oracle.com/javase/7/docs/technotes/guides/language/assert.html).
So, the items you mentioned in your question as common assertions (verify each page title, etc.) are just web interactions; they don't decide whether a test is PASS or FAIL. It is you who defines the criteria for whether a test is PASS or FAIL.
For example, take a test case for a successful login.
Here, you automate web interactions like navigating to the login page, entering credentials, and clicking the Submit button.
Then, to validate whether you logged in successfully, you look for a web element on the logged-in user's home page (like a "welcome user" message), just as you would in a manual check. In automation, you find the "welcome user" text through a web element, and then use the assertions provided by the framework to assert whether the expected message is present on the page, like:
Assertions.assertEquals(expected_message, actual_message); // just an example (JUnit 5 style)
If expected_message and actual_message are the same, the method doesn't throw any exception, and the framework marks the test case as PASS.
If expected_message and actual_message are not the same, assertEquals throws an AssertionError, and the framework marks the test case as FAIL.
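Putting it together, a minimal sketch of such a login test with Selenium and TestNG could look like this. The URL, locators, credentials, and expected message are placeholders for whatever your application really uses.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginTest {

    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new ChromeDriver();
    }

    @Test
    public void validLoginShowsWelcomeMessage() {
        // Web interactions: navigate, enter credentials, submit.
        driver.get("https://example.com/login");                    // placeholder URL
        driver.findElement(By.id("username")).sendKeys("testuser"); // placeholder locators and data
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("submit")).click();

        // Assertion: this is the part that decides PASS or FAIL.
        String actualMessage = driver.findElement(By.id("welcome-banner")).getText();
        Assert.assertEquals(actualMessage, "Welcome, testuser"); // TestNG order is (actual, expected)
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}

Everything before the assert is just interaction; remove the assert and the test would "pass" no matter what the page shows.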

AutoUnlock a Windows User Session

Recently, I have been working on a CredentialProvider in order to automatically unlock a Windows Vista (or more recent) user session; the trigger can be any event, so let's say the expiry of a timer.
For that I read some useful articles on the subject, in particular on the change from GINA to this new architecture: http://msdn.microsoft.com/en-us/magazine/cc163489.aspx
I think, like everyone in the process of creating a custom CredentialProvider, I didn’t start from scratch but from the sample code provided by Microsoft. And then I tried to change the behaviour (things like logging) in the different functions.
So in the end I can use the custom CredentialProvider and enter the SetUsageScenario method, but I still cannot reach the SetSerialization or GetSerialization method. From what I've understood in the technical documentation on CredentialProviders (also provided by Microsoft), these two methods should be called automatically. Is there something I missed?
Also, my original idea was to get an authentication package using Kerberos in order to perform an implicit user authentication. I got this idea by seeking information on other SO or MSDN threads like
Is this approach the right one?
Thank you very much for your time answering my questions. Any clarifications are welcome, even if they don't directly resolve my problems :-)
First of all, you need to set the autologon flag to true in your implementations of the ICredentialProviderCredential::SetSelected(BOOL *pbAutoLogon) and ICredentialProvider::GetCredentialCount methods.
Next, you need to call ICredentialProviderEvents::CredentialsChanged when your timer fires.
LogonUI will recreate your credentials, and because autologon is set to true it will call your GetSerialization() method.
The SetSerialization and GetSerialization functions of your provider are called by LogonUI. After the user enters a username/password and presses ENTER, LogonUI calls GetSerialization and passes a pointer, as one of the four parameters, that will point to a CREDENTIAL_PROVIDER_CREDENTIAL_SERIALIZATION structure created and filled by you; this structure is then sent by LogonUI to Winlogon to perform the authentication. I don't know how to make LogonUI call GetSerialization from your credential provider code, and as far as I know you can't call GetSerialization on your own: where would you pass your filled CREDENTIAL_PROVIDER_CREDENTIAL_SERIALIZATION structure if no one requested it, given that only LogonUI can pass it to Winlogon?
There is a document called "Credential Provider Technical Reference" where you can read more details about credential providers. In the Shell samples folder there is a strange folder called "Autologon"; maybe it will help you. Good luck!

How can I work with Windows security groups without knowing their localized names in advance?

I've searched around online but can't find what I'm after. Basically, during an install, we fire off a separate executable that brute-forces a few folders to be read/write enabled for the user group "EVERYONE".
Now, the person who wrote this never took the system language into consideration. I had a call with a customer in France whose installation kept failing because "EVERYONE" isn't what that system expects.
I'm after an API call to Windows that would return a security group name that is "safe" to use in a localized environment. Essentially, I'm looking to safely edit this code so that instead of hardcoding "EVERYONE", we call a function.
The fundamental mistake here is not so much the use of EVERYONE, but rather that the code is using names at all. Instead of using names you should use the well-known SIDs. In your case you need S-1-1-0.
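If other parts of your tooling still need a name, you can resolve the localized name from the SID at runtime instead of hardcoding it. Here is a rough sketch in Java using the JNA platform library as a wrapper around LookupAccountSid; treat the exact helper names as an assumption to verify against your JNA version (in C/C++ you would call LookupAccountSid or CreateWellKnownSid directly).

import com.sun.jna.platform.win32.Advapi32Util;

public class EveryoneName {
    public static void main(String[] args) {
        // S-1-1-0 is the well-known SID for the "Everyone" group on every
        // Windows installation, regardless of the UI language.
        Advapi32Util.Account everyone = Advapi32Util.getAccountBySid("S-1-1-0");
        System.out.println(everyone.name); // "Everyone" on English systems, the localized
                                           // equivalent (e.g. on a French system) elsewhere
    }
}

If you can change the code that actually grants the permissions, it's better still to pass the SID itself; icacls, for example, accepts a SID when it's prefixed with an asterisk, as in icacls somefolder /grant *S-1-1-0:(OI)(CI)M.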

Session 0 Isolation

Vista introduces a new security restriction that prevents Session 0 from accessing hardware like the video card, and the user no longer logs into Session 0. I know this means that I cannot show the user a GUI; however, does it also mean I can't show one at all? The way my code is set up right now, it would be more work to make it command-line only, but if I can use my existing code and just manage the GUI programmatically, it would take a lot less code.
Is this possible?
The article from MSDN says this:
• A service attempts to create a user interface (UI), such as a dialog box, in Session 0. Because the user is not running in Session 0, he or she never sees the UI and therefore cannot provide the input that the service is looking for. The service appears to stop functioning because it is waiting for a user response that does not occur.
Which makes me think it is possible to have an automated UI, but someone told me that you couldn't use SendKeys with a service because it was disabled in Session 0.
EDIT: I don't actually need to show the user the GUI
You can show one; it just doesn't show up.
There is a little notification in the taskbar about there being a GUI window and a way to switch to it.
Anyway, there actually is a Terminal Services API call to switch the active session that you could use if you really needed it to show up.
You can write a separate process which provides the UI for your service process. The communication between your UI and service process can be done in various ways (search the web for "inter process communication" or "IPC").
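For illustration, here is a minimal sketch of that split using a localhost socket as the IPC channel. The port number and the STATUS command are made up, a real Windows service would just as likely use named pipes, and Java is used here only to keep the sketch short.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Service side (Session 0): does the real work and exposes a tiny command channel.
public class ServiceEndpoint {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(50505)) { // hypothetical local port
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String command = in.readLine();
                    out.println("STATUS".equals(command) ? "OK: service is running" : "ERROR: unknown command");
                }
            }
        }
    }
}

// UI side (the user's session): shows the windows and talks to the service over the socket.
class UiClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("127.0.0.1", 50505);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("STATUS");
            System.out.println("Service replied: " + in.readLine()); // would be shown in the real UI
        }
    }
}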
Your service can have a GUI. It's simply that no human will ever see it. As the MSDN quote suggests, a service can display a dialog box. The call to MessageBox won't fail; it just won't ever return — there won't be anyone to press its buttons.
I'm not sure what you mean by wanting to "manage the GUI." Do you actually mean pretending to send input to the controls, as with SendInput? I see no reason that it wouldn't be possible; you'd be injecting input into your own program's queue, after all, and SendInput's Vista-specific warnings don't say anything about that. But I think you'd be making things much more complicated than they need to be. Revisit the idea of altering your program to have no UI at all. (That's not the same as having a console program. Consoles are UI.)
Instead of simulating the mouse messages necessary to click a button, for instance, eliminate the middle-man and simply call the function that the button-click event handler would have called.
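In other words: keep the real work in a plain method, make the click handler a one-line wrapper around it, and let the service path call the method directly. Sketched in Java with made-up names; the idea is language-independent.

public class ReportTasks {

    // The real work lives in an ordinary method with no UI dependency.
    public void exportReport() {
        // ... write the report, upload it, etc.
    }

    // GUI build: the button's click handler is just a thin wrapper.
    public void onExportButtonClicked() {
        exportReport();
    }

    // Service build: no button and no simulated click; call the method directly.
    public void runScheduledExport() {
        exportReport();
    }
}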
