Amazon Echo - Ask for an algorithm to ignore background commands

Amazon Echo - the algorithm for ignoring ambiguous background commands coming from a voice recording being played:
This is a hypothetical question based on the premise that it could happen: if one uses an Amazon Echo to play a voice recording or a video that contains sentences like "Alexa, could you ...", what will happen?
If the video/recording contains a sentence like "Alexa, could you stop the video?", what will happen?
If the video/recording contains a sentence like "Alexa, please increase the volume to 8", while at the same time you tell the Echo "Alexa, please decrease the volume to 4", could it distinguish which command it should fulfill?
Would the Amazon Echo be able to ignore the voice recording or video being played, and not misinterpret it as a real command from an actual human? What kind of algorithm is built into the Amazon Echo to deal with this situation?

When a device both plays sound and has a microphone, the manufacturer uses a digital signal processing technique called "echo cancellation" to subtract the sound the device makes from the signal picked up by the microphone. This includes phones, headsets, your computer (Skype does echo cancellation in software), and the Amazon Echo.
Because of echo cancellation, the Amazon Echo cannot hear itself, so it won't respond to commands that come out of its speakers.
There are lots of questions about echo cancellation on SO, which you can easily find now that you know the magic words. The algorithms are too complex for an SO answer, but you can generally get an open source implementation for whatever environment you're working in.
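To give a rough feel for what such an algorithm involves, here is a toy normalized-LMS (NLMS) adaptive filter in Python, the textbook building block of acoustic echo cancellation. This is only a minimal sketch of the general technique, not what the Echo actually runs; the function name and parameters are illustrative.
import numpy as np

def nlms_echo_cancel(mic, ref, taps=128, mu=0.5, eps=1e-8):
    """Toy NLMS echo canceller: subtract an adaptive estimate of the
    speaker signal (ref) from the microphone signal (mic)."""
    w = np.zeros(taps)            # adaptive filter coefficients
    out = np.zeros(len(mic))      # echo-cancelled output
    for n in range(taps, len(mic)):
        x = ref[n - taps:n][::-1]         # most recent reference samples
        echo_est = w @ x                  # estimated echo at this sample
        e = mic[n] - echo_est             # residual = mic minus estimated echo
        out[n] = e
        w += mu * e * x / (x @ x + eps)   # NLMS coefficient update
    return out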
That's the real answer... But your post reminds me of a book called "Gödel, Escher, Bach" by Douglas Hofstadter. He discusses a similar question about record players, which can pick up their own sounds.
That question has a super-interesting answer: http://genius.com/Douglas-hofstadter-contracrostipunctus-annotated

Related

Outputting Sound to Multiple Audio Devices Simultaneously

OK, the first issue. I am trying to write a virtual soundboard that will output to multiple devices at once. I would prefer OpenAL for this, but if I have to switch over to MS libs (I'm writing this initially on Windows 7) I will.
Anyway, the idea is that you have a bunch of sound files loaded up and ready to play. You're on Skype, and someone fails in a major way, so you hit the play button on the Price is Right fail ditty. Both you and your friends hear this sound at the same time, and have a good laugh about it.
I've gotten OAL to the point where I can play on the default device, and selecting a device at this point seems rather trivial. However, from what I understand, each OAL device needs its context to be current in order for the buffer to populate/propagate properly. Which means, in a standard program, the sound would play on one device, and then the device would be switched and the sound buffered then played on the second device.
Is this possible at all, with any audio library? Would threads be involved, and would those be safe?
Then, the next problem is, in order for it to integrate seamlessly with end-user setups, it would need to be able to either output to the default recording device, or intercept the recording device, mix it with the sound, and output it as another playback device. Is either of these possible, and if both are, which is more feasible? I think it would be preferable to be able to output to the recording device itself, as then the program wouldn't have to be running in order to have the microphone still work for calls.
If I understood correctly, there are mainly two questions here:
1. Is it possible to play a sound on two or more audio output devices simultaneously, and how can this be achieved?
2. Is it possible to loop back data through an audio input (recording) device so that it is played on the respective monitor, i.e., in your case, sent through Skype's audio stream to your conversation partner?
Answer to 1: This is absolutely feasible; all independent audio outputs of your system can play sounds simultaneously. For example, some professional audio interfaces (for music production) have 8, 16, or 64 independent outputs, all of which can play sound simultaneously. That means each output device maintains its own buffer that it consumes independently (apart from possible concurrency on shared memory used to feed those buffers).
How?
Most audio frameworks / systems provide functions to get a "device handle", which requires you to pass a callback for feeding the buffer with samples (so does OpenAL, for example). This callback will be invoked independently and asynchronously by the framework / system (ultimately by the audio device driver(s)).
Since this all works asynchronously, you don't necessarily need multi-threading here. All you need to do in principle is maintain two (or more) audio output device handles, each with a separate buffer-consuming callback, to feed the two (or more) separate devices, as in the sketch below.
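As a concrete illustration of that callback-per-device pattern, here is a small Python sketch using the sounddevice package (a PortAudio binding) rather than OpenAL or Qt; the device indices are placeholders you would replace with the ones listed by sd.query_devices().
import numpy as np
import sounddevice as sd   # assumption: the PortAudio-based "sounddevice" package

fs = 44100
t = np.arange(fs) / fs
tone = (0.2 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)   # 1-second test tone

def make_callback(signal):
    pos = 0
    def callback(outdata, frames, time, status):
        nonlocal pos
        chunk = signal[pos:pos + frames]
        outdata[:len(chunk), 0] = chunk          # feed this device's buffer
        outdata[len(chunk):, 0] = 0              # pad with silence once the tone ends
        pos += frames
    return callback

# Hypothetical device indices -- list yours with sd.query_devices().
streams = [sd.OutputStream(device=dev, channels=1, samplerate=fs,
                           callback=make_callback(tone))
           for dev in (3, 5)]
for s in streams:
    s.start()            # each stream is fed asynchronously by its own callback
sd.sleep(1500)           # let the tone play out
for s in streams:
    s.stop()
    s.close()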
Note: You can also play several sounds on one single device. Most devices / systems allow this kind of resource sharing; in fact, mixing together all the sounds produced by the various programs (and thereby taking that heavy burden off the CPU) is one of the purposes sound cards are made for. When you use one (physical) device to play several sounds, the concept is the same as with multiple devices: for each sound you get a logical device handle, only those handles refer to several "channels" of one physical device.
What should you use?
OpenAL seems a little like heavy artillery for this simple task, I would say (since you don't need that much portability, and probably don't plan to implement your own codecs and effects ;) )
I would recommend you to use Qt here. It is highly portable (Win/Mac/Linux) and it has a very handy class that will do the job for you: http://qt-project.org/doc/qt-5.0/qtmultimedia/qaudiooutput.html
Check the example in the documentation to see how to play a WAV file with a couple of lines of code. To play several WAV files simultaneously, you simply have to open several QAudioOutput instances (basically put the code from the example in a function and call it as often as you want). Note that you have to close / stop the QAudioOutput in order for the sound to stop playing.
Answer to 2: What you want to do is called a loopback. Only a very limited number of sound cards, i.e., audio devices, provide a so-called loopback input device, which would permit recording of what is currently output by the main output mix of the sound card, for example. However, even if such a device is provided, it will not permit you to loop anything back into the microphone input device. The microphone input device only takes data from the microphone A/D converter. This is deep in the hardware; you cannot mix in anything at your level there.
That said, it will be very, very hard (IMHO practically impossible) to have Skype send your sound to your conversation partner with a standard setup. The only thing I can think of would be to have an audio device with loopback capabilities (or simply a physical cable connecting a monitor line-out to some recording line-in), and then have Skype set up to use this looped-back device as its input. However, Skype will then no longer pick up your microphone, so you won't have a conversation ;)
Note: When saying "simultaneous" playback here, we are talking about synchronizing the playback of two sounds as far as real-time perception is concerned (within roughly 10-20 ms). We are not looking at actual synchronization at the sample level, and the related clock jitter and phase shifting issues that come into play when sending sound to two physical devices with two independent (free-running) clocks. Thus, when the application demands in-phase signal generation on independent devices, clock recovery mechanisms are necessary, which may be provided by the drivers or the OS.
Note: Virtual audio device software such as Virtual Audio Cable will provide virtual devices to achieve loopback functionality on Windows. Frameworks such as JACK Audio may achieve the same in a Unix environment.
There is a very easy way to output audio on two devices at the same time:
For Realtek devices you can use the audio-mixer "trick" (but this will give you a delay / echo);
For everything else (and without echo) you can use Voicemeeter (which is totally free).
I have explained BOTH solutions in this video: https://youtu.be/lpvae_2WOSQ

How can the audio data being sent to the speakers be captured from an application?

Is there an API that is suitable for doing this? A possible application of this is for writing a visualiser, and to play with real time signal processing.
EDIT: The operating system in question is Windows. On Linux, a roundabout way to accomplish this is with Jack, but I'm hoping for a way to read the data in the audio buffer without having to couple apps to Jack.
EDIT: A good answer is found here.
If the sound board used for playback has a recording device/line like "Stereo Mix", "What U Hear", etc., then it is enough to write a simple recording application that can record from a specified recording device/line and capture from that "Stereo Mix" line, as in the sketch below.
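As a rough illustration (assuming the Python sounddevice and soundfile packages, and that the "Stereo Mix" line is enabled in the Windows sound settings), recording "what you hear" can be as simple as:
import sounddevice as sd   # assumption: PortAudio bindings
import soundfile as sf     # assumption: libsndfile bindings

# Pick the first input line whose name suggests it is the board's loopback mix.
mix = next(i for i, d in enumerate(sd.query_devices())
           if "Stereo Mix" in d["name"] and d["max_input_channels"] > 0)

fs = 44100
recording = sd.rec(int(5 * fs), samplerate=fs, channels=2, device=mix)
sd.wait()                                   # block until the 5 seconds are captured
sf.write("what_u_hear.wav", recording, fs)  # save whatever the sound card was playing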
The general case (for "all sound boards") will require writing a special driver. Examples of applications with such special drivers: Virtual Audio Cable (http://software.muzychenko.net/eng/vac.html) and Total Recorder (http://www.totalrecorder.com).

RPG dialogue engine / structure [closed]

I've always been interested in the data structures involved in an RPG (Role-Playing Game). In particular, I'm curious about dialogue and events based actions.
For example: If I approach an NPC at point x in the game, with items y and quests z, how would I work out what the NPC needs to say? Branching dialogue and responding to player input seems as trivial as having a defined script, and user input causes the script reader to jump to a particular line in the script, which has a corresponding set of response lines (much like a choose your own adventure)
However, tying in logic to work out if the player has certain items, and completed certain quests seems to really ruin this script based model.
I'm looking for ideas (not necessarily programming language examples) of how to approach all of this dialogue and logic, and separate it out so that it's very easy to add new branching content, without delving into too much code.
This is really an open question. I don't believe there's a single solution, but it'd be good to get the ball rolling with some ideas. As more of a designer than a programmer, I'm always interested in ways to separate content and code.
For example: If I approach an NPC at point x in the game, with items y and quests z, how would I work out what the NPC needs to say? Branching dialogue and responding to player input seems as trivial as having a defined script, and user input causes the script reader to jump to a particular line in the script, which has a corresponding set of response lines (much like a choose your own adventure)
However, tying in logic to work out if the player has certain items, and completed certain quests seems to really ruin this script based model.
Not at all. You simply factor the conditionals into the data.
Let's say you have your list of dialogues, numbered 1 to 400 or whatever like the Choose Your Own Adventure book examples. I assume each dialogue may consist of the text spoken by the NPC, followed by a list of responses available to the player.
So the next step is to add the conditionals in there, by simply attaching conditions to each response. The easiest way is to do this with a scripting language, so you have a short and simple piece of code that returns True if this response is available to the player and False if it is not.
e.g. (XML format, but could be anything):
<dialogue id='1'>
  <text>
    Couldst thou venture forth and kill me 10 rats, perchance?
  </text>
  <response condition="True" nextDialogue='2'>
    Verily! Naught could be better than slaying thy verminous foes. Ten ratty
    carcasses shall I bring unto thee.
  </response>
  <response condition="rats_left_in_world() &lt; 10" nextDialogue='3'>
    Nay, brother! Had thou but ten rats remaining, my sword would be thine,
    but tis not to be.
  </response>
</dialogue>
In your scripting language, you'd need a 'rats_left_in_world' function that you can call to retrieve the value in question.
What if you have no scripting language? Well, you could have the programmer code an individual condition for each situation in your dialogue - a bit tedious, but not all that difficult if your dialogue is written up-front. Then just refer to each condition by name in the conversation script.
A more advanced scheme, still not requiring a scripting language, might use a tag for each condition, like so:
<response>
  <condition type='min_level' value='50'/>
  Sadly squire, my time is too valuable for the likes of thee. Get thyself a
  farm hand or stable boy to do thy bidding!
</response>
You can add as many conditions in there as you need, as long as they can be easily specified with one or two values. If all conditions are met, the response is available.
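For illustration, here is a minimal Python sketch of how an engine might consume the attribute-style conditions shown earlier; rats_left_in_world and the use of eval as a stand-in for a real scripting language are assumptions, not a prescription.
import xml.etree.ElementTree as ET

# Hypothetical game-state hook exposed to dialogue conditions.
def rats_left_in_world():
    return 7

CONDITION_ENV = {"rats_left_in_world": rats_left_in_world}

def available_responses(dialogue_xml):
    """Return (nextDialogue, text) for each response whose condition holds."""
    node = ET.fromstring(dialogue_xml)
    picks = []
    for resp in node.findall("response"):
        cond = resp.get("condition", "True")
        # eval() stands in for a real scripting language; its scope is restricted
        # to the handful of game-state functions deliberately exposed above.
        if eval(cond, {"__builtins__": {}}, dict(CONDITION_ENV)):
            picks.append((resp.get("nextDialogue"), resp.text.strip()))
    return picks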
It's interesting; there seems to be a core idea being missed here. We're having a discussion that relates to a programmer performing the task. Indeed, the code examples above are coupled to code, not content.
In game development, it's the content developers that we programmers want to empower. They will not (this is very important) look at code. Period. Now and again you get a technical artist or technical designer, and they're wonderful and don't mind it; but, the majority of content authors are not technically inclined.
I understand the question is for your own edification; but, it should be pointed out that, in industry, when we solve these types of problems our end users (the people utilizing the technology we're developing) are not engineers.
A system like this (branching dialogue) requires a representation in a tool that is relatively intuitive to use. For example, Unreal's Kismet visual scripting system could be utilized.
Essentially, the data structures (more than likely a branching tree, as it's easy to represent/debug/etc.) would be crafted by a programmer, as would the nodes that represent the objects in script. The system, with its ability to link to world objects (more than likely also represented by nodes in visual scripting), etc., would then be crafted and the whole kit and caboodle linked together in some glorious bit of elegant code.
After all of that, a designer would actually be able to build a visual representation of the dialogue branching in the visual scripting language. This would be map-encounter specific, more than likely. Of course, you could procedurally generate these; but, that's more of a programmer desire than a designer's.
Just thought I'd add that bit of knowledge and insight.
EDIT: Noticed there's an XML example. I'm not sure what other designers/artists/etc. feel about it; but, the ones I've worked with cringe at the idea of touching a text file.
I'd venture to say that most modern games (be they RPGs, action games, anything above basic card/board games) generally consist of several components: The display engine, the core data structures, and typically a secondary scripting engine. One example which was popular for a time (and may still be; I haven't even spoken to a game developer in years) was Lua.
The decision-making you're talking about (events, conversation branches, etc) is typically handled by the secondary scripting engine, as the scripting languages are more flexible and typically easier to use for the game's designers. Again, most of the real story-driven or game-driving logic will actually happen here, where it can be swapped out and changed relatively easily. (At least, compared to running a full build of all the code!)
The primary game engine combines the data structures related to the world (geometry, etc), the data structures related to the player(s) and other actor(s) needed, and the scripts to drive the encounters, and uses all of that to display the final, integrated environment.
You can certainly use a scripting language to handle dialogue. Basically a script might look like this:
ShowMessage("Hello " + hero.name + ", how can I help you?")
choices = { "Open the door for me", "Tell me about yourself", "Nevermind" }
chosen = ShowChoices(choices)
if chosen == 0
if hero.inventory["gold key"] > 0
ShowMessage("You have the key! I'll open the door for you!")
isGateOpen = true
else
ShowMessage("I'm sorry, but you need the gold key")
end if
else if chosen == 1
if isGateOpen
ShowMessage("I'm the gate keeper, and the gate is open")
else
ShowMessage("I'm the gate keeper and you need gold key to pass")
end if
else
ShowMessage("Okay, tell me if you need anything")
end if
This is fine for most games. The scripting language can be simple and you can write more complicated logical branches. Your engine will have some representation of the world that is exposed to the scripting language. In this example, this means the name of the hero and the items in the inventory, but you could expose anything you like. You also define functions that could be called by scripts to do things like show a message or play some sound effect. You need to keep track of some global data that is shared between scripts, such as whether a door is open or a quest is done (perhaps as part of the map and quest classes).
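A bare-bones sketch of that host/script boundary, using Python itself in place of a dedicated scripting language; every name here (GameState, ShowMessage, and so on) is made up for illustration.
# Minimal sketch of exposing engine state and functions to dialogue scripts.
class GameState:
    def __init__(self):
        self.hero = {"name": "Aria", "inventory": {"gold key": 1}}
        self.flags = {"isGateOpen": False}    # shared data persisted between scripts

def run_dialogue_script(source, state, show_message, show_choices):
    # Only these names are visible inside a dialogue script.
    exposed = {
        "hero": state.hero,
        "flags": state.flags,
        "ShowMessage": show_message,
        "ShowChoices": show_choices,
    }
    exec(source, {"__builtins__": {}}, exposed)   # exec stands in for the script interpreter
The point is simply that the script sees a curated slice of the world (hero, flags) plus a few engine functions, and nothing else.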
In some games, however, scripting can get tedious, especially if the dialogue is more dynamic and depends on many conditions (say, character mood and stats, NPC knowledge, weather, items, etc.). Here it is possible to store your dialogue tree in some format that allows easily specifying preconditions and outcomes. I don't know if this is the way to do it, but I once asked a question about storing game logic in XML files. I've found this approach to be effective for my game (in which dialogue is heavily dependent on many factors). In particular, in the future I could easily make a simple dialogue editor that doesn't require much scripting and allows you to simply define dialogue and branches with a graphical user interface.
I recently had to develop something for this, and opted for a very basic text file structure. You can see the resulting code and text format at:
https://github.com/scottbw/dialoguejs
There is a tradeoff between sophistication of scripting and ease of editing for non-programmers.
I've opted for a very simple solution for the dialogue, and handle triggering of related game events separately in a secondary scripting language.
Eventually I might add some way of adding "stage directions" to the text dialogue format that are used to trigger events in the secondary scripting engine, but again without needing to put anything that looks like code in the dialogue file itself.
That's an excellent question. I had to solve it a few times for clients. We started with an XML structure quite similar to yours, and now we use JSON. You can see an example here: http://www.branchtrack.com/projects/on029pq6.json or https://dl.dropboxusercontent.com/u/11433463/branchtrack/on029pq6.json (prettify it for readability).
Full disclosure: the link above is generated in BranchTrack, an online editor for branching dialogues, and I am the CEO. Feel free to ask anything.
I recently tackled a problem like this while making Chat Mapper. What I do is graphically plot out the dialogues as nodes in a tree and then each node has a condition and a script associated with them. As you traverse through the tree and hit a node, you check the condition to see whether or not that node is valid, and if it is, you execute the script associated with that node. It's a fairly simple idea but seems to work well from our testing. We are using a .NET Lua interpreter for the scripts.
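The node-plus-condition-plus-script idea can be captured in a few lines; the Python structure below is a generic sketch of that pattern, not Chat Mapper's actual data model, and the eval-based condition check is only a placeholder for a real script interpreter.
from dataclasses import dataclass, field

@dataclass
class DialogueNode:
    text: str
    condition: str = "True"              # must evaluate truthy for the node to be valid
    script: str = ""                     # executed when the node is reached
    children: list = field(default_factory=list)

def advance(node, env, run_script):
    """Return the first valid child of node, running its script on entry."""
    for child in node.children:
        if eval(child.condition, {"__builtins__": {}}, dict(env)):
            if child.script:
                run_script(child.script, env)
            return child
    return None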
For my solution I developed a custom text file format consisting of seven lines of text per node. Each line can be a strided list or just a text line. Each node has a position number. The last digit of the number is a type, so there are 10 different types of nodes, such as fresh questions, confirmations, repeating actions based on prior results, etc.
Each dialog activation begins with a select query to the data store, whose results can be compared against the members of a strided list to match up with the appropriate node. This is more brute-force than an if/then, but it keeps the text config file small since you don't need any syntax besides the stride separator. I use a system of wildcards to allow select query results to be inserted into the speech of the NPC.
Lastly, there are API hooks that allow custom scripts to interface in, in case the simple config file is not enough. I plan to make a web app GUI in Node.js to allow people to visually script the config files :D

Use NSSpeechRecognizer or alternative with audio file instead of microphone input?

Is it possible to use NSSpeechRecognizer with a pre-recorded audio file instead of direct microphone input?
Or is there any other speech-to-text framework for Objective-C/Cocoa available?
Added:
Rather than using voice input at the machine that is running the application, external devices (e.g., an iPhone) could be used to send just a recorded audio stream to that desktop application. The desktop Cocoa app would then process it and do whatever it's supposed to do using the assigned commands.
Thanks.
I don't see any obvious way to switch the input programmatically, though the "Speech" companion guide's first paragraph in the "Recognizing Speech" section seems to imply other inputs can be used. I think this is meant to be set via System Preferences, though. I'm guessing it uses the primary audio input device selected there.
I suspect, though, you're looking for open-ended speech recognition, which NSSpeechRecognizer is not. If you're looking to transform any pre-recorded audio into text (ie, make a transcript of a recording), you're completely out of luck with NSSpeechRecognizer, as you must give it an array of "commands" to listen for.
Theoretically, you could feed it the whole dictionary, but I don't think that would work since you usually have to give it clear, distinct commands. Its performance would suffer, I would guess, if you gave it a bunch of stuff to analyze for (in real time).
Your best bet is to look at third-party open source solutions. There are a few generalized packages out there (none specifically for Cocoa/Objective-C), but this poses another question: what kind of recognition are you looking for? There are two main forms of speech recognition: 'trained' is more accurate but less flexible across different voices and recording environments, whereas 'open' is generally much less accurate.
It'd probably be best if you stated exactly what you're trying to accomplish.

What are some good examples showing that "I am not the user"?

I'm a software developer who has a background in usability engineering. When I studied usability engineering in grad school, one of the professors had a mantra: "You are not the user". The idea was that we need to base UI design on actual user research rather than our own ideas as to how the UI should work.
Since then I've seen some good examples that seem to prove that I'm not the user.
User trying to use an e-mail template authoring tool, and gets stuck trying to enter the pipe (|) character. Problem turns out to be that the pipe on the keyboard has a space in the middle.
In a web app, user doesn't see content below the fold. Not unusual. We tell her to scroll down. She has no idea what we're talking about and is not familiar with the scroll thumb.
I'm listening in on a tech support call. Rep tells the user to close the browser. In the background I hear the Windows shutdown jingle.
What are some other good examples of this?
EDIT: To clarify, I'm looking for examples where developers make assumptions that turn out to be horribly false about what users will know, understand, etc.
I think one of the biggest examples is that expert users tend to play with an application.
They say, "Okay, I have this tool, what can I do with it?"
Your average user sees the ecosystem of an operating system, filesystem, or application as a big scary place where they are likely to get lost and never return.
For them, everything they want to do on a computer is task-based.
"How do I burn a DVD?"
"How do I upload a photo from my camera to this website."
"How do I send my mom a song?"
They want a starting point, a reproducible work flow, and they want to do that every time they have to perform the task. They don't care about streamlining the process or finding the best way to do it, they just want one reproducible way to do it.
In building web applications, I long ago learned to make the start page of my application something separate from the menus, with task-based links to the main things the application does, in a really big font. For the average user, this increased usability hugely.
So remember this: users don't want to "use your application", they want to get something specific done.
In my mind, the most visible example of "developers are not the user" is the common Confirmation Dialog.
In almost any document-based application, from the most complex (MS Word, Excel, Visual Studio) through the simplest (Notepad, Crimson Editor, UltraEdit), when you close the application with unsaved changes you get a dialog like this:
The text in the Untitled file has changed.
Do you want to save the changes?
[Yes] [No] [Cancel]
Assumption: Users will read the dialog
Reality: With an average reading speed of 2 words per second, this would take 9 seconds. Many users won't read the dialog at all.
Observation: Many developers read much much faster than typical users
Assumption: The available options are all equally likely.
Reality: Most (>99%) of the time users will want their changes saved.
Assumption: Users will consider the consequences before clicking a choice
Reality: The true impact of the choice will occur to users a split second after pressing the button.
Assumption: Users will care about the message being displayed.
Reality: Users are focussed on the next task they need to complete, not on the "care and feeding" of their computer.
Assumption: Users will understand that the dialog contains critical information they need to know.
Reality: Users see the dialog as a speedbump in their way and just want to get rid of it in the fastest way possible.
I definitely agree with the bolded comments in Daniel's response--most real users frequently have a goal they want to get to, and just want to reach that goal as easily and quickly as possible. Speaking from experience, this goes not only for computer novices or non-techie people but also for fairly tech-savvy users who just might not be well-versed in your particular domain or technology stack.
Too frequently I've seen customers faced with a rich set of technologies, tools, utilities, APIs, etc. but no obvious way to accomplish their high-level tasks. Sometimes this could be addressed simply with better documentation (think comprehensive walk-throughs), sometimes with some high-level wizards built on top of command-line scripts/tools, and sometimes only with a fundamental re-prioritization of the software project.
With that said... to throw another concrete example on the pile, there's the Windows start menu (excerpt from an article on The Old New Thing blog):
Back in the early days, the taskbar didn't have a Start button.
...
But one thing kept getting kicked up by usability tests: People booted up the computer and just sat there, unsure what to do next.
That's when we decided to label the System button "Start".
It says, "You dummy. Click here." And it sent our usability numbers through the roof, because all of a sudden, people knew what to click when they wanted to do something.
As mentioned by others here, we techie folks are used to playing around with an environment, clicking on everything that can be clicked on, poking around in all available menus, etc. Family members of mine who are afraid of their computers, however, are even more afraid that they'll click on something that will "erase" their data, so they'd prefer to be given clear directions on where to click.
Many years ago, in a CMS, I stupidly assumed that no one would ever try to create a directory with a leading space in the name .... someone did, and made many other parts of the system very very sad.
On another note, trying to explain to my mother that she has to click the Start button to turn the computer off is just a world of pain.
How about the apocryphal tech support call about the user with the broken "cup holder" (the CD-ROM tray)?
Actually, one that bit me was cut/paste -- I always trim my text inputs now since some of my users cut/paste text from emails, etc. and end up selecting extra whitespace. My tests never considered that people would "type" in extra characters.
Today's GUIs do a pretty good job of hiding the underlying OS. But the idiosyncrasies still show through.
Why won't the Mac let me create a folder called "Photos: Christmas 08"?
Why do I have to "eject" a mounted disk image?
Can't I convert a JPEG to TIFF just by changing the file extension?
(The last one actually happened to me some years ago. It took forever to figure out why the TIFF wasn't loading correctly! It was at that moment that I understood why Apple used to use embedded file types (as metadata) and to this day I do not understand why they foolishly went back to file extensions. Oh, right; it's because Unix is a superior OS.)
I've seen this plenty of times; it seems to be something that always comes up. I seem to be the kind of person who can pick up on these kinds of assumptions (in some circumstances), but I've still been blown away by what the user was doing many other times.
As I said, it's something I'm quite familiar with. Some of the software I've worked on is used by the general public (as opposed to specially trained people) so we had to be ready for this kind of thing. Yet I've seen it not be taken into account.
A good example is a web form that needs to be completed. We need this form completed; it's important to the process. The user is no good to us if they don't complete the form, but the more information we get out of them the better. Obviously these are two conflicting demands. If we just present the user with a screen of 150 fields (a random large number), they'll run away scared.
These forms had been revised many times in order to improve things, but users weren't asked what they wanted. Decisions were made based on the assumptions or feelings of various people, but how close those feelings were to actual customers wasn't taken into account.
I'm also going to mention the corollary to Bevan's "the users will read the dialog" assumption. Operating off the "users don't read anything" assumption makes much more sense. Yet people who argue that users don't read anything will often suggest putting in bits of long, dry explanatory text to help users who are confused by some random poor design decision (like using checkboxes for something that should be radio buttons because you can only select one).
Working any kind of tech support can be very informative on how users do (or do not) think.
Pretty much anything at the OS level in Linux is a good example, from the choice of names ("grep" obviously means "search" to the user!) to the choice of syntax ("rm *" is good for you!).
[I'm not hatin' on Linux, it's just chock full of Unix-legacy un-usability examples]
How about the desktop and wallpaper metaphors? It's getting better, but 5-10 years ago they were the bane of a lot of remote tech support calls.
There's also the backslash vs. slash issue, the myriad names for the various keyboard symbols, and the antiquated print screen button.
Modern operating systems are great because they all support multiple user profiles, so everyone that uses my application on the same workstation can have their own settings and user data. Only, a good portion of the support requests I get are asking how to have multiple data files under the same user account.
Back in my college days, I used to train people on how to use a computer and the internet. I'd go to their house, set up their internet service, and show them email and everything. Well, there was this old couple (late 60s). I spent about three hours showing them how to use their computer, made sure they could connect to the internet and everything. I left feeling very happy.
That weekend I get a frantic call about them not being able to check their email. Now, I'm in the middle of enjoying my weekend, but I decide to help them out and walk through all the steps. Thirty minutes later, I ask them if they have two phone lines... "of course we only have one." Needless to say, they had forgotten that they needed to connect to the internet first (yes, this was back in the day of modems).
I suppose I should have set up shortcuts like DUN -> Check Email Step 1, Eudora -> Check Email Step 2....
What users don't know, they will make up. They often work with an incorrect theory of how an application works.
Especially for data entry, users tend to type much faster than developers which can cause a problem if the program is slow to react.
Story: Once upon a time, before the personal computer, there was timesharing. A timesharing company's customer rep told me that once, when he was giving a "how to" class to two or three nice older women, he told them how to stop a program that was running (in case it was started in error or taking too long). He had one of the students type ^K, and the timesharing terminal responded "Killed!". The lady nearly had a heart attack.
One problem that we have at our company is employees who don't trust the computer. If you computerize a function that they do on paper, they will continue to do it on paper, while entering the results in the computer.
