What techniques might be successful in monitoring UI events to identify friction points in the usability of an application?

Most battle-tested, real-world software contains extensive error checking and logging. We frequently employ complex logging systems to help diagnose, and occasionally predict, failures before they happen. We generally focus on reporting catastrophic failures on the server side.
These failures are important, of course, but I think there is another class of errors that is overlooked, and yet perhaps equally important. Whether you are using an iPhone, BlackBerry, laptop, desktop, or point-of-sale touch screen, user interaction is typically processed as discrete events. I suspect that identifying patterns of UI events can expose areas where the user has difficulty interacting efficiently with the application. I found an interesting academic paper on this subject here. I think the ideas presented in the paper are great, but perhaps other, simpler techniques might yield nice results. What are your ideas and experiences in this area?

Interesting paper. One of the things I get out of it is that it's not easy to make sense out of user event logs unless you have a very specific hypothesis you are trying to test. They can be very useful, for example, if you know it's taking users too long to complete Task X or they're failing to complete it altogether. It's clearly a whole different ballgame to try to analyze sequences without any other supporting information and make sense out of them (though it can be done if you use the sophisticated techniques mentioned in the paper).
One simpler method would be to measure the total time to complete a given task that you know is common and important. If it's a shopping application, for example, the time to complete checkout on a purchase would probably be something useful to collect. It's not quite that simple, though, because you'd have to at least take into account interruptions (e.g., the user's boss came into the room and he abandoned his shopping for actual work--not that I've ever done this :-P). You could have a simple rule that says: if there were no events logged for X seconds, assume the user is not paying attention to the screen.
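A minimal sketch of that rule, assuming a hypothetical event log of (timestamp, event name) pairs; the event names and the 60-second idle threshold are made up for illustration:

    from datetime import datetime, timedelta

    IDLE_GAP = timedelta(seconds=60)  # assumed attention cutoff; tune per application

    def active_task_time(events, start_event="checkout_start", end_event="checkout_done"):
        """Sum the time spent on a task, skipping any gap longer than IDLE_GAP
        (assume the user stopped paying attention during that gap)."""
        total = timedelta()
        prev = None
        in_task = False
        for ts, name in events:
            if name == start_event:
                in_task, prev = True, ts
                continue
            if in_task:
                gap = ts - prev
                if gap <= IDLE_GAP:  # only count "attentive" time
                    total += gap
                prev = ts
                if name == end_event:
                    break
        return total

    events = [
        (datetime(2009, 6, 28, 10, 0, 0), "checkout_start"),
        (datetime(2009, 6, 28, 10, 0, 20), "click_address_field"),
        (datetime(2009, 6, 28, 10, 5, 20), "click_submit"),  # 5-minute interruption
        (datetime(2009, 6, 28, 10, 5, 25), "checkout_done"),
    ]
    print(active_task_time(events))  # 0:00:25 -- the 5-minute gap is excluded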
Another simple thing you could do would be to check for obvious signs of errors, such as a user employing the "undo" facility or inputting information into an input box in a web app that causes a validation check to trigger (e.g., failing to enter required information or putting information in the wrong format). If certain input boxes result in a high number of errors, it might be a sign that you should be more flexible in allowing different formats (e.g., allow users to enter a date as "6/28/09", "6-28-09", or "june 28, 2009" instead of requiring a single format).
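A sketch of how those error signals could be tallied, assuming validation failures are logged as (field name, error type) records; the field names and the threshold of 3 are invented for the example:

    from collections import Counter

    # Assumed log format: one (field_name, error_type) record per validation failure.
    validation_failures = [
        ("date_of_birth", "bad_format"),
        ("date_of_birth", "bad_format"),
        ("email", "missing"),
        ("date_of_birth", "bad_format"),
    ]

    errors_per_field = Counter(field for field, _ in validation_failures)

    # Flag fields whose failure count stands out; the threshold is arbitrary.
    for field, count in errors_per_field.most_common():
        if count >= 3:
            print(f"{field}: {count} failures -- consider accepting more formats")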
One other idea: if your application has contextual help, certainly count how many times people use it for each page/section/module of your application.
I doubt anything I'm saying is earth shattering, but maybe it will give you some ideas.
-Dan

I once wrote an app that tracked, for each command, whether it was accessed via the menu or a command-key equivalent; it gave us pretty good insight into which key-equivalents were expendable. We didn't have toolbars at the time, but the same kind of logging and analysis could of course apply to them, or to context-sensitive menus: anywhere you want to provide a limited set of valuable options.
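That kind of instrumentation is easy to retrofit. Here is a hedged sketch of a command dispatcher that tags each invocation with its source; the command names and log format are illustrative, not from the original app:

    import logging
    from collections import Counter

    logging.basicConfig(level=logging.INFO)
    usage = Counter()

    def invoke(command, source):
        """Record how a command was reached: 'menu', 'shortcut', 'toolbar'..."""
        usage[(command, source)] += 1
        logging.info("command=%s source=%s", command, source)
        # ...actual command dispatch would go here...

    # Simulated session:
    invoke("save", "shortcut")
    invoke("save", "shortcut")
    invoke("save", "menu")
    invoke("export", "menu")

    # Key-equivalents that are rarely invoked are candidates for removal.
    for (command, source), count in usage.most_common():
        print(command, source, count)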

You might want to Google usability testing. I've never heard of it being done by having a program recognize patterns of events, but rather by having humans watch humans use the program.

Related

Website Response Times - General Performance Rules

I am currently in the process of performance-tuning a web application and have been doing some research into what is considered 'good' performance. I know this often depends on the application being built, the target audience, plus many other factors, but I wondered whether people follow a general set of rules.
There is always the risk with tuning that there is no end to the job, and at some point one has to make a call on when to stop. But when is this? When can we be happy the job is done?
To kick off the discussion, I have been using the following rules, based on the Jakob Nielsen report (http://www.useit.com/alertbox/response-times.html), which says:
The 3 response-time limits are the same today as when I wrote about them in 1993 (based on 40-year-old research by human factors pioneers):
0.1 seconds gives the feeling of instantaneous response — that is, the outcome feels like it was caused by the user, not the computer. This level of responsiveness is essential to support the feeling of direct manipulation (direct manipulation is one of the key GUI techniques to increase user engagement and control — for more about it, see our Principles of Interface Design seminar).
1 second keeps the user's flow of thought seamless. Users can sense a delay, and thus know the computer is generating the outcome, but they still feel in control of the overall experience and that they're moving freely rather than waiting on the computer. This degree of responsiveness is needed for good navigation.
10 seconds keeps the user's attention. From 1–10 seconds, users definitely feel at the mercy of the computer and wish it was faster, but they can handle it. After 10 seconds, they start thinking about other things, making it harder to get their brains back on track once the computer finally does respond.
A 10-second delay will often make users leave a site immediately. And even if they stay, it's harder for them to understand what's going on, making it less likely that they'll succeed in any difficult tasks.
Even a few seconds' delay is enough to create an unpleasant user experience. Users are no longer in control, and they're consciously annoyed by having to wait for the computer. Thus, with repeated short delays, users will give up unless they're extremely committed to completing the task. The result? You can easily lose half your sales (to those less-committed customers) simply because your site is a few seconds too slow for each page.
If you have Apache as your web server, you can use the Page Speed module made by Google.
Instead of waiting for developers to change legacy code, make use of the CPU and memory you have available to provide a better UX.
http://code.google.com/speed/page-speed/docs/module.html
It addresses the most common pain points, with immediate effect. No coding, no changes to the legacy code of web applications.
The rules are pretty much sensible. Indeed, one should aim for response times of 1 second or less, but sometimes the processing will genuinely take longer (bad design, slow machines, waiting on third parties, intense data processing, etc.). In that case you can use various tips and tricks to improve the user experience:
use caching (both in the browser and for your frequently processed data)
use progressive loading of data via AJAX where possible (and use progress indicators to give feedback that things are happening)
use tools such as Firebug and YSlow to detect potential issues with your HTML design and structure
etc.
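One way to turn Nielsen's limits (quoted above) into a stopping rule is to bucket your measured response times against them. A minimal sketch, where the URL is a placeholder and the HTTP round trip stands in for whatever you actually measure (full page render, server processing, etc.):

    import time
    import urllib.request

    # Nielsen's three response-time limits, in seconds.
    THRESHOLDS = [(0.1, "feels instantaneous"),
                  (1.0, "flow of thought intact"),
                  (10.0, "attention kept, barely")]

    def classify(elapsed):
        for limit, label in THRESHOLDS:
            if elapsed <= limit:
                return label
        return "users have likely given up"

    start = time.perf_counter()
    urllib.request.urlopen("http://example.com/").read()  # placeholder URL
    elapsed = time.perf_counter() - start
    print(f"{elapsed:.3f}s: {classify(elapsed)}")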

Name of the concept of designing an interface to allow expert users to become more efficient?

I'm searching for sources and further information on a particular concept in user experience design. It's not a particularly complicated concept: when designing user interfaces, you should make them intuitive and simple for new users, but also provide a way for users to become more efficient as they grow familiar with the application.
An example could be including a prominent button for a common action for new users, while also providing a keyboard shortcut/mnemonic for expert users. That's just one example, though; another could be providing full functionality through a GUI while allowing expert users to script the same actions. The point is that the expert path is more difficult to learn, but it makes users more efficient.
I'm pretty sure there's a name for that which I can't recall, and I'm having trouble searching for sources and references on it.
Accelerators?
Flexibility and efficiency of use:
Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
(source: Ten Usability Heuristics by Jakob Nielsen)
Well, reading only your question "Name of the concept of designing an interface to allow expert users to become more efficient?" I'm inclined to point you toward The Humane Interface: New Directions for Designing Interactive Systems by Jef Raskin, in which there is the concept of habituation:
2-3-1 Formation of Habits
When you perform a task repeatedly, it tends to become easier to do. Juggling, table tennis, and playing piano are everyday examples in my life; they all seemed impossible when I first attempted them. Walking is a more widely practiced example. With repetition, or practice, your competence becomes habitual, and you can do the task without having to think about it. ...
...
... The ideal humane interface would reduce the interface component of a user's work to benign habituation. Many of the problems that make products difficult and unpleasant to use are caused by human-machine design that fails to take into account the helpful and injurious properties of habit formation. One notable example is the tendency to provide many ways of accomplishing the same task. Having multiple options can shift your locus of attention from the task to the choice of method...
But this is contrary to what you describe in your question, as evidenced by the last two sentences. In fact, that book also has a sub-chapter dedicated to dispelling the myth of the beginner-expert dichotomy:
3-6 Myth of the Beginner-Expert Dichotomy
... This dichotomy is invalid. As a user of a complex system, you are neither a beginner nor an expert, and you cannot be placed on a single continuum between these two poles. You independently know or do not know each feature or each related set of features that work similarly to one another. You may know how to use many commands and features of a software package; you may even work with the package professionally, and people may seek your advice on using it. Yet you may not know how to use or even know about the existence of certain other commands or even whole categories of commands in that same package. ...
So perhaps this is not the term/concept you are looking for.
Update: were you looking for the term Adaptive User Interfaces, perhaps? Well, I think that, as usually understood and implemented, it is not such a great idea (for example, disappearing menu items in Microsoft products). But my impression is that researchers use the term for something quite different.
Update: but Adaptive User Interfaces does not cover scripting.
The answer is in your question: Efficiency. It's a fundamental component of usability that Jakob Nielsen long ago defined as "Once users have learned the design, how quickly can they perform tasks." A UI with expert-supporting elements like accelerators, context menus, and double-click-for-defaults is an efficient UI.
It is also correct to simply say that making things fast for experienced users is part of usability - just as usability also includes making it easy for users to accomplish basic tasks on first encounter, making it satisfying, and tolerating errors.

Should users see their own performance stats in real time

I know this question seems extremely open-ended. I will try to narrow the scope.
I have been struggling for some time over whether to include or exclude real-time user performance stats in an application GUI.
Does anyone have any info on the harm vs gain in including these stats in an app?
i.e. number of emails answered, number of customer calls taken, average time per customer etc.
The users are begging for more info on their stats, as it is how they are rated. However there is concern that given access to see their performance real time or near real time it will negatively affect their work.
I can kind of equate it to being measured on how many lines of code I churned out in one day. Would this help me to be more productive, or just teach me to write code as fast as possible and most likely make a lot of mistakes?
In my application I can think of these scenarios:
BAD: "I see I have spent 10 minutes on this issue already, I need to finish this up ASAP."
vs.
GOOD: "I was able to help that customer quickly; my productivity is good today."
I think there may indeed be a phenomenon one could call “The Facebook Effect,” where by merely presenting a metric (e.g., number of “friends,” minutes per call), you imply that it’s important. However, I don’t think that’s an issue here because it’s clear that users already regard these metrics as very important, otherwise they wouldn’t be begging you for them.
I believe there is a wealth of psychology research showing that adding faster feedback on a metric will very likely improve user performance on that metric. Furthermore, it is emotionally stressful for people to be evaluated on a metric that they cannot track well themselves, which appears to be the case now.
I say show the metrics real time. Management obviously wants high performance on them otherwise they wouldn’t be rating the users on it. User want it, and it’ll reduce their job stress. Sounds like a win-win to me.
Of course, the customers probably lose out, so there's a bit of an ethical dilemma. However, if these metrics are a problem, then the problem lies in corporate policy, not in the user interface. By showing the metrics to the user, you will highlight whatever shortcomings this policy has to the managers. Maybe management will get a clue.
If it’s part of your role, you could propose measurements of customer satisfaction (e.g., after-call automated survey, random follow-up surveys by a human) so that managers can at least track the impacts of their policy. Assuming they care (how are they evaluated?).
KISS principle: is this information pertinent to what the users will be doing?
If not, then do not add it. One thing that I like in web apps is showing the time it takes for a page to load after updating/submitting something (basically posting back data).
That way I, as a user, can tell if there is an issue with data going to the database, or a caching issue.
The problem with displaying, say, calls to customers is that your average times may not be as accurate as you'd hope. You may have a customer who likes to chat, or who is having technical issues that are irrelevant to your business but still get reflected in your numbers.
This data shouldn't be trusted because of these kinds of things. Another thing is, if you start displaying call times you end up having employee competitions to see who can get off the phone first... that's when you start hurting yourself: bad customer service.
A few big names used to rate employees based on things like call volume, average time on the call, etc. Remember when Dell tried to outsource all their technical calls to India? Customers here in the US were frustrated, and calls were either too long (not understanding) or too short (customers did not want to deal with it). The big shots thought, "Hey, call times are pretty in line with what we forecasted, and our costs are going down." But it hit rock bottom as time went on...
Your application is useful as a tool to convey this information, and you should definitely include it if users are clamoring for more visibility and transparency into the data it collects. A user's response to the information, however, will largely depend on what the organizational culture already does with it.
For example, does management aggressively encourage users to keep calls short and deal with as many customers as possible, and fire those who don't meet quotas? If so, that's going to provoke an equally strong reaction from users to keep their calls short when they discover they've been a bit on the longer side. ("Uh-oh, I'm getting to the 2-minute mark. I'd better hang up and fake a disconnection to avoid getting my average call length too high.")
Conversely, if management simply encourages everyone to do a good job and provide excellent customer service, this information can be synthesized by users in the overall context of this work. ("I've spent a lot of time on this customer -- I should see if I can wrap things up shortly, or escalate him if I can't fix this problem.")

Understanding Users - Does Performance Trump Looks?

It seems to me that whenever a GUI (Graphical User Interface) is involved, the look and feel of the interface nearly always trumps the performance of the application.
Is this a universal phenomenon?
Sufficiently bad looks trump any level of good performance.
Sufficiently bad performance trumps any level of good looks.
This boils down to the psychology of your target audience and about the architecture of your application. If the GUI reacts quickly and is laid out in such a way that it is intuitive to the user (as opposed to the developer), then the underlying layers may not need to perform so well. If however, the user wants to get data from a database and they're left hanging while the data loads, they're going to feel very differently. Compare 2 web applications just as an example:
Application one feels quite responsive even though, under the covers, things take longer than they appear to on the surface - it uses AJAX to talk to web services. The web services aren't the quickest, but everything happens on callback (asynchronously), so the user isn't held up while fields populate. It doesn't impede their workflow. On a bad day, when network conditions deteriorate, the background slowdown is certainly noticeable, but user activity isn't impeded any further than normal.
Application two doesn't feel quite so responsive. Everything happens on postback, there's no AJAX or Web Services, on a good day the page loads are fairly quick. Of course, on every postback the user's workflow is impeded while they wait for the page to reload. On a bad day, network performance causes performance to deteriorate noticeably, further impeding the user.
Application one is far less likely to get complaints because the user isn't held up even though fields aren't loaded so quickly. The user can enter data and move on. Everything is handled asynchronously. Of course, in the background, the Web Service process may actually be slower than the full page refresh but the user isn't going to care so much.
From many thousands of hours writing software and directly interacting with my users - frequently those who aren't necessarily as computer literate as your average 10-year-old - I've noted these points that are key to getting acceptance from just such an audience [written from a user perspective]:
It must do what I want how I want it: Don't just read the spec and expect your code to meet exactly what it says on the paper. Really read what it says on the paper and understand what the user meant by that. Design to the underlying meaning of the words not the black and white of the ink on the paper. If you don't understand exactly what I meant, come and talk to me and I'll explain it until you do. I'll be less happy if you deliver software that missed my whole point than I will by your questions. I'll feel much happier if I get the feeling you're on my side by really trying to understand me.
It must assist my workflow and not impede it: It's great if all I have to do is push one button to complete what would've taken me an hour to do before, but if it freezes my computer for the 20 minutes it takes to complete the task, I'm not going to be a happy camper.
It must be intuitive to use: That means I don't want to have to wade through the documentation you didn't provide me with in order to figure out how to use it. Neither do I want a 20 minute explanation that I'm going to forget 3 minutes after you walk out of the door. Design the software such that my 10 year old could figure it out as easily as they can program the PVR. It means that I should interact with it in a manner that seems logical to me as the person that will be using it day in day out. It doesn't matter if it's functionally correct, if I can't figure out how to use it, I'm not going to use it, much less pay for it.
It must be responsive: I don't want to have to click a button and then wait 10 seconds for a list to load, then select an item from that list and wait for another screen to load before I can select an action to complete on that item, which then takes 5 minutes to complete. Find a way to load the data quickly - if you can't load the data quickly in response to my action, then figure out a trick to make it feel like the data is loading quickly - perhaps by loading it in advance in the background and only displaying what I need displayed in response to my action (see the prefetching sketch after this list)... my point is, I don't care what you do, just make it appear like it's doing it quickly.
It must be robust: It doesn't matter what I throw at it, it should accept it and move on. If I do something wrong or put something incorrect in a field, tell me - IN PLAIN ENGLISH!! I don't care about buffer overflows or IOExceptions thrown at line 479 while attempting to open a file. Just handle it and tell me what I did wrong in language I understand.
Give me documentation: Okay, I know I'm not going to read it, and I'm more likely to pick up the phone and call you than remember where I put it when you gave it to me. But knowing it's there makes me feel warm and fuzzy inside. It shows that you cared enough about the software - and about me - to write instructions that I can reference outside business hours when you're not available.
Price: This depends entirely on your audience, but in my experience, if you met all of the above points, price tends to be far less of a concern than it might appear on the surface.
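On the responsiveness point above, one common trick is to prefetch likely-needed data on a background thread so the UI only ever waits on a local cache. A hedged sketch; slow_load stands in for whatever slow database or network call you actually have, and the cache is deliberately simplistic (no locking or eviction):

    import threading
    import time

    cache = {}

    def slow_load(item_id):
        """Stand-in for a slow database or network call."""
        time.sleep(2)
        return f"data for {item_id}"

    def prefetch(item_id):
        """Load in the background so the data is (probably) ready when asked for."""
        def worker():
            cache[item_id] = slow_load(item_id)
        threading.Thread(target=worker, daemon=True).start()

    def get(item_id):
        # Fast path: already prefetched. Slow path: load on demand.
        if item_id not in cache:
            cache[item_id] = slow_load(item_id)
        return cache[item_id]

    prefetch("invoice_42")    # kick off as soon as the user's next step is guessable
    time.sleep(2.1)           # the user reads the current screen meanwhile
    print(get("invoice_42"))  # returns instantly from the cache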
Although "you can't judge a book by its cover", people often do with software. I don't know if I would say this is "universal", but certainly common.
I don't think it's even a true phenomenon. I don't care how zippy the "look and feel" is: if it takes a second to echo a keypress, the UI experience will suck. If it takes a long time to repaint the page for small changes, the UI will suck.
Now, as long as the response time of the application is less than some amount, then the look and feel will be a big part of the satisfaction.
Check out some of Jakob Nielsen's books on this.
Isn't it a bit of a false dichotomy? If the look and feel of an application isn't clean, well-organized and effective then you don't have a high-performing application. No matter how fast it may be.
I've found that the best combination is a snappy and easy-to-use GUI. This doesn't necessarily mean your app has to have great performance, but having the GUI freeze on you is a kiss of death. The iPhone's Safari does this well--you can continue to scroll around the screen even if the rendering engine can't keep up with you. Yeah, the no-content hatch marks are ugly, but at least the user knows he's in control.
I think it depends on the users. I work in a medium sized company in the IT department constructing web based software for consumption by the employees of said company. The users range from Human Resources, Manufacturing, R&D, Sales, Finance, to making applications for the CEO. Each of the different departments and users within those departments seem to have different criteria for what makes a quality application.
For instance, a Human Resource department usually deals with a lot of textual data. They spend heaps of time inputting things into forms like employee information, entitlements, recruiting etc. These types of users might opt for the look and feel of an application for this purpose, they want an aesthetically pleasing design that is easy to navigate and intuitive.
On the other hand a department like finance might favor performance in their reporting tools. I have had a few experiences with large SQL tables with complicated queries that took a considerable amount of time to execute. Users that run these kinds of reports many times a day soon get fed up of waiting and would gladly lose a bit of interface intuitiveness in exchange for time that could be spent on other tasks.
So, I would say that you can't make a blanket statement like "all users prefer a speedy application" or "all users like pretty applications". It really depends on the users' preferences, their job requirements, and the application's purpose.
Balance is everything.
The UI needs to look respectable and professional and flow like other applications, so the user has a common experience and thus little learning curve. It shouldn't have unnecessary whistles and bells unless specifically requested.
The performance should be at least tolerable. If you have extra time in a project, I would spend that time speeding it up unless the user specifically asks for UI enhancements. Many times, whistles and bells can compromise performance as some UI enhancements require additional CPU time AND sometimes add awkwardness to the app. At first glance, some of these apps look cool but long term usability suffers in favor of the NEATO factor.
What matters to the user is that using the program is fun. The program should not only be able to do what I want; it must also feel good to use.
Making the user wait at moments the user does not understand or foresee isn't fun.
Crashing and making errors isn't fun either.
But looking good, helping me do my task, and letting me work fast and without workflow interruptions is fun.
Programmers often think that programs that are slow and use a lot of memory are bad, and they measure all their software on memory usage and use of the processor. But most of your users won't start top or the Windows Task Manager and look at the footprint of your program; they will just use it, and if the program (and the rest of their computer while the program is running) feels good to use, they will feel good too.
One thing I read about often is using as many CPUs as possible to get a task done for the user in the shortest time. Is this high performance? Your program is very fast, but the computer is very slow at the moment, and switching to the email program because I know the task will take its time is a pain in the ass. So sometimes you may want to free some resources to improve the feel of your program, even if that would slow down your own program.
The most important are price, functionality, compatibility, and reliability.
Looks and performance are both relatively unimportant, and in practice they are therefore both unable to "trump" anything:
Compatibility: for example, in the real world I use MS Word, not because it's fast or pretty but because it's compatible with everyone else who uses it.
Functionality: when I want to book a train in France, I use http://www.voyages-sncf.com/ not because it's fast or pretty (or even outstandingly reliable) but because it has the necessary functionality.
Reliability: if an application crashes then I'm probably not going to use it again, no matter how fast it crashes, or how nice it looked before it crashed.
Price: etc. (say no more).

How do you test the usability of your user interfaces [closed]

How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release?
We are a small software house, but I am interested in the best practices of how to measure usability.
Any help appreciated.
I like Paul Buchheit's answer on this from Startup School. The short version of what he said: listen to your users. Listening does not mean obeying your users. Take in the data, filter out all the bad advice, and iteratively clean up the site. Lather, rinse, repeat.
If you are a small shop you probably don't have a team of QA or Usability people or whatever to go through the site. Your users are going to be the ones that actually use the site though. Their feedback can be invaluable.
If something is too hard for one of your users to use or too complex to understand why they should use it, then it might be the same way for 1000 other users. Find a simpler way of accomplishing the same thing.
Once you have gathered all of this feedback and have a list of things to do, do the simplest ones first. That way you have forward moving usability progress.
What I like to do is give someone an install package, ask them to perform a number of tasks related to how the application works, and watch.
Hardest part is to keep your mouth shut.
Some of the best advice on usability testing is available on Jakob Nielsen's Website http://www.useit.com. He advocates what Will mentioned - ask users to perform various tasks on your website or web application and then sit back to see what they do.
Do not interrupt the users by asking questions or guiding them. Just observe them and document their flow. You can also get hardware and software to do eye-tracking and understand what captures the attention of the users.
However, usability should not start from the testing phase. You must have some general idea of what users generally like and do not like when you do development. There are many websites and books outlining generally accepted usability standards and principles.
Normally, we test the usability of new interfaces by asking a small selection of users to try out a beta version.
We give a small amount of instruction as to what the new features/screens are supposed to do and let them dive straight into it. It's very interesting to see where they are looking and clicking. We never demo the new features - we only talk about what it does.
If the UI changes are minimal then they go live and we gather feedback from real users. It's only when we are making big changes that we go through usability tests on beta.
When developing new screens it usually helps a hell of a lot to get a colleague sat in front of the UI and ask them what it does. Which areas do they click on? Where are they looking first? What sections are drawing their attention? etc.
I agree with Adam; using a very computer illiterate person is very helpful. However, what I've run into before with that is the program I want them to try out just isn't "up their alley" as far as something they would ever want to do.
A good way to start is with a paper prototype. Have specific tasks that you want your "user" to perform and have them do it. For more on paper prototyping, start here.
I frequently take any new interface I'm working on to one of our technical support people. They've heard every complaint about interfaces that you could ever imagine, so if anyone is going to think up potential problems, they will.
Also, and I'm not kidding about this, I often take the least computer-literate person I know (your mother is often a good choice... but she has to have used a computer before, otherwise it's going to be pointless) and let them loose on the interface with no instruction. If they can't figure out where things are intuitively, then your GUI likely needs work. Remember, don't make them think! (yes, I know this is for web design, but it applies)
There are many ways to test the usability of a system; please check whatever literature you can find. I just want to stress that usability testing is not as hard as you or anyone might think. In a famous paper called "A mathematical model of the finding of usability problems" at INTERACT '93 and CHI '93, J. Nielsen and T. K. Landauer showed that only five users are enough to find most problems in a small system.
If you have no way to read this paper, try this article on the author's website:
http://www.useit.com/alertbox/20000319.html
It's been a while since this question was last active, but here goes anyway.
From experience:
Always use objectively measurable criteria to decide whether usability is better or not (time to accomplish carefully selected tasks, inactive time, KLM-type metrics); here a key-mouse logger can be a precious ally (see the KLM sketch after this list)
Never go too far ahead before consulting and measuring again with your client (do not cage yourself with the paper prototype and emerge with the finished product... that just never works)
Read, read, read, try, evolve
Keep things simple and always remember the task at hand (why the user needs the interface)
Test, test, and test again...
Always get to the bottom of the user's requests. Although the checkbox the user requests in this particular place may be the best thing to do, it almost always hides a more fundamental flaw
The system's user (the one using it, as opposed to the one paying for it) is your best ally; keep him/her on your side
Never be afraid of refactoring your design and evolving your system. Evolve your metrics and measurements too; however, be careful in doing so not to break measurement continuity, as it is the best token of objective progress in a VERY subjective world.
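For the KLM-type metrics mentioned in the first point above, the classic Keystroke-Level Model estimates expert task time by summing standard operator times. A minimal sketch using the commonly cited Card, Moran and Newell values; the two operator sequences compared are hypothetical:

    # Keystroke-Level Model operator times in seconds (Card, Moran & Newell).
    KLM = {
        "K": 0.20,  # keystroke or button press (skilled typist)
        "P": 1.10,  # point with the mouse at a target
        "H": 0.40,  # home hands between keyboard and mouse
        "M": 1.35,  # mental preparation
    }

    def klm_estimate(operators):
        """Estimate expert task time from a sequence of KLM operators."""
        return sum(KLM[op] for op in operators)

    # Hypothetical comparison: menu navigation vs. a keyboard shortcut.
    via_menu = "HMPKPK"   # home, think, point+click the menu, point+click the item
    via_shortcut = "MKK"  # think, then a two-key chord
    print(f"menu:     {klm_estimate(via_menu):.2f} s")     # 4.35 s
    print(f"shortcut: {klm_estimate(via_shortcut):.2f} s")  # 1.75 s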
Recommended reading (other than previously proposed):
Handbook of Usability Testing by Jeff Rubin. A bit extreme, but we toyed around with an agile version of his approach and found that if we spent 30 minutes a week with users we would get a LOT of useful feedback while not getting swamped with too much info.
Keep a close watch on the Shneidermans and Nielsens of this world, and others that may arise.
As usability inspections go, there are several viable methods. They require different amounts of resources in terms of people, analysis, and equipment.
The most common, and easiest to perform, is called
Heuristic Evaluation
You basically walk through each screen to check whether it conforms to the heuristics set by you or your customer.
Check this article by Nielsen
Cognitive walkthrough
This method requires you to ask the user to complete steps in the application. You prepare steps for the user to complete. Issues that arise during this walkthrough are taken into consideration when finishing the application.
Check this paper for details.
Think Aloud Analysis
I have used this method mostly in the early stages of prototyping. I let the user talk freely about the system while it is being used, and ask questions about use, design, etc. You can get a really nice view of the general feelings about the system, and what features are lacking.
Check this paper for details.
Interaction analysis
This is a trickier one. I have only used the data-gathering techniques proposed by this one. This technique takes into account context, activities, body language, etc. Interaction analysis is commonly focused on research, not so much on commercial evaluations.
This link takes you to the article.
Keep in mind that these methods take practice to perfect. I would start with HE, continue to CW and THA. And only use Interaction Analysis if you have lots of resources and time.
There are a number of methods to test or evaluate the usability of an application, broken down into qualitative and quantitative methods and based on when you are planning to test.
They are further categorized based on whether users are involved or experts do the testing.
To name a few methods:
Expert reviews - user interface or usability experts rate the usability of an interface based on agreed heuristics and principles
Formative usability testing - task flows are taken and users are provided with tasks to be completed. Qualitative feedback is collected based on what the users feel the pain points are during the testing. This form of testing is done during design, to provide feedback into the design of the application.
Summative usability testing - task flows are taken and users are provided with tasks to be completed. The application's performance on efficiency, effectiveness, and satisfaction is measured based on users' completion of tasks.
The important difference is whether you engage a user or an expert to assess the usability, and when you do the evaluation: at the end of the project or during the design phases.
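A sketch of how those summative numbers are often computed from test sessions, assuming a made-up record of (task completed, seconds taken, satisfaction rating 1-5) per participant:

    # Hypothetical session records: (completed, seconds_taken, satisfaction 1-5).
    sessions = [
        (True, 95.0, 4),
        (True, 140.0, 3),
        (False, 300.0, 2),
        (True, 110.0, 5),
        (True, 85.0, 4),
    ]

    effectiveness = sum(done for done, _, _ in sessions) / len(sessions)
    success_times = [secs for done, secs, _ in sessions if done]
    efficiency = sum(success_times) / len(success_times)  # mean time on success
    satisfaction = sum(score for _, _, score in sessions) / len(sessions)

    print(f"effectiveness: {effectiveness:.0%} of users completed the task")
    print(f"efficiency:    {efficiency:.0f} s mean completion time")
    print(f"satisfaction:  {satisfaction:.1f} / 5 average rating")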
I'm a strong believer in what I call 3-martini usability testing. When designing a system, imagine that the person who will be using it has just had 3 martinis.
Before handing over the system to colleagues (other programmers, quality assurance, tech support) or usability testers, an informal test with a couple of friends and a bottle of vodka (outside of work, of course) can often prove instructive.

Resources