What usability evaluation methods do you use?
GOMS?
Cognitive Walkthrough?
Think aloud protocol?
Others? (apart from 'ask your mum' tips that are well covered elsewhere on SO)
Dogfooding. Using our own application, we quickly find out which parts are clunky or inconvenient to use.
Log and analyse support calls, and track which parts of the application users have trouble using or finding (a small analysis sketch follows this list).
Usability tests. We write down a task, as near to an actual use case as we can, and ask a few users to perform it. We watch them do it, ask them to think aloud while they're doing it, and ask them lots of questions afterwards.
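For the support-call logging above, a minimal sketch of the kind of frequency tally that surfaces trouble spots; the log format and category names here are invented for illustration:

```python
from collections import Counter

# Hypothetical support log: one "date,area,summary" line per call.
calls = [
    "2024-05-01,invoicing,could not find the export button",
    "2024-05-01,search,no results for partial names",
    "2024-05-02,invoicing,export produced an empty file",
    "2024-05-03,invoicing,where is the export option?",
]

# Tally calls by application area to see where users struggle most.
by_area = Counter(line.split(",")[1] for line in calls)
for area, count in by_area.most_common():
    print(f"{area}: {count} call(s)")
```

Here 'invoicing' tops the list, so that part of the UI gets looked at first.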
GOMS is way too heavyweight (and too far from reality) for most tasks... I'd suggest something much more lightweight, like hallway usability testing, as advocated by Joel Spolsky:
http://www.joelonsoftware.com/articles/fog0000000043.html
Build the UI out of pieces of paper.
Create several task scenarios; you probably shouldn't expect to cover the entire UI in one sitting.
Find good test subjects based on your target audience (this may be the hardest part).
Give the subjects a printed agenda, and provide a human being to act as the 'help system' who otherwise stays quiet.
Ask subjects to 'think out loud' and say what their brain is doing (which is why good subjects are hard to find).
Video what happens and analyse it.
Fix stuff, and iterate.
I am a UI/UX designer. I recently interviewed for a position, and now they've given me an exercise: I am supposed to do a complete UI review/audit/analysis of their product. I need some suggestions for the exercise:
What medium do you think is best for presenting such a review? A video, audio with visuals, or a document highlighting issues in the UI with callouts?
What are the key points that need to be covered in such a review?
I want to submit the best exercise possible and get selected. Thanks in advance.
First of all, you should know that UI/UX is not my area of expertise. But I've been directly or indirectly involved with it enough to know that it's not an exact science. People will often debate what is best for the user. Just remember that two users themselves might not agree on something.
So what I think is important is that you genuinely take the role of a user: try using the product while imagining a scenario where you want to accomplish something. If you do it right, you should come away with feedback on what works well and what could be improved. These are your opinions; the important part is that you can back them up with common sense or valid arguments.
As for the medium, I'd say you should choose the one that communicates your views best. I'd rather read through well-organized text than sit through a poor video. You'll probably want to impress, so whatever you choose, do it well! If the UX design was documented up front, you would normally use that as the basis for your review. So look up UX design tools; you might be able to use one of them.
If you have time, a background in UML modeling might be helpful, particularly the ICONIX methodology, which encourages exploring different "what if" scenarios rather than just designing for the standard (most common) path. I'm also assuming that you've already looked up online resources describing common best practices.
I'm searching for sources and further information on a particular concept in user experience design. It's not a particularly complicated concept: when designing user interfaces, you should make them intuitive and simple for new users, but also provide a way for users to become more efficient as they become more familiar with the application.
An example could be including a prominent button for a common action for new users, but also providing a keyboard shortcut/mnemonic for expert users. That's just one example, though; another would be providing full functionality through a GUI while also allowing expert users to script the same actions. The point is that the expert path is more difficult to learn, but it makes those users more efficient.
I'm pretty sure there's a name for this that I can't recall, and I'm having trouble searching for sources and references on it.
Name of the concept of designing an interface to allow expert users to become more efficient?
Accelerators?
Flexibility and efficiency of use: Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
(source: Ten Usability Heuristics by Jakob Nielsen)
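To make that heuristic concrete, here is a minimal sketch (in Python/Tkinter; the action and shortcut are arbitrary choices of mine) of one command exposed both as a visible button for novices and as a keyboard accelerator for experts:

```python
import tkinter as tk

root = tk.Tk()

def save() -> None:
    print("document saved")

# The visible, discoverable path for the inexperienced user.
tk.Button(root, text="Save", command=save).pack(padx=40, pady=20)

# The accelerator: unseen by the novice, faster for the expert.
root.bind_all("<Control-s>", lambda event: save())

root.mainloop()
```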
Well, reading only your question "Name of the concept of designing an interface to allow expert users to become more efficient?" I'm inclined to point you toward The Humane Interface: New Directions for Designing Interactive Systems by Jef Raskin, in which there is the concept of habituation:
2-3-1 Formation of Habits

When you perform a task repeatedly, it tends to become easier to do. Juggling, table tennis, and playing piano are everyday examples in my life; they all seemed impossible when I first attempted them. Walking is a more widely practiced example. With repetition, or practice, your competence becomes habitual, and you can do the task without having to think about it. ...

... The ideal humane interface would reduce the interface component of a user's work to benign habituation. Many of the problems that make products difficult and unpleasant to use are caused by human-machine design that fails to take into account the helpful and injurious properties of habit formation. One notable example is the tendency to provide many ways of accomplishing the same task. Having multiple options can shift your locus of attention from the task to the choice of method...
But this is contrary to what you describe in your question, as evidenced by the last two sentences. In fact, the same book has a sub-chapter dedicated to dispelling the myth of the beginner-expert dichotomy:
3-6 Myth of the Beginner-Expert Dichotomy

... This dichotomy is invalid. As a user of a complex system, you are neither a beginner nor an expert, and you cannot be placed on a single continuum between these two poles. You independently know or do not know each feature or each related set of features that work similarly to one another. You may know how to use many commands and features of a software package; you may even work with the package professionally, and people may seek your advice on using it. Yet you may not know how to use or even know about the existence of certain other commands or even whole categories of commands in that same package. ...
So perhaps habituation is not the term/concept you are looking for after all.
Update: Were you looking for the term Adaptive User Interfaces, perhaps? Well, I think that, as usually understood and implemented, it is not such a great idea (for example, the disappearing menu items in Microsoft products). But my impression is that researchers use the term for something quite different.
Update: but Adaptive User Interfaces do not cover scripting either.
The answer is in your question: Efficiency. It's a fundamental component of usability that Jakob Nielsen long ago defined as "Once users have learned the design, how quickly can they perform tasks." A UI with expert-supporting elements like accelerators, context menus, and double-click-for-defaults is an efficient UI.
It is also correct to simply say that making things fast for experienced users is part of usability, just as usability also includes making it easy for users to accomplish basic tasks on first encounter, making it satisfying, and tolerating errors.
I'm a recent graduate who is looking to get a job doing user experience. Next week, I have a technical interview in which I will be given a website and will have to talk about its usability issues as well as come up with ways of improving the user experience. I feel I have the natural skills to do this and have been doing a fair amount of reading into the subject, but I would like some further advice on how to effectively critique different kinds of websites.
Does anybody have any suggestions of common faults I should look out for, or advice on structuring my evaluation so that it is relatively airtight and I don't miss anything obvious?
As I've said before, I'm already doing a lot of reading and I realize that practice makes perfect. However, I'm hopeful that those that have long-term experience with this can help me by imparting their wisdom on gotchas, common issues, and what to look out for in a good/bad website.
Thanks in advance!
How easy navigation is
Whether a user can easily find what he needs without resorting to the "search" function. Edge case: can the user find the search input field without using the browser's search function (Ctrl+F)?
Whether a site is browsable with images turned off
How many clicks it takes to accomplish an operation. Is that many really necessary?
Are the most important / frequently used features right there in front of the user?
Whether you communicate with the user in geek language
Whether you overwhelm the user with long literary texts where one or two words will suffice
Whether you use standard ideas in your UI. Do buttons, links and menus look like buttons, links and menus? Do they also work that way?
Whether the UI is made up of a limited set of controls with a consistent look and behavior, or each page is unique and has to be learned from scratch
Whether the UI sticks to mostly 2-3 colors or uses different colors everywhere to look cool
Also check out the following questions:
Worst UI You’ve Ever Used
What are common UI misconceptions and annoyances?
Why is good UI design so hard for some Developers?
What is the best UI you’ve ever used?
As the other answers have talked a bit about usability, I'll mention some things about accessibility (although good accessibility and usability go hand in hand).
First of all, you need to get the usability right: a site with poor usability will almost certainly also have poor accessibility. Make sure it makes sense, is easy to navigate, and is structured meaningfully; for good accessibility, that structure needs to be reflected in the markup as well as visually (so use headings correctly, use elements like (strong) instead of (b)old, etc.). Automated tools can provide some limited help with this.
Secondly, make sure you use the various pieces of markup available that enhance accessibility (e.g. alt attributes on images). Automated tools are excellent for this.
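As an illustration of the kind of check such tools automate, this sketch (assuming BeautifulSoup is installed; the HTML fragment is made up) flags images whose alt text is missing or empty:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Made-up page fragment standing in for a real site.
html = """
<img src="logo.png" alt="Acme Corp logo">
<img src="chart.png">
<img src="spacer.gif" alt="">
"""

soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    # Empty alt is legitimate for purely decorative images, so a real
    # audit would review these rather than auto-fail them.
    if not img.get("alt"):
        print(f"missing or empty alt text: {img.get('src')}")
```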
Next, if you're going to use technologies like JavaScript, try to use progressive enhancement so that users without those technologies available still have a workable experience. Automated tools won't help much with this.
Finally, don't get lured into thinking that an accessible website is a dull, boring, featureless one: for every user with visual difficulties there will be many more who have cognitive difficulties such as dyslexia. The aim is to make the site engaging for everyone, not to cripple it for the sake of a minority of users (who will likely also be penalised if you start slashing content; for example, YouTube is one of the most popular sites among blind users).
My thinking process:
See what's different. I mean, ask yourself: "is this button done the same way on YouTube/Google/Basecamp/whatever has been proven good enough?"
If not, I ask myself, "does it make sense to do it differently?" If it doesn't make sense, then it shouldn't be done that way, to avoid confusing the user.
If it does make sense, I ask myself, "if it's not obvious, what's the learning curve for the user?", always keeping in mind that "the user" is not an IT person.
Then I see if I can improve it. If I can't, maybe it can't be improved, so even if the control is not perfect, it's good enough.
Finally, ask yourself: "what does the website want the user to do?" Is it buying something? Subscribing? It's all about figuring out the objective, then seeing whether the website is oriented toward completing that objective.
As well as practical ideas about usability problems, you might want to think about what kind of process you'd use to do this work (and how it would fit into the company's development process). Would you start out with research? How would you present your analysis and feedback?
I'm looking for resources that provide an actual lesson plan or path to encourage and reinforce programming practices such as TDD and mocking. There are plenty of resources that show examples, but I'm looking for something that actually provides a progression that allows the concepts to be learned instead of forcing emulation.
My primary goal is speeding up the process for someone to understand the concepts behind TDD and actually be effective at implementing them. Are there any free resources like this?
It's a difficult thing to encourage because it can be perceived (quite fairly) as a sea change: not so much a progression toward a goal as an entirely different approach to things.
The short list of advice is:
You need to be the leader: you need to become proficient before you can convince others to follow, and you need to be able to show others the path and settle their uncertainties.
First become proficient in writing unit tests yourself
Practice writing tests for existing methods. You'll probably beat your head on the desk trying to test lots of your code. That's not because testing is hard or because you can't understand testing; it's more likely because your existing code and coding style aren't very testable.
If you have a hard time getting started, find the simplest methods you can and use them as a starting point.
Then focus on improving the testability of the code you produce
The single biggest tip: make things smaller and more to the point. This is the big change: it's the hardest part to get yourself to do, and even harder to convince others of.
Personally, I had my "moment of clarity" while reading Bob Martin's "Clean Code" book; an early chapter talks about what a clean method should look like. As an example, he takes a ~40-line method that visually resembled something I'd produce and refactors it into a class that is barely larger line-count-wise but consists of nothing but bite-sized methods of perhaps 3-7 lines each.
Looking at these itty-bitty methods, it suddenly clicked that the unit-testing cornerstone "each test only tests one thing" is easiest to achieve when your methods only do one thing (and do that one thing without having 30 internal mechanisms at play).
The good thing is that you can begin to apply your findings immediately: practice writing small methods and small classes and testing along the way. You'll probably start out slow and hit a few snags fairly quickly, but the first couple of months will help get you pointed in the right direction.
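To make the "bite-sized methods" point concrete, here is a minimal sketch (the function and names are invented for illustration) of a small, single-purpose method and the correspondingly focused tests:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """One small method, one responsibility: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test checks exactly one thing, which is easy precisely because
    # the method under test does exactly one thing.
    def test_applies_percentage(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_out_of_range_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```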
You could try attending a coding dojo (or hosting one if there is none near you!).
I attended one such exercise, and it was a fun way to learn TDD.
Books are always a good resource. Even though they're not free, they may be worth the money compared with the time you'd spend hunting for good free resources.
"Test driven development by example" by Kent Beck.
"Test Driven Development in Microsoft .NET" by James W. Newkirk and Alexei A. Vorontsov
Please feel free to add to this list.
One thing I worked through that helped me appreciate TDD more was NHibernate and the Unit of Work Pattern. Although it's specific to NHibernate and .NET, I liked the way that it was arranged. Using TDD, you develop something (a UnitofWork) that's actually useful rather than some simple "this is what a mock looks like" example.
I learn a concept best by putting it to use toward an actual need. I suggest you take a look at the structure of the article and see if it's along the lines of what you're looking for.
Geeks are excellent at working to metrics, whether they are good for them or not!
You can use this to your advantage. Set up a CI server and fail the build whenever code coverage drops below 50 percent. Let them know that the threshold will rise 10 percent every month until it reaches 90. You could perhaps use commit hooks to stop them from checking in under-tested code in the first place, but I've never tried this myself.
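As one concrete way to wire up such a gate, here is a minimal sketch using coverage.py's Python API (assuming a Python shop, and that the tests have already been run under `coverage run`). Recent versions of coverage.py can also do this directly with `coverage report --fail-under=50`:

```python
import sys
import coverage  # pip install coverage

THRESHOLD = 50.0  # raise this month by month, per the plan above

cov = coverage.Coverage()
cov.load()            # read the .coverage data file from the test run
total = cov.report()  # print the report; returns total coverage as a float

if total < THRESHOLD:
    print(f"FAIL: coverage {total:.1f}% is below {THRESHOLD:.0f}%")
    sys.exit(1)       # a non-zero exit code fails the CI build
```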
Let them know that the team's coverage will be taken into account in any performance reviews, etc. By emphasising that it is the coverage of the whole team, you should get peer pressure helping you ensure good coverage.
This will only ensure they are testing their code, not how well they are testing it, nor whether they are writing the tests first. However, it strongly encourages (or forces) them to incorporate testing into their daily development process.
Generally, once people have something in their process, they'll want to do it as easily and efficiently as possible. TDD is the easiest way to write code with high coverage, as you don't write a line of code without it being covered.
Find someone with experience and talk to them. If there isn't a local developer group, then start one.
You should also try pushing things too far to start with, and then learn when to back off. For example, the whole mocking thing started when someone asked, "What if we program with no getters?"
Finally, learn to "listen to the tests". When the tests look dreadful, consider whether it's the code that's at fault, not your testing technique.
How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release?
We are a small software house, but I am interested in the best practices of how to measure usability.
Any help appreciated.
I like Paul Buchheit's answer on this from Startup School. The short version of what he said: listen to your users. Listening does not mean obeying your users. Take in the data, filter out all the bad advice, and iteratively clean up the site. Lather, rinse, repeat.
If you are a small shop, you probably don't have a team of QA or usability people to go through the site. Your users are the ones who actually use the site, though, and their feedback can be invaluable.
If something is too hard for one of your users to use, or it's too complex for them to understand why they should use it, then it might be the same for 1,000 other users. Find a simpler way of accomplishing the same thing.
Once you have gathered all of this feedback and have a list of things to do, do the simplest ones first. That way you make steady forward progress on usability.
What I like to do is give someone an install package, ask them to perform a number of tasks related to how the application works, and watch.
The hardest part is keeping your mouth shut.
Some of the best advice on usability testing is available on Jakob Nielsen's website, http://www.useit.com. He advocates what Will mentioned: ask users to perform various tasks on your website or web application, and then sit back and see what they do.
Do not interrupt the users by asking questions or guiding them. Just observe them and document their flow. You can also get hardware and software to do eye-tracking and understand what captures the attention of the users.
However, usability work should not start at the testing phase. You should have some general idea of what users like and do not like before you begin development. There are many websites and books outlining generally accepted usability standards and principles.
Normally, we test the usability of new interfaces by asking a small selection of users to try out a beta version.
We give a small amount of instruction as to what the new features/screens are supposed to do and let them dive straight in. It's very interesting to see where they look and click. We never demo the new features; we only talk about what they do.
If the UI changes are minimal then they go live and we gather feedback from real users. It's only when we are making big changes that we go through usability tests on beta.
When developing new screens, it usually helps a hell of a lot to sit a colleague in front of the UI and ask them what it does. Which areas do they click on? Where do they look first? Which sections draw their attention?
I agree with Adam; using a very computer-illiterate person is very helpful. However, what I've run into before is that the program I want them to try out just isn't "up their alley": it's not something they would ever want to do.
A good way to start is with a paper prototype. Have specific tasks that you want your "user" to perform, and have them do so. For more on paper prototyping, start here.
I frequently take any new interface I'm working on to one of our technical support people. They've heard every complaint about interfaces that you could ever imagine, so if anyone is going to think up potential problems, they will.
Also, and I'm not kidding about this, I often take the least computer-literate person I know (your mother is often a good choice... but they have to have used a computer before, otherwise it's going to be pointless) and let them loose on the interface with no instruction. If they can't figure out where things are intuitively, then your GUI likely needs work. Remember: don't make them think! (Yes, I know that's about web design, but it applies.)
There are many ways to test the usability of a system; check any available literature you can find. I just want to stress that usability testing is not as hard as you or anyone might think. In a famous paper called "A mathematical model of the finding of usability problems" (INTERACT '93 and CHI '93), J. Nielsen and T. K. Landauer showed that only five users are enough to find most problems in a small system.
If you have no way to read this paper, try this article in the author's website:
http://www.useit.com/alertbox/20000319.html
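The model from that paper is simple enough to work through yourself: the share of problems found by n users is 1 - (1 - λ)^n, where λ is the probability that a single user exposes a given problem (around 0.31 on average across their projects; it varies, so treat the number below as an illustration):

```python
# Nielsen & Landauer model: share of usability problems found by n users.
def problems_found(n: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%}")
# With lam = 0.31, five users already surface roughly 84% of the
# problems, which is why several small tests beat one big one.
```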
It's been a while since this question was last active, but here goes anyway.
From experience:
Always use objectively measurable criteria to decide whether usability is better or not (time to accomplish a carefully selected task, inactive time, KLM-type metrics); a key/mouse logger can be a precious ally here (see the sketch after this list)
Never go too far ahead before consulting and measuring again with your client (do not lock yourself away with the paper prototype and emerge with the finished product... that just never works)
Read, read, read, try, evolve
Keep things simple and always remember the task at hand (why the user needs the interface)
Test, test, and test again...
Always get to the bottom of user requests. Although the checkbox the user requests in this particular place may be the best thing to do, it almost always hides a more fundamental flaw
The system's user (the one using it, as opposed to the one paying for it) is your best ally; keep him/her on your side
Never be afraid to refactor your design and evolve your system. Evolve your metrics and measurements too; however, be careful not to break measurement continuity in doing so, as it is the best token of objective progress in a VERY subjective world.
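As promised above, a minimal sketch of the kind of analysis a key/mouse log enables; the log format, events, and idle threshold are all invented for illustration:

```python
from datetime import datetime

# Hypothetical log: one "ISO-timestamp<TAB>event" line per user action,
# as a simple key/mouse logger might emit during a single task.
events = [
    "2024-05-01T10:00:00\tclick:new-order",
    "2024-05-01T10:00:03\tkey:customer-name",
    "2024-05-01T10:00:45\tclick:save",
]

times = [datetime.fromisoformat(line.split("\t")[0]) for line in events]
task_time = (times[-1] - times[0]).total_seconds()

# Count gaps longer than 5 seconds as "inactive" (user thinking or lost).
idle_time = 0.0
for earlier, later in zip(times, times[1:]):
    gap = (later - earlier).total_seconds()
    if gap > 5.0:
        idle_time += gap

print(f"task time: {task_time:.0f}s, inactive: {idle_time:.0f}s")
# Tracking these numbers across design iterations gives the objective
# trend line; measurement continuity is what makes it meaningful.
```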
Recommended reading (other than what has already been proposed):
Handbook of Usability Testing by Jeff Rubin. A bit extreme, but we toyed with an agile version of his approach and found that if we spent 30 minutes a week with users, we would get a LOT of useful feedback while not getting swamped with too much info.
Keep a close watch on the Shneidermans and Nielsens of this world, and others who may arise.
As far as usability inspection goes, there are several viable methods. They require different amounts of resources in terms of people, analysis, and equipment.
The most common, and easiest to perform, is called
Heuristic Evaluation
You basically walk through each screen to check whether it conforms to the heuristics set by you or your customer.
Check this article by Nielsen
Cognitive walkthrough
This method requires you to ask the user to complete a set of steps in the application; you prepare the steps for the user to complete. Issues that arise during this walkthrough are taken into consideration when finishing the application.
Check this paper for details.
Think Aloud Analysis
I have used this method mostly in the early stages of prototyping. I let the user talk freely about the system while using it, and ask questions about use, design, etc. You can get a really nice view of the general feelings about the system and what features are lacking.
Check this paper for details.
Interaction analysis
This is a trickier one. I have only used the data-gathering techniques it proposes. This technique takes into account context, activities, body language, etc. Interaction analysis is commonly focused on research, not so much on commercial evaluations.
This link takes you to the article.
Keep in mind that these methods take practice to perfect. I would start with heuristic evaluation (HE), continue with cognitive walkthroughs (CW) and think-aloud analysis (THA), and only use interaction analysis if you have lots of resources and time.
There are a number of methods to test or evaluate the usability of an application. They break down into qualitative and quantitative methods, and differ based on when you plan to test.
They are further categorized based on whether users are involved or experts do the testing.
To name a few methods:
Expert reviews - user interface or usability experts rate the usability of an interface based on agreed heuristics and principles.
Formative usability testing - task flows are taken, and users are given tasks to complete. Qualitative feedback is collected on what users feel the pain points are during testing. This form of testing is done during design to provide feedback into the design of the application.
Summative usability testing - task flows are taken, and users are given tasks to complete. The application's performance on efficiency, effectiveness, and satisfaction is measured based on users' completion of the tasks.
The important difference is whether you engage users or an expert to tell you about usability; a further one is when you do the evaluation: at the end of the project or during the design phases.
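To make the summative measures concrete, a small sketch of how the three numbers are typically tallied from test sessions; all figures here are made up:

```python
from statistics import mean

# One (completed, seconds, satisfaction 1-5) tuple per participant per task.
sessions = [
    (True, 42.0, 4),
    (True, 55.5, 5),
    (False, 120.0, 2),
    (True, 38.2, 4),
    (True, 61.0, 3),
]

successes = [s for s in sessions if s[0]]
print(f"effectiveness: {len(successes) / len(sessions):.0%} task completion")
print(f"efficiency:    {mean(s[1] for s in successes):.1f}s mean time on success")
print(f"satisfaction:  {mean(s[2] for s in sessions):.1f}/5 mean rating")
```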
I'm a strong believer in what I call 3-martini usability testing. When designing a system, imagine that the person who will be using it has just had 3 martinis.
Before handing over the system to colleagues (other programmers, quality assurance, tech support) or usability testers, an informal test with a couple of friends and a bottle of vodka (outside of work, of course) can often prove instructive.