As we are a small company, I work as both a project manager and a developer. The specifications I create for clients contain a number of elements used to describe and define the project: user stories, alongside any other elements I feel are needed to define the project (e.g. wireframes, user flows, sitemaps, etc.).
If a functional specification "describes how a product will work entirely from the user's perspective. It doesn't care how the thing is implemented. It talks about features", does anyone see any problem with using user stories to define a functional specification for a website? Does anyone actually write functional specifications this way?
Really I am trying to up my game a little, and I'm wondering whether this approach would work for larger clients who perhaps have more stringent ideas about what a functional specification should contain, and where a more formal approach may be required. At the moment our clients certainly respond well to the way we produce documentation.
I am interested in hearing what people who do project management professionally think about this.
I'm at odds with what a couple of other people have said.
First up the bit I agree with - stories are a great way of stating functional requirements. For my money they're one of the best ways of actually communicating requirements in a way end users will really take in. I've seen too many specs that get signed off without ever having been read.
The one thing I would say you might want to append to them is non-functional requirements - covering performance, security, data volumes, audit, archive and so on. While they can be covered in stories, sometimes they're better covered in a way that crosses all stories.
In terms of whether it's suitable for large companies, this is where I disagree. In my experience (and I've done projects for Shell, American Express, a couple of multi-national banks and others) they're often no more formal than smaller companies, so they'll be fine with stories. The reality is that a user in a large company is no better equipped to read class and sequence diagrams (or any more interested in doing so) than a user anywhere else.
The size and complexity of the project may require more detailed requirements but it's the size of the project, not the size of the company that matters when you're determining how you document requirements.
For me the best requirements documentation is documentation that's suited to its audience, and user stories hit the nail on the head most of the time. They're short enough and clear enough that when users sign them off the sign-off means something, because they've actually read and understood them (as opposed to signing off a 100-page spec, which invariably means they haven't), yet they're still good enough for the developers to work from.
Yes, you can use user stories for your functional requirements. I do it all the time, and have been doing so for years. In my opinion it works really well, and better than other systems I have used.
Would this approach work for larger clients? To make a gross generalization, no. They are going to have some system they use to define requirements, and it's likely not user stories. If you come in with user stories, there is going to be a disconnect with their current practices, which you will have to work through.
I have been successful using user stories with larger organizations, but it takes a concerted effort that both parties need to be committed to.
What you're describing are the use-case scenarios that define the features. This is what is required from a usability perspective, and it is exactly what the client will understand and agree to. Screen mockups and flow diagrams will definitely help both the client and the developers.
An implementation specification may then be required to instruct developers on what needs to be included in the actual construction. Its depth will be determined by your developers' capabilities, including their knowledge of the house architecture/framework and methodologies/conventions, and it may include specifics on what impacts various parts of the application.
We also work in small teams (sometimes one or two developers) and believe the above is all that's required in this instance.
Larger companies with much larger teams may use modeling software, UML diagrams and other more detailed specifications. In the case where you are the primary developer, you should still spend the time designing your application, but if nobody is going to review the designs and insist on all the formalities, your time is better spent implementing the software.
I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side as people can buy certain items and services.
In my opinion a good blend of user interface, speed and security is essential for these types of websites. It is fairly easy to use ajax and JavaScript nowadays to do almost everything, as there are a lot of libraries available such as jQuery and others. But this can have some performance and incompatibility issues. This can lead to users just going to the next website.
The overall look of the website is important too: where to place certain buttons, where to place certain types of articles such as FAQ and support, and where and how to display error messages so that the user sees them without being bothered by them. An overall color scheme is important too.
The basic question is: how do you create an interface that encourages a user to buy/use your services?
I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important: when the colors on a website are irritating, you just want to click away. I have not found any articles that explain those concepts.
Does anyone have any tips and/or resources with articles that guide you in making the correct choices for your website?
Adhere to some standard UI Design Principles:
The structure principle: Your design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with your overall user interface architecture.

The simplicity principle: Your design should make simple, common tasks simple to do, communicating clearly and simply in the user's own language, and providing good shortcuts that are meaningfully related to longer procedures.

The visibility principle: Your design should keep all needed options and materials for a given task visible without distracting the user with extraneous or redundant information. Good designs don't overwhelm users with too many alternatives or confuse them with unneeded information.

The feedback principle: Your design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user, through clear, concise, and unambiguous language familiar to users.

The tolerance principle: Your design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions reasonably.

The reuse principle: Your design should reuse internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to rethink and remember.
Try to look for websites or web applications that have successfully achieved the goal you are aiming for, study their UIs, and try to find the common parameters and patterns that engage users on those sites.
I have always believed Amazon is very good at keeping users engaged on the site by showing relevant recommendations: "what other people are looking at" and "people who bought this also bought" style recommendations.
Another good read: What should a developer know about interface design usability and user psychology
Also, a good read on UI design considerations for e-commerce websites.
When it comes to UI design, ideally you will have an actual visual designer provide some guidance on your use of colors, and a UX designer provide some insight into your layout and flows based on their expertise in these areas. Barring input from those folks, if you design the pages and create the visuals yourself, iterative discovery is the best method to inform your design and provide insight into how these items affect the user and the overall experience you have created.
While there are certainly numerous books you can read and "guidelines" you can follow (and should for the initial design phases), no amount of book learning can replace real user interactions.
Build a functional prototype of your site/app/service/etc. and get it in front of actual users to gauge usability and value. This should be done in an ad-hoc format (versus formal usability testing) and the prototype should consist of smoke and mirrors as needed (i.e. it could be only clickable comps or primarily images with only the flows you're testing actually working).
Once you have some level of prototype, bring it to a place where people tend to be (and where you have internet access if needed). I have found Starbucks to be great for this. Grab some people and ask if you can have 10 minutes of their time; you'll find tons of willing participants. Provide these folks with a simple, specific scenario to complete in your prototype, and watch and learn.
People in a real-world situation using your software will quickly find its flaws and you'll be learning more than you could ever glean from a book or guideline. You'll be iterating on the design and tweaking items every time you test.
Test like this over a few weeks and you'll converge on a strong design very quickly. Once you have something that people can use and find value in, you're ready to go live. But testing should not end there: once live, you should continue to test and tweak via A/B and multivariate testing while keeping a close eye on your analytics and user behavior.
Discovery testing followed by continual A/B testing allows you to continuously tweak, test and learn, and ultimately to create the best solution possible.
On what criteria do software companies decide which projects to take on, and what factors go into choosing a project?
Return on investment, if they want to stay in business.
Return on investment is of course the bottom line, but a number of factors go into getting there:
Their own expertise: Do we have people with skills needed to do this? Can we hire some?
Available resources: Programmers, Managers, Hardware, Time, Financial resources.
PR: Even if we don't get paid that much, will this project get us more business?
PR: The pay is great, but do we really want to be associated with this client?
Their Mission/Goals: What fields/niches do they want to compete in? Do they want to expand?
Past experiences: We did a project like this, it was horrible. Let's not do that again.
Past experiences: It was fun last time, AND we can reuse half the code! Let's do it!
Usually management uses more sophisticated decision matrices to make the call, but more or less, these are the factors that go into them.
I am sure someone can provide a more specific/scientific answer.
Good question. The straightforward answer may seem to be Return on Investment (ROI). However, ROI is criticised for three reasons:
Short-termism: ROI is seldom calculated beyond 5-7 years (because of the increasing discount applied to any cash flows produced further in the future), yet some projects that are really worth doing only realise their full benefits much further out (the sketch after this list shows how quickly discounting erodes late benefits).
It's hard or impossible to put a monetary value on some things. The often-cited example is human life; another is moral principles. In the software world, the thing most frequently encountered that is very hard to price is the opportunities that will never emerge unless this project goes live. It's hard to put a value on emerging opportunities precisely because we don't know what they are until they actually emerge. And I don't mean opportunities that will simply not "open up", but ones that specifically emerge from the project.
ROI doesn't take wider strategy into account. The importance of strategy in the software world should not be underestimated, and the strategy should take into account the specifics of providing software products or services. Geoffrey Moore's "Crossing the Chasm" is a brilliant book I recommend; it is very pertinent to the software world.
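To make the short-termism point concrete, here is a minimal sketch (plain Python with made-up numbers, purely illustrative) of how discounting erodes the present value of benefits that arrive late:

    # Illustrative only: how a discount rate erodes the present value of a
    # benefit that arrives many years out (the "short-termism" point above).
    def present_value(cash_flow, year, rate):
        """Discount a single future cash flow back to today's money."""
        return cash_flow / (1 + rate) ** year

    rate = 0.10        # assumed 10% discount rate, purely for illustration
    benefit = 100_000  # the same nominal benefit, received in different years

    for year in (1, 5, 7, 10, 15):
        print(f"year {year:2d}: {present_value(benefit, year, rate):10,.0f}")
    # year  1:     90,909
    # year  5:     62,092
    # year  7:     51,316
    # year 10:     38,554
    # year 15:     23,939

At a 10% rate a benefit ten years out is worth less than 40% of its nominal value today, which is why an ROI horizon of 5-7 years quietly discards projects whose payoff comes late.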
Joel's recent instalment "Fruity treats, customization, and supersonics: FogBugz 7 is here" has a great sample of a strategy document and the reasoning behind it. It seems that Fog Creek plans to leave the bowling alley and enter the tornado (according to Geoffrey Moore's classification) with FogBugz 7.0, hence the strategy of removing barriers that prevent people from switching to FogBugz, instead of spending time introducing more vertical features.
Other tools that can be used for selecting projects are SWOT analysis, Pareto analysis (i.e. choosing a project to address 20% of causes that are responsible for 80% of problems), PESTLE, Cost-Benefit analysis (similar to ROI, including the critique).
However, a sane strategy that states what the company is and is not planning to do within a finite period of time (often the next year or two; in high-tech, market conditions are hard to predict beyond that horizon), that gives simple guidelines for choosing priorities, and that sets a clear direction for joint efforts seems to be the best starting point.
I also recommend reading a fabulous book “Almost Perfect” by Pete Peterson (former CEO of the maker of WordPerfect) that is available online. The book tells a real-life story of different strategies SSI Inc followed, some planned and stated and some ad hoc, and the way they were used to select what to work on.
ROI is only one measure. There are many other factors:
Risk management - for example, improving the process may not show any direct return on investment, but by adding e.g. unit tests the quality of the software can be improved and risk of a production bug reduced.
Compliance - there may be requirements by industry or government that need to be followed. Directly this may not show a return on investment because they may never be audited, but the downside to being non-compliant is huge (being shut down).
Manageability - providing metrics on bugs, project schedules etc. may not show a direct return on investment but it may allow them to better predict and manage their projects.
Security - this may be considered as a part of risk management, but it is a broad enough area to merit its own category. Making legacy code secure can cost a large amount of money and not show any immediate return, but there are obvious reasons why this is worthwhile.
Probably an easy one:
Are there any rules of thumb or pointers that could help recognise political requirements?
Let's say one of the stakeholders (your boss, the head of another department or an actual user) asks for a feature or a particular characteristic of software being developed by you or your team. Is there a litmus test to determine whether the requirement is political?
This question is really simple and is not about how to deal with political requirements or whether they are good or bad for software. How do you tell that whatever you have been asked to do is in pursuit of someone's tacit, or even openly stated, political agenda?
Will it really help you to know? I mean - if you're already embroiled in political games you'll know anyway. If you're not it isn't something you'll be able to use.
If you're going to have to implement the feature anyway I'd say just get on with it. Finding out that it's part of some management game will only demotivate you.
That said - if you're working on the sort of application that's so themware that you can't tell whether it's a real user feature or a political lever of some sort then it's probably a safe assumption that everything is political.
I would say that you should assume that all requirements are political.
If you are in a situation where more than one person is responsible for determining the set of features you implement, then every feature is effectively a negotiation between those people. That negotiation makes those features political.
However, even if there is only one person deciding what features ship, there is still a pretty strong chance that those decisions are political. In any organization of reasonable size (say more than ten people), you are going to have politics. The politics in that situation will differ from the design-by-committee situation: they will focus on currying the favor of the person who decides which features ship, rather than on the "if you support my feature, I'll support yours" dynamic that exists in the committee scenario. That process, however, is still political.
I'm not trying to say that it's not possible to have a development environment free of politics. It is. However, I would say that to pull it off you need:
A small, tightly knit team
A boss that focuses on creating an environment that fosters creativity, and delegating creative ownership, rather than focusing on control over the creative process.
Smart, highly talented and creative people that share a strong sense of purpose and aesthetic values.
Missing any one of those things, you are doomed to a repetitive deluge of office politics.
The best idea is to find out what all the features are to be used for, i.e. find out not only how a feature is to be implemented, but also learn why it should be done. It really helps to know the background of the desired solution. It might even allow you to suggest an alternate feature set that might better suit your customer (maybe even play your own political game).
As long as there is anything you do not understand, do not do the project. It will only cause problems at some point.
Obviously it's a tricky question and much depends on your definition of the 'political'.
I would start with the simple question:
* Are the authors/originators of the requirement really using the software in question?
The requirement could come from your boss, but it could still be a valid requirement of the real user, translated through them.
Here are some I've seen:
It directly contradicts other requirements
It is clearly not feasible technically
It is "out in left field" ... it doesn't fit into the defined problem space
It contradicts common sense
BEWARE: sometimes this results from your use cases being wrong or incomplete. I've also purposely allowed some of these to proceed to development (e.g. eye candy that sells the product but is useless, or at least generally not used by the operators).
Use the SCRUM approach. Don't describe a feature as
"It should be doing this and that in the following way"
While the sentence above describes all you need to know to implement the feature, it does not justify the feature. My SCRUM book says features should be written down as a story. A story looks like this:
"As a <user-role>
I need a <functionality>
So that I get <business value>"
A feature that cannot be justified using such a story is an unjustified feature and thus there is no use to actually implement it.
E.g.
"As a visitor of a web portal I need a way to authenticate, so I can access my customer data, but nobody else can"
Now you don't only know that you need authentication for your web portal, you also know who needs it (the visitors, basically everyone planning on using it more intensively) and why it is needed, because it gives the user some value.
Other examples:
"As a passenger I need a list of all my booked journeys, so that I know when I'm going to travel where and won't lose the overview"
"As a book keeper, I'd like to have the sales tax being automatically printed to each bill based on customer data, so that I don't have to enter it manually each time I'm printing a bill"
If every feature needs to be written like that, you'll automatically see if a feature is for the customer, because it is really necessary, or just something your boss/company wants to have and also why they want to have it (what is the big picture behind it? Why are they doing it?).
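If it helps to see the template as something you could check mechanically, here is a tiny illustrative sketch (plain Python, not taken from any Scrum tool; the names are invented) that refuses to render a story whose business value is missing:

    # Illustrative sketch: the "As a <role> I need <functionality> so that
    # <business value>" template as a data structure.
    from dataclasses import dataclass

    @dataclass
    class UserStory:
        role: str            # who wants it
        functionality: str   # what they need
        business_value: str  # why it matters -- the part that justifies the feature

        def render(self) -> str:
            if not self.business_value.strip():
                raise ValueError("Unjustified feature: no business value stated")
            return (f"As a {self.role} I need {self.functionality} "
                    f"so that {self.business_value}")

    story = UserStory(
        role="visitor of a web portal",
        functionality="a way to authenticate",
        business_value="I can access my customer data, but nobody else can",
    )
    print(story.render())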
The use of ambiguous words or phrases is often political.
However,
Never attribute to malice that which is adequately explained by stupidity.
We are just starting on a pretty big project with lots of sub-projects. We don't currently use any kind of named process, but I am hoping to get some kind of agile/Scrum-like process in by the back door.
The area I will be focusing on most is having a good backlog for the whole project and, at least in my head, the idea of an iteration where some things are taken from the backlog, looked at in more detail and developed to a reasonable deadline.
I wonder what techniques people use to break projects down into things to go in the backlog, and, once the backlog is created, how it is maintained and ordered. Also, how are relationships between elements maintained (i.e. this must be done before it is possible to do that, or this was one story and now it is five)?
I am not sure what I expect the answer for this question to look like. I think what may be most helpful is if there is an open source project that keeps its backlog online in some way so I can see how others do it.
Something else that would get +1 from me is examples of real user stories from real projects (the "a user can log on" story does not help me picture things in my project).
Thanks.
I would counsel you to think carefully before adopting a tool, especially since it sounds like your process is likely to be fluid at first as you find your feet. My feeling is that a tool may be more likely to constrain you than enable you at this stage, and you will find it no substitute for a good card-wall in physical space. I would suggest you instead concentrate your efforts on the task at hand, and grab a tool when you feel like you really need one. By that stage you'll more likely have a clear idea of your requirements.
I have run several agile projects now and we have never needed a more complex tool than a spreadsheet, and that on a project with a budget of over a million pounds. Mostly we find that a whiteboard and index cards (one per user story) is more than sufficient.
When identifying your stories, make sure you always express them in terms that make sense to your users - some (perhaps only small) piece of surfaced functionality. Never allow yourself to slip into writing stories about technical details that you could not demonstrate to a user.
The skill when scheduling the stories is to try to prioritise the things you know least about first (plan for what you want to learn, rather than what you want to do) whilst also starting with the stories that will allow you to develop the core features of your application, using subsequent stories to wrap functionality (and technical complexity) around them.
If you're confident that you can leave some piece of the puzzle till later, don't sweat on getting into the details of that - just write a single story card that represents the big conversation you'll need to have later, and get on with the more important stuff. If you need to have a feel for the size of what's to come, look at a wideband delphi estimation technique called planning poker.
The Mike Cohn books, particularly Agile Estimating and Planning will help you a lot at this stage, and give you some useful techniques to work with.
Good luck!
Like DanielHonig we also use RallyDev (on a small scale) and it sounds like it could be a useful system for you to at least investigate.
Also, a great book on the user story method of development is User Stories Applied by Mike Cohn. I'd certainly recommend reading it if you haven't already. It should answer a lot of your questions.
I'm not sure if this is what you're looking for, but it may still be helpful. Max Pool from codesqueeze has a video explaining his "agile wall". It's cool to see his process, even if it may not necessarily relate to your question:
My Agile Wall (Plus A Few Tricks)
So here are a few tips:
We use RallyDev.
We created a view of packages that our requirements live in.
Large stories are labeled as epics and placed into the release backlog of the release they are intended for. Child stories are added to the epics. We have found it best to keep the stories very granular. Coarse grained stories make it difficult to realistically estimate and execute the story.
So in general:
Organize by release.
Keep iterations between 2-4 weeks.
Product owners and project managers add stories to the release backlog.
The dev team estimates the stories based on T-shirt sizes, points, etc.
In sprint planning meetings the dev team selects the work for the iteration from the release backlog.
This is what we've been doing for the past 4 months and have found it to work well. Very important to keep the size of the stories small and granular.
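As a rough illustration of the epic/child-story structure described above (just a sketch in Python; this is not RallyDev's actual data model, and all names and numbers are invented), the roll-up of estimates might look like this:

    # Hypothetical sketch of a release backlog: epics hold granular child
    # stories, and estimates roll up from the children.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Story:
        title: str
        points: int  # estimate in points (or map T-shirt sizes to points)

    @dataclass
    class Epic:
        title: str
        children: List[Story] = field(default_factory=list)

        def estimate(self) -> int:
            return sum(s.points for s in self.children)

    checkout = Epic("Checkout", [
        Story("Guest can add item to basket", 2),
        Story("Guest can pay by card", 5),
        Story("Guest receives order confirmation email", 3),
    ])
    print(checkout.title, checkout.estimate())  # Checkout 10

Keeping the children this granular is what makes the epic estimate meaningful; a single coarse-grained story would hide most of that information.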
Remember the INVEST and SMART acronyms for evaluating user stories; a good story should be:
I - Independent
N - Negotiable
V - Valuable
E - Estimable
S - Small
T - Testable
SMART:
S - Specific
M - Measurable
A - Achievable
R - Relevant
T - Time-boxed
I'd start off by saying keep it simple: use a shared spreadsheet with tracking (and backups). If you see scaling or synchronization problems such that maintaining the backlog in a consistent state is getting more and more time-consuming, trade up. This will automatically validate and justify the expenditure/retraining costs.
I've read some good things about Mingle from Thoughtworks.
Here is my response to a similar question that may give you some ideas:
Help a BA! Managing User Stories ...
A lot of these responses have suggested tools to use. However, the reality is that your process will be much more important than the tools you use to implement it. Stay away from tools that attempt to cram a methodology down your throat. But also be wary of simply implementing an old non-agile process with a new tool. Here are some points to consider when choosing tools for processes:
A bad process instrumented with a software tool will result in a bad software tool implementation.
Processes will change based on the group you are managing. The important thing is the people, not the process. Implement something they can work successfully in, and your project will be successful.
All that said, here are a few guidelines to help you:
Start with a pure implementation of a documented process.
Make your iterations small.
After each iteration, talk with your teams and ask what they would change; implement the changes that make sense.
For larger organizations, if you are using SCRUM, use a cascading stand-up mechanism: Scrum Masters meet with their teams, then the Scrum Masters meet in stand-ups of 6-9, with a super-Scrum-Master responsible for reporting the items from each Scrum Master's scrum up to the next level, and so forth.
You may find that weekly super-scrum meetings will suffice at the highest level of your hierarchy.
How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release?
We are a small software house, but I am interested in the best practices of how to measure usability.
Any help appreciated.
I like Paul Buchheit's answer on this from Startup School. The short version of what he said: listen to your users. Listening does not mean obeying your users. Take in the data, filter out all the bad advice, and iteratively clean up the site. Lather, rinse, repeat.
If you are a small shop you probably don't have a team of QA or Usability people or whatever to go through the site. Your users are going to be the ones that actually use the site though. Their feedback can be invaluable.
If something is too hard for one of your users to use or too complex to understand why they should use it, then it might be the same way for 1000 other users. Find a simpler way of accomplishing the same thing.
Once you have gathered all of this feedback and have a list of things to do, do the simplest ones first. That way you have forward moving usability progress.
What I like to do is give someone an install package, ask them to perform a number of tasks related to how the application works, and watch.
Hardest part is to keep your mouth shut.
Some of the best advice on usability testing is available on Jakob Nielsen's Website http://www.useit.com. He advocates what Will mentioned - ask users to perform various tasks on your website or web application and then sit back to see what they do.
Do not interrupt the users by asking questions or guiding them. Just observe them and document their flow. You can also get hardware and software to do eye-tracking and understand what captures the attention of the users.
However, usability should not start from the testing phase. You must have some general idea of what users generally like and do not like when you do development. There are many websites and books outlining generally accepted usability standards and principles.
Normally, we test the usability of new interfaces by asking a small selection of users to try out a beta version.
We give a small amount of instruction as to what the new features/screens are supposed to do and let them dive straight into it. It's very interesting to see where they are looking and clicking. We never demo the new features - we only talk about what it does.
If the UI changes are minimal then they go live and we gather feedback from real users. It's only when we are making big changes that we go through usability tests on beta.
When developing new screens it usually helps a hell of a lot to get a colleague sat in front of the UI and ask them what it does. Which areas do they click on? Where are they looking first? What sections are drawing their attention? etc.
I agree with Adam; using a very computer illiterate person is very helpful. However, what I've run into before with that is the program I want them to try out just isn't "up their alley" as far as something they would ever want to do.
A good way to start is with a paper prototype. Have specific tasks that you want your "user" to perform and have them do it. For more on paper prototyping, start here.
I frequently take any new interface I'm working on to one of our technical support people. They've heard every complaint about interfaces that you could ever imagine, so if anyone is going to think up potential problems, they will.
Also, and I'm not kidding about this, I often take the least computer-literate person I know (your mother is often a good choice... but they have to have used a computer before, otherwise it's going to be pointless) and let them loose on the interface with no instruction. If they can't figure out where things are intuitively, then your GUI likely needs work. Remember, don't make them think! (yes, I know this is about web design, but it applies)
There are many ways to test the usability of a system; please check any available literature you can find. I just want to stress that usability testing is not as hard as you or anyone might think. In a famous paper called "A mathematical model of the finding of usability problems" (INTERACT '93 and CHI '93), J. Nielsen and T. K. Landauer showed that only five users are enough to find most problems in a small system.
If you have no way to read this paper, try this article in the author's website:
http://www.useit.com/alertbox/20000319.html
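The model in that paper (and the linked article) boils down to one formula: the share of problems found with n test users is roughly 1 - (1 - L)^n, where L is the chance that a single user exposes a given problem (around 31% in their data). A quick sketch of what that curve implies:

    # Proportion of usability problems found with n users, per the
    # Nielsen/Landauer model: found(n) = 1 - (1 - L)**n, with L ~= 0.31.
    L = 0.31
    for n in (1, 3, 5, 10, 15):
        found = 1 - (1 - L) ** n
        print(f"{n:2d} users -> ~{found:.0%} of problems found")
    #  1 users -> ~31% of problems found
    #  3 users -> ~67% of problems found
    #  5 users -> ~84% of problems found
    # 10 users -> ~98% of problems found
    # 15 users -> ~100% of problems found

This is why the article argues for several small tests of about five users each rather than one big test.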
It's been a while since this question was last active, but here goes anyway.
From experience:
Always use objectively measurable criteria to decide whether usability is better or not (time to accomplish carefully selected tasks, inactive time, KLM-type metrics); here a key/mouse logger can be a precious ally (see the sketch after this list).
Never go too far ahead before consulting and measuring again with your client (do not cage yourself in with the paper prototype and emerge only with the finished product... that just never works).
Read, read, read, try, evolve.
Keep things simple and always remember the task at hand (why the user needs the interface).
Test, test and test again...
Always get to the bottom of user requests. Although the checkbox the user requests in this particular place may be the best thing to do, such a request almost always hides a more fundamental flaw.
The system's user (the one using it, as opposed to the one paying for it) is your best ally; keep him/her on your side.
Never be afraid of refactoring your design and evolving your system. Evolve your metrics and measurements too; however, be careful not to break measurement continuity, as it is the best token of objective progress in a VERY subjective world.
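For the first point above, here is a minimal sketch of how time-on-task and inactive time might be computed from a key/mouse log; the event format and idle threshold are invented purely for illustration:

    # Hypothetical sketch: deriving "time to accomplish task" and "inactive
    # time" from a key/mouse logger. Event format is made up: (seconds, kind).
    events = [
        (0.0, "task_start"),
        (2.1, "click"), (3.4, "key"), (9.8, "click"),   # 6.4 s gap: user stalled
        (11.0, "key"), (12.5, "click"),
        (14.0, "task_end"),
    ]

    IDLE_THRESHOLD = 5.0  # seconds without input counted as "inactive" (assumed)

    times = [t for t, _ in events]
    time_on_task = times[-1] - times[0]

    # Sum only the gaps between consecutive events that exceed the threshold.
    inactive = sum(
        gap
        for prev, curr in zip(times, times[1:])
        if (gap := curr - prev) > IDLE_THRESHOLD
    )

    print(f"time on task: {time_on_task:.1f} s, inactive: {inactive:.1f} s")
    # time on task: 14.0 s, inactive: 6.4 s

Tracking the same two numbers for the same tasks across design iterations is what gives you the measurement continuity mentioned in the last point.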
Recommended reading (other than previously proposed):
Handbook of Usability Testing by Jeff Rubin. A bit extreme, but we toyed with an agile version of his approach and found that if we spent 30 minutes a week with users we would get a LOT of useful feedback while not getting swamped with too much info.
Keep a close watch on the Shneidermans and Nielsens of this world, and others that may arise.
As far as usability inspection goes, there are several viable methods. They require different amounts of resources in terms of people, analysis and equipment.
The most common, and the easiest to perform, is called
Heuristic Evaluation
You basically walk through each screen to check if it conforms to the heuristics set by you, or your customer.
Check this article by Nielsen
Cognitive walkthrough
This method requires you to ask the user to complete steps in the application. You prepare steps for the user to complete. Issues that arise during this walkthrough are taken into consideration when finishing the application.
Check this paper for details.
Think Aloud Analysis
I have used this method mostly in the early stages of prototyping. I let the user talk freely about the system while it is being used, and ask questions about use, design, etc. You can get a really nice view of the general feelings about the system, and of what features are lacking.
Check this paper for details.
Interaction analysis
This is a trickier one. I have only used the data-gathering techniques it proposes. This technique takes into account context, activities, body language, etc. Interaction analysis is commonly focused on research, not so much on commercial evaluations.
This link takes you to the article.
Keep in mind that these methods take practice to perfect. I would start with HE, continue to CW and THA. And only use Interaction Analysis if you have lots of resources and time.
There are a number of methods to test or evaluate the usability of an application. They can be broken down into qualitative and quantitative methods, and categorized according to when you are planning to test.
They are further categorized based on whether users are involved or experts do the testing.
To name a few methods,
Expert Reviews - user interface or usability experts rate the usability of an interface based on decided heuristics and principles
Formative usability testing - task flows are taken and users are provided with tasks to be completed. Qualitative feedback is collected on what the users feel the pain points are during the testing. This form of testing is done during design, to feed back into the design of the application.
Summative usability testing - task flows are taken and users are provided with tasks to be completed. The application's performance on efficiency, effectiveness and satisfaction is measured based on the users' completion of tasks.
The important difference is whether you engage a user or an expert to tell you about the usability, and, further, when you do the evaluation: at the end of the project or during the design phases.
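As a purely illustrative example of the summative measures mentioned above, effectiveness, efficiency and satisfaction can each be reduced to a simple aggregate over test sessions (the data below is made up):

    # Illustrative sketch of summative usability metrics. Each tuple is one
    # user's attempt at a task: (completed?, seconds taken, satisfaction 1-5).
    sessions = [
        (True, 42.0, 4), (True, 55.5, 5), (False, 120.0, 2),
        (True, 38.2, 4), (True, 61.0, 3),
    ]

    completed = [s for s in sessions if s[0]]
    effectiveness = len(completed) / len(sessions)              # completion rate
    efficiency = sum(s[1] for s in completed) / len(completed)  # mean time on completed tasks
    satisfaction = sum(s[2] for s in sessions) / len(sessions)  # mean questionnaire rating

    print(f"effectiveness: {effectiveness:.0%}")   # 80%
    print(f"efficiency:    {efficiency:.1f} s")    # 49.2 s
    print(f"satisfaction:  {satisfaction:.1f}/5")  # 3.6/5

In practice satisfaction is usually gathered with a standard questionnaire rather than a single rating, but the aggregation idea is the same.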
I'm a strong believer in what I call 3-martini usability testing. When designing a system, imagine that the person who will be using it has just had 3 martinis.
Before handing over the system to colleagues (other programmers, quality assurance, tech support) or usability testers, an informal test with a couple of friends and a bottle of vodka (outside of work, of course) can often prove instructive.