I've been building desktop software for over 10 years now; mostly it's simple data-input software. My problem is that it always looks the same: a tree view on the left and a lot of text/data fields on the right, depending on the type of data currently being worked on. Are there any fresh ideas on how such software should look nowadays?
For further clarification:
It's very hierarchical data, mostly for electronic devices. There are elements of data which provide static settings for the device, and there are parts which describe some sort of 'program' for the device. There are a lot (more than 30) of different input masks. Of course I use combo boxes and up/down entry fields.
Having all of your software look the same is a good thing. One of the best ways to make it easy for people to use your software is to make it look exactly the same as other software your users already know how to use.
There are basically two common strategies for handling entry of a lot of data. The first is to have lots of data entry fields on one page. The second is to have only a few data entry fields but a lot of pages, in a sort of wizard-style interface. Expert users find the latter much slower to use, as do users who are entering data over and over again. However, the wizard-style interface is less confusing for newer users, since it offers fewer elements at once and tends to provide more detail on them.
I do suggest replacing as many text fields as possible with auto-complete-based combo-boxes. This allows users to enter data exactly the same as with text-boxes, but also allows users to save typing by hitting the down key to scroll through choices after typing part of the data in.
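As a rough sketch of that idea (assuming a Java/Swing front end, which the question doesn't actually specify; the item list and class names are invented for the example), an editable JComboBox can complete against its item list as the user types:

import javax.swing.*;
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;

public class AutoCompleteComboDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            String[] choices = {"Resistor", "Relay", "Regulator", "Capacitor"};
            JComboBox<String> combo = new JComboBox<>(choices);
            combo.setEditable(true); // the user can still type free text, like a plain text field

            JTextField editor = (JTextField) combo.getEditor().getEditorComponent();
            editor.addKeyListener(new KeyAdapter() {
                @Override
                public void keyReleased(KeyEvent e) {
                    if (!Character.isLetterOrDigit(e.getKeyChar())) {
                        return; // only complete on printable input, not arrows/backspace
                    }
                    String typed = editor.getText();
                    for (String choice : choices) {
                        if (choice.toLowerCase().startsWith(typed.toLowerCase())) {
                            // fill in the rest of the match and select it, so further typing overwrites it
                            editor.setText(choice);
                            editor.select(typed.length(), choice.length());
                            break;
                        }
                    }
                }
            });

            JFrame frame = new JFrame("Auto-complete demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(combo);
            frame.pack();
            frame.setVisible(true);
        });
    }
}

Typing "ca" fills in "Capacitor" with the remainder selected, so the user can keep typing to override it or hit the down key to browse the other choices.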
Providing more detail on what data is being entered would probably yield more specific answers.
I'd also answer with a question: what is your motivation for considering a change? Like the other posters, I agree that there is some value in consistency, but there's also strong value in not ignoring those niggles-in-the-back-of-the-mind feelings you have. Maybe you have a sense that your users aren't as productive as you'd like them to be, or you've heard feedback to that effect from your customers, or you're just looking to add some innovation for your own interest. Scratching itches is a good trait in a developer, in my view.
One thing I'd advocate would be a detailed user study. How much do you know about what your users do with the interfaces you create? Do you know the key tasks, the overall workflow? Would you know if one task regularly consumed 60% of your users' time, or if there was a task that was only performed once a month? Getting a good sense of what the users actually do (and not what they say they do) is a great place to start thinking about what changes might be worthwhile, especially if you can refactor the task to get a qualitatively different user experience.
A couple of specific alternative designs you might like to include in re-visioning the UI might be facet browsing (which works well for searching and exploring hierarchies), or building a database of defaults / past responses so that text boxes can use predictive completion. However, I think my starting point would be the user study.
Ian
If it works...
Depending on what you've got happening with the data (that is, is it hierarchical, or fairly flat), you might want to try a tab-based metaphor, or perhaps the "Outlook-style", with a sidebar showing the sections of an application. One other notion I've played with lately is the "Object desktop" that I first saw proposed by Scott Ambler (Building Object Applications That Work). In this, you can display collections of items, or the user can "peel off" individual records for easy access.
Your information is not enough to really suggest an interface alternative. However, may I answer your question with a question? Why do you think you have to change it? Has your customer complained? If not, it looks like your customer is happy with the way the software works right now, so I wouldn't change it. If your customer complains about it, he'll most likely not just say "It's bad"; he'll say "Why can't it look like ...?", and that will give you an idea of how to change it.
I once had to re-design a very outdated goods management system. The old one was written for a now-dead database system, still running in MS-DOS. The customer suggested I should create a prototype of how this re-implementation might look, and then he'd decide whether I got the job or not. I replaced the old, dead database with a modern MySQL database, I replaced the problematic shared peer access with a client-server approach, and I chose to rewrite the UI in Java, since different OSes were used and this had the smallest porting costs. So far the concept seemed good, and the customer liked it. However, when he asked his employees what they thought about it, they asked: "So far it's great, but we have one question: why doesn't it look like the old one?" It turned out that even with all the modern technologies, they wanted the interface to look and be operated exactly like the old one. So I had to re-build a 1986 usability-nightmare MS-DOS UI in Java, because no other UI was accepted.
For me it is more about a clean, usable, logical design than anything else. If your program makes sense to the user, isn't clunky and works as advertised, then everything else UI related is essentially just like painting the house. I've sometimes rolled out a new version of a program with essentially the same controls that are skinned differently.
There's a reason that you've probably chosen the tree view - because it probably makes really good sense to do so. There are different containers and controls available in the various UI libraries, depending on the language, but I tend to stick with the familiar because the user probably gets how a tree control works and how a combobox works.
A user interface needs to be usable; just don't make the mistake of changing from something that works to something fancy-schmancy just because it looks better (been down that road)...
Make sure that added widgets/controls really add business value.
Make sure that the added widgets/controls do not mess up your architecture (too much) and make the application harder to manage/maintain.
Try to keep to platform standards on how to do things (for example, the Vista UX guidelines).
:)
//W
What UI/GUI guidelines should be followed that subtly (or not so subtly) direct users so they don't shoot themselves in the foot?
For instance, you might want to give power users the ability to "clean" a database of infrequently used records, but you don't want a new user to try out that option if they've just spent hours entering new records - they may lose them all because they're 'infrequently used'. Please don't address this specific issue - it's just here to clarify the question.
While one could code a bunch of business logic in place to prevent some issues, you can't account for everything a user might do.
What are some common techniques, tips, and tricks that prevent improper usage?
I.e., how should I design the interface to alert users that a function or action should be taken with care?
What should I design in that limits risk and exposure if a poor action is taken?
-Adam
Everything can be undone. Don't erase - deactivate. Back up before every destructive operation, and give the user a way to restore.
That's the path. It's hard to follow it all the way, but it's what you're aiming for.
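A minimal sketch of the "don't erase - deactivate" idea (all class and field names below are invented for illustration): deletion only flips a flag, so every destructive operation has a trivial restore.

import java.util.ArrayList;
import java.util.List;

public class SoftDeleteDemo {

    // Hypothetical record type: deleting only flips a flag, so it can always be undone.
    static class CustomerRecord {
        final String name;
        boolean deleted = false;
        CustomerRecord(String name) { this.name = name; }
    }

    static class CustomerStore {
        private final List<CustomerRecord> records = new ArrayList<>();

        void add(CustomerRecord r)     { records.add(r); }
        void delete(CustomerRecord r)  { r.deleted = true; }   // the "dangerous" action is reversible
        void restore(CustomerRecord r) { r.deleted = false; }  // the undo

        List<CustomerRecord> active() {
            List<CustomerRecord> live = new ArrayList<>();
            for (CustomerRecord r : records) {
                if (!r.deleted) live.add(r);
            }
            return live;
        }
    }

    public static void main(String[] args) {
        CustomerStore store = new CustomerStore();
        CustomerRecord r = new CustomerRecord("ACME Ltd");
        store.add(r);
        store.delete(r);                                // the user "deletes" the record...
        System.out.println(store.active().size());      // ...it vanishes from normal views (0)...
        store.restore(r);                               // ...but an undo brings it straight back
        System.out.println(store.active().size());      // (1)
    }
}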
Make it possible to undo dangerous actions.
If it's a reasonably big application or system, require separate admin access for dangerous operations as well.
Don't Make Me Think
And you can - in fact, you HAVE to - account for everything they might do, because you (as the designer) are the one who gives them the ability to do all those things.
Before putting ANY item on a GUI, ask yourself "Can this be misused?", and if it can, you might want to go with a lower level of customizability.
Example hierarchy
Button - Can be clicked.
T/F Radio Button (mandatory) - Only two options.
Combo Box - Many options, possibly "no option". more confusing.
Text field - Myriad of wildly inconsistent options. More confusing for user, more dangerous for coder.
Basically, if the user doesn't need extra options, then don't give them extra options. You'll only confuse them.
This is an old article, but it's still a great one:
Microsoft Inductive User Interface Guidelines
Never rely on anything that says "Are you sure?" The user is ALWAYS sure, and that's assuming they even bothered to read it before dismissing it.
Partition your users and have fine-grained permissions.
Define some power-user permissions that enable the "more dangerous" operations.
Power user permission is not given out lightly -- only to actual power users -- and revoked readily.
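A sketch of what those fine-grained permissions might look like in code (the permission names, classes, and the purge operation are all hypothetical):

import java.util.EnumSet;
import java.util.Set;

public class PermissionDemo {

    // Hypothetical permission set: only some users get the destructive ones.
    enum Permission { VIEW, EDIT, PURGE_RECORDS }

    static class User {
        final String name;
        final Set<Permission> permissions;
        User(String name, Set<Permission> permissions) {
            this.name = name;
            this.permissions = permissions;
        }
    }

    static void purgeInfrequentlyUsedRecords(User user) {
        if (!user.permissions.contains(Permission.PURGE_RECORDS)) {
            throw new SecurityException(user.name + " is not allowed to purge records");
        }
        System.out.println("Purging on behalf of " + user.name);
    }

    public static void main(String[] args) {
        User newHire   = new User("new hire",   EnumSet.of(Permission.VIEW, Permission.EDIT));
        User powerUser = new User("power user", EnumSet.allOf(Permission.class));

        purgeInfrequentlyUsedRecords(powerUser);  // allowed
        purgeInfrequentlyUsedRecords(newHire);    // throws: the dangerous option simply isn't available
    }
}

The point is that the dangerous operation is not even reachable for users who haven't explicitly been granted the power-user permission.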
I'm of the school of thought that, in cases of unclear or ambiguous UI, the user is rarely wrong and the UI is always to blame. So, when you say "punch the user in the face", tagged with "pebkac", I'm thinking that you could do with a slap in the face yourself.
Unfortunately, I'm unable to give any good UX advice, since I'm a mere programmer, and therefore more or less by definition disqualified as a good UI designer. I'd just like to point out the possibility that you could actually be the one who needs to get a clue, and suggest trying to be more humble towards the users.
Edit for Adam:
The little I know about UX is how little I know about it. It's an entire career path. I know for a fact that there's very little anyone can learn by asking a single make-me-good-at-this question at Stack Overflow. It's like me asking "help me write better code", with the body text formulated as a story of how my colleagues ridicule me of my code.
We, programmers, are engineers. We like order and reason and logical decisions. But the average user is not a programmer, not an engineer and, in many cases, not interested in computers themselves the very least bit.
I'm glad that people are giving you nuggets of good advice, and I'm glad that you, contrary to my first impression (I'm sorry about that), are eager to take those bits and understand the needs of the user.
But the point remains: You need to buy books (Don't Make Me Think is a great place to start, as already recommended). You need to watch how people use your software. You need to observe where they stumble, and jig things around until your UIs seem natural.
I'm sorry I still can't give you an answer. Because I don't have it. And even if I would have it, I would probably have to charge you 50EUR an hour, for years into the future.
Make the results of the user's action visible and offer a way to undo those changes.
When the changes are visible, the user gets feedback on whether the results were what he intended, and if they are not, the possibility to undo lets him try again to reach his goal. If possible, make the results of the action visible before the user invokes the action (for example, when dragging some element, show what would happen if the user released the mouse button: visualize the addition of the element where it would be moved to and its removal from where it was moved from).
There are a couple of types of undo. The simplest is a single-step undo (as in Notepad), but it is often not enough. Better is a multi-step undo (as in Word), which covers most of the cases but does not allow undoing a specific action without undoing all the actions that have been done after it. That can be solved by object-specific undo; for example, in a form with many fields (or cells in a grid, as in Excel), right-clicking the field would show a list of previous values in that field. For deleted data you could have a store of deleted data, from where the user can restore things after deleting them (for example, if the user deletes a slide in PowerPoint). And finally you could have a full version history of every change, the way Local History works in IntelliJ IDEA: make a history entry every time the file is saved (and save everything automatically after a couple of seconds of inactivity).
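One common way to implement the multi-step variant (this is a generic command-pattern sketch, not how Word or IDEA actually do it; the names are illustrative) is a pair of stacks of command objects:

import java.util.ArrayDeque;
import java.util.Deque;

public class UndoDemo {

    // Minimal command abstraction: anything that can be done can also be undone.
    interface Command {
        void execute();
        void undo();
    }

    static class UndoManager {
        private final Deque<Command> undoStack = new ArrayDeque<>();
        private final Deque<Command> redoStack = new ArrayDeque<>();

        void perform(Command c) {
            c.execute();
            undoStack.push(c);
            redoStack.clear();            // a brand-new action invalidates the redo history
        }

        void undo() {
            if (undoStack.isEmpty()) return;
            Command c = undoStack.pop();
            c.undo();
            redoStack.push(c);
        }

        void redo() {
            if (redoStack.isEmpty()) return;
            Command c = redoStack.pop();
            c.execute();
            undoStack.push(c);
        }
    }

    public static void main(String[] args) {
        StringBuilder document = new StringBuilder();
        UndoManager history = new UndoManager();

        // One concrete command: append some text to the "document".
        Command typeHello = new Command() {
            public void execute() { document.append("hello"); }
            public void undo()    { document.setLength(document.length() - "hello".length()); }
        };

        history.perform(typeHello);
        System.out.println(document);   // hello
        history.undo();
        System.out.println(document);   // (empty)
        history.redo();
        System.out.println(document);   // hello
    }
}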
Confirmation dialogs don't help. The user might read it the first time, but soon after that clicking "OK" in the dialog becomes an automated process, and the user will press Enter before the dialog even shows up. Then the confirmation dialog has become just a source of unnecessary mechanical work. The user is always sure about doing some action, even when he is wrong - otherwise he would not have done that action.
Well there are a few different ways that I can/do go about these types of things.
User documentation - first and foremost give them some documentation to work with, and make the systems easy for them to use. Just general usability and descriptive names/actions for everything.
Provide confirmation screens with warnings. Full disclosure of what the action is going to do, with the warnings inside of a yellow box. It draws attention to it and helps prevent the need for the other items.
Have a roll-back plan. For large risky operations you can either simply set "deleted" flags, or offload the data to a temporary "recycle bin" of sorts should they accidentally remove/modify data that was unintended.
Require multiple approvals, for data purge operations especially go to a two-tiered approach, requiring approval from separate users.
These are just a few of the ideas that I have.
Two things immediately come to mind.
The first is the notion of progressive disclosure, i.e., only show users what they need in order to accomplish the task at hand. How many UIs have we seen that have hundreds of controls on a single dialog? Divide the controls into their respective tasks and only allow the user to do a single task at a time. An Advanced button on a dialog is one way to implement this, and this concept has the added benefit of separating the power users from the run-of-the-mill users. Run-of-the-mill users are less likely to attempt a task that is likely to be beyond their skill level.
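A small Swing sketch of the Advanced-button flavour of progressive disclosure (the specific controls here are invented for the example): the rarely needed, riskier options stay hidden until the user explicitly asks for them.

import javax.swing.*;
import java.awt.BorderLayout;

public class ProgressiveDisclosureDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // The handful of controls everyone needs.
            JPanel basic = new JPanel();
            basic.add(new JLabel("Name:"));
            basic.add(new JTextField(15));

            // The dangerous/rare options stay out of sight until asked for.
            JPanel advanced = new JPanel();
            advanced.add(new JCheckBox("Purge infrequently used records"));
            advanced.setVisible(false);

            JToggleButton showAdvanced = new JToggleButton("Advanced >>");
            showAdvanced.addActionListener(e -> {
                advanced.setVisible(showAdvanced.isSelected());
                SwingUtilities.getWindowAncestor(advanced).pack(); // resize the window to fit
            });

            JFrame frame = new JFrame("Progressive disclosure");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setLayout(new BorderLayout());
            frame.add(basic, BorderLayout.NORTH);
            frame.add(showAdvanced, BorderLayout.CENTER);
            frame.add(advanced, BorderLayout.SOUTH);
            frame.pack();
            frame.setVisible(true);
        });
    }
}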
The second is to leverage the wizard concept for complicated tasks. I know wizards have fallen out of style, but if a task is truly complicated, users usually appreciate having their hands held the first few times. A good example of this is the WinZip wizard interface. If you've never zipped a file before, this wizard uses a logical progression to walk you through the process. And then, once you've grown comfortable with it, you can switch to the classic interface to zip files more quickly.
Of course, doing all of this requires a commitment not only from the developers, but from management. And that, sadly, is where many of these usability battles are lost.
I'm asking about the pattern, not a framework. This is kind of a follow-up to a question on UI auto-generation.
Do you believe in the concept of UI auto-generation from metadata?
What kind of problems can be approached this way?
The question arose when I created a small library to support my student projects, which generates an interactive CLI at runtime based on an object's metadata. And I think the CLI it generates is quite decent.
On the other extreme is the Naked Objects Framework, which is rather universal, but the UI it generates is horrible, IMO.
It's clear that every problem is specific and needs a specific UI, but maybe there are several classes of problems where auto-generation is acceptable?
Yes, I believe the concept of metadata-based auto-generated applications is very sound - mainly because it drastically reduces development time and improves code quality by reducing the massive redundancy you have in most applications where each domain data field is represented in the database, in the model, in the UI, and often also several times in various mapping layers.
I think the future is auto-generated apps that can be modified wherever necessary. Currently, this is AFAIK not really possible; for example, Rails only allows you to fully customize the UI when you use static scaffolding, which basically means code generation, i.e. many further changes in the domain model are then not automatically represented in the UI because the duplication has happened when the code was generated.
I believe the first framework that manages to combine complete auto-generation with complete modifiability afterwards will become the de-facto development standard to a previously unknown degree. Though most likely we'll get there in small steps so that there will not be such a single dominating framework.
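As a toy illustration of the general idea (this is not the API of Rails, Naked Objects, or any other framework mentioned here; the domain class is invented), a few lines of reflection can already generate a crude form from a plain object's fields, which is exactly the redundancy-killing effect described above:

import java.lang.reflect.Field;
import javax.swing.*;
import java.awt.GridLayout;

public class GeneratedFormDemo {

    // A hypothetical domain object: its fields are the only "metadata" used here.
    static class Device {
        String name = "Sensor-01";
        int pollingIntervalSeconds = 30;
        boolean enabled = true;
    }

    // Build one labelled input per field, choosing a widget from the field's type.
    static JPanel generateForm(Object domainObject) throws IllegalAccessException {
        Field[] fields = domainObject.getClass().getDeclaredFields();
        JPanel panel = new JPanel(new GridLayout(fields.length, 2, 4, 4));
        for (Field f : fields) {
            f.setAccessible(true);
            panel.add(new JLabel(f.getName()));
            if (f.getType() == boolean.class) {
                panel.add(new JCheckBox("", f.getBoolean(domainObject)));
            } else {
                panel.add(new JTextField(String.valueOf(f.get(domainObject))));
            }
        }
        return panel;
    }

    public static void main(String[] args) throws Exception {
        JFrame frame = new JFrame("Generated form");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(generateForm(new Device()));
        frame.pack();
        frame.setVisible(true);
    }
}

Real frameworks add much richer metadata (labels, validation, ordering, task context), but the principle is the same: the domain model is the single source of truth and the UI is derived from it.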
Take a look at JMatter, which is a rather better-looking implementation of Naked Objects.
http://www.jmatter.org
There is also Chris Muller's work on MAUI, and Lukas Renggli's work on Magritte (both Squeak /Smalltalk)
We have lots of generated UI in the configuration part of our apps. All those lists that are around forever and changed once in a blue moon by a system administrator.
I find that most applications with a database back-end tend to have a bad design from an OO and NO perspective, as already shown in the NO book by Pawson and Matthews.
Re: qn #1 ... Do you believe in the concept of UI auto-generation from metadata? ... I'm definitely going to answer 'yes' to your first question, being one of the committers to the Naked Objects (Java) framework and writing a book on DDD + NO.
The question mentions metadata. I think this is key to NO being able to succeed. In the latest version (which will be going beta in Feb) the metamodel has been opened up so that it is very extensible, either so you can write your domain model following your own programming conventions/annotations, or, potentially, so that more sophisticated viewers can look for their own metadata to provide more sophisticated views. (For example, consider that if an object implemented a Location interface, then it could be displayed in Google Maps.)
Regarding qn #2 ... what kind of problems can be approached this way ... we've always said that NO is more suitable for "sovereign applications" (transactional, operational systems used internally within an organization) than for "transient applications" (like an airport kiosk, say). An NO GUI does require that the user is familiar with the domain, otherwise they won't know what they are looking at.
What's missing still is sophisticated viewers, of course. You are right about the NO GUI, it is definitely low fidelity (though the .NET version is a big improvement, see recent infoq.com article). On the Java side there is a sister project called scimpi.org that has a lot of promise though... it provides a basic web GUI for free but lets you hand-craft web pages as necessary and incrementally. I'm also working on an Eclipse RCP GUI that'll work similarly.
The other thing to add to this though is that the NO approach has value (I believe) even if you choose to write a custom GUI and/or presentation layer. That is, you can use it as a design tool for building a very solid pojo domain layer, and then skin it as you will. Trouble is that NO was never originally sold in those terms, so most will see the NO pattern as an all-or-nothing affair.
Dan
One way to look at this is to consider the difference between the user interface you get from something like Toad or MySQL Browser, where the user interface is constructed directly from the tables and their associated metadata, and the user interface that a skilled designer would develop for the actual application. If they're not too dissimilar, then it should be fairly low-hanging fruit for an auto-generation framework.
As you say, there are classes of problems which will work quite well with this kind of auto-generation and some which won't. To my mind, the key things are, first, how well the implementation model (or the portion thereof you are exposing in the user interface) maps to the conceptual model of the user; and secondly, how well the behavior of the application can be expressed through a limited set of user interface components (assuming this is a general-purpose UI generation framework).
This article, "Universal Model of a User Interface", may be of interest.
I think the idea of automatically generated UIs has a lot of potential especially for your average form-and-table layout database user interface. However, even there a human needs to be in the loop, having the ability to override the output without it being overwritten with the next regeneration.
I suspect automatically generated UIs would be more successful today if interaction designers were more involved in developing the generation algorithms. My impression is that historically the creators of these systems don’t know what kinds of UI-related metadata to include or how to use it. Specifying labels, value ranges, formats, and orders for fields is a start, but more high level information is needed. Sufficient modeling of the tasks and user roles in particular tends to be lacking, along with some basic style-guide-level principles for UI.
Oracle's Designer 2000, for example, was on the right track in including not only the entities and relations in the model, but also the tasks, in the form of a functional hierarchy. Then they blew it by misapplying this metadata (e.g., assuming that depth is always preferred to breadth) and by including fundamental flaws when generating the UI (e.g., only one primary window can be opened at a time). The result was UIs that were not even consistent with Oracle's own Applications User Interface Standards.
Getting a basic UI up quickly, so that the customer can try out the system and create test data, must be of value. Naked Objects frameworks can help with the "bootstrapping" even if you have to replace it with a "hand-crafted" UI before you ship.
In most systems I have worked on, there have been lots of simple housekeeping tables. All these tables need a UI to edit and view them, etc. There is also great value in these simple editors being consistent. Here a Naked Objects framework could save a lot of time, even if the main "day to day" UI is "hand crafted".
I have seen a couple of failed projects (cases where I was brought in as a rather expensive consultant to help architect the replacement) which used the "naked objects" approach (not the framework, AFAIK) - all with simply atrocious UIs - and I worked on replacing a lot of the UI on one project which, in its original incarnation, had a similar approach (the entire application was a tree of objects accessed through context menus and property sheets - this was NetBeans 2.0 circa 1998 - the IDE as a giant hierarchical JavaBean).
The bottom line is, your users don't care about your architecture, they care about getting what they need to do done in the most comprehensible-to-mere-mortals set of interactions you can come up with. If that happens to align with your architecture, you are having a lucky day - but it really is serendipity. Trying to force users to care (or even know) about your architecture is a recipe for software nobody wants to use.
Code generally needs to be designed around two not-always-compatible goals:
Maintainability - people who didn't write the code can understand the code
Stability and performance - i.e. the activities the code asks the computer to physically do are both possible, and can be completed within a reasonable time frame
The abstractions and code structures that it makes sense to create to meet those two goals very, very rarely map exactly to user interface elements of any sort. Sometimes you can get away with it - barely - if your audience is technical. But even there, you are likely to please more users with at least a "presentation layer" adapter layer on top of the architecture that makes sense for programmers and machines.
I would like to know your experience when you need to take over somebody else's software project - more so when the original software developer has already resigned.
The most success that we've had with that is to "wiki" everything. During the notice period ask the leaving developer to help you document everything in the team/company wiki and see if you can do code reviews with him/her and add comments to the code while doing the reviews that explain sections. Best for the "taking over" developer to write the comments in the code under the supervision of the leaver.
Cases where the original devs left before handing over the project are always the most interesting: you're stuck with a codebase in an unknown state. What I always find intriguing is how the new devs often do their utmost to comment on how badly designed the code is: they forget about the constraints the old devs might have been under, and the shortcuts they might have been forced to make. The saying is always: old dev == bad dev. What do you people think?
I would even call this out as an official bad practice: bad-mouthing the ones who have been before us.
I try to take as pragmatic an approach as possible: learn the codebase, wander around a bit. Try to understand the relation between requirements and code, even if there is no clear initial relationship at all. There will always be the "aha moment" when you realise why something was done this way or that. If you're still convinced something is implemented the wrong way, do your refactorings if possible. And isolate the pieces of code you cannot change: unit test them by using a mocking framework.
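For that last point, here is a sketch of what isolating untouchable code with a mocking framework can look like, using JUnit 4 and Mockito; the legacy classes are invented, and the assumption is that the dependency can at least be passed in:

import static org.mockito.Mockito.*;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InvoiceServiceTest {

    // Hypothetical legacy collaborator we cannot touch (slow, external, or undocumented).
    interface ExchangeRateProvider {
        double rateFor(String currency);
    }

    // Hypothetical piece of legacy code under test.
    static class InvoiceService {
        private final ExchangeRateProvider rates;
        InvoiceService(ExchangeRateProvider rates) { this.rates = rates; }
        double totalInEuros(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    @Test
    public void convertsUsingTheProvidedRate() {
        // Replace the real dependency with a mock that returns a canned value...
        ExchangeRateProvider rates = mock(ExchangeRateProvider.class);
        when(rates.rateFor("USD")).thenReturn(0.9);

        // ...so the legacy logic can be exercised and its current behaviour locked in.
        InvoiceService service = new InvoiceService(rates);
        assertEquals(90.0, service.totalInEuros(100.0, "USD"), 0.0001);
    }
}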
Hail to the maintenance developer.
I once joined a team which had been handed a steaming pile of crap from outsourcing. The original project - a multimedia content manager based on Java, Struts, Hibernate|Oracle - was well structured (it seems it was the work of a couple of people: pair programming, wise use of design patterns, some unit testing). Then someone else inherited the project and endlessly copy-pasted features, loosened the business rules, patched and branched until it became a huge spaghetti monster, with finely crafted pieces of code like:
List<Stuff> stuff = null;
if (LOG.isDebugEnabled())
{
    stuff = findStuff();
    LOG.debug("Yeah, I'm a smart guy!");
    for (Stuff stu : stuff)
    {
        LOG.debug("I've got this stuff: " + stu);
    }
}
methodThatUsesStuff(stuff);
hidden amongst the other brilliant ingenuity (note that stuff is only ever populated when debug logging is enabled, so methodThatUsesStuff gets a null otherwise).
I tamed the beast via patient refactoring (extracting methods and classes most of the time), commenting the code from time to time, and reorganizing everything until the codebase shrank by 30%, getting more and more manageable over time.
I had to take over someone else’s code of different degrees of quality on several occasions. Hence the tips:
Make effort to take structured notes of any piece of significant information from minute one: names of stakeholders, business rules, code and document locations etc. It is best to dedicate a fresh spiral notebook, so you could tear pages out if you had to.
Make use of one of the better free indexing and desktop search tools available on the market (Google Desktop Search, MS Windows Search will do). Add all document, e-mail, code locations to it.
Before developing anything, do document analysis: find everything you can get your hands on electronically on the network, as well as printed-out docs, and make the effort to simply read it. There is an amazing amount of useful information even in unfinished drafts.
Mind map the code, architecture etc as you go.
With less documented and maintained systems, you will inevitably have moments of despair that are likely to push you into procrastination mode, especially during your first days or week, when the amount of new information your mind has to digest is overwhelming. At these times it is nice to have someone remind you (or just do it yourself) to take it easy, concentrate on the important things first, and revert to making smaller steps in trying to gain understanding instead of trying to leap forward.
Keep taking notes, making diagrams, drawing rich pictures, mind mapping. It really helps to digest the copious amounts of new information, mostly disorganised.
Hei, good luck!
We actually have a specified set of "Deliverables" that has to be present for us to take over a project.
If we have the chance, we try to push one of our folks into the group developing the project at first. That way we get some firsthand knowledge before our group takes over the code (along the lines of what #Guy wrote).
That being said, the most important part for me would be:
Some kind of high-level overview (a drawing?) of what the code does.
Easy access to ask questions of the people who actually wrote the code
This, for me, is the alpha and omega when taking over code and projects.
What is your best practical user-friendly user-interface design or principle?
Please submit those practices that you find actually make things really useful - no matter what - if it works for your users, share it!
Summary/Collation
Principles
KISS.
Be clear and specific in what an option will achieve: for example, use verbs that indicate the action that will follow on a choice (see: Impl. 1).
Use obvious default actions appropriate to what the user needs/wants to achieve.
Fit the appearance and behavior of the UI to the environment/process/audience: stand-alone application, web-page, portable, scientific analysis, flash-game, professionals/children, ...
Reduce the learning curve of a new user.
Rather than disabling or hiding options, consider giving a helpful message pointing the user to alternatives, but only where those alternatives exist. If no alternatives are available, it's better to disable the option - which then visually states that the option is not available - do not hide the unavailable options; rather, explain in a mouse-over popup why it is disabled.
Stay consistent and conform to practices, and placement of controls, as is implemented in widely-used successful applications.
Lead the expectations of the user and let your program behave according to those expectations.
Stick to the vocabulary and knowledge of the user and do not use programmer/implementation terminology.
Follow basic design principles: contrast (obviousness), repetition (consistency), alignment (appearance), and proximity (grouping).
Implementation
(See answer by paiNie) "Try to use verbs in your dialog boxes."
Allow/implement undo and redo.
References
Windows Vista User Experience Guidelines [http://msdn.microsoft.com/en-us/library/aa511258.aspx]
Dutch websites - "Drempelvrij" guidelines [http://www.drempelvrij.nl/richtlijnen]
Web Content Accessibility Guidelines (WCAG 1.0) [http://www.w3.org/TR/WCAG10/]
Consistence [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746]
Don't make me Think [http://www.amazon.com/Dont-Make-Me-Think-Usability/dp/0321344758/ref=pdbbssr_1?ie=UTF8&s=books&qid=1221726383&sr=8-1]
Be powerful and simple [http://msdn.microsoft.com/en-us/library/aa511332.aspx]
Gestalt design laws [http://www.squidoo.com/gestaltlaws]
I test my GUI against my grandma.
Try to use verbs in your dialog boxes.
It means using buttons labelled with the action that will follow (for example "Save" / "Don't save") instead of generic "OK" / "Cancel" buttons.
Follow basic design principles
Contrast - Make things that are different look different
Repetition - Repeat the same style in a screen and for other screens
Alignment - Line screen elements up! Yes, that includes text, images, controls and labels.
Proximity - Group related elements together. A set of input fields to enter an address should be grouped together and be distinct from the group of input fields to enter credit card info. This is basic Gestalt Design Laws.
Never ask "Are you sure?". Just allow unlimited, reliable undo/redo.
Try to think about what your user wants to achieve instead of what the requirements are.
The user will enter your system and use it to achieve a goal. When you open up Calc, you need to make a simple, fast calculation 90% of the time, and that's why by default it is set to simple mode.
So don't think about what the application must do but think about the user which will be doing it, probably bored, and try to design based on what his intentions are, try to make his life easier.
If you're doing anything for the web, or any front-facing software application for that matter, you really owe it to yourself to read...
Don't make me think - Steve Krug
Breadcrumbs in webapps:
Tell -> The -> User -> Where -> She -> Is in the system
This is pretty hard to do in "dynamic" systems with multiple paths to the same data, but it often helps navigate the system.
I try to adapt to the environment.
When developing a Windows application, I use the Windows Vista User Experience Guidelines, but when I'm developing a web application I use the appropriate guidelines; because I develop Dutch websites, I use the "Drempelvrij" guidelines, which are based on the Web Content Accessibility Guidelines (WCAG 1.0) by the World Wide Web Consortium (W3C).
The reason I do this is to reduce the learning curve of a new user.
I would recommend getting a good, solid understanding of GUI design by reading the book The Design of Everyday Things. Although the main principle is a comment from Joel Spolsky: when the behavior of the application differs from what the user expects to happen, you have a problem with your graphical user interface.
The best example is when somebody swaps around the OK and Cancel buttons on some web sites. The user expects the OK button to be on the left and the Cancel button to be on the right. So, in short, when the application behavior differs from what the user expects to happen, you have a user interface design problem.
Although, the best advice, in no matter what design or design pattern you follow, is to keep the design and conventions consistent throughout the application.
Avoid asking the user to make choices whenever you can (i.e. don't create a fork with a configuration dialog!)
For every option and every message box, ask yourself: can I instead come up with some reasonable default behavior that
makes sense?
does not get in the user's way?
is easy enough to learn that it costs little to the user that I impose this on him?
I can use my Palm handheld as an example: the settings are really minimalistic, and I'm quite happy with that. The basic applications are well designed enough that I can simply use them without feeling the need for tweaking. Ok, there are some things I can't do, and in fact I sort of had to adapt myself to the tool (instead of the opposite), but in the end this really makes my life easier.
This website is another example: you can't configure anything, and yet I find it really nice to use.
Reasonable defaults can be hard to figure out, and simple usability tests can provide a lot of clues to help you with that.
Show the interface to a sample of users. Ask them to perform a typical task. Watch for their mistakes. Listen to their comments. Make changes and repeat.
The Design of Everyday Things - Donald Norman
A canon of design lore and the basis of many HCI courses at universities around the world. You won't design a great GUI in five minutes with a few comments from a web forum, but some principles will get your thinking pointed the right way.
--
MC
When constructing error messages, make the error message be the answers to these 3 questions (in that order):
What happened?
Why did it happen?
What can be done about it?
This is from "Human Interface Guidelines: The Apple Desktop Interface" (1987, ISBN 0-201-17753-6), but it can be used for any error message anywhere. There is an updated version for Mac OS X.
The Microsoft page User Interface Messages says the same thing: "... in the case of an error message, you should include the issue, the cause, and the user action to correct the problem."
Also include any information that is known by the program, not just some fixed string. E.g. for the "Why did it happen?" part of the error message, use "Raw spectrum file L:\refDataForMascotParser\TripleEncoding\Q1LCMS190203_01DoubleArg.wiff does not exist" instead of just "File does not exist".
Contrast this with the infamous error message: "An error happened.".
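A trivial sketch of assembling a message from those three answers (the helper and wording are invented; the file name is the one from the example above):

public class ErrorMessageDemo {

    // Compose the message from the three questions, using the concrete values the program knows.
    static String describe(String whatHappened, String whyItHappened, String whatToDo) {
        return whatHappened + "\n" + whyItHappened + "\n" + whatToDo;
    }

    public static void main(String[] args) {
        String file = "L:\\refDataForMascotParser\\TripleEncoding\\Q1LCMS190203_01DoubleArg.wiff";
        System.out.println(describe(
            "The spectrum could not be loaded.",                        // what happened
            "Raw spectrum file " + file + " does not exist.",           // why it happened
            "Check that the drive is mounted and the path is correct."  // what can be done about it
        ));
    }
}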
(Stolen from Joel :o) )
Don't disable/remove options - rather, give a helpful message when the user clicks/selects them.
As my data structures professor pointed out today: give instructions on how your program works to the average user. We programmers often think we're pretty logical with our programs, but the average user probably won't know what to do.
Use discreet/simple animated features to create seamless transitions from one section to the other. This helps the user create a mental map of the navigation/structure.
Use short (one word if possible) titles on the buttons that describe clearly the essence of the action.
Use semantic zooming where possible (a good example is how zooming works on Google/Bing maps, where more information is visible when you focus on an area).
Create at least two ways to navigate: Vertical and horizontal. Vertical when you navigate between different sections and horizontal when you navigate within the contents of the section or subsection.
Always keep the main options nodes of your structure visible (where the size of the screen and the type of device allows it).
When you go deep into the structure, always keep a visible hint (e.g. in the form of a path) indicating where you are.
Hide elements when you want the user to focus on data (such as reading an article or viewing a project) - however, beware of points #5 and #4.
Be Powerful and Simple
Oh, and hire a designer / learn design skills. :)
With GUIs, standards are kind of platform-specific. E.g., while developing a GUI in Eclipse, this link provides a decent guideline.
I've read most of the above and one thing that I'm not seeing mentioned:
If users are meant to use the interface ONCE, showing only what they need to use if possible is great.
If the user interface is going to be used repeatedly by the same user, but maybe not very often, disabling controls is better than hiding them: the user interface changing and hidden features not being obvious (or remembered) by an occasional user is frustrating to the user.
If the user interface is going to be used VERY REGULARLY by the same user (and there is not a lot of turnover in the job, i.e. not a lot of new users coming online all the time), disabling controls is absolutely helpful: the user will become accustomed to the reasons why things happen, and preventing them from accidentally using controls in improper contexts is appreciated and prevents errors.
Just my opinion, but it all goes back to understanding your user profile, not just what a single user session might entail.
I've seen different program managers write specs in different format. Almost every one has had his/her own style of writing a spec.
On one hand are those wordy documents which, when given to a programmer, are likely to cause him/her to miss a few things. I personally dread the Word-document spec... I think it's because of my reading style... I am always speed-reading, which I think causes me to miss out on key points.
On the other hand, I have seen innovative specs written in Excel by one of our clients. The way he used to write the spec was to create a mock application in Excel and use some VBA to mock it up. He would specify things like where the form should go on a button click, or what action it should perform (in comments).
On a data form, he would lay out the form in cells, and on each data-entry cell he would comment on what the valid values are, what validation it should perform, etc.
I think that using this technique, it was less likely to miss out on things that needed to be done. Also, it was much easier to unit test it for the developer. The tester too had a better understanding of the system as it 'performed' before actually being written.
Visio is another tool for doing screen design, but I still think Excel has an edge over it, considering its VBA support and its functions.
Do you think this should become a more popular way of writing specs? I know it involves a bit of extra work on the part of the project manager (or whoever is writing the spec), but the payoff is huge... I myself could see a lot of productivity gain from using it. And are there any better formats of specs that would actually help the programmer?
Joel on Software is particularly good at these and has some good articles about the subject...
A specific case: the write-up and the spec.
Two approaches have worked well for me.
One is the "working prototype" which you sort of described in your question. In my experience, the company contracted a user interface expert to create fully functional HTML mocks. The data on the page was static, but it allowed for developers and management to see and play with a "functional" version of the site. All that was left to do was replace the static data on the pages with dynamic content - this prototype was our spec for the initial version of our product. The designer even included detailed explanation of some subtle behavior in popup dialogs that would appear when hovering over mock links. It worked well for our team.
On a subsequent project, we didn't have the luxury of the UI expert, but we used a similar approach. We used a wiki to mock up a version of the site. We created links between the functional aspects of the system and documented each piece of functionality in detail. Each piece of functionality could, in turn, link to detailed design and architecture decisions. We also used the wiki to hold our feature list for each release (which became our release notes). These documents linked back to the detailed feature pages. The wiki became a living document, describing our releases and the evolution of our system in great detail. It was an invaluable resource.
I prefer the wiki to the working prototype because it's more easily extensible - growing and becoming more valuable as your system evolves.
I think you may want to have a look at Test-Driven Requirements, which is a technique for making executable specifications.
There are some great tools like FIT, Fitnesse, GreenPepper or Concordion for that purpose.
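To give a flavour of what such an executable specification looks like, this is roughly the classic column-fixture example from FIT's own documentation: the spec document holds a table of numerator / denominator / quotient() rows, and the framework binds each row to the public fields and methods of this class.

import fit.ColumnFixture;

// Each row of the spec's table sets the public fields, then checks the value
// returned by the method column (here "quotient()") against the expected cell.
public class Division extends ColumnFixture {
    public double numerator;
    public double denominator;

    public double quotient() {
        return numerator / denominator;
    }
}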
One of the Microsoft Press books has excellent examples of various documents, including an SRS (which I think is what you are talking about). It might be one of the requirements books by Wiegers (I think that's his name, I'm blanking on it right now). I've seen US government organizations use that as a template, and from my three work experiences with the government, they like to make their own wherever they can, so if they are reusing it, it must be good.
Also - a spec should contain NO CODE, in my opinion. It should focus on what the system must do, should do, and cannot do, using text and diagrams.