UML time visualisation - refactoring

I'm sure many of you have used or at least heard of Gource and Code Swarm. They are very nice tools for visualising the commit history of a project as it evolves.
What I am interested in is a similar repo-driven animation of the code itself, in UML form, as it evolves over time.
I have put a great deal of work into refactoring and cleaning up the codebase of a project over the last 3 years and it would be really neat to be able to demo this in tangible form to management. Of course 'before' and 'after' diagrams would work, but where's the fun in that? :P
Does any such tool exist? Specifically, I am looking for ObjC++, but I am interested in anything available in any language.
Shout out any other tools that could make for a cool demo. Refactoring is sometimes hard to sell without having anything to show!

Convincing your CEO of the importance of that kind of technical work is always tricky. Even if you can show a UML history animation, I don't think it will be appreciated by anyone who isn't an engineer.
My approach is "show results". If you have a bug/change tracking tool and you are refactoring correctly, time spent on bugs and changes should drop measurably.
Build some spreadsheets, draw some charts, and put it all in a presentation. Unfortunately, that's the kind of message that reaches CEOs' minds.
As Martin Fowler says in his book Refactoring, sometimes it is better not to tell until you have results. Then the results are what you should show! (see the "What Do I Tell My Manager?" chapter)

I suspect the best you will find is a research prototype developed at a university; I'm thinking of something like REFVIS. Even then, it would be a lot of work to make it run on your code. That aside, automatically extracted UML diagrams tend to carry far too much detail for human consumption, so reducing them to something presentable and informative to management would be a hard problem in itself. I think you're posing an interesting research challenge, but not one to which you'll find a ready solution!

Related

What human learning techniques can be applied to improve code layout?

Is it possible to use the results of studies made into human learning in order to identify how code might be laid out to improve comprehension?
Code layout wars almost always end up defending consistency and the prevailing style, but could there be ways of laying out code that are provably better than others?
What is Code Layout to you?
On one hand there are these evil things called coding conventions, which force everyone into a corset. I loathe them, and I believe we are long overdue to eliminate them. We can parse code, and I do not understand why our IDEs still display code based on the very textual format it is stored in. What is so hard about letting each user set up his own layout preferences and having the IDE display all source code accordingly? Most IDEs offer some kind of auto-format option, yet you often cannot customize how it works.
However, a far more interesting approach is whether our current point of view on source code is suitable for learning at all. Projects like Code Bubbles are pioneering a new way there. And then of course, we have model-based approaches which are often more accessible from a learner's point of view.
I'm afraid there is no definite answer to this question. In fact, if you can write down a detailed answer for it, don't forget to claim a PhD for it ;)
Could there be ways of laying out code that are provably better than others?
Yes. This problem was studied extensively in the 1980s. You could read all about it :-)
A good university library should have Human Factors and Typography for More Readable Programs by Ronald M. Baecker and Aaron Marcus, published by Addison-Wesley in 1990.
I think this comes down to personal preference. I prefer to have very little shorthand in my code; I think that is the best way for me to comprehend what is going on without having to remember how the shorthand works (maybe my memory is bad).
Possibly it would be a good idea to apply such studies to, say, a class of students learning to write code the same way, but everyone develops their own way of coding over time. There are already "provably better ways", as laid out by the best-practice suggestions for each language.
Interesting question.
The biggest problem for me with understanding code is not code layout (though code should be formatted consistently) but following execution order. In complex OO source code it is hard to see the complete code involved in an execution path.
I think that IDE functions can help a lot for code understanding. For me (as a java developer) tools like the Call Hierarchy view in Eclipse and Mylyn are very useful.
An interesting (new) way of understanding code is shown in the Code Bubbles Project.
I expect more steps in these directions in the future.
I think teaching programming may have given me some skill in this area, because to get ideas across to students you have to keep things small, simple, and introducing only one concept at a time.
However, as one of my colleagues used to say to his students:
Teaching is my job.
Learning is yours.
As that applies to programming, I think it is the programmer's responsibility to write the code so as to educate others as to what he/she is trying to accomplish, but there is no code that will be clear to readers who do not put in effort.

What are some good strategies to fix bugs as code becomes more complex?

I'm "just" a hobbyist programmer, but I find that as my programs get longer and longer the bugs get more annoying--and harder to track. Just when everything seems to be running smoothly, some new problem will appear, seemingly spontaneously. It may take me a long time to figure out what caused the problem. Other times I'll add a line of code, and it'll break something in another unit. This can get kind of frustrating if I thought everything was working well.
Is this common to everyone, or is it more of a newbie kind of thing? I hear about "unit testing," "design frameworks," and various other concepts that sound like they would decrease bugginess, make my apps "robust," and everything easy to understand at a glance :)
So, how big a deal are bugs to people with professional training?
Thanks -- Al C.
The problem of "make a fix, cause a problem elsewhere" is very well known, and is indeed one of the primary motivations behind unit testing.
The idea is that if you write exhaustive tests for each small part of your system independently, and run them on the entire system every time you make a change anywhere, you will see the problem immediately. The main benefit, however, is that in the process of building these tests you'll also be improving your code to have fewer dependencies.
The typical solution to this sort of problem is to reduce coupling: make different parts less dependent on one another. More experienced developers sometimes have habits or design skills for building systems in this manner. For example, we use interfaces and implementations rather than concrete classes; we use model-view-controller for user interfaces, etc. In addition, we can use tools that help further reduce dependencies, like dependency injection and aspect-oriented programming.
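To make the decoupling idea concrete, here is a minimal Python sketch (all the names are invented for illustration): the business logic depends on an abstract notifier, so a test can inject a fake implementation and each piece can be tested independently.

    from abc import ABC, abstractmethod

    class Notifier(ABC):
        # The interface callers depend on, rather than any concrete class.
        @abstractmethod
        def send(self, message: str) -> None: ...

    class EmailNotifier(Notifier):
        def send(self, message: str) -> None:
            print(f"emailing: {message}")  # a real version would talk to a mail server

    class FakeNotifier(Notifier):
        # Test double: records messages instead of sending anything.
        def __init__(self) -> None:
            self.sent = []

        def send(self, message: str) -> None:
            self.sent.append(message)

    class OrderProcessor:
        def __init__(self, notifier: Notifier) -> None:
            self.notifier = notifier  # the dependency is injected, not hard-coded

        def place_order(self, item: str) -> None:
            self.notifier.send(f"order placed: {item}")

    # Production wiring: OrderProcessor(EmailNotifier())
    # Test wiring, with no real email involved:
    fake = FakeNotifier()
    OrderProcessor(fake).place_order("book")
    assert fake.sent == ["order placed: book"]

This same shape is what dependency injection frameworks automate: the caller never constructs its own dependencies, so they can always be swapped out.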
All programmers make mistakes. Good and experienced programmers build their programs so that it is easier to find the mistakes and restrict their effects.
And it is a big deal for everyone. Most companies spend more time on maintenance than on writing new code.
Are you automating your tests? If you do not, you're signing up for creating bugs without finding them.
Are you adding tests for bugs as you fix them? If you do not, you are signing up for creating the same bugs over and over. (See the sketch after this list.)
Are you writing unit tests? If not, you are signing up for long debugging sessions when a test fails.
Are you writing your unit tests first? If not, your unit tests will be hard to write when your units are tightly coupled.
Are you refactoring mercilessly? If not, every edit will become more difficult and more likely to introduce bugs. (But make sure you have good tests, first.)
When you fix a bug, are you fixing the entire class of bugs? Don't just fix the bug; don't just fix similar bugs throughout your code; change the game so you can never create that kind of bug again.
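As promised above, here is a minimal sketch of adding a regression test while fixing a bug, using Python's built-in unittest module (the average function and its bug are invented for illustration):

    import unittest

    def average(numbers):
        # The fix: the original version crashed on an empty list.
        if not numbers:
            return 0.0
        return sum(numbers) / len(numbers)

    class AverageRegressionTest(unittest.TestCase):
        def test_empty_list_regression(self):
            # Pins down the exact bug that was just fixed,
            # so it cannot silently come back later.
            self.assertEqual(average([]), 0.0)

        def test_normal_case_still_works(self):
            self.assertEqual(average([2, 4, 6]), 4.0)

    if __name__ == "__main__":
        unittest.main()

Run the whole suite after every change, not just when you touch average(); that is how the "fix here, break there" problem gets caught immediately.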
Bugs are a big deal to everyone. I've always found that the more I program, the more I learn about programming in general. I cringe at the code I wrote a few years back!! I started out as a hobbyist and liked it so much that I went to engineering college to get a Computer Science Engineering major (I am in my final semester). These are the things that I have learned :
I take time to actually design what I am going to write and document the design. It really eliminates a lot of problems down the line. Whether the design is as simple as writing down a few points on what I am going to write or full-blown UML modeling (:( ) doesn't matter. It's the clarity of thought and purpose, and having material to look back at when I come back to the code after a while, that matter the most.
No matter what language I write in, keeping my code simple and readable is important. I think it is extremely important not to overcomplicate the code and at the same time not to oversimplify it. (Hard-learned lesson!!)
Efficiency optimizations and fancy tricks should be applied at the end, and only if they are needed. Another thing is that I apply them only if I really know what I am doing, and I always test my code!
Learning language-dependent details helps me keep my code bug-free. For instance, I learned that scanf() is evil in C!
Others have already commented on the zen of writing tests. I would like to add that you should always do regression tests. (i.e., after writing new code, test all parts of the existing code to see if anything breaks.)
Keeping a mental picture of code is hard at times, so I always document my code.
I use techniques that keep the dependencies between different parts of my code to a bare minimum. Interfaces, class hierarchies, etc. (decoupled design)
Thinking before I code and being disciplined in whatever I write is another crucial skill. I know people who don't format their code so it's readable (shudder!).
Reading other people's source to learn best practices is good. Making my own list is better! When working in a team, there must be a common set of them.
Don't be paralyzed by analysis. Write tests, then code, then execute and test. Rinse, wash, repeat!
Learning to read over my own code and combing it for mistakes is important. Improving my arsenal of debugging skills was a great investment. I keep them sharp by helping my classmates fix bugs regularly.
When there is a bug in my code, I assume it's my mistake, not the computer's, and work from there. That is a state of mind that really helps me.
A fresh pair of eyes aids in debugging. Programmers tend to miss even the most obvious errors in their own code when exhausted. Having someone to show your code to is great.
Having someone to throw ideas at and not be judged by is important. I talk to my mom (who is not a programmer), throw ideas at her, and find solutions. She helps me bounce my ideas back and forth and refine them. If she is unavailable, I talk to my pet cat.
I am not so discouraged by bugs anymore. I've learned to love removing bugs almost as much as programming.
Using version control has really helped me manage different ideas I get while coding. That helps reduce errors. I recommend using git or any other version control system you might like.
As Jay Bazzuzi said - Refactor code. I just added this point after reading his answer, to keep my list complete. All credit goes to him.
Try to write reusable code. Reuse code, both yours and from libraries. Using libraries which are bug free to do some common tasks really reduces bugs (sometimes).
I think the following quote says it best - "If debugging is the art of removing bugs, programming must be the art of putting them in."
No offense to anyone who disagrees. I hope this answer helps.
Note
As Peter has pointed out, use Object-Oriented Programming if you are writing a large amount of code. There is a limit to code length after which it becomes harder and harder to manage if written procedurally. I like procedural for smaller stuff, like playing with algorithms.
There are two ways to write error-free programs; only the third one works. ~Alan J. Perlis
The only way for errors to occur in a program is by being put there by the author. No other mechanisms are known. Programs can't acquire bugs by sitting around with other buggy programs. ~Harlan Mills
Obviously, bugs are a big deal to any programmer. Just look through the list of questions on Stack Overflow to see this illustrated.
The difference between a hobbyist and an experienced professional is that the pro will be able to use his experience to code in a more "defensive" way, avoiding many types of bugs in the first place.
All the other answers are great. I'll add two things.
Source control is mandatory. I'm assuming you're on Windows here. VisualSVN Server is free and takes maybe 4 clicks to install. TortoiseSVN is also free, and it integrates into Windows Explorer, getting around the VS Express limitation of no add-ins. If you create too many bugs, you can revert your code and start over. Without source control, this is next to impossible. Plus you can sync your code if you have a laptop and a desktop.
People are going to recommend many techniques like unit testing, mocking, Inversion of Control, Test Driven Development, etc. These are great practices, but don't try to cram it all into your head too quickly. You have to write code to get better at writing code, so work these techniques slowly into your code writing. You have to crawl before you walk and walk before you can run.
Best of luck in your coding adventures!
This is a common newbie thing. As you get more experience, of course, you'll still have bugs, but they'll be easier to find and fix, because you'll learn how to make your code more modular (so that changing one thing doesn't have ripple effects everywhere else), how to test it, and how to structure it to fail fast, close to the source of the problem, rather than in some arbitrary place. One very basic but useful thing that doesn't require complex infrastructure is to check the inputs of all functions that have non-trivial preconditions with asserts. This has saved me several times in cases where I would otherwise have gotten weird segfaults and arbitrary behavior that would have been near impossible to debug.
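For example, a minimal Python sketch (the function and its precondition are invented for illustration):

    def allocate_buffer(size):
        # Precondition check: fail fast, close to the source of the problem,
        # instead of misbehaving somewhere far downstream.
        assert isinstance(size, int) and size > 0, f"size must be a positive int, got {size!r}"
        return [0] * size

    allocate_buffer(16)    # fine
    # allocate_buffer(-1)  # would fail immediately with a clear message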
If bugs weren't a problem then I'd be able to write a 100,000 line program in 10 minutes!
Your question is like, "As an amateur doctor, I worry about my patients' health: sometimes when I'm not careful enough, they sicken. Is patients' health a problem for you professional doctors too?"
Yes: it's the central problem, even the only problem (for any sufficiently all-inclusive definition of 'bug').
Bugs are common to everyone -- professional or not.
The larger and more distributed the project, the more careful one must be. One look at any open-source bug database (e.g., https://bugzilla.mozilla.org/) will confirm this for you.
The software industry has evolved various programming styles and standards, which when used right, make wrong code easier to spot or limited in its impact.
Therefore, training has a very positive effect on code quality... but at the end of the day, bugs still sneak through.
If you're just a hobbyist programmer, learning full bore TDD and OOP may involve more time than you're willing to put in. So, going on the assumption that you don't want to put in the time on them, a few easily digestible suggestions to cut down on bugs are:
Keep each function doing one thing. Be suspicious of any function more than, say, 10 lines long. If you think you can break it into two functions, you probably should. Something that will help you control this is naming your functions according to exactly what they are doing. If you find that your names are long and unwieldy, then your function is probably doing too many things.
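A small, invented Python sketch of that advice; the long, unwieldy name is the hint that the function wants to be split:

    # Before: one function doing two jobs; the name gives it away.
    def read_and_validate_config(path):
        with open(path) as f:
            text = f.read()
        if not text.strip():
            raise ValueError("config is empty")
        return text

    # After: two short functions, each named for the one thing it does.
    def read_config(path):
        with open(path) as f:
            return f.read()

    def validate_config(text):
        if not text.strip():
            raise ValueError("config is empty")
        return text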
Turn magic strings into constants. That is, instead of using:

    people["mom"]

use instead:

    var mom = "mom";
    people[mom]
Design your functions to either do something (command) or get something (query), but not both.
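A minimal sketch of that command/query split in Python (the Cart class is invented for illustration):

    class Cart:
        def __init__(self):
            self._items = []

        def add_item(self, name):
            # Command: changes state, returns nothing.
            self._items.append(name)

        def total_items(self):
            # Query: returns a value, changes nothing.
            return len(self._items)

    cart = Cart()
    cart.add_item("book")
    assert cart.total_items() == 1

Keeping the two kinds separate means a query can never surprise you with a side effect.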
An extremely short and digestible take on OOP is here http://www.holub.com/publications/notes_and_slides/Everything.You.Know.is.Wrong.pdf. If you get this, you've got the gist of OOP and are quite frankly ahead of a lot of professional programmers.
The prevailing wisdom seems to be that the average programmer creates 12 bugs per 1000 lines of code. The exact number depends on who you ask, but it is always per lines of code, so the bigger the program, the more bugs.
Subpar programmers tend to create way more bugs.
Newbies are often trapped by idiosyncrasies of the language, and lack of experience tends toward more bugs too. As you go on you will get better, but you will never write bug-free code... well, I still have bugs, even after 30 years, but that could be just me.
Nasty bugs happen to everyone from pros to hobbyists. Really good programmers get asked to track down really nasty bugs. It's part of the job. You'll know you've made it as a software developer when you stare at a nasty bug for two days and in frustration you shout, "Who wrote this crap!?!?" ... only to realize it was you. :-)
Part of the skill of a software developer is the ability to keep a large set of interrelated items straight in his/her head. It sounds like you're discovering what happens when your mental model of the system breaks down. With practice you will learn to design software that doesn't feel so brittle. There are tons of books, blogs, etc. out there on the subject of software design. And Stack Overflow of course for specific questions.
All that said, here's a couple of things you can do:
A good debugger is invaluable. Often you have to step through your code line by line to figure out what went wrong.
Use a garbage-collected language such as Python or Java if it makes sense for your project. GC will help you focus on making things work instead of getting bogged down by maddening memory errors.
If you write C++, learn to love RAII.
Write LOTS of code. Software is somewhat of an art form. Lots of practice will make you better at it.
Welcome to Stack Overflow!
What really changed my odds against code complexity and bugs was using a coding standard: how to place brackets and so on. It may seem like a boring and useless thing, but it really unifies all the code and makes it much easier to read and maintain. So, do you use a coding standard?
If you're not well organized, your codebase will become your very own Zebra Puzzle. Adding more code is like adding more people/animals/houses to your puzzle, and soon you have 150 various animals, people, houses and cigarette brands in your puzzle and you realize that it just took you a week to add 3 lines of code because everything is so inter-related that it takes forever to make sure the code still executes how you want it to.
The most popular organizational paradigm seems to be Object-Oriented Programming: if you can break your logic down into small units that can be constructed and used independently of each other, you will find bugs far less painful when they occur.

How to justify to your colleagues that they produce crappy code?

I am finding somewhat difficult to carry on working in my current job.
The codebase has become a bit wild lately (but definitely not the worst I've seen), and I'm having a hard time dealing with some parts of the code. I could be stupid, but most likely it's just that it demotivates me a lot to start working on something that is hard to reason about.
My boss is already aware of my thoughts - I expressed what it feels like to work like this. He asked me to provide examples of what was wrong. When I pointed out two or three small issues, he said "yeah, ok" but that refactoring costs him a lot of money, and that we have to get the product out (not the first time I hear this).
I have to admit that the examples were not the most compelling, but the problem is actually tough to explain. It's made up of a lot of tiny "bad decisions" throughout the codebase: bad naming, careless handling of nulls, boilerplate, not making code reusable (or the opposite), and so on. (I also see that this issue is quite subjective.) It can be tiring to re-think someone else's code all over again just to justify that I would have done it differently.
Do you have thoughts on how to deal with this?
I am a bit fed up of having to go hacking around a quick 'n dirty codebase every time!
Sometimes your fellow programmers do things very differently than you do, and things you feel are plain wrong may actually have positive aspects. We all have our schools we come from. I think I've come across programmers who complain about things I don't understand just as often as I myself have felt that something needed to be complained about.
Make sure you can reduce whatever you complain about to a concrete disadvantage, if for no other reason than so you can motivate middle management to make improvements. Things that are hard to reduce to measurable facts usually originate from a difference in taste/style rather than quality (there are books to read on this subject). The answer posted by smacl has good, concrete advice!
If you can reduce your concern to a real disadvantage, then I really do not agree when people say that one has to "accept" situations like this. I've been exposed to this problem more than once, and let me tell you, refactoring is not the solution to the problem. Refactoring only fixes the symptoms.
Accepting a situation like this is the same as saying "bad-quality product lines and expensive, frustrating maintenance are something my company can live with". This is of course seldom the case. However, management (i.e., those with the go/no-go on which projects to prioritize) are very often not technically aware of what the problems are, or why development is expensive. They shouldn't have to be, for that matter.
That's why you need a development organization with technical leads, chief architects, a good organizational structure, a tiered model, etc.: experienced software professionals who have seen where the road leads if you ignore certain aspects of development. It's about changing the "culture" of your team(s).
Either you stick with your company and try to change how you do things from the roots, or you find another place to work and make sure you find out during the interview exactly how they work in everyday development.
Good luck
I recently faced a very similar problem and a friend gave me some advice that helped a great deal. He said: "keep yourself out of it."
What he meant was, that you must communicate the problems because they are real, costly problems with consequences in terms of time and money. But when you do communicate, talk only about the consequences for the organization. Do not mention the consequences to you, because then it just sounds like whining and will be ignored.
For example:
Not keeping yourself out of it:
"The other developers use these obscure, misleading identifiers and then I have to spend hours going over the code trying to discover what they meant. It's taking up a lot of my time."
Keeping yourself out of it:
"It would be very helpful and cost effective to do some refactoring of class and variable names and also establish some coding standards around identifiers. The immediate payoff will be an easier-to-understand codebase for everyone, leading to better productivity. The longer-term payoff will be that later we'll be able to modify the code and fix things faster. If a critical bug is discovered right before a release, an understandable codebase will be really important."
I hope that helps.
1) Make the problem more visible and get management buy-in
Keep a very detailed diary of the time spent on various coding tasks over the period of about a month. At the end of the month analyse and summarise the contents for your boss, i.e. time wasted and hence money wasted, to illustrate that change of some form is necessary.
2) Think of a cost effective way of moving forward
For example: rather than refactoring the entire code base, separate interfaces from implementations, and enforce tighter standards, including unit tests, naming conventions, etc., at the interface layer. Thus each programmer can have confidence in using code that they have not written. While this is sweeping the crap under the carpet to a certain extent, it is a good way of preparing for larger-scale refactoring.
It is important from a management perspective that workflow is not interrupted, and positive results are visible, so plan accordingly.
3) Agree on longer-term improvements with your co-workers
Sit down with the other programmers and agree on reasonable coding standards for future code.
Perhaps you could set up monthly meetings, and at those meetings you could demonstrate good and bad code. Obviously you don't want to point fingers, so you'd want to use generic code examples based on things you saw in your project. This way you can constructively gather support from others for your style. You might want to compile these examples after the meetings so people can easily reference them.
I think it is really easy to point out issues and complain, but mentoring people and helping them change requires effort. It isn't an easy task, but if you are having trouble staying motivated with your job, perhaps this would give you a nice burst of motivation. You might learn some things along the way.
You'll find that this is common-place. What you can do is accept that things are done differently by different people. As you fix bugs or add features, you'll get a brief window into a sub-section of the application that you can improve. When you work on the code, you can make it better, and they don't need to know that you're piecemeal improving the code.
Be very careful though. Sometimes code is written in a way that looks 'hacked', but solves a bug that is not easy to discern. Especially if it is older code which has been tried and tested.
On another note, complaining will only get you viewed as a complainer. Think about what outcome you want, and what actions will most likely produce that outcome. You will always hear the answer 'No' when you ask, 'Can I do X days of work for absolutely no noticeable result?'
You could quit and hope to find something better.
Or, you could stick it out and try to improve the code that you can control, when you can control it. No matter how well-intentioned the developers are, if there is more than one developer the code base will be "ugly" by a competent developer's standards. Work with the other developers to improve their abilities and refactor code as you make enhancements.
For starters:
Enforce the use of static code analysis tools. Every language has a few well known tools.
Show some before-and-after refactored code examples, and explain why you think the new version is better; try not to put any one person on the spot. (A small example follows this list.)
Code reviews by experienced developers.
Keep in mind that some developers can't be helped, no matter how much you try...
If someone critiques your code, be polite and open-minded; you might learn something.
Track cyclomatic complexity against the number of changesets/bugs. Complex code is more likely to break, causing more bugs, which cause more changes, which cost more money!
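For the before-and-after suggestion above, a small invented Python example of the kind of comparison that tends to land well:

    # Before: nested conditionals and magic numbers hide the pricing rules.
    def price_before(customer_type, quantity):
        if customer_type == 1:
            if quantity > 10:
                return quantity * 9.0
            else:
                return quantity * 10.0
        else:
            return quantity * 12.0

    # After: named constants and a guard clause make the same rules readable.
    MEMBER = 1
    BULK_THRESHOLD = 10

    def price_after(customer_type, quantity):
        if customer_type != MEMBER:
            return quantity * 12.0
        unit = 9.0 if quantity > BULK_THRESHOLD else 10.0
        return quantity * unit

    # Behavior is preserved, which is the whole point of a refactoring.
    assert price_before(1, 20) == price_after(1, 20) == 180.0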
99% of the time you never get to choose the people you work with. Not all relationships work out, be they work or otherwise.
It would be best if your project was broken up enough so that each developer can contribute to a spec of what the other needs, so programmers don't step on each other's toes.
Getting people to change their coding style is hard. It takes a cast-iron technical lead committed to such things, and it will help when you bring it up. Management types can't do this; the leadership needs to provide the technical details.
It sounds to me like you don't have a problem with the code so much as with your coworkers. It will probably be very difficult for you to force the changes you want to see. Your best bet would probably be to start updating your resume and keep your eyes open for other opportunities.
I think that once you're in the middle of the weeds, you do not really have a good chance of getting things done right, you just have to get them done. I would say most developers do not like firefighting and want the ideal code base, but in my opinion this requires you to spend the time up front planning the system out.
I'd recommend trying to work with your manager to ensure that the areas you feel are lacking now are not lacking in the next project. Maybe it's putting you in the lead, having more code reviews with peers, or maybe it is further training for the entire team.
Either way, I think this is something that most of us go through. I do agree with the other person advising some caution on this. I know that code I wrote yesterday seemed great at the time and looking back on it, can probably find 10 other ways to do it and make it look cleaner.
Have you considered adding FxCop to the automated builds to enforce coding style? Other than that, you could try suggesting TDD, which would give whoever writes the tests the power to enforce that the interfaces for each class are structured in a particular way.
Off the top of my head, that's all that I can think of.
Things in life are not perfect and if you start nitpicking, feathers will be ruffled and relationships soured.
The best method is to pick your battles carefully. If something is small enough ignore it and live with it. If it is big and worthwhile (i.e. the management sees ROI in backing you) go for it.
This is apt for your situation...
God, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
One thing I try to do that may help you: if a part of the code is bad, and the idea you propose to fix it is agreed to be best but the "no time" excuse is given, why don't you rewrite it, say, on your own time? If you decide to stick around at that job for a while, it will only help you. And only you will learn and become a better programmer.
Note that it is a good idea, and I would even say required, to do a complete code review of that change before check-in, and you should try to time the check-in so that it lands before a complete regression test cycle for a release. That way your refactoring is fully tested. Over a period of 6 months or so it will start showing a beneficial impact, and you can then ask for time allocation for this work, with proof to back it up.
The only thing that has a chance of convincing management is demonstrating that the things you are citing as perceived problems become actual problems.
To try to take advantage of this, try to keep the "complainer" tone down to a minimum, that is, focus on how this affects the bottom line rather than how it makes you feel. Point out possible consequences of poor decisions that you see being made. If those consequences come to pass, and they cost more than an up-front fix would have, gently remind management that you foresaw the difficulty and provide a helpful suggestion as to how future similar costs can be avoided with a little up-front effort.
The problem is, in many organizations the problems will never cause enough pain for management to care, or if they do, management won't see the connection between your perception of the problem and the actual problem as it occurs. In these cases you end up seeming like a needlessly persnickety technical person, which isn't a reputation you want to have.
So my advice is, pick your battles. If there is something very egregious that others are about to let slip, then you can speak up and perhaps be vindicated later. For the little details that just grind away at you, I'm afraid there's not much you can do but put up with it.
Show them their own forgotten code disguised as yours for critique.
Take an old piece of their code they have forgotten about
Pretend you wrote it
Ask them to figure out something with it
Make sure they point out how bad the code is for whatever reason
Add your own items. Brainstorm what should be done, since it's supposedly your fault.
Let them know you didn't know how to bring it up without offending them, but that it's actually their code.
If they recall that they wrote it, they might catch on..
If you have a good relationship with your manager, you might be able to use this to work yourself into a "Senior" or "Lead" Developer role. You could propose that it would be best if one person on the team takes technical leadership of the code base. It would be your job to review the code of others and ask them to make improvements when you feel it is necessary. If you go this route, just make sure to take it slowly. If you ask for a lot very quickly, then you could end up pissing off all the other developers.

Why is good UI design so hard for some Developers? [closed]

Some of us just have a hard time with the softer aspects of UI design (myself especially). Are "back-end coders" doomed to only design business logic and data layers? Is there something we can do to retrain our brain to be more effective at designing pleasing and useful presentation layers?
Colleagues have recommended a few books to me, including The Design of Sites, Don't Make Me Think, and Why Software Sucks, but I am wondering what others have done to remove their deficiencies in this area?
Let me say it directly:
Improving on this does not begin with guidelines. It begins with reframing how you think about software.
Most hardcore developers have practically zero empathy with users of their software. They have no clue how users think, how users build models of software they use and how they use a computer in general.
It is the typical problem when an expert collides with a layman: how on earth could a normal person be so dumb as not to understand what the expert understood 10 years ago?
One of the first facts to acknowledge, one that is unbelievably difficult to grasp for almost all experienced developers, is this:
Normal people have a vastly different concept of software than you have. They have no clue whatsoever of programming. None. Zero. And they don't even care. They don't even think they have to care. If you force them to, they will delete your program.
Now that's unbelievably harsh for a developer. He is proud of the software he produces. He loves every single feature. He can tell you exactly how the code behind it works. Maybe he even invented an unbelievably clever algorithm that made it work 50% faster than before.
And the user doesn't care.
What an idiot.
Many developers can't stand working with normal users. They get depressed by users' nonexistent knowledge of technology. And that's why most developers shy away and think users must be idiots.
They are not.
If a software developer buys a car, he expects it to run smoothly. He usually does not care about tire pressures, the mechanical fine-tuning that was important to make it run that way. Here he is not the expert. And if he buys a car that does not have the fine-tuning, he gives it back and buys one that does what he wants.
Many software developers like movies. Well-done movies that spark their imagination. But they are not experts in producing movies, in producing visual effects or in writing good movie scripts. Most nerds are very, very, very bad at acting because it is all about displaying complex emotions and little about analytics. If a developer watches a bad film, he just notices that it is bad as a whole. Nerds have even built up IMDB to collect information about good and bad movies so they know which ones to watch and which to avoid. But they are not experts in creating movies. If a movie is bad, they'll not go to the movies (or not download it from BitTorrent ;)
So it boils down to: Shunning normal users as an expert is ignorance. Because in those areas (and there are so many) where they are not experts, they expect the experts of other areas to have already thought about normal people who use their products or services.
What can you do to remedy it? The more hardcore you are as a programmer, the less open you will be to normal user thinking. It will seem alien and clueless to you. You will think: I can't imagine how people could ever use a computer with this lack of knowledge. But they can. For every UI element, think about: Is it necessary? Does it fit the concept a user has of my tool? How can I make him understand it? Please read up on usability; there are many good books. It's a whole area of science, too.
Ah and before you say it, yes, I'm an Apple fan ;)
UI design is hard
To the question:
why is UI design so hard for most developers?
Try asking the inverse question:
why is programming so hard for most UI designers?
Coding a UI and designing a UI require different skills and a different mindset. UI design is hard for most developers, not some developers, just as writing code is hard for most designers, not some designers.
Coding is hard. Design is hard too. Few people do both well. Good UI designers rarely write code. They may not even know how, yet they are still good designers. So why do good developers feel responsible for UI design?
Knowing more about UI design will make you a better developer, but that doesn't mean you should be responsible for UI design. The reverse is true for designers: knowing how to write code will make them better designers, but that doesn't mean they should be responsible for coding the UI.
How to get better at UI design
For developers wanting to get better at UI design I have 3 basic pieces of advice:
Recognize design as a separate skill. Coding and design are separate but related. UI design is not a subset of coding. It requires a different mindset, knowledge base, and skill group. There are people out there who focus on UI design.
Learn about design. At least a little bit. Try to learn a few of the design concepts and techniques from the long list below. If you are more ambitious, read some books, attend a conference, take a class, get a degree. There are a lot of ways to learn about design. Joel Spolsky's book on UI design is a good primer for developers, but there's a lot more to it, and that's where designers come into the picture.
Work with designers. Good designers, if you can. People who do this work go by various titles. Today, the most common titles are User Experience Designer (UXD), Information Architect (IA), Interaction Designer(ID), and Usability Engineer. They think about design as much as you think about code. You can learn a lot from them, and they from you. Work with them however you can. Find people with these skills in your company. Maybe you need to hire someone. Or go to some conferences, attend webinars, and spend time in the UXD/IA/ID world.
Here are some specific things you can learn. Don't try to learn everything. If you knew everything below you could call yourself an interaction designer or an information architect. Start with things near the top of the list. Focus on specific concepts and skills. Then move down and branch out. If you really like this stuff, consider it as a career path. Many developers move into management, but UX design is another option.
Learn fundamental design concepts. You should know about affordances, visibility, feedback, mappings, Fitts's law, poka-yokes, and more. I recommend reading The Design of Everyday Things (Don Norman) and Universal Principles of Design (Lidwell, Holden, & Butler)
Learn about user experience. This is becoming the umbrella term for the human-centered design of web sites, applications, and any other digital artifact. The classic primer here is The Elements of User Experience (Jesse James Garrett). You can get an overview and the first few chapters from the author's site.
Learn to sketch designs. Sketching is a fast way to explore design options and find the right design, whereas usability testing is about getting the design right. Paper prototyping is fast, cheap, and effective during the early design stages, and much faster than coding a digital prototype. The key text here is Sketching User Experiences: Getting the Design Right and the Right Design (Bill Buxton). Sketching is a particularly useful skill when working with IA/ID/UX designers. Your collaboration will be more effective. For a good primer on how and why designers sketch, watch the presentation How to be a UX team of one by Leah Buley from the 2008 IA Summit.
Learn paper prototyping. The fastest way to iteratively test an interface before you write code. Different from sketching and usability testing. The definitive book here is Paper Prototyping (Carolyn Snyder). You can get a good DVD on this from the Nielsen Norman Group.
Learn usability testing. Discount testing is easy and effective. But for many UIs, usability is hard to do well. You can learn the basics quickly, but good usability people are invaluable. If you want a book, the classic is The Handbook of Usability Testing (Jeffrey Rubin). It's older but offers thorough coverage of lab-based testing. The famous starter book is Don't Make Me Think (2nd Ed) (Steve Krug). I caution people about this one: Krug makes it sound easier than it is. But it is a good starting point. The user research books listed in the next point also cover this topic. And you can find piles about it online.
Learn about information architecture. The main book here is Information Architecture for the World Wide Web (3rd) (Louis Rosenfeld & Peter Morville). A good starter book is Information Architecture: Blueprints for the Web (Christina Wodtke). For more, visit the Information Architecture Institute or attend the annual Information Architecture Summit.
Learn about interaction design. The main book here is The Essentials of Interaction Design (3rd) (Alan Cooper, et al). A good starter book is Designing for interaction (Dan Saffer). For more, visit the Interaction Design Association (IxDA) or attend the annual Interaction Design conference.
Learn fundamentals of graphic design. Graphic design is not UI design, but concepts from graphic design can improve an interface. Graphic design introduces design principles for the visual presentation of information, such as proximity, alignment, and small multiples. I recommend reading The non-designer's design book (Robin Williams) and Envisioning Information (Edward Tufte)
Learn to do user research. Where usability tests an interface, user research tries to model users and their tasks through personas, scenarios, user journeys, and other documents. It's about understanding users and what they do, then using that to inform the design instead of guessing. Some techniques are interviews, surveys, diary studies, and card sorting. Good books on this are Observing the User Experience (Mike Kuniavsky) and Understanding Your Users (Courage & Baxter)
Learn to do field research. Watching people in the lab under artificial conditions helps (ie: usability), but there is nothing like watching people use your code in context: their home, their office, or wherever they use it. Goes by various names, including ethnography, field studies, and contextual inquiry. Here is a good primer on field research. Two of the better known books here are Rapid Contextual Design (Karen Holtzblatt et al) and User and task analysis for interface design (Hackos & Redish).
Read UX design web sites. Some of the big ones are Boxes & Arrows, UX Mag, UX Matters, and Digital Web magazine.
Use UI pattern libraries. There are patterns for interfaces. For web sites, I recommend The Design of Sites, 2nd ed (Van Duyne, et al) and Homepage usability: 50 websites deconstructed (Jakob Nielsen & Marie Tahir). For desktop applications I recommend Designing interfaces (Jennifer Tidwell), and for web applications I recommend Designing Web Interfaces: Principles and Patterns for Rich Interactions (Bill Scott & Theresa Neil). Online you should check Welie pattern library, UI patterns, and Web UI patterns.
Attend UX design conferences. Some good annual conferences are: Information Architecture Summit, Interaction '09 (IxDA), User Interface, and UX week.
Attend a workshop or webinar. You can take workshops, webinars, and online courses. This is far from a comprehensive list, but you might try the UIE virtual seminars, Adaptive Path virtual seminars, and UX webinars from Rosenfeld Media.
Get a degree. A graduate degree in HCI is one approach, but these programs are mostly about writing code. If you want to learn about the design of digital artifacts and devices, then you want a graduate program that's not in CS. Some options include Interaction Design at Carnegie Mellon, the d-School at Stanford, the ITP program at NYU, and Information Architecture & Knowledge Management at Kent State (disclosure: I'm on faculty at Kent; we are seeing more and more people with CS degrees moving into UX design instead of management, which is interesting, because management is the traditional path for developers who want to move away from writing code while staying in their field). There are many more programs. Each has its own perspective, areas of emphasis, and technical expectations. Some come out of the arts and visual design, others out of library and information science, and some from CS. Most are hybrids, but every hybrid has deeper roots in one or more fields. If this interests you, look around and try to understand the differences between these programs. Some offer online courses and certificate programs in addition to full-fledged degrees.
Why UI design is hard
Good UI design is hard because it involves 2 vastly different skills:
A deep understanding of the machine. People in this group worry about code first, people second. They have deep technological knowledge and skill. We call them developers, programmers, engineers, and so forth.
A deep understanding of people and design: People in this group worry about people first, code second. They have deep knowledge of how people interact with information, computers, and the world around them. We call them user experience designers, information architects, interaction designers, usability engineers, and so forth.
This is the essential difference between these 2 groups—between developers and designers:
Developers make it work. They implement the functionality on your TiVo, your iPhone, your favorite website, etc. They make sure it actually does what it is supposed to do. Their highest priority is making it work.
Designers make people love it. They figure out how to interact with it, how it should look, and how it should feel. They design the experience of using the application, the web site, the device. Their highest priority is making you fall in love with what developers make. This is what is meant by user experience, and it's not the same as brand experience.
Moreover, programming and design require different mindsets, not just different knowledge and different skills. Good UI design requires both mindsets, both knowledge bases, both skill groups. And it takes years to master either one.
Developers should expect to find UI design hard, just as UI designers should expect to find writing code hard.
What really helps me improve my design is to grab a fellow developer, one of the QA guys, the PM, or anyone who happens to walk by, and have them try out a particular widget or screen.
It's amazing what you will realize when you watch someone else use your software for the first time.
Ultimately, it's really about empathy -- can you put yourself in the shoes of your user?
One thing that helps, of course, is "eating your own dogfood" -- using your applications as a real user yourself, and seeing what's annoying.
Another good idea is to find a way to watch a real user using your application, which may be as complicated as a usability lab with one-way mirrors, screen video capture, video cameras on the users, etc., or can be as simple as paper prototyping using the next person who happens to walk down the hall.
If all else fails, remember that it's almost always better for the UI to be too simple than too complicated. It's very very easy to say "oh, I know how to solve that, I'll just add a checkbox so the user can decide which mode they prefer". Soon your UI is too complicated. Pick a default mode and make the preference setting an advanced configuration option. Or just leave it out.
If you read a lot about design you can easily get hung up on dropped shadows and rounded corners and so forth. That's not the important stuff. Simplicity and discoverability are the important stuff.
Contrary to popular myth, there are literally no soft aspects in UI design, at least no more than are needed to design a good back end.
Consider the following; good back end design is based upon fairly solid principles and elements any good developer is familiar with:
low coupling
high cohesion
architectural patterns
industry best practices
etc
Good back end design is usually born through a number of iterations, where, based on the measurable feedback obtained during tests or actual use, the initial blueprint is gradually improved. Sometimes you need to prototype smaller aspects of the back end and trial them in isolation, etc.
Good UI design is based on the sound principles of:
visibility
affordance
feedback
tolerance
simplicity
consistency
structure
UI is also born through test and trial, through iterations, but not with a compiler and an automated test suite: with people. Similarly to the back end, there are industry best practices, measurement and evaluation techniques, and ways to think of UI and set goals in terms of the user model, system image, designer model, structural model, functional model, etc.
The skill set needed for designing UI is quite different from that needed for designing the back end, so don't expect to be able to do good UI without doing some learning first. What both these activities have in common, however, is the process of design. I believe that anyone who can design good software is capable of designing good UI, as long as they spend some time learning how.
I recommend taking a course in Human Computer Interaction, check MIT and Yale site for example for online materials:
MIT User Interface Design and Implementation Course
Structural vs Functional Model in Understanding and Usage
The excellent earlier post by Thorsten79 brings up the topic of software development experts vs. users and how their understanding of software differs. Human learning experts distinguish between functional and structural mental models. Finding your way to a friend's house is an excellent example of the difference between the two:
The first approach is a set of detailed instructions: take the first exit off the motorway, then after 100 yards turn left, etc. This is an example of a functional model: a list of concrete steps necessary to achieve a certain goal. Functional models are easy to use; they do not require much thinking, just straightforward execution. Obviously there is a penalty for the simplicity: it might not be the most efficient route, and any exceptional situation (e.g., a traffic diversion) can easily lead to a complete failure.
A different way to cope with the task is to build a structural mental model. In our example that would be a map, which conveys a lot of information about the internal structure of the "task object". From understanding the map and the relative locations of our house and our friend's house, we can derive the functional model (the route). Obviously this requires more effort, but it is a much more reliable way of completing the task in spite of possible deviations.
The choice between conveying a functional or a structural model through the UI (for example, wizard vs. advanced mode) is not as straightforward as it might seem from Thorsten79's post. Advanced and frequent users might well prefer the structural model, whereas occasional or less experienced users prefer the functional one.
Google maps is a great example: they include both functional and structural model, so do many sat navs.
Another dimension of the problem is that the structural model presented through the UI must not map to the structure of the software, but rather map naturally to the structure of the user task at hand or the task object involved.
The difficulty here is that many developers will have a good structural model of their software internals, but only a functional model of the user task the software aims to assist with. To build good UI, one needs to understand the task/task-object structure and map the UI to that structure.
Anyway, I still can't recommend taking a formal HCI course strongly enough. There's a lot of stuff involved, such as heuristics, principles derived from Gestalt psychology, ways humans learn, etc.
I suggest you start by doing all your UI in the same way as you are doing now, with no focus on usability and stuff.
[Image: a Microsoft Word window overloaded with toolbars: http://www.stricken.org/uploaded_images/WordToolbars-718376.jpg]
Now think of this:
A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.
— Saint-Exupéry
And apply this in your design.
A lot of developers think that because they can write code, they can do it all. Designing an interface is a completely different skill, and it was not taught at all when I attended college. It's not something that just comes naturally.
Another good book is The Design of Everyday Things by Donald Norman.
There's a huge difference between design and aesthetics, and they are often confused.
A beautiful UI requires artistic or at least aesthetic skills that many, including myself, are incapable of producing. Unfortunately, it is not enough and does not make the UI usable, as we can see in many heavyweight flash-based APIs.
Producing usable UIs requires an understanding of how humans interact with computers, some issues in psychology (e.g., Fitts's law, Hick's law), and other topics. Very few CS programs train for this. Very few developers that I know will pick a user-testing book over a JUnit book, etc.
Many of us are also "core programmers", tending to think of UIs as the facade rather than as a factor that could make or break the success of our project.
In addition, most UI development experience is extremely frustrating. We can either use toy GUI builders like old VB and have to deal with ugly glue code, or we use APIs that frustrate us to no end, like trying to sort out layouts in Swing.
Go over to Slashdot, and read the comments on any article dealing with Apple. You'll find a large number of people talking about how Apple products are nothing special, and ascribing the success of the iPod and iPhone to people trying to be trendy or hip. They will typically go through feature lists, and point out that they do nothing earlier MP3 players or smart phones didn't do.
Then there are people who like the iPod and iPhone because they do what the users want simply and easily, without reference to manuals. The interfaces are about as intuitive as interfaces get, memorable, and discoverable. I'm not as fond of the UI on MacOSX as I was on earlier versions, I think they've given up some usefulness in favor of glitz, but the iPod and iPhone are examples of superb design.
If you are in the first camp, you don't think the way the average person does, and therefore you are likely to make bad user interfaces because you can't tell them from good ones. This doesn't mean you're hopeless, but rather that you have to explicitly learn good interface design principles, and how to recognize a good UI (much as somebody with Asperger's might need to learn social skills explicitly). Obviously, just having a sense of a good UI doesn't mean you can make one; my appreciation for literature, for example, doesn't seem to extend to the ability (currently) to write publishable stories.
So, try to develop a sense for good UI design. This extends to more than just software. Don Norman's "The Design of Everyday Things" is a classic, and there are other books out there. Get examples of successful UI designs, and play with them enough to get a feel for the difference. Recognize that you may have to learn a new way of thinking about things, and enjoy it.
The main rule of thumb I hold to, is never try to do both at once. If I'm working on back-end code, I'll finish up doing that, take a break, and return with my UI hat on. If you try to work it in whilst you're doing code, you'll approach it with the wrong mindset, and end up with some horrible interfaces as a result.
I think it's definitely possible to be both a good back-end developer and a good UI designer, you just have to work at it, do some reading and research on the topic (everything from Miller's #7, to Nielsen's archives), and make sure you understand why UI design is of the utmost importance.
I don't think it's a case of needing to be creative but rather, like back-end development, it is a very methodical, very structured thing that needs to be learned. It's people getting 'creative' with UIs that creates some of the biggest usability monstrosities... I mean, take a look at 100% Flash websites, for a start...
Edit: Krug's book is really good... do take a read of it, especially if you're going to be designing for the Web.
There are many reasons for this.
(1) The developer fails to see things from the point of view of the user. This is the usual suspect: lack of empathy. But it is usually not the real cause, since developers are not as alien as people make them out to be.
(2) Another, more common reason is that the developer, being so close to his own work and having stayed with it for so long, fails to realize that it may not be so familiar (a term better than "intuitive") to other people.
(3) Still another reason is that the developer lacks techniques.
MY BIG CLAIM: read any UI, human interaction design, or prototyping book, e.g., Designing the Obvious: A Common Sense Approach to Web Application Design; Don't Make Me Think: A Common Sense Approach to Web Usability; Designing the Moment; whatever.
How do they discuss task flows? How do they describe decision points? That is, in any use case, there are at least 3 paths: success, failure/exception, alternative.
Thus, from point A, you can go to A.1, A.2, A.3.
From point A.1, you can get to A.1.1, A.1.2, A.1.3, and so on.
How do they show such drill-down task flow?
They don't. They just gloss over it.
Since even UI experts don't have a technique for this, developers have no chance.
The developer thinks the flow is clear in his head. But it is not even clear on paper, let alone in the software implementation.
I have to use my own hand-made techniques for this.
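For what it's worth, here is one possible hand-rolled encoding. This is not the author's actual technique, just a sketch of making every branch explicit as a tree whose nodes carry the success/failure/alternative split described above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    # One decision point; each outcome leads to another step or ends.
    name: str
    success: Optional["Step"] = None
    failure: Optional["Step"] = None
    alternative: Optional["Step"] = None

def print_flow(step: "Step", label: str = "A") -> None:
    # Number nodes A, A.1, A.2.3, ... so no path gets glossed over.
    print(label, step.name)
    for i, child in enumerate((step.success, step.failure, step.alternative), 1):
        if child is not None:
            print_flow(child, f"{label}.{i}")

# A hypothetical login flow:
flow = Step("enter credentials",
            success=Step("show dashboard"),
            failure=Step("show error", alternative=Step("offer password reset")),
            alternative=Step("continue as guest"))
print_flow(flow)
# A enter credentials
# A.1 show dashboard
# A.2 show error
# A.2.3 offer password reset
# A.3 continue as guest
```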
I try to keep in touch with design-specific websites and texts. I also found Robin Williams's excellent book The Non-Designer's Design Book very useful in these studies.
I believe that design and usability are very important parts of software engineering; we should learn more about them and stop making excuses that design is not our job.
Everyone can be a designer once in a while, just as everyone can be a programmer.
When approaching UI design, here are a few of the things I keep in mind throughout (by far not a complete list):
Communicating a model. The UI is a narrative that explains a mental model to the user. This model may be a business object, a set of relationships, what have you. The visual prominence, spatial placement, and workflow ordering all play a part in communicating this model to the user. For example, a certain kind of list vs another implies different things, as well as the relationship of what's in the list to the rest of the model. In general I find it best to make sure only one model is communicated at a time. Programmers frequently try to communicate more than one model, or parts of several, in the same UI space.
Consistency. Re-using popular UI metaphors helps a lot. Internal consistency is also very important.
Grouping of tasks. Users should not have to move the mouse all the way across the screen to verify or complete a related sequence of commands. Modal dialogs and flyout-menus can be especially bad in this area.
Knowing your audience. If your users will be doing the same activities over and over, they will quickly become power users at those tasks and be frustrated by attempts to lower the initial entry barrier. If your users do many different kinds of activities infrequently, it's best to ensure the UI holds their hand the whole time.
Read Apple's Human Interface Guidelines.
I find the best tool in UI design is to watch a first-time User attempt to use the software. Take loads of notes and ask them some questions. Never direct them or attempt to explain how the software works. This is the job of the UI (and well written documentation).
We consistently adopt this approach in all projects. It is always fascinating to watch a User deal with software in a manner that you never considered before.
Why is UI design so hard? Generally because the Developer and the User never meet.
duffymo just reminded me why: many programmers think "Design" == "Art".
Good UI design is absolutely not artistic. It follows solid principles that can be backed up with data, if you've got the time to do the research.
I think all programmers need to do is take the time to learn the principles. I think it's in our nature to apply best practice whenever we can, be it in code or in layout. All we need to do is make ourselves aware of what the best practices are for this aspect of our job.
What have I done to become better at UI design?
Pay attention to it!
It's like how every time you see a chart on the news or an electronic bus sign, you wonder, 'How did they get that data? Did they do that with raw SQL, or are they using LINQ?' (or insert your own common geek curiosity here).
You need to start doing that but with visual elements of all kinds.
But just like learning a new language, if you don't really throw yourself into it, you won't ever learn it.
Taken from another answer I wrote:
Learn to look, really look, at the world around you. Why do I like that UI but hate this one? Why is it so hard to find the noodle dishes in this restaurant menu? Wow, I knew what that sign meant before I even read the words. Why was that? How come that book cover looks so wrong? Learn to take the time to think about why you react the way you do to visual elements of all kinds, and then apply this to your work.
However you do it (and there are some great points above), it really helped me once I accepted that there is NO SUCH THING AS INTUITIVE....
I can hear the arguments rumbling on the horizon... so let me explain a little.
Intuitive: using what one feels to be right or true based on an unconscious method or feeling.
If (as Carl Sagan postulated) you accept that you cannot comprehend things that are absolutely unlike anything you have ever encountered, then how could you possibly "know" how to use something if you have never used anything remotely like it?
Think about it: kids try to open doors not because they "know" how a doorknob works, but because they have seen someone else do it... often they turn the knob in the wrong direction, or pull too soon. They have to LEARN how a doorknob works. This knowledge then gets applied in different but similar instances: opening a window, opening a drawer, opening almost anything big with a big, knob-looking handle.
Even simple things that seem intuitive to us will not be intuitive at all to people from other cultures. If someone held their arm out in front of them and waved their hand up and down at the wrist while keeping the arm still... are they waving you away? Probably, unless you are in Japan, where this hand signal can mean "come here". So who is right? Both, of course, in their own context. But if you travel to both places, you need to know both... UI design.
I try to find the things that are already "familiar" to the potential users of my project and then build the UI around them: user-centric design.
Take a look at Apple's iPhone. Even if you hate it, you have to respect the amount of thought that went into it. Is it perfect? Of course not. Over time an object's perceived "intuitiveness" can grow or even fade away completely.
For example. Most everyone knows that a strip of black with two rows of holes along the top and bottom looks like a film strip... or do they?
Ask your average 9- or 10-year-old what they think it is. You may be surprised how many kids right now have a hard time identifying it as a film strip, even though it is still used to represent Hollywood or anything film (movie) related. Most movies of the past 20 years have been shot digitally. And when was the last time any of us held a piece of film of ANY kind, photographic or cinematic?
So, what it all boils down to for me is: know your audience, and keep researching to stay on top of trends and changes in what is "intuitive". Target your main users, and try not to punish the inexperienced in favor of the advanced users, or slow the advanced users down in order to hand-hold the novices.
Ultimately, every program will require a certain amount of training on the user's part to use it. How much training and for which level of user is part of the decisions that need to be made.
Some things are more or less familiar based on your target user's past experience level as a human being, or computer user, or student, or whatever.
I just shoot for the fattest part of the bell curve and try to reach as many people as I can, knowing that I will never please everyone...
I know that Microsoft is rather inconsistent with their own guidelines, but I have found that reading their Windows design guidelines has really helped me. I have a copy on my website; just scroll down a little to the Vista UX Guide. It has helped me with things such as colors, spacing, layouts, and more.
I believe the main problem has nothing to do with different talents or skillsets. The main problem is that as a developer, you know too much about what the application does and how it does it, and you automatically design your UI from the point of view of someone who has that knowledge.
Whereas a user typically starts out knowing absolutely nothing about the application and should never need to learn anything about its inner workings.
It is very hard, almost impossible, not to use knowledge that you have, and that's why a UI should not be designed by someone who's developing the app behind it.
"Designing from both sides of the screen" presents a very simple but profound reason as to why programmers find UI design hard: programmers are trained to think in terms of edge cases while UI designers are trained to think in terms of common cases or usage.
So going from one world to the other is certainly difficult when the default training in each is the exact opposite of the other.
To say that programmers suck at UI design is to miss the point. The point is that the formal training most developers get goes in-depth on the technology. Human - Computer Interaction is not a simple topic. It is not something I can "mind-meld" to you with a simple one-line statement that makes you realize, "oh, the users will use this application more effectively if I do x instead of y."
This is because there is one part of UI design that you are missing: the human brain. In order to understand how to design a UI, you have to understand how the human mind interacts with machinery. There is an excellent course on this topic, taught by a psychology professor, that I took at the University of Minnesota; it is called "Human - Machine Interaction". It covers many of the reasons why UI design is so complicated.
Since psychology is based on correlation and not causality, you can never prove that a method of UI design will always work in any given situation. You can show that many users find a particular UI design appealing or efficient, but you cannot prove that it will always generalize.
Additionally, there are two parts to UI design that many people seem to miss: the aesthetic appeal and the functional workflow. If you go for 100% aesthetic appeal, sure, people will buy your product. I highly doubt that aesthetics will ever reduce user frustration, though.
There are several good books on this topic and courses to take (like Bill Buxton's Sketching User Experiences, and Cognition in the Wild by Edwin Hutchins). There are graduate programs in Human - Computer Interaction at many universities.
The overall answer to this question, though, lies in how individuals are taught computer science. It is all math-based and logic-based, not based on the user experience. To get that, you need more than a generic 4-year computer science degree (unless your degree included a minor in psychology with an emphasis on Human - Computer Interaction).
Let's turn your question around -
Are "ui designers" doomed to only design information architecture and presentation layers? Is there something they can do to retrain their brains to be more effective at designing pleasing and efficient system layers?
Seems like them "ui designers" would have to take a completely different perspective - they'd have to look from the inside of the box outwards; instead of looking in from outside the box.
Alan Cooper's opinion, in "The Inmates Are Running the Asylum", is that we can't successfully take both perspectives: we can learn to wear one hat well, but we can't just switch hats.
I think it's because a good UI is not logical. A good UI is intuitive.
Software developers typically do badly at 'intuitive'.
A useful framing is to actively consider what you're doing as designing a process of communication. In a very real sense, your interface is a language that the user must use to tell the computer what to do. This leads to considering a number of points:
Does the user already speak this language? Using a highly idiosyncratic interface is like communicating in a language you've never spoken before. So if your interface must be idiosyncratic at all, it had best introduce itself with the simplest of terms and few distractions. On the other hand, if your interface uses idioms that the user is accustomed to, they'll gain confidence from the start.
The enemy of communication is noise. Auditory noise interferes with spoken communication; visual noise interferes with visual communication. The more noise you can cut out of your interface, the easier communicating with it will be.
As in human conversation, it's often not what you say, it's how you say it. The way most software communicates is rude to a degree that would get it punched in the face if it were a person. How would you feel if you asked someone a question and they sat there and stared at you for several minutes, refusing to respond in any other way, before answering? Many interface elements, like progress bars and automatic focus selection, have the fundamental function of politeness. Ask yourself how you can make the user's day a little more pleasant.
Really, it's hard to say what programmers do think of interface interaction as, if not a process of communication; maybe the problem is that it doesn't get thought of as being anything at all.
There are a lot of good comments already, so I am not sure there is much I can add.
But still...
Why would a developer expect to be able to design good UI?
How much training did he have in that field?
How many books did he read?
How many things did he design, over how many years?
Did he have the opportunity to see the reactions of its users?
We don't expect a random "Joe the plumber" to be able to write good code.
So why would we expect the random "Joe the programmer" to design good UI?
Empathy helps. Separating the UI design and the programming helps. Usability testing helps.
But UI design is a craft that has to be learned, and practiced, like any other.
Developers are not (necessarily) good at UI design for the same reason they aren't (necessarily) good at knitting; it's hard, it takes practice, and it doesn't hurt to have someone show you how in the first place.
Most developers (me included) started "designing" UIs because it was a necessary part of writing software. Until a developer puts in the effort to get good at it, s/he won't be.
To improve, just look around at existing sites. In addition to the books already suggested, you might like to have a look at Robin Williams's excellent book "The Non-Designer's Design Book".
Have a look at what's possible in visual design by browsing the various submissions over at the CSS Zen Garden as well.
UI design is definitely an art though, like pointers in C, some people get it and some people don't.
But at least we can have a chuckle at the attempts. BTW, thanks, OK/Cancel, for a funny comic, and thanks, Joel, for putting it in your book "The Best Software Writing I".
User interface isn't something that can be applied after the fact, like a thin coat of paint. It is something that needs to be there at the start, based on real research. There's tons of usability research available, of course. And it needs to do more than just be there at the start: it needs to form the core of the very reason you're making the software in the first place. There's some gap in the world out there, some problem, and it needs to be made more usable and more efficient.
Software is not there for its own sake. A piece of software exists FOR PEOPLE. It's absolutely ludicrous to even try to come up with an idea for a new piece of software without understanding why anyone would need it. Yet this happens all the time.
Before a single line of code is written, you should go through paper versions of the interface and test them on real people. This is kind of weird and silly; it works best with kids, and with someone entertaining acting as "the computer".
The interface needs to take advantage of our natural cognitive facilities. How would a caveman use your program? For instance, we've evolved to be really good at tracking moving objects. That's why interfaces that use physics simulations, like the iPhone's, work better than interfaces where changes occur instantaneously.
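As a toy illustration of that point (a sketch only, not how any particular platform implements it): animating a change by closing a fraction of the remaining gap each frame gives the eye a continuous trajectory to track, instead of a teleporting element.

```python
import math

def ease_toward(current, target, dt, rate=10.0):
    # Exponential easing: fast at first, settling smoothly,
    # so the element follows a trackable, physical-feeling path.
    return target + (current - target) * math.exp(-rate * dt)

# Slide a panel from x=0 to x=300 over 30 frames at 60 fps:
x = 0.0
for _ in range(30):
    x = ease_toward(x, 300.0, dt=1 / 60)
print(round(x))  # close to 300 after half a second of animation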
We are good at certain kinds of abstraction, but not others. As programmers, we're trained to do mental gymnastics and backflips to understand some of the weirdest abstractions. For instance, we understand that a sequence of arcane text can represent and be translated into a pattern of electromagnetic state on a metal platter, which when encountered by a carefully designed device, leads to a sequence of invisible events that occur at lightspeed on an electronic circuit, and these events can be directed to produce a useful outcome. This is an incredibly unnatural thing to have to understand. Understand that while it's got a perfectly rational explanation to us, to the outside world, it looks like we're writing incomprehensible incantations to summon invisible sentient spirits to do our bidding.
The sorts of abstractions that normal humans understand are things like maps, diagrams, and symbols. Beware of symbols, because symbols are a very fragile human concept that take conscious mental effort to decode, until the symbol is learned.
The trick with symbols is that there has to be a clear relationship between the symbol and the thing it represents. If the thing represented is concrete, the symbol should look VERY MUCH like it. If the symbol represents a more abstract concept, that concept has to be explained IN ADVANCE. Look at the inscrutable unlabeled icons in MS Word's or Photoshop's toolbars, and the abstract concepts they represent. It has to be LEARNED that the crop tool icon in Photoshop means CROP TOOL; it has to be understood what CROP even means. These are prerequisites to correctly using that software. Which brings up an important point: beware of ASSUMED knowledge.
We only gain the ability to understand maps around the age of 4. I think I read somewhere once that chimpanzees gain the ability to understand maps around the age of 6 or 7.
The reason GUIs were so successful to begin with is that they changed a landscape of mostly textual interfaces into something that mapped computer concepts onto something resembling a physical place. Where GUIs fail in terms of usability is where they stop resembling anything you'd see in real life. There are invisible, unpredictable, incomprehensible things that happen in a computer that bear no resemblance to anything you'd ever see in the physical world. Some of this is necessary, since there'd be no point in just building a reality simulator: the idea is to save work, so there has to be a bit of magic. But that magic has to make sense, and be grounded in an abstraction that human beings are well adapted to understanding. It's when our abstractions get deep, layered, and mismatched with the task at hand that things break down. In other words, the interface stops functioning as a good map for the underlying software.
There are lots of books. The two I've read, and can therefore recommend, are "The Design of Everyday Things" by Donald Norman and "The Humane Interface" by Jef Raskin.
I also recommend a course in psychology. "The Design of Everyday Things" talks about this a bit. A lot of interfaces break down because of a developer's "folk understanding" of psychology, which is similar to "folk physics". "An object in motion stays in motion" doesn't make any sense to most people; "you have to keep pushing it to keep it in motion!" thinks the physics novice. User testing doesn't make sense to most developers; "you can just ask the users what they want, and that should be good enough!" thinks the psychology novice.
I recommend Discovering Psychology, a PBS documentary series hosted by Philip Zimbardo. Failing that, try to find a good psychology textbook. The expensive kind. Not the pulp self-help crap you find in Borders, but the thick hardbound stuff you can only find in a university library. This is a necessary foundation. You can do good design without it, but you'll only have an intuitive understanding of what's going on. Reading some good books will give you a solid perspective.
If you read the book "Why Software Sucks", you will have seen Platt's answer, which is a simple one:
Developers prefer control over user-friendliness.
Average people prefer user-friendliness over control.
But another answer to your question would be "why is dentistry so hard for some developers?" UI design is best done by a UI designer.
http://dotmad.net/blog/2007/11/david-platt-on-why-software-sucks/

Feature bloat - how much is too much?

I'm a computer science student designing a project, and I've started wondering: what are good examples of software, or even hardware, that toe the line between being feature-rich, with good usable features for regular users, and being too intimidating for new users? Also, could anyone recommend good tips/books for designing quality applications that are feature-rich but not "bloated"?
"Make everything as simple as possible, but not simpler." - Albert Einstein
"Perfection is reached not when there is nothing left to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry
I am not trying to be flippant, but these quotes really are the best advice. Simplicity of design should be your goal. Not that achieving simplicity is easy! On the contrary, it is quite difficult, but it is possible.
Try thinking about things a bit differently. Rather than
How many things can I add before this becomes bloated?
try
What are the fewest number of features and elements I can include while still providing a superior experience for my users?
Here's a good set of slides from a presentation on the topic: Rescue Princess 2.0.
The first order of business should just be keeping the application easy to use. Beyond that, all I can say is, beware of writing features for an imaginary user: make sure someone actually needs it before you start coding.
As a direct answer to your question: pretty much any Microsoft product. I'm showing my bias here, but Microsoft has a strong tendency to keep their existing codebase and add features on top of features, until the original functionality of the app is nearly lost beneath mounds of accreted crud.
Look at MS Word, for example. While you can still just open it up and start typing, god forbid you want to renumber a section of your document while leaving the rest alone. Heaven forbid you want to generate a Table of Contents that includes references to an Appendix. This sort of stuff is de rigueur for word processors, and Word supports it; it just supports it in a way that you cannot get it done without a manual, several cups of coffee, and bandages to stop the bleeding from banging your head on the desk.
Microsoft isn't alone in this; it tends to happen all the time, with all sorts of products, but they are among the worst offenders I've found.
1: What do your users need and want?
2: Which features will you have time to implement?
Your question is pretty general. Which features constitute bloat? That kind of depends on whether you're writing an antivirus scanner, an OS or a word processor.
There is no clear barrier between "good" and "too much".
However, it depends on what you want to do.
If you're developing an SDK, I recommend splitting your implementation into several small libraries (rather than one big SDL library, there is the SDL core, SDL_mixer, SDL_image, etc.).
If you're developing an application, keep a module-based system and a plug-in mechanism.
That way, new features can be added more easily and bloat can be more easily detected.
You may get to a point where you'll add new features that some will consider "great" and others "bloat". Otherwise, your application may reach a point where some will call it "feature-poor" and others "just enough".
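A minimal sketch of such a plug-in mechanism (all names here are hypothetical; real systems add discovery, versioning, dependency handling, and so on):

```python
class PluginRegistry:
    # The application ships a small core; each optional feature
    # registers itself, so "bloat" stays visible and removable.
    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def get(self, name):
        factory = self._factories.get(name)
        return factory() if factory is not None else None

registry = PluginRegistry()

# A feature one user calls "great" and another calls "bloat"
# lives in its own module and can simply not be installed:
class SpellChecker:
    def check(self, text):
        return []  # stub: would return a list of misspellings

registry.register("spellcheck", SpellChecker)

plugin = registry.get("spellcheck")
if plugin is not None:
    print(plugin.check("helo wrold"))
```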
This isn't an exact quote, but the idea was something like this:
A piece of software is perfect not when there is nothing more to add, but when there is nothing more to remove.
In essence, the simpler and more to-the-point a piece of software is, the better.
To get examples of good software design, take a look at programs that are popular today. Google applications would be a nice place to look. Skype perhaps. Heh, even StackOverflow. :)
If you want intimidating, go to the world of CAD. Check out, for example, Blender, a free, open-source 3D modeling program. A good tool, I'm told, but the UI has so many buttons/panels/menus/etc. that it makes baby bunnies cry. Unfortunately, I cannot say whether this is a good example of a "bad" UI; 3D modeling is a very complex process, and all those tools are probably in the right place. But it's definitely intimidating. :)
Bad UI design can often be found in proprietary software that comes bundled with proprietary hardware. Unfortunately, I cannot give you any examples off the top of my head.
I always tend to design my projects as skeletons that are as extensible as possible. The limiting factors are performance, complexity, and third-party limitations.
That way, additional features can be added after the basic structure is finished. A user can also add the features they need.
This probably does not work very well for GUI applications, which should offer good usability without much configuration, but I'm sticking with this approach for the libraries I develop. (They're used by other coders who like to have a highly modifiable piece of software.)
It's not very hard to develop an application or library that is bloated with features. But it is hard to develop one that can easily be extended by other developers or users to match their own needs.
Develop a wide-ranging plug-in system so you can add and take out stuff at any time. Problem solved. If only that were as easy as writing spaghetti code. ;)

Resources