How to learn the necessary anthropology to create social software? - social-networking

Joel Spolsky repeats over and over that today, knowing a bit of anthropology can be very useful for a programmer, because much of what's being created is social software.
How can someone who already knows computer science learn the anthropology needed to understand how human beings work? Any books? Any recorded lectures?

I agree that knowing a bit about how we think is more important for a developer now than ever. The book Consciousness Explained by Dan Dennett was a real eye opener for me in understanding that we don't think the way we think we think.

I would suggest Clay Shirky's site is a good place to start. It's social anthropology set in a context of the internet, so it's more accessible (to programmers) than purely academic anthropology.

There is a book I've heard is good but haven't had a chance to dig through yet: Programming Collective Intelligence. It gives you some algorithms to quantify human behavior in social software. Sounds interesting.
Matthew Podwysocki wrote a post some time ago about implementing these ideas in Haskell.

I'm not sure that approaching contemporary anthropology as a whole is the best way to develop the knowledge that you seek. Anthropologists study a bunch of different things, and while knowing this material will help you develop better designs and products, this is a case where being a generalist is probably not an effective use of time.
Anthropologists study culture: the superstructural stuff that happens when you put a bunch of people in close proximity and let the situation stew for a while. Apologies for the rough definition. Knowing about culture, how cultures and societies function, what causes them to break, and what causes them to flourish is fascinating and useful. Reading the "anthropological canon" will help you begin to understand this, but again, it's a long road, and I think the questions you need answered are more easily addressed with some specific projects.
First I'd like to characterize anthropology for a moment: although anthropology isn't an experimental field, it's incredibly empirical. Anthropologists collect a lot of data and attempt to describe what they see as completely as possible. This methodology and approach are, I think, extremely useful to software developers. It's very easy to say "people want this," or "users feel this way," about a feature or aspect of your software based on your own experiences. It's terribly difficult to figure out how users actually feel about and interact with your software in a precise way. If you had to take one anthropology class as a software developer, I'd recommend something with a methodological emphasis.
In terms of specific resources, two directions spring to mind.
First, Donna Haraway's "A Cyborg Manifesto" is the foundational work in a field of study that explores the interaction between people and machines as a social phenomenon. It's short. Good read. Amber Case, a young "cyborg anthropologist," does work in Haraway's tradition, and I'd follow up on both of these folks.
Secondly, I'd explore studies of cities and small communities. Except in some very extreme cases (e.g. Twitter, Facebook), whole cultures aren't using your software. Groups are. Learn about them. I think urban studies and work that gets called "urban sociology" might begin to provide the kinds of answers that you'd be interested in. That would be a good place to start.

The only rule to know about social software is that "people will do anything to make money or get laid" :)
But on a serious note, I don't think anthropology is what matters, but rather an understanding of the motivation people have to contribute to social software or to expose themselves on it. There have been quite a few recent books that explain a lot of these concepts well. A good start could be Here Comes Everybody by Clay Shirky.

The Design of Everyday Things
The Humane Interface

Many of the answers here are pointing towards texts on how consciousness works or how people interact with devices. This is a great start, since it shows where you would want to go. Beyond that, you could consider understanding fundamental social and experiential aspects of how humans work. This way, you can develop software with an understanding of how humans could experience your software, as well as how it could be part of a social world.
To this end I recommend The Ethical Primate by Mary Midgley. The text is about philosophy, ethics, freedom, and evolution, but it is firmly grounded in empirical knowledge. It will also give you tools to critically examine the language and knowledge that (in my experience as a computer science major) STEM usually uses when discussing people. If you want to read a shorter text regarding this last point on the dangers of STEM language when describing humans, you could read Mary Midgley's Biotechnology and Monstrosity.
A text that deals less with the ethical and social implications of theorizing about humans is The Tangled Wing.
There are many ethnographies that describe how people interact with technologies such as social media. These are more specific to the kind of technology that you're working on.

Related

Resources to learn how to design algorithms like the ones the Nest Thermostat uses?

I am trying to build some smart home devices by myself, and I am very interested in building IoT algorithms like the Nest Thermostat's, which is able to learn the characteristics of the house and the behavior of the family members.
Though I have some machine learning basics, I know very little about thermal modelling, which much of Nest's research and methods are based on.
So if I want to study and create similar algorithms myself, how should I get started? Any suggested references?
You said it yourself: thermal modelling. So read up on thermodynamics. Until you do, you won't know which part of thermodynamics you need in order to model heat distribution in a house.
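To make "thermal modelling" concrete, here is a minimal, hedged sketch of a first-order lumped model based on Newton's law of cooling; every constant and name in it is a made-up illustration, not anything Nest publishes.
    #include <cstdio>

    // Minimal first-order "lumped" thermal model of a house (illustration only).
    // All constants are invented assumptions, not Nest parameters.
    int main() {
        double indoor = 20.0;           // indoor temperature, deg C
        const double outdoor = 5.0;     // outdoor temperature, deg C
        const double k = 0.02;          // heat-loss coefficient per minute (insulation)
        const double heater_gain = 0.1; // deg C per minute at full heater power
        const double dt = 1.0;          // time step, minutes
        for (int minute = 0; minute < 120; ++minute) {
            double power = (indoor < 21.0) ? 1.0 : 0.0;  // naive on/off thermostat
            // Newton's law of cooling plus heater input:
            // dT/dt = -k * (T_indoor - T_outdoor) + heater_gain * power
            indoor += dt * (-k * (indoor - outdoor) + heater_gain * power);
            std::printf("t=%3d min  T=%.2f C  heater=%s\n",
                        minute, indoor, power > 0.0 ? "on" : "off");
        }
    }
Fitting constants like k and heater_gain to the temperature logs a device records is roughly where the machine learning would come in; driving the heater once you have a model is where the PID loop discussed below comes in.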
One of the most important things about being a programmer is not programming. Programming is almost the least important thing a programmer does (slightly lower than debugging). The most important thing about being a programmer is to understand the requirements of the program.
So someone writing an accounting program should know a bit about accounting. He doesn't need to be an expert but he should at least be able to spot a bug.
Working for big companies you'll find that usually you'll have project managers and systems analysts helping you figure out the requirements. But coding your own project you have to be your own project manager and architect. So you have to do the reading-up.
Now, apart from the general advice above, when writing software to control real-world objects and phenomena you can't get away from knowing about the PID loop (Proportional, Integral, Derivative). It's how software thermostats control the temperature of industrial ovens. It's how quadcopters can hover without becoming unstable. It's how Segways balance themselves.
The theory behind PID is more than a hundred years old. It was developed to govern steam engines. But it is so useful and important that we generally still depend on it in electronics.
There's a lot of math-heavy theory out there about PIDs. There are also a lot of less complicated rule-of-thumb guides about PIDs aimed at technicians and mechanics. I suggest reading the simpler less theory-heavy guides first then work your way up if you need to know something.
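To give a feel for how small the core of a PID loop is, here is a minimal sketch; the gains and the toy plant model are arbitrary illustrative numbers, not tuned values.
    #include <cstdio>

    // Minimal PID controller sketch for a thermostat (illustrative, untuned gains).
    struct PID {
        double kp, ki, kd;    // proportional, integral and derivative gains
        double integral;      // accumulated error
        double prev_error;    // error from the previous step
        // setpoint/measured in deg C, dt in seconds; returns heater command in 0..1
        double update(double setpoint, double measured, double dt) {
            double error = setpoint - measured;
            integral += error * dt;
            double derivative = (error - prev_error) / dt;
            prev_error = error;
            double out = kp * error + ki * integral + kd * derivative;
            if (out < 0.0) out = 0.0;  // a heater cannot cool
            if (out > 1.0) out = 1.0;  // full power
            return out;
        }
    };

    int main() {
        PID pid{0.5, 0.01, 0.1, 0.0, 0.0};
        double temperature = 18.0;
        for (int step = 0; step < 60; ++step) {
            double power = pid.update(21.0, temperature, 1.0);
            temperature += 0.05 * power - 0.01 * (temperature - 5.0);  // toy plant
            std::printf("step %2d  T=%.2f  power=%.2f\n", step, temperature, power);
        }
    }
Real implementations also guard against integral wind-up and noisy derivative terms, which is exactly the sort of detail the rule-of-thumb guides cover.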

When can an application be a game?

Sometimes, game-like features in an application can make work fun. For example, Stack Overflow uses badges and points to coerce its users into doing work.
What game-like features are transferable to applications?
What kinds of applications are appropriate for game-like features?
Why are game-like features uncommon in applications?
I think the main issue is that most applications are used for a purpose. They don't need to incentivise the user by making things more "fun", and it's generally a distraction. Imagine what would happen if Visual Studio (or whatever your favorite IDE is) gave you badges... Just as here, many people would concentrate on acquiring those badges instead of writing good code.
Another thing is that, at least in the case of badges/achievements, they're fairly meaningless for offline applications.
Games are really educational applications. True, what they generally teach is how to play the game, but they're still educational.
By the time you finish a typical game, you're an expert in a dozen different mechanics, know how to handle complex scenarios, and can recognize multiple different foes and their patterns.
While game mechanics themselves ("jump!") may not be applicable to typical applications, a look at how games approach teaching certainly could be.
One of the places where you can see this principle being effectively applied is for applications that use people to generate or index content. In these cases, the game-like aspects are a way to encourage self-moderation. For example, on SO, the rep and badges aim to encourage constructive behaviour like higher-quality answers, peer review etc. Similar systems exist on many generic forums, as well as sites like boardgamegeek and wikipedia.
I could imagine this kind of thing working well for things like community/company wikis, software documentation, or adherence to coding standards or test coverage. The problem, as ever, is to stop the game becoming the main focus. For example, if you could get rep for tidying up your intranet wiki, I can guarantee there would be some people who would do that all day, when their main job was something quite different!
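One cheap defence, sketched below with invented names and numbers purely as an illustration, is to cap how much reward any single activity can earn per day, so the game always has diminishing returns compared with the actual job.
    #include <algorithm>
    #include <cstdio>
    #include <map>
    #include <string>

    // Illustration only: award reputation for an activity, but cap the daily total
    // per activity so grinding one activity stops paying off.
    class Reputation {
    public:
        int award(const std::string& activity, int points, int daily_cap = 50) {
            int& earned_today = earned_[activity];
            int granted = std::max(0, std::min(points, daily_cap - earned_today));
            earned_today += granted;
            total_ += granted;
            return granted;
        }
        int total() const { return total_; }
        void new_day() { earned_.clear(); }  // caps reset each day
    private:
        std::map<std::string, int> earned_;  // points earned today, per activity
        int total_ = 0;
    };

    int main() {
        Reputation rep;
        rep.award("wiki-edit", 30);
        rep.award("wiki-edit", 40);  // only 20 more granted: the daily cap kicks in
        rep.award("answer", 15);
        std::printf("total rep: %d\n", rep.total());  // 65
    }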
Flashing lights and other shiny stuff. Good games are loaded with colour and give the same pleasant stimulation as watching fireworks.
It's definitely true that 'game' features might distract from and detract from the effectiveness of a lot of applications.
The idea of adding game features to a product is to impose some sort of economy on productivity: a reason for working. For example, the badges here are kinda neat, but what really drives people to do well on SO is the reputation. It enables them to make a larger difference and have more impact, and it also ties them to a feeling of responsibility for the site. I think SO really strikes a good balance here.
Game features in other apps can be insulting, though. Imagine this:
> gcc -c main.c -o main.o
Compiling... while you're waiting, what's your favorite color?
Edit
The question you might want to answer very specifically is "What behaviour are you rewarding, why are you rewarding it, and what is the reward?" If all of those have to do with productivity and nothing to do with some orthogonal happiness (i.e. social standing), I'm not sure it's going to work.
End Edit
On a completely different note, you must watch this talk on "Human Computation". Wow.
http://video.google.com/videoplay?docid=-8246463980976635143
It talks about using games to categorize images for Google. A little off topic, but you might appreciate it.
Nowadays, games are synonymous with community.
Most line of business applications don't include a wide variety of multiplayer or community aspects to them.
Brilliant question Evan! And now for my definitive answer:
I think that any work can become fun if you break it into attainable challenges. An application becomes a game when it provides these challenges, explains them, and gauges success or failure.
The difficulties in building challenges into applications are...
The challenges must align with the work, so that effort spent surmounting the challenge is also progress towards the user's goals. Otherwise the challenge is only a distraction. Application users have few common goals, so a predetermined set of challenges cannot be very useful.
The typical goal of most work done in an application is to impress a human being through creativity and ingenuity. This cannot be gauged very well in software.
For these reasons, building specific challenges into an application has very limited value. Social games may be an exception because other users partially define challenges and gauge progress appropriately on a case-by-case basis.
Doom as an interface for process management, anyone?
http://www.cs.unm.edu/~dlchao/flake/doom/chi/chi.html

Besides "treat warnings as errors" and fixing memory leaks, what other ideas should we implement as part of our coding standards?

First let me say, I am not a coder but I help manage a coding team. No one on the team has more than about 5 years' experience, and most of them have only worked for this company. So we are flying a bit blind, hence the question.
We are trying to make our software more stable and are looking to implement some "best practices" and coding standards. Recently we started taking this very seriously as we determined that much of the instability in our product could be linked back to the fact that we allowed Warnings to go through without fixing when compiling. We also never bothered to take memory leaks seriously enough.
In reading through this site we are now quickly fixing this problem with our team, but it raises the question: what other practices can we implement team-wide that will help us?
Edit: We do fairly complex 2D/3D Graphics Software that is cross-platform Mac/Windows in C++.
Typically, the level of precision/exactingness in coding standards/process is directly connected to the safety level required. E.g., if you are working in aerospace, you will tightly control pretty much everything. But, on the other end of the spectrum, if you are working on a computer gaming forum site...if something breaks, no biggie. You can have slop. So YMMV, depending on your field.
The classic book on coding is Code Complete, 2nd edition, by Steve McConnell. Have a team copy and strongly recommend your developers purchase it (or have the company get it for them). That will satisfy probably 70% of the stylistic questions. CC addresses the majority of development cases.
edit:
Graphics software, C++, Mac/Windows.
Since you're doing cross-platform work, I would recommend having an automated "compile-on-checkin" process for your Mac (10.4 (maybe), 10.5, 10.6) and Windows (XP (maybe), Vista, 7). This ensures your software at the least compiles, and you know when it doesn't.
Your source control (which you are using, I assume) should support branching, and your branching strategy can reflect cross-platformy-ness as well. It's also advantageous to have mainline branches, dev branches, and experimental branches. YMMV; you will probably need to iterate on that and consult with people who are familiar with configuration management.
Since it's C++, you will probably want to be running Valgrind or similar to know if there is a memory leak. There are some static analyzers you can get; I don't know how effective they are with the modern C++ idiom. You can also invest in writing some wrappers to help watch memory allocations.
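As a sketch of what such a wrapper might look like (replacing the global operator new/delete is a blunt instrument, and a real tracker would also record sizes and call sites and be thread-safe, so treat this as an illustration only):
    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Count live allocations; anything left over at exit is a leak candidate.
    static std::size_t g_live_allocations = 0;

    void* operator new(std::size_t size) {
        ++g_live_allocations;
        if (void* p = std::malloc(size)) return p;
        throw std::bad_alloc();
    }

    void operator delete(void* p) noexcept {
        if (p) {
            --g_live_allocations;
            std::free(p);
        }
    }

    int main() {
        int* leaked = new int(42);  // never deleted, on purpose
        (void)leaked;
        std::printf("live allocations at exit: %zu\n", g_live_allocations);
        // Valgrind's memcheck would report the same leak, with a stack trace.
    }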
Regarding C++... The books Effective C++, More Effective C++, and Effective STL (all by Scott Meyers) should be on someone's shelf, as well as Modern C++ Design by Andrei Alexandrescu. You may find Lippman's book on the C++ object model useful as well, I don't know.
HTH.
There are a lot of consultants/companies who have coding rules to sell you, you should have no difficulty finding one. However, one that doesn't first ask you the field you are in (you didn't mention it in your question) is providing you with snake oil.
Test-Driven Development. TDD helps check for logic errors at the development phase.
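A minimal sketch of the rhythm, using plain assert instead of a real test framework and an invented roundUp helper purely for illustration: the tests are written first, watched to fail, and only then is the function written to make them pass.
    #include <cassert>

    // Hypothetical helper under test: round n up to the nearest multiple of step.
    int roundUp(int n, int step) {
        int remainder = n % step;
        return remainder == 0 ? n : n + (step - remainder);
    }

    int main() {
        // In TDD these assertions exist before roundUp does; they define the behaviour.
        assert(roundUp(0, 5) == 0);
        assert(roundUp(3, 5) == 5);
        assert(roundUp(10, 5) == 10);
        assert(roundUp(11, 5) == 15);
        return 0;
    }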
Get everyone to read and discuss various standards and guidelines. I (as well as Stroustrup) suggest the Joint Strike Fighter coding standards. Ask your developers to classify the guidelines therein among
Already met
Could be met easily (few changes from current condition)
Should work toward in old code and follow in new development
Not worth it
Have the long technical discussions, and settle on a set for the team to adopt.
Code reviews have been shown to provide significant benefits to code quality, even more so than traditional testing. I would suggest getting in the habit of performing routine design and code reviews; the number of stages at which reviews are performed, the formality and detail of the reviews, and the percentage of work subject to review can all be set according to your business requirements. Coding standards can be useful when done right (and if everyone's code looks similar, it is also easier to review), but where you put your braces and how far you indent blocks isn't really going to affect defect rates.
Also, it's worth familiarizing yourself and your peers with the concept of technical debt and working bit by bit to redesign and improve parts of the system as you come in contact with them. However, unless you have comprehensive unit testing and/or processes in place to ensure high code quality, this may not help things.
Given that this is Stack Overflow, someone should reference The Joel Test. I like to automate as much as possible, so using Lint is also a must.
These basics are good for most any industry or team size:
Use Agile methodology (scrum is a good example).
http://www3.software.ibm.com/ibmdl/pub/software/rational/web/whitepapers/2003/rup_bestpractices.pdf
Use Test-driven development. http://www.agiledata.org/essays/tdd.html
Use consistent coding standards. Here is an example document:
http://www.dotnetspider.com/tutorials/BestPractices.aspx
Get your team familiar with good design patterns.
http://www.dofactory.com/Patterns/Patterns.aspx
You can't go wrong with these basics. Build from there with new team members who have been there and done that. I'd strongly suggest pair programming once you've got those guys on the team. It is the best way to infect people with best practices.
Best of luck to you!
The first thing you need to consider when adding coding standards/best practices is the effect it will have on your team's morale and cohesiveness. Developers usually resent any practices that are imposed on them even if they are good ideas. The people issues have to be addressed for a big change to be successful.
You will need to involve your group in developing the standards and try to achieve consensus. That said, you will never get universal agreement on anything, so you will have to balance consensus and getting to standards. I've seen major fights over something as simple as tabs versus spaces in source.
The best book I've seen for C/C++ guidelines in complicated projects is Large Scale C++ Software Design. That book along with Code Complete (which is a must-read classic) are good starting points.
You don't mention any language, and while it is true that most coding standards are language-independent, knowing the language will also help you in your search. At most of the companies I have worked for, there were different coding standards for different programming languages. So my advice would be:
Choose your language
Search the web since there are plenty of standards out there for your language
Gather all the standards you found
Divide your team into groups and give each group a few of the documents to analyze. They should come up with a list of things they think are worth having in their new standards.
Have a meeting where each group presents its findings to everybody (there will be a lot of redundancy between groups). That should be an open discussion, and everybody's opinion should be taken into account.
Compile a list of the standards that were selected by the majority of the coders; that should be your starting point.
Perform semi-annual reviews of the standards, to add or remove things.
Now, the logic behind this is: most of the problems with introducing a coding standard from scratch come down to developer acceptance. Each of us has a way of doing things, and it sucks when somebody from the outside decrees that one way of doing things is better than another. So, if developers understand the logic and the purpose of the coding standards, then you have half of the work done. The other thing is that standards should be designed and created specifically for your company's needs. There will be some things that make sense and some that don't. With the above approach you can discriminate between those. Finally, standards should be able to change over time to reflect the company's needs, so a coding standard should be a living document.
This blog post describes a lot of the common practices of mediocre programming. These are some of the potential issues your team is having. It includes a quick explanation of the "best practice" for each one.
One thing you should have rules about is some kind of naming standard. It just makes life easier for people while not being really invasive.
Other than that, I'd have to say it depends on the level of your team. Some need more rules than others. The better people are, the less "support" they need from rules.
If you want a complete set of coding rules to control every little detail, you're going to spend lots of time arguing about rules and exceptions to rules and what you should write rules about. I'd go with something already written instead.
If you are concerned about quality then one thing you could do that really isn't about rules, is:
Automated building and testing. This has helped me a lot. Once you find a problem, it really helps to have an environment where you can write a test to verify the problem. Fix the problem and then easily add your test to an automatic test suite that makes sure that sort of problem can't come back without being spotted.
Then make sure these run often. Preferably every time someone checks something in.
If your framework requires certain rules to function well, put those in your coding standard.
If you decide to have coding standards, you want to be very careful about what you put in. If the document is too long or focuses on arbitrary stylistic details, it will just get ignored and nobody will bother to read it. Often a lot of what goes into coding standards is just the preferences of the person that wrote the document (or some standards that have been copied off the web!). If something is in the standard, it needs to be very clear to the reader how it improves quality and why it is important.
I would argue that a large proportion of what makes code readable is to do with design rather than the layout of the code. I have seen a lot of code that would adhere to the standards but still be difficult to read (really long methods, bad naming, etc.). You can't have everything in the standards; at some point it comes down to how skilled and disciplined your developers are, so do what you can to increase their skills.
Perhaps rather than a coding standards document, try to get the team to learn about good design (easier said than done, I know). Make them aware of things like the SOLID principles, how to separate concerns, how to handle exceptions properly. If they design well, the code will be easy to read and it won't matter if there are enough white lines or the curly braces are in the right place.
Get some books about design principles (see a couple of recommendations below). Maybe get the team to do some workshops to discuss some of the topics. Perhaps get them to collectively write a document on what principles might be important for their project. Whatever you do, make sure it is the team as a whole who decides what the standards / principles are.
http://www.amazon.co.uk/Principles-Patterns-Practices-Robert-Martin/dp/0131857258/
http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882
Don't write your own standards from scratch.
Chances are there are several out there that define what you want already, and they are more complete than you could come up with on your own. That said, don't worry too much if you don't agree 100% with one on minor matters; you can swap in parts of others, or call some infraction of it a warning rather than an error, depending on your own needs. (For example, some standards throw a warning if a line is more than 80 characters long; I prefer no more than 120 as a hard limit, but would make sure there was a good reason, readability and clarity for example, for going over 80.)
Also, do try to find automated methods of checking your code against the standard - including your own minor changes as required.
Besides books already recommended, I would also mention,
C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Herb Sutter and Andrei Alexandrescu (Paperback - Nov 4, 2004)
If you're programming in VB.NET, make sure Option Explicit and Option Strict are set to ON. This will save you a lot of grief tracking down mysterious bugs. These can be set at the project level so that you never have to remember to set them in your code files.
I really like:
MISRA C standard (it's a little strict tho' but the ideas hold for C++)
and the High Integrity C++ standard (http://www.codingstandard.com/HICPPCM/index.html), which borrows heavily from MISRA
LDRA (a static analysis tool) uses these standards to grade your work (this I don't use as it's expensive) but I can vouch for running cppcheck as a good 'free/libre' static analysis checker.

What metrics would be usable to determine expertise level in a particular programming language

I am interested in the raw (or composite) metrics used to get a handle on how well a person can program in a particular language.
Scenario: George knows a few programming languages and wants to learn "foobar", but he would like to know when he has a reasonable amount of experience in "foobar".
I am really interested in something broader than just the LOC (lines of code) metric.
My hope for this question is to understand how engineers quantify the programming language experiences of others and if this can be mechanically measured.
Thanks in Advance!
In reply to the previous two posters, I'd guess that there is a way to get a handle on how well a person can program in a particular language: you can test how well someone knows English, or Maths, or Music, or Medicine, or Fine Art, so what's so special about a programming language?
In reply to the OP, I guess the tests must assess:
How well you can program
How well you can use the programming language
Therefore the metrics might be:
What's the goodness of the person's programming (and there are various dimensions of goodness such as bug-free, maintainable, quick/cheap to write, runs quickly, meets user requirements, etc.)?
Does the person use appropriate/idiomatic features of the programming language in question in order to do that good programming?
It would be difficult to make the test 'mechanical', though: most exams that I know of are graded by a human examiner. In the case of programming, part of the test could be graded mechanically (i.e. "does it run?") but part of it ("is it understandable and idiomatic?") is intended to benefit, and is better judged by, other human programmers.
The best indicator of your expertise in a particular language, in my opinion, is how productive you are in it.
Productivity is not just how fast you can work but, importantly, how few bugs you create and how little refactoring/rework is required later on.
For example, if you took two languages you have similar level of experience with, and were (in parallel universes) to build the same system with both, I would say the language you build the system with faster and with fewer defects/design flaws, is the language you have more expertise in.
Sorry it's not a "hard" metric for you, it's a more practical approach.
I don't believe that this can be "mechanically measured". I've thought about this a lot though.
Hang on...
Even the "LOC" of a program is a heavily disputed topic!
(Are we talking about the output of cat *.{h,c} | wc -l or some other mechanism, for instance? What about blank lines? Comments? Are comments important? Is good code documented?)
Until you've realised how pointless a LOC comparison is, you've no hope of realising how pointless other metrics are.
It's a rather qualitative thing that is rarely measured with any great accuracy. It's like asking "how smart was Einstein?". Certification is one (and a reasonably thorough) quantitative indicator, but even it falls drastically short of identifying "good programmers" as many recruiters discover.
What are you ultimately trying to achieve? General programming aptitude can be more important than language expertise in some situations.
If you are language-focussed, taking on a challenge like Project Euler using that language may be a way to track progress.
How proficient they are in debugging complex problems in that language.
Ask them about projects they have worked on in the past, difficult problems they encountered and how they solved them. Ask them about debugging techniques they have used - you'll be surprised at what you'll hear, and you might even learn something new ;-)
A lot of places have a person or two who is a superstar in their field - the person everyone else goes to when they can't figure out what is wrong with their program. I'm guessing that's the person you are looking for :-)
Facility with a programming language is not enough. What is required is facility with a programming language in the context of a particular suite of libraries on a particular platform:
C++ on winapi on Windows 32bit
C++ on KDE on Linux
C++ on Symbian on a Nokia S60 phone
C# on MS .NET on Windows
C# on Mono on Linux
Within such a context, the measures of competence using the target language on the target platform are as follows:
The ability to express common patterns succinctly and robustly.
The ability to debug common but subtle bugs like race conditions.
It would be possible to develop a suite of benchmark exercises for a programmer. One might also, once significant samples were available, determine the bell curve for ability. Preparing these things would take literally years, and they would rapidly become obsolete. This (and general tightness) is why organisations don't bother.
It would also be necessary to grade people in both "tool maker" and "tool user" modes. Tool makers are very different people with a much higher level of competence but they are often unsuited to monkey work, for which you really want a tool user.
John
There are a couple of ways to approach your question:
1) If you are interviewing candidates for a particular position requiring a particular language, then the only measure to compare candidates is 'how long has this person been writing in this language.' It's not perfect - it's not even very good - but it's reality. Unless you want to give the candidate a problem, a computer, and a compiler to test them on the spot there's no other measure. And then most programmer-types don't do well in "someone's watching you" scenarios.
2) I interpret your question to be more of 'when can I call MYSELF proficient in a language?' For this I would refer to the levels of learning a non-native language: the first level is that you need to look up words/phrases in a dictionary (book) in order to say or understand anything; the second level is that you can understand the language when you hear it (or read code) with only the occasional lookup in your trusted and now well-worn dictionary; at the third level you can speak (or write code) with only the occasional lookup; the fourth level is where you dream in the language; and the final level is where you fool native speakers into thinking that you're a native speaker too (in programming, other experts would think that you may have helped develop the language syntax).
Note that this doesn't help determine how good of a programmer you are - just like knowing English without having to look up words in the dictionary doesn't show "how gooder you is at writin' stuff" - that's subjective and has nothing to do with a particular language as people that are good at programming are good in any language you give them.
The phrase "a reasonable amount of experience" is dependent upon the language being considered and what that language can be used for.
A metric is the result of a measurement. Stevens (see Wikipedia: Level of Measurement) proposed that measurements use four different scale types: nominal (assigning a label), ordinal (assigning a ranking), interval (equal intervals but an arbitrary zero) and ratio (having a non-arbitrary zero starting point). LOC is a ratio measurement. Although far from perfect, I think LOC is a relevant, objective number indicating how much experience you have in a language and can be compared to quantifiable values in the software industry. But this raises the question: where do these industry values come from?
Personally, I would say that "George" will know that he has a reasonable amount of experience when he has designed, implemented and tested a project, maybe of his own choosing on his personal time on his home computer if need be. For example: database, business application, web page, GUI test tool, etc.
From the hiring manager's point of view, I would start off by asking the programmer how good s/he is in the language, but this is not a metric. I have always thought that the best way to measure a person's ability to write programs is to give the programmer several small programming problems that are thought out in advance and solved in a given amount of time, say, 5 minutes each. I have never objected to this being done to me in job interviews. Several metrics are available: Was the programmer able to solve the problem (yes or no - nominal)? How much time did it take (number of minutes - ratio)? How effective was their approach to solving the problem (good, fair, poor - ordinal)? You learn not only the person's ability to write code, but can observe several subjective things as well, such as their behaviour as they go about solving the problem, the questions they ask while solving it, their ability to work under pressure, etc. From a "quality" perspective, though, remember that people do not like being measured.
Still, I believe there are some good metrics, like McCabe's cyclomatic complexity, the amount of useful commentary per block of code, or even the average amount of code written between two consecutive tests.
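For a feel of how the McCabe number is counted, here is a small made-up function; one common convention counts each branch point (if, loop, && and ||) and adds one.
    // Decision points: if (1) + || (1) + for (1) + if (1) = 4,
    // so cyclomatic complexity = 4 + 1 = 5.
    int countAboveThreshold(const int* values, int n, int threshold) {
        if (values == nullptr || n <= 0)  // if + ||
            return -1;
        int above = 0;
        for (int i = 0; i < n; ++i) {     // for
            if (values[i] > threshold)    // if
                ++above;
        }
        return above;
    }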
I know of no such thing. I don't believe there's consensus on how to quantify experience or what "reasonable" means. Maybe I'll learn something too, but if I do it'll be a great surprise.
This may be pertinent.
I find that testing the ability to debug is a more accurate gauge of programming skill than any test aimed at straightforward programming problems that I have encountered. Given the source for a reasonably sized class or function with a stated (or unstated, in some cases) misbehavior, can the testee locate the problem?
Well, they try that in job interviews. There's no metric, but you can assess a person's abilities through questioning and quizzing.
WTF/s * LOC, smaller is best.
there are none; expertise can only be judged subjectively relative to others, or tested on specifics (which has its own level of inaccuracy)
see what is the fascination with code metrics for more information

Why is good UI design so hard for some Developers? [closed]

Some of us just have a hard time with the softer aspects of UI design (myself especially). Are "back-end coders" doomed to only design business logic and data layers? Is there something we can do to retrain our brain to be more effective at designing pleasing and useful presentation layers?
Colleagues have recommended a few books to me, including The Design of Sites, Don't Make Me Think, and Why Software Sucks, but I am wondering what others have done to remove their deficiencies in this area.
Let me say it directly:
Improving on this does not begin with guidelines. It begins with reframing how you think about software.
Most hardcore developers have practically zero empathy with users of their software. They have no clue how users think, how users build models of software they use and how they use a computer in general.
It is the typical problem when an expert collides with a layman: how on earth could a normal person be so dumb as not to understand what the expert understood 10 years ago?
One of the first facts to acknowledge that is unbelievably difficult to grasp for almost all experienced developers is this:
Normal people have a vastly different concept of software than you have. They have no clue whatsoever of programming. None. Zero. And they don't even care. They don't even think they have to care. If you force them to, they will delete your program.
Now that's unbelievably harsh for a developer. He is proud of the software he produces. He loves every single feature. He can tell you exactly how the code behind it works. Maybe he even invented an unbelievably clever algorithm that made it work 50% faster than before.
And the user doesn't care.
What an idiot.
Many developers can't stand working with normal users. They get depressed by their non-existing knowledge of technology. And that's why most developers shy away and think users must be idiots.
They are not.
If a software developer buys a car, he expects it to run smoothly. He usually does not care about tire pressures, the mechanical fine-tuning that was important to make it run that way. Here he is not the expert. And if he buys a car that does not have the fine-tuning, he gives it back and buys one that does what he wants.
Many software developers like movies. Well-done movies that spark their imagination. But they are not experts in producing movies, in producing visual effects or in writing good movie scripts. Most nerds are very, very, very bad at acting because it is all about displaying complex emotions and little about analytics. If a developer watches a bad film, he just notices that it is bad as a whole. Nerds have even built up IMDB to collect information about good and bad movies so they know which ones to watch and which to avoid. But they are not experts in creating movies. If a movie is bad, they'll not go to the movies (or not download it from BitTorrent ;)
So it boils down to: Shunning normal users as an expert is ignorance. Because in those areas (and there are so many) where they are not experts, they expect the experts of other areas to have already thought about normal people who use their products or services.
What can you do to remedy it? The more hardcore you are as a programmer, the less open you will be to normal user thinking. It will seem alien and baffling to you. You will think: I can't imagine how people could ever use a computer with this lack of knowledge. But they can. For every UI element, think about: Is it necessary? Does it fit the concept a user has of my tool? How can I make them understand? Please read up on usability for this; there are many good books. It's a whole area of science, too.
Ah and before you say it, yes, I'm an Apple fan ;)
UI design is hard
To the question:
why is UI design so hard for most developers?
Try asking the inverse question:
why is programming so hard for most UI designers?
Coding a UI and designing a UI require different skills and a different mindset. UI design is hard for most developers, not some developers, just as writing code is hard for most designers, not some designers.
Coding is hard. Design is hard too. Few people do both well. Good UI designers rarely write code. They may not even know how, yet they are still good designers. So why do good developers feel responsible for UI design?
Knowing more about UI design will make you a better developer, but that doesn't mean you should be responsible for UI design. The reverse is true for designers: knowing how to write code will make them better designers, but that doesn't mean they should be responsible for coding the UI.
How to get better at UI design
For developers wanting to get better at UI design I have 3 basic pieces of advice:
Recognize design as a separate skill. Coding and design are separate but related. UI design is not a subset of coding. It requires a different mindset, knowledge base, and skill group. There are people out there who focus on UI design.
Learn about design. At least a little bit. Try to learn a few of the design concepts and techniques from the long list below. If you are more ambitious, read some books, attend a conference, take a class, get a degree. There are lots of ways to learn about design. Joel Spolsky's book on UI design is a good primer for developers, but there's a lot more to it, and that's where designers come into the picture.
Work with designers. Good designers, if you can. People who do this work go by various titles. Today, the most common titles are User Experience Designer (UXD), Information Architect (IA), Interaction Designer(ID), and Usability Engineer. They think about design as much as you think about code. You can learn a lot from them, and they from you. Work with them however you can. Find people with these skills in your company. Maybe you need to hire someone. Or go to some conferences, attend webinars, and spend time in the UXD/IA/ID world.
Here are some specific things you can learn. Don't try to learn everything. If you knew everything below, you could call yourself an interaction designer or an information architect. Start with things near the top of the list. Focus on specific concepts and skills. Then move down and branch out. If you really like this stuff, consider it as a career path. Many developers move into management, but UX design is another option.
Learn fundamental design concepts. You should know about affordances, visibility, feedback, mappings, Fitts's law, poka-yokes, and more. I recommend reading The Design of Everyday Things (Don Norman) and Universal Principles of Design (Lidwell, Holden, & Butler).
Learn about user experience. This is becoming the umbrella term for the human-centered design of web sites, applications, and any other digital artifact. The classic primer here is The elements of User Experience (Jesse James Garrett). You can get an overview and the first few chapters from the author's site.
Learn to sketch designs. Sketching is a fast way to explore design options and find the right design, whereas usability testing is about getting the design right. Paper prototyping is fast, cheap, and effective during the early design stages. Much faster than coding a digital prototype. The key text here is Sketching User Experiences: Getting the Design Right and the Right Design (Bill Buxton). Sketching is a particularly useful skill when working with IA/ID/UX designers; your collaboration will be more effective. For a good primer on how and why designers sketch, watch the presentation How to be a UX team of one by Leah Buley from the 2008 IA Summit.
Learn paper prototyping. The fastest way to iteratively test an interface before you write code. Different from sketching and usability testing. The definitive book here is Paper Prototyping (Carolyn Snyder). You can get a good DVD on this from the Nielsen Norman Group.
Learn usability testing. Discount testing is easy and effective. But for many UIs, usability is hard to do well. You can learn the basics quickly, but good usability people are invaluable. If you want a book, the classic is The Handbook of Usability Testing (Jeffrey Rubin). It's older but offers thorough coverage of lab-based testing. The famous starter book is Don't Make Me Think (2nd Ed) (Steve Krug). I caution people about this one: Krug makes it sound easier than it is. But it is a good starting point. The user research books listed in the next point also cover this topic. And you can find piles about it online.
Learn about information architecture. The main book here is Information Architecture for the World Wide Web (3rd) (Louis Rosenfeld & Peter Morville). A good starter book is Information Architecture: Blueprints for the Web (Christina Wodtke). For more, visit the Information Architecture Institute or attend the annual Information Architecture Summit.
Learn about interaction design. The main book here is The Essentials of Interaction Design (3rd) (Alan Cooper, et al). A good starter book is Designing for interaction (Dan Saffer). For more, visit the Interaction Design Association (IxDA) or attend the annual Interaction Design conference.
Learn fundamentals of graphic design. Graphic design is not UI design, but concepts from graphic design can improve an interface. Graphic design introduces design principles for the visual presentation of information, such as proximity, alignment, and small multiples. I recommend reading The non-designer's design book (Robin Williams) and Envisioning Information (Edward Tufte)
Learn to do user research. Where usability testing evaluates an interface, user research tries to model users and their tasks through personas, scenarios, user journeys, and other documents. It's about understanding users and what they do, then using that to inform the design instead of guessing. Some techniques are interviews, surveys, diary studies, and card sorting. Good books on this are Observing the User Experience (Mike Kuniavsky) and Understanding Your Users (Courage & Baxter).
Learn to do field research. Watching people in the lab under artificial conditions helps (ie: usability), but there is nothing like watching people use your code in context: their home, their office, or wherever they use it. Goes by various names, including ethnography, field studies, and contextual inquiry. Here is a good primer on field research. Two of the better known books here are Rapid Contextual Design (Karen Holtzblatt et al) and User and task analysis for interface design (Hackos & Redish).
Read UX design web sites. Some of the big ones are Boxes & Arrows, UX Mag, UX Matters, and Digital Web magazine.
Use UI pattern libraries. There are patterns for interfaces. For web sites, I recommend The Design of Sites, 2nd ed (Van Duyne, et al) and Homepage usability: 50 websites deconstructed (Jakob Nielsen & Marie Tahir). For desktop applications I recommend Designing interfaces (Jennifer Tidwell), and for web applications I recommend Designing Web Interfaces: Principles and Patterns for Rich Interactions (Bill Scott & Theresa Neil). Online you should check Welie pattern library, UI patterns, and Web UI patterns.
Attend UX design conferences. Some good annual conferences are: Information Architecture Summit, Interaction '09 (IxDA), User Interface, and UX week.
Attend a workshop or webinar. You can take workshops, webinars, and online courses. This is far from a comprehensive list, but you might try the UIE virtual seminars, Adaptive Path virtual seminars, and UX webinars from Rosenfeld Media.
Get a degree. A graduate degree in HCI is one approach, but these programs are mostly about writing code. If you want to learn about the design of digital artifacts and devices, then you want a graduate program that's not in CS. Some options include Interaction Design at Carnegie Mellon, the d.school at Stanford, the ITP program at NYU, and Information Architecture & Knowledge Management at Kent State (disclosure: I'm on the faculty at Kent; we are seeing more and more people with CS degrees moving into UX design instead of management, which is interesting, because management is the traditional path for developers who want to move away from writing code while staying in their field). There are many more programs. Each has its own perspective, areas of emphasis, and technical expectations. Some come out of the arts and visual design, others out of library and information science, and some from CS. Most are hybrids, but every hybrid has deeper roots in one or more fields. If this interests you, look around and try to understand the differences between these programs. Some offer online courses and certificate programs in addition to full-fledged degrees.
Why UI design is hard
Good UI design is hard because it involves 2 vastly different skills:
A deep understanding of the machine. People in this group worry about code first, people second. They have deep technological knowledge and skill. We call them developers, programmers, engineers, and so forth.
A deep understanding of people and design: People in this group worry about people first, code second. They have deep knowledge of how people interact with information, computers, and the world around them. We call them user experience designers, information architects, interaction designers, usability engineers, and so forth.
This is the essential difference between these 2 groups—between developers and designers:
Developers make it work. They implement the functionality on your TiVo, your iPhone, your favorite website, etc. They make sure it actually does what it is supposed to do. Their highest priority is making it work.
Designers make people love it. They figure out how to interact with it, how it should look, and how it should feel. They design the experience of using the application, the web site, the device. Their highest priority is making you fall in love with what developers make. This is what is meant by user experience, and it's not the same as brand experience.
Moreover, programming and design require different mindsets, not just different knowledge and different skills. Good UI design requires both mindsets, both knowledge bases, both skill groups. And it takes years to master either one.
Developers should expect to find UI design hard, just as UI designers should expect to find writing code hard.
What really helps me improve my design is to grab a fellow developer, one of the QA guys, the PM, or anyone who happens to walk by, and have them try out a particular widget or screen.
It's amazing what you will realize when you watch someone else use your software for the first time.
Ultimately, it's really about empathy -- can you put yourself in the shoes of your user?
One thing that helps, of course, is "eating your own dogfood" -- using your applications as a real user yourself, and seeing what's annoying.
Another good idea is to find a way to watch a real user using your application, which may be as complicated as a usability lab with one-way mirrors, screen video capture, video cameras on the users, etc., or can be as simple as paper prototyping using the next person who happens to walk down the hall.
If all else fails, remember that it's almost always better for the UI to be too simple than too complicated. It's very very easy to say "oh, I know how to solve that, I'll just add a checkbox so the user can decide which mode they prefer". Soon your UI is too complicated. Pick a default mode and make the preference setting an advanced configuration option. Or just leave it out.
If you read a lot about design you can easily get hung up on dropped shadows and rounded corners and so forth. That's not the important stuff. Simplicity and discoverability are the important stuff.
Contrary to popular myth there are literally no soft aspects in UI design, at least no more than needed to design a good back end.
Consider the following; good back end design is based upon fairly solid principles and elements any good developer is familiar with:
low coupling
high cohesion
architectural patterns
industry best practices
etc
Good back end design is usually born through a number of iterations, where, based on the measurable feedback obtained during tests or actual use, the initial blueprint is gradually improved. Sometimes you need to prototype smaller aspects of the back end and trial them in isolation, etc.
Good UI design is based on the sound principles of:
visibility
affordance
feedback
tolerance
simplicity
consistency
structure
UI is also born through test and trial, through iterations, but not with a compiler + automated test suite: with people. Similarly to the back end, there are industry best practices, measurement and evaluation techniques, and ways to think of UI and set goals in terms of the user model, system image, designer model, structural model, functional model, etc.
The skill set needed for designing UI is quite different from that for designing the back end, and hence don't expect to be able to do good UI without doing some learning first. However, what both these activities have in common is the process of design. I believe that anyone who can design good software is capable of designing good UI, as long as they spend some time learning how.
I recommend taking a course in Human-Computer Interaction; check the MIT and Yale sites, for example, for online materials:
MIT User Interface Design and Implementation Course
Structural vs Functional Model in Understanding and Usage
The excellent earlier post by Thorsten79 brings up the topic of software development experts vs users and how their understanding of software differs. Human learning experts distinguish between functional and structural mental models. Finding your way to a friend's house can be an excellent example of the difference between the two:
The first approach is a set of detailed instructions: take the first exit off the motorway, then after 100 yards turn left, etc. This is an example of a functional model: a list of concrete steps necessary to achieve a certain goal. Functional models are easy to use; they do not require much thinking, just straightforward execution. Obviously there is a penalty for the simplicity: it might not be the most efficient route, and any exceptional situation (e.g. a traffic diversion) can easily lead to a complete failure.
A different way to cope with the task is to build a structural mental model. In our example that would be a map, which conveys a lot of information about the internal structure of the "task object". From understanding the map and the relative locations of our house and our friend's house, we can deduce the functional model (the route). Obviously it requires more effort, but it is a much more reliable way of completing the task in spite of possible deviations.
The choice between conveying a functional or structural model through the UI (for example, wizard vs advanced mode) is not as straightforward as it might seem from Thorsten79's post. Advanced and frequent users might well prefer the structural model, whereas occasional or less experienced users may prefer the functional one.
Google maps is a great example: they include both functional and structural model, so do many sat navs.
Another dimension of the problem is that the structural model presented through the UI should not map to the structure of the software, but rather map naturally to the structure of the user task at hand or the task object involved.
The difficulty here is that many developers have a good structural model of their software internals, but only a functional model of the user task the software aims to assist with. To build a good UI one needs to understand the task/task-object structure and map the UI to that structure.
Anyway, I still can't recommend taking a formal HCI course strongly enough. There's a lot of material involved, such as heuristics, principles derived from Gestalt psychology, the ways humans learn, etc.
I suggest you start by doing all your UI in the same way as you are doing now, with no focus on usability and stuff.
(image: http://www.stricken.org/uploaded_images/WordToolbars-718376.jpg)
Now think of this:
A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.
— Saint-Exupéry
And apply this in your design.
A lot of developers think that because they can write code, they can do it all. Designing an interface is a completely different skill, and it was not taught at all when I attended college. It's not just something that just comes naturally.
Another good book is The Design of Everyday Things by Donald Norman.
There's a huge difference between design and aesthetics, and they are often confused.
A beautiful UI requires artistic or at least aesthetic skills that many, including myself, are incapable of producing. Unfortunately, it is not enough and does not make the UI usable, as we can see in many heavyweight flash-based APIs.
Producing usable UIs requires an understanding of how humans interact with computers, some issues in psychology (e.g., Fitts's law, Hick's law), and other topics. Very few CS programs train for this. Very few developers that I know will pick a user-testing book over a JUnit book, etc.
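As one concrete example of the psychology involved, Fitts's law predicts that the time to hit a target grows with its distance and shrinks with its width, which is why small, far-away controls feel slow. A rough sketch with made-up constants (a and b are device-specific and would normally be measured):
    #include <cmath>
    #include <cstdio>

    // Fitts's law (Shannon formulation): movement time = a + b * log2(D / W + 1),
    // where D is the distance to the target and W is its width.
    // The a and b values below are arbitrary illustrative constants.
    double fittsMovementTimeMs(double distance_px, double width_px,
                               double a_ms = 100.0, double b_ms = 150.0) {
        return a_ms + b_ms * std::log2(distance_px / width_px + 1.0);
    }

    int main() {
        std::printf("big, close button : %.0f ms\n", fittsMovementTimeMs(200.0, 100.0));
        std::printf("small, far button : %.0f ms\n", fittsMovementTimeMs(800.0, 20.0));
    }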
Many of us are also "core programmers", tending to think of UIs as the facade rather than as a factor that could make or break the success of our project.
In addition, most UI development experience is extremely frustrating. We can either use toy GUI builders like old VB and have to deal with ugly glue code, or we use APIs that frustrate us to no end, like trying to sort out layouts in Swing.
Go over to Slashdot, and read the comments on any article dealing with Apple. You'll find a large number of people talking about how Apple products are nothing special, and ascribing the success of the iPod and iPhone to people trying to be trendy or hip. They will typically go through feature lists, and point out that they do nothing earlier MP3 players or smart phones didn't do.
Then there are people who like the iPod and iPhone because they do what the users want simply and easily, without reference to manuals. The interfaces are about as intuitive as interfaces get: memorable and discoverable. I'm not as fond of the UI on Mac OS X as I was of earlier versions; I think they've given up some usefulness in favor of glitz. But the iPod and iPhone are examples of superb design.
If you are in the first camp, you don't think the way the average person does, and therefore you are likely to make bad user interfaces because you can't tell them from good ones. This doesn't mean you're hopeless, but rather that you have to explicitly learn good interface design principles, and how to recognize a good UI (much as somebody with Asperger's might need to learn social skills explicitly). Obviously, just having a sense of a good UI doesn't mean you can make one; my appreciation for literature, for example, doesn't seem to extend to the ability (currently) to write publishable stories.
So, try to develop a sense for good UI design. This extends to more than just software. Don Norman's "The Design of Everyday Things" is a classic, and there are other books out there. Get examples of successful UI designs, and play with them enough to get a feel for the difference. Recognize that you may have to learn a new way of thinking about things, and enjoy it.
The main rule of thumb I hold to is: never try to do both at once. If I'm working on back-end code, I'll finish up doing that, take a break, and return with my UI hat on. If you try to work it in whilst you're writing code, you'll approach it with the wrong mindset and end up with some horrible interfaces as a result.
I think it's definitely possible to be both a good back-end developer and a good UI designer; you just have to work at it, do some reading and research on the topic (everything from Miller's #7 to Nielsen's archives), and make sure you understand why UI design is of the utmost importance.
I don't think it's a case of needing to be creative but rather, like back-end development, it is a very methodical, very structured thing that needs to be learned. It's people getting 'creative' with UIs that creates some of the biggest usability monstrosities... I mean, take a look at 100% Flash websites, for a start...
Edit: Krug's book is really good... do take a read of it, especially if you're going to be designing for the Web.
There are many reasons for this.
(1) The developer fails to see things from the point of view of the user. This is the usual suspect: lack of empathy. But it is not usually the real cause, since developers are not as alien as people make them out to be.
(2) Another, more common reason is that the developer, being so close to his own work and having stayed with it for so long, fails to realize that it may not be so familiar (a term better than "intuitive") to other people.
(3) Still another reason is that the developer lacks techniques.
MY BIG CLAIM: read any UI, interaction design, or prototyping book, e.g. Designing the Obvious: A Common Sense Approach to Web Application Design; Don't Make Me Think: A Common Sense Approach to Web Usability; Designing the Moment; whatever.
How do they discuss task flows? How do they describe decision points? That is, in any use case, there are at least 3 paths: success, failure/exception, alternative.
Thus, from point A, you can go to A.1, A.2, A.3.
From point A.1, you can get to A.1.1, A.1.2, A.1.3, and so on.
How do they show such drill-down task flow?
They don't. They just gloss over it.
Since even UI experts don't have a technique for this, developers have no chance.
The developer thinks it is clear in his head. But it is not even clear on paper, let alone clear in the software implementation.
I have to use my own hand-made techniques for this.
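For what it's worth, one such hand-made technique is simply to write the drill-down out as a tree in which every decision point names its success, failure/exception, and alternative branches, so the flow is clear on paper before any UI exists. A minimal Python sketch (the step names are hypothetical):

    # Every decision point names where the user goes on success, on
    # failure/exception, and on the alternative path. Step names are hypothetical.
    task_flow = {
        "A":     {"success": "A.1", "failure": "A.2", "alternative": "A.3"},
        "A.1":   {"success": "A.1.1", "failure": "A.1.2", "alternative": "A.1.3"},
        "A.1.1": {},  # terminal steps
        "A.1.2": {},
        "A.1.3": {},
        "A.2":   {},
        "A.3":   {},
    }

    def print_flow(flow, step="A", indent=0):
        """Print the drill-down so the whole flow is visible at once."""
        print("  " * indent + step)
        for outcome, nxt in flow.get(step, {}).items():
            print("  " * (indent + 1) + "[" + outcome + "] ->")
            print_flow(flow, nxt, indent + 2)

    print_flow(task_flow)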
I try to keep in touch with design-specific websites and texts. I also found the excellent Robin Williams book The Non-Designer's Design Book very interesting for these studies.
I believe that design and usability are a very important part of software engineering, and we should learn more about them and stop making excuses that we are not supposed to do design.
Everyone can be a designer once in a while, just as everyone can be a programmer.
When approaching UI design, here are a few of the things I keep in mind throughout (by far not a complete list):
Communicating a model. The UI is a narrative that explains a mental model to the user. This model may be a business object, a set of relationships, what have you. The visual prominence, spatial placement, and workflow ordering all play a part in communicating this model to the user. For example, a certain kind of list vs another implies different things, as well as the relationship of what's in the list to the rest of the model. In general I find it best to make sure only one model is communicated at a time. Programmers frequently try to communicate more than one model, or parts of several, in the same UI space.
Consistency. Re-using popular UI metaphors helps a lot. Internal consistency is also very important.
Grouping of tasks. Users should not have to move the mouse all the way across the screen to verify or complete a related sequence of commands. Modal dialogs and flyout-menus can be especially bad in this area.
Knowing your audience. If your users will be doing the same activities over and over, they will quickly become power users at those tasks and be frustrated by attempts to lower the initial entry barrier. If your users do many different kinds of activities infrequently, it's best to ensure the UI holds their hand the whole time.
Read Apple Human Interface Guidelines.
I find the best tool in UI design is to watch a first-time User attempt to use the software. Take loads of notes and ask them some questions. Never direct them or attempt to explain how the software works. That is the job of the UI (and well-written documentation).
We consistently adopt this approach in all projects. It is always fascinating to watch a User deal with software in a manner that you never considered before.
Why is UI design so hard? Well, generally because the Developer and the User never meet.
duffymo just reminded me why: many programmers think "Design" == "Art".
Good UI design is absolutely not artistic. It follows solid principles that can be backed up with data, if you've got the time to do the research.
I think all programmers need to do is take the time to learn the principles. I think it's in our nature to apply best practice whenever we can, be it in code or in layout. All we need to do is make ourselves aware of what the best practices are for this aspect of our job.
What have I done to become better at UI design?
Pay attention to it!
It's like how every time you see a chart on the news or an electronic bus sign, you wonder, "How did they get that data? Did they do that with raw SQL, or are they using LINQ?" (or insert your own common geek curiosity here).
You need to start doing that but with visual elements of all kinds.
But just like learning a new language, if you don't really throw yourself into it, you won't ever learn it.
Taken from another answer I wrote:
Learn to look, really look, at the world around you. Why do I like that UI but hate this one? Why is it so hard to find the noodle dishes in this restaurant menu? Wow, I knew what that sign meant before I even read the words. Why was that? How come that book cover looks so wrong? Learn to take the time to think about why you react the way you do to visual elements of all kinds, and then apply this to your work.
However you do it (and there are some great points above), it really helped me once I accepted that there is NO SUCH THING AS INTUITIVE....
I can hear the arguments rumbling on the horizon... so let me explain a little.
Intuitive: using what one feels to be right or true based on an unconscious method or feeling.
If (as Carl Sagan postulated) you accept that you cannot comprehend things that are absolutely unlike anything you have ever encountered then how could you possibly "know" how to use something if you have never used anything remotely like it?
Think about it: kids try to open doors not because they "know" how a doorknob works, but because they have seen someone else do it... often they turn the knob in the wrong direction, or pull too soon. They have to LEARN how a doorknob works. This knowledge then gets applied in different but similar instances: opening a window, opening a drawer, opening almost anything big with a big, knob-looking handle.
Even simple things that seem intuitive to us will not be intuitive at all to people from other cultures. If someone held their arm out in front of them and waved their hand up and down at the wrist while keeping the arm still, are they waving you away? Probably, unless you are in Japan, where this hand signal can mean "come here". So who is right? Both, of course, in their own context. But if you travel to both places, you need to know both. That's UI design.
I try to find the things that are already "familiar" to the potential users of my project and then build the UI around them: user-centric design.
Take a look at Apple's iPhone. Even if you hate it, you have to respect the amount of thought that went into it. Is it perfect? Of course not. Over time an object's perceived "intuitiveness" can grow or even fade away completely.
For example, most everyone knows that a strip of black with two rows of holes along the top and bottom looks like a film strip... or do they?
Ask your average 9- or 10-year-old what they think it is. You may be surprised how many kids right now will have a hard time identifying it as a film strip, even though it is still used to represent Hollywood or anything film (movie) related. Most movies of the past 20 years have been shot digitally. And when was the last time any of us held a piece of film of ANY kind, photo or movie?
So, what it all boils down to for me is: know your audience, and constantly do research to keep up with trends and changes in what is "intuitive"; target your main users, and try not to punish the inexperienced in favor of advanced users, or slow down the advanced users in order to hand-hold the novices.
Ultimately, every program will require a certain amount of training on the user's part to use it. How much training and for which level of user is part of the decisions that need to be made.
Some things are more or less familiar based on your target user's past experience level as a human being, or computer user, or student, or whatever.
I just shoot for the fattest part of the bell curve and try to get as many people as I can, while realizing that I will never please everyone.
I know that Microsoft is rather inconsistent with their own guidelines, but I have found that reading their Windows design guidelines has really helped me. I have a copy on my website here; just scroll down a little to the Vista UX Guide. It has helped me with things such as colors, spacing, layouts, and more.
I believe the main problem has nothing to do with different talents or skillsets. The main problem is that as a developer, you know too much about what the application does and how it does it, and you automatically design your UI from the point of view of someone who has that knowledge.
Whereas a user typically starts out knowing absolutely nothing about the application and should never need to learn anything about its inner workings.
It is very hard, almost impossible, not to use knowledge that you have - and that's why a UI should not be designed by someone who's developing the app behind it.
"Designing from both sides of the screen" presents a very simple but profound reason as to why programmers find UI design hard: programmers are trained to think in terms of edge cases while UI designers are trained to think in terms of common cases or usage.
So going from one world to the other is certainly difficult if the default traning in either is the exact opposite of the other.
To say that programmers suck at UI design is to miss the point. The point is that the formal training most developers get goes in depth on the technology. Human-Computer Interaction is not a simple topic. It is not something that I can "mind-meld" to you by providing a simple one-line statement that makes you realize "oh, the users will use this application more effectively if I do x instead of y."
This is because there is one part of UI design that you are missing: the human brain. In order to understand how to design a UI, you have to understand how the human mind interacts with machinery. There is an excellent course I took at the University of Minnesota on this topic, taught by a professor of psychology, called "Human-Machine Interaction". It explains many of the reasons why UI design is so complicated.
Since psychology is based on correlations and not causality, you can never prove that a method of UI design will always work in any given situation. You can show that many users will find a particular UI design appealing or efficient, but you cannot prove that it will always generalize.
Additionally, there are two parts to UI design that many people seem to miss. There is the aesthetic appeal, and there is the functional workflow. If you go for 100% aesthetic appeal, sure, people will buy your product. I highly doubt that aesthetics alone will ever reduce user frustration, though.
There are several good books on this topic and courses to take (like Bill Buxton's Sketching User Experiences, and Cognition in the Wild by Edwin Hutchins). There are graduate programs on Human-Computer Interaction at many universities.
The overall answer to this question, though, lies in how individuals are taught computer science. It is all math-based and logic-based, not based on the user experience. To get that, you need more than a generic 4-year computer science degree (unless your 4-year computer science degree included a minor in psychology with an emphasis on Human-Computer Interaction).
Let's turn your question around -
Are "ui designers" doomed to only design information architecture and presentation layers? Is there something they can do to retrain their brains to be more effective at designing pleasing and efficient system layers?
It seems like those "ui designers" would have to take a completely different perspective - they'd have to look from the inside of the box outwards, instead of looking in from outside the box.
Alan Cooper's opinion in "The Inmates Are Running the Asylum" is that we can't successfully take both perspectives - we can learn to wear one hat well, but we can't just switch hats.
I think it's because a good UI is not logical. A good UI is intuitive.
Software developers typically do badly at "intuitive".
A useful framing is to actively consider what you're doing as designing a process of communication. In a very real sense, your interface is a language that the user must use to tell the computer what to do. This leads to considering a number of points:
Does the user already speak this language? Using a highly idiosyncratic interface is like communicating in a language you've never spoken before. So if your interface must be idiosyncratic at all, it had best introduce itself with the simplest of terms and few distractions. On the other hand, if your interface uses idioms that the user is accustomed to, they'll gain confidence from the start.
The enemy of communication is noise. Auditory noise interferes with spoken communication; visual noise interferes with visual communication. The more noise you can cut out of your interface, the easier communicating with it will be.
As in human conversation, it's often not what you say, it's how you say it. The way most software communicates is rude to a degree that would get it punched in the face if it were a person. How would you feel if you asked someone a question and they sat there and stared at you for several minutes, refusing to respond in any other way, before answering? Many interface elements, like progress bars and automatic focus selection, have the fundamental function of politeness. Ask yourself how you can make the user's day a little more pleasant.
Really, it's hard to say what programmers do think of interface interaction as being, if not a process of communication; maybe the problem is that it doesn't get thought of as being anything at all.
There are a lot of good comments already, so I am not sure there is much I can add.
But still...
Why would a developer expect to be able to design good UI?
How much training did he have in that field?
How many books did he read?
How many things did he design, over how many years?
Did he have the opportunity to see the reactions of their users?
We don't expect a random "Joe the plumber" to be able to write good code.
So why would we expect a random "Joe the programmer" to design a good UI?
Empathy helps. Separating the UI design and the programming helps. Usability testing helps.
But UI design is a craft that has to be learned, and practiced, like any other.
Developers are not (necessarily) good at UI design for the same reason they aren't (necessarily) good at knitting; it's hard, it takes practice, and it doesn't hurt to have someone show you how in the first place.
Most developers (me included) started "designing" UIs because it was a necessary part of writing software. Until a developer puts in the effort to get good at it, s/he won't be.
To improve, just look around at existing sites. In addition to the books already suggested, you might like to have a look at Robin Williams's excellent book "The Non-Designer's Design Book" (sanitised Amazon link).
See what's possible in visual design by taking a look at the various submissions over at The Zen Garden as well.
UI design is definitely an art though, like pointers in C, some people get it and some people don't.
But at least we can have a chuckle at their attempts. BTW Thanks OK/Cancel for a funny comic and thanks Joel for putting it in your book "The Best Software Writing I" (sanitised Amazon link).
User interface isn't something that can be applied after the fact, like a thin coat of paint. It is something that needs to be there at the start, and be based on real research. There's tons of usability research available, of course. It needs not just to be there at the start; it needs to form the core of the very reason you're making the software in the first place: there's some gap in the world out there, some problem, and it needs to be made more usable and more efficient.
Software is not there for its own sake. The reason for a piece of software to exist is FOR PEOPLE. It's absolutely ludicrous to even try to come up with an idea for a new piece of software without understanding why anyone would need it. Yet this happens all the time.
Before a single line of code is written, you should go through paper versions of the interface and test them on real people. This feels kind of weird and silly; it works best with kids, and with someone entertaining acting as "the computer".
The interface needs to take advantage of our natural cognitive facilities. How would a caveman use your program? For instance, we've evolved to be really good at tracking moving objects. That's why interfaces that use physics simulations, like the iPhone's, work better than interfaces where changes occur instantaneously.
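As a toy illustration of that point (not how any real toolkit implements it), here is a sketch of momentum or "fling" scrolling with friction; the constants are made up, but the idea is that the view glides to a stop in a way the eye can track instead of teleporting:

    def fling_scroll(initial_velocity, friction=0.95, dt=1 / 60, min_speed=1.0):
        """Simulate momentum scrolling: the position changes smoothly and the
        velocity decays each frame, instead of the view jumping instantly.
        friction and min_speed are illustrative values only."""
        position, velocity, frames = 0.0, initial_velocity, []
        while abs(velocity) > min_speed:
            position += velocity * dt
            velocity *= friction  # exponential decay, like a flick on a touchscreen
            frames.append(round(position, 1))
        return frames

    # A flick at 2000 px/s glides to a stop over ~150 frames rather than jumping:
    trajectory = fling_scroll(2000)
    print(len(trajectory), "frames; final position:", trajectory[-1])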
We are good at certain kinds of abstraction, but not others. As programmers, we're trained to do mental gymnastics and backflips to understand some of the weirdest abstractions. For instance, we understand that a sequence of arcane text can represent and be translated into a pattern of electromagnetic state on a metal platter, which when encountered by a carefully designed device, leads to a sequence of invisible events that occur at lightspeed on an electronic circuit, and these events can be directed to produce a useful outcome. This is an incredibly unnatural thing to have to understand. Understand that while it's got a perfectly rational explanation to us, to the outside world, it looks like we're writing incomprehensible incantations to summon invisible sentient spirits to do our bidding.
The sorts of abstractions that normal humans understand are things like maps, diagrams, and symbols. Beware of symbols, because symbols are a very fragile human concept that take conscious mental effort to decode, until the symbol is learned.
The trick with symbols is that there has to be a clear relationship between the symbol and the thing it represents. If the thing it represents is a concrete object, the symbol should look VERY MUCH like the thing it represents. If a symbol represents a more abstract concept, that concept has to be explained IN ADVANCE. See the inscrutable, unlabeled icons in MS Word's or Photoshop's toolbars, and the abstract concepts they represent. It has to be LEARNED that the crop tool icon in Photoshop means CROP TOOL. It has to be understood what CROP even means. These are prerequisites to correctly using that software. Which brings up an important point: beware of ASSUMED knowledge.
We only gain the ability to understand maps around the age of 4. I think I read somewhere once that chimpanzees gain the ability to understand maps around the age of 6 or 7.
The reason that GUIs have been so successful to begin with is that they changed a landscape of mostly textual interfaces to computers into something that mapped computer concepts onto something resembling a physical place. Where GUIs fail in terms of usability is where they stop resembling something you'd see in real life. There are invisible, unpredictable, incomprehensible things that happen in a computer that bear no resemblance to anything you'd ever see in the physical world. Some of this is necessary, since there'd be no point in just making a reality simulator - the idea is to save work, so there has to be a bit of magic. But that magic has to make sense, and be grounded in an abstraction that human beings are well adapted to understanding. It's when our abstractions start getting deep, layered, and mismatched with the task at hand that things break down. In other words, the interface stops functioning as a good map for the underlying software.
There are lots of books. The two I've read, and can therefore recommend, are "The Design of Everyday Things" by Donald Norman, and "The Humane Interface" by Jef Raskin.
I also recommend a course in psychology. "The Design of Everyday Things" talks about this a bit. A lot of interfaces break down because of a developer's "folk understanding" of psychology. This is similar to "folk physics". "An object in motion stays in motion" doesn't make any sense to most people. "You have to keep pushing it to keep it in motion!" thinks the physics novice. User testing doesn't make sense to most developers. "You can just ask the users what they want, and that should be good enough!" thinks the psychology novice.
I recommend Discovering Psychology, a PBS documentary series hosted by Philip Zimbardo. Failing that, try to find a good psychology textbook. The expensive kind. Not the pulp self-help crap that you find in Borders, but the thick hardbound stuff you can only find in a university library. This is a necessary foundation. You can do good design without it, but you'll only have an intuitive understanding of what's going on. Reading some good books will give you a better perspective.
If you read the book "Why Software Sucks", you would have seen Platt's answer, which is a simple one:
Developers prefer control over user-friendliness.
Average people prefer user-friendliness over control.
But another answer to your question would be "why is dentistry so hard for some developers?" - UI design is best done by a UI designer.
http://dotmad.net/blog/2007/11/david-platt-on-why-software-sucks/

Resources