What is this user agent? (... Applex*0TV...) - user-agent

This is incredibly odd, so I'm hoping someone can help me out here.
I am seeing a user agent hit our site with a user agent ID that I cannot identify. Furthermore, searching the web and all of the user agent search sites is proving fruitless. If this were some esoteric UA that I only see a few dozen times, I wouldn't care, but I'm seeing it between 130,000 and 200,000 times a day.
The user agent is always a variant of this pattern:
AppleCoreMedia/1.0.0.9A336a (Applex*0TV; U; CPU OS 5_0 like Mac OS X; en_us)
The key commonality is the AppleCoreMedia library used with "Applex*0TV" - my gut says that this is some sort of variant of Apple's iTV, except the iTVs identify themselves quite clearly as "AppleTV"...
...so, what is this? Any help would be greatly appreciated....

Related

Increase quality of crowd-sourced info

I'm working on an app that gives traffic alerts in real time and is based on crowd-sourced information. In other words, people use the app and report traffic problems and at the same time they are informed about traffic problems in their area.
A difficult task is how to distinguish real alert reports from fake ones so that the app behaves properly and is useful.
Do you know of any documentation regarding this issue or any programmer stories, insights into this problem? How should this problem be tackled?
What I've come up with so far is:
each person using the app is uniquely identified
each alert report has a reliability value in an interval 1 .. x
the reliability of a report is calculated based on the number of users that reported it or confirmed it and the reputation of those people. But how exactly?
each person has a reputation value which is calculated somehow. But how?
I'm not sure how to handle the reputation/reliability stuff so I'd love some input on this. There must be some documentation on how to create a crowd-sourcing product that works.
Panos Ipeirotis has a fabulous talk on this subject. Yes, you want to incentivize good behavior. If Waze is too complicated this talk will be too, but it will give you a good idea of what is possible if you throw the kitchen sink at it.
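If you want something concrete to start from, here is a minimal sketch of one possible reputation-weighted scheme in C# (all of the class names, methods and constants below are made up for illustration, not taken from any particular system):

using System;
using System.Collections.Generic;
using System.Linq;

public class User
{
    public string Id { get; set; }
    public double Reputation { get; set; } = 1.0;   // new users start with a neutral reputation
}

public static class CrowdScoring
{
    // Reliability grows with the number and reputation of confirming users and is
    // squashed into (0, 1) so that no single prolific reporter can dominate.
    public static double ReportReliability(IReadOnlyCollection<User> confirmers)
    {
        double weight = confirmers.Sum(u => u.Reputation);
        return weight / (weight + 3.0);   // 3.0 is a tuning constant: ~3 average users give 0.5
    }

    // Once a report is later verified (or disproved), nudge the reputation of
    // everyone who confirmed it: small reward for real reports, larger penalty for fakes.
    public static void UpdateReputations(IEnumerable<User> confirmers, bool reportWasReal)
    {
        foreach (var u in confirmers)
            u.Reputation = Math.Max(0.1, u.Reputation + (reportWasReal ? 0.1 : -0.5));
    }
}

The asymmetric update (small reward, larger penalty) is one simple way to discourage fake reports without locking out new users; the exact constants are something you would have to tune against real data.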

How do you reproduce bugs that occur sporadically?

We have a bug in our application that does not occur every time, so we don't know its "logic". I couldn't even reproduce it once in 100 attempts today.
Disclaimer: This bug exists and I've seen it. It's not a pebkac or something similar.
What are common hints to reproduce this kind of bug?
Analyze the problem in a pair and pair-read the code. Make notes of the problems you KNOW to be true and try to work out which logical preconditions must hold true for this to happen. Follow the evidence like a CSI.
Most people instinctively say "add more logging", and this may be a solution. But for a lot of problems this just makes things worse, since logging can change timing-dependencies sufficiently to make the problem more or less frequent. Changing the frequency from 1 in 1000 to 1 in 1,000,000 will not bring you closer to the true source of the problem.
So if your logical reasoning does not solve the problem, it'll probably give you a few specifics you could investigate with logging or assertions in your code.
There is no general good answer to the question, but here is what I have found:
It takes a talent for this kind of thing. Not all developers are best suited for it, even if they are superstars in other areas. So know your team, who has a talent for it, and hope you can give them enough candy to get them excited about helping you out, even if it isn't their area.
Work backwards, and treat it like a scientific investigation. Start with the bug, what you see is wrong. Develop hypotheses about what could cause it (this is the creative/imaginative part, the art that not everyone has the talent for) - and it helps a lot to know how the code works. For each of those hypotheses (preferably sorted by what you think is most likely - again pure gut feel here), develop a test that tries to eliminate it as the cause, and test the hypothesis. Any given failure to meet a prediction doesn't mean the hypothesis is wrong. Test the hypothesis until it is confirmed to be wrong (although as it gets less likely you may want to move on to another hypothesis first, just don't discount this one until you have a definitive failure).
Gather as much data as you can during this process. Extensive logging and whatever else is applicable. Do not discount a hypothesis because you lack the data, rather remedy the lack of data. Quite often the inspiration for the right hypothesis comes from examining the data. Noticing something off in a stack trace, weird issue in a log, something missing that should be there in a database, etc.
Double check every assumption. So many times I have seen an issue not get fixed quickly because some general method call was not further investigated, so the problem was just assumed to be not applicable. "Oh that, that should be simple." (See point 1).
If you run out of hypotheses, that is generally caused by insufficient knowledge of the system (this is true even if you wrote every line of code yourself), and you need to run through and review code and gain additional insight into the system to come up with a new idea.
Of course, none of the above guarantees anything, but that is the approach that I have found gets results consistently.
Add some sort of logging or tracing. For example, log the last X actions the user performed before triggering the bug (only if you can set a condition that detects the bug).
It's quite common for programmers to be unable to reproduce a crash that users experience, simply because you have developed a certain workflow and habits in using the application that happen to steer around the bug.
At a frequency of 1/100, I'd say that the first thing to do is to handle exceptions and log everything you can, or you could spend another week hunting this bug.
Also make a priority list of the potentially sensitive areas and features in your project. For example:
1 - Multithreading
2 - Wild pointers/ loose arrays
3 - Reliance on input devices
etc.
This will help you segment areas that you can brute-force-until-break-again as suggested by other posters.
Since this is language-agnostic, I'll mention a few axioms of debugging.
Nothing a computer ever does is random. A 'random occurrence' indicates an as-yet-undiscovered pattern. Debugging begins with isolating the pattern. Vary individual elements and assess what makes a change in the behaviour of the bug.
Different user, same computer?
Same user, different computer?
Is the occurrence strongly periodic? Does rebooting change the periodicity?
FYI - I once saw a bug that was experienced by a single person. I literally mean person, not a user account. User A would never see the problem on their system; User B would sit down at that workstation, sign on as User A, and could immediately reproduce the bug. There should have been no conceivable way for the app to know which physical body was in the chair. However-
The users used the app in different ways. User A habitually used a hotkey to invoke an action and User B used an on-screen control. The difference in user behaviour would cascade into a visible error a few actions later.
ANY difference that affects the behaviour of the bug should be investigated, even if it makes no sense.
There's a good chance your application is MTWIDNTBMT (Multi Threaded When It Doesn't Need To Be Multi Threaded), or maybe just multi-threaded (to be polite). A good way to reproduce sporadic errors in multi-threaded applications is to sprinkle code like this around (C#):
Random rnd = new Random();
System.Threading.Thread.Sleep(rnd.Next(2000));
and/or this:
for (long i = 0; i < 4000000000L; i++)
{
// tight loop to tie up the CPU; the counter is a long because 4,000,000,000 overflows int
}
to simulate threads completing their tasks at different times than usual or tying up the processor for long stretches.
I've inherited many buggy, multi-threaded apps over the years, and code like the above examples usually makes the sporadic errors occur much more frequently.
Add verbose logging. It will take multiple -- sometimes dozen(s) -- iterations to add enough logging to understand the scenario.
The problem is that if the bug is a race condition, which is likely if it doesn't reproduce reliably, logging can change the timing and the problem will stop happening. In that case do not log to a file; keep a rotating buffer of the log in memory and only dump it to disk when you detect that the problem has occurred.
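A minimal sketch of such a rotating in-memory buffer in C# (the capacity, timestamp format and file path are arbitrary choices):

using System;
using System.Collections.Concurrent;
using System.IO;

public class RingLogger
{
    private readonly ConcurrentQueue<string> _buffer = new ConcurrentQueue<string>();
    private readonly int _capacity;

    public RingLogger(int capacity = 1000) { _capacity = capacity; }

    public void Log(string message)
    {
        _buffer.Enqueue($"{DateTime.UtcNow:O} {message}");
        while (_buffer.Count > _capacity && _buffer.TryDequeue(out _)) { }   // drop the oldest entries
    }

    // Call this only when the failure condition has actually been detected.
    public void DumpTo(string path) => File.WriteAllLines(path, _buffer);
}

Log() stays cheap and the file I/O only happens after the bug has already shown itself, so the timing of the hot path is barely disturbed.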
Edit: a few more thoughts: if this is a GUI application, run tests with a QA automation tool which allows you to replay macros. If this is a service-type app, try to come up with at least a guess as to what is happening, and then programmatically create 'freak' usage patterns which would exercise the code that you suspect. Create higher-than-usual loads, etc.
What development environment?
For C++, your best bet may be VMWare Workstation record/replay, see:
http://stackframe.blogspot.com/2007/04/workstation-60-and-death-of.html
Other suggestions include inspecting the stack trace, and careful code overview... there is really no silver bullet :)
Try to add code in your app to trace the bug automatically once it happens (or even alert you via mail / SMS).
Log whatever you can, so that when it happens you can capture the right system state.
Another thing: try applying automated testing that can cover more territory than human-based testing in a systematic manner. It's a long shot, but good practice in general.
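For the first suggestion, here is a minimal sketch of a last-chance handler for a .NET app that logs the failure and mails an alert (the SMTP host and addresses are placeholders); register it early in startup:

using System;
using System.IO;
using System.Net.Mail;

AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    var ex = e.ExceptionObject as Exception;
    File.AppendAllText("crash.log", $"{DateTime.UtcNow:O} {ex}{Environment.NewLine}");
    using (var smtp = new SmtpClient("smtp.example.com"))   // placeholder host
    {
        smtp.Send("app@example.com", "ops@example.com",
                  "Sporadic bug reproduced", ex?.ToString() ?? "unknown failure");
    }
};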
All of the above, plus throw a semi-random, brute-force soft robot at it, and scatter a lot of assert/verify calls (C/C++, probably similar in other languages) through the code.
Tons of logging and careful code review are your only options.
These can be especially painful if the app is deployed and you can't adjust the logging. At that point, your only choice is going through the code with a fine-tooth comb and trying to reason about how the program could enter into the bad state (scientific method to the rescue!)
Often these kinds of bugs are related to corrupted memory, and for that reason they might not appear very often. You should try running your software under some kind of memory checker, e.g. valgrind, to see if something goes wrong.
Let’s say I’m starting with a production application.
I typically add debug logging around the areas where I think the bug is occurring. I set up the logging statements to give me insight into the state of the application. Then I have the debug log level turned on and ask the user/operator(s) to notify me of the time of the next bug occurrence. I then analyze the log to see what hints it gives about the state of the application and whether that leads to a better understanding of what could be going wrong.
I repeat step 1 until I have a good idea of where I can start debugging the code in the debugger
Sometimes the number of iterations of the code running is key, but other times it may be the interaction of a component with an outside system (database, specific user machine, operating system, etc.). Take some time to set up a debug environment that matches the production environment as closely as possible. VM technology is a good tool for solving this problem.
Next I proceed via the debugger. This could include creating a test harness of some sort that puts the code/components in the state I’ve observed from the logs. Knowing how to setup conditional break points can save a lot of time, so get familiar with that and other features within your debugger.
Debug, debug, debug. If you're going nowhere after a few hours, take a break and work on something unrelated for a while. Come back with a fresh mind and perspective.
If you have gotten nowhere by now, go back to step 1 and make another iteration.
For really difficult problems you may have to resort to installing a debugger on the system where the bug is occurring. That combined with your test harness from step 4 can usually crack the really baffling issues.
Unit Tests. Testing a bug in the app is often horrendous because there is so much noise, so many variable factors. In general the bigger the (hay)stack, the harder it is to pinpoint the issue. Creatively extending your unit test framework to embrace edge cases can save hours or even days of sifting
Having said that there is no silver bullet. I feel your pain.
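If it helps to see the shape of such a test, here is a minimal sketch using NUnit (any test framework will do; QuantityParser is a made-up stand-in for the code under suspicion):

using NUnit.Framework;

public static class QuantityParser
{
    public static int ParseQuantity(string input) =>
        string.IsNullOrWhiteSpace(input) ? 0 : int.Parse(input.Trim());
}

[TestFixture]
public class QuantityParserEdgeCases
{
    [TestCase("")]
    [TestCase("   ")]
    [TestCase("0")]
    [TestCase("-1")]
    public void ParseQuantity_DoesNotThrow_OnBoundaryInput(string input)
    {
        // Pin the suspected edge case down in isolation so it can be run
        // thousands of times without the noise of the full application.
        Assert.DoesNotThrow(() => QuantityParser.ParseQuantity(input));
    }
}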
Add pre- and post-condition checks in the methods related to this bug.
You may want to have a look at Design by Contract.
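A minimal sketch of what that can look like in C#, using plain Debug.Assert calls (the Order type and ProcessOrder method are hypothetical stand-ins for the code under suspicion):

using System.Collections.Generic;
using System.Diagnostics;

public class Order { public List<string> Items = new List<string>(); public decimal Total; }

public static class OrderProcessor
{
    public static void ProcessOrder(Order order)
    {
        Debug.Assert(order != null, "Precondition violated: order is null");
        Debug.Assert(order.Items.Count > 0, "Precondition violated: order has no items");

        // ... the existing logic under suspicion goes here ...

        Debug.Assert(order.Total >= 0, "Postcondition violated: negative total");
    }
}

The asserts cost nothing in a release build, and in a debug build the first violated condition points you at the call that broke the assumption rather than at the crash that follows much later.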
Along with a lot of patience, quiet prayer, and cursing, you would need:
a good mechanism for logging the user actions
a good mechanism for gathering the data state when the user performs some actions (state in application, database etc.)
Check the server environment (e.g. an anti-virus software running at a particular time etc.) & record the times of the error & see if you can find any trends
some more prayers & cursing...
HTH.
Assuming you're on Windows, and your "bug" is a crash or some sort of corruption in unmanaged code (C/C++), then take a look at Application Verifier from Microsoft. The tool has a number of stops that can be enabled to verify things during runtime. If you have an idea of the scenario where your bug occurs, then try to run through the scenario (or a stress version of the scenario) with AppVerifer running. Make sure to either turn on pageheap in AppVerifier, or consider compiling your code with the /RTCcsu switch (see http://msdn.microsoft.com/en-us/library/8wtf2dfz.aspx for more information).
"Heisenbugs" require great skills to diagnose, and if you want help from people here you have to describe this in much more detail, and patiently listen to various tests and checks, report result here, and iterate this till you solve it (or decide it is too expensive in terms of resources).
You will probably have to tell us your actual situation, language, DB, operating system, workload estimate, time of day it happened in the past, and a myriad of other things; list the tests you have done already and how they went, and be ready to do more and share the results.
And this will not guarantee that we collectively can find it, either...
I'd suggest writing down all the things the user has been doing. If you have, let's say, 10 such bug reports, you can try to find something that connects them.
Read the stack trace carefully and try to guess what could have happened;
then try to trace/log every line of code that could potentially cause trouble.
Keep your focus on disposing resources; many sneaky sporadic bugs I've found were related to close/dispose issues :).
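As a reminder of the pattern, a small C# sketch (the connection string and the Jobs table are placeholders):

using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT COUNT(*) FROM Jobs", connection))
{
    connection.Open();
    int jobCount = (int)command.ExecuteScalar();
}   // both objects are disposed here, even if an exception was thrown

The using statement guarantees Dispose runs on the exception path too, which removes a whole class of "works most of the time" bugs caused by leaked connections and handles.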
For .NET projects you can use ELMAH (Error Logging Modules and Handlers) to monitor your application for uncaught exceptions; it's very simple to install and provides a very nice interface for browsing unknown errors.
http://code.google.com/p/elmah/
This saved me just today in catching a very random error that was occurring during a registration process.
Other than that I can only recommend trying to get as much information from your users as possible and having a thorough understanding of the project workflow
They mostly come out at night....
mostly
The team that I work with has enlisted users to record the time they spend in our app with CamStudio when we've got a pesky bug to track down. It's easy to install and for them to use, and it makes reproducing those nagging bugs much easier, since you can watch what the users are doing. It also has no relationship to the language you're working in, since it's just recording the Windows desktop.
However, this route seems to be viable only if you're developing corporate apps and have good relationships with your users.
This varies (as you say), but some of the things that are handy with this can be
immediately going into the debugger when the problem occurs and dumping all the threads (or the equivalent, such as dumping the core immediately or whatever.)
running with logging turned on but otherwise entirely in release/production mode. (This is possible in some random environments like c and rails but not many others.)
do stuff to make the edge conditions on the machine worse... force low memory / high load / more threads / serving more requests
Making sure that you're actually listening to what the users encountering the problem are actually saying. Making sure that they're actually explaining the relevant details. This seems to be the one that breaks people in the field a lot. Trying to reproduce the wrong problem is boring.
Get used to reading assembly that was produced by optimizing compilers. This seems to stop people sometimes, and it isn't applicable to all languages/platforms, but it can help
Be prepared to accept that it is your (the developer's) fault. Don't get into the trap of insisting the code is perfect.
sometimes you need to actually track the problem down on the machine it is happening on.
#p.marino - not enough rep to comment =/
tl;dr - build failures due to time of day
You mentioned time of day and that caught my eye. Had a bug once where someone stayed late at work one night, tried to build and commit before they left, and kept getting a failure. They eventually gave up and went home. When they came in the next morning it built fine, they committed (they probably should have been more suspicious =] ) and the build worked for everyone. A week or two later someone stayed late and had an unexpected build failure. Turns out there was a bug in the code that made any build after 7PM break >.>
We also found a bug in one seldom-used corner of the project this January that caused problems marshalling between different schemas, because we were not accounting for the calendars being 0-based versus 1-based for months. So if no one had messed with that part of the project, we possibly wouldn't have found the bug until January 2011.
These were easier to fix than threading issues, but still interesting I think.
hire some testers!
This has worked for really weird heisenbugs.
(I'd also recommend getting a copy of "Debugging" by Dave Argans, these ideas are partly derived form using his ideas!)
(0) Check the RAM of the system using something like Memtest86!
The whole system exhibits the problem, so make a test jig that exercises the whole thing.
Say it's a server-side thing with a GUI: run the whole thing with a GUI test framework doing the necessary input to provoke the problem.
It doesn't fail 100% of the time, so you have to make it fail more often.
Start by cutting the system in half (binary chop).
Worst case, you have to remove sub-systems one at a time.
Stub them out if they can't be commented out.
See if it still fails. Does it fail more often ?
Keep proper test records, and only change one variable at a time!
Worst case you use the jig and you test for weeks to get meaningful statistics. This is HARD; but remember, the jig is doing the work.
I've got no threads and only one process, and I don't talk to hardware
If the system has no threads, no communicating processes and contacts no hardware; it's tricky; heisenbugs are generally synchronization, but in the no-thread no processes case it's more likely to be uninitialized data, or data used after being released, either on the heap or the stack. Try to use a checker like valgrind.
For threaded/multi-process problems:
Try running it on a different number of CPUs. If it's running on 1, try on 4! Try forcing a 4-CPU system down onto 1.
It'll mostly ensure things happen one at a time.
If there are threads or communicating processes this can shake out bugs.
If this is not helping but you suspect it's synchronization or threading, try changing the OS time-slice size.
Make it as fine as your OS vendor allows!
Sometimes this has made race conditions happen almost every time!
Conversely, try going slower on the time slices.
Then you set the test jig running with debugger(s) attached all over the place and wait for the test jig to stop on a fault.
If all else fails, put the hardware in the freezer and run it there. The timing of everything will be shifted.
Debugging is hard and time consuming especially if you are unable to deterministically reproduce the problem. My advice to you is to find out the steps to reproduce it deterministically (not just sometimes).
There has been a lot of research in the field of failure reproduction in the past years and is still very active. Record&Replay techniques have been (so far) the research direction of most researchers. This is what you need to do:
1) Analyze the source code and determine what are the sources of non-determinism in the application, that is, what are the aspects that may take your application through different execution paths (e.g. user input, OS signals)
2) Log them in the next time you execute the application
3) When your application fails again, you have the steps-to-reproduce the failure in your log.
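As an illustration of steps 1 and 2, here is a minimal C# sketch (InputJournal and the journal format are assumptions for the example, not part of any particular record & replay tool):

using System;
using System.IO;

public class InputJournal
{
    private readonly StreamWriter _writer;

    public InputJournal(string path)
    {
        _writer = new StreamWriter(path) { AutoFlush = true };   // flush so the journal survives a crash
    }

    // Record the seed so "random" behaviour becomes reproducible.
    public int RecordSeed()
    {
        int seed = Environment.TickCount;
        _writer.WriteLine($"SEED {seed}");
        return seed;
    }

    // Record every external input (user command, message, file name, ...).
    public void RecordCommand(string command) => _writer.WriteLine($"CMD {command}");
}

On replay you read the journal back, seed your random number generators from the SEED line, and feed the CMD lines to the application instead of live input.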
If your log still does not reproduce the failure, then you are dealing with a concurrency bug. In that case, you should take a look at how your application accesses shared variables. Do not attempt to record the accesses to shared variables, because you would be logging too much data, thereby causing severe slowdowns and large logs. Unfortunately, there is not much I can say that would help you to reproduce concurrency bugs, because research still has a long way to go in this subject. The best I can do is to provide a reference to the most recent advance (so far) in the topic of deterministic replay of concurrency bugs:
http://www.gsd.inesc-id.pt/~nmachado/software/Symbiosis_Tutorial.html
Best regards
Use an enhanced crash reporter. In the Delphi environment, we have EurekaLog and MadExcept. Other tools exist in other environments. Or you can diagnose the core dump. You're looking for the stack trace, which will show you where it's blowing up, how it got there, what's in memory, etc.. It's also useful to have a screenshot of the app, if it's a user-interaction thing. And info about the machine that it crashed on (OS version and patch, what else is running at the time, etc..) Both of the tools that I mentioned can do this.
If it's something that happens with a few users but you can't reproduce it, and they can, go sit with them and watch. If it's not apparent, switch seats - you "drive", and they tell you what to do. You'll uncover the subtle usability issues that way. Double-clicks on a single-click button, for example, initiating re-entrancy in the OnClick event. That sort of thing. If the users are remote, use WebEx, Wink, etc., to record them crashing it, so you can analyze the playback.

Ways to enhance a trial user's first time experience

I am looking for some ideas on enhancing a trial user's experience when he uses a product for the first time. The product is aimed at a particular domain and has various features/workflows. Experienced users of the product naturally find interesting ways to combine features to get the results they want (somewhat like using an IDE from a programmer's perspective). Trial users get to use all features of the product in a limited fashion (for example, if there is a search functionality, the trial user might see only the top 20 results, or he may be allowed to search only 100 times). My question is: What are the best ways to help a trial user explore/understand the possibilities of the product in the trial period, especially in the first 20 - 60 mins before the user gives up on the product?
Edit 1: The product is a desktop app (served via JNLP, so no install required) and as pointed out in the comments, the expectations can be different in this case. That said, many webapps do take a virtual desktop form and so, all suggestions are welcome.
Check out how blinksale.com handles this. It's an invoicing app, but to prevent it from looking too empty for a new account, they show static images in places where you'd actually have content if you used the app. Makes it look less barren at first until you get your own data in.
If you can, avoid feature-limiting a trial. It stops the user from experiencing what the product is ACTUALLY like. It also prevents a user from finding out whether a feature actually works like they want/expect/need it to.
If you have a trial version, and you can, optimise it for first-time use. Focus on / highlight the features that allow the user to quickly and easily get benefits or useful output from the system.
Allow users to export any data they enter into a trial system - and indicate that this is possible/easy. You don't want them to be put off from trying something because of the potential for wasted effort.
Avoid requiring users to do lots of configuration before using a trial. Pre-populate settings based on typical/common/popular choices. You may also want to consider having default settings for different types of usage, e.g. "If you want to see what the system is like for scenario X, use configuration J. If you want to see what the system is like for use case Y, use configuration K." where J & K are collections of settings best suited to a particular type of usage.
I'll speak from personal experience while evaluating trial applications.
The most annoying trial applications are those which keep popping up nag screens or constantly reminding me that I'm using a trial. Trials which act exactly like the real product from the beginning till the end of the trial period are just awesome. Limited features are annoying; the only exception I can think of where you could use them is when you have a rarely used feature that would allow people to exploit the trial (by using this "once-in-a-lifetime" needed feature and uninstalling). If you have, for example, a video editing software trial which puts a "trial" watermark on the output, I'd uninstall it as soon as I noticed. In my opinion a trial should seamlessly integrate into the user's work-flow, so that once the trial ends they would think "Hey, I have been using this awesome program almost every day since I got the trial, I absolutely have to buy it." Sure, some people will exploit it, but in the end you should target the group which will use your product in their daily work-flow instead of one-time users. Even if a user "trials" it 2 times per year, he will keep coming back to your product and might even buy it after the 2nd or 3rd "one-time use".
(Sorry for the wall of the text and rant)
As for how to improve the first session: I usually find my way around programs easily, but a one-time-only pop-up/screen (or one with a check-box to never show it again) with videos showing off the best features and the intended work-flow is quite helpful. Also, links to sample documents might be helpful. If your application can present itself (for example, a slide-show about your slide-show program) you could include such a document. People don't like to read long and boring help files, but if you have a designer on your team, you could ask him to make a short, colourful intro PDF. Also, don't throw all the features at the user at the same time. Split information into simple categories, and if the user is interested in one specific category, keep feeding him more specific information. That's why videos are so good: with 3-6 x ~3-5 minute videos you can tell a lot. Also, depending on how complex your program is, you could include a picture showing where specific things are located on the screen.
Just my personal opinion, I have never made a trial myself. Hope it helps.
An interactive walk through/lab exercise that really highlights the major and exciting offerings of your application.
Example: Yahoo mail does the same when the users opt to use new mail interface
There are so many ways you can go with this. I still can't claim to have found the best approach.
However, my plan from the beginning with my online (Silverlight) software was to give away something thousands of people will find useful and can use for free. The free version is pretty well representative of the professional product, with only a few features missing that enhance productivity (I'm working on those professional features now). And then I do have a nag popup that comes up every 5 minutes suggesting that you should buy it. That popup can be dismissed as many times as you want. I know that popup will annoy some people but I suppose that's the trade off. There is no perfect plan. But I don't think the occasional nag popup scares that many people away, especially when it can be dismissed with a single click.
I was inspired by Balsamiq Mockups, which has been hugely successful over the past couple years. My trial/nag popup way of doing things was copied almost exactly from Balsamiq. I honestly don't know if this is the ideal plan, but it has obviously worked for them. By the way, I think another reason for Balsamiq's success is that the demo doesn't have to be downloaded & installed. Since the demo is in Flash, there's a very high conversion rate of users actually trying it and becoming addicted to it.

How to make sure the visitor is unique

Say you have a pay-site with some online courses. And you want to make sure that one person doesn't just buy access, and then give the username and password to all his friends, so they can do the courses for free.
How would you go about this?
What we've thought of so far:
IP tracking
SMS password for each entry
Max number of runs through each course
Any other suggestions?
It's impossible to get a perfect system to do what you want. You find yourself in a situation where the stronger you make your protection (to defend against cheating customers), the more you annoy all your customers (including the honest ones).
You're going to have to ask yourself at what point the extra protection actually reduces the value of your site to the point that you're losing more honest customers than you're winning by converting cheaters into honest (paying) customers. It might well be that the optimal thing to do is to use cookies, and only take remedial action if you see two concurrent sessions from different IP addresses, since that's fairly likely to be caused by cheating (though not guaranteed; it could be a dual-homed customer).
There's no way you can absolutely, positively guarantee that users are unique - even if you had some way to uniquely identify users, like biometric data (which you don't), you'd still be unable to be certain that the client wasn't just spoofing that information.
The best you can hope to do is make it a hassle for someone to "cheat" the system. IP+SMS would probably do that, although it'd also probably annoy the heck out of your users (at least, the latter part).
Your best bet is probably just to log IPs used for each account - if the number goes above a certain threshold, flag it for inspection, and close the account if it looks like the info is being widely shared.
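A minimal sketch of that kind of flagging, assuming an in-memory store (a real site would persist this per account and probably expire old entries):

using System.Collections.Generic;

public class SharingDetector
{
    private readonly Dictionary<string, HashSet<string>> _ipsByAccount = new Dictionary<string, HashSet<string>>();
    private const int Threshold = 5;   // arbitrary cut-off, tune against real usage

    // Call on every successful login; returns true if the account should be flagged for review.
    public bool RecordLogin(string accountId, string ipAddress)
    {
        if (!_ipsByAccount.TryGetValue(accountId, out var ips))
            _ipsByAccount[accountId] = ips = new HashSet<string>();

        ips.Add(ipAddress);
        return ips.Count > Threshold;
    }
}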
Associate an IP address with a cookie. Then associate that cookie with the user account and require use of the cookie to log in. If the user logs in with a different IP address, then associate that new IP address with the cookie and ask for some sort of verification to authenticate the user.
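A hedged sketch of that check (CookieRecord and the in-memory dictionary stand in for whatever persistent store the site already has):

using System.Collections.Generic;

public class CookieRecord { public string AccountId; public string LastIp; }

public class CookieIpGuard
{
    private readonly Dictionary<string, CookieRecord> _byToken = new Dictionary<string, CookieRecord>();

    // Returns true if the login can proceed silently; false means the caller should
    // run an extra verification step (security question, emailed one-time code, ...).
    public bool Check(string token, string accountId, string currentIp)
    {
        if (!_byToken.TryGetValue(token, out var rec) || rec.AccountId != accountId)
            return false;                 // unknown cookie: verify and issue a new one
        if (rec.LastIp == currentIp)
            return true;                  // same address as last time
        rec.LastIp = currentIp;           // remember the new address...
        return false;                     // ...but ask for verification first
    }
}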
There is no 100% guarantee at all. Someone can just sit next to the user who bought the access and read the site over his shoulder. Your methods are good (but I personally think that SMS-authorization is a little too much), but I'd suggest maximum personalization of the information you provide, so nobody except the payer can benefit from accessing it.
I'm sure some people would try to use cookies (assuming users don't change computers)

What are some good examples showing that "I am not the user"?

I'm a software developer who has a background in usability engineering. When I studied usability engineering in grad school, one of the professors had a mantra: "You are not the user". The idea was that we need to base UI design on actual user research rather than our own ideas as to how the UI should work.
Since then I've seen some good examples that seem to prove that I'm not the user.
A user trying to use an e-mail template authoring tool gets stuck trying to enter the pipe (|) character. The problem turns out to be that the pipe printed on the keyboard key has a gap in the middle (a broken bar), so the user doesn't recognize it as the character on screen.
In a web app, user doesn't see content below the fold. Not unusual. We tell her to scroll down. She has no idea what we're talking about and is not familiar with the scroll thumb.
I'm listening in on a tech support call. Rep tells the user to close the browser. In the background I hear the Windows shutdown jingle.
What are some other good examples of this?
EDIT: To clarify, I'm looking for examples where developers make assumptions that turn out to be horribly false about what users will know, understand, etc.
I think one of the biggest examples is that expert users tend to play with an application.
They say, "Okay, I have this tool, what can I do with it?"
Your average user sees the ecosystem of an operating system, filesystem, or application as a big scary place where they are likely to get lost and never return.
For them, everything they want to do on a computer is task-based.
"How do I burn a DVD?"
"How do I upload a photo from my camera to this website."
"How do I send my mom a song?"
They want a starting point, a reproducible work flow, and they want to do that every time they have to perform the task. They don't care about streamlining the process or finding the best way to do it, they just want one reproducible way to do it.
In building web applications, I long since learned to make the start page of my application something separate from the menus with task-based links to the main things the application did in a really big font. For the average user, this increased usability hugely.
So remember this: users don't want to "use your application", they want to get something specific done.
In my mind, the most visible example of "developers are not the user" is the common Confirmation Dialog.
In almost any document-based application, from the most complex (MS Word, Excel, Visual Studio) to the simplest (Notepad, Crimson Editor, UltraEdit), when you close the application with unsaved changes you get a dialog like this:
The text in the Untitled file has changed.
Do you want to save the changes?
[Yes] [No] [Cancel]
Assumption: Users will read the dialog
Reality: With an average reading speed of 2 words per second, this would take 9 seconds. Many users won't read the dialog at all.
Observation: Many developers read much much faster than typical users
Assumption: The available options are all equally likely.
Reality: Most (>99%) of the time users will want their changes saved.
Assumption: Users will consider the consequences before clicking a choice
Reality: The true impact of the choice will occur to users a split second after pressing the button.
Assumption: Users will care about the message being displayed.
Reality: Users are focussed on the next task they need to complete, not on the "care and feeding" of their computer.
Assumption: Users will understand that the dialog contains critical information they need to know.
Reality: Users see the dialog as a speedbump in their way and just want to get rid of it in the fastest way possible.
I definitely agree with the bolded comments in Daniel's response--most real users frequently have a goal they want to get to, and just want to reach that goal as easily and quickly as possible. Speaking from experience, this goes not only for computer novices or non-techie people but also for fairly tech-savvy users who just might not be well-versed in your particular domain or technology stack.
Too frequently I've seen customers faced with a rich set of technologies, tools, utilities, APIs, etc. but no obvious way to accomplish their high-level tasks. Sometimes this could be addressed simply with better documentation (think comprehensive walk-throughs), sometimes with some high-level wizards built on top of command-line scripts/tools, and sometimes only with a fundamental re-prioritization of the software project.
With that said... to throw another concrete example on the pile, there's the Windows start menu (excerpt from an article on The Old New Thing blog):
Back in the early days, the taskbar didn't have a Start button.
...
But one thing kept getting kicked up by usability tests: People booted up the computer and just sat there, unsure what to do next.
That's when we decided to label the System button "Start".
It says, "You dummy. Click here." And it sent our usability numbers through the roof, because all of a sudden, people knew what to click when they wanted to do something.
As mentioned by others here, we techie folks are used to playing around with an environment, clicking on everything that can be clicked on, poking around in all available menus, etc. Family members of mine who are afraid of their computers, however, are even more afraid that they'll click on something that will "erase" their data, so they'd prefer to be given clear directions on where to click.
Many years ago, in a CMS, I stupidly assumed that no one would ever try to create a directory with a leading space in the name .... someone did, and made many other parts of the system very very sad.
On another note, trying to explain to my mother to click the Start button to turn the computer off is just a world of pain.
How about the apocryphal tech support call about the user with the broken "cup holder" (CD/ROM)?
Actually, one that bit me was cut/paste -- I always trim my text inputs now since some of my users cut/paste text from emails, etc. and end up selecting extra whitespace. My tests never considered that people would "type" in extra characters.
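For example, something as simple as this (txtEmail is a hypothetical text box) sidesteps the pasted-whitespace problem:

string email = (txtEmail.Text ?? string.Empty).Trim();   // strip leading/trailing whitespace from pasted input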
Today's GUIs do a pretty good job of hiding the underlying OS. But the idiosyncrasies still show through.
Why won't the Mac let me create a folder called "Photos: Christmas 08"?
Why do I have to "eject" a mounted disk image?
Can't I convert a JPEG to TIFF just by changing the file extension?
(The last one actually happened to me some years ago. It took forever to figure out why the TIFF wasn't loading correctly! It was at that moment that I understood why Apple used to use embedded file types (as metadata) and to this day I do not understand why they foolishly went back to file extensions. Oh, right; it's because Unix is a superior OS.)
I've seen this plenty of times; it seems to be something that always comes up. I seem to be the kind of person who can pick up on these kinds of assumptions (in some circumstances), but I've been blown away by what the user was doing many other times.
As I said, it's something I'm quite familiar with. Some of the software I've worked on is used by the general public (as opposed to specially trained people) so we had to be ready for this kind of thing. Yet I've seen it not be taken into account.
A good example is a web form that needs to be completed. We need this form completed; it's important to the process. The user is no good to us if they don't complete the form, but the more information we get out of them the better. Obviously these are two conflicting demands. If we just present the user with a screen of 150 fields (random large number), they'll run away scared.
These forms had been revised many times in order to improve things, but users weren't asked what they wanted. Decisions were made based on the assumptions or feelings of various people, but how close those feelings were to actual customers wasn't taken into account.
I'm also going to mention the corollary to Bevan's "The users will read the dialog" assumption. Operating off the "the users don't read anything" assumption makes much more sense. Yet people who argue that users don't read anything will often suggest putting in bits of long, dry explanatory text to help users who are confused by some random poor design decision (like using checkboxes for something that should be radio buttons because you can only select one).
Working any kind of tech support can be very informative on how users do (or do not) think.
pretty much anything at the O/S level in Linux is a good example, from the choice of names ("grep" obviously means "search" to the user!) to the choice of syntax ("rm *" is good for you!)
[i'm not hatin' on linux, it's just chock full of unix-legacy un-usability examples]
How about the desktop and wallpaper metaphors? It's getting better, but 5-10 years ago they were the bane of a lot of remote tech support calls.
There's also the backslash vs. slash issue, the myriad names for the various keyboard symbols, and the antiquated print screen button.
Modern operating systems are great because they all support multiple user profiles, so everyone that uses my application on the same workstation can have their own settings and user data. Only, a good portion of the support requests I get are asking how to have multiple data files under the same user account.
Back in my college days, I used to train people on how to use a computer and the internet. I'd go to their house, set up their internet service, show them email and everything. Well, there was this old couple (late 60's). I spent about three hours showing them how to use their computer, made sure they could connect to the internet and everything. I left feeling very happy.
That weekend I got a frantic call about them not being able to check their email. I was in the middle of enjoying my weekend but decided to help them out, and walked through everything. Thirty minutes later, I asked them if they have two phone lines... "of course, we only have one". Needless to say, they had forgotten that they needed to connect to the internet first (yes, this was back in the day of modems).
I suppose I should have set up shortcuts like DUN -> Check Email Step 1, Eudora -> Check Email Step 2....
What users don't know, they will make up. They often work with an incorrect theory of how an application works.
Especially for data entry, users tend to type much faster than developers which can cause a problem if the program is slow to react.
Story: Once upon a time, before the personal computer, there was timesharing. A timesharing company's customer rep told me that once, when he was giving a "how to" class to two or three nice older women, he told them how to stop a program that was running (in case it was started in error or taking too long). He had one of the students type ^K, and the timesharing terminal responded "Killed!". The lady nearly had a heart attack.
One problem that we have at our company is employees who don't trust the computer. If you computerize a function that they do on paper, they will continue to do it on paper, while entering the results in the computer.
