Understanding Users - Does Performance Trump Looks?

It seems to me that whenever a GUI (Graphical User Interface) is involved, the look and feel of the interface nearly always trumps the performance of the application.
Is this a universal phenomenon?

Sufficiently bad looks trump any level of good performance.
Sufficiently bad performance trumps any level of good looks.

This boils down to the psychology of your target audience and the architecture of your application. If the GUI reacts quickly and is laid out in such a way that it is intuitive to the user (as opposed to the developer), then the underlying layers may not need to perform so well. If, however, the user wants to get data from a database and they're left hanging while the data loads, they're going to feel very differently. Compare two web applications as an example:
Application one feels quite responsive, but under the covers things take longer than it appears on the surface - it uses AJAX to talk to Web Services. The Web Services aren't the quickest, but everything happens on callback (asynchronously), so the user isn't held up while fields populate. It doesn't impede their workflow. On a bad day, when network performance deteriorates, the slower background work is noticeable, but user activity isn't impeded any further than normal.
Application two doesn't feel quite so responsive. Everything happens on postback; there's no AJAX or Web Services. On a good day the page loads are fairly quick. Of course, on every postback the user's workflow is impeded while they wait for the page to reload. On a bad day, degraded network performance slows things down noticeably, further impeding the user.
Application one is far less likely to get complaints because the user isn't held up even though fields aren't loaded so quickly. The user can enter data and move on. Everything is handled asynchronously. Of course, in the background, the Web Service process may actually be slower than the full page refresh but the user isn't going to care so much.
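As a rough illustration of application one's pattern - a minimal sketch, where the endpoint and element names are hypothetical - the point is that the slow call runs in the background and never blocks the user:

```typescript
// Minimal sketch of the non-blocking pattern in application one.
// Endpoint and element names are hypothetical.
async function populateCustomerField(customerId: string): Promise<void> {
  const field = document.getElementById("customer-name") as HTMLInputElement;
  field.placeholder = "Loading..."; // the user can keep working meanwhile

  try {
    // The slow web service call runs asynchronously; nothing blocks.
    const response = await fetch(`/api/customers/${customerId}`);
    const customer = await response.json();
    field.value = customer.name; // populate on callback
  } catch {
    field.placeholder = "Couldn't load - you can type it in manually";
  }
}
```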
From many thousands of hours writing software and directly interacting with my users - frequently those who aren't necessarily as computer literate as your average 10 year old - I've noted these points that are key to getting acceptance from just such an audience [written from a user perspective]:
It must do what I want how I want it: Don't just read the spec and expect your code to meet exactly what it says on the paper. Really read what it says on the paper and understand what the user meant by that. Design to the underlying meaning of the words not the black and white of the ink on the paper. If you don't understand exactly what I meant, come and talk to me and I'll explain it until you do. I'll be less happy if you deliver software that missed my whole point than I will by your questions. I'll feel much happier if I get the feeling you're on my side by really trying to understand me.
It must assist my workflow and not impede it: It's great if all I have to do is push one button to complete what would've taken me an hour to do before, but if it freezes my computer for the 20 minutes it takes to complete the task, I'm not going to be a happy camper.
It must be intuitive to use: That means I don't want to have to wade through the documentation you didn't provide me with in order to figure out how to use it. Neither do I want a 20 minute explanation that I'm going to forget 3 minutes after you walk out of the door. Design the software such that my 10 year old could figure it out as easily as they can program the PVR. It means that I should interact with it in a manner that seems logical to me as the person that will be using it day in day out. It doesn't matter if it's functionally correct, if I can't figure out how to use it, I'm not going to use it, much less pay for it.
It must be responsive: I don't want to have to click a button and then wait 10 seconds for a list to load, then select an item from that list and wait for another screen to load before I can select an action to complete on that item, which then takes 5 minutes to complete. Find a way to load the data quickly - if you can't load the data quickly in response to my action, then figure out a trick to make it feel like the data is loading quickly, perhaps by loading it in advance in the background and only displaying what I need displayed in response to my action (see the sketch after this list)... my point is, I don't care what you do, just make it appear like it's doing it quickly.
It must be robust: It doesn't matter what I throw at it, it should accept it and move on. If I do something wrong or put something incorrect in a field, tell me - IN PLAIN ENGLISH!! I don't care about buffer overflows or IOExceptions thrown at line 479 while attempting to open a file. Just handle it and tell me what I did wrong in language I understand.
Give me documentation: Okay, I know I'm not going to read it, and I'm more likely to pick the phone up and call you than remember where I put it when you gave it to me. But knowing it's there makes me feel warm and fuzzy inside. It shows that you cared about the software - and me - enough to write instructions that I can reference outside business hours when you're not available.
Price: This depends entirely on your audience, but in my experience, if you met all of the above points, price tends to be far less of a concern than it might appear on the surface.
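To make the responsiveness trick above concrete, here is a minimal sketch of background prefetching; the endpoints, cache shape, and warm-up trigger are all assumptions, not anything the original answer specified:

```typescript
// Sketch: warm a cache in the background so the list appears instantly
// when the user finally asks for it. All names are hypothetical.
const listCache = new Map<string, unknown>();

// Kick this off as soon as the screen opens, before the user clicks.
function prefetchList(listId: string): void {
  fetch(`/api/lists/${listId}`)
    .then((r) => r.json())
    .then((data) => listCache.set(listId, data))
    .catch(() => {
      /* ignore; showList below falls back to an on-demand fetch */
    });
}

// On click: serve from the cache if the prefetch finished, else fetch now.
async function showList(listId: string): Promise<unknown> {
  if (listCache.has(listId)) return listCache.get(listId); // feels instant
  const r = await fetch(`/api/lists/${listId}`);
  return r.json();
}
```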

Although "you can't judge a book by its cover", people often do with software. I don't know if I would say this is "universal", but certainly common.

I don't think it's even a true phenomenon. I don't care how zippy the "look and feel" is: if it takes a second to echo a keypress, the UI experience will suck. If it takes a long time to repaint the page for small changes, the UI will suck.
Now, as long as the response time of the application is less than some amount, then the look and feel will be a big part of the satisfaction.
Check out some of Jakob Nielsen's books on this.

Isn't it a bit of a false dichotomy? If the look and feel of an application isn't clean, well-organized and effective then you don't have a high-performing application. No matter how fast it may be.

I've found that the best combination is a snappy and easy-to-use GUI. This doesn't necessarily mean your app has to have great performance, but having the GUI freeze on you is a kiss of death. The iPhone's Safari does this well--you can continue to scroll around the screen even if the rendering engine can't keep up with you. Yeah, the no-content hatch marks are ugly, but at least the user knows he's in control.

I think it depends on the users. I work in a medium sized company in the IT department constructing web based software for consumption by the employees of said company. The users range from Human Resources, Manufacturing, R&D, Sales, Finance, to making applications for the CEO. Each of the different departments and users within those departments seem to have different criteria for what makes a quality application.
For instance, a Human Resource department usually deals with a lot of textual data. They spend heaps of time inputting things into forms like employee information, entitlements, recruiting etc. These types of users might opt for the look and feel of an application for this purpose, they want an aesthetically pleasing design that is easy to navigate and intuitive.
On the other hand a department like finance might favor performance in their reporting tools. I have had a few experiences with large SQL tables with complicated queries that took a considerable amount of time to execute. Users that run these kinds of reports many times a day soon get fed up of waiting and would gladly lose a bit of interface intuitiveness in exchange for time that could be spent on other tasks.
So, I would say that you can't make a blanket statement like "All users prefer a speedy application" or "All users like pretty applications". It really depends on the users' preferences, their job requirements, and the application's purpose.

Balance is everything.
The UI needs to look respectable and professional, and flow similarly to other applications so the user has a common experience and thus little learning curve. It shouldn't have unnecessary whistles and bells unless specifically requested.
The performance should be at least tolerable. If you have extra time in a project, I would spend that time speeding it up unless the user specifically asks for UI enhancements. Many times, whistles and bells can compromise performance as some UI enhancements require additional CPU time AND sometimes add awkwardness to the app. At first glance, some of these apps look cool but long term usability suffers in favor of the NEATO factor.

What's important for the user is that using the program is fun. The program should not only be able to do what I want; it must also feel good to use.
Making the user wait at moments the user does not understand or foresee isn't fun.
Crashing and making errors isn't fun either.
But looking good, helping me do my task through its look, and letting me work fast and without workflow interruptions is fun.
Programmers often think that programs that are slow and use a lot of memory are bad, and they measure all their software on memory usage and processor use. But most of your users won't start top or the Windows task manager and look at the footprint of your program; they will just use it, and if it feels good to use the program - and the rest of their computer while the program is running - they will feel good too.
One thing I read about often is using as many CPUs as possible to get a task done for the user in the shortest time. Is this high performance? Your program is very fast, but the computer is very slow at the moment, and switching to the email program because I know the task will take its time is a pain in the ass. So sometimes you may want to free some resources to improve the feel of your program, even if that would slow down your own program.

The most important are price, functionality, compatibility, and reliability.
Looks and performance are both relatively unimportant, and in practice they are therefore both unable to "trump" anything:
Compatibility: for example, in the real world I use MS Word, not because it's fast or pretty but because it's compatible with everyone else who uses it.
Functionality: when I want to book a train in France, I use http://www.voyages-sncf.com/ not because it's fast or pretty (or even outstandingly reliable) but because it has the necessary functionality.
Reliability: if an application crashes then I'm probably not going to use it again, no matter how fast it crashes, or how nice it looked before it crashed.
Price: etc. (say no more).

Related

Website Response Times - General Performance Rules

I am currently in the process of performance tuning a Web Application and have been doing some research into what is considered 'Good' performance. I know this depends often on the application being built, target audience, plus many other factors, but wondered if people follow a general set of rules.
There is always the risk with tuning that there is no end to the job, and at some point one has to make a call on when to stop. But when is this? When can we be happy the job is done?
To kick off the discussion, I have been using the following rules, based on the Jakob Nielsen report (http://www.useit.com/alertbox/response-times.html), which says
The 3 response-time limits are the same today as when I wrote about them in 1993 (based on 40-year-old research by human factors pioneers):
0.1 seconds gives the feeling of instantaneous response — that is, the outcome feels like it was caused by the user, not the computer. This level of responsiveness is essential to support the feeling of direct manipulation (direct manipulation is one of the key GUI techniques to increase user engagement and control — for more about it, see our Principles of Interface Design seminar).
1 second keeps the user's flow of thought seamless. Users can sense a delay, and thus know the computer is generating the outcome, but they still feel in control of the overall experience and that they're moving freely rather than waiting on the computer. This degree of responsiveness is needed for good navigation.
10 seconds keeps the user's attention. From 1–10 seconds, users definitely feel at the mercy of the computer and wish it was faster, but they can handle it. After 10 seconds, they start thinking about other things, making it harder to get their brains back on track once the computer finally does respond.
A 10-second delay will often make users leave a site immediately. And even if they stay, it's harder for them to understand what's going on, making it less likely that they'll succeed in any difficult tasks.
Even a few seconds' delay is enough to create an unpleasant user experience. Users are no longer in control, and they're consciously annoyed by having to wait for the computer. Thus, with repeated short delays, users will give up unless they're extremely committed to completing the task. The result? You can easily lose half your sales (to those less-committed customers) simply because your site is a few seconds too slow for each page.
If you have Apache as your web server, you can use the Page Speed module (mod_pagespeed) made by Google.
Instead of waiting for developers to change legacy, make use of the CPU and memory you have available to provide a better UX.
http://code.google.com/speed/page-speed/docs/module.html
It provides the solution to the most common pain factors and with immediate effect. No coding, no changes to the legacy code of web applications.
The rules are pretty much sensible. Indeed one should aim to have response times in 1 second or less but sometimes the processing will really take longer (bad design, slow machines, waiting on 3rd parties, intense data processing, etc). In this case one can use various tips & tricks to improve the user experience:
use caching (both in the browser and in your frequently processed data - see the sketch after this list)
use progressive loading of data using AJAX where possible (and use progress indicators to give feedback that things are happening)
use tools such as Firebug, YSlow to detect potential issues with your html design and structure
etc etc
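For the caching tip above, a minimal sketch of a time-limited cache for frequently processed data; the TTL value, key scheme, and query helper are assumptions:

```typescript
// Tiny TTL cache for expensive, frequently-requested computations.
type Entry<T> = { value: T; expires: number };

class TtlCache<T> {
  private entries = new Map<string, Entry<T>>();
  constructor(private ttlMs: number) {}

  async get(key: string, compute: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fast path
    const value = await compute(); // slow path: compute once, reuse after
    this.entries.set(key, { value, expires: Date.now() + this.ttlMs });
    return value;
  }
}

// Usage (hypothetical): const reportCache = new TtlCache<string>(60_000);
// const report = await reportCache.get("monthly", runExpensiveQuery);
```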

When is the optimization really worth the time spent on it?

After my last question, I want to understand when an optimization is really worth the time a developer spends on it.
Is it worth spending 4 hours to have queries that are 20% quicker? Yes, no, maybe, yes if...?
Is 'wasting' 7 hours to switch a task to another language to save about 40% of the CPU usage 'worth it'?
My normal iteration for a new project is:
Understand what the customer wants, and what he needs;
Plan the project: what languages and where, database design;
Develop the project;
Test and bug-fix;
Final analysis of the running project and final optimizations;
If the project requires it, a further analysis on the real-life usage of the resources followed by further optimization;
"Write good, maintainable code" is implied.
Obviously, the big 'optimization' part happens at point #2, but often when reviewing the code after the project is over I find some sections that, even if they do their job well, could be improved. This is the rationale for point #5.
To give a concrete example of the last point: I expect 90% of queries to be SELECTs and 10% to be INSERT/UPDATEs, so I load the DB table up with indexes. But after 6 months, I see that in real life there are 10% SELECT queries and 90% INSERT/UPDATEs, so the query speed is not optimized. This is the first example that comes to my mind (and obviously this is more a 'patch' to an initial mis-design than an optimization ;).
Please note that I'm a developer, not a business-man - but I like to have a clear conscience by giving my clients the best, where possible.
I mean, I know that if I lose 50 hours to gain 5% of an application's total speed-up, and the application is used by 10 users, then it maybe isn't worth the time... but what about when it is?
When do you think that an optimization is crucial?
What formula do you usually apply, aware that the time spent optimizing (and the final gain) is not always quantifiable on paper?
EDIT: sorry, but I can't accept an answer like 'until people complain about it, optimization is not needed'. It can be a business view (questionable, imho), but not a developer's or (imho too) a good-sense answer. I know, this question is really subjective.
I agree with Cheeso that performance optimization should be deferred until after some analysis of the real-life usage and load of the project, but a small'n'quick optimization can be done immediately after the project is over.
Thanks to all ;)
YAGNI. Unless people complain, a lot.
EDIT : I built a library that was slightly slower than the alternatives out there. It was still gaining usage and share because it was nicer to use and more powerful. I continued to invest in features and capability, deferring any work on performance.
At some point there were enough features, and performance bubbled to the top of the list; I finally spent some time working on perf improvement, but only after considering the effort for a long time.
I think this is the right way to approach it.
There are (at least) two categories of "efficiency" to mention here:
UI applications (and their dependencies), where the most important measure is the response time to the user.
Batch processing, where the main indicator is total running time.
In the first case, there are well-documented rules about response times. If you care about product quality, you need to keep response times short. The shorter the better, of course, but the breaking points are about:
100 ms for an "immediate" response; animation and other "real-time" activities need to happen at least this fast;
1 second for an "uninterrupted" response. Any more than this and users will be frustrated; you also need to start thinking about showing a progress screen past this point.
10 seconds for retaining user focus. Any worse than this and your users will be pissed off.
If you're finding that several operations are taking more than 10 seconds, and you can fix the performance problems with a sane amount of effort (I don't think there's a hard limit but personally I'd say definitely anything under 1 man-month and probably anything under 3-4 months), then you should definitely put the effort into fixing it.
Similarly, if you find yourself creeping past that 1-second threshold, you should be trying very hard to make it faster. At a minimum, compare the time it would take to improve the performance of your app with the time it would take to redo every slow screen with progress dialogs and background threads that the user can cancel - because it is your responsibility as a designer to provide that if the app is too slow.
But don't make a decision purely on that basis - the user experience matters too. If it'll take you 1 week to stick in some async progress dialogs and 3 weeks to get the running times under 1 second, I would still go with the latter. IMO, anything under a man-month is justifiable if the problem is application-wide; if it's just one report that's run relatively infrequently, I'd probably let it go.
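As a rough sketch of the progress-dialog fallback described above - show progress only once the 1-second mark passes, and let the user cancel - with the dialog helpers declared as hypothetical:

```typescript
// Show a progress dialog only after the 1-second threshold is crossed,
// and let the user cancel. The dialog helpers are hypothetical.
declare function showProgressDialog(opts: { onCancel: () => void }): void;
declare function hideProgressDialog(): void;

async function runWithProgress(url: string): Promise<Response | null> {
  const controller = new AbortController();
  const timer = setTimeout(() => {
    showProgressDialog({ onCancel: () => controller.abort() });
  }, 1000); // under 1s the user never sees a dialog at all

  try {
    return await fetch(url, { signal: controller.signal });
  } catch {
    return null; // cancelled by the user, or the request failed
  } finally {
    clearTimeout(timer);
    hideProgressDialog();
  }
}
```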
If your application is real-time - graphics-related for example - then I would classify it the same way as the 10-second mark for non-realtime apps. That is, you need to make every effort possible to speed it up. Flickering is unacceptable in a game or in an image editor. Stutters and glitches are unacceptable in audio processing. Even for something as basic as text input, a 500 ms delay between the key being pressed and the character appearing is completely unacceptable unless you're connected via remote desktop or something. No amount of effort is too much for fixing these kinds of problems.
Now for the second case, which I think is mostly self-evident. If you're doing batch processing then you generally have a scalability concern. As long as the batch is able to run in the time allotted, you don't need to improve it. But if your data is growing, if the batch is supposed to run overnight and you start to see it creeping into the wee hours of the morning and interrupting people's work at 9:15 AM, then clearly you need to work on performance.
Actually, you really can't wait that long; once it fails to complete in the required time, you may already be in big trouble. You have to actively monitor the situation and maintain some sort of safety margin - say a maximum running time of 5 hours out of the available 6 before you start to worry.
So the answer for batch processes is obvious. You have a hard requirement that the batch must finish within a certain time. Therefore, if you are getting close to the edge, performance must be improved, regardless of how difficult/costly it is. The question then becomes what the most economical means of improving the process is.
If it costs significantly less to just throw some more hardware at the problem (and you know for a fact that the problem really does scale with hardware), then don't spend any time optimizing, just buy new hardware. Otherwise, figure out what combination of design optimization and hardware upgrades is going to get you the best ROI. It's almost purely a cost decision at this point.
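A minimal sketch of that safety-margin monitoring, using the 5-of-6-hours figures from the answer above; the console alerting is an assumption (a real system would page someone):

```typescript
// Alert while there is still headroom: warn past the 5-hour safety
// margin even though the hard window is 6 hours.
const HARD_LIMIT_MS = 6 * 60 * 60 * 1000;
const SAFETY_MARGIN_MS = 5 * 60 * 60 * 1000;

async function monitoredBatch(run: () => Promise<void>): Promise<void> {
  const start = Date.now();
  await run();
  const elapsed = Date.now() - start;

  if (elapsed > HARD_LIMIT_MS) {
    console.error("Batch blew its window - people's work is impacted");
  } else if (elapsed > SAFETY_MARGIN_MS) {
    // Finished in time, but the margin is shrinking: start the
    // performance work now, per the advice above.
    console.warn(`Batch took ${(elapsed / 3_600_000).toFixed(1)}h`);
  }
}
```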
That's about all I have to say on the subject. Shame on the people who respond to this with "YAGNI". It's your professional responsibility to know or at least find out whether or not you "need it." Assuming that anything is acceptable until customers complain is an abdication of this responsibility.
Simply because your customers don't demand it doesn't mean you don't need to consider it. Your customers don't demand unit tests, either, or even reasonably good/maintainable code, but you provide those things anyway because it is part of your profession. And at the end of the day, your customers will be a lot happier with a smooth, fast product than with any of those other developer-centric things.
Optimization is worth it when it is necessary.
If we have promised the client response times on holiday package searches that are 5 seconds or less and that the system will run on a single Oracle server (of whatever spec) and the searches are taking 30 seconds at peak load, then the optimization is definitely worth it because we're not going to get paid otherwise.
When you are initially developing a system, you (if you are a good developer) are designing things to be efficient without wasting time on premature optimization. If the resulting system isn't fast enough, you optimize. But your question seems to be suggesting that there's some hand-wavey additional optimization that you might do if you feel that it's worth it. That's not a good way to think about it because it implies that you haven't got a firm target in mind for what is acceptable to begin with. You need to discuss it with the stakeholders and set some kind of target before you start worrying about what kind of optimizations you need to do.
As everyone said in the other answers: when it makes monetary sense to change something, then it needs changing. In most cases good enough wins the day. If the customers aren't complaining, then it is good enough. If they are complaining, then fix it enough so they stop complaining. Agile methodologies will give you some guidance on how to know when enough is enough. Who cares if something is using 40% more CPU than you think it needs? If it is working and the customers are happy, then it is good enough. Really simple: get it working and maintainable, and then wait for complaints that probably will never come.
If what you are worried about were really a problem, NO ONE would ever have started using Java to build mission-critical server-side applications. Or Python or Erlang or anything else that isn't C, for that matter. And if they had worried, nothing would get done in a time frame to even acquire that first customer that you are so worried about losing. You will know that you need to change something well before it becomes a problem.
Good posting everyone.
Have you looked at the unnecessary usage of transactions for simple SELECT(s)? I got burned on that one a few times... I also did some code cleanup and found MANY graphs being returned when maybe 10 records were needed... on and on... sometimes it's not YOUR code per se, but someone cutting corners... Good luck!
If the client doesn't see a need to do performance optimization, then there's no reason to do it.
Defining a measurable performance SLA with the client (e.g., 95% of queries complete in under 2 seconds) near the beginning of the project lets you know if you're meeting that goal, or if you have more optimization to do. Performance testing at current and estimated future loads gives you the data that you need to see if you're meeting the SLA.
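For instance, checking the hypothetical "95% under 2 seconds" SLA against measured timings might look like this sketch:

```typescript
// Check "95% of queries complete in under 2 seconds" against samples
// gathered from a load test (nearest-rank percentile).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

function meetsSla(queryTimesMs: number[]): boolean {
  return percentile(queryTimesMs, 95) < 2000;
}

// Run this against timings at current load AND at estimated future load;
// a false result tells you more optimization (or hardware) is needed.
```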
Optimization is rarely worth it before you know what needs to be optimized. Always remember that if I/O is basically idle and CPU is low, you're not getting anything out of the computer. Obviously you don't want the CPU pegged all the time and you don't want to be running out of I/O bandwidth, but realize that trying to have the computer basically idle all day while it performs intense operations is unrealistic.
Wait until you start to reach a predefined threshold (80% utilization is the mark I usually use, others think that's too high/low) and then optimize at that point if necessary. Keep in mind that the best solution may be scaling up or scaling out and not actually optimizing the software.
Use Amdahl's law. It shows you the overall improvement you get when optimizing a certain part of a system.
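For reference: if a fraction p of the total running time is spent in the part you optimize, and that part becomes s times faster, then

```latex
\text{overall speedup} = \frac{1}{(1 - p) + p/s}
```

So a part that accounts for only 10% of the time (p = 0.1) can never buy more than about an 11% overall improvement (1/0.9 ≈ 1.11), no matter how large s gets.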
Also: If it ain't broke, don't fix it.
Optimization is worth the time spent on it when you get good speedups for spending little time in optimizing (obviously). To get that, you need tools/techniques that lead you very quickly to the code whose optimization yields the most benefit.
It is common to think that the way to find that code is by measuring the time spent by functions, but to my mind that only provides clues - you still have to play detective. What takes me straight to the code is stackshots. Here is an example of a 40-times speedup, achieved by finding and fixing several problems. Others on SO have reported speedup factors from 7 to 60, achieved with little effort.*
*(7x: Comment 1. 60x: Comment 30.)

How to explain to your boss that code/resource optimization is important?

Ahhhw, every time it's so frustrating...
We have a dedicated server at our hosting company, and every time I have to write a new app (or an addition to a pre-existing app), I tend to 'lose' some time optimizing the code in various ways (reducing the db queries, optimizing the db structure, reducing the bandwidth, etc.), depending on what the app is supposed to do.
Obviously, the point is not that I write bad code and then rebuild it; it's just that after the project is complete, I always find some things that could be done better.
And every time my boss catches me doing this, he says: 'You're wasting your time! If the application needs more resources, we'll buy more RAM, more CPU, or more bandwidth!'
What is the best (and simplest) way to explain to him that optimization is still important, and that upgrading the hardware of a (production!) server is not so easy or automagical?
EDIT: I'm not talking just about database optimization, but about every aspect of an application.
Chances are you are really wasting time. Most of the optimizations that you learn in college or university are sort of irrelevant in the business world. I recommend concentrating on those optimizations where you can quantify WHY your optimization is worth your company's money (your time spent is their money spent).
When trying to optimise code you need to follow the optimisation cycle. There are no short-cuts.
Define specific, detailed, measurable, performance requirements.
Measure your application's performance.
If it meets the requirements, stop, as continuing is a waste of time/money.
Determine where the bottleneck is (which machine? CPU, memory, disk, etc.?).
Fix the bottleneck (either hardware or software fix, whichever is faster/cheaper).
Go to 2.
From your post it doesn't sound like you have done 1 or 2, so you've skipped over 3 and 4 because you don't know enough to do them, and are now stumbling blindly around in 5 attempting to optimise whatever you think needs optimising.
So from what you've said I'd have to agree with your boss. You are wasting your time. Or at least you haven't followed enough process to prove otherwise, which is essentially the same thing.
Keep detailed notes and time sheets the next time you make a server move/upgrade, and log every minute of work that arose from it (including things in the weeks after). Then you'll have hard evidence that upgrading is expensive, and that optimizations help delay the investment.
In the current situation, you might be able to do something with benchmarks and statistics along the line of "without my optimization, the server will be at 90% capacity with X number of users. With my optimization, we can cater for Y users more, before we have to get a new machine."
On the other hand, the line between optimization and over-optimization is thin. While writing code that doesn't waste resources is good craftsmanship and should be a human right, RAM and disk space are awfully cheap these days. Optimizations also tend to make code more complicated, and harder to maintain for others. When you control your own hardware, code optimization may not always be the top goal.
Do you have performance objectives for the application(s)? Free capacity, response time, bandwidth usage etc. Without them you will find it difficult to justify why an improvement needs to be made. You need to be able to quantify the problem before people will spend money on it.
If the application is already meeting the requirements then your boss is probably right.
To ensure reserve capacity for future customers, you (and your boss) could set a threshold such that once performance comes within a certain percentage of the requirements, you need to investigate the problem. This is the time for you to request/push/negotiate for performing optimisations, as he will be aware of the problem and the risk of doing nothing. If you can recommend some changes to correct the problem that will cost less than the hardware, then hopefully he should agree to let you do it. Of course, the reverse still applies: if the total cost of the hardware upgrade is cheaper, then that is probably the better option.
Wait until buying RAM and CPU becomes more expensive than paying you to optimize the code. Then you have a case.
There's a break even point between improving software vs. improving hardware. For desktop systems, a good threshold is by purchase time:
If that machine fails, can you go to the next mall and buy a replacement within an hour?
As long as you can answer yes, optimization is futile - an investment into the future maybe, but is that really your decision alone?
A similar rule for web sites might be "how long would it take to get a new hosting contract on that?"
This isn't perfect; it covers just a single aspect of scalability. I like it because it's an easy-to-understand aspect, outside of technical concerns.
Of course, you should have more material in case that aspect suddenly gets brushed away. It helps to just ask what the company's/boss's plans are for a few key scenarios - e.g. what contingency plans there are for DDOS attacks, for "rush events", etc. Having some data to back that up, and some key stories ("company foo was mentioned on a popular night show, the web server broke down under requests, business was lost, potential customers were angered"), helps a lot.
The key thing to realize is that (assuming from your description alone) it is your responsibility to foresee such events and provide the technical solutions, but it's not your responsibility to make these decisions.
Also, getting a written statement from your boss that "five hours downtime isn't a problem, get back to work on the important things!" is a good CYA.
about the same way one would explain to a developer that code/resource optimization is unimportant ...
i guess this problem arises because your way of measuring importance is different from his ...
his job is to care about money and make sure he gets the "most" out of it ... and frankly, servers are cheaper than developers ... and managed servers are cheaper than admins ... and the ugly truth is that many people think that way ... personally, I think PHP isn't really the right choice for performance-critical parts of an app ... neither are RDBMSs ... but nearly every managed server runs PHP and MySQL, and business clients prefer to have an app they can run everywhere, based on a technology widely known, which turns out to be PHP and MySQL on the server side ... the world is a cruel place and the truth is ugly ... now when it comes to you, you have only 2 choices:
live with it
work with people who are more like you, and who share the same ambitions
your boss does the job right, from his perspective ... what you do is a waste of company money, because the company's goal is not to write beautiful and highly optimized code ... whatever your company does, in the end you only provide a tool ... and the value of a tool is measured by the use it provides in relation to the money it costs ...
if your optimizations were economically reasonable, then the easiest way would be not to optimize anymore and wait for the moment until your boss comes to you, asking for speedup ...
i suggest, you either look for a new job, or focus on other important things:
good design and use of best practices such as KISS, DRY, SOLID and GRASP
scalable architecture (doesn't mean you have to actually implement it, just have an idea how the whole thing could scale well)
maintainability, including readability and good documentation
flexibility & extensibility
robustness & correctness
do not reinvent the wheel and use the right tools for a job - use adequate technologies, libraries and frameworks
fast development
these goals are important for the present and the future of a software project (also when it comes to money (especially the bold ones)) ... and they provide the possibility of doing optimization when actually needed ...
this probably isn't the answer you wanted, but I hope it helps ...
Optimizing is often really just a waste of time; what is more important is optimizing for maintainability:
- keep the code simple (see: KISS)
- move duplicate code into functions (see: DRY)
- optimize comments (as few comments as possible, as many as necessary)
- remove unused code
Those changes do not necessarily affect the speed of the program but they reduce the time needed to implement changes and fix bugs so they have real business value as they save expensive developer time.
If you're having to explain why there is a need for optimisation, then you're already on a low-win-rate path. Unfortunately, the best argument arises when performance is becoming a noticeable problem (e.g. customers complaining) - that makes people sit up and listen if they're getting constant ear-ache about it. Often, though, throwing hardware at the problem can be the best/cheapest solution.
For me, however, performance is important: when working on highly scalable systems, you need to spend time on optimisations. It should at the very least be on your mind while you're developing, and a lot of it can be absorbed into the dev process (as opposed to finishing and then going back to optimise).
e.g.
- What happens if my database has 100,000,000 records instead of 100,000? (you can easily fool SQL Server into thinking there are more records in a table than there actually are to see what the effect on the execution plan would be).
If you're being proactive about optimisation (which I personally think is good), then you need to try to give your boss a real-world justification. e.g. "if we optimise our site by cutting down the page size, we could save x amount of bandwidth a month, saving us $xx".
You do have to be careful about micro-optimisations though as you can micro-optimise until the cows come home, for much lower benefit.
You could just wait until your boss says "Why is it taking so long?" Then you could say "Hey, boss, we could buy 10 times as much hardware, or I could spend a couple days and make it 10 times faster". Here's an example.
Added: If your boss says "Well, why did you make it slow in the first place?" I would have no trouble explaining that he could hire God's Right Hand Programmer, and the code as first written would still have room for optimizing.
If you are on a relational database (and you're not working with trivial applications), you cannot afford to write bad database queries. Since you are using a red-gate tag, I assume you are on SQL Server. If you design a bad schema (including queries), then when you start getting moderate traffic you will be forced to scale in horrendous ways. Once you get a high level of traffic you will be screwed - and will need to hire a very expensive consultant/DBA to undo the mess. Use this argument with your boss :D.
What I am trying to get at is even though SQL is declarative and it doesn't appear that you have to pay attention to performance -- it doesn't really work that way. You have to design your data model by keeping in mind the way it is going to be used.
If you are writing business logic in application code... well, that doesn't matter much - you can fix that later with less trouble if performance becomes an issue. Most business logic these days is stateless (especially if it's web-based or web-service based), meaning you can just throw more machines at the problem; if it's not, then it's just a bit harder.
Every time something takes longer than it would have if the refactoring had been done ahead of time, make sure your boss hears about it.
IMO, It's less a case of convincing that optimizations need to be done (and based on your Boss's responses, you're not going to win this fight), and more a case of always building in refactoring time into your estimates.
Do a cost analysis...
This will take me X hours and will mean every project needs 50% less data storage. Based on us doing 2 projects a year using on average 100Gb space, that will save $Y after a year, $Z after 3 years.
Crude example but it suffices.
Ultimately you and your boss are both wrong... you think code has to be fast (primarily) to satisfy your idealism; he doesn't care at all, as he just wants to deliver products.

What techniques might be successful in monitoring UI events for the purpose of identifying friction points regarding the usability of an application?

Most battle-tested, real-world software contains extensive error checking and logging. We frequently employ complex logging systems to help diagnose, and occasionally predict, failures before they happen. We generally focus on reporting the catastrophic failures on the server-side.
These failures are important of course, but I think there is another class of errors that are overlooked, and yet perhaps equally important. Whether you are using an iPhone, blackberry, laptop, desktop, or point-of-sale touch screen, user interaction is typically processed as discrete events. I suspect that identifying patterns of UI events can expose areas where the user is having difficulty regarding efficiently interacting with the application. I found an interesting academic paper on this subject here. I think the ideas presented in the paper are great, but perhaps other, simpler, techniques might yield nice results. What are your ideas and experiences in this area?
Interesting paper. One of the things I get out of it is that it's not easy to make sense out of user event logs unless you have a very specific hypothesis you are trying to test. They can be very useful, for example, if you know it's taking users too long to complete Task X or they're failing to complete it altogether. It's clearly a whole different ballgame to try to analyze sequences without any other supporting information and make sense out of them (though it can be done if you use the sophisticated techniques mentioned in the paper).
One simpler method would be to simply measure the total time to complete a given task that you know is common and important. If it's a shopping application, for example, the time to complete the check-out on a purchase would probably be something useful to collect. It's not quite that simple, though, because you'd have to at least take into account interruptions (e.g., the user's boss came into the room and he abandoned his shopping for actual work--not that I've ever done this :-P). You could have a simple rule that says, if there were no events logged for X seconds, assume the user is not paying attention to the screen.
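A minimal sketch of that measurement, including the "no events for X seconds means the user stepped away" rule; the 30-second threshold and the logging are assumptions:

```typescript
// Active time on a task, excluding idle gaps (per the rule above: a long
// silence means the user stepped away). The 30s threshold is an assumption.
const IDLE_GAP_MS = 30_000;

class TaskTimer {
  private activeMs = 0;
  private lastEvent = Date.now();

  // Call from every UI event handler (click, keypress, scroll, ...).
  recordEvent(): void {
    const now = Date.now();
    const gap = now - this.lastEvent;
    if (gap < IDLE_GAP_MS) this.activeMs += gap; // count only engaged time
    this.lastEvent = now;
  }

  // Call when the task completes (e.g., checkout confirmed).
  finish(taskName: string): void {
    console.log(`${taskName}: ${(this.activeMs / 1000).toFixed(1)}s active`);
  }
}
```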
Another simple thing you could do would be to check for obvious signs of errors, such as a user employing the "undo" facility or inputting information into an input box in a web app that causes a validation check to trigger (e.g., failing to enter required information and putting information in the wrong format). If certain input boxes result in a high number of errors, it might be a sign you should be more flexible in allowing different formats (e.g., allow users to enter a date as "6/28/09", "6-28-09", "june 28, 2009" instead of requiring a single format).
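And a sketch of counting validation failures per field, so the inputs causing the most friction stand out; the /log/validation endpoint is hypothetical:

```typescript
// Count validation failures per field; fields with high counts are the
// friction points. The /log/validation endpoint is hypothetical.
const validationFailures = new Map<string, number>();

function recordValidationFailure(fieldId: string): void {
  validationFailures.set(fieldId, (validationFailures.get(fieldId) ?? 0) + 1);
}

// Periodically ship the counts somewhere you can analyze them later.
setInterval(() => {
  if (validationFailures.size === 0) return;
  navigator.sendBeacon(
    "/log/validation",
    JSON.stringify([...validationFailures]),
  );
  validationFailures.clear();
}, 60_000);
```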
One other idea: if your application has contextual help, certainly count how many times people use it for each page/section/module of your application.
I doubt anything I'm saying is earth shattering, but maybe it will give you some ideas.
-Dan
I once wrote an app that tracked, for each command, whether it was accessed via the menu or a command-key equivalent; it gave us pretty good insight into which key-equivalents were expendable. We didn't have toolbars at the time, but the same kind of logging and analysis could of course apply to them, or to context-sensitive menus: anywhere you want to provide a limited set of valuable options.
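Logging of that sort can be very simple; a hedged sketch, with illustrative names:

```typescript
// Per command, count menu vs. shortcut invocations; a shortcut that is
// rarely used relative to its menu item is expendable.
type Source = "menu" | "shortcut";
const commandCounts = new Map<string, Record<Source, number>>();

function recordCommand(command: string, source: Source): void {
  const counts = commandCounts.get(command) ?? { menu: 0, shortcut: 0 };
  counts[source] += 1;
  commandCounts.set(command, counts);
}

// e.g. recordCommand("save", "shortcut"); recordCommand("save", "menu");
```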
You might want to Google usability testing. I've never heard of it being done by having a program recognize patterns of events, but rather by having humans watch humans use the program.

Should users see their own performance stats in real time

I know this question seems extremely open-ended, so I will try to narrow the scope.
I have been struggling for some time over whether to include or exclude real-time user performance stats in an application GUI.
Does anyone have any info on the harm vs gain in including these stats in an app?
i.e. number of emails answered, number of customer calls taken, average time per customer etc.
The users are begging for more info on their stats, as it is how they are rated. However there is concern that given access to see their performance real time or near real time it will negatively affect their work.
I can kind of equate it to being measured on how many lines of code I churned out in one day. Would this help me to be more productive, or just teach me to write code as fast as possible and most likely make a lot of mistakes?
In my application I can think of these scenarios:
i.e. BAD: "I see I have spent 10 minutes on this issue already, I need to finish this up asap"
vs
i.e. GOOD: "I was able to help that customer quickly, My productivity is good today"
I think there may indeed be a phenomenon one could call “The Facebook Effect,” where by merely presenting a metric (e.g., number of “friends,” minutes per call), you imply that it’s important. However, I don’t think that’s an issue here because it’s clear that users already regard these metrics as very important, otherwise they wouldn’t be begging you for them.
I believe there is a wealth of psychology research showing that adding faster feedback on a metric will very likely improve user performance on that metric. Furthermore, it is emotionally stressful for people to be evaluated on a metric that they cannot track well themselves, which appears to be the case now.
I say show the metrics real time. Management obviously wants high performance on them otherwise they wouldn’t be rating the users on it. User want it, and it’ll reduce their job stress. Sounds like a win-win to me.
Of course, the customers probably lose out, so there's a bit of an ethical dilemma. However, if these metrics are a problem, then the problem lies in corporate policy, not in the user interface. By showing the metrics to the user, you will highlight whatever shortcomings this policy has to the managers. Maybe management will get a clue.
If it’s part of your role, you could propose measurements of customer satisfaction (e.g., after-call automated survey, random follow-up surveys by a human) so that managers can at least track the impacts of their policy. Assuming they care (how are they evaluated?).
KISS principle: is this information pertinent to what the users will be doing?
If not, then do not add it. One thing that I like in web apps is seeing the time it takes for a page to load after updating / submitting something (basically posting back data).
That way I / a user can tell if there is an issue with data going to the database or a caching issue.
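A small sketch of surfacing that round-trip time in the page itself; the form handling and badge element are assumptions:

```typescript
// Time the submit round trip and show it to the user, so slow saves are
// visible rather than mysterious. The badge element is an assumption.
async function submitWithTiming(form: HTMLFormElement): Promise<void> {
  const start = performance.now();
  await fetch(form.action, { method: "POST", body: new FormData(form) });
  const elapsedMs = performance.now() - start;

  const badge = document.getElementById("save-time");
  if (badge) badge.textContent = `Saved in ${(elapsedMs / 1000).toFixed(1)}s`;
}
```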
The problem with displaying, say, calls to customers is that your average times may not be as accurate as you'd hope. You may have a customer who likes to chat, or who is having technical issues that are irrelevant to your business but nonetheless get reflected in your app's stats.
This data shouldn't be trusted because of these types of things. Another thing is, if you start displaying call times you end up having employee competitions to see who can get off the phone first... that's when you start hurting yourself more... bad customer service.
A few big names used to rate employees based on things like call volume, average time on the call, etc. Remember when Dell tried to outsource all their technical calls to India? Customers here in the US were frustrated, and calls were either too long (not understanding) or too short (customers did not want to deal with it). Well, the big shooters thought, hey, call times are pretty in line with what we had forecasted and our costs are going down. But it hit rock bottom as time went on...
Your application is useful as a tool to convey this information, and you should definitely include it if users are clamoring for more visibility and transparency into the data it collects. A user's response to the information, however, will largely depend on what the organizational culture already does with it.
For example, does management aggressively encourage users to keep calls short and deal with as many customers as possible, and fire those who don't meet quotas? If so, that's going to provoke an equally strong reaction from users to keep their calls short when they discover they've been a bit on the longer side. ("Uh-oh, I'm getting to the 2-minute mark. I'd better hang up and fake a disconnection to avoid getting my average call length too high.")
Conversely, if management simply encourages everyone to do a good job and provide excellent customer service, this information can be synthesized by users in the overall context of this work. ("I've spent a lot of time on this customer -- I should see if I can wrap things up shortly, or escalate him if I can't fix this problem.")
