When designing a user interface for an application that is going to be used internationally, it is possible to accidentally design an aspect of the UI that is offensive to or inappropriate in another culture.
Have you ever encountered such an issue and if so, how did you resolve the design problem?
Some examples:
A GPS skyplot in a surveying application to be used in Northern Ireland. Satellites had to be shown in a different colour to indicate whether they were in ascent or descent in the sky. Lots of satellites in ascent is considered good, as it indicates that GPS coverage will be getting better over the next few hours. I chose green for ascent and orange for descent. I had not realised that these colours are associated with Irish Catholics and Irish Protestants. It was suggested that we change the colours. In the end blue and a deep pink were chosen.
For applications that are going to be translated into German, I've found that you should add about 50% extra space for the German text compared to the English text.
A friend was working on a battlefield planning application for a customer in the Middle East. It was mandated that all crosshairs should take the form of a diagonal cross, to avoid any religious significance.
(Edit - added this) In the UK a tick mark (something like √) means yes, whereas a cross (x) means no. In Windows 3.1 selected checkboxes used a cross, which confused me the first time I saw it. Since Windows 95 they've used (what I would call) a tick mark. As far as I can tell, both a tick and a cross are called a check mark in the US, and mean the same thing.
Edit
Please ensure that any reply you add to this question is as culturally sensitive as the user interfaces we're all trying to build! Thanks.
You should try to follow the i18n and l10n pointers provided by the look and feel guidelines for the UI library you're using, or platform you're delivering to. They often contain hints on how to avoid cultural issues, and may even contain icon libraries that have had extensive testing for such potential banana skins.
Windows User Experience Interaction Guidelines
Java Look and Feel Design Guidelines
Apple Human Interface Guidelines
GNOME Human Interface Guidelines
KDE User Interface Guidelines
I guess the most important thing is designing your application with i18n in mind from the ground up, so that your UI can be resized depending on the translated text; mnemonics are appropriate for different languages; labels are to the left for Latin-script languages, but on the right for Hebrew and Arabic, and so on.
Designing with i18n and l10n in mind means that shipping your product to a location with a different culture, language or currency will not require a re-write, or a different version, just different resources.
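As a minimal illustration of "just different resources", here is a small Java sketch using ResourceBundle; the bundle name "messages" and the key "greeting" are made up for the example, and in a real project the property files would be supplied by the localizers:

    import java.util.Locale;
    import java.util.ResourceBundle;

    public class LocalizedLabels {
        public static void main(String[] args) {
            // Assumes property files such as messages_en.properties and
            // messages_de.properties on the classpath, each defining "greeting".
            ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.GERMAN);
            String greeting = bundle.getString("greeting");
            // Shipping to a new locale means adding a new properties file,
            // not touching this code.
            System.out.println(greeting);
        }
    }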
Generally speaking, I believe you'll run into more problems with graphics and icons than you will with text (apart from embarrassing translations), simply because people identify more strongly with symbols than with particular passages of text.
The idiom to use a big (green) checkmark symbol to mean OK/Yes/Correct is somewhat confusing in Sweden, where the checkmark is typically used to mean "wrong". I.e. when grading tests in school, a teacher will often use a capital R (from the Swedish word for "Right") for a correct answer, and a checkmark ("bock" in Swedish) for "wrong".
I find this issue interesting not only because I'm in the affected group (I'm Swedish), but also because it highlights that these kinds of issues can appear where you might not expect them. Sweden is a fairly generic Western culture, so you might assume that the usage of these kinds of symbols would be the same.
Another Yes/No example for Japan
I have to use an online database tool that has some user settings that can be toggled on and off. On is indicated by a green cross (×), Off is indicated by a red circle (○).
In Japan this is confusing since Off (NG, stop, closed) in general would be indicated by a cross (× : batsu) and On (ok, open) by a circle (○ : maru).
Adding the green/red colour combination to that makes things very confusing.
There is a good reason why the Windows resources (and not only) contain more than just strings.
A lot of elements should be considered localizable:
- colors
- images (including icons, toolbars, etc.)
- sounds
- font and font sizes
- alignment
- control flags and attributes (think UI mirroring for Arabic & Hebrew)
- dialog sizes
- etc.
This way, most of the problems can be addressed by the localizers, without any code changes.
For dialogs, resizing should either be done by the localizers (so leaving extra space is not necessary), or you should use auto-layout (available in frameworks like Java, .NET, Flex, wxWidgets, Qt, etc.).
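As a rough illustration of what auto-layout plus UI mirroring buys you, here is a minimal Swing sketch (purely illustrative; the dialog contents are made up): pack() sizes the window to fit the possibly longer translated labels, and applyComponentOrientation() flips the layout for right-to-left locales such as Arabic or Hebrew:

    import java.awt.ComponentOrientation;
    import java.util.Locale;
    import javax.swing.*;

    public class AutoLayoutDialog {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Settings");
                JPanel panel = new JPanel();
                panel.add(new JLabel("User name:")); // label text would come from resources
                panel.add(new JTextField(15));
                frame.add(panel);

                // Mirror the layout automatically for right-to-left locales.
                frame.applyComponentOrientation(
                    ComponentOrientation.getOrientation(Locale.getDefault()));

                // pack() sizes the window to the (possibly longer, translated) text,
                // so no fixed extra space has to be reserved by the designer.
                frame.pack();
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }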
This might also be a good read: http://msdn.microsoft.com/en-us/goglobal/bb688120.aspx
You will not be able to identify all issues on your own.
Specialists in UI/cultures will cost you lots of money.
Design first for your primary region and then discover and fix (if you can) issues one by one.
/* Here was my opinion on the religious issues as applied to software design. Removed as the thread starter did not like it. */
Well, these issues begin to play a role when you grow to the big/international/corporate level. Until that happens, better not to bother. They call it "premature optimization".
You're right about German: the content/markup ratio is noticeably lower compared to English. Another difference is that words tend to be very long, which means not only that the text area will be longer, but also that you'll likely run into problems when words expand out of their container and don't wrap.
How would you like that one: Kesselsteinentfernungsmittelherstellungsbetrieb ?
Honestly, you won't be able to design it in a way that pleases everyone. World cultures are in many ways contradictory. As soon as you reach a certain level of expansion, you'll inevitably become a rip-off target for plenty of jerks around the world who find the UI and colors of your application "offensive".
At this point, everyone knows that there's a limit to the number of ShellIconOverlayIdentifiers (from MSDN):
The number of different icon overlay handlers that the system can support is limited by the amount of space available for icon overlays in the system image list. There are currently fifteen slots allotted for icon overlays, some of which are reserved by the system. For this reason, icon overlay handlers should be implemented only if there are no satisfactory alternatives
I can understand the 15-overlay limit in Windows 95. But in an environment where there are gigs of RAM, numerous cores, and GPUs, is there some technical reason for such a low number in a modern operating system?
And why isn't this value configurable?
Before giving the 'performance' answer, consider:
Windows allows for configuration such that you can kill performance... why pick on this issue specifically?
Unless someone here happens to work on the Windows Shell team, I doubt that you're going to get an answer that really addresses the technical limitations and how they affect the design choice. But I'll try...
My guess is that there isn't any technical limitation, or at least there isn't one now. The real reason is presumably that no one has ever taken the time to sit down and update the code, the design, and the spec to lift this limitation. Features aren't implemented by default, and just because the computing environment has changed in the last few years doesn't mean that someone sat down and rewrote Windows to take full advantage of all those changes.
You should also consider that this is more than likely a conscious design choice, rather than an imposed limitation. Raymond Chen (who actually does work on the shell team) published a blog entry responding to the uproar about Windows 7 removing the "sharing hand" overlay. He makes a compelling argument that the icon overlay is really not a desirable way of showing information (above and beyond the fact that the system is limited to 15) [emphasis added]:
Generally speaking, overlays are not a good way of presenting information because there can be only one overlay per icon, and there is a limit of 15 overlays per ImageList. If there are two or more overlays which apply to an item, then one will win and the others will lose, at which point the value of the overlay as a way of determining what properties apply to an item diminishes since the only way to be sure that a property is missing is when you see no overlay at all. (If you see some other overlay, you can't tell whether it's because your property is missing or because that other overlay is showing instead of yours.)
It seems reasonable to me that the extra clutter added to the shell is simply not worth it in the majority of real-world cases. The Windows Shell team obviously reached the same conclusion and cut the "sharing hand" overlay. Raymond's direct explanation:
Given the changes in how people use computers, sharing information is becoming more and more of the default state. When you set up a HomeGroup, pretty much everything is going to be shared. To remove the visual clutter, the information was moved to the Details pane.
And, I know you specifically asked not to mention performance, but Windows really does try to keep you from shooting yourself in the foot. Users demand responsiveness in the shell, and overlay icons can interfere with this. As further evidence that they are not the priority, another blog post by the same Raymond Chen chastises:
Another example of applications having a selfish view of performance came from a company developing an icon overlay handler. The shell treats overlay computation as a low-priority item, since it is more important to get icons on the screen so the user can start doing whatever it is they wanted to be doing. The decorations can come later. This company wanted to know if there was a way they could improve their performance and get their overlay onto the screen even before the icon shows up, demonstrating a phenomenally selfish interpretation of "performance".
Excellent response on the practical issues by Cody. As to why 15 and not some other number, the limit is baked into the ImageList control itself.
This is all very well and good, as explained by Cody Gray, but frankly it is pretty unimaginative, and, as reported from behind the scenes, it sounds a bit frustrated.
In 2015, with Windows 10, surely there can be and needs to be something better. I noted about thirty overlays present on my machine and had to prioritize the ones I most wanted to see, which is not something most people should have to worry about at all. I also see aggressive vendors like Box over-competing to try to prioritize themselves, and that will never lead anywhere good.
Here's a possibility: what if icons with multiple overlays had a generic overlay indicator, a small rectangular matrix of multiple colors like the Google Chrome Apps button? Icons with a single overlay would just show that overlay from the long list.
Then, when the mouse pointer reaches the icon, a small flyout window would collect all the icon variations to view (at small icon size or a little larger). Each overlaid icon in turn would announce what it is via a tooltip when you mouse over it.
Now you can have all the icon overlays you need, for state in various clouds, for repository indications as for Tortoise tools, and so forth.
I quote an extract of the definitive answer here, from Raymond Chen's 2019 post "Why is there a limit of 15 shell icon overlays?":
The value 15 came from the corresponding limit for image lists. The ImageList_SetOverlayImage function supports up to 15 image list overlays per image list. (Hey, it used to be worse. The limit used to be only 3!)
Okay, but why only 15? Why not more?
The overlay image is one of the pieces of information used when drawing an image from an image list. The options are encoded in the fStyle parameter, and when the bits were divided up for various purposes, four bits were available to be used to specify the overlay image. (You get 15 overlay images instead of 16 because you lose one of the values in order to specify "no overlay.")
Okay, but the values in the fStyle parameter use only the bottom 16 bits. What about the upper 16 bits? There's plenty of room there.
The 16-bit limit was carried over from the 16-bit version of the common controls (which still needed to be supported in Windows 95). Of course, nowadays, nobody cares about the 16-bit version of the common controls, so why not start using the upper bits?
There's an unsatisfying explanation: The code internally that manages the fStyle still uses a WORD in some places, so all the code that manages the fStyle would have to be revised. This occurs in multiple modules across Windows, so a synchronized change would have to be made across multiple components. This is a breaking change at the binary level because the interfaces are no longer compatible. Breaking changes are procedurally difficult to coordinate: The affected code may not be visible to the shell team because they are sitting in a far-away leaf branch that has not yet RI'd to the trunk. It might be that expanding fStyle from a WORD to a DWORD has far-reaching consequences for some component.
Like I said, this is unsatisfying. Basically it boils down to "It would be a lot of work and we are lazy."
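To make the bit arithmetic above concrete: a four-bit field can hold 16 values, and one of them has to mean "no overlay", which leaves 15 usable indices. Here is a small illustrative sketch in Java (not the actual Win32 code; the shift position is only an assumption for the example):

    public class OverlayBits {
        // A 4-bit field encodes 16 values (0-15); 0 is reserved for "no overlay",
        // so only 15 distinct overlay indices remain.
        static final int OVERLAY_SHIFT = 8;                 // hypothetical bit position
        static final int OVERLAY_MASK  = 0xF << OVERLAY_SHIFT;

        static int withOverlay(int style, int overlayIndex) {
            if (overlayIndex < 0 || overlayIndex > 15) {
                throw new IllegalArgumentException("only 4 bits available");
            }
            return (style & ~OVERLAY_MASK) | (overlayIndex << OVERLAY_SHIFT);
        }

        static int overlayOf(int style) {
            return (style & OVERLAY_MASK) >> OVERLAY_SHIFT; // 0 means "no overlay"
        }

        public static void main(String[] args) {
            int style = withOverlay(0, 3);
            System.out.println(overlayOf(style)); // prints 3
        }
    }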
I've been thinking about the ongoing "revolution" in UI design and the metaphors for interacting with the computer via a GUI, and I'm surprised that, for as long as computers have been accessible through GUIs, programmers are still searching for the best way to allow the user to interact with their programs. It seems that most of the work centers around aesthetics (which I understand are important), but I don't understand why we are still looking for the magic bullet in UI design.
My question is: Why is UI design and components not a solved problem with accepted and understood approaches?
Probably because, like most things, design (and tech in general) is constantly changing, being worked on and revised. To say that one of the most crucial elements in software can be 'solved' would be a stretch, and it would only be changed again. There is no true definition of the 'perfect' GUI, if only because you don't know who your users will be (power users versus casual, more input required vs. less).
perfection is a moving target
Jakob Nielsen rightfully said about ten years ago that users don't scroll. This isn't true anymore.
Users get trained to user interfaces. Windows 7 doesn't show a system menu icon in the top left corner for many apps (e.g. in explorer), but you can still go there and invoke the system menu. Took me a while to notice the icon was missing for some apps - while using it.
(There are probably much better examples.)
The optimum isn't obvious. Consistency is core in UI, but only deviation from consistency can lead to improvements. You just can't optimize for "most consistent" or "most creative", both will fail.
it's a cross-domain skill. How many people are programmers, designers and neuroscientists? How many CS university courses teach cognitive models and how they apply to user interfaces? How many programmers have pondered muscle memory, feedback loops and cognitive load?
UI's are still designed largely by programmers and sometimes fixed by designers after the fact.
effect is hard to measure
Take the Microsoft Office Ribbon: judging from the responses, it seems to work better for many, yet is harder by orders of magnitude for others. It was a bold step, no doubt, but was it good? Microsoft does run UI tests, and they did for the Ribbon - whether they screwed up the tests, whether office politics won over facts, or whether the backlash just wasn't foreseeable in the data, I don't know. (But I'd seriously like to.)
How many shops can afford user tests? Everyone can do hallway usability, but that just ensures you don't suck.
Skimming along the line
There is low pressure for the perfect UI, there is high pressure for a good enough UI. Given the lack of common knowledge and the high cost of improvement, perfect would not be affordable. The "Apple tradeoff" involves a higher price and technical shortcomings. They are pushing the limits (good!) with bold steps (very good!), which captures a notable but not major market segment. Still they are far from perfect.
I think if you had asked Henry Ford the same question about designing automobiles, you would have gotten an answer that applies equally to your question today.
And that answer is: we're still in the infancy of human-computer interaction design, and we don't yet have enough data to design genuinely ideal systems. And even if we did, we don't yet have the ability to manufacture such an ideal system at an affordable price point.
Much like Henry Ford could not have designed the Bugatti Veyron in his day, nor could he have built it even if he could have designed it. Or the Prius, for that matter.
No, user interfaces aren't that subjective. Ergonomics is a scientific topic.
Think about this:
Today, everybody uses a computer. That was not the case 30 years ago.
Today, everybody uses a glass surface to access data. That was not the case 30 years ago.
Today, you've got several devices to access your data. That was not the case 30 years ago.
Today, data is collected everywhere. That was not the case 30 years ago.
Today, you can even control your data with glasses. That was not the case 30 years ago.
There is no magic bullet. Just like nature, we're talking about an evolving, living ecosystem, in the purely Darwinian way.
UI design is about making an application that people with little knowledge of it can easily understand and use comfortably. That is the core challenge of UI design. So it keeps evolving; there is no end state of perfect design. As long as it lets users work easily, it is a good design.
User interface is a very subjective subject: what might be ideal (graphically pleasing, efficient) for one person or task might not be ideal for another task, or even for another person doing the same task.
Also, the different platforms on which GUIs are implemented are ever changing, and thus GUIs need to evolve to meet specific platform demands (touch screens, for example, lend themselves to a completely different user interface than a mouse-based platform, or even something like an ATM).
However, there are classes and books written on the subject, so there is some level of continuity in the area that has been there for quite some time.
In short, TECHNOLOGY.
...or why did they fail?
I am going to build a proof of concept of something which could be classified as CASE, but I want to avoid some of the mistakes made before.
Thanks!
First, I think diagrams provide real value when they're small and simple. Large, highly detailed diagrams mostly waste a lot of paper, time, hard drive space, etc. A pencil and paper work quite nicely for diagrams that are small enough (and simple enough) to be useful. A software tool only helps when you're producing a diagram that's so large and complex that it's practically guaranteed to be useless.
Second, with most CASE tools, the fastest way to draw a diagram is to start by writing some (possibly simplified, mockup) code, and then "reverse engineer" the diagram from the code. Drawing the diagram directly is often slower than writing the code. To provide much real value, producing the high level diagram has to be quite a bit simpler than writing equivalent code.
When you get down to it, I've rarely seen CASE tools used as an actual "aid" to "software engineering" anyway. In most cases I've seen, the software engineering is done entirely separately, and the CASE tools were used to reverse engineer diagrams from code that was already written. The people producing the diagrams generally found them useless, and included them in reports to higher-level managers for "wow factor". The only "aid" they hoped for from the diagrams was impressing management with the complexity of what they were doing in the hope of increasing funding (some included diagrams of things like portions of the standard library, purely to add to apparent complexity).
As to how the tools failed at the software engineering part, I don't know of a single simple answer -- from what I've seen, I'd say it's more of a "death of a thousand nicks", than any single, glaring problem. If I did have to point to a single large problem, it would be that the ones I've looked at don't really take Patterns into account. Just for example, what I'd like is to work at an even higher level of abstraction, so I can point to some functionality, and play with things like "how would things look if I were to implement the following parts of that functionality as decorator classes?" Yes, I can draw one diagram with them as decorator classes, and one without, but I don't have a really quick, easy way to say "transform this entire hierarchy to move X, Y, and Z into decorator classes."
Contrast a typical CASE tool with a spreadsheet. In a spreadsheet, I can change one cell, and it will automatically recalculate how that affects anything else in the spreadsheet that depends on it. By contrast, CASE tools seem (at least to me) stuck at roughly the level of a grid control, where I can make changes in a cell, but I still have to manually track what other cells depend on that one, and what formulas to use, and calculate and modify all the affected cells by hand. Yes, if I want to print out a sheet of the right values, being able to edit them on the computer so I don't have eraser marks in the cells and such would be an improvement -- but only a small improvement, not the kind that turned personal computers from toys for a few hobbyists into a staple of essentially every business on earth.
If you look at the Wikipedia entry: http://en.wikipedia.org/wiki/Computer-aided_software_engineering then you'll see the "classic" tools from the 1990s. Having worked with many of those tools, I would suggest that the focus on commercialisation fragmented the market. Typically, you paid huge sums not only for the tools, but then also for consulting, training and run-time environments. With so many tools on offer, it was hard to build a competent team specialising in a given tool.
Furthermore, it didn't help that the tools were over-sold, promising management unrealistic increases in productivity. There isn't any other area of IT where I've seen so much shelfware - products used for one project and then abandoned, often along with the project too.
The concepts of CASE live on in Eclipse and many other MDE tools. The problems of steep learning curves and fragmentation have still not been solved. Whilst the cost of the tools has come down (to free in many cases), the training, consulting and ramp-up costs are still there.
Before you expend a lot of effort on your CASE tool, have a look at the fields of MDA, MDE, DSL, even UML. It's worth browsing the OMG web site as well.
At the end of the day you should focus on what you produce and not the tool. If you are able to automate some tasks then that's good. Building yet another CASE-like tool is a great intellectual exercise, but with minimal chances of commercial success. After all, IBM, Oracle and Computer Associates have had only sporadic successes with their tools, and they are still vigorously marketing them to enterprise customers.
I worked with KnowledgeWare back in the early '90s. My simple answer to the demise of CASE is that as soon as you printed the model it was already out of date. Keeping the model and the code in sync became impossible. The first target platform was Micro Focus COBOL, but then came client-server in 94-95, followed by the internet in 97-98, and nobody really wanted to use CASE with those new platforms.
Consider a single button.
At one extreme, we have a black OpenGL window, with:
outline (in white) of a rectangle
a bitmap-rendered font inside it, saying "OK"
At the other extreme, we have Mac OS X, a button that is:
well rounded
has some gradient showing light effects on it
nice antialiased "OK"
soft shadow of some sort
These two UIs present very, very different user experiences. The former says "this is from the 80s"; the latter says "this is professional".
This is something I do not understand well as a programmer (and don't know where to learn about this).
Does anyone know of a good technical resource for this? [I'd prefer things that draw upon the psychology/perception literature to say why to do something, rather than design books that just say "use color XYZ with a gradient of blah".]
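For concreteness, here is a rough Swing sketch of the kind of rendering details being contrasted above (rounded shape, gradient fill, antialiased text). It is purely illustrative and does not reproduce either platform's actual look:

    import java.awt.*;
    import javax.swing.*;

    // A custom-painted "button" showing a subtle gradient, rounded corners,
    // and antialiased text, versus the bare white-outline-plus-bitmap-font
    // extreme described in the question.
    public class FancyButtonDemo extends JComponent {
        @Override
        protected void paintComponent(Graphics g) {
            Graphics2D g2 = (Graphics2D) g.create();
            g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                RenderingHints.VALUE_ANTIALIAS_ON);

            int w = getWidth(), h = getHeight();
            // Top-to-bottom gradient suggests a light source.
            g2.setPaint(new GradientPaint(0, 0, new Color(0xFAFAFA),
                                          0, h, new Color(0xC8C8C8)));
            g2.fillRoundRect(1, 1, w - 2, h - 2, h / 2, h / 2); // well-rounded shape
            g2.setColor(new Color(0x808080));
            g2.drawRoundRect(1, 1, w - 2, h - 2, h / 2, h / 2); // soft border

            // Antialiased, centred label.
            g2.setColor(Color.DARK_GRAY);
            FontMetrics fm = g2.getFontMetrics();
            String label = "OK";
            g2.drawString(label, (w - fm.stringWidth(label)) / 2,
                          (h + fm.getAscent() - fm.getDescent()) / 2);
            g2.dispose();
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame f = new JFrame("Button rendering");
                FancyButtonDemo b = new FancyButtonDemo();
                b.setPreferredSize(new Dimension(120, 36));
                f.add(b);
                f.pack();
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.setVisible(true);
            });
        }
    }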
Here is something on it. http://www.alistapart.com/articles/indefenseofeyecandy
You can check this link out to answer the part of your query in the comment. It has lots of references to samples and some helpful links too. http://www.usernomics.com/user-interface-design.html
The perception and psychology part of designing the UI doesn't come from any fixed rule or steps, as we all know. It develops over time. Making your application user-friendly and pleasing is the part of the magic that gets added from experience and surveys, and you can also include layman testing. I do this often.
Also, think outside the box. You will get a solution when you solve it within the box, but you will get a better solution when you think outside it.
Another useful thing is be a good learner and observer. Note something nice and useful when you visit sites or use other applications. You might not even notice it. It might be something very small or trivial but it makes a lot of difference when it's used in the right places.
You will want to read up on Human Interface Guidelines (HIG) and usability:
Apple's Human Interface Guidelines
Windows User Experience Guidelines
Platform agnostic guidelines
Amazon has plenty of books on the HIG subject, but I'd also recommend books based on usability. Steve Krug's "Don't make me think" is a great book (mainly tailored for web usability)
etc.
A classic: The Design of Everyday Things
It's a pretty quick read and discusses some of the psychology behind using and understanding human interfaces. It's a bit dated and doesn't directly focus on programming GUIs, but I would start here.
I'd start with Vitruvius' firmness, commodity, and delight.
Also Gibson's affordances. Also, many HCI researchers have applied activity theory, with mixed results. Norman's DOET is a good start, but I think it covers only the first 2 of Vitruvius' triad - you're asking about the delight. Might also look at McCloud's Understanding Comics.
The Opera web standards curriculum has a very good section on aesthetic aspects, especially regarding color usage. I think it's very useful reading not only for web development, but all application design.
Chapter 8: Color Theory
Chapter 9: Building up a site wireframe
There is also a color scheme designer website, which allows you to play around with some of the color theory aspects. Definitely worth a visit.
I am planning to give a Technical presentation for a product we are building.
The intended audience is technical developers. So, most of the time, I will be debugging through the code in Visual Studio, doing performance analysis, some architecture review, etc.
I have read a couple of blogs on font sizes to use, templates to use in Visual Studio, presentation tools, among other very useful tips.
What I am specifically looking for is how to keep the session interesting without making it a dry code walkthrough. How do I avoid making people fall asleep? It would be great to hear some stories.
Update 1: Nice YouTube clip on ZoomIt: Glue Audience To Your Presentation With Zoomit.
Update 2: New post from Scott Hanselman after his PDC talk - Tips for Preparing for a Technical Presentation
Put interesting comments in the code.
// This better not fail during my next presentation, stupid ##$##%$ code.
Don't talk about them, let them be found by the audience.
-Adam
FYI, that Hanselman article has an update (your link is from 2003).
Use stories. Even with code examples, have a backstory: here's why someone is doing this. To increase audience participation, ask for examples of X where X is something you know you can demo, then phrase the walk-through in those terms.
Or maybe you have war stories about how it was different or how it normally takes longer or whatever. I find people identify with such things, then as you give your examples they're mentally tracking it back to their own experience.
I recommend Scott Hanselman's post (previously mentioned). I've written up a post with some tips, mostly for selfish reasons - I review it every time before I give a technical presentation:
Tips for a Technical Presentation
If you're using a console prompt, make sure the font is readable and that your paths are preset when possible.
Take 15 minutes to install and learn to use ZoomIt, so your audience can clearly see what you're showing off. If you have to ask if they can see something, you've already failed.
Probably most important is to have separate Visual Studio settings pre-configured with big, readable fonts.
One of the best pieces of advice I ever got for doing demos is to just plain record them in advance and play back the video, narrating live. Then the unexpected stuff happens in private and you get as many stabs at it as you need.
You still usually need some environment to use as a reference for questions, but for the presentation bit, recording it in advance (and rehearsing your narration over the video) pretty much guarantees you can be at the top of your game.
I also like to put small jokes into the slides and that recorded video that make it seem like the person who made the slides is commenting on the live proceedings or that someone else is actually running the slides. Often, I make absolutely no reference at all to the joke in the slide.
For instance, in my most recent demo presentation, I had a slide with the text "ASP.NET MVC" centered that I was talking over about how I was using the framework. In a smaller font, I had the text "Catchy name, huh?". When I did that demo live, that slide got a chuckle. It's not stand-up worthy by any stretch of the imagination, but we're often presenting some pretty dry stuff and every little bit helps.
Similarly, I've included slides that are just plain snarky comments from the offscreen guy about what I'm planning to say. So, I'll say, "The codebase for this project needed a little help", while the slide behind me said "It was a pile of spaghetti with 3 meatballs, actually" and a plate of spaghetti as the slide background. Again, with no comment from me and just moving on to the next slide as though I didn't even see it actually made it funnier.
That can also be a help if you don't have the best comedic timing by taking the pressure off while still adding some levity.
Anyway, what it really comes down to is that I've been doing most of my demo/presentation work just like I would if it was a screencast and then substituting the live version of me (pausing the video as appropriate if things go off the rails) for the audio when I give it in front of an audience.
Of course, you can then easily make the real presentation available afterward for those who want it.
For the slides, I generally go out of my way to not say the exact words on the screen more often than not.
If you are showing code that was prepared for you then make sure you can get it to work. I know this is an obvious one but I was just at a conference where 4 out of 5 speakers had code issues. Telling me it is 'cool' or even 'really cool' when it doesn't work is a tough sell.
You should read Mark Jason Dominus's excellent presentation on public speaking:
Conference Presentation Judo
The #1 rule for me is: Don't try to show too much.
It's easy to live with a chunk of code for a couple of weeks and think, "Damn, when I show 'em this they are gonna freak out!" Even during your private rehearsals you feel good about things. But once in front of an audience, the complexity of your code is multiplied by the square of the number of audience members. (It becomes exponentially harder to explain code for each audience member added!)
What seemed so simple and direct privately quickly turns into a giant bowl of spaghetti that under pressure even you don't understand. Don't try to show production code (well factored and well partitioned), make simple inline examples that convey your core message.
My rule #1 could be construed, by the cynical, as "don't overestimate your audience". As an optimist, I see it as "don't overestimate your ability to explain your code"!
rp
Since it sounds like you are doing a live presentation, where you will be working with real systems and not just charts (PPT, Impress, whatever) make sure it is all working just before you start. It never fails, if I don't try it just before I start talking, it doesn't work how I expected it to. Especially with demos. (I'm doing one on Tuesday so I can relate.)
The other thing that helps is simply to practice, practice, practice. Especially if you can do it in the exact environment you will be presenting in. That way you get a feel for where you need to be so as not to block the view for your listeners as well as any other technical gotchas there might be with regards to the room setup or systems.
This is something that was explained to me, and I think it is very useful. You may want to consider not going too slide-heavy at the beginning. You want to show your listeners something (obviously, probably not the code) up front that will keep them on the edge of their seats, wanting to learn how to do what you just showed them.
I've recently started to use Mind Mapping tools for presentations and found that it goes over very well.
http://en.wikipedia.org/wiki/Mind_map
Basically, I find people just zone out the second you start to go into details with a presentation. Conveying the information with a mind map (at least in my experience), provides a much easier way for the information to be conveyed and tied together.
The key is presenting the information in stages (i.e., your high-level ideas first, then more detail, one piece at a time). The mind-mapping tools basically let you expand your map as the audience watches and you present more and more detailed information. Doing it this way lets your audience gradually absorb the data in smaller stages, which tends to aid retention.
Check out FreeMind for a free tool to play with. Mind Manager is a paid product, but is much more polished and fluent.
Keep your "visual representation" simple and standard.
If you're on Vista hide your desktop icons and use one of the default wallpapers. Keep your Visual Studio settings (especially toolbars) as standard and "out of the box" as possible. The more customizations you show in your environment the more likely people are going to focus on those rather than your content.
Keep the content on your slides as concise as possible. Remember, you're speaking to (and in the best scenario, with) your audience, so the slides should serve as discussion points. If you want to include more details, put them in the slide notes. This is especially good if you make the slide decks available afterwards.
If someone asks you a question and you don't know the answer, don't be afraid to say you don't know. It's always better than trying to guess at what you think the answer should be.
Also, if you are using Vista be sure to put it in "presentation mode". PowerPoint also has a similar mode, so be sure to use it as well - you have the slide show on one monitor (the projector) and a smaller view of the slide, plus notes and a timer on your laptop monitor.
Have you heard of Pecha-Kucha?
The idea behind Pecha Kucha is to keep presentations concise, the interest level up and to have many presenters sharing their ideas within the course of one night. Therefore the 20x20 Pecha Kucha format was created: each presenter is allowed a slideshow of 20 images, each shown for 20 seconds. This results in a total presentation time of 6 minutes 40 seconds on a stage before the next presenter is up.
Now, I am not sure whether that short a duration would be OK for a product demonstration.
But you can try to take some nice ideas from the concept, such as being concise, keeping to the point, and managing time and space effectively.
Besides software like Mind Manager to show your architecture, you may find a screen recorder useful as a presentation tool to illustrate your technical task. DemoCreator would be a nice way to make a video of your on-screen activity, and you can add callouts to make the process easier to understand.
If you use slides at all, follow Guy Kawasaki's 10/20/30 rule:
No more than 10 slides
No more than 20 minutes spent on slides
No less than 30 point type on slides
-Adam