What operating system and UI toolkit is this? It's not some fake Hollywood user interface. It's from Bloomberg.
Don't listen to John. The Bloomberg terminal is the unquestionable standard for trading desks and the UI works. From Wikipedia:
The Graphical User Interface (GUI) code is also proprietary, though some of it is based on GTK+
The Bloomberg GUI carries a vast amount of information in a very compact format:
Fixed-cell monospaced characters are used so that data falls naturally into rows and columns; experienced users can locate the figure they are looking for instantly.
Fonts are used to indicate (I think!) age and importance
Colours generally indicate the direction of the change (blue = increase, red = decrease)
Traders stare at these screens obsessively, either waiting for some trigger price or trading volume on an individual stock, or trying to divine an overall trading pattern in a market segment which they can take advantage of.
Most of these conventions stem from when a Bloomberg terminal was just a "dumb" teletype with colours. But they work, they are fast and efficient and traders have years of familiarity with the conventions.
This is similar to the user interface used by travel agents to book flights. It's essentially the same interface that was used in dumb terminals from the 80s.
There is a "modern" GUI interface available but experienced agents just hate it and continue to use what is effectively an emulation of a dumb terminal.
I promise I have done my best to bring this question into a format matching the SO question format. Most questions of this kind seem to get closed.
Please consider not closing this question, but instead giving hints on how I could improve it.
I am doing some research on how I could implement a user interface that meets two criteria (besides working and being clear to understand):
it should look attractive
it should be able to visualize highly dynamic data (requiring a lot of updates)
Well, the thing is I have a gaming background and a web background. In games the UI is rendered, like everything else, from the game loop, which means you can visualize data changes at 30 fps and up. This would meet criterion 2.
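To make criterion 2 concrete, here is a minimal sketch (my own illustration, assuming Qt, one of the libraries mentioned below) that drives a repaint at roughly 30 fps from a timer, the closest analogue of a game loop inside an event-driven toolkit. TickerWidget and the random price feed are invented for the example:

    // Minimal sketch, assuming Qt: a widget repainted at ~30 fps from a timer.
    // TickerWidget and the simulated price feed are illustrative only.
    #include <QApplication>
    #include <QPainter>
    #include <QRandomGenerator>
    #include <QTimer>
    #include <QWidget>

    class TickerWidget : public QWidget {
    public:
        TickerWidget() {
            auto *timer = new QTimer(this);
            connect(timer, &QTimer::timeout, this, [this] {
                // Simulate a price tick, then request a (coalesced) repaint.
                price_ += QRandomGenerator::global()->bounded(-100, 100) / 100.0;
                update();
            });
            timer->start(33); // ~30 updates per second
        }
    protected:
        void paintEvent(QPaintEvent *) override {
            QPainter p(this);
            p.drawText(rect(), Qt::AlignCenter, QString::number(price_, 'f', 2));
        }
    private:
        double price_ = 100.0;
    };

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        TickerWidget w;
        w.resize(200, 80);
        w.show();
        return app.exec();
    }

Whether painting actually keeps up at that rate depends on how much of the widget is invalidated on each tick.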
Use cases I can imagine for the high dynamic data updates would be:
stock trading apps where the price changes (I'd guess ~10 times per second)
audio programs where the current position of the played audio track is synchronized in the display of the audio wave form
dynamic changing network graphs
real-time visualization of procedurally generated data (images, 3D models)
But developers outside of the gaming industry do not use game engines for their user interfaces.
I am trying to figure out how business software developers (in the real world) implement their user interfaces.
During the research I found those main streams:
business software libraries (user interfaces which get the job done for people who do a job)
like Windows Forms, Java Swing, Qt
they provide a lot of widgets, components (or whatever) and functionality
they are all grey on grey (though customizable to some limited extent)
they are very static (in layout) and react badly if you try to redraw the UI at 30 fps
consumer software libraries (user interfaces which attract people in their spare time)
those seem to be implemented more and more with HTML(5), CSS(3) and JavaScript running in an embedded browser (a minimal sketch of this hybrid approach follows this list)
they have an attractive look and feel
even the layout can be dynamic
examples would be the Steam and Spotify client (as I learned recently)
updating an HTML UI at 30 fps does not seem to be a good idea
the newcomers and the mixtures
meaning those who try to mix these approaches already (JavaFX, Silverlight, the now-dead Flash)
or they are new and limited to their devices (Android, iOS user interfaces)
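As promised above, a hedged sketch of the embedded-browser approach: Steam and Spotify embed CEF, and Qt offers the same idea through the Qt WebEngine module. The URL and resource path are made up for illustration:

    // Sketch of the embedded-browser hybrid, assuming the Qt WebEngine module.
    // The qrc path is illustrative; any local HTML/CSS/JS bundle would do.
    #include <QApplication>
    #include <QUrl>
    #include <QWebEngineView>

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        QWebEngineView view;
        view.setUrl(QUrl("qrc:/ui/index.html")); // the "attractive" layer is web tech
        view.resize(1024, 640);
        view.show();
        return app.exec();
    }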
There is always the option to write a UI system from scratch, tailored to one's needs. But why reinvent the wheel, when displaying dynamic data in an attractive way is hardly a new idea?
So the question: what is used in the real world?
Perhaps someone can provide names of applications displaying dynamic data, so I can further research what they used.
I am trying to visualize dynamic data and avoid the 50-shades-of-grey style (although you can tint it black, like VS 2015).
The programming language does not matter. It tends to become a desktop application (since a web application seems to be impossible so far).
At this point, everyone knows that there's a limit to the number of ShellIconOverlayIdentifiers (from MSDN):
The number of different icon overlay handlers that the system can support is limited by the amount of space available for icon overlays in the system image list. There are currently fifteen slots allotted for icon overlays, some of which are reserved by the system. For this reason, icon overlay handlers should be implemented only if there are no satisfactory alternatives
I can understand the 15 overlay limit in Windows 95. But in an environment where there are gigs of RAM, numerous cores, and GPUs, is there some technical reason for such a low number in a modern operating system?
And why isn't this value configurable?
Before giving the 'performance' answer, consider:
Windows allows for configuration such that you can kill performance... why pick on this issue specifically?
Unless someone here happens to work on the Windows Shell team, I doubt that you're going to get an answer that really addresses the technical limitations and how they affect the design choice. But I'll try...
My guess is that there isn't any technical limitation, or at least there isn't one now. The real reason is presumably that no one has ever taken the time to sit down and update the code, the design, and the spec to lift this limitation. Features aren't implemented by default, and just because the computing environment has changed in the last few years doesn't mean that someone sat down and rewrote Windows to take full advantage of all those changes.
You should also consider that this is more than likely a conscious design choice, rather than an imposed limitation. Raymond Chen (who actually does work on the shell team) published a blog entry responding to the uproar about Windows 7 removing the "sharing hand" overlay. He makes a compelling argument that the icon overlay is really not a desirable way of showing information (above and beyond the fact that the system is limited to 15) [emphasis added]:
Generally speaking, overlays are not a good way of presenting information because there can be only one overlay per icon, and there is a limit of 15 overlays per ImageList. If there are two or more overlays which apply to an item, then one will win and the others will lose, at which point the value of the overlay as a way of determining what properties apply to an item diminishes since the only way to be sure that a property is missing is when you see no overlay at all. (If you see some other overlay, you can't tell whether it's because your property is missing or because that other overlay is showing instead of yours.)
It seems reasonable to me that the extra clutter added to the shell is simply not worth it in the majority of real-world cases. The Windows Shell team obviously reached the same conclusion and cut the "sharing hand" overlay. Raymond's direct explanation:
Given the changes in how people use computers, sharing information is becoming more and more of the default state. When you set up a HomeGroup, pretty much everything is going to be shared. To remove the visual clutter, the information was moved to the Details pane.
And, I know you specifically asked not to mention performance, but Windows really does try to keep you from shooting yourself in the foot. Users demand responsiveness in the shell, and overlay icons can interfere with this. As further evidence that they are not the priority, another blog post by the same Raymond Chen chastises:
Another example of applications having a selfish view of performance came from a company developing an icon overlay handler. The shell treats overlay computation as a low-priority item, since it is more important to get icons on the screen so the user can start doing whatever it is they wanted to be doing. The decorations can come later. This company wanted to know if there was a way they could improve their performance and get their overlay onto the screen even before the icon shows up, demonstrating a phenomenally selfish interpretation of "performance".
Excellent response on the practical issues by Cody. As to why 15 and not some other number, the limit is baked into the ImageList control itself.
This is all very well and good, as explained by Cody Gray, but frankly it is pretty unimaginative and, reading between the lines, sounds a bit frustrated.
In 2015 and with Windows 10, surely there can and needs to be a better facility. I counted about thirty overlay handlers present and had to prioritize the ones I most wanted to see, which is not something most people should have to worry about at all. I also see aggressive vendors like Box over-competing to try to prioritize themselves, and that will never go any place good.
Here's a possibility: what if multiply overlaid icons had a generic overlay indicator, a small rectangular matrix of multiple colors like the Google Chrome Apps button? A singly overlaid icon would just show its one overlay, drawn from a long list.
Then, when the mouse pointer meets the icon, a small flyout window collects all the icon variations to view (at small icon size or a little larger). Each overlaid icon in turn announces by tooltip what it is, when you mouse over it.
Now you can have all the icon overlays you need, for state in various clouds, for repository indications as for Tortoise tools, and so forth.
I quote an extract of the definitive answer here, from Raymond Chen's 2019 post Why is there a limit of 15 shell icon overlays?
The value 15 came from the corresponding limit for image lists. The ImageList_SetOverlayImage function supports up to 15 image list overlays per image list. (Hey, it used to be worse. The limit used to be only 3!)

Okay, but why only 15? Why not more?

The overlay image is one of the pieces of information used when drawing an image from an image list. The options are encoded in the fStyle parameter, and when the bits were divided up for various purposes, four bits were available to be used to specify the overlay image. (You get 15 overlay images instead of 16 because you lose one of the values in order to specify “no overlay.”)

Okay, but the values in the fStyle parameter use only the bottom 16 bits. What about the upper 16 bits? There’s plenty of room there.

The 16-bit limit was carried over from the 16-bit version of the common controls (which still needed to be supported in Windows 95). Of course, nowadays, nobody cares about the 16-bit version of the common controls, so why not start using the upper bits?

There’s an unsatisfying explanation: The code internally that manages the fStyle still uses a WORD in some places, so all the code that manages the fStyle would have to be revised. This occurs in multiple modules across Windows, so a synchronized change would have to be made across multiple components. This is a breaking change at the binary level because the interfaces are no longer compatible. Breaking changes are procedurally difficult to coordinate: The affected code may not be visible to the shell team because they are sitting in a far-away leaf branch that has not yet RI’d to the trunk. It might be that expanding fStyle from a WORD to a DWORD has far-reaching consequences for some component.

Like I said, this is unsatisfying. Basically it boils down to “It would be a lot of work and we are lazy.”
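To make the four-bit encoding concrete, here is a small sketch using the real macros from CommCtrl.h; the wrapper function and its names are my own illustration, not shell code:

    // Why 15: the overlay index lives in 4 bits of the ImageList draw style
    // (mask 0x0F00), and the value 0 means "no overlay", leaving 1..15.
    #include <windows.h>
    #include <commctrl.h>

    // Hypothetical helper, for illustration only.
    void DrawWithOverlay(HIMAGELIST hil, HDC hdc, int imageIndex, int overlayIndex) {
        // INDEXTOOVERLAYMASK(i) expands to ((i) << 8): the index is packed
        // into bits 8..11 of fStyle. overlayIndex must be 1..15; 0 = none.
        UINT fStyle = ILD_NORMAL | INDEXTOOVERLAYMASK(overlayIndex);
        ImageList_Draw(hil, imageIndex, hdc, 0, 0, fStyle);
    }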
When designing a user interface for an application that is going to be used internationally it is possible to accidentally design an aspect of the UI that is offensive to or inappropriate in another culture.
Have you ever encountered such an issue and if so, how did you resolve the design problem?
Some examples:
A GPS skyplot in a surveying application to be used in Northern Ireland. Satellites had to be in a different colour to indicate whether they were ascending or descending in the sky. Lots of ascending satellites are considered good, as it indicates that GPS coverage will be getting better in the next few hours. I chose green for ascent and orange for descent. I had not realised that these colours are associated with Irish Catholics and Irish Protestants. It was suggested that we change the colours. In the end blue and a deep pink were chosen.
For applications that are going to be translated into German, I've found that you should add about 50% extra space for the German text compared to the English text.
A friend was working on a battlefield planning application for a customer in the Middle East. It was mandated that all crosshairs should take the form of a diagonal cross, to avoid any religious significance.
(Edit - added this) In the UK a tick mark (something like √) means yes whereas a cross (x) means no. In Windows 3.1 selected checkboxes used a cross, which confused me the first time I saw it. Since Windows 95 they've used (what I would call) a tick mark. As far as I can tell both a tick and a cross are called a check mark in the US, and mean the same thing
Edit
Please ensure that any reply you add to this question is as culturally sensitive as the user interfaces we're all trying to build! Thanks.
You should try to follow the i18n and l10n pointers provided by the look and feel guidelines for the UI library you're using, or platform you're delivering to. They often contain hints on how to avoid cultural issues, and may even contain icon libraries that have had extensive testing for such potential banana skins.
Windows User Experience Interaction Guidelines
Java Look and Feel Design Guidelines
Apple Human Interface Guidelines
GNOME Human Interface Guidelines
KDE User Interface Guidelines
I guess the most important thing is designing your application with i18n in mind from the ground up, so that your UI can be resized depending on the translated text; mnemonics are appropriate for different languages; labels are to the left for Latin languages, but on the right for Hebrew and Arabic; etc.
Designing with i18n and l10n in mind means that shipping your product to a location with a different culture, language or currency will not require a re-write, or a different version, just different resources.
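As a concrete (and hedged) example of "different resources, not a rewrite", here is a minimal sketch assuming Qt; the file names and resource path are invented for illustration. The same binary picks up a translation file for the user's locale, and the toolkit mirrors layouts for RTL languages:

    // Minimal i18n sketch, assuming Qt. "myapp", ":/i18n" etc. are
    // illustrative names, not a real project's files.
    #include <QApplication>
    #include <QLocale>
    #include <QPushButton>
    #include <QTranslator>

    int main(int argc, char **argv) {
        QApplication app(argc, argv);

        QTranslator translator;
        // Picks e.g. myapp_de.qm or myapp_ar.qm based on the user's locale.
        if (translator.load(QLocale(), "myapp", "_", ":/i18n"))
            app.installTranslator(&translator);

        // Qt mirrors layouts automatically for RTL locales such as Arabic or
        // Hebrew; uncomment to test the mirrored layout explicitly:
        // app.setLayoutDirection(Qt::RightToLeft);

        QPushButton button(QObject::tr("Save")); // tr() marks text for extraction
        button.show();
        return app.exec();
    }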
Generally speaking, I believe you'll run into more problems with graphics and icons than you will with text (apart from embarrassing translations), simply because people identify more strongly with symbols than with particular passages of text.
The idiom to use a big (green) checkmark symbol to mean OK/Yes/Correct is somewhat confusing in Sweden, where the checkmark is typically used to mean "wrong". I.e. when grading tests in school, a teacher will often use a capital R (from the Swedish word for "Right") for a correct answer, and a checkmark ("bock" in Swedish) for "wrong".
I find this issue interesting not only because I'm in the affected group (I'm Swedish), but also because it highlights that these kinds of issues can appear where you might not expect them. Sweden is a generic Western culture, so you might assume that the usage of these kinds of symbols would be the same.
Another Yes/No example for Japan
I have to use an online database tool that has some user settings that can be toggled on and off. On is indicated by a green cross (×), Off is indicated by a red circle (○).
In Japan this is confusing since Off (NG, stop, closed) in general would be indicated by a cross (× : batsu) and On (ok, open) by a circle (○ : maru).
Add to that the green/red colour combination and things get very confusing.
There is a good reason why the Windows resources (and not only) contain more than just strings.
A lot of elements should be considered localizable:
- colors
- images (including icons, toolbars, etc.)
- sounds
- font and font sizes
- alignment
- control flags and attributes (think UI mirroring for Arabic & Hebrew)
- dialog sizes
- etc.
This way, most of the problems can be addressed by the localizers, without any code changes.
For dialogs, resizing should either be done by the localizers (so that leaving extra space is not necessary), or you should use auto-layout (available in frameworks like Java, .NET, Flex, wxWidgets, Qt, etc.), as sketched after the link below.
This might also be a good read: http://msdn.microsoft.com/en-us/goglobal/bb688120.aspx
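To illustrate the auto-layout point (a sketch under the assumption of Qt; the labels are placeholders): with a layout manager, a longer translated label simply makes the dialog wider, so localizers do not need to resize dialogs by hand.

    // Auto-layout sketch, assuming Qt: the form grows to fit translated text.
    #include <QApplication>
    #include <QDialog>
    #include <QFormLayout>
    #include <QLineEdit>
    #include <QPushButton>

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        QDialog dialog;
        auto *form = new QFormLayout(&dialog);
        // A long German label would simply widen the dialog; nothing clips.
        form->addRow(QObject::tr("Name"), new QLineEdit);
        form->addRow(QObject::tr("E-mail address"), new QLineEdit);
        form->addRow(new QPushButton(QObject::tr("OK")));
        dialog.show();
        return app.exec();
    }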
You will not be able to identify all issues on your own.
Specialists in UI/cultures will cost you lots of money.
Design first for your primary region, and then discover and fix (if you can) issues one by one.
/* Here was my opinion on the religious issues as applied to software design. Removed as the thread starter did not like it. */
Well, these issues begin to play a role when you grow to the big/international/corporate level. Until that happens, better not to bother. They call it "premature optimization".
You're right about the German language: the content/markup ratio is noticeably lower compared to English. Another difference is that the words tend to be very long, which means not only will text areas be longer, but you'll likely run into problems when words expand out of their container and don't wrap.
How would you like this one: Kesselsteinentfernungsmittelherstellungsbetrieb?
Honestly, you won't be able to design it in such a way as to please everyone. World cultures are in many ways contradictory. As soon as you reach a certain expansion level, you'll inevitably become a rip-off target for lots of jerks around the world who find the UI and colours of your application "offensive".
Have you ever heard of an accepted paradigm for how to design those kinds of systems?
I'm not talking about iPhones, but photo kiosks or manufacturing systems.
Rulas,
I have worked on a number of touchscreen apps. I never found a published set of standards like the ones you mention, but here is a little bit of what I learned:
Create a limited "visual vocabulary" with the following rules:
Buttons should be 30 or more pixels high (and at least as wide) - simply increasing the width of a button will not make it easier to click
Try to place controls on similar points on the screen - exactly the same if feasible, so that users do not have to hunt in different parts of the screen for the same operation.
Avoid the need for scrolling - try tabs, paging, wizards etc. Using scrollbars on a touchscreen is very difficult
Consider how the screen will be used. Where will users put their hands? Will they rest their hands on the corners of the screen? Will the Power button be in the way?
As part of this rule set, create your own controls library that can be easily reused in other parts of the app
Try to omit or minimize typing on a "soft" keyboard. Make as many fields selectable as possible.
Have big buttons for "fat fingered" users.
Usability really matters.
Keep the touch screen well calibrated. This used to be a nightmare back in 1999; don't know how much better it is now.
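A hedged sketch of how some of the sizing rules above might be enforced app-wide (assuming Qt; the pixel values are rule-of-thumb minimums in the spirit of the list, not a published standard):

    // Touch-friendly sizing applied globally via a stylesheet, assuming Qt.
    #include <QApplication>
    #include <QPushButton>

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        // Comfortably above the ~30 px floor suggested above.
        app.setStyleSheet(
            "QPushButton { min-height: 48px; min-width: 48px;"
            "              font-size: 18px; padding: 12px; }");
        QPushButton ok("OK");
        ok.show();
        return app.exec();
    }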
Speed and learnability do not directly fight each other, but it seems easy enough to design such a GUI that lacks either (or both) of them. GUI designers seem to prefer 'easy to learn' most of the time even when 'fast to apply' would be wiser.
There are only a few UI concepts or programs that are weighted towards maximizing the peak efficiency of whatever you are doing with the program, and most of them haven't become common.
Normal people prefer gedit instead of vim. For normal people there are already good-enough GUIs, because there was tons of research on them two decades ago.
I'd like to get some advice on designing UIs that make the tradeoff at the expense of 'easy to learn' rather than 'fast to apply'.
We have a product in our lineup that has won numerous awards based largely on its ability to provide more power with an easier interface than any of our competitors. I designed the interface a few years after holding a position in one of Bell Labs' human interface research groups so I had a pretty clear idea of what constituted "success" when I approached it. I have four pieces of design advice for creating easy but powerful interfaces.
First, select a metaphor that makes sense in their environment and do your best to stick to it. This doesn't have to be a physical metaphor although that can help if working with people who are not tech savvy. This was popular in the early days of Windows but its value remains. We used a "folder and page" metaphor that permitted us to organize a wide range of tasks while not crimping power users' style.
Second, offer a consistent layout relationship between data display and tasks. In our interface, each "page" displays a set of action buttons in the exact same position and, wherever possible, uses the same actual buttons. Thus, once one page is learned, the user has a head start on learning the rest. One of these buttons, always placed in a distinctive position, is a "Help" button...which brings me to point #3. The more general rule is: find ways of leveraging learning in one area to assist in learning others.
Third, offer context-sensitive help and make sure that it addresses the user's primary question (which is usually "what do I do now"?) How often have you seen technical help that simply shows you the Inheritance tree, constructor syntax and an alphabetical list of methods? That isn't help, it is abuse. We focused all of our help on walking people through sample tasks. In particularly tough areas, we also offered multimedia tutorials.
Fourth, offer users the ability to customize the interface. Our users often had no use for specific "pages" (analysis types) in their work. Thus, we made it very simple to turn them off so that the user would see an interface that was no more complicated than it had to be. Our app was usually installed by a power user and then used by multiple staff members so this was more of a win for us because we could usually count on the power user to understand what to shut off. However, I think it is good advice in general.
Good luck!
AutoCAD has a console mode. As you do things using the mouse and toolbars, the text equivalent of those commands is written to the console. You can type commands directly in there. This provides a great way to learn the power-user names for commands (they are very short, like unix commands), which greatly aids the process of moving from beginner to productive power user. Generally speaking, one primary focus has to be on minimising movement between the mouse and keyboard, so put lots of functionality into the mouse, or into the keyboard, because when you have to move your hands like that, there is a real delay in trying to find the right place to put them.
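A sketch of that echo pattern (my own illustration, assuming Qt; the command name "LINE" is a stand-in, not AutoCAD's actual implementation): every mouse-driven action also prints its textual command, so the GUI itself teaches the power-user vocabulary.

    // Command-echo sketch, assuming Qt. "LINE" and the console are illustrative.
    #include <QAction>
    #include <QApplication>
    #include <QMainWindow>
    #include <QPlainTextEdit>
    #include <QToolBar>

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        QMainWindow win;

        auto *console = new QPlainTextEdit;
        console->setReadOnly(true);
        win.setCentralWidget(console);

        QToolBar *toolbar = win.addToolBar("Draw");
        QAction *line = toolbar->addAction("Line");
        QObject::connect(line, &QAction::triggered, [console] {
            console->appendPlainText("> LINE"); // echo the textual equivalent
            // ... start the actual line-drawing tool here ...
        });

        win.show();
        return app.exec();
    }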
Beyond avoiding an angry fruit salad, just try to make it as intuitive as possible. Typically, programs with a very frustrating UI share one common problem, the developers didn't define a clear scope of what the program would actually do prior to marrying a UI design.
It's not so much a question of 'easy': some people jump right into the UI and begin writing stuff to back the interface, rather than writing the core of a planned program and then planning an interface to use it.
This goes for web apps, desktop apps .. or even command line programs. A good design means writing the user interface after (and only after) you are sure that 'scope creep' is no longer a possibility.
Sure, you need some interface to test your program, but be prepared to trash it and do something better prior to releasing the program. Otherwise, there's a good chance that the UI is only going to make sense to you.
Rant (or, Stuff I think you should keep in mind):
Speed and learnability do directly fight each other. A menu item tells you what it does so that you don't have to remember. But it's much slower than a keyboard shortcut that you have to memorize to benefit from. The general technique for resolving this conflict seems to be allowing more than one way of doing things. While one way of doing something usually cannot be both fast and easy to learn, you can often provide two ways to accomplish the same task: one that's fast, and one that's obvious.
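A minimal sketch of that "two ways" resolution (assuming Qt; the Save action is generic, not from any particular app): one action object backs both the discoverable menu item and the fast shortcut, and the menu displays the shortcut, so the slow path teaches the fast one.

    // One action, two access paths: menu item (learnable) + shortcut (fast).
    // Sketch assuming Qt.
    #include <QAction>
    #include <QApplication>
    #include <QKeySequence>
    #include <QMainWindow>
    #include <QMenu>
    #include <QMenuBar>

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        QMainWindow win;

        QMenu *file = win.menuBar()->addMenu("&File");
        QAction *save = file->addAction("&Save");
        save->setShortcut(QKeySequence::Save); // Ctrl+S, shown next to the item
        QObject::connect(save, &QAction::triggered, [] { /* save the document */ });

        win.show();
        return app.exec();
    }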
There are different kinds of people. The learning gap is a result of interest, motivation, intellectual capacity, etc. There is a class of person that will never bother to even learn which menu provides the action they want, and they'll scrub the menubar every time. There is also a (minority) class of person that thinks vim (or emacs) is the best thing since sliced bread. Most people probably fall somewhere in between these extremes.
My answer to the actual question:
I think you are asking how to strive for a fast UI. Your question wasn't particularly clear (to me).
First of all, be consistent. This helps both speed and learnability. Self consistency is the most important, but consistency with your environment may also be important.
For real speed, require as little attention and motion as possible. Keyboard shortcuts are fast because experienced users know where they are (they don't have to look), and their hands are already on the keyboard. Especially avoid forcing the user to change their position in front of the computer (e.g., moving one hand between the mouse and keyboard).
The keyboard is almost always faster than the mouse.
Customization (especially the ability to write custom scripts) will let power users make the interface work the way that is fastest for their specific habits.
Make it possible to get by without the most powerful features. All you need to know in order to survive in vim is "i, ESC, :wq, :q!". With that, you can use vi about the same way a lot of people use Notepad. But once you start learning "h,j,k,l,w,b,e,d,c" (and so on) you get much more efficient. So there is a steep learning curve, but you can get by until you surmount it.
Keep in mind that if you focus on interface efficiency, you will probably limit your user base. Vim is popular among programmers, but lots of programmers use other tools, and it's virtually unknown among non-programmers. Most people want easy, not fast. Some want a balance. A very few just want fast.
I would like to point you towards Kathy Sierra's old blog for thoughts on 'easy to learn' and 'fast to apply' — I don't necessarily agree there needs to be a tradeoff between the two.
Three posts to get you started:
How much control should users have? This post ponders on whether 'fast to apply' is the ideal we should strive for.
The hi-res user experience talks about what you say about "normal people" vs. others. It's not so much that there are different kinds of people, but there are different levels of learning/expertise/involvement. Some are satisfied with less, some need more. How you get from less to more is arguably pretty much the same for everyone.
Finally, Featuritis vs. the Happy User Peak talks about the scope creep pointed out by @tinkertim.
Have you seen GIMP's shortcuts?
Use nice visual controls and show the keyboard shortcuts for them while hovering over a control; that will help users learn the fast mode. If your software copies some behavior from other programs, copy their shortcut mappings too (such as Copy/Paste/New Tab/Close Window, etc.), but allow users to dynamically re-map them, as GIMP does. For repeated operations you could implement an action recorder. But it depends on the type of software.
The main thing to be careful of is putting UI elements where they are most commonly located for other applications in that environment. For example, if you're going to make use of a menu system, people are accustomed to it being along the top of the window by default for a desktop application. If you're in a web browser, a menu system on a webpage seems out of place, because it's not a consistent feature. If you're going to have an options/preferences configuration window, people are used to finding it under the Tools menu option, occasionally under the Edit menu. The main thing with keeping a UI "easy to learn" is that your UI elements shouldn't break the mold too much of how they're used in other applications.
If you haven't had the opportunity to see Mark Miller's presentation on The Science of Great User Experience, I'd recommend you watch the DNR TV episodes Part 1 and Part 2.
While writing my own UI, I've come to understand a couple of things myself.
I imitated vim, but at the same time realized why it's so fast to use for text editing. It is because it acknowledges one thing: people prefer doing one thing at a time (inserting text, navigating around, selecting text), but they may switch tasks often.
This means that you can pack different activities into different modes if you keep the mode switching schemes simple. It gives space for more commands. The user also gets better chances at learning the full interface because they are sensibly grouped already.
Vim is practically stuffed full of commands; every letter on the keyboard does something in vim, depending on the mode. Still, I can remember most of them. And it's all because of modes.
I know a bunch of projects that sneer at mode-dependent behavior. The main argument is the uncertainty of which mode you are in. In vim, I'm never uncertain about which mode I am in. Therefore I say an interface design is a failure if a trained user fails to recognize which mode the interface is operating in at the moment.
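A toy sketch of that principle (my own illustration, assuming Qt, and nothing like vim's real implementation): the same keys mean different things per mode, and the current mode is always on screen, so a trained user is never uncertain which mode they are in.

    // Modal input sketch, assuming Qt. Two modes; the label always shows which.
    #include <QApplication>
    #include <QKeyEvent>
    #include <QLabel>

    class ModalWidget : public QLabel {
    public:
        ModalWidget() {
            setFocusPolicy(Qt::StrongFocus); // so the widget receives key events
            refresh();
        }
    protected:
        void keyPressEvent(QKeyEvent *e) override {
            if (mode_ == Normal) {
                if (e->key() == Qt::Key_I) mode_ = Insert;     // 'i' enters insert
                else if (e->key() == Qt::Key_X) text_.chop(1); // 'x' deletes
            } else { // Insert: keys type text, Escape returns to Normal
                if (e->key() == Qt::Key_Escape) mode_ = Normal;
                else text_ += e->text();
            }
            refresh();
        }
    private:
        enum Mode { Normal, Insert } mode_ = Normal;
        QString text_;
        void refresh() {
            setText(QString("[%1] %2")
                        .arg(mode_ == Normal ? "NORMAL" : "INSERT", text_));
        }
    };

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        ModalWidget w;
        w.resize(400, 60);
        w.show();
        return app.exec();
    }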