A GUI application draws various constructs (text, buttons, lines, etc.) on the screen, and users can often click on them. When that happens, how does the program determine which construct the clicked coordinates belong to? Does it test whether that pixel is inside the construct (something like Shape.contains in AWT) for every construct in the program? How is this kind of testing done efficiently when hundreds of different constructs are on the screen (as in browsers or drawing programs such as Inkscape)?
Usually by taking advantage of the hierarchical nature of the on-screen elements: first the window containing the click, then the top-level panel in that window, then the sub-panel, and so on. Even with hundreds of widgets on screen, you can converge very quickly this way, in roughly logarithmic time, since each level of the hierarchy narrows the search.
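As a rough sketch of that idea (the Widget structure and names here are illustrative assumptions, not from the answer above), the hit test recurses down the widget tree, descending only into the child whose bounds contain the point:

```cpp
// Minimal sketch of hierarchical hit-testing.
#include <memory>
#include <vector>

struct Point { int x, y; };

struct Rect {
    int x, y, w, h;
    bool contains(Point p) const {
        return p.x >= x && p.x < x + w && p.y >= y && p.y < y + h;
    }
};

struct Widget {
    Rect bounds;                                   // in parent coordinates
    std::vector<std::unique_ptr<Widget>> children;

    // Returns the deepest widget containing p, or nullptr if p is outside.
    Widget* hitTest(Point p) {
        if (!bounds.contains(p)) return nullptr;
        Point local{p.x - bounds.x, p.y - bounds.y};
        for (auto& child : children)               // front-to-back order
            if (Widget* hit = child->hitTest(local))
                return hit;
        return this;  // no child contains the point, so this widget wins
    }
};
```

Each level discards every subtree whose bounds miss the point, which is why the search converges so quickly.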
I'm trying to implement a cross-platform UI library that uses as few system resources as possible. I'm considering either using my own software renderer or OpenGL.
For stationary controls everything's fine; I can repaint only when it's needed. However, when it comes to implementing animations, especially animated blinking carets like the 'phase' caret in Sublime Text, I don't see an easy way to balance resource usage and performance.
For a blinking caret, the caret must be redrawn very frequently (15-20 times per second at least, I'd guess). On one hand, the software renderer supports partial redraw but is far too slow to be practical (3-4 fps for large redraw regions, say 1000x800, which makes animations impossible). On the other hand, OpenGL doesn't support partial redraw very well as far as I know, which means the whole screen needs to be rendered at 15-20 fps constantly.
So my question is:
How are carets usually implemented in various UI systems?
Is there any way to have OpenGL render to only a portion of the screen?
I know that glViewport enables rendering to part of the screen, but due to double buffering (or other factors) the rest of the screen is not preserved, so I would still need to render the whole screen again.
First you need to ask yourself:
Do I really need to partially redraw the screen?
OpenGL, or rather the GPU, can draw thousands of triangles with ease. So before you start fiddling with partial redrawing of the screen, benchmark and see whether it's worth looking into at all.
This doesn't, however, imply that you have to redraw the screen endlessly. You can still redraw it only when changes happen.
Thus if you have a cursor blinking every 500 ms, then you redraw once every 500 ms. If you have an animation running, then you continuously redraw while that animation is playing (or every time the animation does a change that requires redrawing).
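A minimal sketch of such an on-demand loop (assuming GLFW; the dirty flag, blink interval, and drawUi() call are illustrative, not from the answer):

```cpp
// Redraw only when something changed or the caret needs to toggle.
#include <GLFW/glfw3.h>

bool dirty = true;  // set from input handlers whenever the UI changes

void runLoop(GLFWwindow* window) {
    double lastBlink = glfwGetTime();
    const double blinkInterval = 0.5;  // caret toggles every 500 ms

    while (!glfwWindowShouldClose(window)) {
        // Sleep until an event arrives or the next blink is due,
        // instead of spinning at full frame rate.
        glfwWaitEventsTimeout(blinkInterval);

        if (glfwGetTime() - lastBlink >= blinkInterval) {
            lastBlink = glfwGetTime();
            dirty = true;  // caret visibility toggled; needs a redraw
        }

        if (dirty) {
            dirty = false;
            // drawUi();  // redraw the whole scene; cheap on a GPU
            glfwSwapBuffers(window);
        }
    }
}
```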
This is what Chrome, Firefox, etc. do. You can see it if you open the Developer Tools (F12) and go to the Timeline tab.
Take a look at the following screenshot. The first row of the timeline shows how often Chrome redraws the window.
The first section shows a lot of continuous redrawing, which was because I was scrolling around on the page.
The last section shows a single redraw roughly every 500 ms, which was the cursor blinking in a textbox.
Note that it doesn't tell you whether Chrome is fully redrawing the window or only parts of it; it just shows the frequency of the redrawing. (If you want to see the redrawn regions, both Firefox and Chrome have a "Show Paint Rectangles" option.)
To circumvent the problem of double buffering and partial redrawing, you could instead draw to a framebuffer object. Then you can utilize glScissor() as much as you want. If you have mostly static content and only a few dynamic elements, you could use multiple framebuffer objects: draw the static content once and continuously update only the framebuffer containing the dynamic content.
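A sketch of that approach (the function names, the drawStaticWidgets()/drawCaret() placeholders, and the caret rectangle are assumptions; this presumes an OpenGL 3.0+ context with functions already loaded):

```cpp
// Cache static content in an FBO once; each frame, blit it back and
// scissor-redraw only the caret region.
GLuint fbo = 0, colorTex = 0;

void createStaticLayer(int width, int height) {
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    // drawStaticWidgets();  // rendered once, not every frame
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void drawFrame(int width, int height, int cx, int cy, int cw, int ch) {
    // Copy the cached static layer into the default framebuffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

    // Restrict all further drawing to the caret's rectangle.
    glEnable(GL_SCISSOR_TEST);
    glScissor(cx, cy, cw, ch);
    // drawCaret();
    glDisable(GL_SCISSOR_TEST);
}
```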
However (and I can't emphasize this enough), benchmark and check whether this is even needed. Having two framebuffer objects could be more expensive than just always redrawing everything. The same goes for, say, having a buffer for each rectangle, in contrast to packing all rectangles into a single buffer.
Lastly, to give an example, let's take NanoGUI (a minimalistic GUI library for OpenGL). NanoGUI continuously redraws the screen.
The problem with not continuously redrawing the screen is that you now need a system for issuing redraws. Calling setText() on a label must trigger a callback telling the window to redraw. And what if the parent panel the label belongs to isn't visible? Then setText() just issued a redundant redraw of the screen.
The point I'm trying to make is that a system for issuing redraws is more prone to errors. So unless continuous redrawing is actually an issue, it is definitely the better starting point.
When designing a GUI in most languages, you typically don't give exact dimensions for each component. Rather, you say how GUI components fit and size relative to each other. For example, Button1 should take up all the space Button2 and Button3 don't use; the TextPanel should fill as much space as it can; and the horizontal list of images should expand and shrink as the window expands and shrinks. In AnyLogic, I don't see any obvious way to do this, yet I need to develop models that work on multiple screen sizes. Is it possible to auto-scale GUI components in AnyLogic as it is in other languages? If so, how?
Unfortunately, there is no direct support for that as far as I know.
However, some of your requests can be achieved programmatically, i.e. by using the dynamic properties of your GUI elements.
There is the function getWindowWidth() (and a corresponding height function) for experiments, and you can set your button's width to equal that. With a bit of playing around, you should be able to get your desired result.
Cheers
I've seen cases where people use DrawFrameControl along with PtInRect (testing the mouse position against the rectangle of the frame control) to simulate the effect of having a control (like a button). Why would you want to do this rather than using child windows?
An example where this technique is used is this docking framework, where the docking window's close button isn't a physical window.
For an application I'm writing, I'm using a list view control which will hold up to 1000 items. Each item will hold, let's say, 10 buttons. All buttons are custom drawn.
Would it be considered a more efficient (and faster) approach to use the PtInRect mechanism for this?
Each process has a limit of about 10,000 window handles. Not only would it be inefficient to create windows for 10 buttons on each of 1,000 items, it wouldn't necessarily be possible.
To answer your question: yes, creating "virtual" buttons by painting and hit-testing yourself is a much better solution.
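A sketch of the idea (the VirtualButton struct and handler names are illustrative, not a prescribed API):

```cpp
// "Virtual" buttons: painted with DrawFrameControl, hit-tested with PtInRect.
#include <windows.h>
#include <vector>

struct VirtualButton {
    RECT rect;     // in client coordinates of the hosting window
    bool pressed;  // drawn pushed-in while the pointer is held on it
};

std::vector<VirtualButton> g_buttons;

// Call from the host's WM_PAINT handler.
void PaintButtons(HDC hdc) {
    for (const VirtualButton& b : g_buttons) {
        RECT r = b.rect;  // copy: DrawFrameControl may adjust the rect
        UINT state = DFCS_BUTTONPUSH | (b.pressed ? DFCS_PUSHED : 0u);
        DrawFrameControl(hdc, &r, DFC_BUTTON, state);
    }
}

// Call from the host's WM_LBUTTONDOWN handler with the click position.
void OnLButtonDown(HWND hwnd, POINT pt) {
    for (VirtualButton& b : g_buttons) {
        if (PtInRect(&b.rect, pt)) {
            b.pressed = true;
            InvalidateRect(hwnd, &b.rect, FALSE);  // repaint just this button
            break;  // only the topmost hit reacts
        }
    }
}
```

One HWND then hosts any number of buttons, with no per-button handle cost.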
Having separate child windows brings certain overhead with it. Each child has its own attributes: window class, styles, window procedure, position, owning thread, etc. If you need to manage a large array of such children, many of their attributes will most probably be the same, wasting system resources. It may also become harder to manage them as separate windows:
You may notice painting glitches. Each window paints somewhat independently of the others, so when someone moves another window on top of your multi-child window and then moves it away, depending on how you organize things you may see unpleasant intermediate states, like the background being redrawn with holes visible at the location of each child moments before the children repaint. CS_PARENTDC might help with that.
Whenever you need to reposition these children, you need to use the DeferWindowPos family of functions, or else suffer similar repainting issues.
All these children need separate messages to manipulate.
All in all, it may be even simpler to simulate these children. My feeling is that attempting to put 10,000 buttons on top of a list view will be much harder to implement and will suffer from the visual problems above. Using DrawFrameControl with owner-drawn List View items would most probably give you a simpler implementation with a better visual result.
Besides, there is a limit on the number of windows you can create, as @arx noted.
Is there a standard Aqua way to handle a practically infinite document?
For example, imagine a level editor for a tile-based game. The level has no preset size (though it's technically limited by NSInteger's size); tiles can be placed anywhere on the grid. Is there a standard interface for scrolling through such a document?
I can't simply limit the scrolling to areas that already have tiles, because the user needs to be able to add tiles outside that boundary. Arbitrarily creating a level size, even if it's easily changeable by the user, doesn't seem ideal either.
Has anyone seen an application that deals with this problem?
One option is to dynamically expand the area as the user scrolls through it: any time the user scrolls within X units of an edge, add another unit in that direction. Essentially, you'll never be able to scroll "all the way" to an edge, because the closer you get, the farther it expands.
If the user scrolls back away from the edge, contract it back to no more than X units beyond where there is actually content.
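One way to express both rules at once (the per-axis helper and its names are mine, not the answer's): keep each document edge exactly X units beyond whichever is farther out, the content or the current view. Growing and shrinking then fall out of a single formula:

```cpp
// Expand/contract rule for one axis, in tile units.
#include <algorithm>

struct Extent { int min, max; };  // current document bounds along one axis

Extent updateExtent(int viewMin, int viewMax,
                    int contentMin, int contentMax, int margin) {
    Extent e;
    // The edge sits `margin` tiles beyond both the real content and the
    // view, so the user can never scroll all the way to it, and empty
    // space collapses as soon as they scroll back.
    e.min = std::min(viewMin, contentMin) - margin;
    e.max = std::max(viewMax, contentMax) + margin;
    return e;
}
```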
Have you seen what Microsoft Excel does for this problem? It has to represent an unbounded space with scrollbars, as well.
One solution is to define a reasonable space for the original level size, and when the user scrolls to one tile away from its bounds, add another row or column of tiles, and adjust the scrollbar accordingly. This way, the user never reaches the actual bounds.
If the user decides to cut down on the level size, you could also add code that shrinks the "reasonable space" once a boundary row or column consists only of empty tiles. This saves the user from being stuck with a huge level that they scrolled through with no way to shrink it.
Edit: Same as Dav's answer. :)
I'm involved in writing a touchscreen application for a medical device. The program is kiosk-like, in that the Start menu, etc., will not be accessible to the user, and the user will use an on-screen keyboard to type any text in the rare event that they need to. The spec'd screen size is 1280x1024.
The question is this: What's the minimum touchable button size for a reasonable interface? I'm thinking that an American dime is a reasonable minimum size in all directions, with the reasoning being that a dime is about as small as we can expect people to feel with their fingers (it's got a diameter of 17.91 mm, according to the almighty Wikipedia).
Or is a dime a bit on the large side?
EDIT: Some extra knowledge about our users. They have to have both hands, because of the nature of the device, and they will not be wearing gloves. They would have to be able to manipulate film cassettes of up to 14x17 inches in size (again, due to the nature of the device), so I feel reasonably confident that they have some manual dexterity.
There are a lot of factors that go into this, and they can require the buttons to be a lot larger.
Based on my experience, I would use something no smaller than a US quarter; a dime is really too small for repeated use. If users will wear gloves or other things that "fatten" the finger, you will need even larger buttons. If you have users who are disabled or have poor motor control (more common than you would expect), a dime-sized button is absolutely useless. Keep in mind that some touch screens are not particularly accurate; I've worked with some that were up to 25 pixels off near the edges.
Also, how often are they pushing the buttons, and how many buttons will be on the screen? Having to hit many small touch-screen buttons in sequence will drive users crazy. That's among the reasons many touch-screen systems, such as ATMs, are starting to have buttons in excess of 1 inch by 2 inches.
Also, how fault-tolerant is the system if they accidentally push the wrong button, maybe because the screen calibration was off or they were viewing it from an angle? The less fault-tolerant it is, the bigger your buttons need to be. Mistakes happen, and smaller buttons mean more mistakes.
That's going to depend a lot on the device itself as well as the UI that you're putting on it.
Many touchscreens (excluding multitouch) average the area touched, so chances are you don't want to put UI items any closer together than a fingertip's width, to cut down on mis-touches.
Touch screens need to be calibrated post installation, and even then there might be some parallax depending on the screen type or the natural angle from which the screen is being viewed.
You might also consider giving visual feedback on touch-down and only letting the event through on touch-release. When you get a touch, display an X larger than a fingertip centered on the point of contact and let it track the finger until the touch is released. That release point is the click point.
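A sketch of that commit-on-release pattern (the TouchTracker type and the marker/dispatch callbacks are illustrative placeholders):

```cpp
// Show a marker while the finger is down; fire the click only on release.
struct TouchTracker {
    bool tracking = false;
    int x = 0, y = 0;  // latest contact point

    void onTouchDown(int tx, int ty) {
        tracking = true;
        x = tx; y = ty;
        // showMarker(x, y);  // draw an X larger than a fingertip
    }

    void onTouchMove(int tx, int ty) {
        if (!tracking) return;
        x = tx; y = ty;
        // moveMarker(x, y);  // let the marker track the finger
    }

    void onTouchUp() {
        if (!tracking) return;
        tracking = false;
        // hideMarker();
        // dispatchClick(x, y);  // the release point is the click point
    }
};
```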
Above all, user test, user test, user test.
If you're really worried about size - make it fluid so you can change it easily and test it on real users under real circumstances. Did I mention user testing?
Depends on so many factors...
All touchscreens degrade, giving less accurate, noisier, and less linear results over time. Resistive touchscreens have vastly improved, but you'll still have issues, especially with such a large touchscreen.
Further, people have different sized fingers, and different levels of hand/eye coordination.
Lastly, the hardware and software that actually processes the touches before it even gets into your application needs to be calibrated and characterized.
So it's really much more difficult than merely asking how big the buttons need to be.
First, I'd talk to the manufacturer of the touchscreen itself and the manufacturer of the hardware/software interface, and find out their recommendations.
Second, I'd do some tests: make a few targets on screen, then record all the points the touchscreen hardware sends you when you press them. Do a bunch of tests with quick jabbing presses, hard long presses, light presses, etc. See how much the input jumps around (you might be surprised...) and how fast it actually responds (heavy filtering might make the presses register late, which could reduce usability without speedy feedback).
Third, I'd design the user interface for GREAT user feedback. Make the buttons wider than the finger, so that when the user presses one, they still see a little of the button, and make it appear to depress when they press it.
Fourth, I'd consider adding adaptive learning. You'll have to run and rerun calibration occasionally, but in between you can adjust to the user slightly to decrease the likelihood of errors. It's not for the faint-hearted, though, and it's easy to do wrong, so take care if you want to consider it.
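One simple form of that adaptation (entirely illustrative; the exponential moving average and its smoothing factor are my assumptions, not Adam's method): track the running offset between where users touch and the center of what they hit, and subtract it from future touches.

```cpp
// Slowly learn a per-user touch bias and correct for it.
struct TouchCalibrator {
    double dx = 0.0, dy = 0.0;  // running estimate of the user's bias
    double alpha = 0.05;        // small factor so adaptation stays gentle

    // Call when a touch at (tx, ty) resolved to a target centered at (cx, cy).
    void learn(double tx, double ty, double cx, double cy) {
        dx += alpha * ((tx - cx) - dx);
        dy += alpha * ((ty - cy) - dy);
    }

    // Apply the learned bias to raw coordinates before hit-testing.
    void correct(double& tx, double& ty) const {
        tx -= dx;
        ty -= dy;
    }
};
```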
-Adam
I would agree somewhat with James that a US quarter is a better target, but often you can make it about a US quarter in one dimension and a US dime in the other. Of course, you want to keep the centers of active UI elements separated from each other by at least the diameter of a US dime, depending on how recoverable clicking on the wrong element is. (Clicking on the wrong element in a list is easily recoverable if all that is being done is selecting it; clicking on the exit button of a dialog isn't so much.)
Be sure to make your most common cases really easy to do. This usually means making a really big button; or, sometimes, making a whole UI area "clickable" (but if you do that, it's often nicer to the user to also see some sort of button in the area to imply they can click on it.)
Depending on the quality of the touchscreen, you can often "cheat" around the edges. Not only are the edges and corners easiest to hit on a traditional mouse-based interface, but they can be for touchscreen-based interfaces as well, since the user can press their finger right up against the framing element and not worry about triggering something else. Make sure your UI elements accept clicks all the way to the edge of the screen if they are near the edge, as well. Using this, you could have a US dime-sized button in each corner that would still be very repeatably clickable (although possibly lacking feedback if your users have large fingers).
Bigger is better; as we know, size matters.
It is important for users of a touchscreen interface to get feedback when they tap somewhere on the screen, and because this feedback cannot be mechanical, as it is with a physical keyboard or mouse button, we use sound and/or color changes.
Playing a sound only when a control is clicked, rather than on every touch anywhere on the screen, achieves better usability.
If the working environment is noisy, you must give color feedback too, and in that case the button should be two or three times bigger than a finger. It is important that the user can see the feedback while their hand is on the screen, and hands are not made of glass.
On-screen layout is important too. For a mouse-driven interface, drop-downs, combo boxes, menus, and the like are fine, but on a touch screen they will kill usability. Also, instead of placing controls in the upper-left part of the screen, put them at the bottom right; this gives your users an information area on screen that stays free of their hands while they work.
Think about left- and right-handed layouts, and day/night color schemes, too.
Currently I am working on a set of .NET UI components for touch screens; you will be able to see demos soon on my website.
Atanas