PtInRect vs child windows - winapi

I've seen cases where people use DrawFrameControl along with PtInRect (where the mouse position is tested against the rectangle of the frame control) to simulate the effect of having a control (such as a button). Why would you want to do this rather than using child windows?
An example where this technique is used is this docking framework, where the docking window's close button isn't a physical window.
For an application I'm writing, I'm using a list view control which will hold up to 1000 items. Each item will hold, let's say, 10 buttons. All buttons are custom drawn.
Would it be considered a more efficient (and faster) approach to use the PtInRect mechanism for this?

Each process has a limit of about 10,000 window handles. Not only would it be inefficient to create windows for 10 buttons on each of 1,000 items, it wouldn't necessarily be possible.
To answer your question: yes, creating "virtual" buttons by painting and hit-testing yourself is a much better solution.
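A minimal sketch of that approach, as a fragment of the (subclassed) control's window procedure; the helpers ItemFromPoint, GetButtonRect, and OnVirtualButtonClick are hypothetical names for your own layout and click handling:

    #include <windows.h>
    #include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

    const int BUTTONS_PER_ITEM = 10;

    // Hypothetical helpers: which item lies under a point, the client
    // rectangle that button j of an item was painted into, and the handler.
    int  ItemFromPoint(POINT pt);
    RECT GetButtonRect(int item, int button);
    void OnVirtualButtonClick(int item, int button);

    // Inside the window procedure:
    case WM_LBUTTONDOWN:
    {
        POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
        int item = ItemFromPoint(pt);
        for (int j = 0; j < BUTTONS_PER_ITEM; ++j)
        {
            RECT rc = GetButtonRect(item, j);
            if (PtInRect(&rc, pt))   // the "button" is just a painted rectangle
            {
                OnVirtualButtonClick(item, j);
                break;
            }
        }
        return 0;
    }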

Separate child windows bring a certain overhead with them. Each child has its own attributes: window class, styles, window procedure, position, owning thread, and so on. If you need to manage a large array of such children, many of those attributes will most probably be the same, wasting system resources. It can also become harder to manage them as separate windows:
You may notice painting glitches. Each window paints somewhat independently of the others, so when someone moves another window on top of your many-child window and then moves it away, depending on how you organize things you may see unpleasant intermediate states, like the background being redrawn with holes visible at the location of each child moments before the children repaint. CS_PARENTDC might help with that.
Whenever you need to reposition these children, you need to use the DeferWindowPos family of functions or else suffer similar repainting issues.
Each of these children needs its own messages to manipulate it.
All in all, it may even be simpler to simulate these children. My feeling is that attempting to put 10,000 buttons on top of a list view will be much harder to implement and will suffer from the visual problems above. Using DrawFrameControl with owner-drawn list view items would most probably give you a simpler implementation with a better visual result.
Besides, there is a limit on the number of windows you can create, as @arx noted.
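For the painting side, a sketch of the matching WM_DRAWITEM handling for an owner-drawn list view (LVS_OWNERDRAWFIXED); GetButtonRect and the pressed-state arguments are the same hypothetical helpers as above:

    // Paint the virtual buttons of one owner-drawn list view item.
    // pressedItem/pressedButton would be tracked by the mouse handlers.
    void DrawItemButtons(const DRAWITEMSTRUCT* dis, int pressedItem, int pressedButton)
    {
        for (int j = 0; j < BUTTONS_PER_ITEM; ++j)
        {
            RECT rc = GetButtonRect((int)dis->itemID, j);
            UINT state = DFCS_BUTTONPUSH;
            if ((int)dis->itemID == pressedItem && j == pressedButton)
                state |= DFCS_PUSHED;   // draw the currently pressed button as pushed
            DrawFrameControl(dis->hDC, &rc, DFC_BUTTON, state);
        }
    }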

Related

How do I automatically resize GUI elements in AnyLogic?

When designing a GUI in most languages, you typically don't give exact dimensions for each component. Rather, you say how GUI components fit and size relative to each other. For example, Button1 should take up all the space Button2 and Button3 don't use; the TextPanel should fill as much space as it can; and the horizontal list of images should expand and shrink as the window expands and shrinks. In AnyLogic, I don't see any obvious way to do this, yet I need to develop models that work on multiple screen sizes. Is it possible to auto-scale GUI components in AnyLogic as it is in other languages? If so, how?
Unfortunately, there is no direct support for that as far as I know.
However, some of what you want can be achieved programmatically, i.e., through the dynamic properties of your GUI elements.
There is a getWindowWidth() function (and a height counterpart) for experiments, and you can set your button's width to match it. With a bit of experimenting, you should be able to get the result you want.
cheers

How do GUI programs determine which construct on the screen was clicked?

A GUI application draws various constructs (text, buttons, lines, etc.) on the screen, and users can often click on them. When that happens, how does the program determine which construct is associated with the coordinates the user clicked? Does it test whether the pixel is inside the construct (something like Shape.contains in AWT) for every construct in the program? How can this kind of testing be done efficiently when hundreds of different constructs are on the screen (as in browsers or drawing programs such as Inkscape)?
Usually by taking advantage of the hierarchical nature of the on-screen elements: first the window containing the click, then the top-level panel in that window, then the sub-panel, and so on. Even with hundreds of widgets on screen, you converge very quickly this way, in roughly logarithmic time.
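A minimal generic sketch of that descent (the Widget type is made up for illustration; each child's bounds lie within its parent's, so a failed test prunes the whole subtree):

    #include <windows.h>   // RECT, POINT, PtInRect
    #include <vector>

    struct Widget {
        RECT bounds;                     // same coordinate space as the click
        std::vector<Widget*> children;   // each child lies inside the parent
    };

    Widget* HitTest(Widget* w, POINT pt)
    {
        if (!PtInRect(&w->bounds, pt))
            return nullptr;              // prunes this entire subtree
        for (Widget* child : w->children)
            if (Widget* hit = HitTest(child, pt))
                return hit;              // deepest (most specific) element wins
        return w;                        // no child claims the point
    }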

Are off-stage DisplayObjects in Flash still slowing down my game?

How does Flash deal with elements that are off-stage?
Obviously Flash doesn't actually render them (because they don't appear anywhere on-screen), but is the process of rendering them still existent, slowing down my game as much as it would if the elements were on-screen?
Or does Flash intelligently ignore elements that don't fall into a renderable area?
Should I manually manage removing objects from the display list and adding them back as they exit and enter the stage, or is this irrelevant?
Yes, they are slowing down your game.
In one of my early experiments I developed a side-scroller with many NPCs scattered around the map, not all visible on the same screen. Their logic still had to run, but they weren't on screen. Performance was significantly better once I removed them from the display list whenever they were irrelevant (by simply checking their X position relative to the 'camera'). Again, I'm not talking about any additional code and events that may be attached to them, just plain graphical children of a movie clip.
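The same culling idea in generic form (a C++ sketch with a made-up sprite type; in AS3 the last line would be the addChild/removeChild calls):

    #include <vector>

    struct Sprite { float x, width; bool onDisplayList; };

    void CullByCamera(std::vector<Sprite>& sprites, float cameraX, float viewWidth)
    {
        for (Sprite& s : sprites)
        {
            // Visible if any part of the sprite overlaps the camera window.
            bool visible = s.x + s.width > cameraX && s.x < cameraX + viewWidth;
            s.onDisplayList = visible;   // in AS3: addChild(...) / removeChild(...)
        }
    }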
The best practice, though, in my experience, is drawing the objects into bitmaps. Of course, if you're already deep into your game this may be irrelevant, but if you have the time to invest, it is one of the best ways to get the most out of AS3 for 2D games. I found some of the best tutorials on bitmaps and AS3 at 8bitrocket:
http://www.8bitrocket.com/books/the-essential-guide-to-flash-games/ I can elaborate on the subject if you want, but I think I'm going off topic here.
Even if some display objects are outside the stage area, they are still executed. If they have any animation playing in them, that can slow down performance.
The question arises: why keep unused items outside the stage area at all? If you need to 'cache' the movie clips for faster loading, load them in a keyframe the playhead will never reach. For example, load the display objects you want to show in frame 1, put a stop() in that frame's Actions panel, and load the unused animations in frame 2. Since there is a stop() in frame 1, the playhead never enters frame 2, but the display objects are still cached.
Or, if the unused display objects contain code and thus need to be loaded along with the main game components, try putting stop() in the frames of the unused display objects so that they don't animate.

How does Windows (or other OSes) update the client background area?

Or, to ask it another way, how does OnEraseBkgnd() work?
I'm building a custom control and I've hit this problem. The children are rectangles, as usual. I had to disable OnEraseBkgnd() and I use only OnPaint(). What I need is to clear the area behind the children efficiently and without flickering. Techniques like back buffers are not an option.
Edit: I am very interested in the algorithm under the hood of OnEraseBkgnd(), but any helpful answer will also be accepted.
Usually in Windows, the easiest (though not the most effective) means of reducing flicker is to turn off the WM_ERASEBKGND notification handling. This is because if you erase the background in the notification handler and then paint the window in the WM_PAINT handler, there is a short delay between the two, and that delay is seen as flicker.
Instead, if you do all of the erasing and drawing in the WM_PAINT handler, you will tend to see a lot less flicker, because the delay between the two is reduced. You will still see some flicker, especially when resizing, because there is still a small delay between the two actions, and you cannot always get all of the drawing in before the next occurrence of the vertical blanking interrupt for the monitor. If you cannot use double buffering, this is probably the most effective method available to you.
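As a sketch, this boils down to claiming the erase and doing all of the drawing in one place (a plain window procedure fragment):

    case WM_ERASEBKGND:
        return 1;   // report "erased" so Windows doesn't clear the background itself

    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        // Background and foreground drawn back to back, minimizing the gap.
        FillRect(hdc, &ps.rcPaint, (HBRUSH)(COLOR_WINDOW + 1));
        // ... draw the actual content here ...
        EndPaint(hwnd, &ps);
        return 0;
    }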
You can get better drawing performance by following most of the usual recommendations around client area invalidation: do not invalidate the whole window unless you really need to, and try to invalidate only the areas that have changed. Also, you should use the BeginDeferWindowPos family of functions if you are updating the positions of a collection of child windows at the same time.
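A sketch of that deferred-positioning idiom, assuming arrays of child handles and target rectangles already exist:

    HDWP hdwp = BeginDeferWindowPos(childCount);   // hint: number of windows to move
    for (int i = 0; i < childCount && hdwp; ++i)
    {
        hdwp = DeferWindowPos(hdwp, hwndChild[i], NULL,
                              x[i], y[i], cx[i], cy[i],
                              SWP_NOZORDER | SWP_NOACTIVATE);
    }
    if (hdwp)
        EndDeferWindowPos(hdwp);   // all children are repositioned in one pass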

Smallest button size on a touchscreen

I'm involved in writing a touchscreen application for a medical device. The program is kiosk-like, in that the start menu, etc, will not be accessible to the user, and the user will use an onscreen keyboard to type any text in the rare event that they need to. The spec'd screen size is 1280x1024.
The question is this: What's the minimum touchable button size for a reasonable interface? I'm thinking that an American dime is a reasonable minimum size in all directions, with the reasoning being that a dime is about as small as we can expect people to feel with their fingers (it's got a diameter of 17.91 mm, according to the almighty Wikipedia).
Or is a dime a bit on the large size?
EDIT: Some extra knowledge about our users. They have to have both hands, because of the nature of the device, and they will not be wearing gloves. They would have to be able to manipulate film cassettes of up to 14x17 inches in size (again, due to the nature of the device), so I feel reasonably confident that they have some manual dexterity.
There are a lot of factors that go into this, and they can require the buttons to be a lot larger.
Based on my experience, I would use nothing smaller than a US quarter; a dime is really too small for repeated use. If users will wear gloves or anything else that "fattens" the finger, you will need even larger buttons. If you have users who are disabled or have poor motor control (more common than you would expect), a dime-sized button is absolutely useless. Keep in mind that some touch screens are not particularly accurate; I've worked with some that were up to 25 pixels off near the edges.
Also, how often are they pushing the buttons, and how many buttons will be on the screen? Having to hit many small touch screen buttons in sequence will drive users crazy. That's among the reasons many touch screen systems, such as ATMs, are starting to have buttons in excess of 1 x 2 inches.
Also, how fault-tolerant is the system if they accidentally push the wrong button, maybe because the calibration was off or they were viewing from an angle? The less fault-tolerant it is, the bigger your buttons need to be. Mistakes happen, and smaller buttons mean more mistakes.
That's going to depend a lot on the device itself as well as the UI that you're putting on it.
Many touchscreens (excluding multitouch) average the touched area, so chances are you don't want to put UI items any closer together than a fingertip's width, to cut down on mis-touches.
Touch screens need to be calibrated after installation, and even then there may be some parallax depending on the screen type or the natural angle from which the screen is viewed.
You might also consider giving visual feedback on touch-down and only letting the event through on touch release. When you get a touch, display an 'x' larger than a fingertip, centered on the point of contact, and let it track until the touch is released. That release point is the click point.
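A sketch of that press-track-release pattern, using plain mouse messages as a stand-in for the touch input (ShowMarkerAt, MoveMarkerTo, HideMarker, and CommitClickAt are hypothetical helpers):

    // Window procedure fragment; 'tracking' lives in per-window state.
    case WM_LBUTTONDOWN:
        tracking = true;
        SetCapture(hwnd);   // keep receiving moves while the finger is down
        ShowMarkerAt(GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam));   // oversized 'x'
        return 0;

    case WM_MOUSEMOVE:
        if (tracking)
            MoveMarkerTo(GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam));
        return 0;

    case WM_LBUTTONUP:
        if (tracking)
        {
            tracking = false;
            ReleaseCapture();
            HideMarker();
            CommitClickAt(GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam));  // the click point
        }
        return 0;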
Above all, user test, user test, user test.
If you're really worried about size, make it fluid so you can change it easily, and test it on real users under real circumstances. Did I mention user testing?
Depends on so many factors...
All touchscreens degrade, giving less accurate, noisier, and less linear results over time. Resistive touchscreens have vastly improved, but you'll still have issues, especially with such a large touchscreen.
Further, people have different sized fingers, and different levels of hand/eye coordination.
Lastly, the hardware and software that actually processes the touches before it even gets into your application needs to be calibrated and characterized.
So it's really much more difficult than merely asking how big the buttons need to be.
First, I'd talk to the manufacturer of the touchscreen itself, and the manufacturer of the hardware/software interface and find out their recommendations.
Second, I'd do some tests: make a few targets on screen, then record all the points that the touchscreen hardware sends you when you press them. Do a bunch of tests with quick jabbing presses, hard long presses, light presses, and so on. See how much the input jumps around (you might be surprised...) and how fast it actually responds (heavy filtering can make presses register late, which hurts usability when feedback isn't speedy).
Thirdly, I'd design the user interface for GREAT user feedback. Make the buttons wider than the finger so that when users press one they still see a little of the button, and make it appear to depress when they press it.
Fourthly, I'd consider adding adaptive learning. You'll have to run and rerun calibration occasionally, but in between you can adjust to the user slightly to decrease the likelihood of errors. It's not for the faint-hearted, though, and it's easy to get wrong, so take care if you want to consider it.
-Adam
I would agree somewhat with James that a US quarter is a better target, but often you can make it about a US quarter in one dimension and a US dime in the other. Of course, you want to keep the centers of active UI elements separated from each other by at least the width of a US dime, depending on how "solvable" clicking on the wrong element is. (Clicking on the wrong element in a list is easily solvable if all that is being done is selecting it; clicking on the exit button of a dialog isn't so much.)
Be sure to make your most common cases really easy to perform. This usually means making a really big button, or sometimes making a whole UI area "clickable" (though if you do that, it's often kinder to the user to also show some sort of button in the area to imply that it can be clicked).
Depending on the quality of the touchscreen, you can often "cheat" around the edges. Not only are the edges and corners easiest to hit on a traditional mouse-based interface, but they can be for touchscreen-based interfaces as well, since the user can press their finger right up against the framing element and not worry about triggering something else. Make sure your UI elements accept clicks all the way to the edge of the screen if they are near the edge, as well. Using this, you could have a US dime-sized button in each corner that would still be very repeatably clickable (although possibly lacking feedback if your users have large fingers).
Bigger is better; as we know, size matters.
It is important for users of a touchscreen interface to get feedback when they tap somewhere on the screen, and since this feedback cannot be mechanical as with a normal keyboard or mouse button, we use sound and/or color changes.
Playing a sound only when a control is hit, rather than on every touch anywhere on the screen, gives good usability.
If the working environment is noisy, you must give color feedback too, and in that case the button should be two or three times bigger than a finger. It is important that the user can see the feedback while their hand is on the screen, and hands are not made of glass.
Layout on the screen is important too. For a mouse-driven interface, drop-down comboboxes, menus, and so on are fine, but on a touch screen they will kill usability. So instead of placing controls in the upper-left part of the screen, put them on the bottom right; that leaves users an information area that stays free of their hands while they work.
Think about left-handed/right-handed layouts and day/night color schemes too.
I am currently working on a set of .NET UI components for touch screens; you will be able to see demos on my web site soon.
Atanas