WM_SIZE: size changed by user? (Windows)

Is it possible, in a Windows message handler for WM_SIZE, to detect whether the current size change was triggered by a user action (such as resizing with the mouse or via the system menu and keyboard)?
(Currently I set/reset a flag indicating whether the resize is "because of my code", but this is quite unwieldy in some cases.)
[edit] Use case:
The purpose is to distinguish "the size the user sets" from size changes triggered by other operations (also under the user's control).
In this particular case, I have a property sheet control where each page has a different minimum/default size.
The expected user behavior is as follows:
the minimum size of the sheet is no smaller than required by the current page (i.e. it changes when the page changes)
if the user sizes the sheet to "as small as possible" then switches to another page, it should be sized to "as small as possible for that page", too.
(informal first-level usability testing - i.e. me toying around with it - has shown that this "use smallest size" is better tracked separately for X and Y)
Yes, this leads to the sheet size jumping when the page is changed. This is unfortunate, but better than the alternatives in this particular application.
In this case, Aero docking isn't supported for that window since it's not top-level.
FWIW, having change messages always fire consistently for all controls, and having an indicator of whether the change was triggered by a user action or programmatically, ranks pretty high on my list of "essential for a UI control API".
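For illustration, one common way to approximate this distinction is to track the interactive sizing modal loop with WM_ENTERSIZEMOVE / WM_EXITSIZEMOVE. A minimal sketch, assuming a plain Win32 window procedure (g_inUserSizing is a hypothetical flag):

    // Sketch: treat WM_SIZE as user-initiated only while the window is inside
    // the interactive move/size modal loop. g_inUserSizing is a made-up flag.
    #include <windows.h>

    static bool g_inUserSizing = false;

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_ENTERSIZEMOVE:   // user started dragging a border/caption or used the system menu's Size/Move
            g_inUserSizing = true;
            return 0;

        case WM_EXITSIZEMOVE:    // the modal loop ended (mouse released, Enter or Esc)
            g_inUserSizing = false;
            return 0;

        case WM_SIZE:
            if (g_inUserSizing)
            {
                // size change driven by the user resizing interactively
            }
            else
            {
                // size change from code (SetWindowPos/MoveWindow) or from the system
            }
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

Note that this only covers the interactive sizing loop; a maximize or restore via the caption buttons or the taskbar does not enter that loop, so those cases still need separate handling (for example, the flag approach described above).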

Related

WCAG 2.1 - high contrast toggle button

I am maintaining a site for a client. The site has a high contrast toggle that sets appropriate contrast to meet WCAG 2.0 AA. However, without the user toggling that button, the site does not pass WCAG 2.0 AA (which is the goal). Once the user toggles the button, it seems to be WCAG 2.0 compliant.
The previous site developer said that it is fine as long as the user is able to switch to high contrast.
Questions:
[1] Does having a high contrast toggle (small icon in the upper left) make the site WCAG 2.0 AA compliant? Or does the default site need to be compliant before making the user hit a special button?
[2] Is there a universally recognized icon/location that gives a visual cue to switch a site to high contrast mode?
Short answer: yes, a high contrast toggle button is sufficient to pass WCAG, but it is not the greatest user experience. There are also some caveats in doing so, as mentioned in "G174: Providing a control with a sufficient contrast ratio that allows users to switch to a presentation that uses sufficient contrast":
The link or control on the original page must itself meet the contrast requirement of the desired SC. (If the user cannot see the control they may not be able to use it to go to the new page.)
The new page must contain all the same information and functionality as the original page.
The new page must conform to all of the SC for the desired level of conformance. (i.e., the new page cannot just have the desired level of contrast but otherwise not conform).
It's always best to have your main page satisfy minimum contrast requirements (WCAG 1.4.3) rather than creating a separate page, and possibly a separate user experience, that conforms to WCAG. If you go this route, hopefully you're just applying a different CSS to the main page so that you don't have to worry about maintaining a separate high contrast page.
I believe this resource from the WCAG answers most of your questions: https://www.w3.org/TR/WCAG20-TECHS/G174.html
In short: Yes, a button with the described functionality is sufficient to meet AA compliance, if:
All other AA success criteria are also met when in this High Contrast Mode (we don’t want to break some accessibility functionality in High Contrast Mode)
The button itself must be of high contrast and visible when entering the site
When enabling High Contrast Mode, all existing content and functionality must still be present on the page
You should also aim for having a label for the button, stating what it does. If nothing else, then as a tooltip that can also be invoked with keyboard. (Using the title attribute to create a tooltip is not sufficient.) This is because not everyone will understand its purpose from a simple icon.

Can I improve window redraw, and if so, how?

Some years ago (2008) I wrote an online Child Support Calculator using PHP/JavaScript/Ajax/CSS (the Advanced Child Support Calculator), which may help to explain what I am replicating.
Some time after (2009 I think) I started writing an equivalent for Windows and am now revisiting this. The fundamentals of this version are working.
However, I have an issue with the window noticeably flickering/changing when controls are dynamically added and the window is redrawn/rebuilt.
Note! This calculator is specific to Australia.
In short I'm looking for a way, if possible, to only refresh/re-display/redraw the window after all components have been added.
Basically, the windows controls need to be dynamically added/removed depending upon the scenario (number of adults and children involved).
Adding or removing a child or adult, or performing the calculation, results in a complete rebuild of the window. That is, all existing controls are destroyed and then all valid controls are added (this could perhaps be minimised via complex logic).
The issue is that controls are removed but briefly reappear (in an ordered fashion) causing the display to flicker (for want of a better description).
The following screenshots demonstrate the complexity factor (e.g. a child has a drop-down for each defined adult), but not the flickering.
Here's a screen shot of the Initial display (OK pretty ugly at present) :-
And then if an Adult is added (note that the child now has an extra dropdown for Adult 3, the new adult):-
And now, an Adult (as above) and a Child:-
Coding-wise, there is a RebuildAll function. This has two main stages: (1) the removal of the controls; (2) the rebuild (recreation) of the appropriate controls (CreateWindow, SendMessage calls and then ShowWindow).
At a minimum there are 61 controls. The number of controls is
23 + ((#children * 12) - 2) + (#children * (#adults * 8)) + ((#adults * 10) - 4). Suffice to say that the number of controls grows rapidly (the #children * #adults term grows multiplicatively).
I suspect it might be possible to postpone the ShowWindow until after all of the builds have been done. Is the solution (theoretically) as simple as this, assuming it is feasible, or is there another way that would avoid having to change all the code to remove the individual 'ShowWindow' calls and add a single ShowWindow at the end of the builds?
Note: replacing the individual ShowWindow calls with one significantly reduced the "flicker" but didn't eliminate it (as per the Update below).
As an endnote, I'm pretty sure that something should be feasible, as the Windows program I have to date is a very poor reflection of the speed of the browser/JavaScript version, which basically does the same thing on Windows (albeit 64-bit).
Update
I went through and commented out the ShowWindow calls in all the AddItem???? functions and added a single ShowWindow to the function that calls all of them. This has improved matters. However, the numerous DestroyWindow calls still cause flickering when removing all of the controls. So now I guess I'm looking for something that can stop them from effectively doing the equivalent of a ShowWindow.
Update 2
I have found SendMessage(hwnd, WM_SETREDRAW, FALSE, 0); (TRUE to turn drawing back on). However, this appears to suppress the DestroyWindow in that the display aspect of the control remains (the controls themselves don't appear to respond, though). Presumably WM_SETREDRAW doesn't actually suppress the DestroyWindow; it only suppresses repainting, so the destroyed control's old pixels remain on screen until that area is redrawn.
By a process of elimination I have cured the flicker with a combination of ShowWindow and WM_SETREDRAW calls at pertinent points.
To briefly re-describe the issue: when adding a child or adult, or performing a calculation, the entire display is rebuilt. The rebuild basically consists of two phases: first destroying the existing control windows, and then adding the new/replacement set of window controls.
The resolution was to:-
Issue a ShowWindow(hwnd, SW_SHOW); (for the main/containing window) after all the pre-existing window controls had been destroyed.
Issue a WM_SETREDRAW FALSE message, as per SendMessage(hwnd, WM_SETREDRAW, FALSE, 0);, before rebuilding the window controls, without any ShowWindow calls being used.
Build the window controls via the various functions.
Finally, when all the window controls have been built, issue a ShowWindow(hwnd, SW_SHOW); (for the main/containing window).
Rather than the controls appearing and then disappearing ("the flickering"), the window is blank for a while and then the controls are displayed all at once.
I'm still confused about what is actually going on though, due to the following:-
After some thought I decided to use a MessageBox after all the destroys. When the message was displayed the window was blank. However, when the button was clicked and processing resumed, then in situations where the number of controls was relatively high, controls would appear and then disappear in an orderly fashion, albeit momentarily.
My guess is that creating a control in an area where an existing control had been was causing that previous control to momentarily display.
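For reference, the sequence above might look roughly like this in code. This is only a sketch: DestroyAllControls and BuildControls are hypothetical stand-ins for the existing destroy/create code, and the RedrawWindow call (the usual companion to turning WM_SETREDRAW back on) is an addition to the steps listed above:

    #include <windows.h>

    // Hypothetical wrappers around the existing code.
    void DestroyAllControls(HWND hwndMain);   // the DestroyWindow loop
    void BuildControls(HWND hwndMain);        // the CreateWindow/SendMessage rebuild, no per-control ShowWindow

    void RebuildAll(HWND hwndMain)
    {
        // 1. Tear down the old controls, then show the (now blank) main window.
        DestroyAllControls(hwndMain);
        ShowWindow(hwndMain, SW_SHOW);

        // 2. Suppress painting while the replacement controls are created.
        SendMessage(hwndMain, WM_SETREDRAW, FALSE, 0);

        // 3. Recreate the appropriate controls.
        BuildControls(hwndMain);

        // 4. Re-enable painting and repaint everything in one pass.
        SendMessage(hwndMain, WM_SETREDRAW, TRUE, 0);
        RedrawWindow(hwndMain, NULL, NULL,
                     RDW_ERASE | RDW_FRAME | RDW_INVALIDATE | RDW_ALLCHILDREN);
        ShowWindow(hwndMain, SW_SHOW);
    }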

Should an icon show current state or next state?

When using icon images without text captions, should the icon represent the current state or the next state? For example I have a block of text that I want to minimize / maximize or I want to toggle showing All User Records or just My Records. I'm sure there are compelling arguments for either side and know that consistency is key, but what are the arguments related to good intuitive user design?
There is neither standardization nor a general human tendency on this. For example, the MS Windows UX Interaction Guidelines specify four basic kinds of toggling progressive disclosure control. Three out of four show the state-when-activated, while one shows the current state.
I believe if you test a particular approach on your users, you'll get different results depending on what you ask. If you show them a control and ask them what state the app is in, they'll tend to read the icon as if it were indicating the state. If you show them a control and ask them to change the state (where in some cases the state is already changed), they'll read the icon as if it were the state to achieve. It's precisely because of this they invented toggling buttons.
If you're lucky, users use the icon primarily for either reading the state or setting the state, and not both. Then let the icon indicate whatever the users use it for.
If they indeed use it for both reading the state and setting the state, you're basically hosed, but there are a few things you can try to minimize hosehood:
Use text in addition to or instead of an icon. When labeled with a verb (e.g., "Connect"), the control indicates the state the user gets. When labeled with an adjective or noun (e.g., "On Line"), it implies the current state.
If your lib doesn't support toggling icons, then consider using a checkbox control, if that's allowed.
If your lib doesn't support checkboxes, then consider two controls, one to set each state, where the control for the current state is disabled. Not too good for reading the current state, but there's some precedent for this in pulldown menus.
Fiddle with graphic design or placement to make it consistent with the meaning you've chosen. For example:
Command buttons are always labeled with the action they commit, so if your icon indicates the state the user gets, then give the icon a raised appearance like a command button. If the icon indicates the current state, then give it a flat appearance.
Toolbar controls usually show the state they bring about, so put the icon at the top of the window if it indicates the state the user gets. In contrast, icons in the "work area" of the window indicate objects or attributes, so icons there should show the current state. Icons at the bottom of the window (in the status bar) should also show the current state.
This has not been truly standardized. Folder icons, for example, show open folders when they are open and closed folders when they are closed. Same for disclosure triangles, etc.
However, in other contexts, this is not always true. In a movie player, the "Play" arrow shows when the movie is not playing, and it shows the pause icon when it is playing. Probably the thing to do is use your best judgment, then poll your users. If a preponderance of the people you test are confused by your icon choices, switch them around. Then test them again and see if your initial test holds up. :)
If you are just going to have one button to toggle between two states, then the button should represent the next state, because that is the action that the button will take when clicked.
You gave the example of text that is minimized/maximized. Think of any expandable tree interface you've ever seen in Windows. A minimized tree has a [+] next to it, because clicking the button will expand the tree. And a maximized tree has a [-] next to it for the same reason.
You could also try to make a toggle that is highlighted or "pressed down" like mihi says, but that might be more confusing.
I prefer the "next state" approach (click plus to expand, click minus to collapse).
One reason is that this is the most widely used approach, so doing anything else would confuse users (and me as well).
Another reason is that the "next state" approach looks more inviting for the user to click.
May I present Zoom's mute button:
It shows the current state, as an icon, and the action that will occur when you push the button, as text. In other words, the icon on the button is the current state and the label on the button is the new state. The current state and the new state are opposites, so the button appears to contradict itself unless you read it very carefully.
(I hate that button.)

How to design a user interface for high latency conditions?

Working on an application that controls a remote robot where there is the potential for significant delay between pressing a button and that action actually happening. Furthermore, there is the potential that the command did not successfully reach the intended recipient after all (due to network unreliability, etc.). Additionally, there are variables in play whose changes are not instantaneous. For instance, there is a variable both for commanded speed as well as current speed; changing the commanded speed will not immediately make the current speed match that value.
The question is, how do I make the application reflect both the current state the remote robot is reporting, and also acknowledge to the user that his command was understood by the application even though the system has not yet received acknowledgement from the robot? (Popups are an absolute no-go.)
Some ideas that have been discussed:
Disable Buttons
When a command button is pressed, start a timer for some reasonable number of seconds and disable the button during that time. Don't update the corresponding label directly, but instead wait for a response from the robot (e.g. if you press a Speed + button, and to the right is a text label showing the current speed, don't immediately change the label but instead wait for a response from the robot). Once this response occurs, or when the timer expires, re-enable the button.
Pros: No additional control widgets needed on page. Labels always reflect current state of the robot.
Cons: If you wanted to send two speed updates in a row, would have to wait until first had been received and acknowledged. Would feel sluggish and unresponsive.
Logging Info
Have a log that users can view that shows a textual representation of all the actions the user has taken, timestamped, and with a history clearly visible. It could be color-coded based on user preferences.
Pros: User has immediate feedback that his command was understood, as it appears in the log
Cons: Does not resolve problem of what to do with button (especially radio button) behavior.
Does anyone have experience with building UIs for environments in which there is significant latency between action and response? I would appreciate any and all input.
I would not go for a log: your main focus is the widgets. There are several techniques for reporting status for different components; I will discuss a simple one here:
A button has a status icon next to it showing its status. Use different colors to denote the latency. Green means "ready"; when the user clicks the button, the icon changes to orange, which indicates busy. When the user clicks again, the color changes to red, which means queued. When the queue is empty, the color changes back to orange. Once the action has been executed, the color changes to green.
A slider can be used for floating-point values: use two "sliders". The first is more visible and can be dragged. The second, a layer below the first, is the "actual reported value", which shows the latency.
Textual input can also use the green/orange status icons. While editing, the color changes to orange. If your queue/networking protocol supports cancelling editing actions, you can resend the new string every time the user presses a key. If not: change the icon to orange upon a change, send the string, and wait for a status report. The status report should contain the actual value; if this actual value is equal to the value in the component, change the icon to green. If the actual value is not equal to the value in the component, resend the value in the component (see the sketch below).
Radio buttons/checkboxes should have a double display: one editable, one uneditable. The first is for user input and the second is for the actual reported status. Same behavior as the slider component.
These require custom components, or widgets, to be made. You can extend original components or recreate them from scratch.
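As an illustration of the status-report handshake described for textual input above, here is a minimal, toolkit-agnostic sketch (FieldState, TextFieldSync and sendToRobot are hypothetical names):

    #include <functional>
    #include <string>

    // Sketch: keep a text widget's value in sync with the robot's reported value.
    enum class FieldState { Synced /* green */, AwaitingAck /* orange */ };

    struct TextFieldSync {
        std::string shownValue;                               // what the widget currently shows
        FieldState  state = FieldState::Synced;
        std::function<void(const std::string&)> sendToRobot;  // asynchronous transmit

        // Called when the user finishes editing the field.
        void onUserEdited(const std::string& newValue) {
            shownValue = newValue;
            state = FieldState::AwaitingAck;                  // icon turns orange
            sendToRobot(shownValue);
        }

        // Called when a status report arrives with the value the robot actually has.
        void onStatusReport(const std::string& reportedValue) {
            if (reportedValue == shownValue) {
                state = FieldState::Synced;                   // icon turns green
            } else {
                sendToRobot(shownValue);                      // robot is stale: resend
            }
        }
    };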
If your robot can also be "steered":
Create a rectangle that can be dragged on. The rectangle has a small cross painted on it, showing the current value. While you drag, you see the latency of the cross. You can use interpolation and time values to smooth out robot control. A user will notice the lag because the cross "follows" his mouse pointer. (This is often used in space-shooter simulation games when controlling virtual ships, Allegiance for instance.)
It sounds like you have the following information that needs to be communicated to the user:
Current state or value of an attribute of the robot.
Target (i.e., received) value the robot is seeking.
Commanded value sent by the user to the robot.
Status of each command (pending, received, achieved, timed-out).
There are also a couple of other considerations:
Continuous or discrete feedback. Do you have continuous real-time feedback of the current value from the robot? Or is it discrete feedback, where the robot sends the current value only after achieving a target? Obviously continuous is better for the user, since it allows the user to distinguish between the robot being slow and being stuck, but if you don’t have it, you have to find a way to live without it.
Synchronous or asynchronous command-sending. If the commands are sent synchronously (i.e., no new one is sent out until the last one is known to be received), then the user may need a means to (a) force out the next command without waiting for a reply from the previous (in case the reply was lost), and (b) cancel a queued command in case conditions change between when a command was created and sent.
Robot with or without conflict resolution. Does the robot have the logic to look ahead in the list of commands it has received and resolve conflicting commands? For example, if it’s at a stop and its received-commands queue includes a command to go 5 m/s followed by a command to go 2 m/s, is it smart enough to delete the 5 m/s command? Or will it first attempt to accelerate to 5 m/s and then to 2 m/s, possibly resulting in an overshoot? Will it wait until it achieves 5 m/s before it even “looks at” the 2 m/s command? A lack of conflict resolution complicates your UI because the users may have to track all commands sent to understand why the robot is behaving the way it is.
Integrated Information with Position-coding Controls
Let’s assume that the robot has conflict resolution and asynchronous command-sending. Rather than have separate controls for commanded, targeted and current values, I recommend you integrate them all in a single control to make it easy for the user to compare current, target, and commanded values, and see discrepancies. Perhaps the best way to do this is by representing values by positions in the window. Such position-coding of values is unmatched in showing the relations among things. There are two standard GUI controls, radio buttons and sliders, that accomplish such position coding. However, you’ll have to augment them for your purposes. The best usability could require custom-made position-coding controls where you schematically represent the robot and maybe the environment, and allow the user to control it through direct manipulation. However, I'm assuming you need a simple-to-develop implementation, and a well-laid-out combination of sliders and radio buttons may get you pretty close to this ideal.
Use radio buttons for setting a categorical value and a slider for setting a numeric value. The slider may include a text box to allow the user to fine-tune the value. You augment each of these controls so that they show the commanded, target, and current values at the same time. The “usual” indicator (the dot for the radio button and the handle for the slider) represents the commanded value, while separate graphic pointers indicate the current and target values. Discriminate the current from the target by making the current more prominent. I’d design them such that they merge into a single pointer when they are at the same value in order to minimize clutter for the usual state of things. If your users are untrained on the system you may want to include text labels on the pointers (“current”, and “target” when the target is different from the current).
Using these position-coding controls makes it easy for the user to compare current, target, and commanded values and see discrepancies. The status is implicit in the relative positions of the indicators. When the target pointer moves to the commanded position, the user knows the command was received. When the current pointer is at the target, the robot has achieved the commanded value. This is especially good for continuous feedback of numeric values because users can not only see the difference between the current and target values on a slider, they can estimate how long it will take for it to achieve the target by seeing how fast the pointer is closing on the target. For the radio buttons, you can include “X% Done” text by the target pointer to indicate when the target will be achieved (if this information is available).
For the most responsive UI, changing a value of a slider or radio buttons should send an immediate command. There is no “Apply” button. Users can re-send a commanded value at any time by re-clicking on the appropriate slider position or radio button. I think you'll find this is a natural human tendency anyway when confronted with an apparently unresponsive control (consider elevator buttons).
The discrepancy between the target pointer and the commanded indicator may be too subtle to signal a lack of reception of a command if responses are commonly slow (over a few seconds, such that user attention has likely shifted elsewhere). If that is the case, you may want to include a modeless alert after a time-out period that almost certainly indicates the command was lost in transmission. A modeless alert may include a text annunciator beside the control and/or graphically highlighting the commanded-target discrepancy. Depending on criticality, you may want to use an audible alert like a beep, or an animation, to speed capturing user attention. The modeless alert disappears automatically when the target value matches the commanded value for whatever reason.
Separate Controls for Commanded and Current
If sliders and radio buttons take too much space for your purposes (or have other issues), you can go with separate non-position-coding controls for commanded and current values, as implied by your Disable Buttons design. However, overall, this is a more challenging design with more issues to resolve.
I would favor field controls like text boxes, check boxes, and dropdown menus, rather than command buttons so that the commanded value is clearly shown. Continuous numeric attributes may include spinner buttons with the text box if it doesn’t end up clogging the queue with incremental commands. As with the above option, changing a value sends an immediate command.
You’re right to be concerned about using a timer and disabling. In addition to the problem of making the system sluggish, it means you gray out the commanded state. That can make it hard to read, and it also requires some mental gymnastics by the user to interpret (“it’s unavailable, so that means I already selected it”). The interpretation can also be ambiguous because often disabled means Not Applicable (e.g., the Speed control is disabled because the robot has lowered stabilizers for fixed-base operation).
The solution is to use some other graphic feedback than disabling. I’d stay away from color coding. Color coding tends to be arbitrary and thus confusing (e.g., does red mean queued or timed-out?). This may be one of the (rare) good places to use animation since animation is intuitive for representing an on-going process. A flashing or throbbing border (or other feature) for the commanded-value control can indicate a sent command is awaiting reply. A flashing/throbbing border for the current-value control indicates the command is received and the robot is seeking the target value. If animation would be distracting in this situation (like it is for most other situations), then consider a dashed border (versus solid) to indicate awaiting reply or seeking target; dashed suggests a tentative or transitory state.
The target value and status are implied by what is animated. If the commanded-value border is animated, the value inside is pending – the reply is yet to be received. If the current border alone is animated, the value inside the commanded control is the target value. If both borders are animated, then the robot is seeking one (unspecified) target value while another is pending. If you think it’s problematic to leave the target unspecified in such circumstances, then you may need three controls to discriminate commanded, target, and current. However, if this is an edge case, it may be better to display the target value on mouse-over of the current-value control or with a drop-down button.
If feedback is continuous, you can also update the current value of numeric attributes at regular increments (about every 200 to 500 ms) so this animation provides an additional feedback of the robot seeking the target. For any attribute, if it takes 10-15 seconds or more for the robot to reach common targets and the robot has conflict resolution, you may want to also display a progress bar within or beside the current attribute control so the user can judge when the robot will achieve the target value.
To re-send the command, users can always re-select a value, or hit Enter while focus is on the commanded control. That’s a little odd and awkward for some controls (e.g., checkboxes), so I’d also consider a modeless notification (not a popup) that appears near the control if the command times out without a reply. The notification includes a button to resend the command.
If your users are untrained on the system you may want to include redundant text under the animation (e.g., “Sending” when waiting for a reply and “Seeking” when moving towards the target value).
Log Table
The logging approach is probably best if commands are sent synchronously and/or the robot lacks conflict resolution. This way the user can track the command queue for either sending commands or the commands received by the robot in order to predict robot behavior. However, I wouldn’t make it a read-only text box, but rather a table that can be manipulated. While the table is sorted by default by timestamp, there will be separate sortable fields for the attribute, the commanded value, the status (pending, seeking, achieved, timeout). If feedback is continuous, then the status should indicate progress towards achieving the target value (e.g., percent, or a progress bar).
If there is synchronous command-sending, then users can edit the commanded values of pending commands, or force forward, move, or delete pending commands. In any case, commands can be copied and re-inserted in order to resend any command from any time. Maybe even provide a means to save selected commands and retrieve and insert them later – now you have a macro facility.
If the robot tends to be especially cranky (frequent loss of communications, slow responses), then you may want to have this log table beside the controls for creating commands and viewing the current values. The controls should be set up to make the creation of a discrete command clear to the user. With a cranky computer-to-robot interface, spurious commands are costly, so each command should be well-planned and deliberate. Likely this means a set of field controls like text boxes and drop-down lists to set values of various attributes, and a button that generates the command(s) for those values. Awkward, yes, but that’s an accurate representation of the communication link with the robot.
Alternatively, if typically the queues are nearly empty, then you may want to make this table available in a separate window for experts to troubleshoot problems with robot behavior. Normally then the users use one of the other two options I gave above.
Maybe you can use a variation of the command pattern. Each action by the user generates a command, which goes into a queue. The queue is visible to the user in a printout on the screen. So you do not disable the button, but allow the user to press the button multiple times while showing the user that the command is queued. At the same time, you do not update the labels showing the current state of the robot until you receive the state from the robot.
In the queue the command could show its status somehow, maybe via text and colour. And maybe you should allow the user to delete a command before it is processed by the robot (if that is possible).
So the queue might look like this:
Command Status Result of Action
speed+5 pending speed will increase to 200 (Delete This)
speed+5 pending speed will increase to 205 (Delete This)
speed-5 pending speed will decrease to 200 (Delete This)
and so on.
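To make the idea concrete, here is a minimal, toolkit-agnostic sketch of such a command queue in C++. CommandStatus, Command and CommandQueue are hypothetical names; the UI would render items() as the list shown above and update each row's status as replies (or timeouts) arrive:

    #include <cstdint>
    #include <deque>
    #include <string>

    enum class CommandStatus { Pending, Acknowledged, Achieved, TimedOut, Cancelled };

    struct Command {
        uint64_t      id;           // unique id, used to match robot replies to queue rows
        std::string   description;  // e.g. "speed +5 (speed will increase to 200)"
        CommandStatus status;
    };

    class CommandQueue {
    public:
        // Add a command and show it immediately as "pending" in the UI.
        uint64_t enqueue(std::string description) {
            const uint64_t id = nextId_++;
            queue_.push_back(Command{id, std::move(description), CommandStatus::Pending});
            return id;
        }

        // "(Delete This)" in the mock-up above: only pending commands can be cancelled.
        bool cancel(uint64_t id) {
            for (auto& cmd : queue_) {
                if (cmd.id == id && cmd.status == CommandStatus::Pending) {
                    cmd.status = CommandStatus::Cancelled;
                    return true;
                }
            }
            return false;
        }

        // Called when a reply (or a timeout) for a given command arrives.
        void setStatus(uint64_t id, CommandStatus status) {
            for (auto& cmd : queue_) {
                if (cmd.id == id) { cmd.status = status; return; }
            }
        }

        const std::deque<Command>& items() const { return queue_; }  // rendered by the UI

    private:
        std::deque<Command> queue_;
        uint64_t nextId_ = 1;
    };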

Checking a Win32 Tree View Item Automatically Checks all Child Items

I am using the Win32 API and MS Visual C++ 6 to build a tree view of a directory structure, with check boxes associated with each tree view item. My goal is to be able to check a parent folder, and have that automatically check all of its associated children.
However, after digging through MSDN, I have not been able to find a control notification message for an item being checked, only when an item is selected. I have considered using a selection notification message to prompt the program to poll the item and see if its current 'check state' is true, but I am not sure that checked and selected can be tied together in such a way, and am also concerned about the overhead associated with constantly polling items as a user moves through a very large directory.
Has anyone had experience setting up this scenario? Are my concerns about the overhead of polling a GUI element justified?
There isn't any notification. You can write your own, though. Just handle the mouse click and use hit-testing to see if the mouse cursor is on the state image. For completeness, handle the space key and send the same notification for the selected item too.
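For illustration, a sketch of that approach using the classic tree-view macros from commctrl.h (assuming a tree view with TVS_CHECKBOXES). WM_APP_CHECKSTATECHANGED, CheckChildren, OnTreeClick and OnCheckStateChanged are hypothetical names; the deferred message works around the fact that the check state is only toggled after NM_CLICK returns:

    #include <windows.h>
    #include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM
    #include <commctrl.h>

    #define WM_APP_CHECKSTATECHANGED (WM_APP + 1)

    // Recursively apply 'checked' to every descendant of 'parent'.
    static void CheckChildren(HWND hTree, HTREEITEM parent, BOOL checked)
    {
        for (HTREEITEM child = TreeView_GetChild(hTree, parent);
             child != NULL;
             child = TreeView_GetNextSibling(hTree, child))
        {
            TreeView_SetCheckState(hTree, child, checked);
            CheckChildren(hTree, child, checked);
        }
    }

    // Call this when the owner window receives NM_CLICK from the tree view.
    static void OnTreeClick(HWND hwndOwner, HWND hTree)
    {
        TVHITTESTINFO ht = { 0 };
        DWORD pos = GetMessagePos();          // click position in screen coordinates
        ht.pt.x = GET_X_LPARAM(pos);
        ht.pt.y = GET_Y_LPARAM(pos);
        ScreenToClient(hTree, &ht.pt);
        TreeView_HitTest(hTree, &ht);

        if (ht.hItem != NULL && (ht.flags & TVHT_ONITEMSTATEICON))
        {
            // The control toggles the check box only after NM_CLICK returns,
            // so defer reading the new state by posting a private message.
            PostMessage(hwndOwner, WM_APP_CHECKSTATECHANGED, 0, (LPARAM)ht.hItem);
        }
    }

    // Call this when WM_APP_CHECKSTATECHANGED arrives at the owner window.
    static void OnCheckStateChanged(HWND hTree, HTREEITEM item)
    {
        BOOL checked = (TreeView_GetCheckState(hTree, item) != 0);
        CheckChildren(hTree, item, checked);
    }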
As of Windows Vista, Microsoft has introduced NM_TVSTATEIMAGECHANGING and a corresponding NMTVSTATEIMAGECHANGING structure. I'm not sure why this is not documented with the Tree View notifications but instead in the General Control Reference section.
This notification code is sent by the tree control when the state image is changing (i.e. when the check box is clicked).
The NMTVSTATEIMAGECHANGING struct has iOldStateImageIndex and iNewStateImageIndex fields that specify the corresponding change. This information can be used to determine the new state image that will be displayed (1 is the unchecked box and 2 is the checked box).
Vista also introduces dimmed, partial and excluded checks, though at the moment I'm not sure how those are represented by the state image index beyond the two base cases above.
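For illustration, a sketch of handling this notification (assuming a Vista-or-later SDK; OnStateImageChanging is a hypothetical name). Because the change has not been applied yet when the notification arrives, the new state is read from the structure rather than from the item:

    #include <windows.h>
    #include <commctrl.h>

    static LRESULT OnStateImageChanging(const NMTVSTATEIMAGECHANGING* info)
    {
        // State image index 1 = unchecked box, 2 = checked box.
        BOOL willBeChecked = (info->iNewStateImageIndex == 2);

        // ... propagate 'willBeChecked' to the children of info->hti here,
        //     e.g. with a recursive helper like CheckChildren above ...
        (void)willBeChecked;

        return 0;   // check the documentation for whether the return value can veto the change
    }

    // In the owner's window procedure:
    //   case WM_NOTIFY:
    //       if (((LPNMHDR)lParam)->code == NM_TVSTATEIMAGECHANGING)
    //           return OnStateImageChanging((const NMTVSTATEIMAGECHANGING*)lParam);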
Here is the best way according to MSDN (the TreeView::AfterCheck event, for Windows Forms):
http://msdn.microsoft.com/en-us/library/system.windows.forms.treeview.aftercheck(v=vs.110).aspx
