Using Momentary Tracking with NSSegmentedControl

The documentation for NSSegmentedControl says the following about using NSSegmentSwitchTrackingMomentary mode:
A momentary segmented control sends an action when the user clicks a
segment, and another action when the user releases the segment. If
configured as continuous (see setContinuous:), the control also sends
actions at repeating intervals until the user releases the segment, at
which point the control sends its final action.
When the user clicks a segment, the selectedSegment value is the index
of the active segment. When the user releases the segment, the
selectedSegment value is -1.
However, this is not the behaviour I see... unless I am misunderstanding what Apple means by "clicking" and "releasing" a segment.
I would expect, from that description, that when the user presses the mouse button over a segment the action would be called, and upon releasing the mouse button the second action is sent.
However, I only ever see a single action, sent when the mouse button is released. Additionally, if I enable continuous mode I still receive only that single action upon releasing the mouse button.
The documentation states this functionality is available from OSX 10.3 and higher, so that shouldn't be the issue.
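For reference, here is a minimal Swift sketch of the setup being described (using the modern convenience initializer for brevity; the class and selector names are illustrative). Logging selectedSegment from the action makes it easy to see when actions actually arrive:

import Cocoa

// A minimal sketch of the configuration discussed above; names are illustrative.
final class SegmentTester: NSObject {
    // NSSegmentSwitchTrackingMomentary corresponds to .momentary in Swift.
    let control = NSSegmentedControl(labels: ["Back", "Forward"],
                                     trackingMode: .momentary,
                                     target: nil, action: nil)

    override init() {
        super.init()
        control.isContinuous = true                  // request repeating actions while held
        control.target = self
        control.action = #selector(segmentFired(_:))
    }

    @objc private func segmentFired(_ sender: NSSegmentedControl) {
        // Per the documentation, selectedSegment should be the pressed segment's
        // index on mouse-down and -1 on mouse-up.
        print("action fired, selectedSegment = \(sender.selectedSegment)")
    }
}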

Related

Automatically disappearing pop-over messages in Appian?

I am struggling to find the correct search term for a temporary notification window, similar to the Windows "Safe To Remove Hardware" pop-up, and how to implement it in Appian once a button is clicked.
The key features of this kind of pop-over are:
The message disappears after e.g. 10 seconds.
Before the 10 seconds have passed, one can dismiss the message by clicking an X in the upper right.
I am aware of the properties confirmHeader and confirmMessage (see the documentation on Submit Link), but this pop-over does not disappear automatically.
You could use some combination of local variables along with the a!refreshVariable() function, but it is not possible to refresh every 10 seconds; the lowest value is 30 seconds.
refreshVariable documentation:
https://docs.appian.com/suite/help/21.3/fnc_evaluation_a_refreshvariable.html

Cocoa: listening for key events and responding to them without a view

First of all hi guys!
I was trying to write a mouse-controller app for Mac OS X that reads input from the keyboard and moves the mouse accordingly. By "garbage input" I mean input that was intended as a mouse command but ends up as text on screen.
Before anyone points out that there is a built-in one: it was laggy even on the shortest lag setting and cannot register more than two keys at the same time (you have to press the diagonal keys to move diagonally), and if you accidentally press another key, your motion stops when you release the accidental key. My first and last reaction was "rubbish!". Adding customization and extra features is my goal.
I want to create a key combination that, while held, blocks the garbage input from being passed to other programs. But global monitoring seems to always pass the event through, and unfortunately I see text like "qqqqqqqwwwwwww" in unwanted places.
I want pressing q, w and up to make the mouse go up, but I create a "qqqqqqqwwwwww" mess on the way. My first idea was to create a view in a popover and handle events there, but whenever I want to drive the mouse from the keyboard, seeing a popover is annoying, and I couldn't find a way to show the popover without leaving any garbage keyboard input behind.
What should I do in this situation?
You will want to use Quartz Event Taps. Note that for an application to tap keyboard events, it has to be trusted for accessibility (as in System Preferences > Security & Privacy > Privacy > Accessibility). Your app can ask to be made trusted using AXIsProcessTrustedWithOptions().
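A sketch of that approach in Swift (assuming the app has been, or can prompt to be, granted Accessibility trust) might look like the following; returning nil from the tap callback is what prevents the key events from reaching other applications:

import Cocoa
import ApplicationServices

// Sketch only: create an active event tap for key events and swallow them so
// they never reach other applications. Where the comment says so, you would
// translate the key into your own mouse-movement command instead.

// Ask to be made trusted for Accessibility (shows the system prompt if needed).
let promptKey = kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String
guard AXIsProcessTrustedWithOptions([promptKey: true] as CFDictionary) else {
    fatalError("Grant Accessibility access in System Preferences, then relaunch")
}

let mask = CGEventMask(1 << CGEventType.keyDown.rawValue) |
           CGEventMask(1 << CGEventType.keyUp.rawValue)

guard let tap = CGEvent.tapCreate(
    tap: .cgSessionEventTap,
    place: .headInsertEventTap,
    options: .defaultTap,                      // an active tap may modify or swallow events
    eventsOfInterest: mask,
    callback: { _, type, event, _ in
        if type == .keyDown || type == .keyUp {
            // Translate the key into a mouse command here, then return nil to
            // swallow the event so no "garbage" text reaches other apps.
            return nil
        }
        return Unmanaged.passUnretained(event)
    },
    userInfo: nil
) else {
    fatalError("Could not create event tap (is the app trusted?)")
}

let source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0)
CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .commonModes)
CGEvent.tapEnable(tap: tap, enable: true)
CFRunLoopRun()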

wxHaskell Button State

I'm writing an application using wxHaskell and I want to be able to detect the state of a button (whether or not it is pressed at any given time). I'm having a bit of trouble figuring out how to do this, however. First I thought that there might be a "button is pressed" attribute that I could use, but there didn't seem to be. Then I had the idea of maintaining an IORef which I update on button-up and button-down events. However, that would require that the Button object actually have button-up and button-down events, which it does not appear to have. It is an instance of Commanding, but I assume that the command event is fired on button-up only, which isn't enough for that idea. Does anyone have any other suggestions?
Workaround
You can implement this yourself by detecting the low-level actions that trigger those events (e.g. mouse button down, space bar down).
In WX you can use the following function and constructor:
mouse :: Reactive w => Event w (EventMouse -> IO ())
data EventMouse = ... | MouseLeftDown !Point !Modifiers
And, as you suggest, you could keep the state yourself in an IORef. My suspicion is that left button here means main button (right for left-handed users).
UI design principles
The second question, which you haven't asked but I'll answer, is whether this is good UI design.
The behaviour of a button (assuming interaction using a mouse) is that click events are reported when the user releases the mouse button in the button area after pressing the mouse button down in the same area. If the user moves away and releases, or presses 'Escape', there is no click.
Taking any action on a button being pressed (not clicked) would feel unnatural for users.
In practice, the only acceptable way to use this would be, imho, to take an action whose effects can only be witnessed after releasing and which is immediately undone if the click is cancelled (i.e. mouse button released outside the button area).
EDIT: Please, also, take into account that users with accessibility requirements may have OS settings enabled that affect how and when button clicks are reported (but not down/up mouse events).
There is no way to know if a wxButton is pressed or not because it is an abstraction of a push button which intentionally hides this implementation detail. If you need to know the button state, use a wxToggleButton instead.

When reopening an application, how do I restore the zoomed state without causing problems?

If the user zooms our app's main window, closes the app, then reopens it, we want to restore the last state of the main window, so we zoom it before showing the window. The app opens zoomed, so everything is great.
However, if the user clicks the zoom button again, nothing happens. I believe this is because the documentation for zoom: says for step 5:
Determines a new frame. If the window is currently in the standard
state, the new frame represents the user state, saved during a
previous zoom. If the window is currently in the user state, the new
frame represents the standard state, computed in step 1 above. If
there is no saved user state because there has been no previous zoom,
the size and location of the window do not change.
I think it doesn't unzoom because there's no user state, but I'm not sure why - shouldn't the user state be "the size the window was at before it was zoomed"? If not, how can I make sure the user state is set properly so that the window un-zooms when the user clicks the zoom button again?
Edit: MrGomez responds below that this is basically how it's meant to work. This doesn't seem to be how other apps behave, though. Try Safari - Zoom it, then quit, then reopen the window, and it restores at the zoomed size. Click the zoom button and it goes back to a smaller size. iCal is the same. How are those apps doing it?
The trouble is, this is the expected behavior. As you mentioned here:
we want to restore the last state of the main window, so we zoom it
before showing the window
You've gone ahead and set the user's view as your standard view. The original default view has not been preserved.
This thread goes into detail about the expected functionality of zoom:. Quoting the accepted answer:
According to the documentation for the zoom: method (note the :), the
inverse of zoom: is zoom::
This action method toggles the size and location of the window between
its standard state (provided by the application as the “best” size to
display the window’s data) and its user state (a new size and location
the user may have set by moving or resizing the window).
If it's in the user state (not zoomed), it'll change to the standard
state (zoom), and if it's in the standard state (zoomed), it'll change
to the user state (unzoom).
The documentation also notes:
If there is no saved user state because there has been no previous
zoom, the size and location of the window do not change.
This is what will happen if you started the window out in its standard
state; since it was never in any other state, there is nothing for it
to unzoom back to.
The trouble is you've overridden the standard state; zoom:'s purpose is to toggle between the user's view and your interface's standard view. To save a user's window state for later restore, follow this guide.
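As an illustration of saving the window state yourself, here is a rough Swift sketch (one possible approach, not taken from the guide; the defaults keys are illustrative). The idea is to restore the saved user-state frame first and only then call zoom:, so that zoom: has a user state to toggle back to:

import Cocoa

// Sketch: persist the pre-zoom ("user state") frame and a wasZoomed flag,
// then restore the frame before zooming so zoom: can toggle back to it.
// The UserDefaults keys are illustrative.
func saveWindowState(_ window: NSWindow) {
    let defaults = UserDefaults.standard
    defaults.set(window.isZoomed, forKey: "MainWindowWasZoomed")
    if !window.isZoomed {
        defaults.set(NSStringFromRect(window.frame), forKey: "MainWindowUserFrame")
    }
}

func restoreWindowState(_ window: NSWindow) {
    let defaults = UserDefaults.standard
    if let frameString = defaults.string(forKey: "MainWindowUserFrame") {
        window.setFrame(NSRectFromString(frameString), display: false)   // user state
    }
    if defaults.bool(forKey: "MainWindowWasZoomed") {
        window.zoom(nil)   // now toggles between this user state and the standard state
    }
}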
And, for reference, here's the remainder of OSX's style guidelines and a very good tutorial for how you should work with this restriction.
Edit: As discussed, this is also a question of UI modality and the fact OSX windows may be bimodal (user state, maximized state) or trimodal (user state, standard state, maximized state) with a loss of the user state when an application is closed in maximized state. This appears to reproduce itself with stock-standard applications for OSX, including, for example, iCal.app.
As a result, the best that can be done here is to define a functioning UI paradigm that's consistent with applications for OSX, but carries many of the same idiosyncrasies of the little green glob.

How to design a user interface for high-latency conditions?

Working on an application that controls a remote robot where there is the potential for significant delay between pressing a button and that action actually happening. Furthermore, there is the potential that the command did not successfully reach the intended recipient after all (due to network unreliability, etc.). Additionally, there are variables in play whose changes are not instantaneous. For instance, there is a variable both for commanded speed as well as current speed; changing the commanded speed will not immediately make the current speed match that value.
The question is, how do I make the application reflect both the current states the remote robot is reporting, as well as acknowledging to the user that his command was understood by the application, but the system has not yet received notification from the robot that it has been acknowledged? (Popups are an absolute no-go.)
Some ideas that have been discussed:
Disable Buttons
When a command button is pressed, start a timer for some reasonable number of seconds and disable the button during that time. Don't update the corresponding label directly, but instead wait for a response from the robot (e.g. if you press a Speed + button, and to the right is a text label showing current speed, don't immediately change the label but instead wait for a response from the robot). Once this response occurs, or when the timer expires, re-enable the button.
Pros: No additional control widgets needed on page. Labels always reflect current state of the robot.
Cons: If you wanted to send two speed updates in a row, would have to wait until first had been received and acknowledged. Would feel sluggish and unresponsive.
Logging Info
Have a log that users can view that shows a textual representation of all the actions the user has taken, timestamped, and with the history clearly visible. It could be color coded based on user preferences.
Pros: User has immediate feedback that his command was understood, as it appears in the log
Cons: Does not resolve problem of what to do with button (especially radio button) behavior.
Does anyone have experience with building UIs for environments in which there is significant latency between action and response? I would appreciate any and all input.
I would not go for a log: your main focus is on the widgets. There are several techniques for reporting status for different components; I will discuss a simple one here:
A button has a status icon next to it showing its status. Use different colors to denote the latency: green means "ready"; when the user clicks the button, the icon changes to orange, which indicates busy; when the user clicks again, the color changes to red, which means queued. When the queue is empty, the color changes back to orange, and once the action has been executed, it changes back to green. (A small sketch of this state machine follows below.)
A slider can be used for floating-point values: use two "sliders". The first is more prominent and can be dragged. The second, a layer below the first, shows the actual reported value, which makes the latency visible.
Textual input can also use the green/orange status icons. While editing, the color changes to orange. If your queue/networking protocol supports canceling editing actions, you can resend the new string every time the user presses a key. If not: change the icon to orange upon a change, send the string, and wait for a status report. The status report should contain the actual value; if this actual value is equal to the value in the component, change the icon to green. If the actual value is not equal to the value in the component, resend the value in the component.
Radiobuttons/Checkboxes should have a double-display. One editable, one uneditable. The first is for user-input and the second is for actual reported status. Same behavior as the slider component.
These require custom components, or widgets, to be made. You can extend original components or recreate them from scratch.
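A small Swift sketch of the button status-icon state machine described above (the colours follow the text; the enum and method names are illustrative):

import Cocoa

// A sketch of the status-icon state machine: green = ready, orange = busy,
// red = queued. Names are illustrative.
enum CommandStatus {
    case ready      // no command outstanding
    case busy       // one command sent, not yet executed
    case queued     // further commands are waiting behind the busy one

    var iconColor: NSColor {
        switch self {
        case .ready:  return .systemGreen
        case .busy:   return .systemOrange
        case .queued: return .systemRed
        }
    }

    // The user clicked the button (again).
    func afterClick() -> CommandStatus {
        switch self {
        case .ready:         return .busy
        case .busy, .queued: return .queued
        }
    }

    // The robot reported that one command was executed.
    func afterExecution(queueEmpty: Bool) -> CommandStatus {
        return queueEmpty ? .ready : .busy
    }
}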
If your robot can also be "steered":
Create a rectangle which can be dragged upon. The rectangle has a small cross painted on, of the current value. While you drag, you see the latency of the cross. You can use interpolation and time values to smooth out robot control. A user will notice the lag because the cross "follows" his mouse-pointer. (Often used in space shooter simulation games when controlling virtual ships, Allegiance for instance)
It sounds like you have the following information that needs to be communicated to the user:
Current state or value of an attribute of the robot.
Target (i.e., received) value the robot is seeking.
Commanded value sent by the user to the robot.
Status of each command (pending, received, achieved, timed-out).
There are also a couple of other considerations:
Continuous or discrete feedback. Do you have continuous real-time feedback of the current value from the robot? Or is it discrete feedback, where the robot sends the current value only after achieving a target? Obviously continuous is better for the user, since it allows the user to distinguish between the robot being slow and being stuck, but if you don't have it, you have to find a way to live without it.
Synchronous or asynchronous command-sending. If the commands are sent synchronously (i.e., no new one is sent out until the last one is known to be received), then the user may need a means to (a) force out the next command without waiting for a reply from the previous (in case the reply was lost), and (b) cancel a queued command in case conditions change between when a command was created and sent.
Robot with or without conflict resolution. Does the robot have the logic to look ahead in the list of commands it has received and resolve conflicting commands? For example, if it's at a stop and its received-commands queue includes a command to go 5 m/s followed by a command to go 2 m/s, is it smart enough to delete the 5 m/s command? Or will it first attempt to accelerate to 5 m/s and then 2 m/s, possibly resulting in an overshoot? Will it wait until it achieves 5 m/s before it even "looks at" the 2 m/s command? A lack of conflict resolution complicates your UI because the users may have to track all commands sent to understand why the robot is behaving as it is.
Integrated Information with Position-coding Controls
Let's assume that the robot has conflict resolution and asynchronous command-sending. Rather than have separate controls for commanded, target, and current values, I recommend you integrate them all in a single control to make it easy for the user to compare current, target, and commanded values, and to see discrepancies. Perhaps the best way to do this is by representing values by positions in the window. Such position-coding of values is unmatched in showing the relations among things. There are two standard GUI controls, radio buttons and sliders, that accomplish such position coding. However, you'll have to augment them for your purposes. The best usability could require custom-made position-coding controls where you schematically represent the robot and maybe the environment, and allow the user to control it through direct manipulation. However, I'm assuming you need a simple-to-develop implementation, and a well-laid-out combination of sliders and radio buttons may get you pretty close to this ideal.
Use radio buttons for setting a categorical value and a slider for setting a numeric value. The slider may include a text box to allow the user to fine-tune the value. You augment each of these controls so that they show the commanded, target, and current values at the same time. The "usual" indicator (the dot for the radio button and the handle for the slider) represents the commanded value, while separate graphic pointers indicate the current and target values. Discriminate the current from the target by making the current more prominent. I'd design them such that they merge into a single pointer when they are at the same value in order to minimize clutter for the usual state of things. If your users are untrained on the system you may want to include text labels on the pointers ("current," and "target" when target is different than current).
Using these position-coding controls makes it easy for the user to compare current, target, and commanded values and see discrepancies. The status is implicit in the relative positions of the indicators. When the target pointer moves to the commanded position, the user knows the command was received. When the current pointer is at the target, the robot has achieved the commanded value. This is especially good for continuous feedback of numeric values because the users can not only see the difference between the current and target value on a slider, they can estimate how long it will take for it to achieve the target by seeing how fast the pointer is closing on the target. For the radio buttons, you can include "X% Done" text by the target pointer to indicate when the target will be achieved (if this information is available).
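To make the relation concrete, here is a tiny Swift model sketch of the commanded/target/current triple (the type and property names are illustrative); the status that the user reads off the pointer positions can be derived directly from the three values:

// A tiny model sketch of the commanded/target/current triple described above;
// names are illustrative.
struct AttributeState {
    var commanded: Double   // value last set by the user
    var target: Double      // value the robot acknowledged and is seeking
    var current: Double     // latest value reported by the robot

    enum Status { case pending, seeking, achieved }

    var status: Status {
        if target != commanded { return .pending }   // command not yet received
        if current != target   { return .seeking }   // robot moving toward the target
        return .achieved
    }
}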
For the most responsive UI, changing a value of a slider or radio buttons should send an immediate command. There is no “Apply” button. Users can re-send a commanded value at any time by re-clicking on the appropriate slider position or radio button. I think you'll find this is a natural human tendency anyway when confronted with an apparently unresponsive control (consider elevator buttons).
The discrepancy between the target pointer and the commanded indicator may be too subtle to signal a lack of reception of a command if responses are commonly slow (over a few seconds, such that user attention has likely shifted elsewhere). If that is the case, you may want to include a modeless alert after a time-out period that almost certainly indicates the command was lost in transmission. A modeless alert may include a text annunciator beside the control and/or graphically highlighting the commanded-target discrepancy. Depending on criticality, you may want to use an audible alert like a beep or an animation to speed capturing user attention. The modeless alert disappears automatically when the target value matches the commanded value for whatever reason.
Separate Controls for Commanded and Current
If sliders and radio buttons take too much space for your purposes (or have other issues), you can go with separate non-position-coding controls for commanded and current values, as implied by your Disable Buttons design. However, overall, this is a more challenging design with more issues to resolve.
I would favor field controls like text boxes, check boxes, and dropdown menus, rather than command buttons so that the commanded value is clearly shown. Continuous numeric attributes may include spinner buttons with the text box if it doesn’t end up clogging the queue with incremental commands. As with the above option, changing a value sends an immediate command.
You're right to be concerned about using a timer and disabling. In addition to the problem of making the system sluggish, it means you gray out the commanded state. That can make it hard to read, and it also requires some mental gymnastics by the user to interpret ("it's unavailable, so that means I already selected it"). The interpretation can also be ambiguous because disabled often means Not Applicable (e.g., the Speed control is disabled because the robot has lowered stabilizers for fixed-base operation).
The solution is to use some other graphic feedback than disabling. I’d stay away from color coding. Color coding tends to be arbitrary and thus confusing (e.g., does red mean queued or timed-out?). This may be one of the (rare) good places to use animation since animation is intuitive for representing an on-going process. A flashing or throbbing border (or other feature) for the commanded-value control can indicate a sent command is awaiting reply. A flashing/throbbing border for the current-value control indicates the command is received and the robot is seeking the target value. If animation would be distracting in this situation (like it is for most other situations), then consider a dashed border (versus solid) to indicate awaiting reply or seeking target; dashed suggests a tentative or transitory state.
The target value and status are implied by what is animated. If the commanded-value border is animated, the value inside is pending; the reply is yet to be received. If the current border alone is animated, the value inside the commanded control is the target value. If both borders are animated, then the robot is seeking one (unspecified) target value, while another is pending. If you think it's problematic to leave the target unspecified in such circumstances, then you may need three controls to discriminate commanded, target, and current. However, if this is an edge case, it may be better to display the target value on mouse-over of the current-value control or with a drop-down button.
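As one possible realization of the animated-border feedback (a sketch only, assuming a layer-backed AppKit control; the function and animation-key names are illustrative):

import Cocoa
import QuartzCore

// Sketch: pulse a control's border while a command is awaiting a reply and
// stop when the reply arrives. Assumes a layer-backed control.
func setAwaitingReply(_ awaiting: Bool, on control: NSControl) {
    control.wantsLayer = true
    guard let layer = control.layer else { return }
    if awaiting {
        layer.borderColor = NSColor.controlAccentColor.cgColor
        layer.borderWidth = 1.0
        let pulse = CABasicAnimation(keyPath: "borderWidth")
        pulse.fromValue = 0.5
        pulse.toValue = 3.0
        pulse.duration = 0.6
        pulse.autoreverses = true
        pulse.repeatCount = .infinity
        layer.add(pulse, forKey: "awaitingReplyPulse")
    } else {
        layer.removeAnimation(forKey: "awaitingReplyPulse")
        layer.borderWidth = 0
    }
}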
If feedback is continuous, you can also update the current value of numeric attributes at regular increments (about every 200 to 500 ms) so this animation provides an additional feedback of the robot seeking the target. For any attribute, if it takes 10-15 seconds or more for the robot to reach common targets and the robot has conflict resolution, you may want to also display a progress bar within or beside the current attribute control so the user can judge when the robot will achieve the target value.
To re-send the command, users can always re-select a value, or hit Enter while focus is on the commanded control. That's a little odd and awkward for some controls (e.g., checkboxes), so I'd also consider a modeless notification (not a popup) that appears near the control if the command times out without a reply. The notification includes a button to resend the command.
If your users are untrained on the system you may want to include redundant text under the animation (e.g., “Sending” when waiting for a reply and “Seeking” when moving towards the target value).
Log Table
The logging approach is probably best if commands are sent synchronously and/or the robot lacks conflict resolution. This way the user can track the command queue for either sending commands or the commands received by the robot in order to predict robot behavior. However, I wouldn’t make it a read-only text box, but rather a table that can be manipulated. While the table is sorted by default by timestamp, there will be separate sortable fields for the attribute, the commanded value, the status (pending, seeking, achieved, timeout). If feedback is continuous, then the status should indicate progress towards achieving the target value (e.g., percent, or a progress bar).
If there is synchronous command-sending, then users can edit the commanded values of pending commands or force forward, move, or delete pending commands. In any case, commands can be copied and re-inserted in order to resend any command at any time. Maybe even provide a means to save selected commands and retrieve and insert them later; now you have a macro facility.
If the robot tends to be especially cranky (frequent loss of communications, slow responses), then you may want to have this log table beside the controls for creating commands and viewing the current values. The controls should be set up to make the creation of a discrete command clear to the user. With a cranky computer-to-robot interface, spurious commands are costly, so each command should be well-planned and deliberate. Likely this means a set of field controls like text boxes and drop-down lists to set values of various attributes and a button that generates the command(s) for those values. Awkward, yes, but that's an accurate representation of the communication link with the robot.
Alternatively, if typically the queues are nearly empty, then you may want to make this table available in a separate window for experts to troubleshoot problems with robot behavior. Normally then the users use one of the other two options I gave above.
Maybe you can use a variation of the command pattern. Each action by the user generates a command, which goes into a queue. The queue is visible to the user in a printout on the screen. So you do not disable the button, but allow the user to press the button multiple times but show the user that the command is queued. At the same time you do not update the labels showing current state of robot until you receive the state from the robot.
In the queue the command could show its status somehow, maybe with text and colour. And maybe you should allow the user to delete a command before it is processed by the robot (if that is possible).
So the queue might look like this:
Command    Status     Result of Action
speed+5    pending    speed will increase to 200 (Delete This)
speed+5    pending    speed will increase to 205 (Delete This)
speed-5    pending    speed will decrease to 200 (Delete This)
and so on.
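A minimal Swift sketch of this command-pattern queue (class and property names are illustrative): each button press creates a command object, the visible queue lists the commands with their status and predicted result, and pending commands can still be deleted:

import Foundation

// Sketch of the visible command queue described above; names are illustrative.
final class RobotCommand {
    enum Status { case pending, sent, acknowledged, failed }

    let title: String               // e.g. "speed+5"
    let predictedResult: String     // e.g. "speed will increase to 205"
    var status: Status = .pending

    init(title: String, predictedResult: String) {
        self.title = title
        self.predictedResult = predictedResult
    }
}

final class CommandQueue {
    private(set) var commands: [RobotCommand] = []
    var onChange: (() -> Void)?     // hook for refreshing the on-screen queue

    // Called whenever the user presses a command button.
    func enqueue(_ command: RobotCommand) {
        commands.append(command)
        onChange?()
    }

    // The "(Delete This)" action: only pending commands can be removed.
    func delete(at index: Int) {
        guard commands.indices.contains(index),
              commands[index].status == .pending else { return }
        commands.remove(at: index)
        onChange?()
    }

    // Called when the robot reports that it executed a command.
    func markAcknowledged(at index: Int) {
        guard commands.indices.contains(index) else { return }
        commands[index].status = .acknowledged
        onChange?()
    }
}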
