When reopening an application, how do I restore the zoomed state without causing problems? (macOS)

If the user zooms our app's main window, closes the app, then reopens it, we want to restore the last state of the main window, so we zoom it before showing the window. The app opens zoomed, so everything is great.
However, if the user clicks the zoom button again, nothing happens. I believe this is because the documentation for zoom: says for step 5:
Determines a new frame. If the window is currently in the standard state, the new frame represents the user state, saved during a previous zoom. If the window is currently in the user state, the new frame represents the standard state, computed in step 1 above. If there is no saved user state because there has been no previous zoom, the size and location of the window do not change.
I think it doesn't unzoom because there's no user state, but I'm not sure why - shouldn't the user state be "the size the window was at before it was zoomed"? If not, how can I make sure the user state is set properly so that the window un-zooms when the user clicks the zoom button again?
Edit: MrGomez responds below that this is basically how it's meant to work. This doesn't seem to be how other apps behave, though. Try Safari - Zoom it, then quit, then reopen the window, and it restores at the zoomed size. Click the zoom button and it goes back to a smaller size. iCal is the same. How are those apps doing it?

The trouble is, this is the expected behavior. As you mentioned here:
we want to restore the last state of the main window, so we zoom it
before showing the window
You've gone ahead and set the user's view as your standard view. The original default view has not been preserved.
This thread goes into detail about the expected functionality of zoom:. Quoting the accepted answer:
According to the documentation for the zoom: method (note the :), the inverse of zoom: is zoom::
This action method toggles the size and location of the window between its standard state (provided by the application as the “best” size to display the window’s data) and its user state (a new size and location the user may have set by moving or resizing the window).
If it's in the user state (not zoomed), it'll change to the standard state (zoom), and if it's in the standard state (zoomed), it'll change to the user state (unzoom).
The documentation also notes:
If there is no saved user state because there has been no previous zoom, the size and location of the window do not change.
This is what will happen if you started the window out in its standard state; since it was never in any other state, there is nothing for it to unzoom back to.
The trouble is you've overridden the standard state; zoom:'s purpose is to toggle between the user's view and your interface's standard view. To save a user's window state for later restoration, follow this guide.
And, for reference, here's the remainder of OS X's style guidelines and a very good tutorial on how to work within this restriction.
Edit: As discussed, this is also a question of UI modality and the fact that OS X windows may be bimodal (user state, maximized state) or trimodal (user state, standard state, maximized state), with a loss of the user state when an application is closed in the maximized state. This reproduces with stock OS X applications, including, for example, iCal.app.
As a result, the best that can be done here is to define a functioning UI paradigm that's consistent with other OS X applications but carries many of the same idiosyncrasies of the little green blob.

Related

wxHaskell Button State

I'm writing an application using wxHaskell and I want to be able to detect the state of a button (whether or not it is pressed at any given time). I'm having a bit of trouble figuring out how to do this, however. First I thought that there might be a "button is pressed" attribute that I could use, but there didn't seem to be. Then I had the idea of maintaining an IORef which I update on button-up and button-down events. However, that would require that the Button object actually have button-up and button-down events, which it does not appear to. It is an instance of Commanding, but I assume that the command event is fired on button-up only, which isn't enough for that idea. Does anyone have any other suggestions?
Workaround
You can implement this yourself by detecting the low-level actions that trigger those events (eg. mouse button down, space bar down).
In WX you can use the following function and constructor:
mouse :: Reactive w => Event w (EventMouse -> IO ())
data EventMouse = ... | MouseLeftDown !Point !Modifiers
And, as you suggest, you could keep the state yourself in an IORef. My suspicion is that left button here means main button (right for left-handed users).
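Since wxHaskell wraps wxWidgets, the same idea is easy to sketch in C++ wxWidgets terms. Below is a minimal sketch with illustrative names, where a plain bool member plays the role of the IORef; note that native buttons on some platforms don't reliably forward raw mouse events, so treat this as an outline rather than a guaranteed recipe.

// Track whether a button is currently held down by watching the low-level
// mouse events, mirroring the IORef approach suggested above.
#include <wx/wx.h>

class PressTrackingFrame : public wxFrame {
public:
    PressTrackingFrame() : wxFrame(nullptr, wxID_ANY, "Press tracking") {
        wxButton* btn = new wxButton(this, wxID_ANY, "Hold me");
        // Bind the low-level mouse events on the button itself.
        btn->Bind(wxEVT_LEFT_DOWN, &PressTrackingFrame::OnDown, this);
        btn->Bind(wxEVT_LEFT_UP, &PressTrackingFrame::OnUp, this);
    }
    bool IsPressed() const { return pressed_; }

private:
    bool pressed_ = false;  // plays the role of the IORef

    // e.Skip() lets the button keep its normal click behavior.
    void OnDown(wxMouseEvent& e) { pressed_ = true;  e.Skip(); }
    void OnUp(wxMouseEvent& e)   { pressed_ = false; e.Skip(); }
};

class App : public wxApp {
    bool OnInit() override { (new PressTrackingFrame())->Show(); return true; }
};
wxIMPLEMENT_APP(App);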
UI design principles
The second question, which you haven't asked but I'll answer anyway, is whether this is good UI design.
The behaviour of a button (assuming interaction using a mouse) is that click events are reported when the user releases the mouse button in the button area after pressing the mouse button down in the same area. If the user moves away and releases, or presses 'Escape', there is no click.
Taking any action on a button being pressed (not clicked) would feel unnatural for users.
In practice, the only acceptable way to use this would be, imho, to take an action whose effects can only be witnessed after releasing and which is immediately undone if the click is cancelled (ie. mouse button released outside button area).
EDIT: Please, also, take into account that users with accessibility requirements may have OS settings enabled that affect how and when button clicks are reported (but not down/up mouse events).
There is no way to know if a wxButton is pressed or not because it is an abstraction of a push button which intentionally hides this implementation detail. If you need to know the button state, use a wxToggleButton instead.
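For comparison, here is a minimal C++ wxWidgets sketch of that alternative (wxToggleButton is the same control wxHaskell wraps); the labels here are arbitrary:

// A toggle button exposes its pressed/unpressed state directly.
#include <wx/wx.h>
#include <wx/tglbtn.h>

class ToggleApp : public wxApp {
    bool OnInit() override {
        wxFrame* f = new wxFrame(nullptr, wxID_ANY, "Toggle state");
        wxToggleButton* tb = new wxToggleButton(f, wxID_ANY, "Armed");
        tb->Bind(wxEVT_TOGGLEBUTTON, [tb](wxCommandEvent&) {
            // GetValue() is true while the button is toggled on.
            wxLogMessage(tb->GetValue() ? "down" : "up");
        });
        f->Show();
        return true;
    }
};
wxIMPLEMENT_APP(ToggleApp);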

On Windows, should my child window get a WM_WINDOWPOSCHANGED when its z-order changes relative to one of its siblings?

I am trying to insert a custom widget into the Internet Explorer 8 url bar, next to the stop and reload buttons. This is just a personal productivity enhancer for myself.
The "window model" for this part of the IE frame is an "address bar root" window that owns the windows which comprise the IE8 url bar: an edit box, a combo control, and the stop and reload buttons.
From another process, I create a new WS_CHILD window (with a custom class name) that is parented by IE's address bar root window, thus making it a sibling of the edit box and stop/reload. I call SetWindowPos with an hwndInsertAfter of HWND_TOP to make sure it appears "above" (i.e. "in") the urlbar. This works nicely, and I see my window painted initially inside the IE urlbar.
However, when I activate the IE window, the urlbar edit control jumps back in front of my window. I know this is happening because I still see my window painted behind the urlbar, and because when I print ->GetTopWindow() to the debug console on a timer, it becomes the HWND of the urlbar edit control.
If I update my message loop to call SetWindowPos with HWND_TOP on WM_PAINT, things are better -- now when I activate the IE window and move it around, my control properly stays planted above the edit control in the urlbar. However, as soon as I switch between IE tabs, which updates the text of IE's urlbar Edit control, my control shifts back behind the Edit control. (Note: This also happens when I maximize or restore the window.)
So my questions are:
1) Is it likely that IE is intentionally putting its urlbar edit control back on top of the z-order every time you click on a tab in IE, or is there a gap in my understanding of how Windows painting and z-ordering works? My understanding is that once you specify the z-ordering of child windows (which are not manipulable by the end-user), that ordering should remain until programmatically changed. So even though IE is repainting its Edit control upon tab selection whereas I am not repainting or otherwise acting upon my window, my window should still remain firmly on top.
2) Given that the z-order of my window is apparently changing, shouldn't it receive a WM_WINDOWPOSCHANGING/WM_WINDOWPOSCHANGED? If it did, I could at least respond to that event and keep myself on top of the Edit control. But even though I can see my window painting behind the urlbar Edit control when I click on a tab, and even though my debug window output confirms that the address bar root's GetTopWindow() becomes the HWND of the Edit control when I click on a tab, and even though I see WM_WINDOWPOSCHANGING/WM_WINDOWPOSCHANGED being sent to the Edit control with an hwndInsertAfter of HWND_TOP when I click on a tab, my own window receives no messages whatsoever that would allow me to keep the z-order constant. This seems wrong to me, and addressing it would force me to run in IE's process and hook all messages sent to its Edit control just to have an event to respond to :(
Thank you for your help!
It's quite likely that IE is juggling the Z-order of the controls when you change tabs. In IE9, the URL bar and the tabs have a common parent. When you select a new tab, it activates the URL bar (and activation usually brings the window to the top of its local Z order).
No. You get WM_WINDOWPOSCHANGED when a SetWindowPos function acts on your window. If some of the siblings have their z-orders changed, you don't get a message. Nobody called SetWindowPos on your window. You can see this by writing a test program that juggles the z-order of some child windows.
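Here is a minimal sketch of such a test program against the raw Win32 API (class names and geometry are arbitrary). Pressing any key while the main window has focus juggles the two overlapping children; only the child actually passed to SetWindowPos logs a WM_WINDOWPOSCHANGED, while its sibling stays silent:

// Test program: two overlapping child windows; any keypress reorders them
// with SetWindowPos. Watch the console output.
#include <windows.h>
#include <stdio.h>

LRESULT CALLBACK ChildProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_WINDOWPOSCHANGED)
        printf("WM_WINDOWPOSCHANGED for child %p\n", (void*)hwnd);
    return DefWindowProc(hwnd, msg, wp, lp);
}

LRESULT CALLBACK MainProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    static HWND childA, childB;
    switch (msg) {
    case WM_CREATE: {
        WNDCLASS wc = {};
        wc.lpfnWndProc = ChildProc;
        wc.hInstance = GetModuleHandle(NULL);
        wc.hbrBackground = (HBRUSH)(COLOR_BTNFACE + 1);
        wc.lpszClassName = TEXT("TestChild");   // illustrative class name
        RegisterClass(&wc);
        childA = CreateWindow(wc.lpszClassName, NULL,
                              WS_CHILD | WS_VISIBLE | WS_BORDER,
                              10, 10, 120, 120, hwnd, NULL, wc.hInstance, NULL);
        childB = CreateWindow(wc.lpszClassName, NULL,
                              WS_CHILD | WS_VISIBLE | WS_BORDER,
                              70, 70, 120, 120, hwnd, NULL, wc.hInstance, NULL);
        return 0;
    }
    case WM_KEYDOWN: {
        static bool aOnTop = false;
        aOnTop = !aOnTop;
        // Only the window named here is notified; its sibling gets nothing.
        SetWindowPos(aOnTop ? childA : childB, HWND_TOP, 0, 0, 0, 0,
                     SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int main() {
    WNDCLASS wc = {};
    wc.lpfnWndProc = MainProc;
    wc.hInstance = GetModuleHandle(NULL);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("ZOrderTest");      // illustrative class name
    RegisterClass(&wc);
    CreateWindow(wc.lpszClassName, TEXT("Z-order test"),
                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                 CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                 NULL, NULL, wc.hInstance, NULL);
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}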
This makes sense because there might be an arbitrary number of sibling windows, and it could be an unbounded amount of overhead to notify all of them. It also would be nearly impossible to come up with a consistent set of rules for delivering these messages to all the siblings given that some of the siblings could react by further shuffling the z-order. Do the siblings that haven't yet received the first notification now have two pending notifications? Do they get posted or dispatched immediately? What if the queue grows and grows until it overflows?
This is different from WM_KILLFOCUS/WM_SETFOCUS notifications in that it affects, at most, two windows. That puts a reasonable bound on the number of notifications. Even if there's a runaway infinite loop because the losing control tries to steal the focus back, the queue won't overflow because there's only one SetFocus call for each WM_KILLFOCUS delivered.
Also, it's reasonable that windows might need to react to a loss of focus. It's much less likely that window C needs to know that B is now on top of A instead of the other way around, so why design the system to send a jillion unnecessary messages?
Hacking the UI of apps you don't control and that don't have well-defined APIs for doing the types of things you want to do is anywhere from hard to impossible, and it's always fragile. Groups that put out toolbars and browser customizations employ more people than you might expect, and they spend much of their day probing with Spy++ and experimenting. It is by nature hacking.

How to reverse a [NSWindow zoom] method call?

Maybe I'm just too blind to RTFM, but what is the method to call to reverse the zoom (the maximizing of a window) and bring the window back to its old state?
According to the documentation for the zoom: method (note the :), the inverse of zoom: is zoom::
This action method toggles the size and location of the window between its standard state (provided by the application as the “best” size to display the window’s data) and its user state (a new size and location the user may have set by moving or resizing the window).
If it's in the user state (not zoomed), it'll change to the standard state (zoom), and if it's in the standard state (zoomed), it'll change to the user state (unzoom).
The documentation also notes:
If there is no saved user state because there has been no previous zoom, the size and location of the window do not change.
This is what will happen if you started the window out in its standard state; since it was never in any other state, there is nothing for it to unzoom back to.

Should an icon show current state or next state?

When using icon images without text captions, should the icon represent the current state or the next state? For example I have a block of text that I want to minimize / maximize or I want to toggle showing All User Records or just My Records. I'm sure there are compelling arguments for either side and know that consistency is key, but what are the arguments related to good intuitive user design?
There is neither standardization nor general human tendency on this. For example, MS Windows UX Interaction Guidelines specifies four basic kinds of toggling progressive disclosure control. Three out of four show the state-when-activated, while one shows the current state.
I believe if you test a particular approach on your users, you'll get different results depending on what you ask. If you show them a control and ask them what state the app is in, they'll tend to read the icon as if it were indicating the state. If you show them a control and ask them to change the state (where in some cases the state is already changed), they'll read the icon as if it were the state to achieve. It's precisely because of this that they invented toggling buttons.
If you're lucky, users use the icon primarily for either reading the state or setting the state, and not both. Then let the icon indicate whatever the users use it for.
If they indeed use it for both reading the state and setting the state, you're basically hosed, but there are a few things you can try to minimize hosehood:
Use text in addition to or instead of an icon. When labeled with a verb (e.g., "Connect"), the control indicates the state the user gets. When labeled with an adjective or noun (e.g., "On Line"), it implies the current state.
If your lib doesn't support toggling icons, then consider using a checkbox control, if that's allowed.
If your lib doesn't support checkboxes, then consider two controls, one to set each state, where the control for the current state is disabled. Not too good for reading the current state, but there's some precedent for this in pulldown menus.
Fiddle with graphic design or placement to make it consistent with the meaning you've chosen. For example:
Command buttons are always labeled with the action they commit, so if your icon indicates the state the user gets, then give the icon a raised appearance like a command button. If the icon indicates the current state, then give it a flat appearance.
Toolbar controls usually show the state they bring about, so put the icon at the top of the window if it indicates the state the user gets. In contrast, icons in the "work area" of the window indicate objects or attributes, so icons there should show the current state. Icons at the bottom of the window (in the status bar) should also show the current state.
This has not been truly standardized. Folder icons, for example, show open folders when they are open and closed folders when they are closed. Same for disclosure triangles, etc.
However, in other contexts, this is not always true. In a movie player, the "Play" arrow shows while the movie is not playing, and a pause icon shows while it is playing. Probably the thing to do is use your best judgment, then poll your users. If a preponderance of the people you test are confused by your icon choices, switch them around. Then test again and see if your initial result holds up. :)
If you are just going to have one button to toggle between two states, then the button should represent the next state, because that is the action that the button will take when clicked.
You gave the example of text that is minimized/maximized. Think of any expandable tree interface you ever see in Windows. A minimized tree has a [+] next to it, because clicking the button will expand the tree. And a maximized tree has a [-] next to it for the same reason.
You could also try to make a toggle that is highlighted or "pressed down" like mihi says, but that might be more confusing.
I prefer the "next state" approach (click plus to expand, click minus to collapse).
One reason is that this is the most widely used approach, so doing anything else would confuse users (and me as well).
Another reason is that the "next state" approach looks more inviting for the user to click.
May I present Zoom's mute button:
It shows the current state, as an icon, and the action that will occur when you push the button, as text. In other words, the icon on the button is the current state and the label on the button is the new state. The current state and the new state are opposites, so the button appears to contradict itself unless you read it very carefully.
(I hate that button.)

How does one use onmousedown/onmouseup correctly?

Whenever I write mouse handling code, the onmousedown/onmouseup/onmousemove model always seemed to force me to produce unnecessarily complex code that would still end up causing all sorts of UI bugs.
The main problem which I see even in major pieces of software these days is the "ghost mouse" event where you drag to outside the window and then let go. Once you return back into the window, the application still thinks you have the mouse down even though the button is up. This is especially annoying when you're trying to highlight something that goes to the border of the screen.
Is there a RIGHT way to write mouse code or is the entire model just flawed?
Ordinarily one captures the mouse on mouse down, so that mouse move and mouse up events go through your code regardless of the pointer moving out of your application window.
More recently this has become a problem when running a VM or remote session; it's difficult for apps inside these to track the mouse outside of the machine's screen area, which is represented by a window on the host.
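To make that concrete, here is a rough Win32 sketch of the capture pattern (illustrative, not production code). The WM_CAPTURECHANGED handler is the detail that prevents the "ghost mouse" state, because capture can be taken away at any time:

// Take capture on button-down so move/up messages keep arriving even when
// the pointer leaves the window; treat WM_CAPTURECHANGED as "drag is over".
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    static bool dragging = false;
    switch (msg) {
    case WM_LBUTTONDOWN:
        SetCapture(hwnd);       // all mouse input now routes to this window
        dragging = true;
        return 0;
    case WM_MOUSEMOVE:
        if (dragging) {
            // extend the selection / drag here, even past the window edge
        }
        return 0;
    case WM_LBUTTONUP:
        if (dragging) ReleaseCapture();  // triggers WM_CAPTURECHANGED below
        return 0;
    case WM_CAPTURECHANGED:
        dragging = false;       // capture lost for any reason: stop dragging
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int main() {
    WNDCLASS wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = GetModuleHandle(NULL);
    wc.lpszClassName = TEXT("CaptureDemo");  // illustrative class name
    RegisterClass(&wc);
    CreateWindow(wc.lpszClassName, TEXT("Drag anywhere"),
                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                 CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                 NULL, NULL, wc.hInstance, NULL);
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}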
I'm not sure what environment you're attempting to track mouse buttons in, but the best way to handle this is to have a mouse listener that tracks onmouseup 100% of the time after you've detected onmousedown.
That way, it doesn't matter what screen region the user releases the mouse button in. It will reset no matter where it happens.
