Detect window close outside of wndproc? - windows

I am currently working on a win32 GUI app that does most of its work in the window thread.
This thread can sometimes be blocked since it runs a script engine that can be suspended by an external script debugger (another process). This is not a problem most of the time as it is expected behavior.
However, if the user tries to close the window, the app obviously becomes unresponsive and you get the "This application is not responding..." dialog.
My plan was to periodically call back from the "suspend code" into the app and have it do a PeekMessage for WM_CLOSE and, if one arrived, terminate the debugger. Unfortunately, from what I understand, WM_CLOSE is sent directly to the wndproc.
Is there some other way that I could detect that the user wants to close the window, short of redesigning the app which is not an option?
For example, is there some other message that can be checked for with PeekMessage?

How about this: periodically spin a message loop to dispatch any messages on the message queue (that will cause the mouse/input messages to be handled, which is what generates the WM_CLOSE). In your app's main window, set a flag when WM_CLOSE is received and check that flag after spinning the loop.
Simplest case of spinning a message loop to flush any pending messages would be:
MSG msg;
while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
{
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}
though your app framework may already have functions to do this. Hope this helps.
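To make the flag idea concrete, here is a minimal sketch; the g_closeRequested flag, the window procedure name, and the debugger-termination step are placeholders for whatever your app actually uses:

#include <windows.h>

// Hypothetical flag set by the window procedure when the user asks to close.
static volatile bool g_closeRequested = false;

LRESULT CALLBACK MainWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CLOSE:
        g_closeRequested = true;   // remember the request instead of destroying the window now
        return 0;                  // swallow it; destroy the window once the debugger is detached
    default:
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }
}

// Called periodically from the "suspend code":
void PumpAndCheckForClose()
{
    MSG msg;
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);     // delivers the queued input that ends up as WM_CLOSE
    }
    if (g_closeRequested)
    {
        // e.g. terminate/detach the external script debugger here, then close for real
    }
}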

Would you consider adding another thread rather than redesigning the application? It would certainly make your life much easier! Just let the GUI thread do all the GUI work and run its message loop, and do all the hard work in another thread. If the user wants to quit the application, present a nice OK/Cancel message box and suspend/abort the worker thread accordingly. Handling two separate tasks in this one thread, with all the workarounds, will make things far messier than they have to be. Good luck!

I guess you can keep handling your WM_CLOSE message in the wndproc, and when you receive it call PostQuitMessage(), which generates a WM_QUIT message that will then be read by GetMessage()/PeekMessage().
If your window thread is completely blocked, you're out of luck; there isn't much choice here. The thread must be able to periodically do PeekMessage() while in "script engine mode":
while (IsScripting()) {
    ScriptEngineTimeSlice();
    MSG msg;
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg); // <-- window procedure will be called
    }
}
This is probably old news to you, since you are already aware of the problem. But if you can't give the UI thread a break somehow, there is no way to solve this: if the thread is blocked, it's blocked.
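Putting the two suggestions together, the loop above can also watch for the WM_QUIT that PostQuitMessage() posts from the WM_CLOSE handler. A sketch; StopScriptDebugger() is a placeholder for whatever cleanup you need:

while (IsScripting()) {
    ScriptEngineTimeSlice();
    MSG msg;
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        if (msg.message == WM_QUIT) {
            StopScriptDebugger();              // placeholder: detach/terminate the external debugger
            PostQuitMessage((int)msg.wParam);  // re-post so the normal message loop still sees it
            return;                            // or break out of the scripting loop
        }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}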

If you are debugging, wouldn't that mean the user was debugging? Why would they be surprised that the app was then "not responding"? Anyway, you control the window procedure for the window, so why not just watch for the WM_CLOSE message in there? It is your window, isn't it?

Related

How to Terminate/Reset a bundled XPC helper?

Does anyone know how to terminate or reset an XPC helper?
According to the Apple documentation, launchd takes care of the XPC helper if it crashes. But there is no indication of what to do, if you need to interrupt or cancel a running helper.
For instance, if I have an XPC client that is rendering a 32-bit QuickTime movie, how do I get my 64-bit "parent" application to signal the XPC helper to cancel the job and clean up?
Also, what is the proper way for an XPC helper app to handle a parent that has quit?
Currently, to terminate from the parent app's side, I am using these NSXPCConnection methods:
- (void)suspend
- (void)invalidate
These seem to close off the connection. But I am not seeing any evidence that the helper app is paying attention.
Thanks in advance!
Your question doesn’t seem to ask for the proper handling of any crashes of the helper. Instead, you seem to simply need a way to properly tell the helper to terminate itself. If this is correct then please read on for a solution. At least if you still need one three years after asking…
You didn’t specify a language to use so please be aware my examples are written in Swift.
As part of your `NSXPCConnection` setup you had to define a protocol to be used for information sharing between app and helper. In this protocol you could add a simple function like this:
func terminateHelper(withReply reply: (String) -> Void) {
    reply("Terminating Helper.")
    exit(0)
}
This function reports a message string back to the main app using the supplied closure and then terminates itself using the exit system call.
From your main app, you would call this function like this:
if let helper = helperConnection.remoteObjectProxyWithErrorHandler({ (error) in
let e = error as NSError
print("Helper communication failed. Remote proxy error \(e.code): \(e.localizedDescription) \(e.localizedRecoverySuggestion ?? "---")")
}) as? HelperProtocol {
helper.terminateHelper(withReply: { (replyString) in
print(replyString)
})
}
Please note that launchd will immediately restart the terminated helper app, despite the fact that it hasn't crashed but was gracefully terminated. This will, however, guarantee that the helper returns to an initialized state, with all previous helper processing stopped.
If you suspend or invalidate the connection the way you describe in your question, you only cancel the XPC connection. Neither suspending nor invalidating the connection will send a message of any kind to the helper; the helper itself will simply see that the connection is suspended or invalidated without knowing anything about the reason.
Hope this gives you at least an idea of how to proceed with your problem.

WH_KEYBOARD_LL hook only in my process main thread

Is it possible to have a WH_KEYBOARD_LL hook that only calls the hook function when my application/thread has the focus? Currently I'm receiving calls even when the application is not active.
Sure, the 4th argument to SetWindowsHookEx() can be a thread ID to make it selective. Pass the one for your UI thread, get it by calling GetCurrentThreadId().
Do note that this is not normally very useful; you can intercept keyboard messages in your message loop just as easily. Every GUI class library supports this, since it is required to implement shortcut keystrokes. Even the winapi has it: TranslateAccelerator(). Strongly recommended, because debugging a hook is very painful: a breakpoint in the hook callback, or in any function called by your callback, will cause the keyboard to seize up for 5 seconds and your hook to be destroyed.
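For reference, the standard message-loop pattern with TranslateAccelerator() looks roughly like this; hMainWnd, hInstance and IDR_ACCELERATOR1 are placeholders for your own window, module handle and accelerator resource:

MSG msg;
HACCEL hAccel = LoadAccelerators(hInstance, MAKEINTRESOURCE(IDR_ACCELERATOR1));

while (GetMessage(&msg, NULL, 0, 0) > 0)
{
    // Give the accelerator table first crack at keyboard messages.
    if (!TranslateAccelerator(hMainWnd, hAccel, &msg))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}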
There's no way for you to install a hook and also apply some form of filter to suppress it firing in certain states. Once it is installed, it will fire.
So, either do nothing in your hook function when your application is inactive, or remove the hook when it becomes inactive. Or, do away with the hook altogether, and respond to the messages that arrive in your message queue.
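If you do keep the low-level hook, the "do nothing when inactive" option can be as simple as checking whether the foreground window belongs to your process before acting. A minimal sketch, assuming g_hHook holds the handle returned by SetWindowsHookEx(WH_KEYBOARD_LL, ...):

static HHOOK g_hHook;   // assumed to be set by SetWindowsHookEx(WH_KEYBOARD_LL, ...)

LRESULT CALLBACK LowLevelKeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION)
    {
        DWORD foregroundPid = 0;
        GetWindowThreadProcessId(GetForegroundWindow(), &foregroundPid);

        if (foregroundPid == GetCurrentProcessId())
        {
            // The foreground window belongs to this process: handle the key here.
        }
        // Otherwise ignore the event.
    }
    return CallNextHookEx(g_hHook, nCode, wParam, lParam);
}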

SetWindowsHookEx() WH_KEYBOARD_LL not coming through with full-screen RDC

I'm trying to do an away-timer style thing like Skype: if the user is 'away' for a period of time, I'll trigger something. I have been using SetWindowsHookEx() with WH_KEYBOARD_LL, which works fine. That is, until you open an RDC connection and have it full screen. Then I never get the keyboard events.
Has anyone come across this? Or does anyone know of a better way to achieve this? I have actually tested Skype, and with a full-screen RDC it will correctly go from Away to Online if I type in the RDC.
Thanks
EDIT: After Raymond Chen's comment I did some testing, and he is right. I can't believe I never found this method in all my searching. It also solved an issue I was having with a WPF app not triggering the LL mouse/keyboard events.
Thanks again. I've updated my accepted answer based on this. The other answer is still good if you need to use LL_MOUSE/LL_KEYBOARD.
Have a look at GetLastInputInfo(). Try calling that periodically.
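A minimal sketch of the GetLastInputInfo() approach; the five-minute threshold is an arbitrary value for illustration:

#include <windows.h>

// Milliseconds since the last keyboard or mouse input in this session.
DWORD IdleTimeMs()
{
    LASTINPUTINFO lii = { sizeof(lii) };
    if (!GetLastInputInfo(&lii))
        return 0;
    return GetTickCount() - lii.dwTime;
}

// Call periodically (e.g. from a one-second timer).
bool IsUserAway()
{
    const DWORD awayThresholdMs = 5 * 60 * 1000;   // 5 minutes, arbitrary
    return IdleTimeMs() >= awayThresholdMs;
}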
Yes, you won't get key presses while inside a remote desktop session. I had this problem, and the only solution I found was this:
Using the FindWindow API, keep looking for the RDP window; if you detect that a full-screen RDP window has been created, you should do this:
a) Unhook all hooks.
b) Reset all hooks.
So create a function that makes the SetWindowsHookEx API calls and call it SetHook, and another one as an UnHook function. Then re-call both of them any time you find that the user has gone into a remote desktop session.
Now you can get keys pressed even inside remote desktop connection.
I found my old code; I did something like this:
I created a timer that fires every second, and in the timer handler:
std::string title;
HWND hParent = ::FindWindow(TEXT("TSHELLHWND"), NULL);
GetWindowString(hParent, title);                        // helper that reads the window text into the string
size_t ix = title.find(" - Remote Desktop");
if (hParent != NULL && ix != std::string::npos)
    RestartHook();
You also need a global variable that you set once you've restarted the hook, otherwise the code above will restart the hook every second. When the window is closed, you can reset that global variable.
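To make the SetHook/UnHook/RestartHook idea concrete, a sketch; it assumes a single WH_KEYBOARD_LL hook and that LowLevelKeyboardProc is your existing hook callback:

static HHOOK g_hKeyboardHook = NULL;

void SetHook()
{
    if (g_hKeyboardHook == NULL)
        g_hKeyboardHook = SetWindowsHookEx(WH_KEYBOARD_LL, LowLevelKeyboardProc,
                                           GetModuleHandle(NULL), 0);
}

void UnHook()
{
    if (g_hKeyboardHook != NULL)
    {
        UnhookWindowsHookEx(g_hKeyboardHook);
        g_hKeyboardHook = NULL;
    }
}

void RestartHook()
{
    UnHook();
    SetHook();
}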

Not receiving WM_QUERYENDSESSION when minimized to system tray

I'm trying to catch WM_QUERYENDSESSION to save some data in the app, but it seems that I'm not receiving this message on user logoff/system restart when the app is minimized to the system tray. How can I catch it?
Thanks.
Relevant code (nothing magic in there, hopefully :)):
ON_WM_QUERYENDSESSION()

BOOL CMainFrame::OnQueryEndSession()
{
    AfxMessageBox(L"Are we hitting this?");
    return FALSE; // returning FALSE asks Windows not to end the session; fine for this test
}
For the tray icon I'm using a third-party lib (CodeJock), which I probably can't post here. Generally, it creates a hidden window to process messages, while the main window is simply hidden with ShowWindow(SW_HIDE) when needed. Maybe I need to intercept that message in that hidden window and pass it up; I'll need to try that.
So the message is basically eaten by a third-party class that I'll need to fix up.
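If the hidden CodeJock window really is the one receiving the message, one possible approach is to subclass that window and forward the session messages to the main frame; a rough sketch under that assumption (the subclassing setup and saved original procedure are hypothetical, since the library's internals aren't shown here):

static WNDPROC g_prevHiddenWndProc;   // original proc saved when subclassing via SetWindowLongPtr(GWLP_WNDPROC, ...)

LRESULT CALLBACK HiddenWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_QUERYENDSESSION || msg == WM_ENDSESSION)
    {
        // Forward to the (hidden) main frame so CMainFrame::OnQueryEndSession runs.
        return SendMessage(AfxGetMainWnd()->GetSafeHwnd(), msg, wParam, lParam);
    }
    return CallWindowProc(g_prevHiddenWndProc, hWnd, msg, wParam, lParam);
}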

Session 0 Isolation

Vista introduces a new security measure that prevents Session 0 from accessing hardware like the video card, and the user no longer logs into Session 0. I know this means that I cannot show the user a GUI, but does that also mean I can't have one at all? The way my code is set up right now, it would be more work to make it command-line only; if I can use my existing code and just drive the GUI programmatically, it would take a lot less code.
Is this possible?
The article from MSDN says this:
• A service attempts to create a user interface (UI), such as a dialog box, in Session 0. Because the user is not running in Session 0, he or she never sees the UI and therefore cannot provide the input that the service is looking for. The service appears to stop functioning because it is waiting for a user response that does not occur.
Which makes me think it is possible to have an automated UI, but someone told me that you couldn't use SendKeys with a service because it was disabled in Session 0.
EDIT: I don't actually need to show the user the GUI
You can show one; it just doesn't show up.
There is a little notification in the taskbar about there being a GUI window and a way to switch to it.
Anyway, there actually is a Terminal Services API call to switch the active session that you could use if you really needed it to show up.
You can write a separate process that provides the UI for your service process. The communication between your UI process and the service can be done in various ways (search the web for "inter-process communication" or "IPC").
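Named pipes are one common choice for that service-to-UI channel on Windows; a minimal one-way sketch, with a made-up pipe name and message format:

#include <windows.h>

// Service side: create the pipe, wait for the UI process, send one status string.
// A real service also needs a security descriptor that lets the user's session open the pipe.
void ServiceSendStatus()
{
    HANDLE hPipe = CreateNamedPipe(
        TEXT("\\\\.\\pipe\\MyServiceUi"),            // hypothetical pipe name
        PIPE_ACCESS_OUTBOUND,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1, 4096, 4096, 0, NULL);
    if (hPipe == INVALID_HANDLE_VALUE)
        return;

    if (ConnectNamedPipe(hPipe, NULL))
    {
        const char status[] = "progress:42";         // made-up message format
        DWORD written = 0;
        WriteFile(hPipe, status, sizeof(status), &written, NULL);
    }
    CloseHandle(hPipe);
}

// UI side (running in the user's session): open the pipe and read the status.
void UiReadStatus()
{
    HANDLE hPipe = CreateFile(TEXT("\\\\.\\pipe\\MyServiceUi"),
                              GENERIC_READ, 0, NULL, OPEN_EXISTING, 0, NULL);
    if (hPipe == INVALID_HANDLE_VALUE)
        return;

    char buffer[256] = {};
    DWORD bytesRead = 0;
    ReadFile(hPipe, buffer, sizeof(buffer) - 1, &bytesRead, NULL);
    CloseHandle(hPipe);
}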
Your service can have a GUI. It's simply that no human will ever see it. As the MSDN quote suggests, a service can display a dialog box. The call to MessageBox won't fail; it just won't ever return — there won't be anyone to press its buttons.
I'm not sure what you mean by wanting to "manage the GUI." Do you actually mean pretending to send input to the controls, as with SendInput? I see no reason why that wouldn't be possible; you'd be injecting input into your own program's queue, after all, and SendInput's Vista-specific warnings don't say anything about that. But I think you'd be making things much more complicated than they need to be. Revisit the idea of altering your program to have no UI at all. (That's not the same as having a console program. Consoles are UI.)
Instead of simulating the mouse messages necessary to click a button, for instance, eliminate the middle-man and simply call directly the function that the button-click event would have called.
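In other words, something like this; the handler and worker names are hypothetical:

void StartProcessingJob()      // hypothetical worker entry point
{
    // ... the actual work ...
}

// GUI build: the button-click handler just delegates to the worker.
void OnStartClicked()
{
    StartProcessingJob();
}

// Service build: no UI and no simulated input, call the worker directly.
void RunHeadless()
{
    StartProcessingJob();
}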
