How exactly does GetKeyState work? - windows

I have been struggling to understand how GetKeyState operates. I have done endless Google searching and still haven't managed to understand exactly how it works.
According to MSDN:
The key status returned from this function changes as a thread reads key messages from its message queue.
Take a look at the following code. I didn't create a message processing loop. 65 represents the virtual key of the character 'A'.
while (true) {
    printf("the character %c, the vkey_state is %x\n",
           MapVirtualKey(65, MAPVK_VK_TO_CHAR), GetKeyState(65) & 0x8000);
    Sleep(150);
}
I pressed "A" on the keyboard, while being at the window console of my program.
sometimes, the vkey_state value is 0x8000 as expected, sometimes not.
What exactly is happening under the hood? I didn't write any message-processing code, so i assume it is created automatically. When I press 'A', a WM_KEYDOWN is sent to my thread message queue. When I release the key 'A', a WM_KEYUP is sent to my thread message queue. other key-related messages might be sent in between. What happens when I call GetKeyState? when exactly it will set the MSB of its return value to '1'? When will it change back to 0? Is it related to the calls to GetMessage?
In addition - and this is what confused me the most - when I switched to another program (cmd.exe) and typed 'A', my program was able to monitor it while in the background. But cmd.exe's thread has a different message queue - why does this work? However, it did not work if I started cmd.exe in elevated mode (high integrity).
This contradicts the information I found here:
If the user has switched to another program, then the GetKeyState function will not see the input that the user typed into that other program, since that input was not sent to your input queue.
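For comparison, here is a minimal sketch (my own, not from the question) that polls GetAsyncKeyState alongside GetKeyState. MSDN documents GetAsyncKeyState as reading the physical key state at the time of the call, whereas GetKeyState reflects the state as of the keyboard messages the calling thread has retrieved, so the two can differ in a program that never pumps messages:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    while (1)
    {
        SHORT syncState  = GetKeyState(0x41);       // 'A': queue-synchronized state
        SHORT asyncState = GetAsyncKeyState(0x41);  // 'A': physical state right now
        printf("GetKeyState: %s  GetAsyncKeyState: %s\n",
               (syncState  & 0x8000) ? "down" : "up",
               (asyncState & 0x8000) ? "down" : "up");
        Sleep(150);
    }
    return 0;
}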

Related

win32 PostMessage WM_APPCOMMAND sends multiple messages instead of one

I'm writing a small accessibility app which simulates certain keyboard gestures, such as volume up/down.
The goal is to send a single command.
In practice, the volume goes all the way up to 100%, as if the user had held the button for a couple of seconds or the message had been dispatched multiple times.
This behavior is the same with both PostMessage and SendMessage, in both C and C# (using P/Invoke).
C:
PostMessage(0xffff, 0x0319, 0, 0xa0000)
C#:
PostMessage(new IntPtr(0xffff), WindowMessage.WM_APPCOMMAND, (void*)0, (void*)0xa0000);
The meaning of the parameters: send to all windows, the message, no source, volume up.
Question: How do I issue a command which would result in Windows adjusting volume by the smallest increment?
Additionally, I attempted using WM_KEYUP and WM_KEYDOWN, without success:
// dispatch to all apps, message, wparam: virtual key, lparam: repeat count = 1
User32.PostMessage(new IntPtr(0xffff), User32.WindowMessage.WM_KEYDOWN, new IntPtr(0xaf000), new IntPtr(1));
User32.PostMessage(new IntPtr(0xffff), User32.WindowMessage.WM_KEYUP, new IntPtr(0xaf000), new IntPtr(1));
The reason the command is handled multiple times is, as Hans pointed out in the comment, that I broadcast it to all windows by passing 0xffff as the first parameter. Every window handled it and raised the volume by a notch.
The solution is to send the message to a single target instead, either:
The shell window handle, GetShellWindow()
The foreground window handle, GetForegroundWindow()
Both handles adjusted the volume by one notch. GetDesktopWindow() did not work, though.
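A minimal C-style sketch of that fix (my code, not the poster's; it targets the shell window and omits error handling). APPCOMMAND_VOLUME_UP is 10, which in the high word of lParam is the 0xA0000 used in the question:

#define _WIN32_WINNT 0x0600   // for GetShellWindow and the APPCOMMAND_* constants
#include <windows.h>

int main(void)
{
    // Target one window (the shell, or alternatively GetForegroundWindow())
    // instead of broadcasting with 0xffff, so the command is handled exactly once.
    HWND target = GetShellWindow();
    if (target != NULL)
    {
        SendMessage(target, WM_APPCOMMAND, 0,
                    MAKELPARAM(0, APPCOMMAND_VOLUME_UP));
    }
    return 0;
}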

What exactly is kqueue's EV_RECEIPT for?

The kqueue mechanism has an event flag, EV_RECEIPT, which according to the linked man page:
... is useful for making bulk changes to a kqueue without draining any pending events. When passed as input, it forces EV_ERROR to always be returned. When a filter is successfully added the data field will be zero.
My understanding, however, is that it is trivial to make bulk changes to a kqueue without draining any pending events: simply pass 0 for the nevents parameter to kevent, thus drawing no events from the queue. With that in mind, why is EV_RECEIPT necessary?
Some sample code in Apple's documentation for OS X actually uses EV_RECEIPT:
kq = kqueue();
EV_SET(&changes, gTargetPID, EVFILT_PROC, EV_ADD | EV_RECEIPT, NOTE_EXIT, 0, NULL);
(void) kevent(kq, &changes, 1, &changes, 1, NULL);
But, seeing as the changes array is never examined after the kevent call, it's totally unclear to me why EV_RECEIPT was used in this case.
Is EV_RECEIPT actually necessary? In what situation would it really be useful?
If you are making bulk changes and one of them causes an error, then the event will be placed in the eventlist with EV_ERROR set in flags and the system error in data.
Therefore it is possible to identify which changelist element caused the error.
If you set nevents to zero, you get the error code but no indication of which event caused the error.
So EV_RECEIPT allows you to set nevents to a non-zero value without draining any pending events.
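As a concrete illustration, here is a sketch (the function name register_bulk is mine) of submitting a batch of changes in one kevent() call with EV_RECEIPT and finding out which individual change failed, without pulling any pending events off the queue:

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <string.h>
#include <stdio.h>

int register_bulk(int kq, struct kevent *changes, int nchanges)
{
    // Force a per-change receipt instead of normal event delivery.
    for (int i = 0; i < nchanges; i++)
        changes[i].flags |= EV_RECEIPT;

    // Reuse the changelist as the eventlist; only receipts come back,
    // so no pending events are drained.
    int n = kevent(kq, changes, nchanges, changes, nchanges, NULL);
    if (n < 0)
        return -1;                      // the whole call failed

    int bad = 0;
    for (int i = 0; i < n; i++) {
        // Each receipt has EV_ERROR set; data == 0 means that change succeeded.
        if ((changes[i].flags & EV_ERROR) && changes[i].data != 0) {
            fprintf(stderr, "change %d failed: %s\n",
                    i, strerror((int)changes[i].data));
            bad++;
        }
    }
    return bad;                         // number of changes that failed
}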

OSX Cocoa input source detect change

Does anyone know how to detect when the user changes the current input source in OSX?
I can call TISCopyCurrentKeyboardInputSource() to find out which input source ID is being used like this:
TISInputSourceRef isource = TISCopyCurrentKeyboardInputSource();
if ( isource == NULL )
{
    cerr << "Couldn't get the current input source.\n";
    return -1;
}
CFStringRef id = (CFStringRef)TISGetInputSourceProperty(
                     isource,
                     kTISPropertyInputSourceID);
CFRelease(isource);
If my input source is "German", then id ends up being "com.apple.keylayout.German", which is mostly what I want. Except:
The result of TISCopyCurrentKeyboardInputSource() doesn't change once my process has started. In particular, I can call TISCopyCurrentKeyboardInputSource() in a loop and switch my input source, but it keeps returning the input source that my process started with.
I'd really like to be notified when the input source changes. Is there any way of doing this? To get a notification or an event of some kind telling me that the input source has been changed?
You can observe the NSTextInputContextKeyboardSelectionDidChangeNotification notification posted by NSTextInputContext to the default Cocoa notification center. Alternatively, you can observe the kTISNotifySelectedKeyboardInputSourceChanged notification delivered via the Core Foundation distributed notification center.
However, any such change starts in a system process external to your app. The system then notifies the frameworks in each app process. The frameworks can only receive such notifications when they are allowed to run the event loop. Likewise, if you're observing the distributed notification yourself, that can only happen when the event loop (or at least the main thread's run loop) is allowed to run.
So that explains why running a loop which repeatedly checks the result of TISCopyCurrentKeyboardInputSource() doesn't work: you're not allowing the frameworks to monitor the channel over which they would be informed of the change. If, rather than a loop, you used a repeating timer with a low enough frequency that other work has a chance to run, and you returned control to the app's event loop, you would see the result of TISCopyCurrentKeyboardInputSource() change.
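A minimal sketch of the distributed-notification route (a plain command-line tool rather than a Cocoa app; it assumes linking against the Carbon and CoreFoundation frameworks). The callback simply re-queries the current input source ID each time the notification arrives:

#include <Carbon/Carbon.h>
#include <CoreFoundation/CoreFoundation.h>

// Called whenever the selected keyboard input source changes.
static void inputSourceChanged(CFNotificationCenterRef center, void *observer,
                               CFStringRef name, const void *object,
                               CFDictionaryRef userInfo)
{
    TISInputSourceRef source = TISCopyCurrentKeyboardInputSource();
    if (source == NULL)
        return;
    CFStringRef sourceID =
        (CFStringRef)TISGetInputSourceProperty(source, kTISPropertyInputSourceID);
    CFShow(sourceID);           // e.g. "com.apple.keylayout.German"
    CFRelease(source);
}

int main(void)
{
    CFNotificationCenterAddObserver(
        CFNotificationCenterGetDistributedCenter(),
        NULL,                                       // observer
        inputSourceChanged,
        kTISNotifySelectedKeyboardInputSourceChanged,
        NULL,                                       // object: any
        CFNotificationSuspensionBehaviorDeliverImmediately);

    CFRunLoopRun();             // the run loop must run for the notification to arrive
    return 0;
}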

Problem with Boost Asio asynchronous connection using C++ in Windows

Using MS Visual Studio 2008 C++ on 32-bit Windows (XP), I am trying to build a POP3 client driven from a modeless dialog box.
The first step is to create a persistent object - say pop3 - holding all the Boost.Asio machinery for asynchronous connections, in the WM_INITDIALOG handler of the dialog-box procedure. Something like:
case WM_INITDIALOG:
    return (iniPop3Dlg (hDlg, lParam));
Here we assume that iniPop3Dlg() creates the pop3 heap object - say, pointed to by pop3p. It then connects to the remote server and starts a session with the client's id and password (the USER and PASS commands). At this point the server is in the TRANSACTION state.
Then, in response to user input, the dialog-box procedure calls the appropriate function, say:
case IDS_TOTAL:   // get how many emails are on the server
    total (pop3p);
    return FALSE;
case IDS_DETAIL:  // get date, sender and subject for each email on the server
    detail (pop3p);
    return FALSE;
Note that total() uses the POP3 STAT command to find out how many emails are on the server, while detail() uses two commands consecutively: first STAT to get the total, and then a loop with the GET command to retrieve the content of each message.
As an aside: detail() and total() share the same subroutines - the STAT handling routine - and, when finished, both leave the session as-is. That is, without closing the connection; the socket remains open and the server stays in the TRANSACTION state.
When either option is selected for the first time, things run as expected and I get the desired results. But on the second attempt, the connection hangs.
A closer inspection shows that, on that second attempt, the statement
socket_.get_io_service().run();
never returns.
Note that all asynchronous read and write routines use the same io_service, and each routine calls socket_.get_io_service().reset() before any run().
Note also that all R/W operations use the same timer, which is reset to a zero wait after each operation completes:
dTimer_.expires_from_now (boost::posix_time::seconds(0));
I suspect the problem lies in the io_service or in the timer, and in the fact that subsequent executions occur in a different invocation of the routine.
As a first approach, I hope someone can shed some light on this before I post a more detailed exposition of the (very few and simple) routines involved.
Have you looked at the asio examples and studied them? There are several asynchronous examples that should help you understand the basic control flow. Pay particular attention to the main event loop started by invoking io_service::run: it's important to understand that control is not expected to return to the caller until the io_service has no more work to do.
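To make that contract concrete, here is a minimal, self-contained sketch (illustrative names, not the poster's routines) showing that run() only returns once the queued work is finished, and that reset() is needed before the same io_service can be run again:

#include <boost/asio.hpp>
#include <iostream>

// Handler for the timer: represents "one batch of asynchronous work".
void on_wait(const boost::system::error_code &)
{
    std::cout << "batch of work finished\n";
}

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io, boost::posix_time::milliseconds(100));

    timer.async_wait(&on_wait);
    io.run();       // returns only after on_wait has run (no more work)

    io.reset();     // required before running the same io_service again
    timer.expires_from_now(boost::posix_time::milliseconds(100));
    timer.async_wait(&on_wait);
    io.run();       // blocks again until the new work completes
    return 0;
}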

Best way to close and wait for a child frame window using Win32/MFC

There are a few options here, probably, but what would you suggest to be the safest way to accomplish the following:
I've got a child CFrameWnd with parent = NULL (so that it can live a separate life from the main application - while the app is running, at least). I've got all those windows stored in a list. When the main app is closing (the MainFrame gets OnClose), I go through the list and issue a PostMessage(WM_CLOSE) to each. However, the problem is that each of them has to do some work before closing down, so I need to wait for them. But we're all on the same thread... So how can I wait for the children to close without blocking their own processing in a single-threaded application?
Or should I launch a worker thread to take care of that? Would it be easier?
Thanks in advance!
Use SendMessage() instead of PostMessage().
Edit: Another option might be to simply handle WM_DESTROY in your child windows (depending on your code of course).
Well, you certainly can't just wait for them to close; you need to at least pump messages so that they receive and handle the WM_CLOSE. How you do that is up to you, I guess. But I see you are doing PostMessage - why not do SendMessage instead? That will run the close synchronously in the window procedure for the window. Or are you trying to quit the app? Then you should really use PostQuitMessage and then pump messages in the normal fashion until GetMessage returns 0. Lots of options.
Pumping messages means to have a loop in your code that looks like this. You don't have to call AfxPumpMessages, but that would probably do something similar. There are in fact many different ways to pump messages depending on what you want to do. In addition there are quite a few functions that pump messages for you.
BOOL bRet;
MSG msg;

// note that GetMessage returns 0 when WM_QUIT is received - this is how PostQuitMessage
// would work to get us to shut down
// We are passing NULL for the hWnd parameter - this means receive all window and
// thread messages for this thread
while( (bRet = GetMessage( &msg, NULL /* hWnd */, 0, 0 )) != 0)
{
    if (bRet == -1)
    {
        // handle the error and possibly exit
    }
    else
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}
If you post the message to a window or windows, then you need to pump messages. What happens is that the message goes into a queue for the thread associated with that window (this thread) - the message pump extracts them out and dispatches them off to the correct window procedure.
If you had sent the message instead of posting it, then the window procedure for the window is called directly - rather than going into a queue. You wouldn't need to pump messages because once SendMessage returns the message is fully handled.
The way PostQuitMessage works is by setting a flag on the message queue indicating that the application should quit. The WM_QUIT message isn't really a window message that you would send - what happens is that GetMessage will check this flag after all the other posted window messages are processed and returns 0 if it is set. This will cause all windows to correctly close, and you don't need to send it to the windows themselves.
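For example, a sketch with hypothetical names (a CList holding the detached frames) of the SendMessage approach; because SendMessage is synchronous, each child's WM_CLOSE handling has completed by the time the call returns, so there is nothing left to wait for:

#include <afxwin.h>
#include <afxtempl.h>

// Synchronously close a list of detached child CFrameWnd windows.
void CloseChildFrames(CList<CFrameWnd*, CFrameWnd*> &childFrames)
{
    for (POSITION pos = childFrames.GetHeadPosition(); pos != NULL; )
    {
        CFrameWnd *pChild = childFrames.GetNext(pos);
        if (pChild != NULL && ::IsWindow(pChild->GetSafeHwnd()))
        {
            // SendMessage runs the child's close handling before returning.
            pChild->SendMessage(WM_CLOSE);
        }
    }
}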
