GetAsyncKeyState from windows.h - winapi

I will use the following code to explain my question:
#include <Windows.h>
#include <iostream>

int main()
{
    bool toggle = false;
    while (true)
    {
        if (GetAsyncKeyState('C') & 0x8000)
        {
            toggle = !toggle;
            if (toggle) std::cout << "Pressed\n";
            else        std::cout << "Not pressed\n";
        }
    }
}
Testing, I see that
(GetAsyncKeyState('C') & 0x8000) // 0x8000 to see if the most significant bit is 1
has the same behavior as
(GetAsyncKeyState('C'))
However, to achieve the behavior I want, which is how any text input out there works (it waits about a second and, if you are still holding the key, starts repeating at a certain rate), I need to write
(GetAsyncKeyState('C') & 1)
The documentation says
The behavior of the least significant bit of the return value is retained strictly for compatibility with 16-bit Windows applications (which are non-preemptive) and should not be relied upon.
Can someone clarify this please?

MSDN tells you why on the same page you linked to!
Although the least significant bit of the return value indicates whether the key has been pressed since the last query, due to the pre-emptive multitasking nature of Windows, another application can call GetAsyncKeyState and receive the "recently pressed" bit instead of your application. The behavior of the least significant bit of the return value is retained strictly for compatibility with 16-bit Windows applications (which are non-preemptive) and should not be relied upon.
GetAsyncKeyState gives you "the interrupt-level state associated with the hardware" and is probably shared by all processes in the window station/session.
The low bit might be connected to the keyboard key repeat delay you can set in Control Panel, but it does not really matter because MSDN tells you not to look at that bit.
GetAsyncKeyState is usually not the correct way to process keyboard input. Console applications should read stdin, or use the console API. GUI applications should use the WM_CHAR/WM_KEYDOWN/WM_KEYUP window messages.
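For the console case, here is a minimal sketch using ReadConsoleInput. It relies on the console delivering repeated key-down events at the system's auto-repeat rate, which is exactly the wait-then-repeat behavior the question describes:
#include <Windows.h>
#include <iostream>

int main()
{
    HANDLE hIn = GetStdHandle(STD_INPUT_HANDLE);
    INPUT_RECORD rec;
    DWORD read;
    for (;;)
    {
        // Blocks until an input event arrives; holding a key down
        // produces repeated key-down events at the auto-repeat rate.
        if (!ReadConsoleInput(hIn, &rec, 1, &read)) break;
        if (rec.EventType == KEY_EVENT
            && rec.Event.KeyEvent.bKeyDown
            && rec.Event.KeyEvent.wVirtualKeyCode == 'C')
        {
            std::cout << "Pressed\n";
        }
    }
}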

Related

Is there a race between starting and seeing yourself in WinApi's EnumProcesses()?

I just found this code in the wild:
def _scan_for_self(self):
    win32api.Sleep(2000) # sleep to give time for process to be seen in system table.
    basename = self.cmdline.split()[0]
    pids = win32process.EnumProcesses()
    if not pids:
        UserLog.warn("WindowsProcess", "no pids", pids)
    for pid in pids:
        try:
            handle = win32api.OpenProcess(
                win32con.PROCESS_QUERY_INFORMATION | win32con.PROCESS_VM_READ,
                pywintypes.FALSE, pid)
        except pywintypes.error, err:
            UserLog.warn("WindowsProcess", str(err))
            continue
        try:
            modlist = win32process.EnumProcessModules(handle)
        except pywintypes.error, err:
            UserLog.warn("WindowsProcess", str(err))
            continue
This line caught my eye:
win32api.Sleep(2000) # sleep to give time for process to be seen in system table.
It suggests that if you call EnumProcesses() too fast after starting, you won't see yourself. Is there any truth to this?
There is a race, but it's not the race the code tried to protect against.
A successful call to CreateProcess returns only after the kernel object representing the process has been created and enqueued into the kernel's process list. A subsequent call to EnumProcesses accesses the same list, and will immediately observe the newly created process object.
That is, unless the process object has since been destroyed. This isn't entirely unusual since processes in Windows are initialized in-process. The documentation even makes note of that:
Note that the function returns before the process has finished initialization. If a required DLL cannot be located or fails to initialize, the process is terminated.
What this means is that if a call to EnumProcesses immediately following a successful call to CreateProcess doesn't observe the newly created process, it does so because it was late rather than early. If you are late already then adding a delay will only make you more late.
Which swiftly leads to the actual race here: Process IDs uniquely identify processes only for a finite time interval. Once a process object is gone, its ID is up for grabs, and the system will reuse it at some point. The only reliable way to identify a process is by holding a handle to it.
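A minimal sketch of the handle-based approach (launching notepad.exe purely as an illustration): CreateProcess already hands back a process handle in PROCESS_INFORMATION, and holding it keeps the process object, and thus its ID, from being recycled:
#include <Windows.h>

int main()
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    wchar_t cmd[] = L"notepad.exe";
    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE, 0,
                        nullptr, nullptr, &si, &pi))
        return 1;
    // pi.hProcess reliably identifies the process; as long as this
    // handle stays open, the ID cannot be reused by the system.
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
}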
Now it's anyone's guess what the author of _scan_for_self was trying to accomplish. As written, the code takes more time to do something that's probably altogether wrong [1] anyway.
[1] Turns out my gut feeling was correct. This is just your average POSIX developer who, in the process of learning that POSIX is insufficient, would rather call out Microsoft than actually use an all-around superior API.
The documentation for EnumProcesses (Win32 API - EnumProcesses function) does not mention anything about a delay needed to see the current process in the list it returns.
Microsoft's example of how to use EnumProcesses to enumerate all running processes (Enumerating All Processes) also does not contain any delay before calling EnumProcesses.
A small test application I created in C++ (see below) always reports that the current process is in the list (tested on Windows 10):
#include <Windows.h>
#include <Psapi.h>
#include <iostream>

#pragma comment(lib, "Psapi.lib") // EnumProcesses needs Psapi when building with MSVC

const DWORD MAX_NUM_PROCESSES = 4096;
DWORD aProcesses[MAX_NUM_PROCESSES];

int main(void)
{
    // Get the list of running process Ids:
    DWORD cbNeeded;
    if (!EnumProcesses(aProcesses, MAX_NUM_PROCESSES * sizeof(DWORD), &cbNeeded))
    {
        return 1;
    }

    // Check if current process is in the list:
    DWORD curProcId = GetCurrentProcessId();
    bool bFoundCurProcId{ false };
    DWORD numProcesses = cbNeeded / sizeof(DWORD);
    for (DWORD i = 0; i < numProcesses; ++i)
    {
        if (aProcesses[i] == curProcId)
        {
            bFoundCurProcId = true;
            break;
        }
    }
    std::cout << "bFoundCurProcId: " << bFoundCurProcId << std::endl;
    return 0;
}
Note: I am aware that the fact that the program reported the expected result does not mean that there is no race. Maybe I just couldn't catch it manifesting. But trying to run code like that can sometimes give you a hint (especially if the result had been that there is a race).
The fact that I never had a problem running this test (I did it many times), together with the lack of any mention of the need for a delay in Microsoft's documentation, makes me believe that it is not required.
My conclusion is that either there is a unique issue when using it from Python (I doubt it), or the code you found is doing something unnecessary.
There is no race.
EnumProcesses calls an NT API function that switches to kernel mode to walk the linked list of processes. Your own process has been added to the list before it starts running.

stm32f030 doesn't go to sleep

I want to enter the sleep mode with WFI on a stm32f030 (cortex M0).
However, my code doesn't seem to work on the stm32f030, though it works on an stm32f103.
I think it works on the f103 because when I try to flash it again (with the ST-LINK utility or Keil) it doesn't respond and I have to connect under reset, which tends to indicate that the CPU is sleeping. The f030, by contrast, I can connect to without any problem.
Here is my code:
int main() {
    SetupSleep();
    __wfi();
    while (1) {}
}
Here is the content of my SetupSleep() function:
void SetupSleep(void) {
    SCB->SCR |= (1ul << 1);  // set SLEEPONEXIT
    SCB->SCR &= ~(1ul << 2); // clear SLEEPDEEP (plain Sleep, not Deep Sleep)
}
According to page 81 of the f030 programming manual (http://www.st.com/web/en/resource/technical/document/programming_manual/DM00051352.pdf), this selects Sleep mode and SLEEPONEXIT.
Does this mean an interrupt occurs that makes the CPU exit sleep mode?
It is my first time using the sleep mode, so maybe my implementation is not correct.
Instead of manipulating the registers directly, take a look at what the standard peripheral library does. In particular, look at PWR_EnterSleepMode() in stm32f0xx_pwr.c.
At the very least I can see that you're not executing either __WFI() or __WFE() to actually enter sleep mode. There are also the other low-power modes, Stop and Standby, that may be of interest to you.
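For reference, a rough sketch of the sequence for entering plain Sleep, using CMSIS names. This is my reading of the register bits discussed above, not a verbatim copy of PWR_EnterSleepMode():
void EnterSleep(void)
{
    SCB->SCR &= ~SCB_SCR_SLEEPDEEP_Msk;   // plain Sleep, not Deep Sleep
    SCB->SCR &= ~SCB_SCR_SLEEPONEXIT_Msk; // sleep immediately, not on ISR exit
    __WFI();                              // wait for interrupt
}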

Thread-Ids in Windows greater than 0xFFFF

We have a big, old software project. In earlier days this software ran on an old OS, so it has an OS wrapper. Today it runs on Windows.
In the OS wrapper we have structs to manage threads. One member of this struct is the thread ID, but it is defined as a uint16_t. The thread IDs are generated with the Win-API createThreadEx.
For some months now, at one of our customers, thread IDs have been appearing that are greater than
numeric_limits<uint16_t>::max()
We run into big trouble if we try to change this member to a uint32_t. And even if we fix it, we have to test the fix.
So my question is: how is it possible in Windows to get thread IDs greater than 0xffff? Under what circumstances does this happen?
Windows thread IDs are 32-bit unsigned integers, of type DWORD. There's no requirement for them to be less than 0xffff. Whatever thought process led you to that belief was flawed.
If you want to stress-test your system to create a scenario where you have thread IDs that go above 0xffff, then you simply need to create a large number of threads. To make this tenable without running out of virtual address space, create threads with very small stacks. You can create the threads suspended, too, because you don't need the threads to do anything.
Of course, it might still be a little tricky to force the system to allocate that many threads. I found that my simple test application would not readily generate thread IDs above 0xffff when run as a 32-bit process, but would do so as a 64-bit process. You could certainly create a 64-bit process that consumes the low-numbered thread IDs and then allow your 32-bit process to go to work and so deal with higher-numbered thread IDs.
Here's the program that I experimented with:
#include <Windows.h>
#include <iostream>

DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
    return 0;
}

int main()
{
    for (int i = 0; i < 10000; i++)
    {
        DWORD threadID;
        if (CreateThread(NULL, 64, ThreadProc, NULL, CREATE_SUSPENDED, &threadID) == NULL)
            return 1;
        std::cout << std::hex << threadID << std::endl;
    }
    return 0;
}
Re:
"We run into big trouble if we try to change this member to a uint32_t. And even if we fix it, we have to test the fix."
Your current software's use of a 16-bit object to store a value that requires 32 bits is a bug. So you have to fix it, and test the fix. There are at least two practical fixes:
Changing the declaration of the id, and all uses of it.
To find all the places the id is copied, it can really help to introduce a dedicated type that is not implicitly convertible to integer, e.g. a C++11 strongly typed enumeration (see the sketch after this list).
Adding a layer of indirection.
This might be possible without changing the data, only the threading library implementation.
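A minimal sketch of that dedicated-type idea (the names are illustrative):
#include <cstdint>

enum class ThreadId : std::uint32_t {}; // no implicit conversion to/from integer

inline ThreadId MakeThreadId(std::uint32_t raw) { return static_cast<ThreadId>(raw); }
inline std::uint32_t RawId(ThreadId id) { return static_cast<std::uint32_t>(id); }
Every place that still stores or passes the id as a uint16_t now fails to compile, which is exactly what you want when hunting down all the copies.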
A deeper fix might be to replace the current threading with C++11 standard library threading.
Anyway you're up for a bit of work, and/or some cost.

Windows Client graphics written off the window to upper-left of screen

I have a Windows WinMain() window in which I write simple graphics, merely LineTo() and FillRect(). The rectangles move around. After about an hour, the output that used to go to the main window suddenly goes to the upper-left corner of my screen, as if client coordinates were being interpreted as screen coordinates. My GetDC()'s and ReleaseDC()'s seem to be balanced, and I even checked the return value from ReleaseDC() to make sure it is not 0 (per MSDN). Sometimes the output moves back to my main window. When I go to the debugger (VS 2010), my coordinates do not seem amiss, yet output is going to the wrong place. I handle WM_PAINT, WM_CREATE, WM_TIMER, and a few others. I do not know how to debug this. Any help would be appreciated.
This has 'not checking return values' written all over it. That's pretty crucial in raw Win32 programming: almost every API function returns a boolean or a handle where FALSE or NULL indicates failure, and GetLastError() provides the error code.
A cheap way to check for this without modifying code is by using the debugger to look at the EAX register value after the API call. A 0 indicates failure. In Visual Studio you can do so by using the #eax and #err pseudo variables in the Watch window, respectively the function return value and the GetLastError value.
This goes bad once Windows starts failing API calls, probably because of a resource leak. You can see it with TaskMgr.exe, Processes tab. View + Select Columns and tick Handles, USER objects and GDI objects. It is usually the latter; restoring the device context and releasing drawing objects is very easy to fumble. You don't have to wait until it fails, a steadily climbing number in one of those columns is the giveaway. It goes belly-up when the value hits 10,000.
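A sketch of what that checking might look like around the drawing code (the names are illustrative; the point is to fail loudly instead of drawing with a bad DC):
#include <Windows.h>
#include <cassert>

void DrawFrame(HWND hwnd)
{
    HDC dc = GetDC(hwnd);
    if (dc == NULL)        // fails once the GDI handle quota is exhausted
        return;            // log and bail out instead of drawing with a bad DC
    // ... LineTo() / FillRect() calls ...
    int released = ReleaseDC(hwnd, dc);
    assert(released == 1); // 1 means the DC was actually released
}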
You must be calling GetDC(NULL) somewhere by mistake, which would get the DC for the entire desktop.
You could route all your GetDC calls through a wrapper function that asserts the argument is not NULL, to help track this down:
#include <assert.h>

HDC GetDCAssert(HWND hWnd)
{
    assert(hWnd);
    return ::GetDC(hWnd);
}

Win32: How to crash?

I'm trying to figure out where Windows Error Reports are saved; I hit Send on some earlier today, but I forgot that I want to "view the details" so I can examine the memory minidumps.
But I cannot find where they are stored (and Google doesn't know).
So I want to write a dummy application that will crash, show the WER dialog, and let me click "view the details" so I can get to the folder where the dumps are saved.
How can I crash on Windows?
Edit: The reason I ask is that I've tried overflowing the stack, and floating-point division by zero. The stack overflow makes the app vanish, but no WER dialog pops up. Floating-point division by zero results in +INF, but no exception, and no crash.
You guys are all so verbose! :-)
Here's a compact way to do it:
*((int*)0)=0;
Should be a good start:
#include <stdio.h>

int main(int argc, char* argv[])
{
    char *pointer = NULL;
    printf("crash please %s", *pointer); /* dereferencing NULL crashes here */
    return 0;
}
You are assuming the memory dumps are still around. Once they are sent, AFAIK the dumps are deleted from the machine.
The dumps themselves should be located in %TEMP% somewhere.
As far as crashing goes, that's not difficult: just do something that causes a segfault.
void crash(void)
{
    char* a = 0;
    *a = 0; /* write to address zero: access violation */
}
Not sure if this will trigger the Error Reporting dialog, but you could try integer division by zero (the floating-point kind, as the question's edit notes, just yields +INF).
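A minimal sketch; the volatile keeps the compiler from folding the division away at compile time:
volatile int zero = 0;

int main()
{
    return 1 / zero; /* raises EXCEPTION_INT_DIVIDE_BY_ZERO on Windows */
}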
The officially-supported ways to trigger a crash on purpose can be found here:
http://msdn.microsoft.com/en-us/library/ff545484(v=VS.85).aspx
Basically:
With USB keyboards, you must enable the keyboard-initiated crash in the registry. In the registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\kbdhid\Parameters, create a value named CrashOnCtrlScroll, and set it equal to a REG_DWORD value of 0x01.
Then:
You must restart the system for these settings to take effect.
After this is completed, the keyboard crash can be initiated by using the following hotkey sequence: Hold down the rightmost CTRL key, and press the SCROLL LOCK key twice.
No programming necessary ;) No wheel reinvention here :)
Interesting to know how to crash Windows. But why not have a look at
%allusersprofile%\Application Data\Microsoft\Dr Watson\
first? Look out for application-specific crash-data folders as well; I found e.g.
...\FirefoxPortable\Data\profile\minidumps\
and
...\OpenOfficePortable\Data\settings\user\crashdata\.