I am interested in sending key input through Ruby using the Win32API class.
I have found the function that I would like to call: SendInput.
MSDN describes the signature as follows:
UINT WINAPI SendInput(
    _In_ UINT nInputs,
    _In_ LPINPUT pInputs,
    _In_ int cbSize
);
And INPUT looks like
typedef struct tagINPUT {
    DWORD type;
    union {
        MOUSEINPUT mi;
        KEYBDINPUT ki;
        HARDWAREINPUT hi;
    };
} INPUT, *PINPUT;
So there's one integer I need to supply, indicating the input type.
Since I'm interested in keyboard events, I looked at the KEYBDINPUT struct:
typedef struct tagKEYBDINPUT {
    WORD wVk;
    WORD wScan;
    DWORD dwFlags;
    DWORD time;
    ULONG_PTR dwExtraInfo;
} KEYBDINPUT, *PKEYBDINPUT;
And there are another five fields: two 16-bit integers and three 32-bit integers.
Now, from what I have gathered, SendInput takes an integer, a pointer to an INPUT struct, and an integer giving the size of the INPUT struct. Creating the Win32API object looks like this:
SendInput = Win32API.new("User32", "SendInput", "LPL", "L")
INPUT_KEYBOARD = 1
Now I build my INPUT struct:
input = [INPUT_KEYBOARD, 0x5A, 0, 0, 0, 0].pack('ISSIII')
The first argument is the input type, followed by the five fields of the KEYBDINPUT struct: the virtual key code for the key I want to press, and some other flags that I don't need for testing purposes.
So I run it:
SendInput.call(1, input, input.size)
Nothing happens.
When I check GetLastError, it returns error code 87, which means an invalid parameter was passed.
I then decided to search around and found that someone else is building their input struct like this:
input = [INPUT_KEYBOARD, 0x5A, 0, 0, 0, 0, 0, 0].pack('ISSIIIII')
I tried running it, and it executes successfully!
The difference? There are two extra fields packed into the input.
Now I am confused: what are these two extra fields in the input struct for?
Is there something I missed when reading the docs?
I'm thinking it might have something to do with the ULONG_PTR type, but when I look up the data types, it's a 32-bit integer.
Reference: http://www.dsprobotics.com/support/viewtopic.php?p=7110&sid=cd256b848d00e64a2e74e093863f837f#p7110
The function takes a union of those structs; I would wager it has something to do with that.
It looks like MOUSEINPUT is larger than KEYBDINPUT:
typedef struct tagMOUSEINPUT {
    LONG dx;
    LONG dy;
    DWORD mouseData;
    DWORD dwFlags;
    DWORD time;
    ULONG_PTR dwExtraInfo;
} MOUSEINPUT, *PMOUSEINPUT;
Dunno for sure though; that seems like a pretty terrifying way to bridge those calls!
edit: Yeah, it looks like using the MOUSEINPUT size, you end up with 28 bytes of data sent to the function, which is what pack('ISSIIIII') would net you.
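To make the union sizing concrete, here is a minimal C sketch (my own addition, assuming a 32-bit build, which matches the sizes discussed above) showing where the 28 bytes come from: INPUT is the 4-byte type field plus its largest union member, MOUSEINPUT (24 bytes), and SendInput fails with error 87 when cbSize does not match sizeof(INPUT), so even a keyboard event has to be padded out to the full struct size.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // On a 32-bit build: KEYBDINPUT is 16 bytes, MOUSEINPUT is 24 bytes,
    // so INPUT = 4 (type) + 24 (largest union member) = 28 bytes.
    // That is why the Ruby pack string needs the two extra 32-bit fields:
    // they are padding up to the size of the MOUSEINPUT branch of the union.
    printf("sizeof(KEYBDINPUT) = %u\n", (unsigned)sizeof(KEYBDINPUT));
    printf("sizeof(MOUSEINPUT) = %u\n", (unsigned)sizeof(MOUSEINPUT));
    printf("sizeof(INPUT)      = %u\n", (unsigned)sizeof(INPUT));
    return 0;
}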
Related
I have a problem. As far as I know, processes in Windows share dynamic-link libraries with each other, so only one instance of each library exists in memory at a time. Knowing that, I wrote a small program in C which can change some data in this shared section. In my example, I chose to change the beginning of the MessageBoxW function. This is the code:
#include <Windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <tchar.h>

#define SIZE 12 // size of JMP byte array defined below

int WINAPI CustomMessageBoxW(HWND, LPCWSTR, LPCWSTR, UINT);
void BeginRedirect(LPVOID, LPVOID);

// JMP bytes translated to assembly (x64):
//   mov rax, 0x1234567890ABCDEF  ; this placeholder is overwritten with the newFunction address in BeginRedirect
//   jmp rax                      ; jump to newFunction
BYTE JMP[SIZE] = { 0x48, 0xB8, 0xEF, 0xCD, 0xAB, 0x90, 0x78, 0x56, 0x34, 0x12, 0xFF, 0xE0 };

DWORD oldProtect, myProtect = PAGE_EXECUTE_READWRITE;

int main()
{
    printf("MessageBoxW address: %p\n", MessageBoxW);
    printf("redirect? ");
    char res[4];
    scanf_s("%3s", res, (unsigned)_countof(res)); // width 3 leaves room for the null terminator
    if (strcmp(res, "yes") == 0) // redirect
    {
        printf("redirecting...\n");
        BeginRedirect(MessageBoxW, CustomMessageBoxW);
    }
    while (1)
    {
        MessageBoxW(NULL, L"This is original MessageBoxW", L"Caption", MB_OKCANCEL);
        printf("MessageBoxW address: %p\nBytes:\n", MessageBoxW);
        for (int i = 0; i < 20; i++)
        {
            printf("0x%hhX ", *((char*)MessageBoxW + i));
        }
        puts("\n");
        SleepEx(1000, FALSE);
    }
}

void BeginRedirect(LPVOID oldFunction, LPVOID newFunction)
{
    BYTE tempJMP[SIZE];
    memcpy(tempJMP, JMP, SIZE);
    BOOL result = VirtualProtect(oldFunction, SIZE, PAGE_EXECUTE_READWRITE, &oldProtect);
    printf("\tVirtualProtect result: %u\n", result);
    memcpy(tempJMP + 2, &newFunction, 8); // patch the placeholder 0x1234... with the actual function address
    memcpy(oldFunction, tempJMP, SIZE);
    result = VirtualProtect(oldFunction, SIZE, oldProtect, &myProtect);
    printf("\tVirtualProtect result: %u\n", result);
}

int WINAPI CustomMessageBoxW(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption, UINT uiType)
{
    printf("MyMessageBoxW: Custom message\n");
    return IDOK;
}
The program lets me choose whether to redirect the function in the current instance of the program. Here comes the interesting part.
I run the first instance. When asked whether to redirect, I type "yes", so the program does. It changes the beginning of MessageBoxW so that it points to my CustomMessageBoxW. Then, in the while loop, the program calls MessageBoxW every second and outputs some debugging information (the first 20 bytes of the function). In this instance the redirection works properly: instead of a popup, the program prints "MyMessageBoxW: Custom message" every second (as expected from CustomMessageBoxW).
Then I run a second instance of the program (the first instance is still running!). This time I decide not to redirect the function (I type anything other than "yes"). From the information printed by both instances about their MessageBoxW addresses, I can see that they are clearly identical. At that point I thought that if the addresses are the same (both instances share one copy of user32.dll, which contains MessageBoxW), then the second instance, which didn't modify MessageBoxW itself, would still end up executing CustomMessageBoxW, which would probably result in a memory access violation. But no. It turns out that the second instance works just fine and pops up a standard Windows message box, while the first instance (which is still running) still executes the redirected function (remember that in both instances the addresses of MessageBoxW are the same). Apart from that, the bytes output by
printf("MessageBoxW address: %p\nBytes:\n", MessageBoxW);
for (int i = 0; i < 20; i++)
{
printf("0x%hhX ", *((char*)MessageBoxW + i));
}
are completely different in both instances, while the function address is still the same.
I even decided to debug both instances at the same time using WinDbg, and it also showed that the two instances stored different values at the same address. I'd really appreciate it if someone could figure out what is actually going on here. Thanks!
Is it possible to display the value of a module_param in hex when it is read?
I have this code in my Linux device driver:
module_param(num_in_hex, ulong, 0644);
$ cat /sys/module/my_module/parameters/num_in_hex
1234512345
I would like to see that value in hex instead of decimal. Or should I use a different mechanism, like debugfs, for this?
There is no ready-made parameter type (the 2nd argument of the module_param macro) that outputs its value as hexadecimal, but it is not difficult to implement one.
Module parameters are driven by callback functions, which parse the parameter's value from a string and write the parameter's value to a string.
// Set hexadecimal parameter
int param_set_hex(const char *val, const struct kernel_param *kp)
{
    return kstrtoul(val, 16, (unsigned long*)kp->arg);
}

// Read hexadecimal parameter
int param_get_hex(char *buffer, const struct kernel_param *kp)
{
    return scnprintf(buffer, PAGE_SIZE, "%lx", *((unsigned long*)kp->arg));
}

// Combine the operations together
const struct kernel_param_ops param_ops_hex = {
    .set = param_set_hex,
    .get = param_get_hex
};

/*
 * Macro for checking the type of the variable passed to `module_param`.
 * Just reuse the existing macro for the `ulong` type.
 */
#define param_check_hex(name, p) param_check_ulong(name, p)

// Everything is ready to use `module_param` with the new type.
module_param(num_in_hex, hex, 0644);
Check include/linux/moduleparam.h for the implementation of the module_param macro and kernel/params.c for the implementations of the operations for the ready-made types (the STANDARD_PARAM_DEF macro).
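For completeness, here is a minimal, self-contained sketch of how these pieces might sit together in one module source file (untested; the variable name num_in_hex and its default value are just placeholders, and a reasonably recent kernel is assumed):
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static unsigned long num_in_hex = 0xdeadbeef; /* the parameter itself */

static int param_set_hex(const char *val, const struct kernel_param *kp)
{
    return kstrtoul(val, 16, (unsigned long *)kp->arg);
}

static int param_get_hex(char *buffer, const struct kernel_param *kp)
{
    return scnprintf(buffer, PAGE_SIZE, "%lx", *((unsigned long *)kp->arg));
}

static const struct kernel_param_ops param_ops_hex = {
    .set = param_set_hex,
    .get = param_get_hex,
};

/* Type check: reuse the existing check for ulong. */
#define param_check_hex(name, p) param_check_ulong(name, p)

module_param(num_in_hex, hex, 0644);
MODULE_PARM_DESC(num_in_hex, "a value that is read back in hex");

MODULE_LICENSE("GPL");
Reading the parameter back should then print the hex form, e.g. cat /sys/module/my_module/parameters/num_in_hex would show deadbeef.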
I used NtQuerySystemInformation to get all the handle information, like this:
NtQuerySystemInformation(SystemHandleInformation, pHandleInfor, ulSize, NULL); // SystemHandleInformation = 16
The struct pointed to by pHandleInfor is:
typedef struct _SYSTEM_HANDLE_INFORMATION
{
    ULONG ProcessId;
    UCHAR ObjectTypeNumber;
    UCHAR Flags;
    USHORT Handle;
    PVOID Object;
    ACCESS_MASK GrantedAccess;
} SYSTEM_HANDLE_INFORMATION, *PSYSTEM_HANDLE_INFORMATION;
It works well on XP 32-bit, but on Win7 64-bit it only returns correct PIDs when they are less than 65535. The type of ProcessId in this struct is ULONG, so I think it should be able to hold values larger than 65535. What's wrong? Is there another API I can use instead?
There are two enum values for NtQuerySystemInformation to get handle info:
CNST_SYSTEM_HANDLE_INFORMATION = 16
CNST_SYSTEM_EXTENDED_HANDLE_INFORMATION = 64
And correspondingly two structs: SYSTEM_HANDLE_INFORMATION and SYSTEM_HANDLE_INFORMATION_EX.
The definitions for these structs are:
struct SYSTEM_HANDLE_INFORMATION
{
    short UniqueProcessId;
    short CreatorBackTraceIndex;
    char ObjectTypeIndex;
    char HandleAttributes; // 0x01 = PROTECT_FROM_CLOSE, 0x02 = INHERIT
    short HandleValue;
    size_t Object;
    int GrantedAccess;
};

struct SYSTEM_HANDLE_INFORMATION_EX
{
    size_t Object;
    size_t UniqueProcessId;
    size_t HandleValue;
    int GrantedAccess;
    short CreatorBackTraceIndex;
    short ObjectTypeIndex;
    int HandleAttributes;
    int Reserved;
};
As you can see, the first struct really can only hold 16-bit process IDs...
See, for example, the ProcessExplorer project's source file ntexapi.h for more information.
Note also that the field widths for SYSTEM_HANDLE_INFORMATION_EX in my struct definitions might differ from theirs (that is, in my definition some field widths vary depending on the bitness), but I think I tested the code under both 32-bit and 64-bit and found it to be correct.
Please recheck if necessary and let us know if you have additional info.
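To illustrate, here is a rough C sketch of querying the extended class (my own addition, not from the original answer). Note that what is listed above as SYSTEM_HANDLE_INFORMATION_EX corresponds to the per-handle entry; the returned buffer actually begins with a small header giving the number of entries. This is an undocumented API, so the layouts below follow the commonly published unofficial definitions (such as those in ntexapi.h) and should be treated as assumptions to verify; you also need to link against ntdll.lib or resolve NtQuerySystemInformation yourself.
#include <windows.h>
#include <winternl.h>
#include <stdio.h>
#include <stdlib.h>

#pragma comment(lib, "ntdll.lib")

#define SystemExtendedHandleInformation 64
#define STATUS_INFO_LENGTH_MISMATCH ((NTSTATUS)0xC0000004L)

// Unofficial layouts (assumption: compare against ntexapi.h before relying on them).
typedef struct _SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX {
    PVOID Object;
    ULONG_PTR UniqueProcessId;
    ULONG_PTR HandleValue;
    ULONG GrantedAccess;
    USHORT CreatorBackTraceIndex;
    USHORT ObjectTypeIndex;
    ULONG HandleAttributes;
    ULONG Reserved;
} SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX;

typedef struct _SYSTEM_HANDLE_INFORMATION_EX {
    ULONG_PTR NumberOfHandles;
    ULONG_PTR Reserved;
    SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX Handles[1];
} SYSTEM_HANDLE_INFORMATION_EX;

int main(void)
{
    ULONG size = 0x10000;
    SYSTEM_HANDLE_INFORMATION_EX *info = NULL;
    NTSTATUS status;

    // The snapshot size is not known in advance; grow the buffer until it fits.
    for (;;) {
        info = (SYSTEM_HANDLE_INFORMATION_EX *)realloc(info, size);
        status = NtQuerySystemInformation(
            (SYSTEM_INFORMATION_CLASS)SystemExtendedHandleInformation,
            info, size, &size);
        if (status != STATUS_INFO_LENGTH_MISMATCH)
            break;
        size *= 2;
    }

    if (status >= 0) { // NTSTATUS success codes are non-negative
        for (ULONG_PTR i = 0; i < info->NumberOfHandles; i++) {
            // UniqueProcessId is pointer-sized here, so PIDs above 65535 survive.
            printf("pid=%Iu handle=0x%Ix\n",
                   info->Handles[i].UniqueProcessId,
                   info->Handles[i].HandleValue);
        }
    }
    free(info);
    return 0;
}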
From Raymond Chen's article Processes, commit, RAM, threads, and how high can you go?:
I later learned that the Windows NT folks do try to keep the numerical values of process ID from getting too big. Earlier this century, the kernel team experimented with letting the numbers get really huge, in order to reduce the rate at which process IDs get reused, but they had to go back to small numbers, not for any technical reasons, but because people complained that the large process IDs looked ugly in Task Manager. (One customer even asked if something was wrong with his computer.)
It's taken a few years, but I am finally taking the plunge into VC++. I need to be able to read x number of sectors from a physical device (namely a hard drive). I am using the CreateFile(), SetFilePointerEx(), and ReadFile() APIs.
I have done a LOT of reading online in all the major forums about this topic. I have exhausted my research, and now I feel it's time to ask the experts to weigh in on this dilemma. As this is my very first post ever on this topic, please go easy on me :)
I should also point out that this is a .DLL that I consume from a simple C# app. The plumbing all works fine. It's the SetFilePointer(Ex)() APIs that are causing me grief.
I can get the code to work up until about the size of a LONG (4,xxx,xxx) - I can't remember the exact value. Suffice it to say that I can read everything up to and including sector #4,000,000, but not 5,000,000 or above. The problem lies in the "size" of the parameters for the SetFilePointer() and SetFilePointerEx() APIs. I've tried both, and so far SetFilePointerEx() seems to be what I should use to work on 64-bit systems.
The 2nd and 3rd parameters of SetFilePointerEx appear in its signature as follows:
BOOL WINAPI SetFilePointerEx(
    __in      HANDLE hFile,
    __in      LARGE_INTEGER liDistanceToMove,
    __out_opt PLARGE_INTEGER lpNewFilePointer,
    __in      DWORD dwMoveMethod
);
Please note that I have tried passing the LowPart and the HighPart as the 2nd and 3rd parameters without any success, as I get a "cannot convert LARGE_INTEGER to PLARGE_INTEGER" error (for parameter 3).
Here is my code. I use a breakpoint to view buff[0], etc. I would like to read past the 4,xxx,xxx limitation. Obviously I am doing something wrong. Each read past this limit resets my file pointer to sector 0.
#include "stdafx.h"
#include <windows.h>
#include <conio.h>
extern "C"
__declspec(dllexport) int ReadSectors(long startSector, long numSectors)
{
HANDLE hFile;
const int SECTOR_SIZE = 512;
const int BUFFER_SIZE = 512;
LARGE_INTEGER liDistanceToMove;
PLARGE_INTEGER newFilePtr = NULL; // not used in this context.
// just reading from START to END
liDistanceToMove.QuadPart = startSector * SECTOR_SIZE;
DWORD dwBytesRead, dwPos;
LPCWSTR fname = L"\\\\.\\PHYSICALDRIVE0";
char buff[BUFFER_SIZE];
// Open the PHYSICALDEVICE as a file.
hFile = CreateFile(fname,
GENERIC_READ | GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
NULL,
OPEN_EXISTING,
FILE_ATTRIBUTE_NORMAL,
NULL);
// Here's the API definition
/*BOOL WINAPI SetFilePointerEx(
__in HANDLE hFile,
__in LARGE_INTEGER liDistanceToMove,
__out_opt PLARGE_INTEGER lpNewFilePointer,
__in DWORD dwMoveMethod
);*/
dwPos = SetFilePointerEx(hFile, liDistanceToMove, NULL, FILE_BEGIN);
if(ReadFile(hFile, buff, BUFFER_SIZE, &dwBytesRead, NULL))
{
if(dwBytesRead > 5)
{
BYTE x1 = buff[0];
BYTE x2 = buff[1];
BYTE x3 = buff[2];
BYTE x4 = buff[3];
BYTE x5 = buff[4];
}
}
// Close both files.
CloseHandle(hFile);
return 0;
}
startSector * SECTOR_SIZE;
startSector is a long (32 bits) and SECTOR_SIZE is an int (also 32 bits); multiply these two and the intermediate result is a long, which overflows before you stuff it into the __int64 of the LARGE_INTEGER, which is too late. You want to operate on __int64s, something like
liDistanceToMove.QuadPart = startSector;
liDistanceToMove.QuadPart *= SECTOR_SIZE;
for example.
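An equivalent one-line form of the same fix (just a sketch) is to force the multiplication itself into 64-bit arithmetic by casting one operand first:
// Cast one operand so the multiply happens in 64 bits; the (now in-range)
// 64-bit result is then stored in the LARGE_INTEGER.
liDistanceToMove.QuadPart = (LONGLONG)startSector * SECTOR_SIZE;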
I'm using IDebugSymbols::GetNameByOffset and I'm finding that I get the same symbol name for different functions that overload the same name.
E.g. The code I'm looking up the symbols for might be as follows:
void SomeFunction(int) {..}
void SomeFunction(float) {..}
At runtime, when I have an address of an instruction from each of these functions I'd like to use GetNameByOffset and tell the two apart somehow. I've experimented with calling SetSymbolOptions toggling the SYMOPT_UNDNAME and SYMOPT_NO_CPP flags as documented here, but this didn't work.
Does anyone know how to tell these two symbols apart in the debugger engine universe?
Edit: Please see my comment on the accepted answer for a minor amendment to the proposed solution.
Quote from dbgeng.h:
// A symbol name may not be unique, particularly
// when overloaded functions exist which all
// have the same name. If GetOffsetByName
// finds multiple matches for the name it
// can return any one of them. In that
// case it will return S_FALSE to indicate
// that ambiguity was arbitrarily resolved.
// A caller can then use SearchSymbols to
// find all of the matches if it wishes to
// perform different disambiguation.
STDMETHOD(GetOffsetByName)(
    THIS_
    __in PCSTR Symbol,
    __out PULONG64 Offset
    ) PURE;
So, I would get the name with IDebugSymbols::GetNameByOffset() (it comes back like "module!name", I believe), make sure it is an overload (if you're not sure) using IDebugSymbols::GetOffsetByName() (which is supposed to return S_FALSE for multiple overloads), and look up all the possibilities for this name using StartSymbolMatch()/EndSymbolMatch(). Not a one-liner, though (and not really helpful for that matter...).
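As a rough sketch of that last step (my own, untested; written against the plain C bindings of dbgeng.h, so the calls go through lpVtbl - in C++ you would call the methods directly, as in the GetFunctionEntryByOffset snippet below - and the already-initialized IDebugSymbols pointer and the pattern string are assumed to come from elsewhere):
#include <windows.h>
#include <stdio.h>
#include <dbgeng.h>

// Enumerate every symbol matching a pattern such as "mymodule!SomeFunction*"
// and print each match with its offset, so the overloads can be told apart by address.
void ListMatches(IDebugSymbols *symbols, PCSTR pattern)
{
    ULONG64 handle = 0;
    char name[512];
    ULONG64 offset;

    if (symbols->lpVtbl->StartSymbolMatch(symbols, pattern, &handle) != S_OK)
        return;

    while (symbols->lpVtbl->GetNextSymbolMatch(symbols, handle,
               name, (ULONG)sizeof(name), NULL, &offset) == S_OK)
    {
        printf("%s at %016I64x\n", name, offset);
    }

    symbols->lpVtbl->EndSymbolMatch(symbols, handle);
}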
Another option would be to go with
HRESULT
IDebugSymbols3::GetFunctionEntryByOffset(
    IN ULONG64 Offset,
    IN ULONG Flags,
    OUT OPTIONAL PVOID Buffer,
    IN ULONG BufferSize,
    OUT OPTIONAL PULONG BufferNeeded
);
// It can be used to retrieve FPO data for a particular function:
FPO_DATA fpo;
HRESULT hres = m_Symbols3->GetFunctionEntryByOffset(
    addr,        // Offset
    0,           // Flags
    &fpo,        // Buffer
    sizeof(fpo), // BufferSize
    0            // BufferNeeded
);
and then use fpo.cdwParams for basic parameter-size discrimination (cdwParams = the size of the parameters, in DWORDs).