I'm P/Invoking to CreateRectRgn in gdi32.dll. The normal P/Invoke signature for this function is:
[DllImport("gdi32", SetLastError=true)]
static extern IntPtr CreateRectRgn(int nLeft, int nTop, int nRight, int nBottom);
As a shortcut, I've also defined this overload:
[DllImport("gdi32", SetLastError=true)]
static extern IntPtr CreateRectRgn(RECT rc);
[StructLayout(LayoutKind.Sequential)]
struct RECT{
public int left;
public int top;
public int right;
public int bottom;
}
(Yes, I am aware of CreateRectRgnIndirect, but since I must use functions to convert between System.Drawing.Rectangle and this RECT structure, the above is more useful to me, as it doesn't involve an intermediate variable.)
This overload should work identically to the normal signature, since it should put the stack in an identical state at entry to CreateRectRgn. And indeed, on Windows XP, 32-bit, it works flawlessly. But on Windows 7, 64-bit, the function returns zero, and Marshal.GetLastWin32Error() returns 87, which is "The parameter is incorrect."
Any ideas as to what could be the problem?
Oh. The calling convention Microsoft uses on x64 is totally different from STDCALL. In the call to CreateRectRgn, the stack isn't used for the parameters at all; they're all passed in registers. When I try to pass a RECT structure, a copy of the structure is made on the stack and a pointer to that copy is put in a register. Therefore, this little trick won't work at all in 64-bit Windows. Now I've got to go through all my interop code, find the other places I've done this, and take them all out.
How can a GUID be associated with a runtime class so that it can be used in a call to winrt::create_instance, allowing a WinRT runtime class to be late bound? I don't see how in MIDL 3.0, though interfaces work with uuid and version. Is this the answer to late binding in WinRT?
I still don't know how to use winrt::create_instance, but I took a really deep look into base.h and found plenty of guts there to late bind to a WinRT component DLL using def calls and interop. This is my solution to the problem, and it works. Choosing the right way to hold the IUnknown was important, and there are some specifics I'm still working out, like why I have to wrap the IUnknown in a winrt::impl::abi_t<> (WinRT/COM conversion?). Using the following code you simply load the DLL by file name (with a qualified path if it is not in the executable directory) and call the class by its runtime namespace.classname. It is absolutely not necessary to add the ActivatableClass to the Package.appxmanifest or to reference the winmd. A search shows a lot of people looking for this, but there is a general reluctance to point it out. It's just like COM really, with its own quirks.
UINT32 ret = 0;
// Load the component DLL directly (qualified path if it is not next to the executable).
void* dllHandle = WINRT_IMPL_LoadLibraryW(L"Project.dll");
if (!dllHandle) { throw; }
// The DLL exports DllGetActivationFactory just like any other WinRT component.
void* pProc = WINRT_IMPL_GetProcAddress(dllHandle, "DllGetActivationFactory");
auto DllGetActivationFactory = reinterpret_cast<int32_t(__stdcall*)(void* classId, void** factory)>(pProc);
// Build a fast-pass HSTRING for the runtime class name.
static const WCHAR* cname = L"namespace.classname";
const UINT32 cnamelen = static_cast<UINT32>(wcslen(cname));
HSTRING hcname = NULL;
HSTRING_HEADER header;
HRESULT hr = WindowsCreateStringReference(cname, cnamelen, &header, &hcname);
// Ask the DLL for the activation factory of that class (hcname is the HSTRING created above).
winrt::com_ptr< winrt::impl::abi_t<winrt::Windows::Foundation::IActivationFactory> > oCOMActivationFactory{ nullptr };
ret = (*DllGetActivationFactory)(hcname, oCOMActivationFactory.put_void());
// Activate an instance and move it onto the projected interface.
winrt::com_ptr< winrt::impl::abi_t<winrt::Windows::Foundation::IUnknown> > iObj{ nullptr };
ret = oCOMActivationFactory.get()->ActivateInstance(iObj.put_void());
winrt::com_ptr<ProxyServer::ImplementationInterface> iImplementationInterface{ nullptr };
winrt::copy_from_abi(iImplementationInterface, iObj.as<ProxyServer::ImplementationInterface>());
iImplementationInterface.get()->MyFunction();
oCOMActivationFactory.detach(); // give up ownership without calling Release
WindowsDeleteString(hcname);
OK, I've been muddling through Stack Overflow on the particulars of void*, and through books like The C Programming Language (K&R) and The C++ Programming Language (Stroustrup). What have I learned? That void* is a generic pointer with no type attached. It requires a cast to some concrete type, and printing a void* just yields the address.
What else do I know? void* can't be dereferenced, and so far it remains the one thing in C/C++ about which I have found much written but little understanding imparted.
I understand that it must be cast, e.g. *(char*)ptr, but what makes no sense to me for a generic pointer is that I must somehow already know what type I need in order to grab a value. I'm a Java programmer; I understand generic types, but this is something I struggle with.
So I wrote some code
#include <iostream> // std::cout
#include <cstdio>   // fgetc
typedef struct node
{
void* data;
node* link;
}Node;
typedef struct list
{
Node* head;
}List;
Node* add_new(void* data, Node* link);
void show(Node* head);
Node* add_new(void* data, Node* link)
{
Node* newNode = new Node();
newNode->data = data;
newNode->link = link;
return newNode;
}
void show(Node* head)
{
while (head != nullptr)
{
std::cout << head->data; // note: this prints the pointer value (an address), not the string it points to
head = head->link;
}
}
int main()
{
List list;
list.head = nullptr;
list.head = add_new("My Name", list.head);
list.head = add_new("Your Name", list.head);
list.head = add_new("Our Name", list.head);
show(list.head);
fgetc(stdin);
return 0;
}
I'll handle the memory deallocation later. Assuming I have no understanding of the type stored in the void*, how do I get the value out? This seems to imply that I already need to know the type, which tells me nothing about the generic nature of void*. I can follow the code here, but I still have no real understanding.
Why am I expecting void* to cooperate, and the compiler to automatically cast out the type that is hidden somewhere on the heap or stack?
I'll handle the memory deallocation later. Assuming I have no understanding of the type stored in void*, how do I get the value out?
You can't. You must know the valid types that the pointer can be cast to before you can dereference it.
Here are a couple of options for using a generic type:
If you are able to use a C++17 compiler, you may use std::any (a short sketch follows below).
If you are able to use the boost libraries, you may use boost::any.
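For illustration, here is a minimal C++17 sketch of the std::any option (my example, not part of the original answer): the stored type travels with the value, and std::any_cast recovers it, throwing std::bad_any_cast if you guess wrong.

#include <any>
#include <iostream>
#include <string>

int main() {
    std::any a = std::string("My Name"); // the type is remembered alongside the value
    if (a.type() == typeid(std::string)) // you can ask which type is currently stored
        std::cout << std::any_cast<std::string>(a) << '\n';

    a = 42;                              // now it holds an int instead
    std::cout << std::any_cast<int>(a) << '\n';
    // std::any_cast<std::string>(a) would throw std::bad_any_cast here
    return 0;
}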
Unlike Java, you are working with memory pointers in C/C++. There is no encapsulation whatsoever. The void * type means the variable is an address in memory; anything can be stored there. With a type like int * you tell the compiler what you are referring to. Besides, the compiler knows the size of the type (say 4 bytes for int), and the address will be a multiple of 4 in that case (granularity/memory alignment). On top of that, if you give the compiler the type, it will perform consistency checks at compile time, not afterwards. None of this happens with void *.
In a nutshell, you are working bare metal. The types are compiler directives and do not hold runtime information, nor does the runtime track the objects you are dynamically creating. It is merely an allocated segment of memory where you can eventually store anything.
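A tiny sketch of that difference (my example, not the answer's): the compiler checks typed pointers at compile time, but once the address goes through a void* you are on your own.

#include <cstdio>

int main() {
    int x = 7;

    int* pi = &x;        // typed: the compiler knows this points at an int
    // double* pd = pi;  // error: rejected at compile time

    void* pv = &x;            // untyped: just an address, no checks from here on
    double* pd = (double*)pv; // compiles fine, but dereferencing pd would be wrong
    (void)pd;

    printf("%d\n", *(int*)pv); // correct only because *we* know an int lives there
    return 0;
}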
The main reason to use void* is that different things may be pointed at. Thus, I may pass in an int* or Node* or anything else. But unless you know either the type or the length, you can't do anything with it.
But if you know the length, you can handle the memory pointed at without knowing the type. Casting it as a char* is used because it is a single byte, so if I have a void* and a number of bytes, I can copy the memory somewhere else, or zero it out.
Additionally, if it is a pointer to a class, but you don't know if it is a parent or inherited class, you may be able to assume one and find out a flag inside the data which tells you which one. But no matter what, when you want to do much beyond passing it to another function, you need to cast it as something. char* is just the easiest single byte value to use.
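For instance, a short sketch (mine, under the assumptions just described) of treating a void* plus a length as raw bytes, without ever knowing the real type:

#include <cstddef>
#include <cstdio>

// Zero out any buffer given only its address and size, by viewing it as bytes.
void wipe(void* p, size_t n) {
    char* bytes = (char*)p;      // char is one byte, so this walks the raw memory
    for (size_t i = 0; i < n; ++i)
        bytes[i] = 0;
}

int main() {
    int values[4] = {1, 2, 3, 4};
    wipe(values, sizeof values); // works for an int[], a struct, anything
    printf("%d %d\n", values[0], values[3]); // prints: 0 0
    return 0;
}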
Your confusion derives from your habit of dealing with Java programs. Java code is a set of instructions for a virtual machine, where the role of RAM is played by a sort of database that stores the name, type, size and data of each object. The language you're learning now is compiled into instructions for the CPU, with the same organization of memory as the underlying OS. The memory model used by C and C++ is an abstraction built on top of most popular OSes, designed so that compiled code works efficiently on each platform and OS. Naturally, that organization doesn't involve storing type information, except for the famous RTTI in C++.
For your case RTTI cannot be used directly, unless you create a wrapper around your naked pointer which stores the type information.
In fact the C++ library contains a vast collection of container class templates that are usable and portable, as they are defined by the ISO standard; three quarters of the standard is just a description of the library often referred to as the STL. Using them is preferable to working with naked pointers, unless you mean to create your own container for some reason. For this particular task, only the C++17 standard offers the std::any class, previously present in the boost library. Naturally, it is possible to reimplement it or, in some cases, to replace it with std::variant, as sketched below.
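As a sketch of that last suggestion (my example, not the answer's), std::variant could replace the void* in the question's Node when the set of possible types is known up front; the VNode type below is a hypothetical stand-in for the question's Node.

#include <iostream>
#include <string>
#include <variant>

// The node itself records which of the listed alternatives it currently holds.
struct VNode {
    std::variant<int, std::string> data;
    VNode* link = nullptr;
};

int main() {
    VNode second{std::string("My Name"), nullptr};
    VNode first{42, &second};

    // std::visit picks the right alternative at runtime; no manual casting needed.
    for (VNode* p = &first; p != nullptr; p = p->link)
        std::visit([](const auto& v) { std::cout << v << '\n'; }, p->data);
    return 0;
}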
Assuming I have no understanding of the type stored in void*, how do I get the value out
You don't.
What you can do is record the type stored in the void*.
In C, void* is used to pass a pointer to some chunk of data through a layer of abstraction and to receive it at the other end, where it is cast back to the type the code knows it was given.
#include <stdio.h>

void do_callback( void(*pfun)(void*), void* pdata ) {
pfun(pdata);
}
void print_int( void* pint ) {
printf( "%d", *(int*)pint );
}
int main() {
int x = 7;
do_callback( print_int, &x );
}
Here, we forget the type of &x and pass it through do_callback.
It is later passed to code inside do_callback or elsewhere that knows that the void* is actually an int*. So it casts it back and uses it as an int.
The void* and the consumer void(*)(void*) are coupled. The above code is "provably correct", but the proof does not lie in the type system; instead, it depends on the fact we only use that void* in a context that knows it is an int*.
In C++ you can use void* similarly. But you can also get fancy.
Suppose you want a pointer to anything printable. Something is printable if it can be << to a std::ostream.
struct printable {
void const* ptr = 0;
void(*print_f)(std::ostream&, void const*) = 0;
printable() {}
printable(printable&&)=default;
printable(printable const&)=default;
printable& operator=(printable&&)=default;
printable& operator=(printable const&)=default;
template<class T,std::size_t N>
printable( T(&t)[N] ):
ptr( t ),
print_f( []( std::ostream& os, void const* pt) {
T* ptr = (T*)pt;
for (std::size_t i = 0; i < N; ++i)
os << ptr[i];
})
{}
template<std::size_t N>
printable( char(&t)[N] ):
ptr( t ),
print_f( []( std::ostream& os, void const* pt) {
os << (char const*)pt;
})
{}
template<class T,
std::enable_if_t<!std::is_same<std::decay_t<T>, printable>{}, int> =0
>
printable( T&& t ):
ptr( std::addressof(t) ),
print_f( []( std::ostream& os, void const* pt) {
os << *(std::remove_reference_t<T>*)pt;
})
{}
friend
std::ostream& operator<<( std::ostream& os, printable self ) {
self.print_f( os, self.ptr );
return os;
}
explicit operator bool()const{ return print_f; }
};
What I just did is a technique called "type erasure" in C++ (vaguely similar to Java type erasure).
void send_to_log( printable p ) {
std::cerr << p;
}
Live example.
Here we created an ad-hoc "virtual" interface to the concept of printing on a type.
The type need not support any actual interface (no binary layout requirements), it just has to support a certain syntax.
We create our own virtual dispatch table system for an arbitrary type.
This is used in the C++ standard library. In C++11 there is std::function<Signature>, and in C++17 there is std::any.
std::any is a void* that knows how to destroy and copy its contents, and if you know the type you can cast it back to the original type. You can also query it and ask whether it holds a specific type.
Mixing std::any with the above type-erasure technique lets you create regular types (that behave like values, not references) with arbitrary duck-typed interfaces.
I'm trying to make a Win32 DLL that is able to handle ANSI and Unicode depending on what is specified for the Character Set project property in Visual Studio: Unicode, or Not Set (ANSI).
The DLL has this definition:
extern "C" int __stdcall calc(TCHAR *foo);
The function pointer typedef used by the caller is as follows:
typedef int (CALLBACK* LPFNDLLCALC)(TCHAR *foo);
Inside the MFC calling app I load the DLL like this:
HINSTANCE DllFoo = LoadLibrary(L"foo.dll");
LPFNDLLCALC lpfnDllcalc = (LPFNDLLCALC)GetProcAddress(DllFoo ,"calc");
CString C_SerialNumber;
mvSerialNumber.GetWindowText(C_SerialNumber);
TCHAR* SerialNumber = C_SerialNumber.GetBuffer(0);
lpfnDllcalc(SerialNumber);
I understand that I am doing something wrong in the C_SerialNumber.GetBuffer(0) to TCHAR* conversion, because the debugger inside the DLL shows that only the first char is passed to the DLL, not the complete string.
How do I get from a CString to a pointer that works in both ANSI and Unicode?
If I change all my code to wchar_t or char instead of TCHAR I get it to work, but not with the native TCHAR macro.
As I see it you have two options:
Write the code entirely using TCHAR. Then compile the code into two separate DLLs, one narrow and one wide.
Have a single DLL that exports two variants of each function that operates on text. This is how the Windows API is implemented.
If you choose the second option, you don't need to implement each function twice. The primary function is the wide variant. For the narrow variant you convert the input from narrow to wide and then call the wide version. Vice versa for output text. In other words, you use the adapter pattern.
I suppose that you are imagining a third option where you have a single function that can operate on either form of text. Don't go this way. It abandons type safety and will give you no end of pain. It will also be counter to users' expectations.
As David said, you need to export two separate functions, one for Ansi and one for Unicode, just like the Win32 API does, eg:
#ifdef __cplusplus
extern "C" {
#endif
int WINAPI calcA(LPCSTR foo);
int WINAPI calcW(LPCWSTR foo);
#ifdef __cplusplus
}
#endif
typedef int (WINAPI *LPFNDLLCALC)(LPCTSTR foo);
Then you can do the following:
int WINAPI calcA(LPCSTR foo)
{
return calcW(CStringW(foo));
}
int WINAPI calcW(LPCWSTR foo)
{
//...
}
HINSTANCE DllFoo = LoadLibrary(L"foo.dll");
LPFNDLLCALC lpfnDllcalc = (LPFNDLLCALC) GetProcAddress(DllFoo,
#ifdef UNICODE
"calcW"
#else
"calcA"
#endif
);
CString C_SerialNumber;
mvSerialNumber.GetWindowText(C_SerialNumber);
lpfnDllcalc(C_SerialNumber);
I used NtQuerySystemInformation to get all the handle information, like this:
NtQuerySystemInformation(SystemHandleInformation, pHandleInfor, ulSize, NULL); // SystemHandleInformation = 16
The struct that pHandleInfor points to is:
typedef struct _SYSTEM_HANDLE_INFORMATION
{
ULONG ProcessId;
UCHAR ObjectTypeNumber;
UCHAR Flags;
USHORT Handle;
PVOID Object;
ACCESS_MASK GrantedAccess;
} SYSTEM_HANDLE_INFORMATION, *PSYSTEM_HANDLE_INFORMATION;
It works well on XP 32-bit, but on Win7 64-bit it can only get the right PID when it is less than 65535. The type of ProcessId in this struct is ULONG, so I think it should be able to hold more than 65535. What's wrong with it? Is there any other API I can use instead?
There are two enum values for NtQuerySystemInformation to get handle info:
CNST_SYSTEM_HANDLE_INFORMATION = 16
CNST_SYSTEM_EXTENDED_HANDLE_INFORMATION = 64
And correspondingly two structs: SYSTEM_HANDLE_INFORMATION and SYSTEM_HANDLE_INFORMATION_EX.
The definitions for these structs are:
struct SYSTEM_HANDLE_INFORMATION
{
short UniqueProcessId;
short CreatorBackTraceIndex;
char ObjectTypeIndex;
char HandleAttributes; // 0x01 = PROTECT_FROM_CLOSE, 0x02 = INHERIT
short HandleValue;
size_t Object;
int GrantedAccess;
}
struct SYSTEM_HANDLE_INFORMATION_EX
{
size_t Object;
size_t UniqueProcessId;
size_t HandleValue;
int GrantedAccess;
short CreatorBackTraceIndex;
short ObjectTypeIndex;
int HandleAttributes;
int Reserved;
}
As you can see, the first struct really can only hold 16-bit process IDs...
See for example ProcessExplorer project's source file ntexapi.h for more information.
Note also that the field widths for SYSTEM_HANDLE_INFORMATION_EX in my struct definitions might be different from theirs (that is, in my definition some field widths vary depending on the bitness), but I think I tested the code both under 32-bit and 64-bit and found it to be correct.
Please recheck if necessary and let us know if you have additional info.
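As a rough sketch of putting the extended structs above to use (my addition, not part of the original answer): the info class value 64 comes from this answer, but the assumed buffer header layout (a handle count, a reserved field, then the entry array) and the grow-on-STATUS_INFO_LENGTH_MISMATCH loop are conventions you should verify against ntexapi.h.

#include <windows.h>
#include <vector>
#include <cstdio>

// Entry layout as given in the answer above.
struct SYSTEM_HANDLE_INFORMATION_EX
{
    size_t Object;
    size_t UniqueProcessId;
    size_t HandleValue;
    int GrantedAccess;
    short CreatorBackTraceIndex;
    short ObjectTypeIndex;
    int HandleAttributes;
    int Reserved;
};

// Assumed header that precedes the entries when info class 64 is used (verify!).
struct SYSTEM_HANDLE_INFORMATION_EX_HEADER
{
    size_t NumberOfHandles;
    size_t Reserved;
};

typedef LONG (WINAPI *NtQuerySystemInformation_t)(ULONG, PVOID, ULONG, PULONG);

int main()
{
    const ULONG CNST_SYSTEM_EXTENDED_HANDLE_INFORMATION = 64;
    const LONG STATUS_INFO_LENGTH_MISMATCH = (LONG)0xC0000004;

    NtQuerySystemInformation_t NtQSI = (NtQuerySystemInformation_t)
        GetProcAddress(GetModuleHandleA("ntdll.dll"), "NtQuerySystemInformation");
    if (!NtQSI) return 1;

    std::vector<char> buffer(1 << 20);
    ULONG needed = 0;
    LONG status;

    // The snapshot size changes between calls, so grow the buffer until it fits.
    while ((status = NtQSI(CNST_SYSTEM_EXTENDED_HANDLE_INFORMATION,
                           buffer.data(), (ULONG)buffer.size(), &needed)) == STATUS_INFO_LENGTH_MISMATCH)
        buffer.resize(buffer.size() * 2);
    if (status != 0) return 1;

    const SYSTEM_HANDLE_INFORMATION_EX_HEADER* header =
        (const SYSTEM_HANDLE_INFORMATION_EX_HEADER*)buffer.data();
    const SYSTEM_HANDLE_INFORMATION_EX* entries =
        (const SYSTEM_HANDLE_INFORMATION_EX*)(header + 1);

    for (size_t i = 0; i < header->NumberOfHandles; ++i)
        printf("pid=%zu handle=%zu\n", entries[i].UniqueProcessId, entries[i].HandleValue);
    return 0;
}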
From Raymond Chen's article Processes, commit, RAM, threads, and how high can you go?:
I later learned that the Windows NT folks do try to keep the numerical values of process ID from getting too big. Earlier this century, the kernel team experimented with letting the numbers get really huge, in order to reduce the rate at which process IDs get reused, but they had to go back to small numbers, not for any technical reasons, but because people complained that the large process IDs looked ugly in Task Manager. (One customer even asked if something was wrong with his computer.)
I am running out of ideas here. I have a piece of code adapted from http://thetechnofreak.com/technofreak/keylogger-visual-c/ to convert keycodes to Unicode chars. It works fine in all situations except when you try to run the 32-bit version on 64-bit Windows. For some reason pKbd->pVkToWcharTable keeps coming back NULL. I have tried __ptr64 as well as explicitly specifying SysWOW64 and System32 for the kbd DLL path. I have found several items across the internet referring to this exact or a very similar problem, but I cannot seem to get any of the solutions to work (see: KbdLayerDescriptor returns NULL at 64bit architecture). The following is my test code, which was compiled with MinGW-32 on Windows XP (gcc -std=c99 Wow64Test.c) and then executed on Windows 7 64-bit. On Windows XP I get a valid pointer; on Windows 7 I get NULL.
Update: So it looks like the problems I am having are due to MinGW not implementing __ptr64 correctly, as the sizeof operation gives 4 bytes instead of the 8 bytes given by Visual Studio. So the real solution would be figuring out a way to make the size of KBD_LONG_POINTER dynamic, or at least 64 bits, but I am not sure if that's possible. Any ideas?
#include <windows.h>
#include <stdio.h>
#define KBD_LONG_POINTER __ptr64
//#define KBD_LONG_POINTER
typedef struct {
BYTE Vk;
BYTE ModBits;
} VK_TO_BIT, *KBD_LONG_POINTER PVK_TO_BIT;
typedef struct {
PVK_TO_BIT pVkToBit;
WORD wMaxModBits;
BYTE ModNumber[];
} MODIFIERS, *KBD_LONG_POINTER PMODIFIERS;
typedef struct _VK_TO_WCHARS1 {
BYTE VirtualKey;
BYTE Attributes;
WCHAR wch[1];
} VK_TO_WCHARS1, *KBD_LONG_POINTER PVK_TO_WCHARS1;
typedef struct _VK_TO_WCHAR_TABLE {
PVK_TO_WCHARS1 pVkToWchars;
BYTE nModifications;
BYTE cbSize;
} VK_TO_WCHAR_TABLE, *KBD_LONG_POINTER PVK_TO_WCHAR_TABLE;
typedef struct {
DWORD dwBoth;
WCHAR wchComposed;
USHORT uFlags;
} DEADKEY, *KBD_LONG_POINTER PDEADKEY;
typedef struct {
BYTE vsc;
WCHAR *KBD_LONG_POINTER pwsz;
} VSC_LPWSTR, *KBD_LONG_POINTER PVSC_LPWSTR;
typedef struct _VSC_VK {
BYTE Vsc;
USHORT Vk;
} VSC_VK, *KBD_LONG_POINTER PVSC_VK;
typedef struct _LIGATURE1 {
BYTE VirtualKey;
WORD ModificationNumber;
WCHAR wch[1];
} LIGATURE1, *KBD_LONG_POINTER PLIGATURE1;
typedef struct tagKbdLayer {
PMODIFIERS pCharModifiers;
PVK_TO_WCHAR_TABLE pVkToWcharTable;
PDEADKEY pDeadKey;
PVSC_LPWSTR pKeyNames;
PVSC_LPWSTR pKeyNamesExt;
WCHAR *KBD_LONG_POINTER *KBD_LONG_POINTER pKeyNamesDead;
USHORT *KBD_LONG_POINTER pusVSCtoVK;
BYTE bMaxVSCtoVK;
PVSC_VK pVSCtoVK_E0;
PVSC_VK pVSCtoVK_E1;
DWORD fLocaleFlags;
BYTE nLgMax;
BYTE cbLgEntry;
PLIGATURE1 pLigature;
DWORD dwType;
DWORD dwSubType;
} KBDTABLES, *KBD_LONG_POINTER PKBDTABLES;
typedef PKBDTABLES(CALLBACK *KbdLayerDescriptor) (VOID);
int main() {
PKBDTABLES pKbd;
HINSTANCE kbdLibrary = NULL;
//"C:\\WINDOWS\\SysWOW64\\KBDUS.DLL"
//"C:\\WINDOWS\\System32\\KBDUS.DLL"
kbdLibrary = LoadLibrary("C:\\WINDOWS\\SysWOW64\\KBDUS.DLL");
KbdLayerDescriptor pKbdLayerDescriptor = (KbdLayerDescriptor) GetProcAddress(kbdLibrary, "KbdLayerDescriptor");
if(pKbdLayerDescriptor != NULL) {
pKbd = pKbdLayerDescriptor();
printf("Is Null? %d 0x%X\n", sizeof(pKbd->pVkToWcharTable), pKbd->pVkToWcharTable);
}
FreeLibrary(kbdLibrary);
kbdLibrary = NULL;
}
It might be too late for you, but here is a solution for anyone having the same problem. This demo and its incomplete explanation help, but the demo only works in Visual Studio:
http://www.codeproject.com/Articles/439275/Loading-keyboard-layout-KbdLayerDescriptor-in-32-6
The pointers in the structures in kbd.h all carry the KBD_LONG_POINTER macro, which is defined as __ptr64 on 64-bit operating systems. In Visual Studio, this makes the pointers take up 8 bytes instead of the usual 4 bytes of 32-bit programs. Unfortunately, in MinGW __ptr64 is defined to do nothing.
As written in the linked explanation, the KbdLayerDescriptor function returns pointers differently on 32-bit and 64-bit Windows. The size of the pointers seems to depend on the operating system and not on the running program. Actually, the pointers are still 4 bytes on a 64-bit operating system for a 32-bit program, but in VS the __ptr64 keyword pretends that they are not.
For example some structures look like this in kbd.h:
typedef struct {
BYTE Vk;
BYTE ModBits;
} VK_TO_BIT, *KBD_LONG_POINTER PVK_TO_BIT;
typedef struct {
PVK_TO_BIT pVkToBit;
WORD wMaxModBits;
BYTE ModNumber[];
} MODIFIERS, *KBD_LONG_POINTER PMODIFIERS;
This can't work in either MinGW or VS for 32-bit programs on 64-bit Windows, because the pVkToBit member in MODIFIERS is only 4 bytes without __ptr64. The solution is to forget about KBD_LONG_POINTER (you could even remove it everywhere) and define structures similar to the above, i.e.:
struct VK_TO_BIT64
{
BYTE Vk;
BYTE ModBits;
};
struct MODIFIERS64
{
VK_TO_BIT64 *pVkToBit;
int _align1;
WORD wMaxModBits;
BYTE ModNumber[];
};
(You could use VK_TO_BIT and not define your own VK_TO_BIT64, as they are the same, but having separate definitions helps in understanding what's going on.)
The member pVkToBit still takes up 4 bytes, but KbdLayerDescriptor pads pointers to 8 bytes on a 64 bit OS, so we have to insert some padding (int _align1).
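A quick sanity check (my addition, not part of the original answer) is to print the offset of the member that follows the padding; in a 32-bit build it should match the 64-bit layout.

#include <cstddef>
#include <cstdio>
#include <windows.h> // BYTE, WORD

// Restating the two structs from above so the check compiles on its own.
struct VK_TO_BIT64 { BYTE Vk; BYTE ModBits; };

struct MODIFIERS64
{
    VK_TO_BIT64 *pVkToBit; // 4 bytes in a 32-bit build
    int _align1;           // fills out the 8-byte pointer slot used by the 64-bit tables
    WORD wMaxModBits;
    BYTE ModNumber[];
};

int main()
{
    // In a 32-bit build this prints 8, which is where the real 64-bit
    // KbdLayerDescriptor tables place wMaxModBits.
    printf("offsetof(MODIFIERS64, wMaxModBits) = %u\n",
           (unsigned int)offsetof(MODIFIERS64, wMaxModBits));
    return 0;
}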
You have to do the same thing for the other structures in kbd.h. For example this will replace KBDTABLES:
struct WCHARARRAY64
{
WCHAR *str;
int _align1;
};
struct KBDTABLES64
{
MODIFIERS64 *pCharModifiers;
int _align1;
VK_TO_WCHAR_TABLE64 *pVkToWcharTable;
int _align2;
DEADKEY64 *pDeadKey;
int _align3;
VSC_LPWSTR64 *pKeyNames;
int _align4;
VSC_LPWSTR64 *pKeyNamesExt;
int _align5;
WCHARARRAY64 *pKeyNamesDead;
int _align6;
USHORT *pusVSCtoVK;
int _align7;
BYTE bMaxVSCtoVK;
int _align8;
VSC_VK64 *pVSCtoVK_E0;
int _align9;
VSC_VK64 *pVSCtoVK_E1;
int _align10;
DWORD fLocaleFlags;
BYTE nLgMax;
BYTE cbLgEntry;
LIGATURE64_1 *pLigature;
int _align11;
DWORD dwType;
DWORD dwSubType;
};
(Notice that the _align8 member does not come after a pointer.)
To use this all, you have to check whether you are running on 64 bit windows with this: http://msdn.microsoft.com/en-us/library/ms684139%28v=vs.85%29.aspx
If not, use the original structures from kbd.h, because the pointers behave correctly. They take up 4 bytes. In case the program is running on a 64 bit OS, use the structures you created. You can achieve it with this:
typedef __int64 (CALLBACK *LayerDescriptor64)(); // Result should be cast to KBDTABLES64.
typedef PKBDTABLES (CALLBACK *LayerDescriptor)(); // This is used on 32 bit OS.
static PKBDTABLES kbdtables = NULL;
static KBDTABLES64 *kbdtables64 = NULL;
And in some initialization function:
if (WindowsIs64Bit()) // Your function that checks the OS version.
{
LayerDescriptor64 KbdLayerDescriptor = (LayerDescriptor64)GetProcAddress(kbdLibrary, "KbdLayerDescriptor");
if (KbdLayerDescriptor != NULL)
kbdtables64 = (KBDTABLES64*)KbdLayerDescriptor();
else
kbdtables64 = NULL;
}
else
{
LayerDescriptor KbdLayerDescriptor = (LayerDescriptor)GetProcAddress(kbdLibrary, "KbdLayerDescriptor");
if (KbdLayerDescriptor != NULL)
kbdtables = KbdLayerDescriptor();
else
kbdtables = NULL;
}
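The WindowsIs64Bit() helper above is only a placeholder name used in this answer; a minimal sketch of it using IsWow64Process (the API behind the MSDN link) might look like this: for a 32-bit build, running under WOW64 means the OS is 64-bit.

#include <windows.h>

typedef BOOL (WINAPI *IsWow64Process_t)(HANDLE, PBOOL);

static BOOL WindowsIs64Bit()
{
    BOOL isWow64 = FALSE;
    // IsWow64Process is missing on very old systems, hence the runtime lookup.
    IsWow64Process_t pIsWow64Process = (IsWow64Process_t)
        GetProcAddress(GetModuleHandleA("kernel32"), "IsWow64Process");
    if (pIsWow64Process != NULL)
        pIsWow64Process(GetCurrentProcess(), &isWow64);
    return isWow64; // TRUE only when this 32-bit build runs on 64-bit Windows
}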
This solution does not use __ptr64 at all, and works both in VS and MinGW. The things you have to watch out for are:
The structures should be aligned on 8 byte boundaries. (This is the default in current VS or MinGW, at least for C++.)
Don't define KBD_LONG_POINTER as __ptr64, or remove it from everywhere, although you are better off not changing kbd.h.
Understand how alignment of structure members works. (I have compiled this as C++ and not C. I'm not sure whether the alignment rules would be any different for C.)
Use the correct variable (either kbdtables or kbdtables64) depending on the OS.
This solution is obviously not needed when compiling a 64 bit program.